Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4762 COMET : Commonsense Transformers for Automatic Knowledge Graph Construction Antoine Bosselut ♦♠Hannah Rashkin ♦♠Maarten Sap ♦♠Chaitanya Malaviya ♦ Asli Celikyilmaz ♣Yejin Choi ♦♠ ♦Allen Institute for Artificial Intelligence, Seattle, WA, USA ♠Paul G. Allen School of Computer Science & Engineering, Seattle, WA, USA ♣Microsoft Research, Redmond, WA, USA Abstract We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsEnse Transformers (COMET ) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods. 1 Introduction When reading text, humans make commonsense inferences that frame their understanding of the narrative being presented. For machines to achieve this capability, they must be able to acquire relevant and correct commonsense for an unbounded set of situations. In this work, we cast commonsense acquisition as knowledge base construction and investigate whether large-scale language models can effectively learn to generate the knowledge PersonX puts their arms around PersonY loving towards PersonY to comfort PersonY caring PersonX goes to the store bring a wallet feels loved Commonsense Knowledge Bases 
 (seen events) Automatic KB Completion xAttr xAttr xIntent oReact xNeed Unseen Events PersonX buys lunch to get food xIntent xNeed nap having a rest dozing off HasSubevent HasSubevent Going to a movie having fun UsedFor energy Causes Atomic ConceptNet Throwing a party Causes Figure 1: COMET learns from an existing knowledge base (solid lines) to be able to generate novel nodes and edges (dashed lines). necessary to automatically construct a commonsense knowledge base (KB). Automatic KB construction is a long-standing goal of artificial intelligence research due to the difficulty of achieving high concept coverage in high-precision curated KBs (Lenat, 1995; Miller, 1995). Previous work has developed models capable of reading and extracting semi-structured text (Suchanek et al., 2007; Hoffart et al., 2013; Auer et al., 2007; Bollacker et al., 2008) and unstructured text (Dong et al., 2014; Carlson et al., 2010; Nakashole et al., 2011, 2012; Niu, 2012) into relational schemas that can be queried for downstream applications. A common thread of these approaches, however, is the focus on encyclopedic knowledge, which lends itself to a well-defined space of entities and relations that can be modeled. Commonsense knowledge, however, does not cleanly fit into a schema comparing two entities with a known relation, leading current approaches 4763 Commonsense Transformer (COMeT) Multi-headed Attention Transformer Block W K 1 W V 1 W Q 1 W K b W Q b … K V Q W V b Attention Head 1 Attention Head b Multi-headed Attention + ,…, { } Layer Normalization Layer Normalization + Feedforward Network Block Block Block Block Block Block Block Block Block Block … … … … … e0 p0 e1 p1 e |s| p |s| + + + + + … PersonX sails … <xNeed> … sail boat boat <END> … … Vocab Vocab Vocab Vocab Vocab [MASK][MASK] have Concatenation Linear Projection g~ ht ( a ) ( b ) ( c ) ht ht-1 h0 l - 1 l - 1 l - 1 l l Figure 2: Model diagram. (a) In the multi-headed attention module, the key, value, and query all pass through a head-specific projection before a scaled dot-product attention is computed between them. The outputs of the heads are concatenated and projected. (b) Inside the transformer block, the outputs of all the previous layer blocks from earlier time steps are input to the multi-headed attention with the preceding block for the current time step as the query. (c) Each token is an input to a first-layer block along with all preceding tokens. Dotted lines indicate outputs to all future blocks in the next layer and inputs from all preceding blocks in the previous layer. to model “entities" as natural language phrases and relations as any concept that can link them (Li et al., 2016; Sap et al., 2019). OpenIE approaches display this property of open text entities and relations (Etzioni et al., 2011; Fader et al., 2011; Mausam et al., 2012), but being extractive, they only capture knowledge that is explicitly mentioned in text, limiting their applicability for capturing commonsense knowledge, which is often implicit (Gordon and Van Durme, 2013). Meanwhile, recent progress in training deep contextualized language models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018) provides an opportunity to explore beyond extractive methods as an avenue for commonsense KB construction. These large-scale language models display impressive performance when their underlying representations are tuned to solve end tasks, achieving state-of-the-art results on a variety of complex problems. 
In this work, we define the COMmonsEnse Transformer (COMET ), which constructs commonsense KBs by using existing tuples as a seed set of knowledge on which to train. Using this seed set, a pre-trained language model learns to adapt its learned representations to knowledge generation, and produces novel tuples that are high quality. We summarize our contributions in this work as follows. First, we develop a generative approach to knowledge base construction. A model must learn to produce new nodes and identify edges between existing nodes by generating phrases that coherently complete an existing seed phrase and relation type1. Second, we develop a framework for using large-scale transformer language models to learn to produce commonsense knowledge tuples2. Finally, we perform an empirical study on the quality, novelty, and diversity of the commonsense knowledge produced by our approach for two domains, ATOMIC and ConceptNet, as well as an efficiency study on the number of seed tuples needed to learn an effective knowledge model. The results indicate that COMET is able to produce high quality tuples as human judges find that 77.5% of generated tuples for ATOMIC events and 91.7% of generated tuples for ConceptNet relations are correct. 2 Learning to Generate Commonsense COMET is an adaptation framework for constructing commonsense knowledge bases from language models by training the language model on a seed set of knowledge tuples. These tuples provide COMET with the KB structure and relations that must be learned, and COMET learns to adapt the language model representations learned from pretraining to add novel nodes and edges to the seed knowledge graph. 1Demo is available at https://mosaickg.apps. allenai.org/ 2Code is available at https://github.com/ atcbosselut/comet-commonsense 4764 2.1 Task More specifically, the problem assumes COMET is given a training knowledge base of natural language tuples in {s, r, o} format, where s is the phrase subject of the tuple, r is the relation of the tuple, and o is the phrase object of the tuple. For example, a ConceptNet tuple relating to “taking a nap" would be: (s=“take a nap", r=Causes, o=“have energy"). The task is to generate o given s and r as inputs. Notation We define Xs = {xs 0, ..., xs |s|} as the tokens that make up the subject of the relation, Xr = {xr 0, ..., xr |r|} as the tokens that make up the relation of the tuple, and Xo = {xo 0, ..., xo |o|} as the tokens that make up the object of the tuple. The embedding for any word x is denoted as e. 2.2 Transformer Language Model While COMET is agnostic to the language model with which it is initialized, in this work, we use the transformer language model architecture introduced in Radford et al. (2018) (GPT), which uses multiple transformer blocks of multi-headed scaled dot product attention and fully connected layers to encode input text (Vaswani et al., 2017). Figure 2 depicts different components of the GPT architecture and we define each component in more depth below. 
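Before walking through each component, the following compact PyTorch-style sketch previews how the transformer block and multi-headed attention described below (Eqs. 1–8) fit together. It is an illustrative approximation rather than the implementation used in this work: the per-head projections are fused into single linear layers, and the feed-forward width (4× the hidden size), causal masking, and GELU activation are standard GPT conventions assumed here.

```python
import math
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Multi-headed scaled dot-product self-attention (cf. Eqs. 5-8 below)."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        # Head-specific projections W_i^Q, W_i^K, W_i^V fused into single layers,
        # plus the output projection W^O applied to the concatenated heads.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) -- previous layer's block outputs.
        B, T, D = h.shape
        def heads(x):  # (B, T, D) -> (B, n_heads, T, d_head)
            return x.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = heads(self.q_proj(h)), heads(self.k_proj(h)), heads(self.v_proj(h))
        # Scaled dot-product attention, masked so a position cannot attend
        # to positions that come after it (causal self-attention).
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        causal = torch.tril(torch.ones(T, T, dtype=torch.bool, device=h.device))
        attn = torch.softmax(scores.masked_fill(~causal, float("-inf")), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, D)  # concatenate heads
        return self.out_proj(out)

class TransformerBlock(nn.Module):
    """One block: attention and feed-forward sublayers with residuals + LayerNorm."""
    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.attn = MultiHeadSelfAttention(d_model, n_heads)
        self.ln1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(  # two-layer feed-forward network
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, h_prev: torch.Tensor) -> torch.Tensor:
        g = self.ln1(self.attn(h_prev) + h_prev)  # Eqs. 1-2
        return self.ln2(self.ffn(g) + g)          # Eqs. 3-4
```

Stacking 12 such blocks over the summed token and position embeddings (see the Input Encoder paragraph below) gives the overall architecture sketched in Figure 2.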
Transformer Block As shown in Figure 2(b), each transformer layer l contains an architecturally identical transformer block (though with unique trainable parameters) that applies the following transformations to the input to the block:

\tilde{g}^l = \text{MULTIATTN}(h^{l-1})    (1)
g^l = \text{LAYERNORM}(\tilde{g}^l + h^{l-1})    (2)
\tilde{h}^l = \text{FFN}(g^l)    (3)
h^l = \text{LAYERNORM}(\tilde{h}^l + g^l)    (4)

where MULTIATTN is a multi-headed self-attention mechanism (defined below), FFN is a two-layer feed-forward network, and LAYERNORM represents a layer normalization (Ba et al., 2016) operation that is applied to the output of the self-attention and the feedforward network. Note that the inputs to the LAYERNORM operations contain a residual connection that sums the output of and input to the previous operation.

Multi-headed Attention The multi-headed attention module of each transformer block, shown in Figure 2(a), is identical to the one originally defined by Vaswani et al. (2017). The attention function receives three inputs, a query Q, key K, and value V. The attention is made of multiple heads that each compute a unique scaled dot product attention distribution over V using Q and K:

\text{ATTENTION}(Q, K, V) = \text{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V    (5)

where d_k is the dimensionality of the input vectors representing the query, key and value. For each of the heads, Q, K, and V are uniquely projected prior to the attention being computed:

H_i = \text{ATTENTION}(Q W_i^Q, K W_i^K, V W_i^V)    (6)

where H_i is the output of a single attention head and W_i^Q, W_i^K, and W_i^V are head-specific projections for Q, K, and V, respectively. The outputs of the attention heads H_i are then concatenated:

\text{MULTIH}(Q, K, V) = [H_1; \ldots; H_b] W^O    (7)

where W^O is an output projection of the concatenated outputs of the attention heads. As shown in Figure 2(c), we follow Radford et al. (2018) and use the output of the previous layer's transformer block as the query input for the multi-headed attention of the next block. The keys and values are outputs of the previous layer's block for all preceding time steps:

\text{MULTIATTN}(h_t^{l-1}) = \text{MULTIH}(h_t^{l-1}, \mathbf{h}_t^{l-1}, \mathbf{h}_t^{l-1})    (8)

where \mathbf{h}_t^{l-1} = \{h^{l-1}\}_{<t} is the set of previous layer transformer block outputs for time steps preceding t.

Input Encoder As input to the model, we represent a knowledge tuple {s, r, o} as a concatenated sequence of the words of each item of the tuple:

X = \{X^s, X^r, X^o\}    (9)

Since the transformer (a self-attention model) has no concept of ordering of tokens, a position embedding p_t is initialized for each absolute position in the sequence (Vaswani et al., 2017). For any input word x_t \in X, our encoding of the input is

Figure 3: Input token setup for training configurations. For the ATOMIC dataset, the tokens of the subject, X^s (e.g., PersonX goes to the mall) are followed by masking tokens, which is followed by a single relation token X^r (e.g., xIntent), and then the object tokens X^o (e.g., to buy clothes).
The model receives the same input for ConceptNet, except that a second set of masking tokens separate Xr and Xo because Xr can have a variable number of tokens for ConceptNet (§5.2) the sum of its word embedding, et with a position embedding encoding its absolute position in the sequence X: h0 t = et + pt (10) where pt is the position embedding for time step t, and h0 is the input to the first transformer layer. 3 Training COMET COMET is trained to learn to produce the phrase object o of a knowledge tuple given the tuple’s phrase subject s and relation r. More specifically, given the concatenation of the tokens of s and r: [Xs, Xr] as input, the model must learn to generate the tokens of o: Xo (See §2.1 for definitions of these variables). Loss Function To achieve this goal, COMET is trained to maximize the conditional loglikelihood of predicting the phrase object tokens, Xo: L = − |s|+|r|+|o| X t=|s|+|r| log P(xt|x<t) (11) where |s|, |r|, and |o| are the number of tokens in the subject phrase, relation, and object phrase, respectively. Figure 3 outlines how the tokens in s, r, and o are organized for different training tasks. Datasets COMET relies on a seed set of knowledge tuples from an existing KB to learn to produce commonsense knowledge. In this work, we use ATOMIC and ConceptNet as knowledge seed sets, but other commonsense knowledge resources could have been used as well as COMET is domain-agnostic. Initialization Parameters are initialized to the final language model weights from Radford et al. (2018). Additional special tokens that are added to the vocabulary for fine tuning (e.g., relation embeddings such as oReact for ATOMIC and IsA for ConceptNet) are initialized by sampling from the standard normal distribution. Hyperparameters Following Radford et al. (2018)’s design of the GPT model, we initialize COMET with 12 layers, 768-dimensional hidden states, and 12 attention heads. We use a dropout rate of 0.1 and use GeLU (Hendrycks and Gimpel, 2016) units as activation functions. During training, our batch size is 64. Other dataset-specific hyperparameters are provided in Appendix A.1. 4 ATOMIC Experiments The ATOMIC dataset3, released by Sap et al. (2019), contains 877K tuples covering a variety of social commonsense knowledge around specific event prompts (e.g., “X goes to the store”). Specifically, ATOMIC distills its commonsense in nine dimensions, covering the event’s causes (e.g., “X needs to drive there”), its effects on the agent (e.g., “to get food”) and its effect on other direct (or implied) participants (e.g., “Others will be fed”). More details about ATOMIC can be found in Appendix D. For our experiments, ATOMIC events (e.g., “X goes to the store”) are phrase subjects, s, the dimension (e.g., xIntent) is the phrase relation, r, and the causes/effects (e.g., “to get food”) are phrase objects, o. We use the training splits from Sap et al. (2019), resulting in 710k training, 80k development, and 87k test tuples respectively. 4.1 Setup Metrics Following Sap et al. (2019), we evaluate our method using BLEU-2 as an automatic evaluation metric. We also report the perplexity of the model on its gold generations. The remaining automatic metrics in Table 1 measure the proportion of generated tuples and generated objects which are not in the training set. We report the proportion of all generated tuples that are novel (% N/T sro) and that have a novel object (% N/T o)4. 
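A minimal sketch of how these novelty proportions can be computed from the training and generated tuples follows; variable names are ours, and the % N/U o diversity measure described in the next paragraph is included under one plausible reading of its definition (novel unique objects over all unique objects produced).

```python
def novelty_metrics(train_tuples, generated_tuples):
    """Compute % N/T sro, % N/T o, and % N/U o for generated (s, r, o) tuples.

    train_tuples, generated_tuples: iterables of (s, r, o) string triples.
    """
    train_sro = set(train_tuples)
    train_o = {o for _, _, o in train_tuples}
    generated = list(generated_tuples)

    # % N/T sro: proportion of all generated tuples that are novel.
    novel_sro = [t for t in generated if t not in train_sro]
    pct_novel_sro = 100.0 * len(novel_sro) / len(generated)

    # % N/T o: proportion of generated tuples whose object is a new node.
    novel_o = [t for t in generated if t[2] not in train_o]
    pct_novel_o = 100.0 * len(novel_o) / len(generated)

    # % N/U o: novel objects as a fraction of the unique objects produced.
    unique_o = {o for _, _, o in generated}
    pct_novel_unique_o = 100.0 * len(unique_o - train_o) / len(unique_o)
    return pct_novel_sro, pct_novel_o, pct_novel_unique_o

# Toy example:
train = [("take a nap", "Causes", "have energy")]
gen = [("take a nap", "Causes", "have energy"),
       ("take a nap", "Causes", "feel rested")]
print(novelty_metrics(train, gen))  # (50.0, 50.0, 50.0)
```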
To show that these novel objects are diverse (i.e., the same novel object is not the only one being generated), we also report the number of novel 3https://homes.cs.washington.edu/ ~msap/atomic/ 4a new o represents a new node in the knowledge graph 4766 Model PPL5 BLEU-2 N/T sro6 N/T o N/U o 9ENC9DEC (Sap et al., 2019) 10.01 100.00 8.61 40.77 NearestNeighbor (Sap et al., 2019) 6.61 Event2(IN)VOLUN (Sap et al., 2019) 9.67 100.00 9.52 45.06 Event2PERSONX/Y (Sap et al., 2019) 9.24 100.00 8.22 41.66 Event2PRE/POST (Sap et al., 2019) 9.93 100.00 7.38 41.99 COMET (- pretrain) 15.42 13.88 100.00 7.25 45.71 COMET 11.14 15.10 100.00 9.71 51.20 Table 1: Automatic evaluations of quality and novelty for generations of ATOMIC commonsense. No novelty scores are reported for the NearestNeighbor baseline because all retrieved sequences are in the training set. Model oEffect oReact oWant xAttr xEffect xIntent xNeed xReact xWant Avg 9Enc9Dec (Sap et al., 2019) 22.92 32.92 35.50 52.20 47.52 51.70 48.74 63.57 51.56 45.32 Event2(In)voluntary (Sap et al., 2019) 26.46 36.04 34.70 52.58 46.76 61.32 49.82 71.22 52.44 47.93 Event2PersonX/Y (Sap et al., 2019) 24.72 33.80 35.08 52.98 48.86 53.93 54.05 66.42 54.04 46.41 Event2Pre/Post (Sap et al., 2019) 26.26 34.48 35.78 52.20 46.78 57.77 47.94 72.22 47.94 46.76 COMET (- pretrain) 25.90 35.40 40.76 48.04 47.20 58.88 59.16 64.52 65.66 49.50 COMET 29.02 37.68 44.48 57.48 55.50 68.32 64.24 76.18 75.16 56.45 Table 2: Human score of generations of ATOMIC commonsense. We present comparisons to the baselines from Sap et al. (2019). Underlined results are those where COMET is not significantly better at p < 0.05 objects as a function of the set of unique objects produced for all test set events (% N/U o). Finally, we perform a human evaluation using workers from Amazon Mechanical Turk (AMT). Workers are asked to identify whether a model generation of ATOMIC commonsense adequately completes a plausible tuple of phrase subject, relation, and phrase object. Following the setup of Sap et al. (2019), we evaluate 100 randomly selected events from the test set. For each event and relation type, 10 candidates are generated using beam search and the full beam is evaluated by five different workers. Overall, n=5000 ratings are produced per relation (100 events × 5 workers × 10 candidates). The reported Avg in Table 2 is an average of these scores, yielding n=45000 total ratings for each model. We use Pitman’s test (Noreen, 1989) with 100k permutations to test for statistical significance. Because 50 different hypotheses are tested (9 relations + the total), the HolmBonferroni method (Holm, 1979) is used to correct significance thresholds. Example events from the development set and their generated phrase objects are available in Table 5. Baselines We report the performance of our method against the models trained in Sap et al. (2019) that use LSTM sequence-to-sequence models (Sutskever et al., 2014) to encode the input subject and relation and produce an output object. Ablations To evaluate how pre-training on a large corpus helps the model learn to produce knowledge, we train a version of COMET that is not initialized with pre-trained weights (COMET (pretrain)). We also evaluate the data efficiency of our method by training models on different proportions of the training data. 
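As a concrete illustration of the Holm–Bonferroni correction used in the significance testing described above, the following sketch implements the standard step-down procedure; it is not code from this work, and the toy p-values are placeholders.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans indicating which hypotheses are rejected.

    Sort p-values ascending; the i-th smallest (0-indexed rank) is compared
    against alpha / (m - i). At the first comparison that fails, all remaining
    (larger) p-values are retained.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # stop at the first non-significant result
    return reject

# The paper corrects over 50 tested hypotheses; here a toy example with 3:
print(holm_bonferroni([0.001, 0.04, 0.02], alpha=0.05))  # [True, True, True]
```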
Finally, because the ultimate goal of our method is to be able to perform high-quality, diverse knowledge base construction, we explore how various decoding schemes affect the quality of candidate knowledge tuples. We present the effect of the following generation strategies: argmax greedy decoding, beam search with beam sizes, b=2, 5, 10, and top-k sampling with k = 5, 10. For each decoding method, we conduct the human evaluation on the number of final candidates produced by each method. 4.2 Results Overall performance The BLEU-2 results in Table 1 indicate that COMET exceeds the performance of all baselines, achieving a 51% relative improvement over the top performing model of Sap et al. (2019). More interesting, however, is the result of the human evaluation, where COMET reported a statistically significant relative Avg performance increase of 18% over the top baseline, 5Sap et al. (2019)’s models were trained with a different vocabulary so a direct perplexity comparison is not possible. 6All test set s do not appear in the training set so all full tuples must be novel. 4767 COMET Decoding method oEffect oReact oWant xAttr xEffect xIntent xNeed xReact xWant Avg Top-5 random sampling (n=2500 per relation) 34.60 44.04 35.56 64.56 55.68 58.84 46.68 80.96 58.52 53.27 Top-10 random sampling (n=5000 per relation) 25.20 37.42 27.34 49.20 47.34 47.06 38.24 72.60 48.10 43.61 Beam search - 2 beams (n=1000 per relation) 43.70 54.20 47.60 84.00 51.10 73.80 50.70 85.80 78.70 63.29 Beam search - 5 beams (n=2500 per relation) 37.12 45.36 42.04 63.64 61.76 63.60 57.60 78.64 68.40 57.57 Beam search - 10 beams (n=5000 per relation) 29.02 37.68 44.48 57.48 55.50 68.32 64.24 76.18 75.16 56.45 Greedy decoding (n=500 per relation) 61.20 69.80 80.00 77.00 53.00 89.60 85.60 92.20 89.40 77.53 Human validation of gold ATOMIC 84.62 86.13 83.12 78.44 83.92 91.37 81.98 95.18 90.90 86.18 Table 3: Human evaluation testing effect of different decoding schemes on candidate tuple quality. The number of ratings made per relation for each decoding method is provided in the first column. % train data PPL BLEU-2 N/T o N/U o 1% train 23.81 5.08 7.24 49.36 10% train 13.74 12.72 9.54 58.34 50% train 11.82 13.97 9.32 50.37 FULL (- pretrain) 15.18 13.22 7.14 44.55 FULL train 11.13 14.34 9.51 50.05 Table 4: Effect of amount of training data on automatic evaluation of commonsense generations Event2IN(VOLUN). This performance increase is consistent, as well, with an improvement being observed across every relation type. In addition to the quality improvements, Table 1 shows that COMET produces more novel tuple objects than the baselines, as well. Learning knowledge from language Significant differences were also observed between the performance of the model whose weights were initialized with the pre-trained parameters from the GPT model of Radford et al. (2018) and a model with the same architecture that was trained from random initialization. This 14% relative improvement in overall human performance confirms that the language representations learned by the GPT model are transferable to generating natural language commonsense knowledge. Effect of decoding algorithm In Table 3, we show the effect of different generation policies on knowledge quality. The most interesting result is that using greedy decoding to produce knowledge tuples only results in a 10% relative performance gap compared to a human evaluation of the ATOMIC test set, showing that the knowledge produced by the model approaches human performance. 
While producing more total candidates does lower overall performance, quality assessSeed Concept Relation Generated Plausible X holds out X’s hand to Y xAttr helpful ✓ X meets Y eyes xAttr intense ✓ X watches Y every ___ xAttr observant ✓ X eats red meat xEffect gets fat ✓ X makes crafts xEffect gets dirty ✓ X turns X’s phone xEffect gets a text X pours ___ over Y’s head oEffect gets hurt ✓ X takes Y’s head off oEffect bleeds ✓ X pisses on Y’s bonfire oEffect gets burned X spoils somebody rotten xIntent to be mean X gives Y some pills xIntent to help ✓ X provides for Y’s needs xIntent to be helpful ✓ X explains Y’s reasons xNeed to know Y ✓ X fulfils X’s needs xNeed to have a plan ✓ X gives Y everything xNeed to buy something ✓ X eats pancakes xReact satisfied ✓ X makes ___ at work xReact proud ✓ X moves house xReact happy ✓ X gives birth to the Y oReact happy ✓ X gives Y’s friend ___ oReact grateful ✓ X goes ___ with friends oReact happy ✓ X gets all the supplies xWant to make a list ✓ X murders Y’s wife xWant to hide the body ✓ X starts shopping xWant to go home ✓ X develops Y theory oWant to thank X ✓ X offer Y a position oWant to accept the job ✓ X takes ___ out for dinner oWant to eat ✓ Table 5: Generations that were randomly selected from a subset of novel generations from the ATOMIC development set. A novel generation is a sro tuple not found in the training set. Manual evaluation of each tuple indicates whether the tuple is considered plausible by a human annotator. ments still hover around 55%7 for a beam size of 10. This result suggests that COMET could be effective with human evaluators in the loop to confirm the correctness of generated tuples. Efficiency of learning from seed tuples Because not all domains will have large available commonsense KBs on which to train, we explore how varying the amount of training data available for learning affects the quality and novelty of the knowledge that is produced. Our results in Table 4 indicate that even with only 10% of the available training data, the model is still able to 7This number is partially low due to the many “none" references in the oEffect, oReact, oWant categories. In any set of 10 candidates, “none" can only be predicted once, which causes most candidates in the beam to be incorrect if “none" is the appropriate answer. 4768 produce generations that are coherent, adequate, and novel. Using only 1% of the training data clearly diminishes the quality of the produced generations, with significantly lower observed results across both quality and novelty metrics. Interestingly, we note that training the model without pretrained weights performs comparably to training with 10% of the seed tuples, quantifying the impact of using pre-trained language representations. 5 ConceptNet Experiments The ConceptNet dataset8, provided by Li et al. (2016), consists of tuples obtained from the Open Mind Common Sense (OMCS) entries in ConceptNet 5 (Speer et al., 2017). Tuples are in the standard sro form – (e.g., take a nap, Causes, have energy). The most confident 1200 tuples were used to create the test set, while the next 1200 tuples were used to create two development sets, which we combine in this work. The 100k version of the training set was used to train models, which contains 34 relation types. 5.1 Setup Metrics We evaluate our models that generate ConceptNet relations using the following metrics. First, we report the perplexity of the gold relations in the test set (PPL). 
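Concretely, the reported perplexity can be read as the exponentiated average negative log-likelihood of the gold object tokens under the model (the standard definition; normalizing by the number of object tokens, mirroring the span summed over in Eq. 11, is our assumption):

\mathrm{PPL} = \exp\!\left(-\frac{1}{|o|}\sum_{t=|s|+|r|}^{|s|+|r|+|o|} \log P(x_t \mid x_{<t})\right)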
To evaluate the quality of generated knowledge, we also report the number of generated positive examples in the test set that are scored as correct by the pre-trained Bilinear AVG model developed by Li et al. (2016).9 For a given sro tuple, this model produces a probability for whether the tuple is correct. We threshold scores at 50% probability to identify positive predictions. On the completion task originally proposed in Li et al. (2016), this model achieved 92.5% accuracy on the test set, indicating that it is a strong proxy for automatically evaluating whether a generated tuple is correct. Finally, we report the same novelty metrics as for ATOMIC: N/T sro and N/T o. Baselines As a baseline, we re-implement the BiLSTM model proposed by Saito et al. (2018) with minor modifications outlined in Appendix A.2. This model is trained to learn to encode knowledge in both directions: sr →o and 8https://ttic.uchicago.edu/~kgimpel/ commonsense.html 9 A pre-trained model can be found at https: //ttic.uchicago.edu/~kgimpel/comsense_ resources/ckbc-demo.tar.gz Model PPL Score N/T sro N/T o Human LSTM - s 60.83 86.25 7.83 63.86 CKBG (Saito et al., 2018) 57.17 86.25 8.67 53.95 COMET (- pretrain) 8.05 89.25 36.17 6.00 83.49 COMET - RELTOK 4.39 95.17 56.42 2.62 92.11 COMET 4.32 95.25 59.25 3.75 91.69 Table 6: ConceptNet generation Results or →s to help augment a knowledge base completion model. It is only evaluated on the sr →o tuple generation task, however. For posterity, we also include the result from a LSTM model that is only trained on the sr →o task (LSTM - s). Ablations We include the following ablations of our full model. First, we evaluate how pretraining on a large-scale corpus (Radford et al., 2018) helps performance by training a comparison model from scratch, denoted COMET (- pretrain) in Table 6. Second, in our main model, we map relation names to natural language (e.g., IsA → “is a”; HasSubevent →“has subevent”) so the model can learn to represent these concepts with language, as opposed to learning a special embedding from scratch for each relation (Levy et al., 2017). As an ablation, we train a model without converting relation tokens to natural language (e.g., IsA ̸→“is a”), which we denote COMET RELTOK. 5.2 Results Quality Our results indicate that high-quality knowledge can be generated by the model: the low perplexity scores in Table 6 indicate high model confidence in its predictions, while the high classifier score (95.25%) indicates that the KB completion model of Li et al. (2016) scores the generated tuples as correct in most of the cases. While adversarial generations could be responsible for this high score, a human evaluation (following the same design as for ATOMIC) scores 91.7% of greedily decoded tuples as correct. Randomly selected examples provided in Table 7 also point to the quality of knowledge produced by the model. Novelty In addition to being high quality, the generated tuples from COMET are also novel, with 59.25% of the tuples not being present in the training set, showing that the model is capable of generating new edges between nodes, and even creating new nodes – 3.75% of o nodes are novel – to extend the size of the knowledge graph. 
One shortcoming, however, is that novel generations 4769 Classifier Accuracy 0.00 0.25 0.50 0.75 1.00 % of tuples with edit distance >= X 0% 25% 50% 75% 100% Edit Distance 0.0 0.33 0.5 0.67 1.0 % of novel tuples Accuracy Figure 4: The percentage of novel ConceptNet development set tuples per minimum edit distance from training tuples. In green: classifier-scored accuracy of each subset. are sometimes simplified forms of tuples from the training set. In Table 7, for example, the tuple “doctor CapableOf save life” is not present in the training set, but “doctor CapableOf save person life” is. Many tuples, however, are completely novel, such as “bird bone HasProperty fragile” and “driftwood AtLocation beach”, which have no related tuples in the training set. To explore further, we investigate by how much novel tuples from the development set differ from training set phrase objects for the same s, r using minimum edit distance of phrase objects. We measure the edit distance of phrase object odev in the tuple (s, r, odev) to the otrn from the nearest training tuple (s, r, otrn). Edit distance is measured using word tokens (excluding stop words) and normalized by the maximum number of words in odev or otrn. The maximum edit distance is one (i.e., entirely different word sequences) and the minimum edit distance is zero (i.e., the same sequence excluding stopwords). Figure 4 shows the percentage of novel development set tuples that have an edit distance from the closest training set tuple of at least the value on the x-axis. Over 75% of the novel tuples have objects that are a normalized edit distance of >= 0.5 from the training phrase objects, indicating that most of the novel phrase objects have significantly different word sequences from their closest analogues in the training set. Learning knowledge from language Similarly to ATOMIC, we explore how pre-training COMET on a large language corpus affects its ability to generalize commonsense. This effect is apparent in Table 6, with a clear improvement on automatic and human evaluations by the pretrained COMET over the randomly initialized Seed Relation Completion Plausible piece PartOf machine ✓ bread IsA food ✓ oldsmobile IsA car ✓ happiness IsA feel ✓ math IsA subject ✓ mango IsA fruit ✓ maine IsA state ✓ planet AtLocation space ✓ dust AtLocation fridge puzzle AtLocation your mind college AtLocation town ✓ dental chair AtLocation dentist ✓ finger AtLocation your finger sing Causes you feel good ✓ doctor CapableOf save life ✓ post office CapableOf receive letter ✓ dove SymbolOf purity ✓ sun HasProperty big ✓ bird bone HasProperty fragile ✓ earth HasA many plant ✓ yard UsedFor play game ✓ get pay HasPrerequisite work ✓ print on printer HasPrerequisite get printer ✓ play game HasPrerequisite have game ✓ live HasLastSubevent die ✓ swim HasSubevent get wet ✓ sit down MotivatedByGoal you be tire ✓ all paper ReceivesAction recycle ✓ chair MadeOf wood ✓ earth DefinedAs planet ✓ Table 7: Randomly selected and novel generations from the ConceptNet development set. Novel generations are sro tuples not found in the training set. Manual evaluation of each tuple indicates whether the tuple is considered plausible by a human annotator model. Qualitatively, we observe this effect in Table 7 with the generated example tuple “mango IsA fruit", which is not present in the training set. The only tuple containing the “mango" entity in the training set is “mango UsedFor salsa", which is not informative enough. 
As confirmation, we observe that the output from COMET (- pretrain) is “mango IsA spice”, which could be a reasonable inference given the information about “mango" in the seed set of knowledge. Representing relations with language While the automatic metrics point to insignificant differences when comparing models with symbol relations and those with natural language relations (Table 6), examples can provide qualitative insights into the benefits of representing relations as language. While the only non-ornithological reference to a “dove" in the ConceptNet training set is “dove CapableOf fly”, our model learns to generalize to produce the tuple “dove SymbolOf purity”. The model that uses symbol relation embeddings only manages to produce the relation “dove SymbolOf submarine”, which seems to relate “submarine" to a more nautical (and unrelated) word sense of “dove". 4770 6 Related Work Knowledge base construction Previous work has looked at constructing knowledge bases as relational schemas using expert knowledge (Lenat, 1995; Bodenreider, 2004; Miller, 1995), semistructured text extraction (Suchanek et al., 2007; Hoffart et al., 2013; Auer et al., 2007; Bollacker et al., 2008) and unstructured text extraction (Dong et al., 2014; Carlson et al., 2010; Nakashole et al., 2011, 2012; Niu, 2012). In our work, we focus on construction of commonsense knowledge bases which require the use of open-text events rather than a well-defined relational schema structure. Other work in information extraction can also be applied to knowledge base construction with open-text entities (Soderland et al., 2010; Etzioni et al., 2011; Fader et al., 2011; Mausam et al., 2012; Fan et al., 2010; Cui et al., 2018), but these methods typically extract explicitly stated text relations. Conversely, our approach generates new knowledge that is often unstated in text, as commonsense information typically is (Gordon and Van Durme, 2013). Commonsense knowledge base completion Existing work on generation of novel commonsense knowledge has also used ConceptNet and ATOMIC as underlying KBs. Specifically, Li et al. (2016) proposed a set of neural network models for scoring tuples in ConceptNet. Our work differs from this approach as their models evaluate full tuples rather than learning to generate the phrases to make new nodes in the knowledge graph. Saito et al. (2018) builds upon this work by proposing a joint model for completion and generation of commonsense tuples. Their work, however, focuses on using tuple generation to augment their KB completion model, rather than to increase coverage in commonsense KB construction. Finally, Sap et al. (2019) use LSTM encoder-decoder models to generate commonsense knowledge about social situations. We use transformers and investigate the effect of using pre-trained language representations (Radford et al., 2018) to initialize them. Transformers and pre-training Finally, our work builds on previous work on adapting pretrained language models for various sequence labeling, classification, and NLI end tasks (Radford et al., 2018; Peters et al., 2018; Devlin et al., 2018). Our research investigates how pre-trained language models can be used for large-scale commonsense KB construction by generating new graph nodes and edges between nodes. 7 Conclusion We introduce COMmonsense Transformers (COMET) for automatic construction of commonsense knowledge bases. COMET is a framework for adapting the weights of language models to learn to produce novel and diverse commonsense knowledge tuples. 
Empirical results on two commonsense knowledge bases, ATOMIC and ConceptNet, show that COMET frequently produces novel commonsense knowledge that human evaluators deem to be correct. These positive results point to future work in extending the approach to a variety of other types of knowledge bases, as well as investigating whether COMET can learn to produce OpenIE-style knowledge tuples for arbitrary knowledge seeds. Acknowledgments We thank Thomas Wolf, Ari Holtzman, Chandra Bhagavatula, Peter Clark, Rob Dalton, Ronan Le Bras, Rowan Zellers and Scott Yih for helpful discussions over the course of this project, as well as the anonymous reviewers for their insightful comments. This research was supported in part by NSF (IIS-1524371, IIS-1714566, NRI-1525251), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and Samsung Research. This material is based, in part, upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082. References Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In ISWC/ASWC. Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450. Olivier Bodenreider. 2004. The unified medical language system (umls): Integrating biomedical terminology. Nucleic acids research, 32:D267–70. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. 4771 Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD ’08, pages 1247–1250, New York, NY, USA. ACM. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka, Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI’10, pages 1306–1313. AAAI Press. Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 601– 610, New York, NY, USA. ACM. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In IJCAI. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the conference on empirical methods in natural language processing, pages 1535–1545. Association for Computational Linguistics. James Fan, David A. Ferrucci, David Gondek, and Aditya Kalyanpur. 2010. Prismatic: Inducing knowledge from a large scale lexicalized relation resource. In NAACL-HLT 2010. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. 
In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 25–30. ACM. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8). Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. Yago2: A spatially and temporally enhanced knowledge base from wikipedia. Artificial Intelligence, 194:28 – 61. Artificial Intelligence, Wikipedia and SemiStructured Resources. Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2):65–70. Douglas B Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33–38. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke S. Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In CoNLL. Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In ACL, volume 1, pages 1445–1455. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In EMNLP-CoNLL. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable knowledge harvesting with high precision and high recall. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, WSDM ’11, pages 227– 236, New York, NY, USA. ACM. Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: A taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1135–1145. Association for Computational Linguistics. Feng Niu. 2012. Web-scale Knowledge-base Construction via Statistical Inference and Learning. Ph.D. thesis, Madison, WI, USA. AAI3524067. Eric W Noreen. 1989. Computer intensive methods for hypothesis testing: An introduction. Wiley, NY. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matthew Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. CoRR, abs/1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Itsumi Saito, Kyosuke Nishida, Hisako Asano, and Junji Tomita. 2018. Commonsense knowledge base completion and generation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 141–150. 4772 Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. In AAAI. Stephen Soderland, Brendan Roof, Bo Qin, Shi Xu, Mausam, and Oren Etzioni. 2010. Adapting open information extraction to domain-specific relations. AI Magazine, 31:93–102. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. 
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, WWW ’07, pages 697– 706, New York, NY, USA. ACM. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. 4773 A Additional Training Details A.1 Training Hyperparameters ATOMIC For ATOMIC, we use a maximum learning rate of 6.25e-5 with a warmup period of 100 minibatches. After, we decay the learning rate linearly until the end of training. We train for 50k minibatches and use early stopping. We clip gradients when their norm is greater than 1. The remainder of our hyperparameters are the same as in Radford et al. (2018). We use the public HuggingFace implementation of the GPT model as a base for our experiments available at: https://github.com/huggingface/ pytorch-openai-transformer-lm. ConceptNet For ConceptNet, we use a maximum learning rate of 1e-5 and a warm-up period of 200 minibatches. The learning rate is decayed linearly until the end of training, which lasts for 100k minibatches. All other hyperparameters are the same as for training on the ATOMIC corpus. A.2 ConceptNet baseline We train the ConceptNet baseline with a learning rate of 1e-4 for 100k minibatches. Early stopping is used with the validation loss. Similarly to Saito et al. (2018), we use 200-dimension hidden states and 200-dimensional word embeddings. We use a single-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to encode the first phrase and a single-layer unidirectional LSTM to decode the target phrase. Relation embeddings are concatenated with the word embeddings of the decoder before being input to the decoder LSTM. We set the dropout rate to 0.2 before the output projection layer and after the word embedding layers. We outline the following differences between our reimplementation of the model of Saito et al. (2018) and their original implementation and the reason for the change. 1. We use Glove (Pennington et al., 2014) embeddings rather than fastText embeddings (Bojanowski et al., 2017) to initialize word embeddings. Because the model indicated that 200-dimensional word embeddings were used, we could not use the pretrained embeddings provided by the fastText group1. In Saito et al. (2018), the authors described training their fastText embeddings on 1https://fasttext.cc/ Wikipedia. With no reference to the precise corpus used, we opted to use Glove embeddings to initialize the word embeddings of the encoder and decoder instead. 2. We use the Adam optimizer with learning rate of 0.0001, rather than SGD with a learning rate of 1.0 because after training both models, we found that the Adam-trained model performed better on development set perplexity. We also do not use weight decay, as this seemed to lower validation performance, as well. 3. We do not train the generation model jointly with the completion model. We only train an individual generator. The results of Saito et al. (2018) did not show a significant difference in generation performance between the two on the ConceptNet dataset. 4. We train a second baseline (LSTM - s) that does not learn to produce relations in both directions (i.e., sr →o and or →s). 
Instead if only learns parameters that can produce relations in the forward direction (sr →o) 5. We do not decay the learning rate because it was unclear from the original paper what the exact learning rate schedule was. B Additional Evaluation Details B.1 Human Evaluations We used Amazon Mechanical Turk to get ratings of model output accuracy. We selected seed concepts and relations from the test set and generated completions using each model to create (s, r, o) tuples. For ATOMIC, we selected tuples by choosing all possible relations (9) for each of 100 randomly selected seed concepts (900 total (s, r) pairs) following the procedure from Sap et al. (2019). For ConceptNet, we used the full test set (1200 total (s, r) pairs). For Beam-2/5/10 and top-5/10 sampling generations, we used the model to generate 2, 5, or 10 (respectively) possible completions (o) per (s, r) pair. Workers were shown the full set and asked to select all of the o that are valid completions for the (s, r) pair. Each set of tuples was rated by 5 workers. For greedy sampling generations, we used the model to generate one possible completion (o) per 4774 (s, r) pair. Workers were shown the completed tuple (s, r, o) and asked whether it is valid or not. Each tuple was rated by 5 workers. We measure accuracy as the percentage of distinct worker responses where the (s, r, o) tuple is marked as valid (i.e., #valid 5·|(s,r,o)|). C Example Outputs Additional examples can be seen in Figures 5, 6, and 7 that are produced using the demo at https://mosaickg.apps.allenai. org. D Additional Training Experiments In addition to the more naive setups for knowledge graph completion, we explore various multitask and hierarchical learning setups on top of the taxonomy of commonsense relations given by Sap et al. (2019), which group together along various axes (e.g., related to agent/theme, related to causes/effects, etc.). D.1 Multi-relation Training For the ATOMIC corpus, we experiment with multiple multi-task training setups, similar to Sap et al. (2019). First, we train an individual model for each relation type (oReact, oEffect, etc.), which we denote as COMET - 9LM in the Table 9. We also experiment with various informationsharing dataset configurations that organize different relations across common dimensions. We outline these dimensions and the makeup of each split in Table 9. For ConceptNet, all models are always trained on all relation types jointly. Results on automatic evaluation metrics are provided in Table 11. Because there did not seem to be significant differences between these performances and that of COMET - FULL, we did not run additional experiments on these ablations. D.2 Concept Hierarchy Training Leveraging the prior knowledge that certain relation types in the ATOMIC knowledge graph are linked to each other, we explore providing these group identities as additional tokens in the relation. For example, when generating the completion of a xReact relation, the model would receive as input the following meta-tokens: <xReact>, <X>, <POST>, <Involuntary> – thereby providing common context with other relations that are part of the same groupings (e.g., generating a phrase for a xWant relation would receive the <X> and <POST> tokens as input, but not <Involuntary>). Depending on the relation for a particular training example (e.g., xReact), a set of meta-tokens are appended to the relation tokens, Xr, that provide hierarchical relational information, allowing the model to share information across relation types. 
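A minimal sketch of how these hierarchy meta-tokens could be appended to the relation tokens, using the groupings listed in Table 10, is shown below; the exact token strings, casing, and ordering are illustrative assumptions.

```python
# Category hierarchy meta-tokens (relation -> groups), following Table 10.
META_GROUPS = {
    "<X>":           {"xAttr", "xEffect", "xIntent", "xNeed", "xReact", "xWant"},
    "<Y>":           {"oEffect", "oReact", "oWant"},
    "<Pre>":         {"xIntent", "xNeed"},
    "<Post>":        {"oEffect", "oReact", "oWant", "xEffect", "xReact", "xWant"},
    "<Voluntary>":   {"oWant", "xIntent", "xNeed", "xWant"},
    "<Involuntary>": {"oEffect", "oReact", "xAttr", "xEffect", "xReact"},
}

def relation_tokens_with_hierarchy(relation: str):
    """Return the relation token followed by its hierarchy meta-tokens."""
    tokens = [f"<{relation}>"]
    tokens += [meta for meta, members in META_GROUPS.items() if relation in members]
    return tokens

print(relation_tokens_with_hierarchy("xReact"))
# ['<xReact>', '<X>', '<Post>', '<Involuntary>']
```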
We provide a more in-depth description of the category hierarchy training combinations in Table 10. Results on human evaluation metrics are provided in Table 12. Because the model with the hierarchical meta-tokens performed worse than the regular COMET, we did not run additional experiments on this ablations. 4775 Figure 5: Example outputs for the event "PersonX gives PersonY a pep talk" from COMET trained on the ATOMIC knowledge graph 4776 Figure 6: Example outputs for the event "Eric wants to see a movie" from COMET trained on the ATOMIC knowledge graph. COMET is able to generalize beyond the templates of the ATOMIC knowledge graph (i.e., PersonX) and can be used directly with names. 4777 Figure 7: Example outputs for the event "Tom asked Jessica if he could use her car" from COMET trained on the ATOMIC knowledge graph 4778 Event Description Example Completion: Person X puts Person X’s trust in Person Y oEffect The effect the event has on others besides Person X is considered trustworthy is believed gains Person X’s loyalty oReact The reaction of others besides Person X to the event trusted honored trustworthy oWant What others besides Person X may want to do after the event work with Person X partner with Person X to help Person X xAttr How Person X might be described given their part in the event faithful hopeful trusting xEffect The effect that the event would have on Person X gets relieved stays faithful Is betrayed xIntent The reason why X would cause the event to be trusting his or her help/guidance/advice to be friends xNeed What Person X might need to do before the event to be friends with Person Y to have heard a lot of good things about Person Y to get to know Person Y xReact The reaction that Person X would have to the event trusting safe, not alone understood xWant What Person X may want to do after the event to rely on Person Y to go into business with Person Y to make sure that their heart feeling is right Table 8: Definitions of the relations in ATOMIC. Events in ATOMIC center around the personal situations of a central figure, Person X, with potentially more participants. Organization Description Relations PERSON X/Y The training set is split into relations for the subjects of the event (Person X) and relations for other participants in the event T1 = {xAttr, xEffect, xIntent, xNeed, xReact, xWant} T2 = {oEffect, oReact, oWant} PRE/POST Event preconditions are jointly trained (i.e., intentions, needs). Event postconditions are jointly trained. T1 = {xIntent, xNeed} T2 = {oEffect, oReact, oWant, xEffect, xReact, xWant} (IN)VOLUN Involuntary relations are trained jointly, such as reactions and effects. Voluntary relations are trained jointly, such as needs, wants, and intents. T1 = {oWant, xIntent, xNeed, xWant} T2 = {oEffect, oReact, xAttr, xEffect, xReact} FULL The training set is made up of all relations and the model is trained jointly on all of them T1 = {oEffect, oReact, oWant, xAttr, xEffect, xIntent, xNeed, xReact, xWant} Table 9: Multi-relation training setups. Following Sap et al. 
(2019), the xAttr relation is not included in the PRE/POST training configuration 4779 Meta-Token Description Relations <X> Appended to relations that describe an attribute of Person X xAttr, xEffect, xIntent, xNeed, xReact, xWant <Y> Appended to relations that describes an attribute of a participant that is not Person X oEffect, oReact, oWant <Pre> Appended to relations that correspond to pre-conditions of the event xIntent, xNeed <Post> Appended to relations that correspond to post-conditions of the event oEffect, oReact, oWant, xEffect, xReact, xWant <Voluntary> Appended to relations that correspond to voluntary dimensions of the situation oWant, xIntent, xNeed, xWant <Involuntary> Appended to relations that correspond to involuntary dimensions of the situation oEffect, oReact, xAttr, xEffect, xReact Table 10: Category hierarchy meta-tokens, along with the description and the relations to which they are appended Model PPL3 BLEU-2 N/T sro4 N/T o N/U o COMET- 9LM 11.72 14.89 100.00 9.45 49.89 COMET- (IN)VOLUN 11.38 14.99 100.00 8.60 48.36 COMET- PERSONX/Y 11.30 15.21 100.00 9.12 49.59 COMET- PRE/POST 11.35 14.88 100.00 9.86 51.86 COMET- FULL (- pretrain) 15.42 13.88 100.00 7.25 45.71 COMET- FULL 11.14 15.10 100.00 9.71 51.20 COMET- FULL (+ hierarchy meta-tokens) 10.98 15.27 100.00 10.03 51.97 Table 11: Automatic evaluations of quality and novelty for generations of ATOMIC commonsense that are trained with the training set split along different relation types. The training splits are outlined in Table 9. Model oEffect oReact oWant xAttr xEffect xIntent xNeed xReact xWant Total COMET 29.02 37.68 44.48 57.48 55.50 68.32 64.24 76.18 75.16 56.45 COMET (+ hierarchy meta-tokens) 28.46 38.96 43.64 51.90 50.84 63.00 63.98 66.20 75.82 53.64 Table 12: Human score of generations of ATOMIC commonsense for the regular COMET model and the COMET + category meta tokens
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4780–4790 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4780 Detecting Subevents using Discourse and Narrative Features Mohammed Aldawsari & Mark A. Finlayson School of Computing and Information Sciences Florida International University Miami FL, 33199 {malda021,markaf}@fiu.edu Abstract Recognizing the internal structure of events is a challenging language processing task of great importance for text understanding. We present a supervised model for automatically identifying when one event is a subevent of another. Building on prior work, we introduce several novel features, in particular discourse and narrative features, that significantly improve upon prior state-of-the-art performance. Error analysis further demonstrates the utility of these features. We evaluate our model on the only two annotated corpora with event hierarchies: HiEve and the Intelligence Community corpus. No prior system has been evaluated on both corpora. Our model outperforms previous systems on both corpora, achieving 0.74 BLANC F1 on the Intelligence Community corpus and 0.70 F1 on the HiEve corpus, respectively a 15 and 5 percentage point improvement over previous models. 1 Introduction An event is something that occurs in a certain place at a certain time (Pustejovsky et al., 2003). Understanding events plays a major role in various natural language processing tasks such as information extraction (Humphreys et al., 1997), question answering (Narayanan and Harabagiu, 2004), textual entailment (Haghighi et al., 2005), event coreference (Choubey and Huang, 2018) and contradiction detection (De Marneffe et al., 2008). There has been a significant amount of work on automatic processing of events in text including systems for events extraction, event coreference resolution, and temporal relation detection (Araki, 2018; Ning et al., 2017). However, events are not atomic entities: they often have complex internal structure that can be expressed in a variety of ways (Huttunen et al., 2002; Bejan and Harabagiu, 2008; Hovy et al., 2013). One of the unsolved problems related to event understanding is the detection of subevents, also referred to as event hierarchy construction. As described by Glavaˇs and ˇSnajder (2014a), there have been efforts that have focused on detecting temporal and spatial subevent containment individually. However, it is clear that subevent detection requires both simultaneously. The subevent relationship is defined in terms of (e1,e2), where e1 and e2 are events: event e2 is a subevent of event e1 if e2 is spatiotemporally contained by e1. More precisely, we say that an event e1 is a parent event of event e2, and e2 is a child event of e1 if (1) e1 is collector event that contains a complex sequence of activities; (2) e2 is one of these activities; and (3) e2 is spatially and temporally contained within e1 (i.e., e2 occur at the same time and same place as e1) (Hovy et al., 2013; Glavaˇs and ˇSnajder, 2014b). This subevent relationship is independent of other types of relationships, e.g., causal relationship between the events. Example 1 illustrates a text expression of a complex event hierarchy. Figure 1 shows a corresponding graphical representation of the hierarchy. Egyptian police have said that five protesters were killed1 when they were attacked2 by an armed group near the Defense Ministry building in Cairo. 
The statement said that early this morning, the armed group attacked3 the demonstrators who have for days been staging their protest4 against the military government. ...Police said that the attack5 on Wednesday wounded6 at least 50 protesters. Example 1: Excerpt from the HiEve corpus (Glavaˇs et al., 2014a). Events are in bold and given a numerical subscript for reference. In all the examples the identified events are gold annotations, but for clarity not all annotations are included. 4781 protest attacked attack attacked killed wounded Figure 1: The corresponding event hierarchy of example 1. Bolded arrows indicate subevent relationships and bolded lines indicate event coreference relationships, when they are explicitly indicated in the HiEve annotations. Dashed lines indicate implicit subevent relationship. In Figure 1, we see that killed1 and wounded6 are explicitly annotated as subevents of attacked3, while that event in turn is a subevent of protest4. Events attacked2 and attack5 are explicitly indicated as coreferent with attacked3. These relationships induce the implicit subevent relations shown by dashed lines. In this work we propose a pairwise model that leverages new discourse and narrative features to significantly improve subevent relation detection. evaluate our model on two corpora, namely, the HiEve corpus (Glavaˇs et al., 2014a) and the Intelligence Community (IC) corpus1 (Hovy et al., 2013). We build on feature sets proposed in previous work, but propose several important discourse and narrative level features. We show that our model outperforms current systems on the subevent detection task by a significant margin. An error analysis reveals why these features are important and further details on why the subevent detection task is difficult. We begin the paper by discussing prior work on subevent detection task (§2). Then we introduce our model and the feature set (§3). Following that, we describe the corpora (§4.1) we used and the experimental setup (§4.2). We then present the evaluation metrics and the performance of our model (§4.3) as well as compare our model performance to previous works (§5). To the end, we show an extensive error analysis (§6) and conclude with a list of contributions (§7). 2 Related Work There are two pieces of prior work that are most related to our work. Araki et al. (2014) pro1The IC corpus is unfortunately not publically available; we obtained a copy from Hovy et al. (2013). posed a logistic regression model to classify pairs of events into four classes: coreference, subevent, sister, and no relation. They then used sister relations and their parents to improve the system performance. Their model was trained and tested on 65 articles from the IC corpus developed by (Hovy et al., 2013). Similarly, Glavaˇs and ˇSnajder (2014b) used a logistic regression model to classify pairs of event into three classes: subevent relations (SuperSub and SubSuper) and no relation. They enforced structural coherence which improved the quality of the extracted event hierarchies by 7.6% F1 score. They trained and tested their approach on the HiEve corpus developed by (Glavaˇs et al., 2014a). Both approaches were evaluated using different evaluations metrics. Araki et al. evaluated their model using BLANC evaluation metric (Recasens and Hovy, 2011) whereas Glavaˇs and ˇSnajder evaluated their model using the standard F1 evaluation metric. Both works introduced a variety of features. 
The main contribution of our work is to note that the subevent detection task requires a better understanding of the discourse. Thus here we introduce several new features, including discourse structure and narrative structure. The error analysis (§6) demonstrates why these features are effective and also reveals more details on why subevent detection is difficult. 3 Features In this section, we explain the features used in our model. As discussed, both the HiEve and IC corpus (Hovy et al., 2013; Glavaˇs et al., 2014a) are annotated with both subevent and event coreference relationships. We compute features over all pairs of events (e1, e2) where e1 precedes e2 in the text. Each pair of events is either related by a forward pointing parent-child relationship (PC), a backward pointing parent-child relationship (CP), or no relation (NoRel). Our features can be divided into five sets as shown in Table 1. In the following sections we first illustrate the features we directly obtained from prior work (§3.1); next we explain the features that were inspired by prior work but that we modified significantly (§3.2); and finally we introduce our new discourse and narrative features (§3.3). 3.1 Prior Features We obtained most of the lexical and syntactic features, and several of the semantic features, directly 4782 Feature Set or Feature Representation Description Lexical Event Expression Bag-of-Events The surface form of e1 and e2. Same Lemma Binary Whether e1 and e2 have the same lemma. Temporal Signals* Bag-of-Signals If both events are in the same sentence, the temporal signals appearing in the sentence between the events, based on the temporal signals list from (Derczynski and Gaizauskas, 2010). Event String Similarity Numeric The string similarity between surface forms of the events using a Levenshtein distance measure. Syntactic Major POS One-hot The Major POS of e1 and e2 (e.g., Noun, Verb, or Adjective) [2 features]. Same Major POS Binary Whether the Major POS of e1 and e2 are the same. POS Tag One-hot The POS Tag of e1 and e2. [2 features] Same POS Tag Binary Whether the POS Tag of the e1 and e2 are the same. Syntactic Dependency* One-hot The ancestor event of the other event in the dependency tree. Determiner Binary Whether each event has a determiner. [2 features] Semantic Semantic Frame Binary Whether e1 and e2 have the same semantic frame using SEMAFOR (Das et al., 2010). Event Type* One-hot The event type of e1 and e2 extracted from the mapping from frames to event types (Liu et al., 2016). [2 features] Same Event Type Binary Whether event types of e1 and e2 are the same. VerbOccan Score Numeric The VerbOcean score (Chklovski and Pantel, 2004) between e1 and e2 for each of VerbOcean’s five relations. [5 features] Semantic Similarity* Numeric The cosine similarity between e1 and e2 embeddings using FastText (Mikolov et al., 2018) pre-trained model (wiki-news-300d-1M). Most Likely Parent Event* One-hot Which event is most likely to be a parent of the other event if both exist in the training data (see §3.2). WordNet Similarity Numeric The WordNet Similarity scores between e1 and e2 using (Lin, 1998; Wu and Palmer, 1994) similarity measures.[2 features] Arguments Co-refering Event Arguments* One-hot Whether specific arguments of e1 and e2 corefer (Lee et al., 2017). Verb arguments are computed with Allennlp’s SRL (Gardner et al., 2018; He et al., 2017), Nouns and Adjectives with SEMAFOR. # of Coreferring Args Numeric The number of coreferring arguments between e1 and e2. 
Event in the Other’s Args One-hot Whether one event is mentioned in one of the other event’s arguments, if both events are in the same sentence. Discourse & Narrative Sentence Distance Numeric The number of sentences between e1 and e2. Event Distance Numeric The number of events between e1 and e2. Same Sentence Binary Whether e1 and e2 are in the same sentence. Reported Speech Binary Whether an event mention is mentioned in a direct speech (see §3.3.1). Non Major Mention Binary Whether the sentences, in which the events are mentioned, share coreferential non major mentions (see §3.3.2). RST-DTs Relation One-hot The discourse relation between elementary discourse units (EDUs), where e1 or e2 are mentioned in, in Rhetorical Structure Tree Discourse Trees (RST-DTs; see §3.3.1). Table 1: Features used in the model. Novel features are underlined. Features modified from prior work are marked with an asterisk. from prior work on subevent detection (Araki et al., 2014; Glavaˇs and ˇSnajder, 2014b). We used spaCy (Honnibal and Montani) to compute lexical and syntactic features. 3.2 Modified Features Five of our features were inspired by those in prior work, but we modified them for our system. Temporal Signals We observed that if a sentence mentions two events from different event hierarchies, then a temporal signal often exists between them (e.g., after and since). This is illustrated by the first sentence in Example 6. To capture this we used a temporal signals list (Derczynski and Gaizauskas, 2010) to find intervening temporal signal words between the events, and encoded this as a bag of temporal signals. Syntactic Dependency Both prior systems encoded a feature which captured whether one event in a pair was an immediate child (i.e., governed) of 4783 the other. We expand that to checking for ancestry more generally. This is encoded as one-hot vector. Event Type We use the mapping from frames to 33 ACE 2005 event types introduced in (Liu et al., 2016) to determine the event type of each event. Prior work relied on the IBM SIRE system to compute event types (Florian et al., 2010). This is encode as a one-hot vector. Semantic Similarity We used the FastText (Mikolov et al., 2018) pre-trained model (wiki-news-300d-1M) to measure the semantic similarity between pairs of events. Prior work used the SENNA system for this feature (Collobert et al., 2011). This is encoded as a numeric feature. Most Likely Parent Event Similar to (Araki et al., 2014), we count the number of times in the training data that a particular event lemma and POS pair is observed as a parent of another event lemma/POS pair. For a pair (e1, e2), if the lemma and POS of e1 is more often found as a parent of e2, this is encoded as the vector (1,0,0); if the opposite is true, this is encoded as (0,1,0). If there were no observations, this is encoded as (0,0,1). Prior work did not take into account the part of speech, or the direction of the subevent relationship. Co-referring Event Arguments When matching arguments, we allowed ARG0 to match ARG0 or ARG1 and vice versa, and we also examined LOC and TMP modifying arguments. This is encoded as six-place binary vector for ARG0/ARG1, LOC, and TMP. 3.3 New Features The new features are divided into three types: two discourse features (§3.3.1), one narrative feature (§3.3.2) and two semantic features (§3.3.3). 3.3.1 Discourse Features We for the first time investigate the importance of discourse features for detecting subevents. 
We introduced two new features: rhetorical structure and reported speech. Rhetorical Structure Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is a hierarchical model aims to identify the discourse structure of a text. The text is first segmented into Elementary Discourse Units (EDUs) which in turn are linked in binary or multi-way discourse relations (see Carlson and Marcu, 2001). Rhetorical analysis has been shown to be beneficial in many NLP tasks including sentiment analysis (Somasundaran, 2010; Lazaridou et al., 2013; Bhatia et al., 2015), text generation (Prasad et al., 2005), information extraction (Maslennikov and Chua, 2007), question answering (Verberne et al., 2007) and coreference resolution (Cristea et al., 1998; Joty et al., 2015). Therefore we hypothesized that discourse structure could be useful to the subevent detection task. We employ the CODRA discourse parser (COmplete probabilistic Discriminative framework for performing Rhetorical Analysis; Joty et al., 2015) to build a discourse tree of each text. We use (Neumann, 2015) for post-processing the CODRA output to build a graph representing the result. We then extract the rhetorical relation between event mentions using the rhetorical relation between the EDUs in which the event are found. The feature is encoded as a one-hot vector covering all 16 main relation classes. Consider Example 2. When applied to this text, the discourse parser identifies the relation between raid3 and killed4 as an Elaboration relation. Furthermore, the parser also captures a Topic-Change relation between offensive6 and each of killed1, wounded2, raid3, killed4, and injured5. Although the discourse parser is useful primarily for providing information about inter-sentential relationships between events, it can also give useful information about intra-sentential relationships. Consider Example 3. For this text the discourse parser finds the Background relation between abduction1 and each of killed2 and rescued3. Reported Speech We also observed that One Palestinian was killed1 and at least four others were wounded2 in an Israeli air raid3 near the southern Gaza town of Rafah on Sunday, Palestinian security sources said. ... Palestinian security sources said that one Palestinian bystander was killed4 and at least four others were injured5. ... Israeli troops continued a massive ground and air offensive6 in the Gaza Strip on Sunday. Example 2: Excerpt from IC corpus (Hovy et al., 2013). Events relevant to explaining the discourse features are bolded. Mentions relevant to explaining the narrative feature are underlined. Note that, for clarity, not all events marked in the corpus are bolded here (e.g., Reporting events such as said). 4784 Mahsud, a former prisoner at Guantanamo Bay, is being hunted for the abduction1 of two Chinese engineers, which ended last Thursday when commandos killed2 five kidnappers and rescued3 one Chinese. Example 3: A sentence where intra-sentential discourse relations are useful for discovering subevent relations. subevents are often reported in direct and indirect speech. Direct speech is speech set off with quotes, while indirect speech is speech reported without quotes. We only considered direct speech in this work, primarily because it is easy to detect; however, subevents are also likely to be reported in indirect speech as can be seen in example 2 where killed4 and injured5 (which are subevents of raid3) are mentioned in indirect speech. 
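As a rough illustration of how lightweight the reported-speech feature can be, the sketch below marks an event mention as being in direct speech if its character offset falls inside a double-quoted span. This is a minimal approximation under our reading of the description above; the paper does not specify the exact quote handling, so the pattern here is an assumption.

import re

# Straight or curly double quotes; a minimal stand-in for direct-speech spans.
QUOTE_PATTERN = re.compile(r'"[^"]+"|\u201c[^\u201d]+\u201d')

def in_direct_speech(event_char_offset, text):
    """Binary reported-speech feature (sketch): True if the event mention's
    character offset lies inside a double-quoted span of the text."""
    return any(start <= event_char_offset < end
               for start, end in (m.span() for m in QUOTE_PATTERN.finditer(text)))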
3.3.2 Narrative Feature: Non-Major Mentions We also introduced what we are calling a narrative feature that we found informative in detecting subevent relations. This feature recognizes that other entities mentioned in a sentence besides those in the event arguments can be useful in subevent detection. This feature is narrative in the sense that it takes into account whether an entity is central to the story in the text. In particular, we observed that many sentences which shared an event hierarchy also share some coreferring mentions beside events argument. Despite this, certain entities are so central to the text that they are mentioned nearly everywhere and are thus no especially informative. Therefore we filter out these major mentions and encode as a binary feature whether or not the sentences contain the events share a non-major mention. The trick, of course, is defining what is a major mention. A simple and effective way of filtering out major mentions is to measure the distribution of coreference chain lengths (normalized to the number of the corresponding article’s chains), and discard all chains with a length above a certain threshold. This threshold can be tuned to the data. In our experiment we estimated the mean and standard deviation of the distribution of coreference chains in each text and filtered out chains that were longer than a single standard deviation above the mean. In Example 2, the threshold of the corresponding article is 2, thus Palestinian security sources, which is mentioned only twice, is The Al-Qaeda linked Army of Ansar alSunna claimed responsibility Tuesday for a car bomb attack1 which killed2 four Iraqi guardsmen ... Example 4: A sentence where one event appears inside the argument for another event. Event killed2 is a subevent of attack1. not considered a major mention. 3.3.3 Semantic Features Event in the Other’s Arguments We observed that if an event hierarchy is expressed within a sentence, one of the events is often mentioned as part of the other event’s arguments as can be seen in Example 4, where the attack1 event appears as ARG0 of killed2. Although this feature is related to the Syntactic Dependency feature, an event’s arguments are not always syntactically dependent on the event head, so it adds useful information. Number of Coreferring Arguments We also include the number of coreferring event arguments as numeric feature. 4 Experiment Here we describe the corpora on which the experiment were performed and the evaluation metrics used to measure the performance of our model. Then we compare the performance of our model with previous models, specifically those of Araki et al. (2014) and Glavaˇs and ˇSnajder (2014b). 4.1 Corpora As already mentioned, we used two corpora: the Intelligence Community (IC) (Hovy et al., 2013) corpus and HiEve corpus (Glavaˇs et al., 2014a) to train and test our model. The IC corpus contains 100 news articles in the Violent Event domain (attacks, killings, wars, etc). The HiEve corpus is an open domain corpus that also contains 100 news articles. Both corpora are annotated with both coreference and subevent relations. The inter-annotator agreement for the IC corpus is 0.467 Fleiss’s kappa for subevent relations. The approach proposed for temporal relations by (UzZaman and Allen, 2011) was used to measure the inter-annotator agreement in HiEve corpus, resulting in 0.69 F1. There is a small conceptual difference between the annotation of subevent relations in both corpora. 
The annotation of subevents in the IC corpus follows (Hovy et al., 2013) where they argued that there are three degrees of event iden4785 tity: fully identical, quasi-identical (a.k.a., partial co-reference) and fully independent (not identical). Quasi-identity in turn appears in two ways: membership or subevent. Membership is defined as when an event is a set of multiple instances of the same type of event and the other event is one of the instances. In Example 5, attack1 and operation2 are members of blows3, not subevents. In contrast, the HiEve corpus considers the membership relation as a subevent relation. When training on the IC corpus we considered only the subevent relations, and ignore the membership relations. The Al-Qaeda linked group which said it carried out the deadly attack1 against US soldiers in the Iraqi city of Mosul accused the United States . . . The operation2 is one of the heaviest blows3 in the city of Mosul . . . Example 5: Illustration of the membership quasiidentity relationship of Hovy et al. (2013) For both corpora we extend the annotations by computing the transitive closure of both coreference and subevent relations according to the following rules, where e1, e2 and e3 are event mentions, ≡indicates event coreference, e1 > e2 indicates e1 is a parent of e2 , and e1 < e2 indicates e1 is a child of e2. All of these rules are taken from the work by Glavaˇs et al. (2014a). We confirmed that this closure produces a consistent graph, and thus is insensitive to the order of computation of the closure. Table 2 shows the statistics of both corpora. 1. (e1 ≡e2) & (e2 ≡e3) ⇒(e1 ≡e3) 2. (e1 > e2) & (e2 > e3) ⇒(e1 > e3) 3. (e1 < e2) & (e2 < e3) ⇒(e1 < e3) 4. (e1 > e2) & (e2 ≡e3) ⇒(e1 > e3) 5. (e1 > e2) & (e1 ≡e3) ⇒(e3 > e2) 6. (e1 < e2) & (e2 ≡e3) ⇒(e1 < e3) 7. (e1 < e2) & (e1 ≡e3) ⇒(e3 < e2) 4.2 Experimental Setup We use Linear SVM classifier from scikit-learn package for classification over the gold annotated event mentions. Linear SVM can handle multiclass classification using a one-vs-rest scheme (Pedregosa et al., 2011). Most of the parameters are default parameters 2, but to address the issue 2penalty=l2,C=0.01, random state=0, max iter=1000, class weight=balanced, multi class=ovr. IC HiEve # of sentences 1,973 1,377 # of tokens 48,737 34,917 # PC relations, original 472 609 # PC relations, transitive closure 1632 1802 # CP relations, original 257 351 # CP relations, transitive closure 1665 1846 # NoRel relations 48567 42094 Avg # of sents. per article 19.7 13.7 Avg # of sents. in an event boundary 6.2 8.3 Avg # of events per article 30.5 26.0 Avg # of events in each hierarchy 5.2 7.0 Avg # of hierarchies per article 3.29 2.19 Table 2: Statistics of IC and HiEve corpora. of the data imbalance as shown in Table 3, we use the parameter class weight=balanced to assign a higher misclassification penalty on the minority class (PC and CP). We conducted 5-fold cross-validation for the experiment. Average fold statistics are shown in Table 3. 4.3 Evaluation and Result We use the same evaluation metrics used in previous models. (Araki et al., 2014) evaluated their model using BLANC evaluation metric (Recasens and Hovy, 2011) whereas (Glavaˇs and ˇSnajder, 2014b) evaluated their model using the standard F1 evaluation metric. The results of the performance averaged across all five folds on the three classes (PC, CP and NoRel) are shown in Table 4 using both evaluation metrics on both corpora. Table 5 shows the comparison between our model and previous models. 
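Returning briefly to the preprocessing in §4.1, the seven closure rules lend themselves to a simple fixed-point computation. The sketch below is one possible reading, assuming hashable event-mention ids: rule 1 is handled with union-find over coreference links, rules 4-7 lift subevent links to whole coreference clusters, and rules 2-3 close the parent-child relation transitively. It is a sketch of the procedure, not the authors' implementation.

from itertools import product

def close_annotations(coref_pairs, parent_pairs):
    """Transitive closure of the gold annotations (rules 1-7 in Section 4.1).
    coref_pairs:  iterable of (e1, e2) coreferent mention ids.
    parent_pairs: iterable of (parent, child) mention ids, i.e. e1 > e2.
    Returns parent-child pairs over coreference-cluster representatives;
    child-parent (CP) pairs are simply the reversed tuples."""
    # Rule 1: coreference is an equivalence relation -> union-find clusters.
    root = {}
    def find(e):
        root.setdefault(e, e)
        while root[e] != e:
            root[e] = root[root[e]]
            e = root[e]
        return e
    for a, b in coref_pairs:
        root[find(a)] = find(b)
    # Rules 4-7: a subevent link between mentions holds for their clusters.
    parents = {(find(p), find(c)) for p, c in parent_pairs}
    # Rules 2-3: parent-child containment is transitive (iterate to a fixed point).
    changed = True
    while changed:
        changed = False
        for (p1, c1), (p2, c2) in product(list(parents), repeat=2):
            if c1 == p2 and (p1, c2) not in parents:
                parents.add((p1, c2))
                changed = True
    return parents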
Although it is not clear to us how Araki et al. handled the direction of the subevent relation, we take the average of our model classes (PC and CP) and compare it with the subevent class in Araki et al.’s work. For Glavaˇs and ˇSnajder, we consider only their coherent model, which is the best model that does not use the gold coreference relations. Therefore, in Table 5, the reported result of all models are the average of both classes (PC and CP). From Table 5, we can see that our model outperforms both prior models, by 15 and 5 percentage points. We also see that the precision is lower than the recall which indicate that the subevent detection task is still a difficult and complex task that needs more work. In the next two sections we explain why the performance of our model is low on IC corpus compared to the HiEve corpus, as well as an extensive error analysis. 4786 IC corpus HiEve corpus Training Test Total Training Test Total # articles 80 20 100 80 20 100 # PC (avg.) 1299.2 332.8 1632 1484 318 1802 # CP (avg.) 1317.8 347.2 1665 1456.4 389.6 1846 # NoRel (avg.) 39469 9098 48567 35621.2 6472.8 42094 Table 3: Average statistics of the folds. PC stands for parent-child relation. CP stands for child-parent relation. NoRel stands for no relation. Evaluation Metrics F1 Score BLANC Pos Links Neg Links Avg Corpus Relation P R F1 P R P R F1 HiEve PC 0.576 0.807 0.67 0.661 0.832 0.989 0.973 0.857 CP 0.661 0.832 0.733 0.576 0.807 0.990 0.971 0.825 NoRel 0.98 0.945 0.962 0.980 0.945 0.625 0.830 0.836 IC PC 0.469 0.564 0.506 0.455 0.549 0.982 0.973 0.735 CP 0.454 0.550 0.492 0.468 0.564 0.983 0.975 0.743 NoRel 0.966 0.905 0.958 0.966 0.949 0.461 0.557 0.729 Table 4: Our model result on IC corpus (Hovy et al., 2013) and HiEve corpus (Glavaˇs et al., 2014a) using BLANC and F1 standard evaluation metrics. PC stands for parent-child relation. CP stands for child-parent relation. 5 Discussion As shown in Table 4, our model performs worse on the IC corpus than on HiEve. This is not surprising given the large difference in annotation agreement between IC and HiEve as well as the the removal of membership relations on IC corpus (see §4.1). Beside its lower annotation agreement, the IC corpus is also domain specific, with events only related to the intelligence community. This make general resources and tools (e.g., VerbOcean, WordNet) less effective. We investigated the importance of each of the five feature sets (Table 1) to our model by retraining it while leaving out one set at time. In order of importance they are (1) Syntactic, (2) Semantic, (3) Discourse & Narrative, (4) Lexical, and (5) Arguments. The importance of the syntactic features derived from the fact that children events are most often mentioned in the same sentence as their parent events. The three most important features among the Semantic features are Most Likely Parent Event, Event Type, and Semantic Frame. For the Lexical feature set, the Event Feature and Temporal Signals are the most important. 6 Error Analysis Inspection of the results revealed several types of errors, aside from the usual noise introduced by the various sub-components, such as the discourse parser or co-reference systems. We cluster the errors into three types: (1) an event pair that should be classified as PC but classified as CP and vice versa (about 28%); (2) an event pair is wrongly classified as NoRel (missed subevent relation; about 12%); (3) an event pair that is actually NoRel is wrongly classified as subevent (PC or CP; about 60% of the errors). 
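Before turning to the individual error types, the classifier configuration from §4.2 and the leave-one-feature-set-out analysis from §5 can be made concrete. In the sketch below, only the LinearSVC hyperparameters come from the footnote in §4.2; the feature-group column indices and the macro-F1 scoring are placeholders (the paper reports per-class F1 and BLANC), so this is a sketch rather than the evaluation actually run.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def make_classifier():
    # Hyperparameters as listed in the Section 4.2 footnote.
    return LinearSVC(penalty="l2", C=0.01, random_state=0, max_iter=1000,
                     class_weight="balanced", multi_class="ovr")

def ablate_feature_sets(X, y, feature_groups, cv=5):
    """Leave-one-feature-set-out ablation (Section 5). `feature_groups` maps a
    set name (Lexical, Syntactic, Semantic, Arguments, Discourse & Narrative)
    to the column indices it occupies in the feature matrix X; the indices
    depend on how the features are vectorized and are assumed given here."""
    scores = {}
    for name, cols in feature_groups.items():
        keep = np.array([i for i in range(X.shape[1]) if i not in set(cols)])
        scores[name] = cross_val_score(make_classifier(), X[:, keep], y,
                                       cv=cv, scoring="f1_macro").mean()
    return scores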
Type 1: PC as CP or vice versa About a third of the model errors were this type. Most of the errors are a result of an incorrect Event Type feature. This feature plays a major role in capturing the direction of the subevent relation. For example, if an event e1 with event type Die occurs in the text before an event e2 with event type Attack, then the direction of the relation is mostly childparent relation. But if e2 occurs before e1, then the direction of the relation is mostly parent-child. If the event type is unknown for one of the event mentions, then our model commonly usually fails to capture the direction. Type 2: Incorrect NoRel Most of the type 2 errors occur when an event is far away from its related event, in terms of number of intervening sentences. The larger the distance between events the more likely the model makes this error. For this type of error, we calculated the average number of sentences and the average number of events intervening between a missed pair of event, which the model should capture its subevent relation, and 4787 F1 Score BLANC Pos Links Neg Links Avg Corpus Model P R F1 P R P R F1 IC Araki et al. (2014) 0.144 0.333 0.993 0.981 0.594 Araki et al. Re-Impl. 0.242 0.285 0.262 Our model 0.461 0.557 0.499 0.461 0.557 0.983 0.974 0.739 HiEve Glavaˇs and ˇSnajder (2014b) 0.766 0.565 0.65 Glavaˇs and ˇSnajder Re-Impl. 0.562 0.750 0.983 0.971 0.813 Our model 0.618 0.82 0.701 0.618 0.82 0.99 0.972 0.841 Table 5: Our model performance compared to previous models (Araki et al., 2014; Glavaˇs and ˇSnajder, 2014b). Each row represent the average of both classes parent-child (PC) and child-parent (CP). Because the prior systems both did not report both metrics, we approximated the metrics for those systems by reimplementing them. found that when the distance is greater than 9 sentences and the number of events is greater than 14 events, the more likely the model would conduct this error. Subevents tend to be close to their parents in the text as shown in Table 2. Moreover, we observed that the Non-Major Mention (§3.3.2) and Discourse Relation features (§3.3.1), were less useful the larger the distance between the events. Type 3: False Positive PC or CP Most of the errors were of this type. There were a variety of causes, but the most common was when a sentence contained multiple event hierarchies. Consider Example 6 where the sentence contains two different event hierarchies, namely, one hierarchy containing offensive3 and another containing abduction4. Over 90 Palestinians and one Israeli soldier have been killed1 since Israel launched2 a massive air and ground offensive3 into the Gaza Strip on June 28, three days after the abduction4 of one Israeli soldier by Palestinian militants in a cross-border raid5. Example 6: Excerpt from IC corpus (Hovy et al., 2013) showing a passage that results in an error of Type 3. In this example, killed1 and launched2 are subevents of offensive3, whereas abduction4 is a subevent of raid5. When processing this example the discourse parser failed to capture the discourse relation between offensive3 and abduction4 because both events are in the same EDU. Moreover, even though we introduced features such as temporal signals (after, since, etc.) to capture subevent relation between intra-sentential events, this error can still occur if the intra-sentential events are syntactically related (i.e., killed1 syntactically dominates abduction4, or there is a causal relation between events). 
Based on this observation, we ran an experiment on the IC corpus to examine the impact on subevent detection of having two different events in the same sentence. We construct a subset of the IC corpus (58 articles) which excluded all articles that contain at least one sentence with two different event hierarchy, and re-ran our main experiment. Under these conditions, the model performance increased by 6 and 4.6 points F1 on PC and CP classes, respectively (because of the smaller set, we used 3 folds instead of 5). Returning to the original corpus, we observed that two different event hierarchies are mostly found in compound and complex sentences, and one of the them is usually background event. This observation indicates that splitting compound or complex sentences into two simple sentences in advance might be useful in detecting subevents. Even though the discourse parser does this splitting automatically, this split is not currently propagated to the other features. 7 Contributions We present a model to detect subevent relation in news articles which outperforms the two prior approaches by 15 and 5 percentage points, respectively. Our model involves several novel discourse and narrative features, as well as a small number of feature modifications. Our error analysis indicates that having two event hierarchies in the same sentence is a major problem, as well as having significant separation between a parent and child event. Acknowledgments Mr. Aldawsari was supported by a doctoral fellowship from Prince Sattam Bin Abdulaziz University, and thanks Dr. Sultan Aldossary for his advice and support. This work was also supported 4788 by US National Science Foundation grant number IIS-1749917 to Dr. Finlayson. Both authors would like to thank Ed Hovy for providing the IC Corpus for our use. References Jun Araki. 2018. Extraction of Event Structures from Text. Ph.D. thesis, Carnegie Mellon University. Jun Araki, Zhengzhong Liu, Eduard H Hovy, and Teruko Mitamura. 2014. Detecting subevent structure for event coreference resolution. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC), pages 4553–4558, Lisbon, Portugal. Cosmin Adrian Bejan and Sanda M Harabagiu. 2008. A linguistic resource for discovering event structures and resolving event coreference. In Proceedings of the 6th Language Resources and Evaluation Conference (LREC), pages 2881–2887, Marrakech, Morocco. Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from RST discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2212–2218, Lisbon, Portugal. Lynn Carlson and Daniel Marcu. 2001. Discourse tagging reference manual. ISI Technical Report ISI-TR545. Timothy Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the web for fine-grained semantic verb relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 33–40, Barcelona, Spain. Prafulla Kumar Choubey and Ruihong Huang. 2018. Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume I, pages 485–495, Melbourne, Australia. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. 
Journal of Machine Learning Research, 12:2493–2537. Dan Cristea, Nancy Ide, and Laurent Romary. 1998. Veins theory: A model of global discourse cohesion and coherence. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (ACL-ICCL), pages 281–285, Montreal, Canada. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A Smith. 2010. Probabilistic frame-semantic parsing. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 948–956, Los Angeles, CA. Marie-Catherine De Marneffe, Anna N Rafferty, and Christopher D Manning. 2008. Finding contradictions in text. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08 HLT), pages 1039–1047, Columbus, OH. Leon Derczynski and Robert Gaizauskas. 2010. USFD2: Annotating temporal expresions and tlinks for tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval’10), pages 337–340, Los Angeles, CA. Radu Florian, John F Pitrelli, Salim Roukos, and Imed Zitouni. 2010. Improving mention detection robustness to noisy input. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 335–345, Cambridge, MA. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640. Goran Glavaˇs and Jan ˇSnajder. 2014b. Constructing coherent event hierarchies from news stories. In Proceedings of the Workshop on Graph-based Methods for Natural Language Processing (TextGraphs9), pages 34–38, Doha, Qatar. Goran Glavaˇs, Jan ˇSnajder, Parisa Kordjamshidi, and Marie-Francine Moens. 2014a. Hieve: A corpus for extracting event hierarchies from news stories. In Proceedings of 9th Language Resources and Evaluation Conference (LREC), pages 3678–3683. Aria D Haghighi, Andrew Y Ng, and Christopher D Manning. 2005. Robust textual inference via graph matching. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 387–394, Vancouver, Canada. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Volume I, pages 473–483, Vancouver, Canada. Matthew Honnibal and Ines Montani. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. https://github.com/ explosion/spaCy; Last accessed on May 31, 2019. 4789 Eduard Hovy, Teruko Mitamura, Felisa Verdejo, Jun Araki, and Andrew Philpot. 2013. Events are not simple: Identity, non-identity, and quasi-identity. In Proceedings of the Workshop on Events: Definition, Detection, Coreference, and Representation, pages 21–28, Atlanta, Georgia. Kevin Humphreys, Robert Gaizauskas, and Saliha Azzam. 1997. Event coreference for information extraction. In Proceedings of a Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, pages 75–81, Madrid, Spain. Silja Huttunen, Roman Yangarber, and Ralph Grishman. 2002. Complexity of event structure in ie scenarios. 
In Proceedings of the 19th International Conference on Computational Linguistics (COLING), pages 1–7, Taipei, Taiwan. Shafiq Joty, Giuseppe Carenini, and Raymond T Ng. 2015. Codra: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385–435. Angeliki Lazaridou, Ivan Titov, and Caroline Sporleder. 2013. A Bayesian model for joint unsupervised induction of sentiment, aspect and discourse representations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), Volume I, pages 1630–1639, Sofia, Bulgaria. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 188–197, Copenhagen, Denmark. Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning (ICML), pages 296–304, San Francisco, CA. Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging framenet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Volume I, pages 2134– 2143, Berlin, Germany. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. Mstislav Maslennikov and Tat-Seng Chua. 2007. A multi-resolution framework for information extraction from free text. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL), pages 592–599, Prague, Czech Republic. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, and Armand Puhrsch, Christian andJoulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the 11th Language Resources and Evaluation Conference (LREC), Miyazaki, Japan. Srini Narayanan and Sanda Harabagiu. 2004. Question answering based on semantic structures. In Proceedings of the 20th International Conference on Computational Linguistics (COLING), pages 693– 701, Geneva, Switzerland. Arne Neumann. 2015. discoursegraphs: A graphbased merging tool and converter for multilayer annotated corpora. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 309–312, Vilnius, Lithuania. Qiang Ning, Zhili Feng, and Dan Roth. 2017. A structured learning approach to temporal relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1027–1037, Copenhagen, Denmark. Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Rashmi Prasad, Aravind Joshi, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, and Bonnie Webber. 2005. The Penn Discourse TreeBank as a resource for natural language generation. In Proceedings of the Corpus Linguistics Workshop on Using Corpora for Natural Language Generation, pages 25–32, Birmingham, UK. James Pustejovsky, Jos´e M Castano, Robert Ingria, Roser Sauri, Robert J Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R Radev. 2003. TimeML: Robust specification of event and temporal expressions in text. 
In Proceedings of the 2003 AAAI Spring Symposium on New Directions in Question Answering, pages 28–34. Stanford, CA. Marta Recasens and Eduard Hovy. 2011. Blanc: Implementing the rand index for coreference evaluation. Natural Language Engineering, 17(4):485– 510. Swapna Somasundaran. 2010. Discourse-level relations for Opinion Analysis. Ph.D. thesis, University of Pittsburgh. Naushad UzZaman and James Allen. 2011. Temporal evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT), pages 351–356, Portland, OR. Suzan Verberne, LWJ Boves, NHJ Oostdijk, and PAJM Coppen. 2007. Evaluating discourse-based answer extraction for why-question answering. In Proceedings of the 30th Annual International ACM SIGIR 4790 Conference on Research and Development in Information Retrieval (SIGIR), pages 735–736, Amsterdam, The Netherlands. Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics (ACL), pages 133–138, Las Cruces, NM.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4791 HellaSwag: Can a Machine Really Finish Your Sentence? Rowan Zellers♠ Ari Holtzman♠ Yonatan Bisk♠ Ali Farhadi♠~ Yejin Choi♠~ ♠Paul G. Allen School of Computer Science & Engineering, University of Washington ~Allen Institute for Artificial Intelligence https://rowanzellers.com/hellaswag Abstract Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as “A woman sits at a piano,” a machine must select the most likely followup: “She sets her fingers on the keys.” With the introduction of BERT (Devlin et al., 2018), near human-level performance was reached. Does this mean that machines can perform human level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even stateof-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (°95% accuracy), state-of-the-art models struggle (†48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical ‘Goldilocks’ zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges. 1 Introduction Imagine a woman chasing a dog around outside, trying to give it a bath. What might happen next? Humans can read a narrative like this, shown in Figure 1, and connect it to a rich model of the world: the dog is currently dry and not soapy, and it actively doesn’t want to be bathed. Thus, one A woman is outside with a bucket and a dog. The dog is running around trying to avoid a bath. She… A. rinses the bucket off with soap and blow dry the dog’s head. B. uses a hose to keep it from getting soapy. C. gets the dog wet, then it runs away again. D. gets into a bath tub with the dog. Come to a complete halt at a stop sign or red light. At a stop sign, come to a complete halt for about 2 seconds or until vehicles that arrived before you clear the intersection. If you're stopped at a red light, proceed when the light has turned green. … A. Stop for no more than two seconds, or until the light turns yellow. A red light in front of you indicates that you should stop. B. After you come to a complete stop, turn off your turn signal. Allow vehicles to move in different directions before moving onto the sidewalk. C. Stay out of the oncoming traffic. People coming in from behind may elect to stay left or right. D. If the intersection has a white stripe in your lane, stop before this line. Wait until all traffic has cleared before crossing the intersection. OpenAI GPT How to determine who has right of way. easy! ??? 
+ Adversarial Filtering + Adversarial Filtering Figure 1: Models like BERT struggle to finish the sentences in HellaSwag, even when they come from the same distribution as the training set. While the wrong endings are on-topic, with words that relate to the context, humans consistently judge their meanings to be either incorrect or implausible. For example, option A of the WikiHow passage suggests that a driver should stop at a red light for no more than two seconds. plausible next event is option C—that she’ll get the dog wet and it will run away again. When the SWAG dataset was first announced (Zellers et al., 2018), this new task of commonsense natural language inference seemed trivial for humans (88%) and yet challenging for thenstate-of-the-art models (†60%), including ELMo (Peters et al., 2018). However, BERT (Devlin et al., 2018) soon reached over 86%, almost human-level performance. One news article on this development was headlined “finally, a machine that can finish your sentence.”1 In this paper, we investigate the following question: How well do deep pretrained models, like 1A New York Times article at https://nyti.ms/2DycutY. 4792 BERT, perform at commonsense natural language inference (NLI)? Our surprising conclusion is that the underlying task remains unsolved. Indeed, we find that deep models such as BERT do not demonstrate robust commonsense reasonining ability by themselves. Instead, they operate more like rapid surface learners for a particular dataset. Their strong performance on SWAG is dependent on the finetuning process, wherein they largely learn to pick up on dataset-specific distributional biases. When the distribution of language shifts slightly, performance drops drastically – even if the domain remains identical. We study this question by introducing HellaSwag,2 a new benchmark for commonsense NLI. We use Adversarial Filtering (AF), a datacollection paradigm in which a series of discriminators is used to select a challenging set of generated wrong answers. AF is surprisingly e↵ective towards this goal: the resulting dataset of 70k problems is easy for humans (95.6% accuracy), yet challenging for machines (†50%q. This result holds even when models are given a significant number of training examples, and even when the test data comes from the exact same distribution as the training data. Machine performance slips an additional 5% when evaluated on examples that cover novel concepts from the same domain. To make this dataset robust to deep pretrained models, we use a trifecta of state-of-theart generators (Radford et al., 2018), state-ofthe-art discriminators (BERT), and high quality source text. We expand on the SWAG’s original video-captioning domain by using WikiHow articles, greatly increasing the context diversity and generation length. Our investigation reveals a Goldilocks zone – roughly three sentences of context, and two generated sentences – wherein generations are largely nonsensical, even though state-of-the-art discriminators cannot reliably tell the di↵erence between these generations and the ground truth. More broadly, our paper presents a case-study towards a future of verified progress in NLP, via iterative rounds of building and breaking datasets. If our ultimate goal is to provide reliable benchmarks for challenging tasks, such as commonsense NLI, these benchmarks cannot be static. 
Instead, they must evolve together with the evolving state-of2Short for Harder Endings, Longer contexts, and Lowshot Activities for Situations With Adversarial Generations. Dataset and code at https://rowanzellers.com/hellaswag. Context 2 Context 1 Context N … Context 1 Context M … Real ending … Real ending (N instances) (M instances) Dtrain Real ending … Real ending Real ending Gen’d ending K Gen’d ending K Gen’d ending K … … … … … Gen’d ending2 … Gen’d ending2 Gen’d ending2 Gen’d ending 1 … Gen’d ending 1 Gen’d ending 1 Dtest Gen’d ending2 … Gen’d ending 1 … … … … Gen’d ending 2 Gen’d ending 1 Gen’d ending K Gen’d ending K … f Train f to discriminate real vs. generated Replace easily-classified generations with adversarial ones that currently aren’t included Generated Ending (context M) Generated Ending (context 2) New! New! Figure 2: An overview of Adversarial Filtering. On each iteration, a new classifier is trained on a dummy training set Dtrain to replace easily-classified negative endings on the dummy test set Dtest with adversarial endings. This process is repeated iteratively, to obtain a challenging dataset regardless of the final split. the-art. Continued evolution in turn requires principled dataset creation algorithms. Whenever a new iteration of a dataset is created, these algorithms must leverage existing modeling advancements to filter out spurious biases. Only once this cycle becomes impossible can we say that the underlying task – as opposed an individual dataset – is solved. 2 Background SWAG is a dataset for commonsense NLI. For each question, a model is given a context from a video caption and four ending choices for what might happen next. Only one choice is right – the actual next caption of the video. Obtaining interesting negatives is challenging. Prior work (e.g. Gururangan et al., 2018; Poliak et al., 2018) has found that when humans write the endings to NLI questions, they introduce subtle yet strong class-conditional biases known as annotation artifacts.3 To address this, Zellers et al. (2018) introduced Adversarial Filtering (AF). An overview is shown in Figure 2. The key idea is to produce a dataset D which is adversarial for any arbitrary split of pDtrain, Dtestq. This requires a generator of negative candidates (i.e., wrong endings that vi3These biases simply inflate model performance, but past work has also shown that are unwanted social biases induced when humans write the endings, in terms of gender and race (Rudinger et al., 2015). 4793 Figure 3: Validation accuracy on SWAG for BERTLarge versus training set size. The baseline (25% accuracy) is random chance. BERT does well given as few as 16 training examples, but requires tens of thousands of examples to approach human performance. olate human notions about how the world works), which we achieve by using a language model. Potential candidates of incorrect answers were massively oversampled from a language model trained on in-domain data, and then selected using an ensemble of adversaries. The selection process happens iteratively: on each iteration, the dataset is randomly partitioned into Dtrain and Dtest. The ensemble is trained to classify endings as real or generated on Dtrain, then, AF replaces easy-toclassify generations in Dtest. This process continues until the accuracy of these adversaries converges. Last, humans validate the data to remove adversarial endings that seem realistic. Importantly, AF creates a final dataset that is challenging to models regardless of the final dataset split. 
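Concretely, the Adversarial Filtering loop of Figure 2 can be written down in a few schematic lines. Everything below is a sketch: train_discriminator and candidate_pool are placeholder interfaces standing in for the discriminator ensemble and the oversampled language-model generations, and the number of wrong endings per context is a free parameter; this is not the authors' code.

import random

def adversarial_filtering(contexts, candidate_pool, train_discriminator,
                          n_wrong=3, n_iters=50):
    """Schematic AF loop (Figure 2). candidate_pool[ctx] is a large list of
    machine-written endings for a context; assigned[ctx] holds the wrong
    endings currently kept in the dataset. train_discriminator fits a model on
    the dummy train split and returns score(ctx, ending) = P(generated)."""
    assigned = {c: random.sample(candidate_pool[c], n_wrong) for c in contexts}
    for _ in range(n_iters):
        random.shuffle(contexts)
        mid = len(contexts) // 2
        d_train, d_test = contexts[:mid], contexts[mid:]
        score = train_discriminator(d_train, assigned)
        for ctx in d_test:
            for i, ending in enumerate(assigned[ctx]):
                if score(ctx, ending) > 0.5:   # easily classified as generated
                    # swap in the candidate the current discriminator finds hardest
                    assigned[ctx][i] = min(candidate_pool[ctx],
                                           key=lambda e: score(ctx, e))
    return assigned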
In Section 4, we will use AF as the underlying workhorse to construct an NLI dataset that is easy for humans, yet challenging for machines. This difficulty persists even when models are provided significant training data, and even when this data comes from the same distribution as the test set. This contrasts with past work on adversarial examples (e.g. Jia and Liang, 2017; Glockner et al., 2018; Belinkov and Bisk, 2018) which consider cases where an out-of-distribution test set is constructed to be adversarial. 3 Investigating SWAG In this section, we investigate why SWAG was solved. We focus on BERT, since it is the best Figure 4: BERT validation accuracy when trained and evaluated under several versions of SWAG, with the new dataset HellaSwag as comparison. We compare: Ending Only No context is provided; just the endings. Shuffled Endings that are indidivually tokenized, shu✏ed, and then detokenized. Shuffled+ Ending Only No context is provided and each ending is shu✏ed. known approach at the time of writing.4 Core to our analysis is investigating how a model trained on Wikipedia and books can be so e↵ectively finetuned for SWAG, a dataset from video captions. 3.1 How much innate knowledge does BERT have about SWAG? We investigate this question by measuring BERT’s performance on SWAG while varying the size of the training dataset; results are shown in Figure 3. While the best known ELMo NLI model (ESIM+ELMo; Chen et al., 2017) requires the entire training set to reach 59%, BERT outperforms this given only 64 examples. However, BERT still needs upwards of 16k examples to approach human performance, around which it plateaus. 3.2 What is learned during finetuning? Figure 4 compares BERT’s performance when trained and evaluated on variants of SWAG. Context: BERT’s performance only slips 11.9 points (86.7%Ñ74.8%) when context is omitted (Ending Only), suggesting a bias exists in the endings themselves.5 If a followup event seems unreasonable absent of context, then there must be something markedly di↵erent between the space of human-written and machine-generated endings. Structure: To distinguish word usage from 4See the appendix for a discussion of the BERT architecture and hyperparameter settings we used in our experiments. 5These biases are similar to those in NLI datasets, as found by Gururangan et al. (2018); Poliak et al. (2018). 4794 Figure 5: Adversarial Filtering (AF) results with BERT-Large as the discriminator. Left: AF applied to ActivityNet generations produced by Zellers et al. (2018)’s language model versus OpenAI GPT. While GPT converges at random, the LM used for SWAG converges at 75%. Right: AF applied to WikiHow generations from GPT, while varying the ending length from one to three sentences. They converge to random, „40%, and „50%, respectively. structural patterns, we consider a new scenario, Shuffled. Here the shared context is provided, but the words in each ending choice are randomly permuted. Surprisingly, this reduces BERT performance by less than 10%. Even though BERT was never exposed to randomly shu✏ed text during pretraining, it easily adapts to this setting, which suggests that BERT is largely performing lexical reasoning over each (context, answer) pair. Finally, when the context is removed and the words in each ending are shu✏ed, performance drops to 60.4%. While low, this is still higher than ELMo’s performance (†60% from Zellers et al., 2018). 
As neither context nor structure is needed to discriminate between human and machine-written endings in a majority of cases, it is likely that systems primarily learn to detect distributional stylistic patterns during finetuning. 3.3 Where do the stylistic biases come from? SWAG was constructed via Adversarial Filtering (AF). Endings were generated via a language model, and then selected to fool a discriminator. To understand why it was solved requires understanding the interplay of AF with respect to SWAG’s generators and discriminators. Zellers et al. (2018) used a two-layer LSTM for generation, with shallow stylistic adversarial filters.6 This setup was robust against ELMo models, but has the shallow LM in particular produced distributional artifacts that BERT picks up on? 6The discriminator was an ensemble that featured a bag of words model, a shallow CNN, a multilayer perceptron operating on language model perplexities. To investigate this, we perform AF using BERTLarge as the discriminator7 in two settings, comparing generations from Zellers et al. (2018) with those from a finetuned GPT (Radford et al., 2018). Strikingly, the results, Figure 5 (left), show that the generations used in SWAG are so di↵erent from the human-written endings that AF never drops the accuracy to chance; instead, it converges to roughly 75%. On the other hand, GPT’s generations are good enough that BERT accuracy drops below 30% over many random subsplits of the data, revealing the importance of the generator. 4 HellaSwag The success of BERT implies that high-quality generators and discriminators are crucial to AF’s success. However, it does not imply that the underlying task of commonsense NLI – as opposed to a single dataset – is solved. To evaluate this claim requires us to try making a new evolution of the SWAG dataset, one in which artifacts are removed. In this section, we do just that by introducing HellaSwag. 4.1 ActivityNet Captions We start by including video captions from the ActivityNet Captions dataset (Krishna et al., 2017). The original SWAG dataset contains these, along with captions from LSMDC (Rohrbach et al., 2017), but for HellaSwag we solely used 7On each iteration, BERT-Large is re-initialized from its pretrained checkpoint, finetuned, and then evaluated in a four-way setting on the dummy test set of held-out data. See Supp A for a details of our BERT-Large AF setup. 4795 ActivityNet. In addition to temporal descriptions, ActivityNet also provides activity labels for each caption (e.g. jumping rope). We will use these activity labels as additional structure to test generalization ability. 4.2 WikiHow: A New Testbed We next consider a new and challenging testbed for commonsense reasoning: completing how-to articles from WikiHow, an online how-to manual. We scrape 80k context and follow-up paragraphs from WikiHow, covering such diverse topics as “how to make an origami owl” to “how to survive a bank robbery.” Each context has at most three sentences, as do the follow-ups. AF’s e↵ectiveness in this new setting is shown in Figure 5 (right). We consider three settings, corresponding to endings that are either one, two, or three sentences long. In all cases, BERT performance begins high (70-90%), but there are enough generations for Adversarial Filtering to lower the final accuracy considerably. While the one-sentence case converges to slightly higher than random – 35% when it converges – the two and three sentence cases are higher, at 40% and 50% respectively. 
Given more context, it becomes easier to classify an ending as machine- or human-written. We compromise and use two-sentence generations. Particularly in the two-sentence case, we find ourselves in a Goldilocks zone wherein generations are challenging for deep models, yet, as we shall soon see, easy for humans.

4.3 Obtaining high human agreement

How well can humans distinguish human-written endings from machine generations refined with Adversarial Filtering? In Figure 6, we compare human performance with that of BERT on a random 80%/20% split. We see a contrast between the ActivityNet and WikiHow performance. While ActivityNet starts off harder for BERT (25.5%), it also proves difficult for humans (60%). In contrast, WikiHow starts easier for BERT (41.1%) and humans find the domain almost trivial (93.5%). We hypothesize this discrepancy is due to the lengths of the two datasets (Figure 7). WikiHow's two-sentence generations average 41 tokens, versus 13 for ActivityNet. This gives WikiHow generations three times as many opportunities to make a detectable mistake.

To ensure high agreement on ActivityNet, we perform several rounds of human filtering, increasing human performance to 94%. During human validation, crowd workers are given a context and six ending choices, of which one is the true ending and the other five are from AF. On each iteration, we replace machine-written endings that the worker rated as realistic with new samples. In the end, we keep the 25k best ActivityNet contexts (i.e. those with highest agreement among workers8) and the 45k best WikiHow contexts.

Figure 6: For HellaSwag, we ensure high human agreement through several rounds of annotation. By collecting how likely each ending is, we can filter false negative endings – machine generations that sound realistic – and replace them with true negatives. On both subdatasets, BERT performance increases during validation, but the gap to human performance remains wide.

Figure 7: Lengths of ActivityNet and WikiHow endings; the latter with two-sentence generations. WikiHow endings are much longer, which corresponds to being easier for humans, while taking longer for AF to converge.

4.4 Zero-shot categories for evaluation

To evaluate a model's ability to generalize to new situations, we use category labels from WikiHow and ActivityNet to make 'zero-shot' evaluation sets. For each set (validation or test), we craft two subsets: one containing 5k 'in-domain' examples that come from categories seen during training (Figure 8), and another with 5k 'zero-shot' examples from randomly chosen held-out categories. In total, there are 70k dataset examples.

8See the appendix for details about how we estimate this.
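A small sketch of how the in-domain and zero-shot evaluation subsets just described could be derived from category labels; the example format and the exact sampling details are assumptions for illustration.

    import random

    def make_eval_splits(examples, train_categories, n_per_split=5000, seed=0):
        # `examples` is assumed to be a list of dicts with a "category" field
        # (an ActivityNet activity label or a WikiHow category). Examples whose
        # category was seen during training form the in-domain pool; examples
        # from held-out categories form the zero-shot pool.
        rng = random.Random(seed)
        in_domain = [ex for ex in examples if ex["category"] in train_categories]
        zero_shot = [ex for ex in examples if ex["category"] not in train_categories]
        rng.shuffle(in_domain)
        rng.shuffle(zero_shot)
        return in_domain[:n_per_split], zero_shot[:n_per_split]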
4796 Overall In-Domain Zero-Shot ActivityNet WikiHow Model Val Test Val Test Val Test Val Test Val Test Split SizeÑ 10K 10K 5K 5K 5K 5K 3.2K 3.5K 6.8K 6.5K Chance 25.0 fastText 30.9 31.6 33.8 32.9 28.0 30.2 27.7 28.4 32.4 33.3 LSTM+GloVe 31.9 31.7 34.3 32.9 29.5 30.4 34.3 33.8 30.7 30.5 LSTM+ELMo 31.7 31.4 33.2 32.8 30.4 30.0 33.8 33.3 30.8 30.4 LSTM+BERT-Base 35.9 36.2 38.7 38.2 33.2 34.1 40.5 40.5 33.7 33.8 ESIM+ELMo 33.6 33.3 35.7 34.2 31.5 32.3 37.7 36.6 31.6 31.5 OpenAI GPT 41.9 41.7 45.3 44.0 38.6 39.3 46.4 43.8 39.8 40.5 BERT-Base 39.5 40.5 42.9 42.8 36.1 38.3 48.9 45.7 34.9 37.7 BERT-Large 46.7 47.3 50.2 49.7 43.3 45.0 54.7 51.7 42.9 45.0 Human 95.7 95.6 95.6 95.6 95.8 95.7 94.0 94.0 96.5 96.5 Table 1: Performance of models, evaluated with accuracy (%).We report results on the full validation and test sets (Overall), as well as results on informative subsets of the data: evaluated on in-domain, versus zero-shot situations, along with performance on the underlying data sources (ActivityNet versus WikiHow). All models substantially underperform humans: the gap is over 45% on in-domain categories, and 50% on zero-shot categories. Figure 8: Examples on the in-domain validation set of HellaSwag, grouped by category label. Our evaluation setup equally weights performance on categories seen during training as well as out-of-domain. 5 Results We evaluate the difficulty of HellaSwag using a variety of strong baselines, with and without massive pretraining. The models share the same format: given a context and an ending, return a logit for that ending. Accordingly, we train our models using a four-way cross-entropy loss, where the objective is to predict the correct ending. In addition to BERT-Large, our comparisons include: a. OpenAI GPT (Radford et al., 2018): A finetuned 12-layer transformer that was pre-trained on the BookCorpus (Zhu et al., 2015). b. Bert-Base: A smaller version of the BERT model whose architecture size matches GPT. c. ESIM+ELMo (Chen et al., 2017; Peters et al., 2018): This is the best-performing ELMo model for NLI, modified slightly so the final output layer is now a four-way softmax over endings. d. LSTM sentence encoder: This is a randomly initialized two-layer bi-LSTM; the second layer’s hidden states are max-pooled and fed into an MLP to predict the logit. We consider three variations: GloVe embeddings, ELMo embeddings, or (frozen) BERT-Base embeddings.9 e. FastText: (Joulin et al., 2017) An o↵-the-shelf library for bag-of-words text classification.10 We compare all models to human performance by asking five independent crowd workers to solve the same four-way multiple choice problems; their predictions are combined via majority vote. Our results, shown in Table 1, hint at the difficulty of the dataset: human performance is over 95%, while overall model performance is below 50% for every model. Surprisingly, despite BERTLarge having been used as the adversarial filter, it still performs the strongest at 47.3% overall. By making the dataset adversarial for BERT, it seems to also have become adversarial for every other model. For instance, while ESIM+ELMo obtained 59% accuracy on SWAG, it obtains only 33.3% accuracy on HellaSwag. In addition to pretraining being critical, so too is end-to-end finetuning. Freezing BERT-Base and adding an LSTM on top lowers its overall performance 4.3%. This may help explain why models such as ESIM+ELMo struggled on SWAG, as ELMo isn’t updated during finetuning. 
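For reference, the shared evaluation format used by all baselines in this section – one logit per ending, trained with a four-way cross-entropy loss – can be sketched as follows. The `logit_model` callable is a hypothetical stand-in for any of the encoders above, not a particular implementation.

    import torch
    import torch.nn.functional as F

    def multiple_choice_loss(logit_model, contexts, endings, labels):
        # `logit_model(context, ending)` is assumed to return a scalar logit
        # tensor for one (context, ending) pair; `endings` holds four candidate
        # strings per context; `labels` is a LongTensor with the index of the
        # gold ending for each context.
        logits = torch.stack([
            torch.stack([logit_model(c, e) for e in ends])
            for c, ends in zip(contexts, endings)
        ])                                   # shape: (batch, 4)
        return F.cross_entropy(logits, labels)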
While BERT is the best model, it still struggles on HellaSwag, and especially so on zero-shot cat9For ELMo and BERT-Base, the model learns scalar weights to combine each internal layer of the encoder. 10This model is trained with binary cross entropy loss. 4797 Figure 9: Transfer experiments from SWAG to HellaSwag and vice versa, evaluated on the validation sets. Overall, a BERT-Large that is trained on SWAG hardly generalizes to HellaSwag: it scores 34.6%. egories. Performance drops roughly 5% on the test fold, which suggests that the finetuning is not enough for BERT to learn to generalize to novel activities or how-to categories. Last, we see that WikiHow is a much harder domain that ActivityNet for machines: 45% BertLarge performance, versus 96.5% for humans. Curiously, it is on this source dataset that we see the smallest gap between OpenAI GPT and BERT. In fact, OpenAI GPT outperforms BERT on WikiHow, but the reverse is true for ActivityNet. One possibility is that the left-to-right structure of GPT is the right inductive bias for WikiHow - perhaps reasoning bidirectionally over long contexts is too much for a 12-layer transformer to learn. 5.1 SWAG to HellaSwag transfer Given the shared goals and partial domains of SWAG and HellaSwag, it is natural to ask to what extent models can transfer between the two datasets. In Figure 9 we show the results from transfer experiments: models are trained on one dataset and evaluated on the other.11 The best models are trained on the same dataset that they are evaluated on: training on SWAG and evaluating on HellaSwag lowers performance by 12%; vice versa lowers performance by 15%. The missing domain for HellaSwag models is movie descriptions (LSMDC), still, HellaSwag models obtain 69% accuracy. On the other hand, SWAG models do not generalize at all to their missing domain, WikiHow (28%), suggesting that learning general commonsense reasoning 11Note that the ActivityNet splits are di↵erent for each dataset. To avoid skewing the results, we report only on the validation video captions that are not in the training sets of either dataset. The overall accuracy is then a weighted average, where ActivityNet examples are weighted proportionately more. This gives a slight advantage to training on SWAG, as it sees all the ActivityNet categories when training. Category: Shaving (ActivityNet; In-domain) A bearded man is seen speaking to the camera and making several faces. the man a) then switches o↵and shows himself via the washer and dryer rolling down a towel and scrubbing the floor. (0.0%) b) then rubs and wipes down an individual’s face and leads into another man playing another person’s flute. (0.0%) c) is then seen eating food on a ladder while still speaking. (0.0%) d) then holds up a razor and begins shaving his face. (100.0%) Category: Sharpening knives (ActivityNet; Zero-Shot) Two men are in a room and the man with a blue shirt takes out a bench stone and with a little lubricant on the stone takes an knife and explains how to sharpen it. then he a) uses a sharpener to smooth out the stone using the knife. (100.0%) b) shows how to cut the bottom with the knife and place a tube on the inner and corner. (0.0%) c) bends down and grabs the knife and remove the appliance. (0.0%) d) stops sharpening the knife and takes out some pieces of paper to show how sharp the knife is as he cuts slivers of paper with the knife. (0.0%) Category: Youth (WikiHow; In-Domain) How to make up a good excuse for your homework not being finished Blame technology. 
One of the easiest and most believable excuses is simply blaming technology. You can say your computer crashed, your printer broke, your internet was down, or any number of problems. a) Your excuses will hardly seem believable. [substeps] This doesn’t mean you are lying, just only that you don’t have all the details of how your computer ran at the time of the accident. (0.0%) b) The simplest one to have in a classroom is to blame you entire classroom, not just lab. If you can think of yourself as the victim, why not blame it on technology. (9.4%) c) Most people, your teacher included, have experienced setbacks due to technological problems. [substeps] This is a great excuse if you had a paper you needed to type and print. (29.1%) d) It may also be more believable if you are fully aware that you may be flying at high speed on a plane and need someone to give you traffic report. Your problem might be your laptop failing to charge after a long flight. (61.5%) Figure 10: Example questions answered by BERTLarge. Correct model predictions are blue, incorrect predictions are red. The right answers are bolded. was hardly necessary to solve SWAG. 5.2 Qualitative examples We show several qualitative examples in Figure 10, along with BERT-Large’s predictions. BERT does well on some ActivityNet contexts, such as in the first row, where it correctly predicts the ending for a shaving caption. Whereas shaving is in-domain, the second example about sharpening knives is zero-shot. In this context, BERT’s answer suggests that one would use a knife to sharpen a stone, rather than vice versa. The last example comes from WikiHow, which appears to be incredibly challenging for BERT. BERT picks answer d, which has more words that match the context of technology (planes, traffic, laptop), but is incoherent.12 12Among other issues, why would someone suddenly be aware that they are ‘flying at high speed on a plane...?’ 4798 Figure 11: Performance on the WikiHow subset of alternative variations of HellaSwag, where di↵erent Adversarial Filters are used (but without human validation). We consider the shallow stylistic adversaries used by Zellers et al. (2018) (Stylistic Ensemble), as well as an LSTM with ELMo embeddings, GPT, BERT-Base, and BERT-Large. For each adversarial filtering model, we record the accuracy of that model before and after AF is used. We also evaluate each alternative dataset using BERT-Large. The results suggest that using a a stronger model at test time (over the model used for AF) improves performance, but is not enough to solve the task. 6 Discussion Our results suggest that HellaSwag is a challenging testbed for state-of-the-art NLI models, even those built on extensive pretraining. The question still remains, though, of where will the field go next? 6.1 How easy might HellaSwag be for future discriminators? In this paper, we showed the existence of a Goldilocks zone of text complexity – in which generations are nonsensical, but existing stateof-the-art NLP models cannot tell the di↵erence. How hard will the dataset be for future, even more powerful, models? Answering this question is challenging because these models don’t exist (or are unavailable) at the time of writing. However, one remedy is to perform an ablation study on the Adversarial Filtering model used, comparing weaker filters with stronger discriminators. 
We present our results in Figure 11, and find that while weak discriminators (like the stylistic ensemble used to make SWAG) only marginally reduce the accuracy of BERT-Large, increasing the gap between the filter and the final discriminator is not enough to solve the task. For instance, using a discriminator with 3x the parameters as the adversarial filter (BERTLarge vs. BERT-Base) results in 63% machine accuracy. Figure 12: Estimated pretraining hours required to reach a desired accuracy on HellaSwag. We estimate perfomance with respect to a RTX 2080 Ti - a modern, fast GPU, and fit a log-linear regression line. An extrapolation suggests that to reach human-level performance on HellaSwag, without algorithmic or computational improvements, would require 109 GPU-hours of pretraining (over 100k GPU years). 6.2 How well does pretraining scale? Overall, the current paradigm of pretraining large models on lots of data has made immense progress on NLP benchmarks. Though we expect this trend to continue, it also behooves us to consider its limits. If more compute is indeed the answer for human-level commonsense inference, what would the compute requirements of this hypothetical massive model look like? We investigate this in Figure 12 by comparing the accuracies of known models on HellaSwag with their computational needs. This estimation is a rough estimate: we convert reported TPU runtimes to our benchmark RTX 2080 Ti GPU using the Roofline model (Williams et al., 2009), which focuses primarily on the bottleneck of loading tensors into GPU memory. Extrapolating from an exponential fit suggests that reaching humanlevel performance on our dataset would require 109 GPU hours, or 100k years – unless algorithmic improvements are made. What might these algorithmic improvements look like? These could include architectural advances, better pretraining objectives, and beyond. However, these improvements share the bottleneck of the data source. To answer some HellaSwag questions correctly without reasoning deeply – like knowing that it is a bad idea to stop at a red light for ‘at most two seconds’ – might require an exponential number of samples, due to prob4799 lems of reporting bias (Gordon and Van Durme, 2013). Alternatively, future models might answer correctly only by picking up on spurious patterns, in which case a new development of the benchmark – using these models as adversaries – would place us in the same position as we are right now. Put another way, for humans to answer HellaSwag questions requires abstracting away from language and modeling world states instead. We postulate that this is what separates solving the task of commonsense NLI, as opposed to a particular dataset. Indeed, we find that existing deep methods often get fooled by lexical false friends. For example, in the WikiHow example from Figure 10, BERT chooses an ending that matches the technology words in the context, rather than matching the deeper topic: using technology as an excuse for not doing homework. 6.3 Towards a future of evolving benchmarks What happens when HellaSwag gets solved? We believe the answer is simple: crowdsource another dataset, with the same exact format, and see where models fail. Indeed, in our work we found this to be straightforward from an algorithmic perspective: by throwing in the best known generator (GPT) and the best known discriminator (BERTLarge), we made a dataset that is adversarial - not just to BERT, but to all models we have access to. 
While this was easy algorithmically, care must be taken from a data curation standpoint. Indeed, we find success exists within a Goldilocks zone: the data source must be complex enough that stateof-the-art generators often make mistakes, while simple enough such that discriminators often fail to catch them. This ties the future of SWAGstyle benchmarks to progress on language generation: until generation is solved, commonsense NLI will remain unsolved. Even recent promising results on scaling up language models (Radford et al., 2019) find problems in terms of consistency, with the best curated examples requiring 25 random seeds. 7 Conclusion In this paper, we presented HellaSwag, a new dataset for physically situated commonsense reasoning. By constructing the dataset through adversarial filtering, combined with state-of-the-art models for language generation and discrimination, we produced a dataset that is adversarial to the most robust models available – even when models are evaluated on items from the training distribution. In turn, we provided insight into the inner workings of pretrained models, and suggest a path for NLP progress going forward: towards benchmarks that adversarially co-evolve with evolving state-of-the-art models. Acknowledgments We thank the reviewers, as well as Jesse Thomason, for their helpful feedback. We thank the Mechanical Turk workers for their great work during dataset collection. Thanks also to Zak Stone and the Google Cloud TPU team for help with the computing infrastructure. This work was supported by the National Science Foundation through a Graduate Research Fellowship (DGE1256082) and NSF grants (IIS-1524371, 1637479, 165205, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the IARPA DIVA program through D17PC00343, the Sloan Research Foundation through a Sloan Fellowship, the Allen Institute for Artificial Intelligence, the NVIDIA Artificial Intelligence Lab, and gifts by Google and Facebook. The views and conclusions contained herein are those of the authors and should not be interpreted as representing endorsements of IARPA, DOI/IBC, or the U.S. Government. References Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In ICLR. ICLR. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1657–1668. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceed4800 ings of the 2013 workshop on Automated knowledge base construction, pages 25–30. ACM. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. of NAACL. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Robin Jia and Percy Liang. 2017. 
Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 427–431. Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-Captioning Events in Videos. In International Conference on Computer Vision (ICCV). Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. Alec Radford, Je↵rey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. 2017. Movie Description. International Journal of Computer Vision, 123(1):94–120. Rachel Rudinger, Vera Demberg, Ashutosh Modi, Benjamin Van Durme, and Manfred Pinkal. 2015. Learning to predict script events from domainspecific text. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 205–210. Samuel Williams, Andrew Waterman, and David Patterson. 2009. Roofline: An insightful visual performance model for floating-point programs and multicore architectures. Technical report, Lawrence Berkeley National Lab.(LBNL), Berkeley, CA (United States). Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In arXiv preprint arXiv:1506.06724.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4801–4810 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4801 Unified Semantic Parsing with Weak Supervision Priyanka Agrawal*, Parag Jain, Ayushi Dalmia, Abhishek Bansal, Ashish Mittal, Karthik Sankaranarayanan IBM Research AI *[email protected] {pajain34, adalmi08, abbansal, arakeshk, kartsank}@in.ibm.com Abstract Semantic parsing over multiple knowledge bases enables a parser to exploit structural similarities of programs across the multiple domains. However, the fundamental challenge lies in obtaining high-quality annotations of (utterance, program) pairs across various domains needed for training such models. To overcome this, we propose a novel framework to build a unified multi-domain enabled semantic parser trained only with weak supervision (denotations). Weakly supervised training is particularly arduous as the program search space grows exponentially in a multi-domain setting. To solve this, we incorporate a multipolicy distillation mechanism in which we first train domain-specific semantic parsers (teachers) using weak supervision in the absence of the ground truth programs, followed by training a single unified parser (student) from the domain specific policies obtained from these teachers. The resultant semantic parser is not only compact but also generalizes better, and generates more accurate programs. It further does not require the user to provide a domain label while querying. On the standard OVERNIGHT dataset (containing multiple domains), we demonstrate that the proposed model improves performance by 20% in terms of denotation accuracy in comparison to baseline techniques. 1 Introduction Semantic parsing is the task of converting natural language utterances into machine executable programs such as SQL, lambda logical form (Liang, 2013). This has been a classical area of research in natural language processing (NLP) with earlier works primarily utilizing rule based approaches (Woods, 1973) or grammar based approaches (Lafferty et al., 2001; Kwiatkowski et al., Figure 1: Examples for natural language utterances with linguistic variations in two different domains that share structural regularity (Source: OVERNIGHT dataset). Note that in this setup, we do not use ground truth parses for training the semantic parser. 2011; Zettlemoyer and Collins, 2005, 2007). Recently, there has been a surge in neural encoderdecoder techniques which are trained with input utterances and corresponding annotated output programs (Dong and Lapata, 2016; Jia and Liang, 2016). However, the performance of these strongly supervised methods is restricted by the size and the diversity of training data i.e. natural language utterances and their corresponding annotated logical forms. This has motivated the work on applying weak supervision based approaches (Clarke et al., 2010; Liang et al., 2017; Neelakantan et al., 2016; Chen et al., 2018), which use denotations i.e. the final answers obtained upon executing a program on the knowledge base and use REINFORCE (Williams, 1992; Norouzi et al., 2016), to guide the network to learn its semantic parsing policy (see Figure 3(a)). Another line of work (Goldman et al., 2018; Cheng and Lapata, 2018) is aimed towards improving the efficiency of weakly supervised parsers by applying a twostage approach of first learning to generate program templates followed by exact program generation. 
It is important to note that this entire body of work on weakly supervised semantic parsing has 4802 been restricted to building a parser over a single domain only (i.e. single dataset). Moving beyond single-domain to multiple domains, Herzig and Berant (2017) proposed semantic parsing networks trained by combining the datasets corresponding to multiple domains into a single pool. Consider the example in Figure 1 illustrating utterances from two domains, RECIPES and PUBLICATIONS, of the OVERNIGHT dataset. The utterances have linguistic variations most and maximum number corresponding to the shared program token argmax. This work shows that leveraging such structural similarities in language by combining these different domains leads to improved performance. However, as with many single-domain techniques, this work also requires strong supervision in the form of program annotations corresponding to the utterances. Obtaining such high quality annotations across multiple domains is challenging, thereby making it expensive to scale to newer domains. To overcome these limitations, in this work, we focus on the problem of developing a semantic parser for multiple domains in the weak supervision setting using denotations. Note that, this combined multiple domain task clearly entails a large set of answers and complex search space in comparison to the individual domain tasks. Therefore, the existing multi-domain semantic parsing models (Herzig and Berant, 2017) fail when trained under weak supervision setting. See Section 6 for a detailed analysis. To address this challenge, we propose a multipolicy distillation framework for multi-domain semantic parsing. This framework splits the training in the following two stages: 1) Learn domain experts (teacher) policy using weak supervision for each domain. This allows the individual models to focus on learning the semantic parsing policy for corresponding single domains; 2) Train a unified compressed semantic parser (student) using distillation from these expert policies. This enables the unified student to gain supervision from the above trained expert policies and thus, learn the shared semantic parsing policy for all the domains. This two-stage framework is inspired from policy distillation (Rusu et al., 2016) which transfers policy of a reinforcement learning (RL) agent to train a student network that is more compact and efficient. In our case, weakly supervised domain teachers serve as RL agents. For inference, only the compressed student model is used which takes as input the user utterance from any domain and outputs the corresponding parse program. It is important to note that, the domain identifier input is not required by our model. The generated program is then executed over the corresponding KB to retrieve denotations that are provided as responses to the user. To the best of our knowledge, we are the first to propose a unified multiple-domain parsing framework which does not assume the availability of ground truth programs. Additionally, it allows inference to be multi-domain enabled and does not require user to provide domain identifiers corresponding to the input utterance. In summary, we make the following contributions: • Build a unified neural framework to train a single semantic parser for multiple domains in the absence of ground truth parse programs. (Section 3) • We show the effectiveness of multi-policy distillation in learning a semantic parser using independent weakly supervised experts for each domain. 
(Section 4) • We perform an extensive experimental study in multiple domains to understand the efficacy of the proposed system against multiple baselines. We also study the effect of the availability of a small labeled corpus in the distillation setup. (Section 5) 2 Related Work Figure 2: Illustration of the proposed work in the space of key related work in the area of semantic parsing, knowledge distillation and policy learning This work is related to three different areas: semantic parsing, policy learning and knowledge 4803 distillation. Figure 2 illustrates the placement of our proposed framework of unified semantic parsing in the space of the key related works done in each of these three areas. Semantic parsing has been an extensively studied problem, the first study dating back to Woods (1973). Much of the work has been towards exploiting annotated programs for natural language utterances to build single domain semantic parsers using various methods. Zettlemoyer and Collins (2007); Kwiatkowski et al. (2011) propose to learn the probabilistic categorical combination grammars, Kate et al. (2005) learn transformation from syntactic parse tree of natural language utterance to formal parse tree. Andreas et al. (2013) model the task of semantic parsing as machine translation. Recently, Dong and Lapata (2016) introduce the use of neural sequence-to-sequence models for the task of machine translation. Due to the cost of obtaining annotated programs, there has been an increasing interest in using weak supervision based methods (Clarke et al., 2010; Liang et al., 2017; Neelakantan et al., 2016; Chen et al., 2018; Goldman et al., 2018) which uses denotations, i.e. final answers obtained on executing a program on the knowledge base, for training. The problem of semantic parsing has been primarily studied in a single domain setting employing supervised and weakly supervised techniques. However, the task of building a semantic parser in the multi-domain setting is relatively new. Herzig and Berant (2017) propose semantic parsing models using supervised learning in a multi-domain setup and is the closest to our work. However, none of the existing works inspect the problem of multi-domain semantic parsing in a weak supervision setting. Knowledge distillation was first presented by Hinton et al. (2015) and has been popularly used for model compression of convolution neural networks in computer vision based tasks (Yu et al., 2017; Li et al., 2017). Kim and Rush (2016); Chen et al. (2017) applied knowledge distillation on recurrent neural networks for the task of machine translation and showed improved performance with a much compressed student network. Our proposed method of policy distillation was first introduced by Rusu et al. (2016) and is built on the principle of knowledge distillation and applied for reinforcement learning agents. Variants of the framework for policy distillations have also been proposed (Teh et al., 2017). To the best of our knowledge, our work is the first to apply policy distillation in a sequence-to-sequence learning task. We anticipate that the framework described in this paper can be applied to learn unified models for other tasks as well. 3 Proposed Framework In this section, we first present a high level overview of the framework for the proposed unified semantic parsing using multi-policy distillation and then describe the models employed for each component of the framework. We focus on the setting of ‘K’ domains each with an underlying knowledge-base B1, · · · , BK. 
We have a training set of utterances Xk and the corresponding final denotations Y k, for each domain k ∈ 1, · · · , K. Unlike existing works (Herzig and Berant, 2017), we do not assume availability of ground truth programs corresponding to the utterances in the training data. Our goal is to learn a unified semantic parsing model which takes as input a user utterance xk i = {xk i1, · · · , xk in} ∈ Xk from any domain k and produces the corresponding program zk i = {zk i1, · · · , zk im} which when executed on the corresponding knowledge base Bk should return denotation yk i ∈Y k. In this setup, we only rely on the weak supervision from the final denotations Y k for training this model. Moreover, the domain identifier k is not needed by this unified model. We use multi-policy distillation framework for the task of learning a unified semantic parser. Figure 3 summarizes the proposed architecture. We first train parsing models (teachers) for each domain using weak supervision to learn domainspecific teacher policies. We use REINFORCE for training, similar to prior work on Neural Symbolic Machine (Liang et al., 2017) described briefly in Section 4.1. Next, we distill the learnt teacher policies to train a unified semantic parser enabled over multiple domain. (described in Section 4.2). Note that: (1) Our teachers are trained with weak supervision from denotations instead of actual parses and hence are weaker compared to completely supervised semantic parses. (2) Stateof-the-art sequence distillation works (Kim and Rush, 2016; Chen et al., 2017) have focused on a single teacher-student setting. 4804 teacher network encoder decoder input utterance executor predicted output ground truth answer REINFORCE loss generated parse !" = {!% ", … , !("} *" = {*% ", … , *+ " } ,-" -" backpropagation Knowledge Base (Bk) a (a) Domain specific expert policy Ek E1 E2 Ek-1 EK . . probability distribution from experts input utterance ! = {!$, … , !'} generated parse ) = {)$, … , )*} student network encoder decoder distillation loss backpropagation Multi-policy distillation domain experts a (b) Learning a unified student S by distilling domain policies from experts E1, · · · , EK Figure 3: Proposed architecture diagram of unified semantic parsing framework. Figure 3(a) demonstrates the training of the experts Ek using weak supervision on the denotation corresponding to input utterance. Once we train all the domain experts E1, · · · , EK for the K domains, we use the probability distributions of the parse generated by these experts to train the student, thereby distilling the domain policies learnt by the teachers to the student as shown in Figure 3(b). 3.1 Model In this section, we describe the architecture of semantic parsing model used for both teachers as well as the student networks. We use a standard sequence-to-sequence model (Sutskever et al., 2014) with attention similar to Dong and Lapata (2016) for this task. Each parsing model (the domain specific teachers E1, ..., EK and the unified student S) is composed of an L-layer encoder LSTM (Hochreiter and Schmidhuber, 1997) for encoding the input utterances and an L-layer attention based decoder LSTM (Bahdanau et al., 2014) for producing the program sequences. Note that in this section, we omit the domain id superscript k. Given a user utterance x, the aim of the semantic parsing model is to generate output program z which should ultimately result in the true denotations y. 
This user utterance x = {x1, ..., xn} is input to the encoder which maps each word in the input sequence to the embedding e = {e1, ..., en} and uses this embedding to update its respective hidden states h = {h1, ..., hn} using ht = LSTM(et, ht−1; θenc), where θenc are the parameters of encoder LSTM. The last hidden state hn is input to the decoder’s first state. The decoder updates its hidden state st using st = LSTM(ct−1, st−1; θdec) where st−1 is the embedding of output program token zt−1 at last step t −1 and θdec are the decoder LSTM parameters. The output program {z1, ..., zm} is generated token-wise by applying softmax over the vocabulary weights derived by transforming the corresponding hidden state s. Further, we employ beam search during decoding which generates a set of parses B for every utterance. At each decoding step t, a beam Bt containing partial parses of length t are maintained. The next step beam Bt+1 are the |B| highest scoring expansions of programs in the beam Bt. 4 Training In this section we describe the training mechanism employed for the proposed multi-domain policy distillation framework for semantic parsing. The training process in our proposed framework has the following two components (Figure 3): (i) weakly supervised training for domain specific semantic parsing experts E1, ..., EK and, (ii) distilling multiple domain policies to the unified student 4805 S. We next describe each of these two components. 4.1 Domain-specific Semantic Parsing Policy As described in the previous section, an individual domain specific semantic parsing model generates the program z = {z1, ..., zm} which is executed on the knowledge base B to return the denotation ˆy. For brevity, we omit domain identifier k and instance id i in this section. In our setting, since labeled programs are not available for training, we use weak supervision from final denotations y similar to Liang et al. (2017) for each domain expert. As the execution of parse program is a non-differential operation on the KB, we use REINFORCE (Williams, 1992; Norouzi et al., 2016) for training which maximizes the expected reward. Reward R(x, z) for prediction z on an input x is defined as the match score between the true denotations y for utterance x and the denotations obtained by executing the predicted program z. The overall objective to maximize the expected reward is as follows X x EPθ(z|x)[R(x, z)] = X x X z Pθ(z|x)R(x, z) ≈ X x X z∈B Pθ(z|x)[R(x, z)] where θ = (θenc, θdec) are the policy parameters; B is the output beam containing top scoring programs (described in Section 3.1) and Pθ(z|x) is the likelihood of parse z Pθ(z|x) = Y t Pθ(zt|x, z1:t−1) (1) To reduce the variance in gradient estimation we use baseline b(x) = 1 |B| P z∈B R(x, z) i.e. the average reward for the beam corresponding to the input instance x. See Table 2 WEAKINDEP for the performance achieved for individual domains with this training objective. Note that the primary challenge with this weakly supervised training is the sparsity in reward signal given the large search space leading to only a few predictions having a non-zero reward. This can be seen in the Table 2 WEAKCOMBINED when the entire set of domains is pooled into one, the numbers drop severely due to the exponential increase in the search space. 4.2 Unified Model for multiple domains For the unified semantic parser, we use the same sequence-to-sequence model described in Section 3.1. The hyper-parameter settings vary from domain-specific models as detailed in Section 5.3. 
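Before turning to distillation, the weakly supervised objective of Section 4.1 can be made concrete with a short sketch. This is a schematic PyTorch rendering under assumptions (beam-level log-probabilities and denotation rewards are taken as given, and program execution on the KB is outside the sketch), not the authors' implementation.

    import torch

    def reinforce_loss(log_probs, rewards):
        # `log_probs`: tensor of shape (beam,) with log P_theta(z | x) for each
        # program in the beam B of one utterance.
        # `rewards`: tensor of shape (beam,) with the denotation-match reward
        # R(x, z) for each program.
        # Returns a loss whose gradient corresponds to the REINFORCE estimate
        # of the negative expected reward with the average-reward baseline b(x).
        baseline = rewards.mean()
        advantage = rewards - baseline          # treated as a constant w.r.t. theta
        return -(advantage.detach() * log_probs).sum()

    # Usage with dummy values for a beam of size 4:
    log_probs = torch.log(torch.tensor([0.4, 0.3, 0.2, 0.1])).requires_grad_()
    rewards = torch.tensor([1.0, 0.0, 0.0, 0.5])
    loss = reinforce_loss(log_probs, rewards)
    loss.backward()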
We use the multi-task policy distillation method of Rusu et al. (2016) to train this unified parser for multiple domains. The individual domain experts E1, ..., EK are trained independently as described in Section 4.1. This distillation framework enables transfer of knowledge from experts E1, ..., EK to a single student model S that operates as a multi-domain parser, even in the absence of any domain indicator with input utterance during the test phase. Each expert Ek provides a transformed training dataset to the student Dk = {(xk i , (pk θ)i)}|Xk| i=1 , where (pk θ)i is the expert’s probability distribution on the entire program space w.r.t input utterance xi. Concretely, given m is the decoding sequence length and V is the vocabulary combined across domains, then (pk θ)i ∈[0, 1]m×|V| denotes the expert Ek’s respective probabilities that output token zij equals vocab token v, for all time steps j ∈{1, . . . , m} and ∀v ∈V. (pk θ)i = {{pk θ(zij = v; xk i , zi{1:j−1})}m j=1}|V| v=1 The student takes the probability outputs from the experts as the ground truth and is trained in a supervised manner to minimize the cross-entropy loss L w.r.t to teachers’ probability distribution: L(θS; θ1, ..., θK) = − K X k=1 |Xk| X i=1 |m| X j=1 |V| X v=1 pk θ(zij = v; xk i , zi{1:j−1}) log pS θ (zij = v; xk, zi{1:j−1}) (2) where {θk}K k=1 are the policy parameters of experts and θS are the student model parameters; similarly pS θ (zij = v; xk, zi{1:j−1}) is the probability assigned to output token zij by student S. This training objective enables the unified parser to learn domain-specific parsing strategies from individual domains as well as leverage structural variations across domains. Therefore, the combined multi-domain policy S is refined and compressed during the distillation process thus rendering it to be more effective in parsing for each of the domains. 4806 5 Experimental Setup In this section, we provide details on the data and model used for the experimental analysis1. We further elaborate on the baselines used. 5.1 Data We use the OVERNIGHT semantic parsing dataset (Wang et al., 2015) which contains multiple domains. Each domain has utterances (questions) and corresponding parses in λ−DCS form that are executable on domain specific knowledge base. Every domain is designed to focus on a specific linguistic phenomenon, for example, CALENDAR on temporal knowledge, BLOCKS on spatial queries. In this work, we use seven domains from the dataset as listed in Table 1. We would like to highlight that we do not use the parses available in the dataset during the training of our unified semantic parser. Our weakly supervised setup uses denotations to navigate the program search space and learn the parsing policy. This search space is a function of decoder (program) length and vocabulary size. Originally, the parses have 45 tokens on an average with a combined vocabulary of 182 distinct tokens across the domains. To reduce the decoder search space, we normalize the data to have shortened parses with an average length of 11 tokens and 147 combined vocab size. We reduce the sequence length by using a set of template normalization functions and reduce the vocab size by masking named entities for each domain. An example of normalization function is the following: an entity utterance say of type recipe in the query is programmed by first creating a single valued list with the entity type i.e. (en.recipe) and then that property is extracted : (call SW.getProperty ( call SW.singleton en.recipe ) ( string ! 
type )) resulting in 14 tokens. We replace this complex phrasing by directly substituting the entity type under consideration i.e. (en.recipe) (1 token). Next, we show an example for a complete utterance: what recipes posting date is at least the same as rice pudding. Its original parse is: (call SW.listValue (call SW.filter (call SW.getProperty (call SW.singleton en.recipe) (string ! type)) (call SW.ensureNumericProperty (string posting_date)) (string >=) 1Code and data is available at https://github. com/pagrawal-ml/Unified-Semantic-Parsing (call SW.ensureNumericEntity (call SW.getProperty en.recipe.rice_pudding (string posting_date))))). Our normalized query is what recipes posting date is at least the same as e0, where entity rice pudding is substituted by entity identifier e0. The normalized parse is as follows: SW.filter en.recipe SW.ensureNumericProperty posting_date >= (SW.ensureNumericEntity SW.getProperty e0 posting_date) It is important to note that this normalization function is reversible. During the test phase, we apply the reverse function to convert the normalized parses to original forms for computing the denotations. Table 1 shows the domain wise statistics of original and normalized data. It is important to note that this script is applicable for template reduction for any λ−DCS form. We report hard denotation accuracy i.e. the proportion of questions for which the top prediction and ground truth programs yield the matching answer sets as the evaluation metric. For computing the rewards during training, we use soft denotation accuracy i.e. F1 score between predicted and ground truth answer sets. Table 2 shows the accuracy with strongly supervised training (SUPERVISED). The average denotation accuracy (with beam width 1) of 70.6% which is comparable to state-of-the-art (Jia and Liang, 2016) denotation accuracy of 75.6% (with beam width 5). This additionally suggests that data normalization process does not alter the task complexity. 5.2 Baselines In the absence of any work on multi-domain parser trained without ground truth programs, we compare the performance of the proposed unified framework against the following baselines: 1. Independent Domain Experts (WEAKINDEPENDENT): These are the set of weakly supervised semantic parsers, trained independently for each domain using REINFORCE algorithm as described in Section 4.1. Note that these are the teachers in our multi-policy distillation framework. 2. Combined Weakly Supervised Semantic Parser (WEAK-COMBINED)): As per 4807 DOMAIN ORIGINAL DATASET NORMALIZED DATASET UTTERANCE PROGRAM UTTERANCE PROGRAM Vocab Vocab Avg. Vocab Vocab Avg. Length Length BASKETBALL 340 65 48.3 332 58 20.5 BLOCKS 213 48 47.4 212 41 9.7 CALENDAR 206 54 43.7 191 46 8.8 HOUSING 302 58 42.7 293 48 8.5 PUBLICATIONS 190 44 46.2 187 38 8.5 RECIPES 247 49 42.6 241 40 7.8 RESTAURANTS 315 62 41.2 310 48 8.2 AVERAGE 259 54.3 44.6 252.3 45.6 10.3 Table 1: Training data statistics for original and normalized dataset. For each domain, we compare the #unique tokens (Vocab) in input utterances and corresponding programs; and average program length. the recommendation in Herzig and Berant (2017), we pool all the domains datasets into one and train a single semantic parser with weak supervision. 3. Independent Policy Distillation (DISTILLINDEPENDENT): We also experiment with independent policy distillation for each domain. The setup is similar to the one described in Section 4.2 used to learn K student parsing models, one for each individual domain. 
Each student model uses the respective expert model as the only teacher. Following the above naming convention, we term our proposed framework as DISTILL-COMBINED. For the sake of completeness, we also compute the skyline SUPERVISED i.e. the sequence-tosequence model described in Section 3.1 trained with ground truth parses. 5.3 Model Setting We use the original train-test split provided in the dataset. We further split the training set of each domain into training (80%) and validation (20%) sets. We tune each hyperparameter by choosing the parameter from a range of values and choose the configuration with highest validation accuracy for each model. For each experiment we select from: beam width = {1, 5, 10, 20}, number of layers = {1,2,3,4}, rnn size for both encoder & decoder = {100, 200, 300}. For faster compute, we use the string match accuracy as the proxy to denotation reward. In our experiments, we found that combined model performs better with the number of layers set to 2 and RNN size set to 300 while individual models’ accuracies did not increase with an increase in model capacity. This is intuitive as the combined model requires more capacity to learn multiple domains. Encoder and decoder maximum sequence lengths were set to 50 and 35 respectively. For all the models, RMSprop optimizer (Hinton et al.) was used with learning rate set to 0.001. 6 Results and Discussion Table 2 summarizes our main experimental results. It shows that our proposed framework DISTILL-COMBINED clearly outperforms the three baselines WEAK-INDEPENDENT, WEAKCOMBINED, DISTILL-INDEPENDENT described in Section 5.2 Effect of Policy Distillation: DISTILLINDEPENDENT are individual domain models trained through distillation of individual weakly supervised domain experts policies WEAKINDEPENDENT. We observe that policy distillation of individual expert policies result in an average percentage increase of ∼10% in accuracy with a maximum of ∼33% increase in case of BLOCKS domains, which shows the effectiveness of the distillation method employed in our framework. Note that for CALENDAR domain, WEAKINDEPENDENT is unable to learn the parsing policy probably due to the complexity of temporal utterances. Therefore, further distillation on the inaccurate policy leads to drop in performance. More systematic analysis on the failure cases is an interesting future direction. Performance of Unified Semantic Parsing framework: The results show the proposed uni4808 DOMAIN WEAKWEAKDISTILLDISTILLINDEPENDENT COMBINED INDEPENDENT COMBINED SUPERVISED BASKETBALL 33.8 0.5 33.8 36.3 81.0 BLOCKS 27.6 0.8 36.8 37.1 52.8 CALENDAR 25.0 0.6 12.5 17.3 72.0 HOUSING 33.3 2.1 42.3 49.2 66.1 PUBLICATIONS 42.2 6.2 45.9 48.4 68.3 RECIPES 45.8 2.3 61.5 66.2 80.5 RESTAURANTS 41.3 2.1 40.9 45.2 73.5 AVERAGE 35.5 2.1 39.1 42.8 70.6 Table 2: Test denotation accuracy for each domain comparing our proposed method DISTILLCOMBINED with the three baselines. We also report the skyline SUPERVISED. fied semantic parser using multi-policy distillation (DISTILL-COMBINED) (as described in section 3) on an average has the highest performance in predicting programs under weak supervision setup. DISTILL-COMBINED approach leads to an increased performance by ∼20% on an average in comparison to individual domain specific teachers (WEAK-INDEPENDENT). We note maximum increase in the case of HOUSING domain with ∼47% increase in the denotation accuracy. 
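To make the mechanism behind these gains explicit, recall that the distillation objective in Eq. (2) is a per-token cross-entropy against each teacher's output distribution. A minimal sketch follows; tensor shapes mirror the notation of Section 4.2, and everything else is an assumption.

    import torch

    def distillation_loss(teacher_probs, student_log_probs):
        # Both tensors have shape (batch, m, |V|): for each utterance and each
        # decoding step, a distribution over the combined program vocabulary.
        # `teacher_probs` holds p_theta^k from a domain expert;
        # `student_log_probs` holds log p_theta^S from the unified parser.
        # Summing this term over all K domain experts recovers Eq. (2).
        return -(teacher_probs * student_log_probs).sum(dim=-1).mean()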
Effectiveness of Multi-Policy Distillation: Finally, we evaluate the effectiveness of the overall multi-policy distillation process in comparison to training a combined model with data merged from all the domains (WEAK-COMBINED) in the weak supervision setup. We observe that due to weak signal strength and enlarged search space from multiple domains, WEAK-COMBINED model performs poorly across domains. Thus, further reinforcing the need for the distillation process. As discussed earlier, the SUPERVISED model is trained using strong supervision from ground-truth parses and hence is not considered as a comparable baseline, rather a skyline, for our proposed model 6.1 Effect of Small Parallel Corpus We show that our model can greatly benefit from the availability of a limited amount of parallel data where semantic parses are available. Figure 4 plots the performance of WEAK-INDEPENDENT and DISTILL-INDEPENDENT models for RECIPES domain when initialized with a pre-trained SUPERVISED model trained on 10% and 30% of parallel training data. As it can be seen, adding 10% parallel data brings an improvement of about 5 points, while increasing the parallel corpus size to 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0% 10% 30% Denotation Accuracy % of Training Data SUPERVISED WEAK-INDEPENDENT DISTILL-INDEPENDENT Figure 4: Effect of the fraction of training data on different models only 30% we observe an improvement of about 11 points. The observed huge boost in performance is motivating given the availability of small amount of parallel corpus in most real world scenarios. 7 Conclusions and Future Work In this work, we addressed the challenge of training a semantic parser for multiple domains without strong supervision i.e. in the absence of ground truth programs corresponding to input utterances. We propose a novel unified neural framework using multi-policy distillation mechanism with two stages of training through weak supervision from denotations i.e. final answers corresponding to utterances. The resultant multi-domain semantic parser is compact and more precise as demonstrated on the OVERNIGHT dataset. We believe that this proposed framework has wide applicability to any sequence-to-sequence model. We show that a small parallel corpus with annotated programs boosts the performance. We plan to explore if further fine-tuning using denotations 4809 based training on the distilled model can lead to improvements in the unified parser. We also plan to investigate the possibility of augmenting the parallel corpus by bootstrapping from shared templates across domains. This would further make it feasible to perform transfer learning on a new domain. An interesting direction would be to enable domain experts to identify and actively request for program annotations given the knowledge shared by other domains. We would also like to explore if guiding the decoder through syntactical and domain-specific constraints helps in reducing the search space for the weakly supervised unified parser. Acknowledgement We thank Ghulam Ahmed Ansari and Miguel Ballesteros, our colleagues at IBM for discussions and suggestions which helped in shaping this paper. References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 47–52. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. 
arXiv e-prints, abs/1409.0473. Bo Chen, Le Sun, and Xianpei Han. 2018. Sequenceto-action: End-to-end semantic graph generation for semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 766–777. Association for Computational Linguistics. Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925–1935. Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2018. Weaklysupervised neural semantic parsing with a generative ranker. CoRR, abs/1808.07625. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the fourteenth conference on computational natural language learning, pages 18–27. Association for Computational Linguistics. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43. Association for Computational Linguistics. O. Goldman, V. Latcinnik, U. Naveh, A. Globerson, and J. Berant. 2018. Weakly-supervised semantic parsing with abstract examples. In Association for Computational Linguistics (ACL). Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Association for Computational Linguistics. Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning lecture 6a overview of mini-batch gradient descent. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22. Association for Computational Linguistics. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 3, AAAI’05, pages 1062–1068. AAAI Press. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Association for Computational Linguistics. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 1512–1523, Stroudsburg, PA, USA. Association for Computational Linguistics. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Jia Li. 2017. Learning from noisy labels with distillation. 
2017 IEEE International Conference on Computer Vision (ICCV), pages 1928–1936. 4810 Chen Liang, Jonathan Berant, Quoc V. Le, Ken Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 23–33, Vancouver, Canada. Percy Liang. 2013. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408. Arvind Neelakantan, Quoc V. Le, Mart´ın Abadi, Andrew McCallum, and Dario Amodei. 2016. Learning a natural language interface with neural programmer. CoRR, abs/1611.08945. Mohammad Norouzi, Samy Bengio, zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances in Neural Information Processing Systems 29, pages 1723–1731. Curran Associates, Inc. Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. 2016. Policy distillation. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Yee Teh, Victor Bapst, Wojciech M. Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. 2017. Distral: Robust multitask reinforcement learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4496– 4506. Curran Associates, Inc. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332–1342. Association for Computational Linguistics. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. W. A. Woods. 1973. Progress in natural language understanding: An application to lunar geology. In Proceedings of the June 4-8, 1973, National Computer Conference and Exposition, AFIPS ’73, pages 441–450, New York, NY, USA. ACM. Ruichi Yu, Ang Li, Vlad I. Morariu, and Larry S. Davis. 2017. Visual relationship detection with internal and external linguistic knowledge distillation. 2017 IEEE International Conference on Computer Vision (ICCV), pages 1068–1076. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI’05, pages 658–666, Arlington, Virginia, United States. AUAI Press.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4811–4817 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4811 Every child should have parents: a taxonomy refinement algorithm based on hyperbolic term embeddings Rami Aly1, Shantanu Acharya2, Alexander Ossa1, Arne K¨ohn4,1, Chris Biemann1, and Alexander Panchenko3,1 1Universit¨at Hamburg, Hamburg, Germany 2National Institute of Technology Mizoram, Aizawl, India 3Skolkovo Institute of Science and Technology, Moscow, Russia 4Saarland University, Saarbr¨ucken, Germany {5aly,2ossa,koehn,biemann,panchenko}@informatik.uni-hamburg.de Abstract We introduce the use of Poincar´e embeddings to improve existing state-of-the-art approaches to domain-specific taxonomy induction from text as a signal for both relocating wrong hyponym terms within a (pre-induced) taxonomy as well as for attaching disconnected terms in a taxonomy. This method substantially improves previous state-of-the-art results on the SemEval-2016 Task 13 on taxonomy extraction. We demonstrate the superiority of Poincar´e embeddings over distributional semantic representations, supporting the hypothesis that they can better capture hierarchical lexical-semantic relationships than embeddings in the Euclidean space. 1 Introduction The task of taxonomy induction aims at creating a semantic hierarchy of entities by using hyponym-hypernym relations – called taxonomy – from text corpora. Compared to many other domains of natural language processing that make use of pre-trained dense representations, state-ofthe-art taxonomy learning is still highly relying on traditional approaches like extraction of lexicalsyntactic patterns (Hearst, 1992) or co-occurrence information (Grefenstette, 2015). Despite the success of pattern-based approaches, most taxonomy induction systems suffer from a significant number of disconnected terms, since the extracted relationships are too specific to cover most words (Wang et al., 2017; Bordea et al., 2016). The use of distributional semantics for hypernym identification and relation representation has thus received increasing attention (Shwartz et al., 2016). However, Levy et al. (2015) observe that many proposed supervised approaches instead learn prototypical hypernyms (that are hypernyms to many other terms), not taking into account the relation between both terms in classification. Therefore, past applications of distributional semantics appear to be rather unsuitable to be directly applied to taxonomy induction as the sole signal (Tan et al., 2015; Pocostales, 2016). We address that issue by introducing a series of simple and parameter-free refinement steps that employ word embeddings in order to improve existing domain-specific taxonomies, induced from text using traditional approaches in an unsupervised fashion. We compare two types of dense vector embeddings: the standard word2vec CBOW model (Mikolov et al., 2013a,b), that embeds terms in Euclidean space based on distributional similarity, and the more recent Poincar´e embeddings (Nickel and Kiela, 2017), which capture similarity as well as hierarchical relationships in a hyperbolic space. The source code has been published1 to recreate the employed embedding, to refine taxonomies as well as to enable further research of Poincar´e embeddings for other semantic tasks. 
2 Related Work The extraction of taxonomic relationships from text corpora is a long-standing problem in ontology learning, see Biemann (2005) for an earlier survey. Wang et al. (2017) discuss recent advancements in taxonomy construction from text corpora. Conclusions from the survey include: i) The performance of extraction of IS-A relation can be improved by studying how pattern-based and distributional approaches complement each other; ii) there is only limited success of pure deep learn1https://github.com/uhh-lt/Taxonomy_ Refinement_Embeddings 4812 ing paradigms here, mostly because it is difficult to design a single objective function for this task. On the two recent TExEval tasks at SemEval for taxonomy extraction (Bordea et al., 2015, 2016), attracting a total of 10 participating teams, attempts to primarily use a distributional representation failed. This might seem counterintuitive, as taxonomies are surely modeling semantics and thus their extraction should benefit from semantic representations. The 2015 winner INRIASAC (Grefenstette, 2015) performed relation discovery using substring inclusion, lexicalsyntactic patterns and co-occurrence information based on sentences and documents from Wikipedia. The winner in 2016, TAXI (Panchenko et al., 2016), harvests hypernyms with substring inclusion and Hearst-style lexical-syntactic patterns (Hearst, 1992) from domain-specific texts obtained via focused web crawling. The only submission to the TExEval 2016 task that relied exclusively on distributional semantics to induce hypernyms by adding a vector offset to the corresponding hyponym (Pocostales, 2016) achieved only modest results. A more refined approach to applying distributional semantics by Zhang et al. (2018) generates a hierarchical clustering of terms with each node consisting of several terms. They find concepts that should stay in the same cluster using embedding similarity – whereas, similar to the TExEval task, we are interested in making distinctions between all terms. Finally, Le et al. (2019) also explore using Poincar´e embeddings for taxonomy induction, evaluating their method on hypernymy detection and reconstructing WordNet. However, in contrast to our approach that filters and attaches terms, they perform inference. 3 Taxonomy Refinement using Hyperbolic Word Embeddings We employ embeddings using distributional semantics (i.e. word2vec CBOW) and Poincar´e embeddings (Nickel and Kiela, 2017) to alleviate the largest error classes in taxonomy extraction: the existence of orphans – disconnected nodes that have an overall connectivity degree of zero and outliers – a child node that is assigned to a wrong parent. The rare case in which multiple parents can be assigned to a node has been ignored in the proposed refinement system. The first step consists of creating domain-specific Poincar´e embeddings (§ 3.1). They are then used to identify and relocate outlier terms in the taxonomy (§ 3.2), as well as to attach unconnected terms to the taxonomy (§ 3.3). In the last step, we further optimize the taxonomy by employing the endocentric nature of hyponyms (§ 3.4). See Figure 1 for a schematic visualization of the refinement pipeline. In our experiments, we use the output of three different systems. The refinement method is generically applicable to (noisy) taxonomies, yielding an improved taxonomy extraction system overall. 
3.1 Domain-specific Poincar´e Embedding Training Dataset Construction To create domain-specific Poincar´e embeddings, we use noisy hypernym relationships extracted from a combination of general and domain-specific corpora. For the general domain, we extracted 59.2 GB of text from English Wikipedia, Gigaword (Parker et al., 2009), ukWac (Ferraresi et al., 2008) and LCC news corpora (Goldhahn et al., 2012). The domain-specific corpora consist of web pages, selected by using a combination of BootCat (Baroni and Bernardini, 2004) and focused crawling (Remus and Biemann, 2016). Noisy IS-A relations are extracted with lexicalsyntactic patterns from all corpora by applying PattaMaika2, PatternSim (Panchenko et al., 2012), and WebISA (Seitner et al., 2016), following (Panchenko et al., 2016).3 The extracted noisy relationships of the common and domain-specific corpora are further processed separately and combined afterward. To limit the number of terms and relationships, we restrict the IS-A relationships on pairs for which both entities are part of the taxonomy’s vocabulary. Relations with a frequency of less than three are removed to filter noise. Besides further removing every reflexive relationship, only the more frequent pair of a symmetric relationship is kept. Hence, the set of cleaned relationships is transformed into being antisymmetric and irreflexive. The same procedure is applied to relationships extracted from the general-domain corpus with a frequency cut-off of five. They are then used to expand the set of relationships created from the domain-specific corpora. 2http://jobimtext.org: The PattaMaika component is based on UIMA RUTA (Kluegl et al., 2016). 3Alternatively to the relations extracted using lexical patterns, we also tried to use hypernyms extracted using the pretrained HypeNet model (Shwartz et al., 2016), but the overall taxonomy evaluation results were lower than the standard baseline of the TAXI system and thus are not presented here. 4813 §3.1 Domain-specific Poincaré Embeddings Noisy is-a relations Taxonomy vocabulary Identified outlier Improved is-a relations New is-a pairs Parent-Child pairs and Orphans New is-a pairs Remaining is-a relations Taxonomy Crawled Corpora Poincaré Embeddings Cleaning and traning Keep relations  above mean rank Remove relationships Search for matching substrings in taxonomy Taxonomy is-a relations Reconnect components to  identified parents Calculate most similar term to disconnected components Identified outlier Calculate most similar terms Keep relations less equal to mean rank Connect orphans to taxonomy §3.2 Relocation of Outlier Terms §3.3 Attachment of Orphan Terms §3.4 Attachment of Compound Terms  Taxonomy is-a relations Improved Taxonomy Potential parents ParentChild pairs Distributional representation Figure 1: Outline of our taxonomy refinement method, with paper sections indicated. Hypernym-Hyponym Distance Poincar´e embeddings are trained on these cleaned IS-A relationships. For comparison, we also trained a model on noun pairs extracted from WordNet (PWN). Pairs were only kept if both nouns were present in the vocabulary of the taxonomy. Finally, we trained the word2vec embeddings, connecting compound terms in the training corpus (Wikipedia) by ’ ’ to learn representations for compound terms, i.e multiword units, for the input vocabulary. 
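The cleaning of the extracted IS-A relations amounts to a handful of filters. The following is a minimal sketch, assuming the extractions are available as (hyponym, hypernym, frequency) triples; the data format and function name are illustrative assumptions, not the authors' released code.

```python
def clean_isa_relations(relations, vocabulary, min_freq=3):
    """Filter noisy (hyponym, hypernym, frequency) triples: keep only pairs whose
    terms are in the taxonomy vocabulary, drop low-frequency and reflexive pairs,
    and keep only the more frequent direction of symmetric pairs, so the result
    is antisymmetric and irreflexive (min_freq=3 for the domain-specific corpora,
    5 for the general-domain corpus)."""
    freq = {}
    for hypo, hyper, f in relations:
        if hypo in vocabulary and hyper in vocabulary and f >= min_freq and hypo != hyper:
            freq[(hypo, hyper)] = freq.get((hypo, hyper), 0) + f

    cleaned = set()
    for (hypo, hyper), f in freq.items():
        reverse = freq.get((hyper, hypo), 0)
        # symmetric pair: keep only the more frequent direction (ties broken arbitrarily)
        if reverse > f or (reverse == f and (hyper, hypo) < (hypo, hyper)):
            continue
        cleaned.add((hypo, hyper))
    return cleaned
```

The cleaned general-domain relations are then merged into the domain-specific set before training the Poincaré model.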
In contrast to embeddings in the Euclidean space where the cosine similarity u·v |u||v| is commonly applied as a similarity measure, Poincar´e embeddings use a hyperbolic space, specifically the Poincar´e ball model (Stillwell, 1996). Hyperbolic embeddings are designed for modeling hierarchical relationships between words as they explicitly capture the hierarchy between words in the embedding space and are therefore a natural fit for inducing taxonomies. They were also successfully applied to hierarchical relations in image classification tasks (Khrulkov et al., 2019). The distance between two points u, v ∈Bd for a d-dimensional Poincar´e Ball model is defined as: d(u, v) = arcosh  1 + 2 ||u −v||2 (1 −||u||2)(1 −||v||2)  . This Poincar´e distance enables us to capture the hierarchy and similarity between words simultaneously. It increases exponentially with the depth of the hierarchy. So while the distance of a leaf node to most other nodes in the hierarchy is very high, nodes on abstract levels, such as the root, have a comparably small distance to all nodes in the hierarchy. The word2vec embeddings have no notion of hierarchy and hierarchical relationships cannot be represented with vector offsets across the vocabulary (Fu et al., 2014). When applying word2vec, we use the observation that distributionally similar words are often co-hyponyms (Heylen et al., 2008; Weeds et al., 2014). 3.2 Relocation of Outlier Terms Poincar´e embeddings are used to compute and store a rank rank(x, y) between every child and parent of the existing taxonomy, defined as the index of y in the list of sorted Poincar´e distances of all entities of the taxonomy to x. Hypernymhyponym relationships with a rank larger than the mean of all ranks are removed, chosen on the basis of tests on the 2015 TExEval data (Bordea et al., 2015). Disconnected components that have children are re-connected to the most similar parent in the taxonomy or to the taxonomy root if no distributed representation exists. Previously or now disconnected isolated nodes are subject to orphan attachment (§ 3.3). Since distributional similarity does not capture parent-child relations, the relationships are not registered as parent-child but as co-hyponym relationships. Thus, we compute the distance to the closest co-hyponym (child of the same parent) for every node. This filtering technique is then applied to identify and relocate outliers. 3.3 Attachment of Orphan Terms We then attach orphans (nodes unattached in the input or due to the removal of relationships in the previous step) by computing the rank between every orphan and the most similar node in the taxonomy. This node is an orphan’s potential parent. Only hypernym-hyponym relationships with a rank lower or equal to the mean of all stored ranks are added to the taxonomy. For the word2vec system, a link is added between the parent of the most 4814 similar co-hyponym and the orphan. 3.4 Attachment of Compound Terms In case a representation for a compound noun term does not exist, we connect it to a term that is a substring of the compound. If no such term exists, the noun remains disconnected. Finally, the Tarjan algorithm (Tarjan, 1972) is applied to ensure that the refined taxonomy is asymmetric: In case a circle is detected, one of its links is removed at random. 
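The distance above and the rank-based filtering of §3.2 can be written down directly. The NumPy sketch below assumes the embedding is available as a dictionary from terms to vectors; it is an illustration of the described procedure, not the system's actual implementation (which would precompute the ranks far more efficiently).

```python
import numpy as np

def poincare_distance(u, v):
    """d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / denom)

def rank(child, parent, emb):
    """Index of `parent` among all terms sorted by Poincare distance to `child`."""
    dists = {t: poincare_distance(emb[child], emb[t]) for t in emb if t != child}
    return sorted(dists, key=dists.get).index(parent) + 1

def relocate_outliers(edges, emb):
    """Sec. 3.2: drop hypernym-hyponym edges whose rank exceeds the mean rank.
    Edges whose terms have no representation are kept here and handled separately
    (re-connection to the most similar parent or to the taxonomy root)."""
    ranks = {(c, p): rank(c, p, emb)
             for c, p in edges if c in emb and p in emb}
    mean_rank = sum(ranks.values()) / max(len(ranks), 1)
    return [(c, p) for c, p in edges
            if (c, p) not in ranks or ranks[(c, p)] <= mean_rank]
```

Orphan attachment (§3.3) reuses the same rank computation: an orphan is linked to its most similar term only if that pair's rank is at most the mean of the stored ranks.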
Environment Science Food 15 20 25 30 35 40 45 26.9 36.7 27.9 22.1 31.1 21.2 27.7 38.2 29.3 26.7 40.4 34.2 30.9 41.4 34.1 Baseline (Original system) Attach orphans to the root Word2vec Poincaré Wordnet Poincaré Domain-specifc TAXI Environment Science Food 5 10 15 20 25 30 35 25.9 28.7 8.8 19.5 23.1 8 28.4 26.4 10.2 25.9 31 11.1 23.9 30.6 11.4 USAAR 20 Environment Science Food 15 16 17 18 19 15.6 17.1 16.3 15.6 17 15.8 16.5 18.4 16.2 15.6 17.6 16.4 16.2 17.8 17.6 JUNLP Figure 2: F1 results for the systems on all domains. Vocabulary sizes: environment (|V | = 261), science (|V | = 453), food (|V | = 1556). Bold numbers are significantly different results to the original system with p < 0.05. 4 Evaluation Proposed methods are evaluated on the data of SemEval2016 TExEval (Bordea et al., 2016) for submitted systems that created taxonomies for all domains of the task4, namely the task-winning system TAXI (Panchenko et al., 2016) as well as the systems USAAR (Tan et al., 2016) and JUNLP (Maitra and Das, 2016). TAXI harvests 4http://alt.qcri.org/semeval2016/ task13/index.php hypernyms with substring inclusion and lexicalsyntactic patterns by obtaining domain-specific texts via focused web crawling. USAAR and JUNLP heavily rely on rule-based approaches. While USAAR exploits the endocentric nature of hyponyms, JUNLP combines two string inclusion heuristics with semantic relations from BabelNet. We use the taxonomies created by these systems as our baseline and additionally ensured that taxonomies do neither have circles nor in-going edges to the taxonomy root by applying the Tarjan algorithm (Tarjan, 1972), removing a random link from detected cycles. This causes slight differences between the baseline results in Figure 2 and (Bordea et al., 2016). 5 Results and Discussion Comparison to Baselines Figure 2 shows comparative results for all datasets and measures for every system. The Root method, which connects all orphans to the root of the taxonomy, has the highest connectivity but falls behind in scores significantly. Word2vec CBOW embeddings partly increase the scores, however, the effect appears to be inconsistent. Word2vec embeddings connect more orphans to the taxonomy (cf. Table 2), albeit with mixed quality, thus the interpretation of word similarity as co-hyponymy does not seem to be appropriate. Word2vec as a means to detect hypernyms has shown to be rather unsuitable (Levy et al., 2015). Even more advanced methods such as the diff model (Fu et al., 2014) merely learn socalled prototypical hypernyms. Both Poincar´e embeddings variants outperform the word2vec ones yielding major improvements over the baseline taxonomy. Employing the McNemar (1947) significance test shows that Poincar´e embeddings’ improvements to the original systems are indeed significant. The achieved improvements are larger on the TAXI system than on the other two systems. We attribute to the differences of these approaches: The rule-based approaches relying on string inclusion as carried out by USAAR and JUNLP are highly similar to step §3.4. Additionally, JUNLP creates taxonomies with many but very noisy relationships, therefore step §3.3 does not yield significant gains, since there are much fewer orphans available to connect to the taxonomy. This problem also affects the USAAR system for the food domain. 
For the environment domain, however, USAAR creates a 4815 Word Parent in TAXI Parent after refinement Gold parent Closest neighbors second language acquisition — linguistics linguistics applied linguistics, semantics, linguistics botany — genetics plant science, ecology genetics, evolutionary ecology, animal science sweet potatoes — vegetables vegetables vegetables, side dishes, fruit wastewater water waste waste marine pollution, waste, pollutant water waste, natural resources natural resources aquatic environment continental shelf, management of resources international relations sociology, analysis, humanities humanities political science economics, economic theory, geography Table 1: Example words with respective parent(s) in the input taxonomy and after refinement using our domainspecfic Poincar´e embeddings, as well as the word’s closest three neighbors (incl. orphans) in embeddings. Domain word2vec P. WordNet P. domain-specific # orphans Environment 25 18 34 113 Science 56 39 48 158 Food 347 181 267 775 Table 2: Number of attached orphans in taxonomies created by TAXI using different embeddings. taxonomy with very high precision but low recall which makes step §3.2 relatively ineffective. As step §3.3 has shown to improve scores more than §3.2, the gains on JUNLP are comparably lower. WordNet-based Embeddings The domainspecific Poincar´e embeddings mostly perform either comparably or outperform the WordNetbased ones. In error analysis, we found that while WordNet-based embeddings are more accurate, they have a lower coverage as seen in Table 2, especially for attaching complex multiword orphan vocabulary entries that are not contained in WordNet, e.g., second language acquisition. Based on the results we achieved by using domain-specific Poincar´e embeddings, we hypothesize that their attributes result in a system that learns hierarchical relations between a pair of terms. The closest neighbors of terms in the embedding clearly tend to be more generic as exemplarily shown in Table 1, which further supports our claim. Their use also enables the correction of false relations created by string inclusion heuristics as seen with wastewater. However, we also notice that few and inaccurate relations for some words results in imprecise word representations such as for botany. Multilingual Results Applying domain-specific Poincar´e embeddings to other languages also creates overall improved taxonomies, however the scores vary as seen in Table 3. While the score of all food taxonomies increased substantially, the taxonomies quality for environment did not improve, it even declines. This seems to be due to the lack of extracted relations in (§3.1), which results in imprecise representations and a highly limited vocabulary in the Poincar´e embedding model, especially for Italian and Dutch. In these cases, the refinement is mostly defined by step §3.4. Language Domain Original Refined # rel. data # rel. gold English Environment 26.9 30.9 657 261 Science 36.7 41.4 451 465 Food 27.9 34.1 1898 1587 French Environment 23.7 28.3 114 266 Science 31.8 33.1 118 451 Food 22.4 28.9 598 1441 Italian Environment 31.0 30.8 2 266 Science 32.0 34.2 4 444 Food 16.9 18.5 57 1304 Dutch Environment 28.4 27.1 7 267 Science 29.8 30.5 15 449 Food 19.4 21.8 61 1446 Table 3: F1 comparison between original (TAXI) and refined taxonomy using domain-specific embeddings. 6 Conclusion We presented a refinement method for improving existing taxonomies through the use of hyperbolic Poincar´e embeddings. 
They consistently yield improvements over strong baselines and in comparison to word2vec as a representative for distributional vectors in the Euclidean space. We further showed that Poincar´e embeddings can be efficiently created for a specific domain from crawled text without the need for an existing database such as WordNet. This observation confirms the theoretical capability of Poincar´e embeddings to learn hierarchical relations, which enables their future use in a wide range of semantic tasks. A prominent direction for future work is using the hyperbolic embeddings as the sole signal for taxonomy extraction. Since distributional and hyperbolic embeddings cover different relations between terms, it may be interesting to combine them. Acknowledgments We acknowledge the support of DFG under the “JOIN-T” (BI 1544/4) and “ACQuA” (BI 1544/7) projects as well as the DAAD. We also thank three anonymous reviewers and Simone Paolo Ponzetto for providing useful feedback on this work. 4816 References Marco Baroni and Silvia Bernardini. 2004. Bootcat: Bootstrapping corpora and terms from the web. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04), pages 1313–1316, Lisbon, Portugal. Chris Biemann. 2005. Ontology learning from text: A survey of methods. LDV Forum, 20(2):75–93. Georgeta Bordea, Paul Buitelaar, Stefano Faralli, and Roberto Navigli. 2015. Semeval-2015 task 17: Taxonomy Extraction Evaluation (TExEval). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 902–910, Denver, CO, USA. Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016. SemEval-2016 Task 13: Taxonomy Extraction Evaluation (TExEval-2). In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1081–1091, San Diego, CA, USA. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop. Can we beat Google?, pages 47–54, Marrakech, Morocco. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1199–1209, Baltimore, MD, USA. Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig Corpora Collection: From 100 to 200 languages. In Proceedings of the Eight International Conference on Language Resources and Evaluation, pages 759–765, Istanbul, Turkey. Gregory Grefenstette. 2015. INRIASAC: Simple Hypernym Extraction Methods. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 911–914, Denver, CO, USA. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics, pages 539–545, Nantes, France. Kris Heylen, Yves Peirsman, Dirk Geeraerts, and Dirk Speelman. 2008. Modelling word similarity: an evaluation of automatic synonymy extraction algorithms. In Proceedings of the sixth international language resources and evaluation, pages 3243– 3249, Marrakech, Morocco. Valentin Khrulkov, Leyla Mirvakhabova, Evgeniya Ustinova, Ivan Oseledets, and Victor Lempitsky. 2019. Hyperbolic image embeddings. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 
Peter Kluegl, Martin Toepfer, Philip-Daniel Beck, Georg Fette, and Frank Puppe. 2016. UIMA Ruta: Rapid development of rule-based information extraction applications. Natural Language Engineering, 22(1):1–40. Matt Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, and Maximilian Nickel. 2019. Inferring concept hierarchies from text corpora via hyperbolic embeddings. arXiv preprint arXiv:1902.00913. Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 970–976, Denver, CO, USA. Promita Maitra and Dipankar Das. 2016. JUNLP at SemEval-2016 task 13: A language independent approach for hypernym identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1310–1314, San Diego, CA, USA. Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157. Tomas Mikolov, Kai Chen, G.S Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In ICLR Workshop, Scottsdale, AZ, USA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, Stateline, NV, USA. Maximillian Nickel and Douwe Kiela. 2017. Poincar´e embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems 30, pages 6338–6347, Long Beach, CA, USA. Alexander Panchenko, Stefano Faralli, Eugen Ruppert, Steffen Remus, Hubert Naets, C´edrick Fairon, Simone P. Ponzetto, and Chris Biemann. 2016. TAXI at SemEval-2016 Task 13: a taxonomy Induction Method based on Lexico-syntactic Patterns, Substrings and Focused Crawling. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1320–1327, San Diego, CA, USA. Alexander Panchenko, Olga Morozova, and Hubert Naets. 2012. A semantic similarity measure based on lexico-syntactic patterns. In Proceedings of KONVENS 2012, pages 174–178, Vienna, Austria. 4817 Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English gigaword forth edition. In Linguistic Data Consortium, Philadelphia, PA, USA. Joel Pocostales. 2016. NUIG-UNLP at SemEval-2016 Task 13: A Simple Word Embedding-based Approach for Taxonomy Extraction. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1298–1302, San Diego, CA, USA. Steffen Remus and Chris Biemann. 2016. Domainspecific corpus expansion with focused webcrawling. In Proceedings of the 10th International Conference on Language Resources and Evaluation, pages 3607–3611, Portoroˇz, Slovenia. Julian Seitner, Christian Bizer, Kai Eckert, Stefano Faralli, Robert Meusel und Heiko Paulheim, and Simone P. Ponzetto. 2016. A large database of hypernymy relations extracted from the Web. In Proceedings of the 10th International Conference on Language Resources and Evaluation, pages 360–367, Portoroˇz, Slovenia. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2389–2398, Berlin, Germany. 
John Stillwell. 1996. Sources of hyperbolic geometry. History of Mathematics, Volume 10. American Mathematical Society. Liling Tan, Francis Bond, and Josef van Genabith. 2016. USAAR at SemEval-2016 task 13: Hyponym endocentricity. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval2016), pages 1303–1309, San Diego, CA, USA. Liling Tan, Rohit Gupta, and Josef van Genabith. 2015. USAAR-WLV: Hypernym generation with Deep Neural Nets. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 932–937, Denver, CO, USA. Robert Tarjan. 1972. Depth first search and linear graph algorithms. SIAM Journal on Computing, 1(2):146 –160. Chengyu Wang, Xiaofeng He, and Aoying Zhou. 2017. A short survey on taxonomy learning from text corpora: Issues, resources and recent advances. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1190– 1203, Copenhagen, Denmark. Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics, pages 2249– 2259, Dublin, Ireland. Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian Sadler, Michelle Vanni, and Jiawei Han. 2018. TaxoGen: Unsupervised topic taxonomy construction by adaptive term embedding and clustering. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2701–2709, London, United Kingdom.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4818–4823 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4818 Learning to Rank for Plausible Plausibility Zhongyang Li†‡∗ Tongfei Chen‡ Benjamin Van Durme‡ † Harbin Institute of Technology ‡ Johns Hopkins University [email protected], {tongfei,vandurme}@cs.jhu.edu Abstract Researchers illustrate improvements in contextual encoding strategies via resultant performance on a battery of shared Natural Language Understanding (NLU) tasks. Many of these tasks are of a categorical prediction variety: given a conditioning context (e.g., an NLI premise), provide a label based on an associated prompt (e.g., an NLI hypothesis). The categorical nature of these tasks has led to common use of a cross entropy log-loss objective during training. We suggest this loss is intuitively wrong when applied to plausibility tasks, where the prompt by design is neither categorically entailed nor contradictory given the context. Log-loss naturally drives models to assign scores near 0.0 or 1.0, in contrast to our proposed use of a margin-based loss. Following a discussion of our intuition, we describe a confirmation study based on an extreme, synthetically curated task derived from MultiNLI. We find that a margin-based loss leads to a more plausible model of plausibility. Finally, we illustrate improvements on the Choice Of Plausible Alternative (COPA) task through this change in loss. 1 Introduction Contextualized encoders such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) have led to improvements on various structurally similar Natural Language Understanding (NLU) tasks such as variants of Natural Language Inference (NLI). Such tasks model the conditional interpretation of a sentence (e.g., an NLI hypothesis) based on some other context (usually some other sentence, e.g., an NLI premise). The structural similarity of these tasks points to a structurally similar modeling approach: (1) concatenate the conditioning context (premise) to a sentence to be interpreted, (2) ∗This work was done while the first author was visiting Johns Hopkins University. p I just stopped where I was hE I stopped in my tracks ✓ hN I stopped running right were I was hN I stopped running right were I was ✓ hC I continued on my way Figure 1: COPA-like pairs may be constructed from datasets such as MultiNLI, where a premise and two hypotheses are presented, where the correct – most plausible – item depends on the competing hypothesis. Score Distribution Cross entropy log-loss CON NEU ENT Score Distribution Margin-loss CON NEU ENT Figure 2: Dev set score distribution on COPA-pairs derived from MNLI, after training with cross entropy logloss and margin-loss. Margin-loss leads to a more intuitively plausible encoding of Neutral statements. read this pair using a contextualized encoder, then (3) employ the resultant representation to support classification under the label set of the task. NLI datasets employ a categorical label scheme (Entailment, Neutral, Contradiction) which has led to the use of a cross-entropy log-loss objective at training time: learn to maximize the probability of the correct label, and thereby minimize the probability of the competing labels. We suggest that this approach is intuitively problematic when applied to a task such as COPA (Choice Of Plausible Alternative) by Roemmele et al. 
(2011), where one is provided with a premise and two or more alternatives, and the model must select the most sensible hypothesis, with respect to the premise and the other options. As compared 4819 to NLI datasets, COPA was designed to have alternatives that are neither strictly true nor false in context: a procedure that maximizes the probability of the correct item at training time, thereby minimizing the probability of the other alternative(s), will seemingly learn to misread future examples. We argue that COPA-style tasks should intuitively be approached as learning to rank problems (Burges et al., 2005; Cao et al., 2007), where an encoder on competing items is trained to assign relatively higher or lower scores to candidates, rather than maximizing or minimizing probabilities. In the following we investigate three datasets, beginning with a constructed COPA-style variant of MultiNLI (Williams et al., 2018, later MNLI), designed to be adversarial (see Figure 1). Results on this dataset support our intuition (see Figure 2). We then construct a second synthetic dataset based on JOCI (Zhang et al., 2017), which employed a finer label set than NLI, and a margin-based approach strictly outperforms log-loss in this case. Finally, we demonstrate state-of-the-art on COPA, showing that a BERT-based model trained with margin-loss significantly outperforms a log-loss alternative. 2 Background A series of efforts have considered COPA: by causality estimation through pointwise mutual information (Gordon et al., 2011) or data-driven methods (Luo et al., 2016; Sasaki et al., 2017), or through a pre-trained language model (Radford et al., 2018, GPT).1 Under the Johns Hopkins Ordinal Commonsense Inference (JOCI) dataset (Zhang et al., 2017), instead of selecting which hypothesis is the most plausible, a model is expected to directly assign ordinal 5-level Likert scale judgments (from impossible to very likely). If taking an ordinal interpretation of NLI, this can be viewed as a 5-way variant of the 3-way labels used in SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). In this paper, we recast MNLI and JOCI as COPA-style plausibility tasks by sampling and constructing (p, h, h′) triples from these two datasets. Each premise-hypothesis pair (p, h) is labeled with different levels of plausibility yp,h.2 1 As reported in https://blog.openai.com/ language-unsupervised/. 2 For MNLI, entailment > neutral > contradiction; for JOCI, very likely > likely > plausible > technically possible > impossible. 3 Models In models based on GPT and BERT for plausibility or NLI, similar neural architectures have been employed. The premise p and hypothesis h are concatenated into a sequence with a special delimiter token, along with a special sentinel token CLS inserted as the token for feature extraction: BERT : [CLS ; p ; SEP ; h ; SEP] GPT : [BOS ; p ; EOS ; h ; CLS] The concatenated string is passed into the BERT or GPT encoder. One takes the encoded vector of the CLS state as the feature vector extracted from the (p, h) pair. Given the feature vector, a dense layer is stacked upon to get the final score F(p, h), where F : P × H →R is the model. Cross entropy loss The model is trained to maximize the probability of the correct candidate, normalized over all candidates in the set (leading to a cross entropy log-loss between the posterior distribution of the scores and the true labels): P(hi | p) = exp F(p, hi) N X j=1 exp F(p, hj) . 
(1) Margin-based loss As we have argued before, the cross entropy loss employed in Equation 1 is problematic. Instead we propose to use the following margin-based triplet loss (Weston and Watkins, 1999; Chechik et al., 2010; Li et al., 2018): L = 1 N X h>h′ max{0, ξ −F(p, h) + F(p, h′)} , (2) where N is the number of pairs of hypotheses where the first is more plausible than the second under the given premise p; h > h′ means that h ranks higher than (i.e., is more plausible than) h′ under premise p; and ξ is a margin hyperparameter denoting the desired scores difference between these two hypotheses. 4 Recasting Datasets We consider three datasets: MNLI, JOCI, and COPA. These are all cast as plausibility datasets, into a format comprising (p, h, h′) triples, where h is more plausible than h′ under the context of premise p. 4820 Dataset Train Eval MNLI1 410k dev: 8.2k MNLI2 142k dev: 130k JOCI1 8.7k dev: 3.0k JOCI2 2.3k dev: 1.9k COPA 500 test: 500 Table 1: Statistics of various plausibility datasets. All numbers are numbers of (p, h, h′) triplets. MNLI In MNLI, each premise p is paired with 3 hypotheses. We cast the label on each hypothesis as a relative plausibility judgment, where entailment > neutral > contradiction (we label them as 2, 1, and 0). We construct two 2-choice plausibility tasks from MNLI: MNLI1 = {(p, h, h′) | yp,h > yp,h′} MNLI2 = {(p, h, h′) | (yp,h, yp,h′) ∈{(2, 1), (1, 0)}} MNLI1 comprises all pairs labeled with 2/1, 2/0, or 1/0; whereas MNLI2 removes the presumably easier 2/0 pairs. For MNLI1, the training set is constructed from the original MNLI training dataset, and the dev set for MNLI1 is derived from the original MNLI matched dev dataset. For MNLI2, all of the examples in our training and dev sets is taken from the original MNLI training dataset, hence the same premise exists in both training and dev. This is by our adversarial design: each neutral hypothesis appears either as the preferred (beating contradiction), or dispreferred alternative (beaten by entailment), which is flipped at evaluation time. JOCI In JOCI, every inference pair is labeled with their ordinal inference Likert-scale labels 5, 4, 3, 2, or 1. Similar to MNLI, we cast these to 2-choice problems under the following conditions: JOCI1 = {(p, h, h′) | yp,h > yp,h′ ≥3} JOCI2 = {(p, h, h′) | (yp,h, yp,h′) ∈{(5, 4), (4, 3)}} We ignore inference pairs with scores below 3, aiming for sets akin to COPA, where even the dispreferred option is still often semi-plausible. COPA We label alternatives as 1 (the more plausible one) and 0 (otherwise). The original dev set in COPA is used as the training set. Table 1 shows the statistics of these datasets. 5 Experiments and Analyses Setup We fine-tune the BERT-BASE-UNCASED (Devlin et al., 2019) using our proposed marginDataset Log loss Margin loss MNLI1 93.6 93.4 MNLI2 87.9 87.9 JOCI1 86.6 86.9 JOCI2 76.6 78.0 Table 2: Results on recast MNLI and JOCI. based loss, and perform hyperparameter search on the margin parameter ξ. For the recast MNLI and JOCI datasets, the margin hyperparameter ξ = 0.2. Since COPA does not have a training set, we use the original dev set as the training set, and perform 10-fold cross validation to find the best hyperparameter ξ = 0.37. We employ the Adam optimizer (Kingma and Ba, 2014) with initial learning rate η = 3 × 10−5, finetune for at most 3 epochs and use early-stopping to select the best model. Results on Recast MNLI and JOCI Table 2 shows results on the recast MNLI and JOCI datasets. 
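Concretely, recasting labeled NLI pairs into such triples and training with either objective takes only a few lines. The PyTorch sketch below leaves the scoring model F(p, h) abstract and uses the MNLI label-to-plausibility mapping as an example; it is illustrative rather than the authors' released code, and the default margin ξ = 0.2 is the value reported below for the recast datasets.

```python
from itertools import combinations
import torch
import torch.nn.functional as F

PLAUSIBILITY = {"entailment": 2, "neutral": 1, "contradiction": 0}

def make_triples(examples):
    """Turn labeled (premise, hypothesis, label) pairs into (p, h, h') triples
    in which h is more plausible than h' under the same premise p."""
    by_premise = {}
    for premise, hypothesis, label in examples:
        by_premise.setdefault(premise, []).append((hypothesis, PLAUSIBILITY[label]))
    triples = []
    for premise, hyps in by_premise.items():
        for (h1, y1), (h2, y2) in combinations(hyps, 2):
            if y1 != y2:
                more, less = (h1, h2) if y1 > y2 else (h2, h1)
                triples.append((premise, more, less))
    return triples

def margin_loss(score_h, score_h_prime, xi=0.2):
    """Eq. (2): mean over triples of max(0, xi - F(p, h) + F(p, h'))."""
    return torch.clamp(xi - score_h + score_h_prime, min=0).mean()

def log_loss(score_h, score_h_prime):
    """Eq. (1) for the two-choice case: cross entropy over the two candidate
    scores, with the more plausible hypothesis as the gold answer."""
    logits = torch.stack([score_h, score_h_prime], dim=1)
    gold = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, gold)
```

Both losses consume the same BERT-derived scores F(p, h); only the training objective differs, which is exactly the contrast examined in the experiments.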
We find that for the two synthetic MNLI datasets, margin-loss performs similarly to cross entropy log-loss. Shifting to the JOCI datasets, with less extreme (contradiction / entailed) hypotheses, especially in the adversarial JOCI2 variant, marginloss outperforms log-loss. Though log-loss and margin-loss give close quantitative results on predicting the more plausible (p, h) pairs, they do so in different ways, confirming our intuition. From Figure 3 we find that the log-loss always predicts the more plausible (p, h) pair with very high probabilities close to 1, and predicts the less plausible (p, h) pair with very low probabilities close to 0. Figure 3, showing a perpremise normalized score distribution from marginloss, is more reasonable and explainable: hypotheses with different plausibility are distributed hierarchically between 0 and 1. Method Acc (%) PMI (Jabeen et al., 2014) 58.8 PMI EX (Gordon et al., 2011) 65.4 CS (Luo et al., 2016) 70.2 CS MWP (Sasaki et al., 2017) 71.2 BERTlog (ours) 73.4 BERTmargin (ours) 75.4 Table 3: Experimental results on COPA test set. 4821 Score Distribution Train score distribution under log-loss contradiction neutral entailment Score Distribution Dev score distribution under log-loss contradiction neutral entailment Score Distribution Train score distribution under margin-loss contradiction neutral entailment Score Distribution Dev score distribution under margin-loss contradiction neutral entailment (a) MNLI1 Score Distribution Train score distribution under log-loss plausible likely very likely Score Distribution Dev score distribution under log-loss plausible likely very likely Score Distribution Train score distribution under margin-loss plausible likely very likely Score Distribution Dev score distribution under margin-loss plausible likely very likely (b) JOCI1 Figure 3: Train and dev score distribution after training with a cross entropy log-loss and a margin-loss. Dataset Premise Hypotheses Gold Log Margin MNLI1 (1) I just stopped where I was. (a) I stopped in my tracks 2 0.919 0.568 (b) I stopped running right where I was. 1 0.0807 0.358 (c) I continued on my way. 0 1.71×10−8 0.0739 MNLI1 (2) An organization’s activities, core processes and resources must be aligned to support its mission and help it achieve its goals. (a) An organization is successful if its activities, resources and goals align. 2 0.505 0.555 (b) Achieving organizational goals reflects a change in core processes. 1 0.495 0.257 (c) A company’s mission can be realized even without the alignment of resources. 0 3.48×10−5 0.187 JOCI1 (3) A few people and cars out on their daily commute on a rainy day. (a) The commute is a journey. 5 0.994 0.473 (b) The commute is bad. 4 5.79×10−3 0.230 (c) The commute becomes difficult. 3 1.28×10−3 0.157 JOCI1 (4) Cheerleaders in red uniforms perform a lift stunt. (a) The stunt is a feat. 5 0.508 0.304 (b) The stunt is no fluke. 4 0.486 0.279 (c) The stunt is dangerous. 3 2.72×10−4 0.166 (d) The stunt is remarkable. 3 4.13×10−3 0.153 (e) The stunt backfires. 3 2.36×10−4 0.107 COPA (5) She jumped off the diving board. (a) The girl landed in the pool. 1 0.972 0.520 (5′) She ran on the pool deck. 0 0.028 0.480 COPA (6) The student knew the answer to the question. (a) He raised his hand. 1 0.982 0.738 (b) He goofed off. 0 0.018 0.262 Table 4: Examples of premises and their corresponding hypotheses in various plausibility datasets, with gold labels and scores given by the log-loss and margin-loss trained models. 
Results on COPA Table 3 shows our results on COPA. Compared with previous state-of-theart knowledge-driven baseline methods, a BERT model trained with a log-loss achieves better performance. When training the BERT model with a margin-loss instead of a log-loss, our method gets the new state-of-the-art result on the established COPA splits, with an accuracy of 75.4%.3 Analyses Table 4 shows some examples from the MNLI1, JOCI1 and COPA datasets, with scores 3 We exclude a blog-posted GPT result, which comes without experimental conditions and is not reproducible. normalized with respect to all hypotheses given a specific premise. For the premise (1) from MNLI1, log-loss results in a very high score (0.919) for the entailment hypothesis (1a), while assigning a low score (0.0807) for the neutral hypothesis (1b), and an extremely low score (1.71×10−8) for the contradiction hypothesis (1c). Though the log-loss can achieve high accuracy by making these extreme prediction scores, we argue these scores are unintuitive. For the premise (2) from MNLI1, log-loss again gives a very high score (0.505) for the hypothesis (2a). 4822 But it also gives a high score (0.495) for the neutral hypothesis (2b). The contradiction hypothesis (2c) still gets an extremely low score (3.48×10−5). These are the two ways for the log-loss approach to make predictions with high accuracy: always giving very high score for the entailment hypothesis and low score for the contradiction hypothesis, but giving either very high or very low score for the neutral hypothesis. In contrast, the margin-loss gives more intuitive scores for these two examples. Also, we get similar observations from the JOCI1 examples (3) and (4). Example (5) from COPA is asking for a more plausible cause premise for the effect hypothesis. Here, each of the two candidate premises (5) and (5′) is a possible answer. The log-loss gives very high (0.972) and very low (0.028) scores for the two candidate premises, which is unreasonable. Whereas the margin-loss gives much more rational ranking scores for them (0.52 and 0.48). For example (6), which is asking for a more likely effect hypothesis for the cause premise, margin-loss still gets more reasonable prediction scores than the log-loss. Our qualitative analysis is related to the concept of calibration in statistics: are these resulting scores close to their class membership probabilities? Our intuitive qualitative results might be thought as a type of calibration for the plausibility task (more “reliable” scores) instead of the more common multi-class classification (Zadrozny and Elkan, 2002; Hastie and Tibshirani, 1998; Niculescu-Mizil and Caruana, 2005). 6 Conclusion In this paper, we propose that margin-loss in contrast to log-loss is a more plausible training objective for COPA-style plausibility tasks. Through adversarial construction we illustrated that a logloss approach can be driven to encode plausible statements (Neutral hypotheses in NLI) as either extremely likely or unlikely, which was highlighted in contrasting figures of per-premise normalized hypothesis scores. This intuition was shown to lead to a new state-of-the-art in the original COPA task, based on a margin-based loss. Acknowledgements This work was partially sponsored by the China Scholarship Council. It was also supported in part by DARPA AIDA. The authors thank the reviewers for their helpful comments. References Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. 
A large annotated corpus for learning natural language inference. In Proc. EMNLP, pages 632–642. Christopher Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N Hullender. 2005. Learning to rank using gradient descent. In Proc. ICML, pages 89–96. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In Proc. ICML, pages 129–136. ACM. Gal Chechik, Varun Sharma, Uri Shalit, and Samy Bengio. 2010. Large scale online learning of image similarity through ranking. J. Mach. Learn. Res., 11(3):1109–1135. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL. Andrew S Gordon, Cosmin A Bejan, and Kenji Sagae. 2011. Commonsense causal reasoning using millions of personal stories. In Proc. AAAI. Trevor Hastie and Robert Tibshirani. 1998. Classification by pairwise coupling. In Proc. NeurIPS, pages 507–513. Shahida Jabeen, Xiaoying Gao, and Peter Andreae. 2014. Using asymmetric associations for commonsense causality detection. In Pacific Rim International Conference on Artificial Intelligence, pages 877–883. Springer. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Constructing narrative event evolutionary graph for script event prediction. In Proc. IJCAI, pages 4201– 4207. AAAI Press. Zhiyi Luo, Yuchen Sha, Kenny Q Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Commonsense causal reasoning between short texts. In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning. Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Proc. ICML, pages 625–632. ACM. 4823 Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series. Shota Sasaki, Sho Takase, Naoya Inoue, Naoaki Okazaki, and Kentaro Inui. 2017. Handling multiword expressions in causality estimation. In IWCS 12th International Conference on Computational SemanticsShort papers. Jason Weston and Chris Watkins. 1999. Support vector machines for multi-class pattern recognition. In ESANN 1999, 7th European Symposium on Artificial Neural Networks, Bruges, Belgium, April 21-23, 1999, Proceedings, pages 219–224. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. NAACL, pages 1112–1122. Association for Computational Linguistics. Bianca Zadrozny and Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In Proc. KDD, pages 694–699. ACM. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Trans. ACL, 5(1):379–395.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4824–4830 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4824 Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment Goran Glavaˇs University of Mannheim Data and Web Science Group B6, 26, DE-68161 Mannheim, Germany [email protected] Ivan Vuli´c PolyAI Ltd. 144A Clerkenwell Road London, United Kingdom [email protected] Abstract Lexical entailment (LE; also known as hyponymy-hypernymy or is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet – unlike existing retrofitting models – it captures a general specialization function allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN in graded LE and report large improvements (over 20% in accuracy) over state-ofthe-art in cross-lingual LE detection. 1 Background and Motivation Lexical entailment (LE; hyponymy-hypernymy or is-a relation), is a fundamental asymmetric lexicosemantic relation (Collins and Quillian, 1972; Beckwith et al., 1991) and a key building block of lexico-semantic networks and knowledge bases (Fellbaum, 1998; Navigli and Ponzetto, 2012). Reasoning about word-level entailment supports a multitude of tasks such as taxonomy induction (Snow et al., 2006; Navigli et al., 2011; Gupta et al., 2017), natural language inference (Dagan et al., 2013; Bowman et al., 2015; Williams et al., 2018), metaphor detection (Mohler et al., 2013), and text generation (Biran and McKeown, 2013). Due to their distributional nature (Harris, 1954), embedding models (Mikolov et al., 2013; Levy and Goldberg, 2014; Pennington et al., 2014; Melamud et al., 2016; Bojanowski et al., 2017; Peters et al., 2018, inter alia) conflate paradigmatic relations (e.g., synonymy, antonymy, LE, meronymy) and the broader topical (i.e., syntagmatic) relatedness (Schwartz et al., 2015; Mrkˇsi´c et al., 2017). Consequently, distributional vectors (i.e., embeddings) cannot be directly used to reliably detect LE. Embedding specialization methods remedy for the semantic vagueness of distributional spaces, forcing the vectors to conform to external linguistic constraints (e.g., synonymy or LE word pairs) in order to emphasize the lexico-semantic relation of interest (e.g., semantic similarity of LE) and diminish the contributions of other types of semantic association. Lexical specialization models generally belong to one of the two families: (1) joint optimization models and (2) retrofitting (also known as fine-tuning or post-processing) models. Joint models incorporate linguistic constraints directly into the objective of an embedding model, e.g., Skip-Gram (Mikolov et al., 2013), by modifying the prior or regularization of the objective (Yu and Dredze, 2014; Xu et al., 2014; Kiela et al., 2015) or by augmenting the objective with additional factors reflecting linguistic constraints (Ono et al., 2015; Osborne et al., 2016; Nguyen et al., 2017). 
Joint models are tightly coupled to a concrete embedding model – any modification to the underlying embedding models warrants a modification of the whole joint model, along with the expensive retraining. Conversely, retrofitting models (Faruqui et al., 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkˇsi´c et al., 2017; Vuli´c and Mrkˇsi´c, 2018, inter alia) change the distributional spaces post-hoc, by fine-tuning word vectors so that they conform to external linguistic constraints. Advantageously, this makes retrofitting models more flexible, as they can be applied to any pre-trained distributional space. On the downside, retrofitting models specialize only the vectors of words seen in constraints, leaving vectors of unseen words unchanged. In this work, we propose an LE-specialization framework that combines the strengths of both 4825 small big large huge distr. space EN ling. constraints small, big, ant huge, big, le big, large, syn ... + = LE specialization function f small huge big large f small big large huge = small huge big large distr. space EN petit gros grand énorme distr. space FR Cross-lingual projection g f g = petit énorme gros grand LE Retrofitting GLEN Specialization GLEN Spec. Cross-lingual Figure 1: High-level illustration of GLEN. Row #1: LE-retrofitting – specializes only vectors of constraint words (from language L1); Row #2: GLEN – learns the specialization function f using constraints (from L1) as supervision; Row #3: Cross-lingual GLEN: LE-tuning of vectors from language L2 – f applied to L2 vectors projected (function g) to the L1 embedding space. model families: unlike joint models, our generalized LE specialization (dubbed GLEN) is easily applicable to any embedding space. Yet, unlike the retrofitting models, it LE-specializes the entire distributional space and not just the vectors of words from external constraints. GLEN utilizes linguistic constraints as training examples in order to learn a general LE-specialization function (instantiated simply as a feed-forward neural net), which can then be applied to the entire distributional space. The difference between LE-retrofitting and GLEN is illustrated in Figure 1. Moreover, with GLEN’s ability to LE-specialize unseen words we can seamlessly LE-specialize word vectors of another language (L2), assuming we previously project them to the distributional space of L1 for which we had learned the specialization function. To this end, we can leverage any from the plethora of resource-lean methods for learning the cross-lingual projection (function g in Figure 1) between monolingual distributional vector spaces (Smith et al., 2017; Conneau et al., 2018; Artetxe et al., 2018, inter alia).1 Conceptually, GLEN is similar to the explicit retrofitting model of Glavaˇs and Vuli´c (2018), who focus on the symmetric semantic similarity relation. In contrast, GLEN has to account for the asymmetric nature of the LE relation. Besides joint (Nguyen et al., 2017) and retrofitting (Vuli´c and Mrkˇsi´c, 2018) models for LE, there is a number of supervised LE detection models that employ distributional vectors as input features (Tuan et al., 2016; Shwartz et al., 2016; Glavaˇs and Ponzetto, 1See (Ruder et al., 2018b; Glavaˇs et al., 2019) for a comprehensive overview of models for inducing cross-lingual word embedding spaces. 2017; Rei et al., 2018). These models, however, predict LE for pairs of words, but do not produce LE-specialized word vectors, which are directly pluggable into downstream models. 
2 Generalized Lexical Entailment Following LEAR (Vulić and Mrkšić, 2018), the state-of-the-art LE-retrofitting model, we use three types of linguistic constraints to learn the general specialization f: synonyms, antonyms, and LE (i.e., hyponym-hypernym) pairs. Similarity-focused specialization models tune only the direction of distributional vectors (Mrkšić et al., 2017; Glavaš and Vulić, 2018; Ponti et al., 2018). In LE specialization, we need to emphasize similarities but also reflect the hierarchy of concepts offered by LE relations (e.g., car should be similar to both Ferrari and vehicle but is a hyponym only of vehicle). GLEN learns a specialization function f that rescales vector norms in order to reflect the hierarchical LE relation. To this end, we use the following asymmetric distance between vectors, defined in terms of their Euclidean norms:

dN(x1, x2) = (∥x1∥ − ∥x2∥) / (∥x1∥ + ∥x2∥)    (1)

Simultaneously, GLEN aims to bring the vectors of synonyms and LE pairs closer together in direction and to push the vectors of antonyms further apart. We use the cosine distance dC as a symmetric measure of direction (dis)similarity between vectors. We combine the asymmetric distance dN and the symmetric dC in different objective functions that we optimize to learn the LE-specialization function f. Lexical Constraints as Training Instances. For each constraint type – synonyms, antonyms, and LE pairs – we create separate batches of training instances. Let {x1^E, x2^E}K, {x1^S, x2^S}K, and {x1^A, x2^A}K be the batches of K LE, synonymy, and antonymy pairs, respectively. For each constraint (x1, x2) we create a pair of negative vectors (y1, y2) such that y1 is the vector within the batch (except x2) closest to x1, and y2 the vector closest to x2 (but not x1), in terms of some distance or similarity metric. For LE constraints, we find y1 and y2 that minimize dN(x1, y1) + dC(x1, y1) and dN(y2, x2) + dC(x2, y2), respectively. Intuitively, we want our model to predict a smaller LE distance dN + dC for a positive LE pair (x1, x2) than for the negative pairs (x1, y1) and (x2, y2) in the specialized space. By choosing the most challenging negative pairs, i.e., y1 and y2 that are respectively closest to x1 and x2 in terms of LE distance in the distributional space, we force our model to learn a more robust LE specialization function (this is further elaborated in the description of the objective function). Analogously, for positive synonym pairs, y1 and y2 are the vectors closest to x1 and x2, respectively, but in terms of only the (symmetric) cosine distance dC. Finally, for antonyms, y1 is the vector maximizing dC(x1, y1) and y2 the vector that maximizes dC(x2, y2). In this case, we want the vectors of antonyms x1 and x2 after specialization to be further apart from one another (according to dC) than from, respectively, the vectors y1 and y2 that are most distant from them in the original distributional space. A training batch, with K entailment (E), synonymy (S), or antonymy (A) instances, is obtained by coupling constraints (x1, x2) with their negative vectors (y1, y2): {x1, x2, y1, y2}K. Specialization Function. The parametrized specialization function f(x; θ) : R^d → R^d (with d being the embedding size) transforms the distributional space into a space that better captures the LE relation. Once we learn the specialization function f (i.e., we tune the parameters θ), we can LE-specialize the entire distributional embedding space X (i.e., the vectors of all vocabulary words): X′ = f(X; θ).
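To make the two distances and the within-batch negative sampling described above concrete, the following is a minimal NumPy sketch, not the authors' released code. Function names are ours, and the candidate pool for negatives (all left- and right-hand vectors of the batch) as well as the exclusion of a pair's own vectors are assumptions where the text leaves these details implicit.

```python
import numpy as np

def d_n(x1, x2):
    """Asymmetric norm distance of Eq. (1): (||x1|| - ||x2||) / (||x1|| + ||x2||)."""
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    return (n1 - n2) / (n1 + n2)

def d_c(x1, x2):
    """Cosine distance: 1 - cos(x1, x2)."""
    return 1.0 - np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))

def hardest_le_negatives(X1, X2):
    """For each LE pair (X1[k], X2[k]) in a batch, pick the hardest in-batch negatives:
    y1 minimizes dN(x1, y) + dC(x1, y) and y2 minimizes dN(y, x2) + dC(x2, y)."""
    K = X1.shape[0]
    pool = np.concatenate([X1, X2], axis=0)          # (2K, d) candidate pool (assumption)
    Y1, Y2 = np.empty_like(X1), np.empty_like(X2)
    for k in range(K):
        x1, x2 = X1[k], X2[k]
        s1 = np.array([d_n(x1, y) + d_c(x1, y) for y in pool])
        s2 = np.array([d_n(y, x2) + d_c(x2, y) for y in pool])
        s1[[k, K + k]] = np.inf                      # never pick x1 or x2 themselves
        s2[[k, K + k]] = np.inf
        Y1[k] = pool[int(np.argmin(s1))]
        Y2[k] = pool[int(np.argmin(s2))]
    return Y1, Y2
```

For synonym batches the same selection would use dC alone, and for antonym batches the most distant (rather than the closest) vectors, as described above.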
For simplicity, we define f to be a (fully-connected) feed-forward net with H hidden layers of size dh and non-linear activation ψ. The i-th hidden layer (i ∈ {1, . . . , H}) is parametrized by the weight matrix Wi and the bias vector bi:

hi(x; θi) = ψ( h_{i−1}(x; θ_{i−1}) Wi + bi )    (2)

(The 0-th "hidden layer" is the input distributional vector: h0(x; θ0) = x with θ0 = ∅, following the notation of Eq. (2).)

Objectives and Training. We define four losses which we combine into training objectives for the different constraint types (E, S, and A). The asymmetric loss la forces the asymmetric margin-based distance dN to be larger for the negative pairs (x1, y1) and (y2, x2) than for the positive (true LE) pair (x1, x2) by at least the margin δa:

la = Σ_{k=1}^{K} [ τ( δa − dN(f(x1^k), f(y1^k)) + dN(f(x1^k), f(x2^k)) ) + τ( δa − dN(f(y2^k), f(x2^k)) + dN(f(x1^k), f(x2^k)) ) ]    (3)

where τ(x) = max(0, x) is the ramp function. The similarity loss ls pushes the vectors x1 and x2 to be direction-wise closer to each other than to the negative vectors y1 and y2, by the margin δs:

ls = Σ_{k=1}^{K} [ τ( δs − dC(f(x1^k), f(y1^k)) + dC(f(x1^k), f(x2^k)) ) + τ( δs − dC(f(x2^k), f(y2^k)) + dC(f(x1^k), f(x2^k)) ) ]    (4)

The dissimilarity loss ld pushes vectors x1 and x2 further away from each other than from the respective negative vectors y1 and y2, by the margin δd:

ld = Σ_{k=1}^{K} [ τ( δd − dC(f(x1^k), f(x2^k)) + dC(f(x1^k), f(y1^k)) ) + τ( δd − dC(f(x1^k), f(x2^k)) + dC(f(x2^k), f(y2^k)) ) ]    (5)

We also define the regularization loss lr, preventing f from destroying the useful semantic information contained in the distributional vectors:

lr = Σ_{k=1}^{K} [ dC(x1^k, f(x1^k)) + dC(x2^k, f(x2^k)) + dC(y1^k, f(y1^k)) + dC(y2^k, f(y2^k)) ]    (6)

Finally, we define different objectives for the different constraint types (E, S, and A):

JE = ls(E) + λa · la(E) + λr · lr(E);  JS = ls(S) + λr · lr(S);  JA = ld(A) + λr · lr(A)    (7)

where λa and λr scale the contributions of the asymmetric and regularization losses, respectively. JE pushes LE vectors to be similar in direction (loss ls) and different in norm (loss la) after specialization. JS forces vectors of synonyms to be closer together (loss ls) and JA vectors of antonyms to be further apart (loss ld) in direction after specialization, both without affecting vector norms. We tune the hyperparameters (δa, δs, δd, λa, and λr) via cross-validation, with train and validation portions containing randomly shuffled E, S, and A batches. Inference. We infer the strength of the LE relation between vectors x′1 = f(x1) and x′2 = f(x2) with an asymmetric LE distance combining dC and dN: ILE(x′1, x′2) = dC(x′1, x′2) + dN(x′1, x′2). True LE pairs should have a small dC and a negative dN. We thus rank LE candidate word pairs according to their ILE scores, from smallest to largest. For binary LE detection, ILE is binarized via a threshold t: if ILE < t, we predict that LE holds. Cross-Lingual (CL) LE Specialization. After learning the generalized LE-specialization function f, we can apply it to specialize any vector that comes from the same distributional vector space that we used in training. Let L1 be the language for which we have the linguistic constraints and let XL1 be its corresponding distributional space. Let XL2 be the distributional space of another language L2. Assuming a function g : R^{dL2} → R^{dL1} that projects vectors from XL2 to XL1, we can straightforwardly LE-specialize the distributional space of L2 by composing the functions f and g: X′L2 = f(g(XL2)).
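As a concrete illustration of the pieces defined in this section, here is a minimal PyTorch sketch of the specialization net f, the asymmetric loss la of Eq. (3), the inference score ILE, and the cross-lingual composition f(g(·)). This is a sketch under our own naming and layout assumptions (e.g., whether every layer of f is followed by the non-linearity is not pinned down by Eq. (2)), not the authors' implementation; the defaults follow the hyperparameter values reported in Section 3 (H = 5, dh = 300, ψ = tanh, δa = 1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpecializationNet(nn.Module):
    """f(x; θ): fully-connected feed-forward net with H hidden layers (cf. Eq. (2))."""
    def __init__(self, d=300, d_h=300, H=5, activation=torch.tanh):
        super().__init__()
        sizes = [d] + [d_h] * (H - 1) + [d]   # final layer maps back to R^d (assumption)
        self.layers = nn.ModuleList([nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:])])
        self.activation = activation

    def forward(self, x):
        for layer in self.layers:
            x = self.activation(layer(x))
        return x

def d_n(x1, x2):
    """Batched asymmetric norm distance of Eq. (1)."""
    n1, n2 = x1.norm(dim=-1), x2.norm(dim=-1)
    return (n1 - n2) / (n1 + n2)

def d_c(x1, x2):
    """Batched cosine distance."""
    return 1.0 - F.cosine_similarity(x1, x2, dim=-1)

def asymmetric_loss(f, x1, x2, y1, y2, delta_a=1.0):
    """la of Eq. (3): dN on negative pairs should exceed dN on the true pair by margin δa."""
    fx1, fx2, fy1, fy2 = f(x1), f(x2), f(y1), f(y2)
    pos = d_n(fx1, fx2)
    return (torch.relu(delta_a - d_n(fx1, fy1) + pos) +
            torch.relu(delta_a - d_n(fy2, fx2) + pos)).sum()

def le_score(f, x1, x2):
    """ILE = dC + dN in the specialized space; smaller means stronger LE."""
    fx1, fx2 = f(x1), f(x2)
    return d_c(fx1, fx2) + d_n(fx1, fx2)

def specialize_l2(f, X_l2, W_g):
    """Cross-lingual specialization X'_L2 = f(g(X_L2)), with a linear projection g(X) = X · Wg."""
    return f(X_l2 @ W_g)
```

The similarity, dissimilarity, and regularization losses of Eqs. (4)–(6) follow the same hinge pattern with dC in place of dN.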
Recently, a large number of projection-based models have been proposed for inducing bilingual word embedding spaces (Smith et al., 2017; Conneau et al., 2018; Artetxe et al., 2018; Ruder et al., 2018a; Joulin et al., 2018, inter alia), most of them requiring limited (word-level) or no bilingual supervision. Based on a few thousand (manually created or automatically induced) word-translation pairs, these models learn a linear mapping Wg that projects the vectors from XL2 to the space XL1: g(XL2) = XL2Wg. The crosslingual space is then given as: XL1 ∪XL2Wg. Due to simplicity and robust downstream performance,3 we opt for the simple supervised learning of the cross-lingual projection matrix Wg (Smith et al., 2017) based on (closed-form) solution of the Procrustes problem (Sch¨onemann, 1966). Let XS ⊂XL2 and XT ⊂XL1 be the subsets of the two monolingual embedding spaces, containing (row-aligned) vectors of word translations. We then obtain the projection matrix as Wg = UV⊤, where UΣV⊤is the singular value decomposition of the product matrix XT XS⊤. 3 Evaluation Experimental Setup. We work with Wikipediatrained FASTTEXT embeddings (Bojanowski et al., 2017). We take English constraints from previous work – synonyms and antonyms were created from WordNet and Roget’s Thesaurus (Zhang et al., 2014; Ono et al., 2015); LE constraints were collected from WordNet by Vuli´c and Mrkˇsi´c (2018) and contain both direct and transitively obtained LE pairs. We retain the constraints for which both words exist in the trimmed (200K) FASTTEXT vocabulary, resulting in a total of 1,493,686 LE, 521,037 synonym, and 141,311 antonym pairs. We reserve 4,000 constraints (E: 2k, S: 1k, A: 1k) for validation and use the rest for training. We identify the following best hyperparameter configuration via grid search: H = 5, dh = 300, ψ = tanh, δa = 1, δs = δd = 0.5, λa = 2, and λr = 1. 3For a comprehensive downstream comparison of different cross-lingual embedding models, see (Glavaˇs et al., 2019). Setup 0% 10% 30% 50% 70% 90% 100% LEAR .174 .188 .273 .438 .548 .634 .682 GLEN .481 .485 .478 .474 .506 .504 .520 Table 1: Spearman correlation for GLEN, compared with LEAR (Vuli´c and Mrkˇsi´c, 2018), on HyperLex, for different word coverage settings (i.e., percentages of Hyperlex words seen in constraints in training). We apply a dropout (keep rate 0.5) to each hidden layer of f. We train in mini-batches of K = 50 constraints and learn with the Adam algorithm (Kingma and Ba, 2015): initial learning rate 10−4. 3.1 Graded Lexical Entailment We use ILE to predict the strength of LE between words. We evaluate GLEN against the state-of-theart LE-retrofitting model LEAR (Vuli´c and Mrkˇsi´c, 2018) on the HyperLex dataset (Vuli´c et al., 2017) which contains 2,616 word pairs (83% nouns, 17% verbs) judged (0-6 scale) by human annotators for the degree to which the LE relation holds. We evaluate the models in a deliberately controlled setup: we (randomly) select a subset of HyperLex words (0%, 10%, 30%, 50%, 70%, 90%, and 100%) that we allow models to “see” in the constraints, removing constraints with any other HyperLex word.4 Results and Discussion. The graded LE performance is shown in Table 1 for all seven setups. Graded LE results suggest that GLEN is robust and generalizes well to unseen words: the drop in performance between the 0% and 100% setups is mere 4% for GLEN (compared to a 50% drop for LEAR). 
Results in the 0% setting, in which GLEN improves over the distributional space by more than 30 points most clearly demonstrate its effectiveness.5 GLEN, however, lags behind LEAR in setups where LEAR has seen 70% or more of test words. This is intuitive: LEAR specializes the vector of each particular word using only the constraints containing that word; this gives LEAR higher specialization flexibility at the expense of generalization ability. In contrast, GLEN’s specialization function is affected by all constraints and has to work for all words; GLEN trades the effectiveness of LEAR’s word-specific updates for seen words, for the ability to generalize over unseen words. In a sense, there is a trade-off between the ability to generalize the 4In the 0% setting we remove all constraints containing any HyperLex word; in the 100% we use all constraints. The full set of constraints contains 99.8% of all HyperLex words. 5LEAR’s performance in the 0% setup corresponds to the performance of input distributional vectors. 4828 LE-specialization over unseen words and the performance for seen words. Put differently, by learning a general specialization function – i.e., by using linguistic constraints merely as training instances – GLEN is prevented from “overfitting” to seen words. Evaluation settings like our 90% or 100% settings, in which GLEN is outperformed by a pure retrofitting model, are however unrealistic in view of downstream tasks: for any concrete downstream task (e.g., textual entailment or taxonomy induction), it is highly unlikely that the LE-specialization model will have seen almost all of the test words (words for which LE inference is required) in its training linguistic constraints; this is why GLEN’s ability to generalize LE-specialization to unseen words (as indicated by 0%-50% settings) is particularly important. 3.2 Cross-Lingual LE Detection Neither joint (Nguyen et al., 2017) nor retrofitting models (Vuli´c and Mrkˇsi´c, 2018) can predict LE across languages in a straightforward fashion. Coupled with a CL space, GLEN can seamlessly predict LE across language boundaries. Experimental Setup. We evaluate GLEN on datasets from Upadhyay et al. (2018), encompassing two binary cross-lingual LE detection tasks: (1) HYPO task test model’s ability to determine the direction of the LE relation, i.e., to discern hyponymhypernym pairs from hypernym-hyponym pairs; (2) COHYP tasks tests whether the models are able to discern true LE pairs from cohyponyms (e.g., car and boat, cohyponyms of vehicle). We report results for three language pairs: English (EN) – {French (FR), Russian (RU), Arabic (AR)}. Upadhyay et al. (2018) divided each dataset into train (400-500 word pairs) and test portions (900-1000 word pairs): we use the train portions to tune the threshold t that binarizes GLEN’s predictions ILE. We induce the CL embeddings (i.e., learn the projections Wg, see Section §2) by projecting AR, FR, and RU embeddings to the EN space in a supervised fashion, by finding the optimal solution to the Procrustes problem for given 5K word translation pairs (for each language pair). 6 We compare GLEN with more complex models from (Upadhyay et al., 2018): they couple two methods for inducing syntactic CL embeddings – CL-DEP (Vuli´c, 2017) and BI-SPARSE (Vyas and Carpuat, 2016) – with 6We automatically translated 5K most frequent EN words to AR, FR, and RU with Google Translate. 
Model EN-FR EN-RU EN-AR Avg HYPO CL-DEP .538 .602 .567 .569 BI-SPARSE .566 .590 .526 .561 GLEN .792 .811 .816 .806 COHYP CL-DEP .610 .562 .631 .601 BI-SPARSE .667 .636 .668 .657 GLEN .779 .849 .821 .816 Table 2: CL LE detection results (accuracy) on CL datasets (HYPO, COHYP) (Upadhyay et al., 2018). an LE scorer based on the distributional inclusion hypothesis (Geffet and Dagan, 2005). Results. GLEN’s cross-lingual LE detection performance is shown in Table 2. GLEN dramatically outperforms CL LE detection models from (Upadhyay et al., 2018), with an average edge of 24% on HYPO datasets and 16% on the COHYP datasets.7 This accentuates GLEN’s generalization ability: it robustly predicts CL LE, although trained only on EN constraints. GLEN performs better for ENAR and EN-RU than for EN-FR: we believe this to merely be an artifact of the (rather small) test sets. We find GLEN’s CL performance for more distant language pairs (EN-AR, EN-RU) especially encouraging as it holds promise of successful transfer of LE-specialization to resource-lean languages lacking external linguistic resources. 4 Conclusion We presented GLEN, a general framework for specializing word embeddings for lexical entailment. Unlike existing LE-specialization models (Nguyen et al., 2017; Vuli´c and Mrkˇsi´c, 2018), GLEN learns an explicit specialization function using linguistic constraints as training examples. The learned LE-specialization function is then applied to vectors of words (1) unseen in constraints and (2) from different languages. GLEN displays robust graded LE performance and yields massive improvements over state-of-the-art in cross-lingual LE detection. We next plan to evaluate GLEN on multilingual and cross-lingual graded LE datasets (Vuli´c et al., 2019) and release a large multilingual repository of LE-specialized embeddings. We make GLEN (code and resources) available at: https://github.com/codogogo/glen. Acknowledgments The work of the first author was supported by the Eliteprogramm of the Baden-W¨urttemberg Stiftung, within the scope of the AGREE grant. 7All differences are statistically significant at α = 0.01, according to the non-parametric shuffling test (Yeh, 2000) 4829 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of ACL, pages 789–798. Richard Beckwith, Christiane Fellbaum, Derek Gross, and George A. Miller. 1991. WordNet: A lexical database organized on psycholinguistic principles. Lexical acquisition: Exploiting on-line resources to build a lexicon, pages 211–231. Or Biran and Kathleen McKeown. 2013. Classifying taxonomic relations between pairs of Wikipedia articles. In Proceedings of IJCNLP, pages 788–794. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135–146. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632–642. Allan M. Collins and Ross M. Quillian. 1972. Experiments on semantic memory and language comprehension. Cognition in Learning and Memory. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceedings of ICLR. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. 
Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies, 6(4):1–220. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL-HLT, pages 1606–1615. Christiane Fellbaum. 1998. WordNet. MIT Press. Maayan Geffet and Ido Dagan. 2005. The distributional inclusion hypotheses and lexical entailment. In Proceedings of ACL, pages 107–114. Association for Computational Linguistics. Goran Glavaˇs and Ivan Vuli´c. 2018. Explicit retrofitting of distributional word vectors. In Proceedings of ACL, pages 34–45. Goran Glavaˇs, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. arXiv preprint arXiv:1902.00508. Goran Glavaˇs and Simone Paolo Ponzetto. 2017. Dual tensor model for detecting asymmetric lexicosemantic relations. In Proceedings of EMNLP, pages 1758–1768. Amit Gupta, R´emi Lebret, Hamza Harkous, and Karl Aberer. 2017. Taxonomy induction using hypernym subsequences. In Proceedings of CIKM, pages 1329–1338. Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146–162. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of EMNLP, pages 2979–2984. Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In Proceedings of EMNLP, pages 2044– 2048. Diederik P. Kingma and Jimmy Ba. 2015. ADAM: A Method for Stochastic Optimization. In Proceedings of ICLR. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of ACL, pages 302–308. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL, pages 1030–1040. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111– 3119. Michael Mohler, David Bracewell, Marc Tomlinson, and David Hinote. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the First Workshop on Metaphor in NLP, pages 27–35. Nikola Mrkˇsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gaˇsi´c, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, 5:309–324. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217–250. Roberto Navigli, Paola Velardi, and Stefano Faralli. 2011. A graph-based algorithm for inducing lexical taxonomies from scratch. In Proceedings of IJCAI, pages 1872–1877. Kim Anh Nguyen, Maximilian K¨oper, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical embeddings for hypernymy detection and directionality. In Proceedings of EMNLP, pages 233–243. 4830 Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction. In Proceedings of ACL, pages 454–459. 
Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of NAACL-HLT, pages 984–989. Dominique Osborne, Shashi Narayan, and Shay Cohen. 2016. Encoding prior knowledge with eigenword embeddings. Transactions of the ACL, 4:417–430. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Edoardo Maria Ponti, Ivan Vuli´c, Goran Glavaˇs, Nikola Mrkˇsi´c, and Anna Korhonen. 2018. Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization. In Proceedings of EMNLP, pages 282–293. Marek Rei, Daniela Gerz, and Ivan Vuli´c. 2018. Scoring lexical entailment with a supervised directional similarity network. In Proceedings of ACL, pages 638–643. Sebastian Ruder, Ryan Cotterell, Yova Kementchedjhieva, and Anders Søgaard. 2018a. A discriminative latent-variable model for bilingual lexicon induction. In Proceedings of EMNLP, pages 458–468. Sebastian Ruder, Anders Søgaard, and Ivan Vuli´c. 2018b. A survey of cross-lingual embedding models. arXiv preprint arXiv:1706.04902. Peter H Sch¨onemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL, pages 258–267. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of ACL, pages 2389–2398. Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of ACL, pages 801–808. Luu Anh Tuan, Yi Tay, Siu Cheung Hui, and See Kiong Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In Proceedings of EMNLP, pages 403– 413. Shyam Upadhyay, Yogarshi Vyas, Marine Carpuat, and Dan Roth. 2018. Robust cross-lingual hypernymy detection using dependency context. In Proceedings of NAACL, pages 607–618. Ivan Vuli´c. 2017. Cross-lingual syntactically informed distributed word representations. In Proceedings of EACL, volume 2, pages 408–414. Ivan Vuli´c, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. Hyperlex: A large-scale evaluation of graded lexical entailment. Computational Linguistics, 43(4):781–835. Ivan Vuli´c and Nikola Mrkˇsi´c. 2018. Specialising word vectors for lexical entailment. In Proceedings of NAACL-HLT, pages 1134–1145. Ivan Vuli´c, Simone Paolo Ponzetto, and Goran Glavaˇs. 2019. Multilingual and cross-lingual graded lexical entailment. In Proceedings of ACL, page in print. Yogarshi Vyas and Marine Carpuat. 2016. Sparse bilingual word representations for cross-lingual lexical entailment. In Proceedings of NAACL, pages 1187– 1197. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL, 3:345–358. 
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, pages 1112–1122. Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RCNET: A general framework for incorporating knowledge into word representations. In Proceedings of CIKM, pages 1219–1228. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of COLING, pages 947–953. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of ACL, pages 545–550. Jingwei Zhang, Jeremy Salwen, Michael Glass, and Alfio Gliozzo. 2014. Word semantic representations using bayesian probabilistic tensor factorization. In Proceedings of EMNLP, pages 1522–1531.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4831–4836 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4831 Attention Is (not) All You Need for Commonsense Reasoning Tassilo Klein1, Moin Nabi1 1SAP Machine Learning Research, Berlin, Germany {tassilo.klein, m.nabi}@sap.com Abstract The recently introduced BERT model exhibits strong performance on several language understanding benchmarks. In this paper, we describe a simple re-implementation of BERT for commonsense reasoning. We show that the attentions produced by BERT can be directly utilized for tasks such as the Pronoun Disambiguation Problem and Winograd Schema Challenge. Our proposed attention-guided commonsense reasoning method is conceptually simple yet empirically powerful. Experimental analysis on multiple datasets demonstrates that our proposed system performs remarkably well on all cases while outperforming the previously reported state of the art by a margin. While results suggest that BERT seems to implicitly learn to establish complex relationships between entities, solving commonsense reasoning tasks might require more than unsupervised models learned from huge text corpora. 1 Introduction Recently, neural models pre-trained on a language modeling task, such as ELMo (Peters et al., 2018b), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec (Mikolov et al., 2013) that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence (Levesque et al., 2012). Below is a popular example from the binarychoice pronoun coreference problem (Lee et al., 2017) of WSC: Sentence: The trophy doesn't fit in the suitcase because it is too small. Answers: A) the trophy B) the suitcase Humans resolve the pronoun “it” to “the suitcase” with no difficulty, whereas a system without commonsense reasoning would be unable to distinguish “the suitcase” from the otherwise viable candidate, “the trophy”. Previous attempts at solving WSC usually involve heavy utilization of annotated knowledge bases (KB), rule-based reasoning, or hand-crafted features (Peng et al., 2015; Bailey et al., 2015; Sch¨uller, 2014; Sharma et al., 2015; Morgenstern et al., 2016). There are also some empirical works towards solving WSC making use of learning (Rahman and Ng, 2012; Tang et al., 2018; Radford et al., 2018). Recently, (Trinh and Le, 2018) proposed to use a language model (LM) to score the two sentences obtained when replacing the pronoun by the two candidates. The sentence that is assigned higher probability under the model designates the chosen candidate. 
Probability is calculated via the chain rule, as the product of the probabilities assigned to each word in the sentence. Very recently, (Emami et al., 2018) proposed the knowledge hunting method, which is a rule-based system that uses search engines 4832 0.2 0.1 0.5 0.1 0.7 0.5 0 0.2 0.2 0.1 0 0.4 0.2 0.6 0.4 0.1 0.1 0.3 0.2 0.1 0.5 0 0.7 0.5 0 0.2 0 0 0 0 0.2 0 0 0.1 0 0.3 The trophy doesn’t fit in the suitcase because it is too small Figure 1: Maximum Attention Score (MAS) for a particular sentence, where colors show attention maps for different words (best shown in color). Squares with blue/red frames correspond to specific sliced attentions Ac for candidates c, establishing the relationship to the reference pronoun indicated with green. Attention is color-coded in blue/ red for candidates “trophy”/ “suitcase”; the associated pronoun “it” is indicated in green. Attention values are compared elementwise (black double arrow), and retain only the maximum achieved by a masking operation. Matrices on the outside with red background elements correspond to the masked attentions Ac ◦Mc. to gather evidence for the candidate resolutions without relying on the entities themselves. Although these methods are interesting, they need fine-tuning, or explicit substitution or heuristicbased rules. See also (Trichelair et al., 2018) for a discussion. The BERT model is based on the “Transformer” architecture (Vaswani et al., 2017), which relies purely on attention mechanisms, and does not have an explicit notion of word order beyond marking each word with its absolute-position embedding. This reliance on attention may lead one to expect decreased performance on commonsense reasoning tasks (Roemmele et al., 2011; Zellers et al., 2018) compared to RNN (LSTM) models (Hochreiter and Schmidhuber, 1997) that do model word order directly, and explicitly track states across the sentence. However, the work of (Peters et al., 2018a) suggests that bidirectional language models such as BERT implicitly capture some notion of coreference resolution. In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for the sake of commonsense reasoning tasks while achieving state-of-the-art results on the multiple task. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-ofart accuracy of 66.7%. On a WSC dataset, WSC273, our method achieves 60.3%. As of today, state-of-the-art accuracy on the WSC-273 for single model performance is around 57%, (Emami et al., 2018) and (Trinh and Le, 2018). These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora. 2 Attention Guided Reasoning In this section we first review the main aspects of the BERT approach, which are important to understand our proposal and we introduce notations used in the rest of the paper. Then, we introduce Maximum Attention Score (MAS), and explain how it can be utilized for commonsense reasoning. 
2.1 BERT and Notation The concept of BERT is built upon two key ingredients: (a) the transformer architecture and (b) unsupervised pre-training. The transformer architecture consists of two main building blocks, stacked encoders and decoders, which are connected in a cascaded fashion. The encoder is further divided into two components, namely a self-attention layer and a feed-forward neural network. The self-attention allows for attending to specific words during encoding and therefore establishing a focus context w.r.t. each word. In contrast, the decoder has an additional encoder-decoder layer that switches between self-attention and a feed-forward network. It allows the decoder to attend to specific parts of the input sequence.

Method  Acc.
Unsupervised Semantic Similarity Method (USSM)  48.3%
USSM + Cause-Effect Knowledge Base (Liu et al., 2016)  55.0%
USSM + Cause-Effect + WordNet (Miller, 1995) + ConceptNet (Liu and Singh, 2004) KB  56.7%
Subword-level Transformer LM (Vaswani et al., 2017)  58.3%
Single LM (partial) (Trinh and Le, 2018)  53.3%
Single LM (full) (Trinh and Le, 2018)  60.0%
Patric Dhondt (WS Challenge 2016)  45.0%
Nicos Issak (WS Challenge 2016)  48.3%
Quan Liu (WS Challenge 2016 - winner)  58.3%
USSM + Supervised DeepNet  53.3%
USSM + Supervised DeepNet + 3 KBs  66.7%
Our Proposed Method  68.3%
Table 1: Pronoun Disambiguation Problem: performance on PDP-60 of unsupervised methods (top) and supervised methods (bottom). Results other than ours are taken from (Trinh and Le, 2018).

Method  Acc.
Random guess  50.0%
USSM + KB  52.0%
USSM + Supervised DeepNet + KB  52.8%
Single LM (Trinh and Le, 2018)  54.5%
Transformer (Vaswani et al., 2017)  54.1%
Know. Hunter (Emami et al., 2018)  57.1%
Our Proposed Method  60.3%
Table 2: Winograd Schema Challenge. The other results are taken from (Trichelair et al., 2018) and (Trinh and Le, 2018).

As attention allows for establishing a relationship between words, it is very important for tasks such as coreference resolution and finding associations. In the specific context of pronouns, attention gives rise to links to m candidate nouns, which we denote in the following as C = {c1, . . . , cm}. The concept of self-attention is further expanded within BERT by the idea of so-called multi-head outputs that are incorporated in each layer. In the following, we will denote heads and layers with h ∈ H and l ∈ L, respectively. Multi-heads serve several purposes. On the one hand, they allow for dispersing the focus over multiple positions. On the other hand, they constitute an enriched representation by expanding the embedding space. Leveraging the nearly unlimited amount of data available, BERT learns two novel unsupervised prediction tasks during training. One of the tasks is to predict tokens that were randomly masked given the context, notably with the context being established in a bi-directional manner. The second task is next sentence prediction, whereby BERT learns the relationship between two sentences and classifies whether they are consecutive. 2.2 Maximum Attention Score (MAS) In order to exploit the associative leverage of self-attention, the computation of MAS follows the notion of max-pooling on attention level between a reference word s (e.g., the pronoun) and candidate words c (e.g., the multiple-choice candidate nouns). The proposed approach takes as input the BERT attention tensor and produces for each candidate word a score, which indicates the strength of association.
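Operationally, the MAS computation formalized in Eqs. (1) and (2) below can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the authors' code: it assumes the relevant attention weights (reference pronoun versus each candidate, across all heads and layers) have already been extracted from a BERT forward pass and stacked into a single tensor, and the tensor layout is our choice.

```python
import torch
import torch.nn.functional as F

def max_attention_scores(A: torch.Tensor) -> torch.Tensor:
    """Compute MAS(c) for every candidate.

    A: float tensor of shape (H, L, C) holding, for each head h and layer l, the
    attention between the reference pronoun and each of the C candidate words.
    """
    winners = A.argmax(dim=-1)                              # (H, L): winning candidate per (h, l)
    masks = F.one_hot(winners, num_classes=A.shape[-1])     # (H, L, C): binary masks M_c
    retained = (A * masks).sum(dim=(0, 1))                  # per-candidate sum of A_c ∘ M_c
    return retained / retained.sum()                        # MAS values, summing to 1

# Toy example: two candidates, the second receiving most of the max-pooled attention.
A = torch.tensor([[[0.2, 0.7], [0.1, 0.5]],
                  [[0.6, 0.4], [0.2, 0.3]]])                # (H=2, L=2, C=2)
print(max_attention_scores(A))                              # e.g. tensor([0.2857, 0.7143])
```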
Formally, the BERT attention tensor A ∈ R^{H×L×|C|} is sliced into several matrices Ac ∈ R^{H×L}, each of them corresponding to the attention between the reference word and a candidate c. Each Ac is associated with a binary mask matrix Mc, whose values are obtained at each location tuple (l, h) according to:

Mc(l, h) = 1 if argmax A(l, h) = c, and 0 otherwise.    (1)

Mask entries are non-zero only at locations where the candidate word c is associated with maximum attention. Limiting the impact of attention by masking allows the method to focus on the most salient parts. Given the Ac and Mc matrix pair for each candidate c, the MAS can be computed. For this purpose, the sum of the Hadamard product for each pair is calculated first. Next, the actual score is obtained by computing the ratio of each Hadamard sum w.r.t. all others, according to:

MAS(c) = Σ_{l,h} Ac ∘ Mc / ( Σ_{c∈C} Σ_{l,h} Ac ∘ Mc ) ∈ [0, 1].    (2)

Thus MAS retains the attention of each candidate only where it is most dominant, coupling it with the notion of frequency of occurrence to weight the importance. See Fig. 1 for a schematic illustration of the computation of MAS and the matrices involved.

Figure 2: Maximum Attention Score (MAS) for some sample questions from WSC-273: "The drain is clogged with hair. It has to be cleaned/removed.", "Steve follows Fred's example in everything. He admires/influences him hugely.", "The fish ate the worm. It was hungry/tasty.", "The foxes are attacking the chickens at night. I have to kill/guard them.", and "The man lifted the boy onto his shoulders/bunk bed." The last example is a failure case, where the coreference is predicted incorrectly.

3 Experimental Results We evaluate our method on two commonsense reasoning tasks, PDP and WSC. On the former task, we use the original set of 60 questions (PDP-60) as the main benchmark. The second task (WSC-273) is qualitatively much more difficult: the best recently reported results are not much above random guess. This task consists of 273 questions and is designed to work against traditional linguistic techniques, common heuristics, or simple statistical tests over text corpora (Levesque et al., 2012). 3.1 BERT Model Details In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning. Specifically, we use the PyTorch implementation of the pre-trained bert-base-uncased models supplied by Google1. This model has 12 layers (i.e., Transformer blocks), a hidden size of 768, and 12 self-attention heads. In all cases, we set the feed-forward/filter size to 3072 for the hidden size of 768. The total number of parameters of the model is 110M. 3.2 Pronoun Disambiguation Problem We first examine our method on PDP-60 for the Pronoun Disambiguation task. In Tab. 1 (top), our method clearly outperforms all previous unsupervised results. Next, we allow other systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. As reported in Tab. 1 (bottom), our method outperforms the best system in the 2016 competition (58.3%) by a large margin.
Specifically, we achieve 68.3% accuracy, better than the more recently reported results from (Liu et al., 2017) (66.7%), who makes use of three KBs and a supervised deep network. 3.3 Winograd Schema Challenge On the harder task WSC-273, our method also outperforms the current state-of-the-art, as shown in Tab. 2. Namely, our method achieves an accuracy of 60.3%, nearly 3% of accuracy above the 1https://github.com/huggingface/pytorch-pretrainedBERT 4835 previous best result. This is a drastic improvement considering the best system based on language models outperforms random guess by only 4% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems are much lower than that of PDP60. Second, incorporating supervised learning and expensive annotated KBs to USSM provides insignificant gain this time (+3%), comparing to the large gain on PDP-60 (+19%). Finally, for the sake of completeness, (Trinh and Le, 2018) report that their single language model trained on a customized dataset built from CommonCrawl based on questions used in comonsense reasoning achieves an higher accuracy than the proposed approach with 62.6%. We visualize the MAS to have more insights into the decisions of our resolvers. Fig. 2 displays some samples of correct and incorrect decisions made by our proposed method. MAS score of different words are indicated with colors, where the gradient from blue to red represents the score transition from low to high. 4 Discussion Pursuing commonsense reasoning in a purely unsupervised way seems very attractive for several reasons. On the one hand, this implies tapping the nearly unlimited resources of unannotated text and leveraging the wealth of information therein. On the other hand, tackling the commonsense reasoning objective in a (more) supervised fashion typically seems to boost performance for very a specific task as concurrent work shows (Kocijan et al., 2019). However, the latter approach is unlikely to generalize well beyond this task. That is because covering the complete set of commonsense entities is at best extremely hard to achieve, if possible at all. The data-driven paradigm entails that the derived model can only make generalizations based on the data it has observed. Consequently, a supervised machine learning approach will have to be exposed to all combinations, i.e. replacing lexical items with semantically similar items in order to derive various concept notions. Generally, this is prohibitively expensive and therefore not viable. In contrast, in the proposed (unsupervised self-attention guided) approach this problem is alleviated. This can be largely attributed to the nearly unlimited text corpora on which the model originally learns, which makes it likely to cover a multitude of concept relations, and the fact that attention implicitly reduces the search space. However, all these approaches require the answer to explicitly exist in the text. That is, they are unable to resolve pronouns in light of abstract/implicit referrals that require background knowledge - see (Saba, 2018) for more detail. However, this is beyond the task of WSC. Last, the presented results suggest that BERT models the notion of complex relationship between entities, facilitating commonsense reasoning to a certain degree. 5 Conclusion Attracted by the success of recently proposed language representation model BERT, in this paper, we introduce a simple yet effective reimplementation of BERT for commonsense reasoning. 
Specifically, we propose a method which exploits the attentions produced by BERT for the challenging tasks of PDP and WSC. The experimental analysis demonstrates that our proposed system outperforms the previous state of the art on multiple datasets. However, although BERT seems to implicitly establish complex relationships between entities facilitating tasks such as coreference resolution, the results also suggest that solving commonsense reasoning tasks might require more than leveraging a language model trained on huge text corpora. Future work will entail adaption of the attentions, to further improve the performance. References Daniel Bailey, Amelia J Harrison, Yuliya Lierler, Vladimir Lifschitz, and Julian Michael. 2015. The winograd schema challenge and reasoning about correlation. In 2015 AAAI Spring Symposium Series. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018. A knowledge hunting framework for common sense reasoning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1949–1958, Brussels, Belgium. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. 4836 Long short-term memory. Neural computation, 9(8):1735–1780. Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. 2019. A surprisingly robust trick for the winograd schema challenge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019. Association for Computational Linguistics. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Hugo Liu and Push Singh. 2004. Conceptneta practical commonsense reasoning tool-kit. BT technology journal, 22(4):211–226. Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2016. Probabilistic reasoning via deep learning: Neural association models. arXiv preprint arXiv:1603.07704. Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2017. Combing context and commonsense knowledge through neural networks for solving winograd schema problems. In 2017 AAAI Spring Symposium Series. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Leora Morgenstern, Ernest Davis, and Charles L Ortiz. 2016. Planning, executing, and evaluating the winograd schema challenge. AI Magazine, 37(1):50–54. Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. Solving hard coreference problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 809–819. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. 
Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: the winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 777–789. Association for Computational Linguistics. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series. Walid S. Saba. 2018. A simple machine learning method for commonsense reasoning? A short commentary on trinh & le (2018). CoRR, abs/1810.00521. Peter Sch¨uller. 2014. Tackling winograd schemas by formalizing relevance theory in knowledge graphs. In Fourteenth International Conference on the Principles of Knowledge Representation and Reasoning. Arpit Sharma, Nguyen H Vo, Somak Aditya, and Chitta Baral. 2015. Towards addressing the winograd schema challengebuilding and using a semantic parser and a knowledge hunting module. In TwentyFourth International Joint Conference on Artificial Intelligence. Gongbo Tang, Mathias M¨uller, Annette Rios, and Rico Sennrich. 2018. Why self-attention? a targeted evaluation of neural machine translation architectures. arXiv preprint arXiv:1808.08946. Paul Trichelair, Ali Emami, Jackie Chi Kit Cheung, Adam Trischler, Kaheer Suleman, and Fernando Diaz. 2018. On the evaluation of common-sense reasoning in natural language understanding. arXiv preprint arXiv:1811.01778. Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4837–4842 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4837 A Surprisingly Robust Trick for the Winograd Schema Challenge Vid Kocijan1, Ana-Maria Cret¸u2, Oana-Maria Camburu1,3, Yordan Yordanov1, Thomas Lukasiewicz1,3 1University of Oxford 2Imperial College London 3Alan Turing Institute, London [email protected], [email protected] Abstract The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 consistently and robustly improves when finetuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSClike dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-theart solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more accurate on the “complex” subsets of WSC273, introduced by Trichelair et al. (2018). 1 Introduction The Winograd Schema Challenge (WSC) (Levesque et al., 2012, 2011) was introduced for testing AI agents for commonsense knowledge. Here, we refer to the most popular collection of such sentences as WSC273, to avoid confusion with other slightly modified datasets, such as PDP60, (Davis et al., 2017) and the Definite Pronoun Resolution dataset (Rahman and Ng, 2012), denoted WSCR in the sequel. WSC273 consists of 273 instances of the pronoun disambiguation problem (PDP) (Morgenstern et al., 2016). Each is a sentence (or two) with a pronoun referring to one of the two or more nouns; the goal is to predict the correct one. The task is challenging, since WSC examples are constructed to require human-like commonsense knowledge and reasoning. The best known solutions use deep learning with an accuracy of 63.7% (Opitz and Frank, 2018; Trinh and Le, 2018). The problem is difficult to solve not only because of the commonsense reasoning challenge, but also due to the small existing datasets making it difficult to train neural networks directly on the task. Neural networks have proven highly effective in natural language processing (NLP) tasks, outperforming other machine learning methods and even matching human performance (Hassan et al., 2018; Nangia and Bowman, 2018). However, supervised models require many per-task annotated training examples for a good performance. For tasks with scarce data, transfer learning is often applied (Howard and Ruder, 2018; Johnson and Zhang, 2017), i.e., a model that is already trained on one NLP task is used as a starting point for other NLP tasks. A common approach to transfer learning in NLP is to train a language model (LM) on large amounts of unsupervised text (Howard and Ruder, 2018) and use it, with or without further fine-tuning, to solve other downstream tasks. Building on top of a LM has proven to be very successful, producing state-of-the-art (SOTA) results (Liu et al., 2019; Trinh and Le, 2018) on benchmark datasets like GLUE (Wang et al., 2019) or WSC273 (Levesque et al., 2011). In this work, we first show that fine-tuning existing LMs on WSCR is a robust method of improving the capabilities of the LM to tackle WSC273 and WNLI. 
This is surprising, because previous attempts to generalize from the WSCR dataset to WSC273 did not achieve a major improvement (Opitz and Frank, 2018). Secondly, we introduce a method for generating large-scale WSC-like examples. We use this method to create a 2.4M dataset from English Wikipedia1, which we further use together with WSCR for finetuning the pre-trained BERT LM (Devlin et al., 2018). The dataset will be made publicly available. We achieve accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous 1https://dumps.wikimedia.org/enwiki/ dump id: enwiki-20181201 4838 best solutions by 8.8% and 9.6%, respectively. 2 Background This section introduces the main LM used in our work, BERT (Devlin et al., 2018), followed by a detailed description of WSC and its relaxed form, the Definite Pronoun Resolution problem. BERT. Our work uses the pre-trained Bidirectional Encoder Representations from Transformers (BERT) LM (Devlin et al., 2018) based on the transformer architecture (Vaswani et al., 2017). Due to its high performance on natural language understanding (NLU) benchmarks and the simplicity to adapt its objective function to our finetuning needs, we use BERT throughout this work. BERT is originally trained on two tasks: masked token prediction, where the goal is to predict the missing tokens from the input sequence, and next sentence prediction, where the model is given two sequences and asked to predict whether the second sequence follows after the first one. We focus on the first task to fine-tune BERT using WSC-like examples. We use masked token prediction on a set of sentences that follow the WSC structure, where we aim to determine which of the candidates is the correct replacement for the masked pronoun. Winograd Schema Challenge. Having introduced the goal of the Winograd Schema Challenge in Section 1, we illustrate it with the following example: The trophy didn’t fit into the suitcase because it was too [large/small]. Question: What was too [large/small]? Answer: the trophy / the suitcase The pronoun “it” refers to a different noun, based on the word in the brackets. To correctly answer both versions, one must understand the meaning of the sentence and its relation to the changed word. More specifically, a text must meet the following criteria to be considered for a Winograd Schema (Levesque et al., 2011): 1. Two parties must appear in the text. 2. A pronoun appears in the sentence and refers to one party. It would be grammatically correct if the pronoun referred to the other. 3. The question asks to determine what party the pronoun refers to. 4. A “special word” appears in the sentence. When switched to an “alternative word”, the sentence remains grammatically correct, but the referent of the pronoun changes. Additionally, commonsense reasoning must be required to answer the question. A detailed analysis by Trichelair et al. (2018) shows that not all WSC273 examples are equally difficult. They introduce two complexity measures (associativity and switchability) and, based on them, refine evaluation metrics for WSC273. In associative examples, one of the parties is more commonly associated with the rest of the question than the other one. Such examples are seen as “easier” than the rest and represent 13.5% of WSC273. The remaining 86.5% of WSC273 is called non-associative. 47% of the examples are “switchable”, because the roles of the parties can be changed, and examples still make sense. 
A model is tested on the original, “unswitched” switchable subset and on the same subset with switched parties. The consistency between the two results is computed by comparing how often the model correctly changes the answer when the parties are switched. Definite Pronoun Resolution. Since collecting examples that meet the criteria for WSC is hard, Rahman and Ng (2012) relax the criteria and construct the Definite Pronoun Resolution (DPR) dataset, following the structure of WSC, but also accepting easier examples. The dataset, referred throughout the paper as WSCR, is split into a training set with 1322 examples and test set with 564 examples. Six examples in the WSCR training set reappear in WSC273. We remove these examples from WSCR. We use the WSCR training and test sets for fine-tuning the LMs and for validation, respectively. WNLI. One of the 9 GLUE benchmark tasks (Wang et al., 2019), WNLI is very similar to the WSC273 dataset, but is phrased as an entailment problem instead. A WSC schema is given as a premise. The hypothesis is constructed by extracting the sentence part where the pronoun is, and replacing the pronoun with one candidate. The label is 1, if the candidate is the correct replacement, and 0, otherwise. 3 Related Work There have been several attempts at solving WSC273. Previous work is based on Google queries for knowledge (Emami et al., 2018) (58%), 4839 sequence ranking (Opitz and Frank, 2018) (63%), and using an ensemble of LMs (Trinh and Le, 2018) (63%). A critical analysis (Trichelair et al., 2018) showed that the main reason for success when using an ensemble of LMs (Trinh and Le, 2018) was largely due to imperfections in WSC273, as discussed in Section 2. The only dataset similar to WSC273 is an easier but larger (1886 examples) variation published by Rahman and Ng (2012) and earlier introduced as WSCR. The sequence ranking approach uses WSCR for training and attempts to generalize to WSC273. The gap in performance scores between WSCR and WSC273 (76% vs. 63%) implies that examples in WSC273 are much harder. We note that Opitz and Frank (2018) do not report removing the overlapping examples between WSCR and WSC273. Another important NLU benchmark is GLUE (Wang et al., 2019), which gathers 9 tasks and is commonly used to evaluate LMs. The best score has seen a huge jump from 0.69 to over 0.82 in a single year. However, WNLI is a notoriously difficult task in GLUE and remains unsolved by the existing approaches. None of the models have beaten the majority baseline at 65.1, while human performance lies at 95.9 (Nangia and Bowman, 2018). 4 Our Approach WSC Approach. We approach WSC by finetuning the pre-trained BERT LM (Devlin et al., 2018) on the WSCR training set and further on a very large Winograd-like dataset that we introduce. Below, we present our fine-tuning objective function and the introduced dataset. Given a training sentence s, the pronoun to be resolved is masked out from the sentence, and the LM is used to predict the correct candidate in the place of the masked pronoun. Let c1 and c2 be the two candidates. BERT for Masked Token Prediction is used to find P(c1|s) and P(c2|s). If a candidate consists of several tokens, the corresponding number of [MASK] tokens is used in the masked sentence. Then, log P(c|s) is computed as the average of log-probabilities of each composing token. If c1 is correct, and c2 is not, the loss is: L = −log P(c1|s) + (1) + α · max(0, log P(c2|s) −log P(c1|s) + β), where α and β are hyperparameters. MaskedWiki Dataset. 
To get more data for fine-tuning, we automatically generate a largescale collection of sentences similar to WSC. More specifically, our procedure searches a large text corpus for sentences that contain (at least) two occurrences of the same noun. We mask the second occurrence of this noun with the [MASK] token. Several possible replacements for the masked token are given, for each noun in the sentence different from the replaced noun. We thus obtain examples that are structurally similar to those in WSC, although we cannot ensure that they fulfill all the requirements (see Section 2). To generate such sentences, we choose the English Wikipedia as source text corpus, as it is a large-scale and grammatically correct collection of text with diverse information. We use the Stanford POS tagger (Manning et al., 2014) for finding nouns. We obtain a dataset with approximately 130M examples. We downsample the dataset uniformly at random to obtain a dataset of manageable size. After downsampling, the dataset consists of 2.4M examples. All experiments are conducted with this downsampled dataset only. To determine the quality of the dataset, 200 random examples are manually categorized into 4 categories: • Unsolvable: the masked word cannot be unambiguously selected with the given context. Example: Palmer and Crenshaw both used Wilson 8802 putters , with [MASK] ’s receiving the moniker “ Little Ben ” due to his proficiency with it . [Palmer/Crenshaw] • Hard: the answer is not trivial to figure out. Example: At the time of Plath ’s suicide , Assia was pregnant with Hughes ’s child , but she had an abortion soon after [MASK] ’s death . [Plath/Assia] • Easy: The alternative sentence is grammatically incorrect or is very visibly an inferior choice. Example: The syllables are pronounced strongly by Gaga in syncopation while her vibrato complemented Bennett’s characteristic jazz vocals and swing . Olivier added , “ [MASK] ’s voice , when stripped of its bells and whistles, showcases a timelessness that lends itself well to the genre . ” [Gaga/syncopation] • Noise: The example is a result of a parsing error. 4840 In the analyzed subset, 8.5% of examples were unsolvable, 45% were hard, 45.5% were easy, and 1% fell into the noise category. WNLI Approach. Models are additionally tested on the test set of the WNLI dataset. To use the same evaluation approach as for the WSC273 dataset, we transform the examples in WNLI from the premise–hypothesis format into the masked words format. Since each hypothesis is just a substring of the premise with the pronoun replaced for the candidate, finding the replaced pronoun and one candidate can be done by finding the hypothesis as a substring of the premise. All other nouns in the sentence are treated as alternative candidates. The Stanford POS-tagger (Manning et al., 2014) is used to find the nouns in the sentence. The probability for each candidate is computed to determine whether the candidate in the hypothesis is the best match. Only the test set of the WNLI dataset is used, because it does not overlap with WSC273. We do not train or validate on the WNLI training and validation sets, because some of the examples share the premise. Indeed, when upper rephrasing of the examples is used, the training, validation, and test sets start to overlap. 5 Evaluation In this work, we use the PyTorch implementation2 of Devlin et al.’s (2018) pre-trained model, BERT-large. To obtain BERT WIKI, we train on MaskedWiki starting from the pre-trained BERT. 
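As a concrete illustration of the candidate scoring and the loss of Eq. (1) used during this training, a minimal PyTorch sketch is given below. This is not the authors' released code: the masked-LM interface, tensor shapes, and function names are assumptions made for illustration, and the default values of α and β simply match those reported for MaskedWiki training below.

```python
import torch
import torch.nn.functional as F

def candidate_log_prob(logits, mask_positions, candidate_ids):
    """Average log-probability of a candidate's tokens at the [MASK] positions.

    logits: (seq_len, vocab_size) masked-LM logits for one masked sentence
            (hypothetical interface; any BERT-style masked LM would do).
    mask_positions: LongTensor of [MASK] indices, one per candidate token.
    candidate_ids: LongTensor of the candidate's token ids, same length.
    """
    log_probs = F.log_softmax(logits[mask_positions], dim=-1)        # (k, vocab)
    token_log_probs = log_probs[torch.arange(len(candidate_ids)), candidate_ids]
    return token_log_probs.mean()                                    # log P(c|s)

def wsc_margin_loss(logp_correct, logp_wrong, alpha=20.0, beta=0.2):
    """Eq. (1): negative log-likelihood of the correct candidate plus a
    margin penalty when the wrong candidate scores within beta of it."""
    margin = torch.clamp(logp_wrong - logp_correct + beta, min=0.0)
    return -logp_correct + alpha * margin
```

For each WSC-like example, the pronoun is replaced by as many [MASK] tokens as the candidate has tokens, log P(c|s) is taken as the average token log-probability, and the loss above is applied to the correct/wrong candidate pair.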
The training procedure differs from the training of BERT (Devlin et al., 2018) in a few points. The model is trained with a single epoch of the MaskedWiki dataset, using batches of size 64 (distributed on 8 GPUs), Adam optimizer, a learning rate of 5.0 · 10−6, and hyperparameter values α = 20 and β = 0.2 in the loss function (Eq. (1)). The values were selected from α ∈ {5, 10, 20} and β ∈{0.1, 0.2, 0.4} and learning rate from {3 · 10−5, 1 · 10−5, 5 · 10−6, 3 · 10−6} using grid search. To speed up the hyperparameter search, the training (for hyperparameter search only) is done on a randomly selected subset of size 100, 000. The performance is then compared on the WSCR test set. Both BERT and BERT WIKI are fine-tuned on the WSCR training dataset to create BERT WSCR 2https://github.com/huggingface/ pytorch-pretrained-BERT and BERT WIKI WSCR. The WSCR test set was used as the validation set. The fine-tuning procedure was the same as the training procedure on MaskedWiki, except that 30 epochs were used. The model was validated after every epoch, and the model with highest performance on the validation set was retained. The hyperparameters α and β and learning rate were selected with grid search from the same sets as for MaskedWiki training. For comparison, experiments are also conducted on two other LMs, BERT-base (BERT with less parameters) and General Pre-trained Transformer (GPT) by Radford et al. (2018). The training on BERT-base was conducted in the same way as for the other models. When using GPT, the probability of a word belonging to the sentence P(c|s) is computed as partial loss in the same way as by Trinh and Le (2018). Due to WSC’s “special word” property, examples come in pairs. A pair of examples only differs in a single word (but the correct answers are different). The model BERT WIKI WSCR no pairs is the BERT WIKI model, fine-tuned on WSCR, where only a single example from each pair is retained. The size of WSCR is thus halved. The model BERT WIKI WSCR pairs is obtained by fine-tuning BERT WIKI on half of the WSCR dataset. This time, all examples in the subset come in pairs, just like in the unreduced WSCR dataset. We evaluate all models on WSC273 and the WNLI test dataset, as well as the various subsets of WSC273, as described in Section 2. The results are reported in Table 1 and will be discussed next. Discussion. Firstly, we note that models that are fine-tuned on the WSCR dataset consistently outperform their non-fine-tuned counterparts. The BERT WIKI WSCR model outperforms other language models on 5 out of 6 sets that they are compared on. In comparison to the LM ensemble by Trinh and Le (2018), the accuracy is more consistent between associative and non-associative subsets and less affected by the switched parties. However, it remains fairly inconsistent, which is a general property of LMs. Secondly, the results of BERT WIKI seem to indicate that this dataset alone does not help BERT. However, when additionally fine-tuned to WSCR, the accuracy consistently improves. Finally, the results of BERT WIKI no pairs 4841 WSC273 non-assoc. assoc. unswitched switched consist. 
WNLI BERT WIKI 0.619 0.597 0.757 0.573 0.603 0.389 0.712 BERT WIKI WSCR 0.725 0.720 0.757 0.732 0.710 0.550 0.747 BERT 0.619 0.602 0.730 0.595 0.573 0.458 0.658 BERT WSCR 0.714 0.699 0.811 0.695 0.702 0.550 0.719 BERT-base 0.564 0.551 0.649 0.527 0.565 0.443 0.630 BERT-base WSCR 0.623 0.606 0.730 0.611 0.634 0.443 0.705 GPT 0.553 0.525 0.730 0.595 0.519 0.466 – GPT WSCR 0.674 0.653 0.811 0.664 0.580 0.641 – BERT WIKI WSCR no pairs 0.663 0.669 0.622 0.672 0.641 0.511 – BERT WIKI WSCR pairs 0.703 0.695 0.757 0.718 0.710 0.565 – LM ensemble 0.637 0.606 0.838 0.634 0.534 0.443 – Knowledge Hunter 0.571 0.583 0.5 0.588 0.588 0.901 – Table 1: Results on WSC273 and its subsets. The comparison between each language model and its WSCR-tuned model is given. For each column, the better result of the two is in bold. The best result in the column overall is underlined. Results for the LM ensemble and Knowledge Hunter are taken from Trichelair et al. (2018). All models consistently improve their accuracy when fine-tuned on the WSCR dataset. and BERT WIKI pairs show that the existence of WSC-like pairs in the training data affects the performance of the trained model. MaskedWiki does not contain such pairs. 6 Summary and Outlook This work achieves new SOTA results on the WSC273 and WNLI datasets by fine-tuning the BERT language model on the WSCR dataset and a newly introduced MaskedWiki dataset. The previous SOTA results on WSC273 and WNLI are improved by 8.8% and 9.6%, respectively. To our knowledge, this is the first model that beats the majority baseline on WNLI. We show that by fine-tuning on WSC-like data, the language model’s performance on WSC consistently improves. The consistent improvement of several language models indicates the robustness of this method. This is particularly surprising, because previous work (Opitz and Frank, 2018) implies that generalizing to WSC273 is hard. In future work, other uses and the statistical significance of MaskedWiki’s impact and its applications to different tasks will be investigated. Furthermore, to further improve the results on WSC273, data-filtering procedures may be introduced to find harder WSC-like examples. Acknowledgments This work was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1, by the EPSRC grant EP/R013667/1, by the EPSRC studentship OUCS/EPSRC-NPIF/VK/ 1123106, and by an EPSRC Vacation Bursary. We also acknowledge the use of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1). References Ernest Davis, Leora Morgenstern, and Charles L. Ortiz. 2017. The first Winograd Schema Challenge at IJCAI-16. AI Magazine, 38(3):97–98. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. Computing Research Repository, arXiv:1810.04805. Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018. A knowledge hunting framework for common sense reasoning. Computing Research Repository, arXiv:1810.01375. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic Chinese to English news translation. Computing Research Repository, arXiv:1803.05567. Jeremy Howard and Sebastian Ruder. 2018. 
Fine-tuned language models for text classification. Computing Research Repository, arXiv:1801.06146. Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categorization. In Proceedings of ACL, pages 562–570. ACL. 4842 Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd Schema Challenge. AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 46. Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd Schema Challenge. In Proceedings of KR. AAAI Press. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. Computing Research Repository, arXiv:1901.11504. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Leora Morgenstern, Ernest Davis, and Charles L. Ortiz. 2016. Planning, executing and evaluating the Winograd Schema Challenge. AI Magazine. Nikita Nangia and Samuel R. Bowman. 2018. A conservative human baseline estimate for GLUE: People still (mostly) beat machines. Juri Opitz and Anette Frank. 2018. Addressing the Winograd Schema Challenge as a sequence ranking task. In Proceedings of the First International Workshop on Language Cognition and Computational Models, pages 41–52. ACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd Schema Challenge. In Proceedings of EMNLP. Paul Trichelair, Ali Emami, Jackie Chi Kit Cheung, Adam Trischler, Kaheer Suleman, and Fernando Diaz. 2018. On the evaluation of common-sense reasoning in natural language understanding. Computing Research Repository, arXiv:1811.01778. T. H. Trinh and Q. V. Le. 2018. A Simple Method for Commonsense Reasoning. Computing Research repository, arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Computing Research Repository, arXiv:1706.03762. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of ICLR.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4843–4852 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4843 Coherent Comment Generation for Chinese Articles with a Graph-to-Sequence Model Wei Li1, Jingjing Xu1, Yancheng He2, Shengli Yan2, Yunfang Wu1, Xu Sun1,3 1MOE Key Lab of Computational Linguistics, School of EECS, Peking University 2Platform & Content Group, Tencent 3Deep Learning Lab, Beijing Institute of Big Data Research, Peking University {liweitj47, jingjingxu}@pku.edu.cn {collinhe, victoryyan}@tencent.com {wuyf, xusun}@pku.edu.cn Abstract Automatic article commenting is helpful in encouraging user engagement and interaction on online news platforms. However, the news documents are usually too long for traditional encoder-decoder based models, which often results in general and irrelevant comments. In this paper, we propose to generate comments with a graph-to-sequence model that models the input news as a topic interaction graph. By organizing the article into graph structure, our model can better understand the internal structure of the article and the connection between topics, which makes it better able to understand the story. We collect and release a large scale news-comment corpus from a popular Chinese online news platform Tencent Kuaibao.1 Extensive experiment results show that our model can generate much more coherent and informative comments compared with several strong baseline models.2 1 Introduction Online news platform is now a popular way for people to get information, where users also make comments or read comments made by others, making the comments very valuable resource to attract user attention and encourage interactions among users (Park et al., 2016). The ability to automatically generate comments is desirable for online news platforms, especially comments that can encourage user engagement and interactions, serving as one form of intelligent chatbot (Shum et al., 2018). Important as the comment generation task is, it is still relatively new. Qin et al. (2018) proposed the problem of automatic article comment generation, which is to generate comments 1https://kuaibao.qq.com/ 2Code for the paper is available at https://github.com/lancopku/ Graph-to-seq-comment-generation given the title and content of the article (An example is shown in Table 1). They only proposed the task, but did not propose a specially designed solution to the problem other than sequence-tosequence paradigm (Sutskever et al., 2014). Ma et al. (2018) proposed a retrieval based model that uses variational topic model to find comments that are related to the news in an unsupervised fashion. Lin et al. (2018) proposed to refer to the retrieved comments during generation, which is a combination of retrieval and generation based model. Pure generation based model remains challenging, yet is a more direct way to solve the problem. Additionally, when the article is very different from the historical ones, there may not be appropriate comments to refer to. In this work, we would like to explore a generation model that better exploits the news content to solve the problem. Different from the scenarios where sequenceto-sequence models achieve great success like machine translation (Bahdanau et al., 2014) and summarization (See et al., 2017), comment generation has several nontrivial challenges: • The news articles can be very long, which makes it intractable for classic sequence-tosequence models. 
On the contrary, although the title is a very important information resource, it can be too short to provide sufficient information. • The title of the news sometimes uses hyperbolic expressions that are semantically different from the content of the article. For example, the title shown in the example (Table 1) provides no valuable information other than “Marvel movie”, which is far from enough to generate coherent comments. • Users focus on different aspects (topics) of the news when making comments, which 4844 Title 这部影片被称为“十年来最搞笑漫威电影”,你 看了吗? Have you seen the movie intitled as “the most hilarious Marvel movie”? Content 点击“IPTV4K超高清”订阅,精彩内容等你共享 《复仇者联盟3:无限战争》中的巅峰一役,将 战火燃遍了整个宇宙...作为接档《复联3》的漫 威电影,《蚁人2》的故事爆笑中带着温情,无 疑成为了现阶段抚平漫威粉心中伤痛的一味良 药...看过《复联3》的漫威粉们,心中都有同一 个疑问:在几乎整个复仇者联盟都参与到无限 战争的关键时刻,蚁人究竟去哪儿了?... Click on the “IPTV4K ultra HD” to subscribe, fantastic contents are waiting for you to share. The battle in “Avengers: Infinity War” has spread the flames of war throughout the universe ... As the continuation Marvel movie to “Avengers 3”, the hilarious and warm “Ant-Man and the Wasp” is no doubt a good dose to heal the fans of Marvel at the time. ... Fans of the Marvel who have watched “Avengers 3” all have a doubt about where AntMan is when all other Avengers have been involved in the infinity war. Comment 只有我觉得那个头盔像蚁人的头盔吗? Am I the only one that thinks the helmet similar to the helmet of Ant-Man? Table 1: An example of news article comment generation task, which is to generate new comments given the title and content of the news. Because the article is too long, only the first sentence and three fragments with topic words (blue) are shown. Note that the title and the first sentence of the news are very different from traditional news, which can not summarize the content of the article. makes the content of the comments very diverse. For example, comments can be about the plots in “Avengers”, “Ant-Man” or other characters in Marvel movies. Based on the above observations, we propose a graph-to-sequence model that generates comments based on a graph constructed out of content of the article and the title. We propose to represent the long document as a topic interaction graph, which decomposes the text into several topic centered clusters of texts, each of which representing a key aspect (topic) of the article. Each cluster together with the topic form a vertex in the graph. The edges between vertices are calculated based on the semantic relation between the vertices. Compared with the hierarchical structure (Yang et al., 2016), which is designed for long articles, our graph based model is better able to understand the connection between different topics of the news. Our model jointly models the title and the content of the article by combining the title into the graph as a special vertex, which is helpful to get the main point of the article. We conduct extensive experiments on the news comments collected from Tencent Kuaibao news, which is a popular Chinese online news platform. We use three metrics consulting to Qin et al. (2018) to evaluate the generated comments. Experiment results show that our model can generate more coherent and informative comments compared with the baseline models. We conclude the contributions as follows: • We propose to represent the article with a topic interaction graph, which organizes the sentences of the article into several topic centered vertices. • We propose a graph-to-sequence model that generates comments based on the topic interaction graph. 
• We collect and release a large scale (200,000) article-comment corpus that contains title, content and the comments of the news articles. 2 Related Work The Graph Neural Networks (GNN) model has attracted growing attention recently, which is good at modeling graph structure data. GNN is not only applied in structural scenarios, where the data are naturally performed in graph structure, such as social network prediction systems (Hamilton et al., 2017; Kipf and Welling, 2016), recommender systems (van den Berg et al., 2017; Ying et al., 2018), and knowledge graphs (Hamaguchi et al., 2017), but also non-structural scenarios where the relational structure is not explicit including image classification (Kampffmeyer et al., 2018; Wang et al., 2018), text, etc. In this paper, we explore to use GNN to model non-structural article text. Some recent researches are devoted to applying GNN in the text classification task, which involves modeling long documents as graphs. Peng et al. (2018) proposed to convert a document into a word co-occurrence graph, which is then used as the input to the convolutional layers. Yao et al. (2018) proposed to organize the words and documents into one unified graph. Edges between words are calculated with point-wise mutual information (PMI), edges between word and document are calculated with TF-IDF. Then a spectral 4845 Algorithm 1 Graph Construction Require: The title title and article text D, weight calculation function λ 1: Segment title and D into words 2: Do named entity recognition and keyword detection and get the keywords κ 3: for sentence s do 4: if s contains k ∈κ then 5: Assign s to vertex vk 6: else 7: Assign s to vertex vempty 8: end if 9: end for 10: for vertex vi and vj do 11: Calculate edge weight: wij = λ(vi, vj) 12: end for based graph convolutional networks (GCN) is applied to classify the documents. Liu et al. (2018) proposed a siamese GCN model in the text matching task by modelling two documents into one interaction graph. Zhang et al. (2018) adopted a similar strategy but used GCN to match the article with a short query. These works are inspiring to our work, however, they are only designed for the classification task, which are different from generation tasks. There are also some previous work dedicated to use GNN in the generation tasks. Xu et al. (2018a,b) proposed to use graph based model to encode SQL queries in the SQL-to-Text task. Beck et al. (2018) and Song et al. (2018) proposed to solve the AMR-to-Text problem with graph neural networks. Zhao et al. (2018) proposed to facilitate neural machine translation by fusing the dependency between words into the traditional sequence-to-sequence framework. Although these work apply GNN as the encoder, they are meant to take advantage of the information that are already in the form of graph (SQL query, AMR graph, dependency graph) and the input text is relatively short, while our work tries to model long text documents as graphs, which is more challenging. 3 Graph-to-Sequence Model In this section, we introduce the proposed graphto-sequence model (shown in Figure 1). Our model follows the Encoder-Decoder framework. The encoder is bound to encode the article text presented as an interaction graph into a set of hidden vectors, based on which the decoder generates the comment sequence. 3.1 Graph Construction In this section, we introduce how to construct the topic interaction graph from a news article. Algorithm 1 shows the construction process. 
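For concreteness, a minimal Python sketch of Algorithm 1 is given below. It is not the authors' implementation: sentence segmentation, named-entity recognition, and keyword extraction are assumed to be provided by external tools (the paper uses Stanford CoreNLP and TextRank), and the generic weight function λ is instantiated with the structure-based choice described in Section 3.1, namely the number of sentences two vertices share.

```python
from collections import defaultdict

def build_topic_interaction_graph(title, sentences, keywords):
    """Illustrative rendering of Algorithm 1 (not the authors' code).

    title: the title sentence (string); sentences: article sentences (strings);
    keywords: topic words obtained from NER plus TextRank-style extraction.
    Returns vertices (topic -> sentences) and structure-based edge weights.
    """
    vertices = defaultdict(list)
    vertices["Title"].append(title)              # special "Title" vertex (Section 3.1)
    for sent in sentences:
        hits = [k for k in keywords if k in sent]  # the "s contains k" check
        if hits:
            for k in hits:                       # a sentence may join several vertices
                vertices[k].append(sent)
        else:
            vertices["Empty"].append(sent)       # v_empty for keyword-free sentences

    # lambda(v_i, v_j): number of sentences the two vertices share
    edges = {}
    topics = list(vertices)
    for i, vi in enumerate(topics):
        for vj in topics[i + 1:]:
            shared = len(set(vertices[vi]) & set(vertices[vj]))
            if shared:
                edges[(vi, vj)] = shared
    return vertices, edges
```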
Different from traditional news, the articles from online news platforms contain much noise. Many sentences of the articles are even irrelevant to the main topic of the news. For example, “谢谢大家点开这 篇文章” (Thanks for opening this article). Therefore, we extract the keywords of the article which serve as the topics of the news. These keywords are the most important words to understand the story of the article, most of which are named entities. Since keyword detection is not the main point of this paper, we do not go into the details of the extraction process. Given a news article D, we first do word segmentation and named entity recognition on the news articles with off-the-shelf tools such as Stanford CoreNLP.3 Since the named entities alone can be insufficient to cover the main focuses of the document, we further apply keyword extraction algorithms like TextRank (Mihalcea and Tarau, 2004) to obtain additional keywords. After we get the keywords κ of the news, we associate each sentence of the documents to its corresponding keywords. We adopt a simple strategy that assigns a sentence s to the keyword k if k appears in the sentence. Note that one sentence can be associated with multiple keywords, which implicitly indicates connection between the two topics. Sentences that do not contain any of the keywords are put into a special vertex called “Empty”. Because the title of the article is crucial to understand the news, we also add a special vertex called “Title” that contains the title sentence of the article. The sentences together with the keyword k they belong to form a vertex vk in the interaction graph. The words of the sentences are concatenated together. The words within each vertex represent one aspect of the article. There can be many ways to construct the edges between vertices denoted as λ in Algorithm 1. In this paper, we propose to adopt a structure based method. If vertices vi and vj share at least one sentence, we add an edge eij between them, the weight of which is calculated by the number of shared sentences. The intuition behind this design is that the more sentences comention two keywords together, the closer these 3https://stanfordnlp.github.io/CoreNLP 4846 k1 k2 k3 k4 s1 s3 s1 s4 s5 s1 s4 s4 Vertex Encoding Graph Encoding Title: Have you seen the movie intitled as ``the most hilarious Marvel movie? s1: Click on the ``IPTV4K ultra HD'' to subscribe. s2: fantastic contents are waiting for you to share . s3 s4 s5 s6 ... E Ve V1 V2 V3 V4 h1 h2 h3 h4 he s5 , s6 ... Graph Building T Title Vt ht Att Decoder I like Ant-Man. Figure 1: A brief illustration of our proposed graph-to-sequence model. A vertex in the interaction graph consists of a topic word ki and the sentences containing ki. If a sentence contains no topic word, it is archived to a special “Empty” vertex. Each vertex is first encoded into a hidden vector vi by the vertex encoder. Then the whole graph is fed into the graph encoder and get the final vertex representation hi encoded with structure information. A RNN decoder with attention mechanism is adopted to generate comment words. two keywords are. One can also use content based method such as tf-idf similarity between the content of vi and vj. 3.2 Vertex Encoder To encode each vertex in the graph into one hidden vector υ, we propose to use a multi-head selfattention (Vaswani et al., 2017) based vertex encoder. The vertex encoder consists of two modules, the first one is an embedding module, the second one is a self-attention module. 
For the i-th word wi in the word sequence, we first look up the word embedding of the words ei. Note that the keywords and regular words in the article share the same embedding table. By “regular words” we mean words other than keywords. To represent the position information of each word, a positional embedding pi is added to the word. The keyword k of the vertex is put in the front of the word sequence. Therefore, the positional embedding of all the inserted keywords share the same embedding p0, which indicates the special role of the keyword. Both the word embedding and positional embedding are set to be learn-able vectors. The final embedding ϵi of word wi is the sum of the original word embedding ei and positional embedding pi, ϵi = ei + pi Then we feed ϵi to the self-attention module and get the hidden vector ai of each word. This module is to model the interaction between the words so that each hidden vector in this layer contains the context information of the vertex. The self-attention module contains multiple layers of multi-head self-attention. The hidden vector of each layer is calculated by Equation (1)-(3), where Q, K, V represent query vector, key vector and value vectors respectively. In our case, Q, K, V all represent the same vectors. For the first layer, they are ϵ. For the following layers, they are the hidden vectors calculated by the previous layer. W o, W Q i , W K i , W V i are all learnable matrices, Attention(Q, K, V ) =softmax(QKT )V (1) MultiHead(Q, K, V ) =[head1; · · · ; headh]W o (2) headi = Attention(QW Q i , KW K i , V W V i ) (3) Since the keyword k is the most important information in the vertex, we use the hidden vector of the inserted keyword a0 in the last layer as the vector that represents the whole vertex. 3.3 Graph Encoder After we get the hidden vector of each vertex vi in the graph, we feed them to a graph encoder to make use of the graph structure of the constructed topic interaction graph. We propose to use spectral based graph convolutional model (GCN). Spectral approaches work with a spectral representation of the graphs (Zhou et al., 2018). We choose this architecture because GCN can both model the content of the vertex and make use of the structure information of the graph. We use an implementation of GCN model similar to the work of Kipf and Welling (2016). Denote the adjacency matrix of the interaction graph as A ∈RN×N, where Aij = wij (defined in Section 3.1). We add an edge that points to the node itself (Equation 5). D is a diagonal matrix where ˜Dii = P j ˜Aij, Hl+1 = σ( ˜D−1 2 ˜A ˜D−1 2 HlW l) (4) ˜A = A + IN (5) 4847 where IN is the identity matrix, ˜D−1 2 ˜A ˜D is the normalized symmetric adjacency matrix, W l is a learnable weight matrix. To avoid the oversmoothing problem of GCN, we add residual connections between layers, gl+1 = Hl+1 + Hl (6) gout = tanh(WogK) (7) We add one feed forward layer to the final output of the GCN. gK is the output of the last layer of GCN. Since the title of the news is still an important information, we use the hidden output of the title vertex of the graph encoder as the initial state t0 of the decoder. One can also use other pooling method such as max pooling or mean pooling. 3.4 Decoder For the decoder, we adopt the recurrent neural network (RNN) decoder with attention mechanism (Bahdanau et al., 2014). Given the initial state t0 and the output of the GCN ⟨g0, g1, · · · , gn⟩, the decoder is bound to generate a sequence of comment tokens y1, y2, · · · , ym. 
At each decoding step, a context vector ci is calculated by doing attention on the outputs of the GCN, ti = RNN(ti−1, ei−1) (8) ci = X αj × gj (9) αj = exp(δ(ti, gj) P exp(δ(ti, gk)) (10) where δ is the attention function. Since the topic words (name of the vertices) κ are important information for the article and may appear in the comment, we adopt copy mechanism (Gu et al., 2016) by merging the predicted word token probability distribution with the attention distribution. The probability pcopy of copying from the topic words is dynamically calculated with the decoding hidden state ti and the context vector ci, yi = softmax(Wo(tanh(W([ti; ci]) + b))) (11) pcopy = σ(Wcopy[ti; ci]) (12) p = (1 −pcopy) × y + pcopy × α (13) where Wo, W, Wcopy, b are all learnable parameters. Topic document # comment # Entertainment 116,138 287,889 Sport 90,979 378,677 Table 2: Document and comment number of Entertainment and Sport. ave word # ave character # Ent Sport Ent Sport content 456.1 506.6 754.0 858.7 title 16.4 15.7 28.1 27.4 comment 16.3 19.4 26.2 31.2 keyword 8.4 9.0 Table 3: Length of content, title, comment and keyword of the news for the topic of Ent (entertainment) and Sport. 4 Experiment 4.1 Corpus We collect news and comments from Tencent Kuaibao,4 which is a popular online news platform in Chinese. Because the number of news is very large and the comments vary a lot between different topics of news, we select the news from two most popular topics (topics that have the most news and comments) Entertainment and Sport. The data is available at https://pan.baidu. com/s/1b5zAe7qqUBmuHz6nTU95UA5. The document number and comment number of the two topics are listed in Table 2. The average length with respect to words and characters of content, title, comment and keyword for the two topics are listed in Table 3. From the number we can see that the length of news content is too large for traditional sequence-to-sequence model. 4.2 Experiment Settings We use a batch size of 32. The embedding size is set to 128. The word embeddings are shared between encoder and decoder. Because the vertex number (keyword number in Table 3) is relatively small, to ease the over-smoothing problem we use 1-layer convolution in GCN. For all the RNN based encoders, we use bidirectional LSTM and set the hidden size to 128. For the baseline hierarchical attention model, the hidden size of the second LSTM layer is 256. We use a vocabulary 4https://kuaibao.qq.com/ 5The extraction code is 6xdw 4848 size of 60,000. The sentences are truncated to 100 words. The maximum length for generating is set to 32. For multi-head attention, we use 4 heads. For RNN encoder, RNN decoder and multi-layer self-attention, we use a layer number of 2. We use a dropout rate of 0.1. We use Adam optimizer (Kingma and Ba, 2014) to train the parameters. The initial learning rate is set to 0.0005. For all the models, we train for 5 epochs, the learning rate is decayed to half after each epoch. 4.3 Evaluation Metrics We choose three metrics to evaluate the quality of generated comments. For all the metrics, we ask the raters to score the comments with three gears, the scores are then projected to 0 ∼10. • Coherence: This metric evaluates how Coherent (consistent) is the comment to the news document. It measures whether the comment is about the main story of the news, one side part of the news, or irrelevant to the news. • Informativeness: This metric evaluates how much concrete information the comment contains. 
It measures whether the comment involves a specific aspect of some character or event, or is a general description of some character or event, or is a general comment that can be the answer to many news. • Fluency: This metric evaluates whether the sentence is fluent. It mainly measures whether the sentence follows the grammar and whether the sentence accords with the logic including world knowledge. We ask three raters to evaluate the generated comments of different models. Owing to the laborious evaluation process (reading the long news document is time consuming), we ask the raters to evaluate the generated comments from one hundred news documents of both topics. The raters are given both the title and the document content of the news which is the same as how a user would read the news online. We use spearman’s rank score to measure the correlation between raters.The p-values are all below 1e −50. The ratings between raters have relatively good correlation with spearman’s rank of around 0.6. Among the metrics, fluency is more divergent. This is expected as this metric is more flexible, different people may have more divided opinion. 4.4 Baseline Models In this section, we describe the baseline models we use. The settings of these models are described in Section 4.2. Note that for fair comparison, all the baselines use RNN with attention as the decoder, the choice of the encoder is dependent on the input of the model (whether the input is in order or not). • Seq2seq (Qin et al., 2018): this model follows the framework of sequence-to-sequence model with attention. We use three kinds of input, the title (T), the content (C) and the title together with the content (TC). The length of the input sequence is truncated to 100. For the input of title together with content, we append the content to the back of the title. • Self-attention (Chen et al., 2018): this model follows the encoder-decoder framework. We use multi-layer self-attention with multi-head as the encoder, and a RNN decoder with attention is applied. We use two kinds of input, the bag of words (B) and the keywords (K). Since the input is not sequential, positional encoding is not applied. A special ‘CLS’ label is inserted, the hidden vector of which serves as the initial state of decoder. For the bag of words input we use the words with top 100 term frequency (TF) in the news document. For the keywords input, we use the same extracted keywords (topic words) with the ones used in our topic interaction graph. • Hierarchical-Attention (Yang et al., 2016): this model takes all the content sentences as input and applies hierarchical attention as the encoder to get the sentence vectors and document vector. A RNN decoder with attention is applied. The document vector is used as the initial state for RNN decoder. 4.5 Results In Table 4 and Table 5, we show the results of different baseline models and our graph2seq model for the topic of entertainment and sport separately. From the results we can see that our proposed graph2seq model beats all the baselines in both coherence and informativeness. Coherence: Our model receives much higher scores in coherence compared with all other baseline models. 
This indicates that our graph based 4849 Models Coherence Informativeness Fluency Total seq2seq-T (Qin et al., 2018) 5.38 3.70 8.22 5.77 seq2seq-C (Qin et al., 2018) 4.87 3.72 8.53 5.71 seq2seq-TC (Qin et al., 2018) 3.28 4.02 8.68 5.33 self-attention-B (Chen et al., 2018) 6.72 5.05 8.27 6.68 self-attention-K (Chen et al., 2018) 6.62 4.73 8.28 6.54 hierarchical-attention (Yang et al., 2016) 1.38 2.97 8.65 4.33 graph2seq (proposed) 8.23 5.27 8.08 7.19 Table 4: Comparison between our graph2seq model and baseline models for the topic of entertainment. T, C, B, K represents title, content, bag of words, keywords separately. Total is the average of other three metrics Models Coherence Informativeness Fluency Total seq2seq-T (Qin et al., 2018) 4.30 4.38 6.27 4.98 seq2seq-C (Qin et al., 2018) 3.88 3.85 6.02 4.58 seq2seq-TC (Qin et al., 2018) 4.70 5.08 6.37 5.38 self-attention-B (Chen et al., 2018) 5.15 5.62 6.28 5.68 self-attention-K (Chen et al., 2018) 6.68 5.83 7.00 6.50 hierarchical-attention (Yang et al., 2016) 4.43 5.05 6.02 5.17 graph2seq (proposed) 7.97 6.18 6.37 6.84 Table 5: Comparison between our graph2seq model and baseline models for the topic of sport. T, C, B, K represents title, content, bag of words, keywords separately. Total is the average of other three metrics model can better get the main point of the article instead of referring to the high frequency terms that are only slightly related or even irrelevant to the article, which is often carried out by baseline models (especially seq2seq based models). Besides, other baseline models tend to generate general comments such as “I still think I like him” when encountering low frequency topics (similar to the dull response problem in dialogue). These two phenomena hurt the coherence performance severely. Compared with other baselines, selfattention based models receive higher coherence score, we assume that this is because the most relevant words are maintained by the bag of words and keywords input. However, it is hard to distinguish the main point of the article from all other input words with self-attention model. Therefore, they do not perform as well as our graph based model, which can make use of the structure of the article. For the hierarchical attention model, although it uses a hierarchical structure to organize the article, it is still very difficult for the model to understand the story. In fact, we observe in the experiment that the hierarchical structure even makes it harder to extract useful information because of the oversimplified attention performed in the word level. Informativeness: For the metric of informativeness, our graph2seq model can generate comments with the most information because it can capture the plot of the article. We observe that this metric is related to the metric of coherence. Models with higher coherence score tend to be more informative. This phenomenon is related to the fact that many of the comments with low informative scores are general comments which are naturally not coherent to the news. In Figure 2 we show the number of generated general comments and number of generated unique words for both topics. By “general comment”, we mean those comments that have no specific information, irrelevant to the news and can be the comment to many other news of different stories, e.g., “I still think I like him”. Note that the notion of general comment is not strictly defined, but an information that is meant to help analyze informativeness score. 
The unique words are those not in a pre-defined stop word list. From the figure we can see that the number of general comments is loosely negatively correlated to the informative score, especially in entertainment topic. The number of generated unique words can also be an indicator for the informativeness of the comments, because the more words are involved in the comment, the more information the comment is able to provide. Fluency: Our model receives comparable fluency score in the experiments, we assume that this is be4850 Title 被王丽坤美到了,《上新了·故宫》里穿古装温婉又娴静,气质惊艳 In “updates of the Palace Museum” Likun Wang appears so gentle, refined and astonishingly elegant wearing ancient costume that audiences are touched by her beauty. S2S-T 我觉得还是喜欢看的古装,古装扮相,古装扮相很好看 I still think I like ancient costume, appearance in ancient costume, appearance in ancient costume is pretty. S2S-C 我觉得还是喜欢看的 I still think I like to watch S2S-TC 我觉得还是喜欢看的 I still think I like to watch SA-B 我觉得赵丽颖的演技真的很好 I think the acting skill of Liying Zhao is very good SA-K 我觉得还是喜欢李沁 I still think I like Qin Li HA 我觉得还是喜欢看她的剧 I still think I like her plays graph2seq 王丽坤的演技真的好 The acting skill of Likun Wang is really good Table 6: An example of comments generated by different models. Title is the original title of the article. S2S, SA, HA indicate seq2seq, self-attention and hierarchical attention respectively. T, C, B, K represents title, content, bag of words, keywords separately. title content TC bow keyword HA graph 0 10 20 30 40 50 Entertainment Sport content title TC bow keyword HA graph 0 100 200 300 400 500 Entertainment Sport Figure 2: Number of generated general comments (Left, the lower the better) and number of unique words (Right, the higher the better) in the generated comments by different models. The comments from a total number of 100 news articles are inspected. cause of the similar structure of decoder between different models. After inspecting a part of the generated comments, we observe that the following reasons may lead to low fluency cases. (1) The generated comment is against the world knowledge, for instance, “The big feast is a good actor ( 大餐是个好演员)”. (2) The model can not distinguish between similar characters, for instance, “Who is Han Lu? I only know Han Lu (鹿晗是谁?我只认识鹿晗) ”. (3) The model sometimes repeatedly generates the same names. We assume that this is because repeated pattern appears in some of the real comments and the copy mechanism sometimes makes the problem more severe. These phenomena are actually observed in comments generated by various models, problems such as the deficiency of understanding world knowledge are actually very hard to solve, which are beyond the discussion of this paper. 4.6 Case Study In Table 6 we show an example of comments generated by different models. For the seq2seq-T (S2S-T) model (Qin et al., 2018), the comment is generated mainly based on the clue “ancient costume” in the title. However, because “ancient costume” is not frequently seen in the comments (in the training set). The pattern of generating comments about “ancient costume” is not well learned by the model, which makes the language of the comment not fluent. The comment generated by the seq2seq-C (S2SC) model is a typical general comment, which includes no specific information. This happens when the input to the model does not contain obvious signals that indicates what topic the comment should be about. 
Despite the fact that these comments are not what we desire, these comments get good fluency scores, which explains why the fluency scores of some of the baselines exceed our model’s. The comment made by hierarchical attention model (HA) suffers from the same problem with seq2seq model. We assume that this is because even with the hierarchical structure, this model can not understand the long input well. Therefore, it can not extract the main point of the story and generate general comments. The comments made by self-attention based models (SA) are generally more informative, which contain more specific plots or characters. Even though the input to these models are not in order, the combination of the keywords makes the model easier to associate the input with some learned pattern. However, this way of representing the article is incapable of getting the main point of the article. The main characters in the generated comments “ 赵丽颖” and “ 李沁” (names of Chi4851 nese actresses) are not much related to the news. The comment generated by our proposed graph2seq model is the only model that mentions the main character of the news “王丽坤” (name of the Chinese actress), which accords with the expectation of the design of our graph based model. 5 Conclusion In this paper, we propose to automatically generate comment of articles with a graph-to-sequence model that organizes the article into a topic interaction graph. Our model can better understand the structure of the article, thus capturing the main point of the article. Experiment results show that our model can generate more coherent and informative comments. We observe that there are still some comments conflicting with the world knowledge. In the future, we would like to explore how to introduce external knowledge into the graph to make the generated comments more logical. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. This work was supported in part by National Natural Science Foundation of China (No. 61673028). Xu Sun is the corresponding author of this paper. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. arXiv preprint arXiv:1806.09835. Rianne van den Berg, Thomas N Kipf, and Max Welling. 2017. Graph convolutional matrix completion. stat, 1050:7. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. CoRR, abs/1804.09849. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393. Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. arXiv preprint arXiv:1706.05674. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 1024–1034. Curran Associates, Inc. Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang, and Eric P Xing. 2018. 
Rethinking knowledge graph propagation for zero-shot learning. arXiv preprint arXiv:1805.11724. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2018. Learning comment generation by leveraging user-generated data. arXiv preprint arXiv:1810.12264. Bang Liu, Ting Zhang, Di Niu, Jinghong Lin, Kunfeng Lai, and Yu Xu. 2018. Matching long text documents via graph convolutional networks. CoRR, abs/1802.07459. Shuming Ma, Lei Cui, Furu Wei, and Xu Sun. 2018. Unsupervised machine commenting with neural variational topic model. arXiv preprint arXiv:1809.04960. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. Deokgun Park, Simranjit Sachar, Nicholas Diakopoulos, and Niklas Elmqvist. 2016. Supporting comment moderators in identifying high quality online news comments. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI ’16, pages 1114–1125, New York, NY, USA. ACM. Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1063–1072. International World Wide Web Conferences Steering Committee. Lianhui Qin, Lemao Liu, Wei Bi, Yan Wang, Xiaojiang Liu, Zhiting Hu, Hai Zhao, and Shuming Shi. 2018. Automatic article commenting: the task and dataset. arXiv preprint arXiv:1805.03668. 4852 Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Heung-Yeung Shum, Xiao-dong He, and Di Li. 2018. From eliza to xiaoice: challenges and opportunities with social chatbots. Frontiers of Information Technology &amp; Electronic Engineering, 19(1):10– 26. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amr-to-text generation. arXiv preprint arXiv:1805.02473. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xiaolong Wang, Yufei Ye, and Abhinav Gupta. 2018. Zero-shot recognition via semantic embeddings and knowledge graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6857–6866. Kun Xu, Lingfei Wu, Zhiguo Wang, and Vadim Sheinin. 2018a. Graph2seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823. Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018b. Sql-to-text generation with graph-to-sequence model. arXiv preprint arXiv:1809.05255. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Liang Yao, Chengsheng Mao, and Yuan Luo. 2018. Graph convolutional networks for text classification. arXiv preprint arXiv:1809.05679. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. arXiv preprint arXiv:1806.01973. Ting Zhang, Bang Liu, Di Niu, Kunfeng Lai, and Yu Xu. 2018. Multiresolution graph attention networks for relevance matching. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, pages 933–942. Guoshuai Zhao, Jun Li, Lu Wang, Xueming Qian, and Yun Fu. 2018. Graphseq2seq: Graph-sequence-tosequence for neural machine translation. Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2018. Graph neural networks: A review of methods and applications. CoRR, abs/1812.08434.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 504–515 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 504 An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis Ruidan He†‡, Wee Sun Lee†, Hwee Tou Ng†, and Daniel Dahlmeier‡ †Department of Computer Science, National University of Singapore ‡SAP Innovation Center Singapore †{ruidanhe,leews,nght}@comp.nus.edu.sg ‡[email protected] Abstract Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence. This task is usually done in a pipeline manner, with aspect term extraction performed first, followed by sentiment predictions toward the extracted aspect terms. While easier to develop, such an approach does not fully exploit joint information from the two subtasks and does not use all available sources of training information that might be helpful, such as document-level labeled sentiment corpus. In this paper, we propose an interactive multi-task learning network (IMN) which is able to jointly learn multiple related tasks simultaneously at both the token level as well as the document level. Unlike conventional multi-task learning methods that rely on learning common features for the different tasks, IMN introduces a message passing architecture where information is iteratively passed to different tasks through a shared set of latent variables. Experimental results demonstrate superior performance of the proposed method against multiple baselines on three benchmark datasets. 1 Introduction Aspect-based sentiment analysis (ABSA) aims to determine people’s attitude towards specific aspects in a review. This is done by extracting explicit aspect mentions, referred to as aspect term extraction (AE), and detecting the sentiment orientation towards each extracted aspect term, referred to as aspect-level sentiment classification (AS). For example, in the sentence “Great food but the service is dreadful”, the aspect terms are “food” and “service”, and the sentiment orientations towards them are positive and negative respectively. In previous works, AE and AS are typically treated separately and the overall task is performed in a pipeline manner, which may not fully exploit the joint information between the two tasks. Recently, two studies (Wang et al., 2018; Li et al., 2019) have shown that integrated models can achieve comparable results to pipeline methods. Both works formulate the problem as a single sequence labeling task with a unified tagging scheme1. However, in their methods, the two tasks are only linked through unified tags, while the correlation between them is not explicitly modeled. Furthermore, the methods only learn from aspect-level instances, the size of which is usually small, and do not exploit available information from other sources such as related documentlevel labeled sentiment corpora, which contain useful sentiment-related linguistic knowledge and are much easier to obtain in practice. In this work, we propose an interactive multitask learning network (IMN), which solves both tasks simultaneously, enabling the interactions between both tasks to be better exploited. Furthermore, IMN allows AE and AS to be trained together with related document-level tasks, exploiting the knowledge from larger document-level corpora. IMN introduces a novel message passing mechanism that allows informative interactions between tasks. 
Specifically, it sends useful information from different tasks back to a shared latent representation. The information is then combined with the shared latent representation and made available to all tasks for further processing. This operation is performed iteratively, allowing the information to be modified and propagated across multiple links as the number of iterations increases. In contrast to most multi-task learning schemes which share information through learning 1{B, I}-{POS, NEG, NEU} denotes the beginning and inside of an aspect-term with positive, negative, or neutral sentiment, respectively, and O denotes background words. 505 a common feature representation, IMN not only allows shared features, but also explicitly models the interactions between tasks through the message passing mechanism, allowing different tasks to better influence each other. In addition, IMN allows fined-grained tokenlevel classification tasks to be trained together with document-level classification tasks. We incorporated two document-level classification tasks – sentiment classification (DS) and domain classification (DD) – to be jointly trained with AE and AS, allowing the aspect-level tasks to benefit from document-level information. In our experiments, we show that the proposed method is able to outperform multiple pipeline and integrated baselines on three benchmark datasets2. 2 Related Work Aspect-Based Sentiment Analysis. Existing approaches typically decompose ABSA into two subtasks, and solve them in a pipeline setting. Both AE (Qiu et al., 2011; Yin et al., 2016; Wang et al., 2016a, 2017; Li and Lam, 2017; He et al., 2017; Li et al., 2018b; Angelidis and Lapata, 2018) and AS (Dong et al., 2014; Nguyen and Shirai, 2015; Vo and Zhang, 2015; Tang et al., 2016a; Wang et al., 2016b; Zhang et al., 2016; Liu and Zhang, 2017; Chen et al., 2017; Cheng et al., 2017; Tay et al., 2018; Ma et al., 2018; He et al., 2018a,b; Li et al., 2018a) have been extensively studied in the literature. However, treating each task independently has several disadvantages. In a pipeline setting, errors from the first step tend to be propagated to the second step, leading to a poorer overall performance. In addition, this approach is unable to exploit the commonalities and associations between tasks, which may help reduce the amount of training data required to train both tasks. Some previous works have attempted to develop integrated solutions. Zhang et al. (2015) proposed to model the problem as a sequence labeling task with a unified tagging scheme. However, their results were discouraging. Recently, two works (Wang et al., 2018; Li et al., 2019) have shown some promising results in this direction with more sophisticated network structures. However, in their models, the two subtasks are still only linked through a unified tagging scheme, while the interactions between them are not explicitly mod2Our source code can be obtained from https:// github.com/ruidan/IMN-E2E-ABSA eled. To address this issue, a better network structure allowing further task interactions is needed. Multi-Task Learning. One straightforward approach to perform AE and AS simultaneously is multi-task learning, where one conventional framework is to employ a shared network and two task-specific network to derive a shared feature space and two task-specific feature spaces. Multitask learning frameworks have been employed successfully in various natural language processing (NLP) tasks (Collobert and Weston, 2008; Luong et al., 2015a; Liu et al., 2016). 
By learning semantically related tasks in parallel using a shared representation, multi-task learning can capture the correlations between tasks and improve the model's generalization ability in certain cases. For ABSA, He et al. (2018b) have shown that aspect-level sentiment classification can be significantly improved through joint training with document-level sentiment classification. However, conventional multi-task learning still does not explicitly model the interactions between tasks – the two tasks only interact with each other through error back-propagation to contribute to the learned features, and such implicit interactions are not controllable. Unlike existing methods, our proposed IMN not only allows the representations to be shared, but also explicitly models the interactions between tasks by using an iterative message passing scheme. The propagated information contributes to both learning and inference to boost the overall performance of ABSA.

Message Passing Architectures. Networked representations for message passing graphical model inference algorithms have been studied in computer vision (Arnab et al., 2018) and NLP (Gormley et al., 2015). Modeling the execution of these message passing algorithms as a network results in recurrent neural network architectures. We similarly propagate information in a network and learn the update operators, but our architecture is designed for solving multi-task learning problems. Our algorithm can similarly be viewed as a recurrent neural network, since each iteration uses the same network to update the shared latent variables.

3 Proposed Method

The IMN architecture is shown in Figure 1.
Figure 1: The overall architecture of IMN. (Figure labels: Embedding Layer; message-passing mechanism; opinion transmission; AE: aspect term and opinion term co-extraction; AS: aspect-level sentiment classification; DS: document-level sentiment classification; DD: document-level domain classification.)
It accepts a sequence of tokens {x_1, ..., x_n} as input into a feature extraction component f_θs that is shared among all tasks. This component consists of a word embedding layer followed by a few feature extraction layers. Specifically, we employ m_s layers of CNNs after the word embedding layer in f_θs. The output of f_θs is a sequence of latent vectors {h^s_1, h^s_2, ..., h^s_n} shared among all the tasks. After initialization by f_θs, this sequence of latent vectors is later updated by combining information propagated from different task components through message passing. We denote h^{s(t)}_i as the value of the shared latent vector corresponding to x_i after t rounds of message passing, with h^{s(0)}_i denoting the value after initialization. The sequence of shared latent vectors {h^s_1, h^s_2, ..., h^s_n} (we omit the iteration superscript t in the description for simplicity) is used as input to the different task-specific components. Each task-specific component has its own sets of latent and output variables. The output variables correspond to a label sequence in a sequence tagging task; in AE, we assign to each token a label indicating whether it belongs to any aspect or opinion term (e.g., “great” and “dreadful” in “Great food but the service is dreadful” are the opinion terms), while in AS, we label each word with its sentiment. In a classification task, the output corresponds to the label of the input instance: the sentiment of the document for the sentiment classification task (DS), and the domain of the document for the domain classification task (DD).
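To make the data flow through the shared component f_θs described above concrete, the following PyTorch sketch implements it as an embedding layer followed by a stack of m_s CNN layers that keep one latent vector per token. The vocabulary size, dimensions, and single kernel size are illustrative placeholders rather than the authors' actual configuration, so treat this as a minimal sketch, not the released implementation.

```python
import torch
import torch.nn as nn

class SharedFeatureExtractor(nn.Module):
    """Shared component f_theta_s: word embeddings followed by m_s CNN layers.

    Hyper-parameters (vocab size, dimensions, kernel size) are illustrative
    placeholders, not values taken from the paper.
    """

    def __init__(self, vocab_size=10000, emb_dim=400, hidden_dim=256,
                 num_layers=2, kernel_size=5, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.dropout = nn.Dropout(dropout)
        convs = []
        in_dim = emb_dim
        for _ in range(num_layers):
            # padding keeps the sequence length unchanged so every token x_i
            # keeps an aligned latent vector h^s_i
            convs.append(nn.Conv1d(in_dim, hidden_dim, kernel_size,
                                   padding=kernel_size // 2))
            in_dim = hidden_dim
        self.convs = nn.ModuleList(convs)

    def forward(self, token_ids):                    # token_ids: (batch, n)
        x = self.dropout(self.embedding(token_ids))  # (batch, n, emb_dim)
        h = x.transpose(1, 2)                        # Conv1d wants (batch, dim, n)
        for conv in self.convs:
            h = torch.relu(conv(h))
        return h.transpose(1, 2)                     # (batch, n, hidden_dim) = {h^s_i}

# The shared vectors are then consumed by the AE/AS/DS/DD components and
# re-encoded after each message passing iteration.
shared = SharedFeatureExtractor()
tokens = torch.randint(0, 10000, (2, 15))            # a toy batch of 2 sentences
print(shared(tokens).shape)                          # torch.Size([2, 15, 256])
```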
At each iteration, appropriate information is passed back to the shared latent vectors to be combined; this could be the values of the output variables or the latent variables, depending on the task. In addition, we also allow messages to be passed between the components in each iteration. Specifically for this problem, we send information from the AE task to the AS task as shown in Figure 1. After T iterations of message passing, which allows information to be propagated through multiple hops, we use the values of the output variables as predictions. For this problem, we only use the outputs for AE and AS during inference as these are the end-tasks, while the other tasks are only used for training. We now describe each component and how it is used in learning and inference.

3.1 Aspect-Level Tasks

AE aims to extract all the aspect and opinion terms appearing in a sentence (note that we are actually performing aspect and opinion term co-extraction; we still denote this task as AE for simplicity. We believe ABSA is more complete with opinion terms also extracted, and the information learned from opinion term extraction could be useful for the other tasks). This is formulated as a sequence tagging problem with the BIO tagging scheme. Specifically, we use five class labels: Y^{ae} = {BA, IA, BP, IP, O}, indicating the beginning of and inside of an aspect term, the beginning of and inside of an opinion term, and other words, respectively. We also formulate AS as a sequence tagging problem with labels Y^{as} = {pos, neg, neu}, indicating the token-level positive, negative, and neutral sentiment orientations. Table 1 shows an example of an aspect-level training instance with gold AE and AS labels. In aspect-level datasets, only aspect terms get sentiment annotated. Thus, when modeling AS as a sequence tagging problem, we label each token that is part of an aspect term with the sentiment label of the corresponding aspect term. For example, as shown in Table 1, we label “fish” as pos, and label “variety”, “of”, “fish” as neg, based on the gold sentiment labels of the two aspect terms “fish” and “variety of fish” respectively. Since other tokens do not have AS gold labels, we ignore the predictions on them when computing the training loss for AS.

Table 1: An aspect-level training instance with gold AE and AS labels.
Input: The fish is fresh but the variety of fish is nothing out of ordinary .
AE:    O BA O BP O O BA IA IA O O O O BP O
AS:    pos (on “fish”); neg, neg, neg (on “variety”, “of”, “fish”); all other tokens unlabeled

The AE component f_{θae} is parameterized by θ_{ae} and outputs {ŷ^{ae}_1, ..., ŷ^{ae}_n}. The AS component f_{θas} is parameterized by θ_{as} and outputs {ŷ^{as}_1, ..., ŷ^{as}_n}. The AE and AS encoders consist of m_{ae} and m_{as} layers of CNNs respectively, and they map the shared representations to {h^{ae}_1, h^{ae}_2, ..., h^{ae}_n} and {h^{as}_1, h^{as}_2, ..., h^{as}_n} respectively.
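The following short sketch (not the authors' preprocessing code) shows how the gold AE and AS tag sequences in Table 1 can be built from span-level annotations; the token-index span format is an assumption made for illustration, and AS positions left as None are the ones ignored when computing the AS loss.

```python
def build_tags(tokens, aspects, opinions):
    """aspects: list of (start, end, sentiment); opinions: list of (start, end).
    Spans are half-open token-index ranges (an illustrative convention)."""
    ae = ["O"] * len(tokens)
    as_ = [None] * len(tokens)          # None = ignored by the AS training loss
    for start, end, sentiment in aspects:
        ae[start] = "BA"
        for i in range(start + 1, end):
            ae[i] = "IA"
        for i in range(start, end):     # every aspect token inherits the sentiment
            as_[i] = sentiment
    for start, end in opinions:
        ae[start] = "BP"
        for i in range(start + 1, end):
            ae[i] = "IP"
    return ae, as_

tokens = ("The fish is fresh but the variety of fish "
          "is nothing out of ordinary .").split()
aspects = [(1, 2, "pos"), (6, 9, "neg")]    # "fish", "variety of fish"
opinions = [(3, 4), (13, 14)]               # "fresh", "ordinary"
ae_tags, as_tags = build_tags(tokens, aspects, opinions)
print(list(zip(tokens, ae_tags, as_tags)))  # reproduces the tags in Table 1
```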
For the AS encoder, we employ an additional self-attention layer on top of the stacked CNNs. As shown in Figure 1, we make ŷ^{ae}_i, the outputs from AE, available to AS in the self-attention layer, as the sentiment task could benefit from knowing the predictions of opinion terms. Specifically, the self-attention matrix A ∈ R^{n×n} is computed as follows:

score_{ij} = (h^{as}_i W^{as} (h^{as}_j)^T) · (1 / |i − j|) · P^{op}_j,  for i ≠ j   (1)

A_{ij} = exp(score_{ij}) / Σ^n_{k=1} exp(score_{ik}),  for i ≠ j   (2)

where the first term in Eq. (1) indicates the semantic relevance between h^{as}_i and h^{as}_j with parameter matrix W^{as}, the second term is a distance-relevant factor, which decreases with increasing distance between the i-th token and the j-th token, and the third term P^{op}_j denotes the predicted probability that the j-th token is part of any opinion term. The probability P^{op}_j can be computed by summing the predicted probabilities on the opinion-related labels BP and IP in ŷ^{ae}_j. In this way, AS is directly influenced by the predictions of AE. We set the diagonal elements in A to zeros, as we only consider context words for inferring the sentiment of the target token. The self-attention layer outputs h′^{as}_i = Σ^n_{j=1} A_{ij} h^{as}_j. In AE, we concatenate the word embedding, the initial shared representation h^{s(0)}_i, and the task-specific representation h^{ae}_i as the final representation of the i-th token. In AS, we concatenate h^{s(0)}_i and h′^{as}_i as the final representation. For each task, we employ a fully-connected layer with softmax activation as the decoder, which maps the final token representation to the probability distribution ŷ^{ae}_i (ŷ^{as}_i).

3.2 Document-Level Tasks

To address the issue of insufficient aspect-level training data, IMN is able to exploit knowledge from document-level labeled sentiment corpora, which are more readily available. We introduce two document-level classification tasks to be jointly trained with AE and AS. One is document-level sentiment classification (DS), which predicts the sentiment towards an input document. The other is document-level domain classification (DD), which predicts the domain label of an input document. As shown in Figure 1, the task-specific operation f_{θo} consists of m_o layers of CNNs that map the shared representations {h^s_1, ..., h^s_n} to {h^o_1, ..., h^o_n}, an attention layer att_o, and a decoding layer dec_o, where o ∈ {ds, dd} is the task symbol. The attention weight is computed as:

a^o_i = exp(h^o_i W^o) / Σ^n_{k=1} exp(h^o_k W^o)   (3)

where W^o is a parameter vector. The final document representation is computed as h^o = Σ^n_{i=1} a^o_i h^o_i. We employ a fully-connected layer with softmax activation as the decoding layer, which maps h^o to ŷ^o.

3.3 Message Passing Mechanism

To exploit interactions between different tasks, the message passing mechanism aggregates predictions of different tasks from the previous iteration, and uses this knowledge to update the shared latent vectors {h^s_1, ..., h^s_n} at the current iteration. Specifically, the message passing mechanism integrates knowledge from ŷ^{ae}_i, ŷ^{as}_i, ŷ^{ds}, a^{ds}_i, and a^{dd}_i computed on an input {x_1, ..., x_n}, and the shared hidden vector h^s_i is updated as follows:

h^{s(t)}_i = f_{θre}([h^{s(t−1)}_i : ŷ^{ae(t−1)}_i : ŷ^{as(t−1)}_i : ŷ^{ds(t−1)} : a^{ds(t−1)}_i : a^{dd(t−1)}_i])   (4)

where t > 0 and [:] denotes the concatenation operation. We employ a fully-connected layer with ReLU activation as the re-encoding function f_{θre}. To update the shared representations, we incorporate ŷ^{ae(t−1)}_i and ŷ^{as(t−1)}_i, the outputs of AE and AS from the previous iteration, such that this information is available to both tasks in the current round of computation. We also incorporate information from DS and DD. ŷ^{ds} indicates the overall sentiment of the input sequence, which could be helpful for AS. The attention weights a^{ds}_i and a^{dd}_i generated by DS and DD respectively reflect how sentiment-relevant and domain-relevant the i-th token is. A token that is more sentiment-relevant or domain-relevant is more likely to be an opinion word or an aspect word. This information is useful for the aspect-level tasks.
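The re-encoding step in Eq. (4) amounts to a concatenation followed by a fully-connected ReLU layer. A minimal PyTorch sketch is given below; the label-set sizes and hidden dimension are illustrative assumptions, not values from the authors' code.

```python
import torch
import torch.nn as nn

class ReEncoder(nn.Module):
    """Sketch of the re-encoding function f_theta_re in Eq. (4).

    Assumed (illustrative) sizes: 5 AE labels, 3 AS labels, 3 DS labels,
    plus one DS and one DD attention weight per token.
    """

    def __init__(self, shared_dim=256, n_ae=5, n_as=3, n_ds=3):
        super().__init__()
        in_dim = shared_dim + n_ae + n_as + n_ds + 1 + 1
        self.linear = nn.Linear(in_dim, shared_dim)

    def forward(self, h_s, y_ae, y_as, y_ds, a_ds, a_dd):
        # h_s: (batch, n, shared_dim); y_ae / y_as: per-token distributions;
        # y_ds: (batch, n_ds) document-level prediction, broadcast to every token;
        # a_ds, a_dd: (batch, n) token-level attention weights from DS / DD.
        n = h_s.size(1)
        y_ds_tok = y_ds.unsqueeze(1).expand(-1, n, -1)
        msg = torch.cat([h_s, y_ae, y_as, y_ds_tok,
                         a_ds.unsqueeze(-1), a_dd.unsqueeze(-1)], dim=-1)
        return torch.relu(self.linear(msg))   # h^{s(t)}, same shape as h_s

re_enc = ReEncoder()
b, n = 2, 15
h_next = re_enc(torch.randn(b, n, 256),
                torch.softmax(torch.randn(b, n, 5), -1),
                torch.softmax(torch.randn(b, n, 3), -1),
                torch.softmax(torch.randn(b, 3), -1),
                torch.rand(b, n), torch.rand(b, n))
print(h_next.shape)   # torch.Size([2, 15, 256])
```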
3.4 Learning

Instances for aspect-level problems only have aspect-level labels, while instances for document-level problems only have document labels. IMN is trained on aspect-level and document-level instances alternately. When trained on aspect-level instances, the loss function is as follows:

L_a(θ_s, θ_{ae}, θ_{as}, θ_{ds}, θ_{dd}, θ_{re}) = (1/N_a) Σ^{N_a}_{i=1} (1/n_i) Σ^{n_i}_{j=1} ( l(y^{ae}_{i,j}, ŷ^{ae(T)}_{i,j}) + l(y^{as}_{i,j}, ŷ^{as(T)}_{i,j}) )   (5)

where T denotes the maximum number of iterations in the message passing mechanism, N_a denotes the total number of aspect-level training instances, n_i denotes the number of tokens contained in the i-th training instance, and y^{ae}_{i,j} (y^{as}_{i,j}) denotes the one-hot encoding of the gold label for AE (AS). l is the cross-entropy loss applied to each token. In aspect-level datasets, only aspect terms have sentiment annotations. We label each token that is part of any aspect term with the sentiment of the corresponding aspect term. During model training, we only consider AS predictions on these aspect term-related tokens for computing the AS loss and ignore the sentiments predicted on other tokens (i.e., we let l(y^{as}_{i,j}, ŷ^{as(T)}_{i,j}) = 0 in Eq. (5) if y^{ae}_{i,j} is not BA or IA). When trained on document-level instances, we minimize the following loss:

L_d(θ_s, θ_{ds}, θ_{dd}) = (1/N_{ds}) Σ^{N_{ds}}_{i=1} l(y^{ds}_i, ŷ^{ds}_i) + (1/N_{dd}) Σ^{N_{dd}}_{i=1} l(y^{dd}_i, ŷ^{dd}_i)   (6)

where N_{ds} and N_{dd} denote the number of training instances for DS and DD respectively, and y^{ds}_i and y^{dd}_i denote the one-hot encoding of the gold label. Message passing iterations are not used when training on document-level instances.

For learning, we first pretrain the network on the document-level instances (minimize L_d) for a few epochs, such that DS and DD can make reasonable predictions. Then the network is trained on aspect-level instances and document-level instances alternately with ratio r, to minimize L_a and L_d. The overall training process is given in Algorithm 1. D_a, D_ds, and D_dd denote the aspect-level training set and the training sets for DS and DD respectively. D_ds and D_a are from similar domains. D_dd contains review documents from at least two domains with y^{dd}_i denoting the domain label, where one of the domains is similar to the domains of D_a and D_ds. In this way, linguistic knowledge can be transferred from DS and DD to AE and AS, as they are semantically relevant.

Algorithm 1 Pseudocode for training IMN
Require: D_a = {(x^a_i, y^{ae}_i, y^{as}_i)}^{N_a}_{i=1}, D_ds = {(x^{ds}_i, y^{ds}_i)}^{N_{ds}}_{i=1} and D_dd = {(x^{dd}_i, y^{dd}_i)}^{N_{dd}}_{i=1}
Require: Integer r > 0
for e ∈ [1, max-pretrain-epochs] do
    for minibatch B_ds, B_dd in D_ds, D_dd do
        compute L_d based on B_ds and B_dd
        update θ_s, θ_ds, θ_dd
    end for
end for
for e ∈ [1, max-epochs] do
    for b ∈ [1, batches-per-epoch] do
        sample B_a from D_a
        compute L_a based on B_a
        update θ_s, θ_ae, θ_as, θ_re
        if b is divisible by r then
            sample B_ds, B_dd from D_ds, D_dd
            compute L_d based on B_ds and B_dd
            update θ_s, θ_ds, θ_dd
        end if
    end for
end for

Table 2: Dataset statistics with numbers of aspect terms and opinion terms.
Dataset           Train aspect  Train opinion  Test aspect  Test opinion
D1 Restaurant14   3699          3484           1134         1008
D2 Laptop14       2373          2504           654          674
D3 Restaurant15   1199          1210           542          510
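Returning to the aspect-level objective in Eq. (5), the sketch below shows one way to compute the per-sentence loss in PyTorch: a token-level cross-entropy for AE plus a token-level cross-entropy for AS in which tokens outside any aspect term are masked out. This is a hedged illustration, not the authors' code, and its normalization differs slightly from Eq. (5) (cross_entropy averages the AS term over the unmasked tokens only).

```python
import torch
import torch.nn.functional as F

IGNORE = -100  # AS positions that are not part of any aspect term

def aspect_level_loss(ae_logits, as_logits, ae_gold, as_gold):
    """ae_logits: (n, 5) and as_logits: (n, 3) final-iteration predictions for
    one sentence; ae_gold: (n,) indices into {BA, IA, BP, IP, O}; as_gold: (n,)
    indices into {pos, neg, neu}, with IGNORE on tokens outside aspect terms."""
    loss_ae = F.cross_entropy(ae_logits, ae_gold)
    loss_as = F.cross_entropy(as_logits, as_gold, ignore_index=IGNORE)
    return loss_ae + loss_as

n = 15
ae_gold = torch.randint(0, 5, (n,))
as_gold = torch.full((n,), IGNORE, dtype=torch.long)
as_gold[1] = 0          # "fish" -> pos
as_gold[6:9] = 1        # "variety of fish" -> neg
loss = aspect_level_loss(torch.randn(n, 5), torch.randn(n, 3), ae_gold, as_gold)
print(loss.item())
```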
We fix θds and θdd when updating parameters for La, since we do not want them to be affected by the small number of aspect-level training instances. 4 Experiments 4.1 Experimental Settings Datasets. Table 2 shows the statistics of the aspect-level datasets. We run experiments on three benchmark datasets, taken from SemEval2014 (Pontiki et al., 2014) and SemEval 2015 (Pontiki et al., 2015). The opinion terms are annotated by Wang et al. (2016a). We use two document-level datasets from (He et al., 2018b). One is from the Yelp restaurant domain, and the other is from the Amazon electronics domain. Each contains 30k instances with exactly balanced class labels of pos, neg, and neu. We use the concatenation of the two datasets with domain labels as Ddd. We use the Yelp dataset as Dds when Da is either D1 or D3, and use the electronics dataset as Dds when Da is D2. Network details. We adopt the multi-layerCNN structure from (Xu et al., 2018) as the CNN-based encoders in our proposed network. See Appendix A for implementation details. For word embedding initialization, we concatenate a general-purpose embedding matrix and a domain-specific embedding matrix7 following (Xu et al., 2018). We adopt their released domainspecific embeddings for restaurant and laptop domains with 100 dimensions, which are trained on a large domain-specific corpus using fastText. The general-purpose embeddings are pre-trained Glove vectors (Pennington et al., 2014) with 300 dimensions. One set of important hyper-parameters are the number of CNN layers in the shared encoder and the task-specific encoders. To decide the values of ms, mae, mas, mds, mdd, we first investigate 7For DD, we only look at the general-purpose embeddings by masking out the domain-specific embeddings. how many layers of CNNs would work well for each of the task when training it alone. We denote co as the optimal number of CNN layers in this case, where o ∈{ae, as, ds, dd} is the task indicator. We perform AE, AS separately on the training set of D1, and perform DS, DD separately on the document-level restaurant corpus. Crossvalidation is used for selecting co, which yields 4, 2, 2, 2 for cae, cas, cds, cdd. Based on this observation, we made ms, mae, mas, mds, mdd equals to 2, 2, 0, 0, 0 respectively, such that ms + mo = co. Note that there are other configurations satisfying the requirement, for example, ms, mae, mas, mds, mdd equals to 1, 3, 1, 1, 1. we select our setting as it involves the smallest set of parameters. We tune the maximum number of iterations T in the message passing mechanism by training IMN−d via cross validation on D1. It is set to 2. With T fixed as 2, we then tune r by training IMN via cross validation on D1 and the relevant document-level datasets. It is set to 2 as well. We use Adam optimizer with learning rate set to 10−4, and we set batch size to 32. Learning rate and batch size are set to conventional values without specific tuning for our task. At training phase, we randomly sample 20% of the training data from the aspect-level dataset as the development set and only use the remaining 80% for training. We train the model for a fix number of epoches, and save the model at the epoch with the best F1-I score on the development set for evaluation. Evaluation metrics. During testing, we extract aspect (opinion) terms, and predict the sentiment for each extracted aspect term based on ˆyae(T) and ˆyas(T). 
Since the extracted aspect term may consist of multiple tokens and the sentiment predictions on them could be inconsistent in AS, we only output the sentiment label of the first token as the predicted sentiment for any extracted aspect term. We employ five metrics for evaluation, where two measure the AE performance, two measure the AS performance, and one measures the overall performance. Following existing works for AE (Wang et al., 2017; Xu et al., 2018), we use F1 to measure the performance of aspect term extraction and opinion term extraction, which are denoted as F1-a and F1-o respectively. Following existing works for AS (Chen et al., 2017; He et al., 2018b), we adopt accuracy and macro-F1 to measure the performance of AS. We denote them 510 as acc-s and F1-s. Since we are solving the integrated task without assuming that gold aspect terms are given, the two metrics are computed based on the correctly extracted aspect terms from AE. We compute the F1 score of the integrated task denoted as F1-I for measuring the overall performance. To compute F1-I, an extracted aspect term is taken as correct only when both the span and the sentiment are correctly identified. When computing F1-a, we consider all aspect terms, while when computing acc-s, F1-s, and F1-I, we ignore aspect terms with conflict sentiment labels. 4.2 Models under Comparison Pipeline approach. We select two topperforming models from prior works for each of AE and AS, to construct 2 × 2 pipeline baselines. For AE, we use CMLA (Wang et al., 2017) and DECNN (Xu et al., 2018). CMLA was proposed to perform co-extraction of aspect and opinion terms by modeling their interdependencies. DECNN is the state-of-the-art model for AE. It utilizes a multi-layer CNN structure with both general-purpose and domainspecific embeddings. We use the same structure as encoders in IMN. For AS, we use ATAELSTM (denoted as ALSTM for short) (Wang et al., 2016b) and the model from (He et al., 2018b) which we denote as dTrans. ALSTM is a representative work with an attention-based LSTM structure. We compare with dTrans as it also utilizes knowledge from document corpora for improving AS performance, which achieves state-of-the-art results. Thus, we compare with the following pipeline methods: CMLAALSTM, CMLA-dTrans, DECNN-ALSTM, and DECNN-dTrans. We also compare with the pipeline setting of IMN, which trains AE and AS independently (i.e., without parameter sharing, information passing, and document-level corpora). We denote it as PIPELINE. The network structure for AE in PIPELINE is the same as DECNN. During testing of all methods, we perform AE in the first step, and then generate AS predictions on the correctly extracted aspect terms. Integrated Approach. We compare with two recently proposed methods that have achieved stateof-the-art results among integrated approaches: MNN (Wang et al., 2018) and the model from (Li et al., 2019) which we denote as INABSA (integrated network for ABSA). Both methods model the overall task as a sequence tagging problem with a unified tagging scheme. Since during testing, IMN only outputs the sentiment on the first token of an extracted aspect term to avoid sentiment inconsistency, to enable fair comparison, we also perform this operation on MNN and INABSA. We also show results for a version of IMN that does not use document-level corpora, denoted as IMN−d. The structure of IMN−d is shown as the solid lines in Figure 1. It omits the information ˆyds, ads i , and add i propagated from the documentlevel tasks in Eq.(4). 
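Before turning to the results, the following small sketch (not the official evaluation script) illustrates how F1-a and F1-I can be computed from (span, sentiment) pairs under the rules above: a term counts toward F1-I only when both the span and the sentiment are correct.

```python
def precision_recall_f1(num_correct, num_pred, num_gold):
    p = num_correct / num_pred if num_pred else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def evaluate(pred, gold):
    """pred/gold: lists of ((start, end), sentiment) tuples for one dataset."""
    pred_spans = {span for span, _ in pred}
    gold_spans = {span for span, _ in gold}
    f1_a = precision_recall_f1(len(pred_spans & gold_spans),
                               len(pred_spans), len(gold_spans))
    pred_full, gold_full = set(pred), set(gold)
    f1_i = precision_recall_f1(len(pred_full & gold_full),
                               len(pred_full), len(gold_full))
    return f1_a, f1_i

gold = [((1, 2), "pos"), ((6, 9), "neg")]
pred = [((1, 2), "pos"), ((6, 9), "pos")]   # second span correct, sentiment wrong
print(evaluate(pred, gold))                  # (1.0, 0.5)
```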
4.3 Results and Analysis Main results. Table 3 shows the comparison results. Note that IMN performs co-extraction of aspect and opinion terms in AE, which utilizes additional opinion term labels during training, while the baseline methods except CMLA do not consider this information in their original models. To enable fair comparison, we slightly modify those baselines to perform co-extraction as well, with opinion term labels provided. Further details on model comparison are provided in Appendix B. From Table 3, we observe that IMN−d is able to significantly outperform other baselines on F1-I. IMN further boosts the performance and outperforms the best F1-I results from the baselines by 2.29%, 1.77%, and 2.61% on D1, D2, and D3. Specifically, for AE (F1-a and F1-o), IMN−d performs the best in most cases. For AS (acc-s and F1-s), IMN outperforms other methods by large margins. PIPELINE, IMN−d, and the pipeline methods with dTrans also perform reasonably well on this task, outperforming other baselines by moderate margins. All these models utilize knowledge from larger corpora by either joint training of document-level tasks or using domain-specific embeddings. This suggests that domain-specific knowledge is very helpful, and both joint training and domain-specific embeddings are effective ways to transfer such knowledge. We also show the results of IMN−d and IMN when only the general-purpose embeddings (without domain-specific embeddings) are used for initialization. They are denoted as IMN−d/IMN wo DE. IMN wo DE performs only marginally below IMN. This indicates that the knowledge captured by domain-specific embeddings could be similar to that captured by joint training of the document-level tasks. IMN−d is more affected 511 Methods CMLA-ALSTM CMLA-dTrans DECNN-ALSTM DECNN-dTrans PIPELINE MNN INABSA IMN−d wo DE IMN−d IMN wo DE IMN D1 F1-a 82.45 82.45 83.94 83.94 83.94 83.05 83.92 83.95 84.01 83.50 83.33 F1-o 82.67 82.67 85.60 85.60 85.60 84.55 84.97 85.21 85.64 84.62 85.61 acc-s 77.46 79.58 77.79 80.04 79.56 77.17 79.68 79.65 81.56∗ 83.17∗ 83.89∗ F1-s 68.70 72.23 68.50 73.31 69.59 68.45 68.38 69.32 71.90 73.44 75.66 F1-I 63.87 65.34 65.26 67.25 66.53 63.87 66.60 66.96 68.32∗ 69.11∗ 69.54∗ D2 F1-a 76.80 76.80 78.38 78.38 78.38 76.94 77.34 76.96 78.46 76.87 77.96 F1-o 77.33 77.33 78.81 78.81 78.81 77.77 76.62 76.85 78.14 77.04 77.51 acc-s 70.25 72.38 70.46 73.10 72.29 70.40 72.30 72.89 73.21 74.31∗ 75.36∗ F1-s 66.67 69.52 66.78 70.63 68.12 65.98 68.24 67.26 69.92 70.76 72.02∗ F1-I 53.68 55.56 55.05 56.60 56.02 53.80 55.88 56.25 57.66∗ 57.04∗ 58.37∗ D3 F1-a 68.55 68.55 68.32 68.32 68.32 70.24 69.40 69.23 69.80 68.23 70.04 F1-o 71.07 71.07 71.22 71.22 71.22 69.38 71.43 68.39 72.11∗ 70.09 71.94 acc-s 81.03 82.27 80.32 82.65 82.27 80.79 82.56 81.64 83.38 85.90∗ 85.64∗ F1-s 58.91 66.45 57.25 69.58 59.53 57.90 58.81 57.51 60.65 71.67∗ 71.76∗ F1-I 54.79 56.09 55.10 56.28 55.96 56.57 57.38 56.80 57.91∗ 58.82∗ 59.18∗ Table 3: Model comparison. Average results over 5 runs with random initialization are reported. ∗indicates the proposed method is significantly better than the other baselines (p < 0.05) based on one-tailed unpaired t-test. Model variants D1 D2 D3 Vanilla model 66.66 55.63 56.24 +Opinion transmission 66.98 56.03 56.65 +Message passing-a (IMN−d) 68.32 57.66 57.91 +DS 68.48 57.86 58.03 +DD 68.65 57.50 58.26 +Message passing-d (IMN) 69.54 58.37 59.18 Table 4: F1-I scores of different model variants. Average results over 5 runs are reported. 
without domain-specific embeddings, while it still outperforms all other baselines except DECNNdTrans. DECNN-dTrans is a very strong baseline as it exploits additional knowledge from larger corpora for both tasks. IMN−d wo DE is competitive with DECNN-dTrans even without utilizing additional knowledge, which suggests the effectiveness of the proposed network structure. Ablation study. To investigate the impact of different components, we start with a vanilla model which consists of fθs, fθae, and fθas only without any informative message passing, and add other components one at a time. Table 4 shows the results of different model variants. +Opinion transmission denotes the operation of providing additional information P op j to the self-attention layer as shown in Eq.(1). +Message passing-a denotes propagating the outputs from aspect-level tasks only at each message passing iteration. +DS and +DD denote adding DS and DD with parameter sharing only. +Message passing-d denotes involving the document-level information for message passing. We observe that +Message passing-a and +Message passing-d contribute to the performance gains the most, which demonstrates the effectiveness of the proposed message passing mechanism. We also observe that simply adding documentlevel tasks (+DS/DD) with parameter sharing only marginally improves the performance of IMN−d. This again indicates that domain-specific knowledge has already been captured by domain embeddings, while knowledge obtained from DD and DS via parameter sharing could be redundant in this case. However, +Message passing-d is still helpful with considerable performance gains, showing that aspect-level tasks can benefit from knowing predictions of the relevant document-level tasks. Impact of T. We have demonstrated the effectiveness of the message passing mechanism. Here, we investigate the impact of the maximum number of iterations T. Table 6 shows the change of F1-I on the test sets as T increases. We find that convergence is quickly achieved within two or three iterations, and further iterations do not provide considerable performance improvement. Case study. To better understand in which conditions the proposed method helps, we examine the instances that are misclassified by PIPELINE and INABSA, but correctly classified by IMN. For aspect extraction, we find the message passing mechanism is particularly helpful in two scenarios. First, it helps to better recognize uncommon aspect terms by utilizing information from the opinion contexts. As shown in example 1 in 512 Examples PIPELINE INABSA IMN Opinion Aspect Opinion Aspect Opinion Aspect 1. Strong [build]pos though which really adds to its [durability]pos. Strong [durability]pos Strong [durability]pos Strong [build]pos, [durability]pos 2. Curioni’s Pizza has been around since the 1920’s None [Pizza]neu None [Pizza]pos None None 3. The [battery]pos is longer longer [battery]neg longer [battery]neg longer [battery]pos 4. The [potato balls]pos were not dry at all dry [potato balls]neg dry [potato balls]neg dry [potato balls]pos 5. That’s a good thing, but it’s made from [aluminum]neg that scratches easily. good, easily [aluminum]pos good, easily [aluminum]pos good, scratches easily [aluminum]neg Table 5: Case analysis. The “Examples” column contains instances with gold labels. ’The “opinion” and “aspect” columns present the opinion terms and aspect terms with sentiments, generated by the corresponding model. 
T 0 1 2 3 4 5 D1 66.98 67.97 68.32 68.03 68.11 68.26 D2 56.03 57.14 57.66 57.82 57.78 57.33 D3 56.65 57.60 57.91 57.66 57.41 57.48 Table 6: F1 scores with different T values using IMN−d. Average results over 5 runs are reported. Table 5, PIPELINE and INABSA fail to recognize “build” as it is an uncommon aspect term in the training set while IMN is able to correctly recognize it. We find that when no message passing iteration is performed, IMN also fails to recognize “build”. However, when we analyze the predicted sentiment distribution on each token in the sentence, we find that except “durability”, only “build” has a strong positive sentiment, while the sentiment distributions on the other tokens are more uniform. This is an indicator that “build” is also an aspect term. IMN is able to aggregate such knowledge with the message passing mechanism, such that it is able to correctly recognize “build” in later iterations. Due to the same reason, the message passing mechanism also helps to avoid extracting terms on which no opinion is expressed. As observed in example 2, both PIPELINE and INABSA extract “Pizza”. However, since no opinion is expressed in the given sentence, “Pizza” should not be considered as an aspect term. IMN avoids extracting this kind of terms by aggregating knowledge from opinion prediction and sentiment prediction. For aspect-level sentiment, since IMN is trained on larger document-level labeled corpora with balanced sentiment classes, in general it better captures the meaning of domain-specific opinion words (example 3), better captures sentiments of complex expressions such as negation (example 4), and better recognizes minor sentiment classes in the aspect-level datasets (negative and neutral in our cases). In addition, we find that knowledge propagated by the document-level tasks through message passing is helpful. For example, the sentiment-relevant attention weights are helpful for recognizing uncommon opinion words, and which further help on correctly predicting the sentiments of the aspect terms. As observed in example 5, PIPELINE and INABSA are unable to recognize “scratches easily” as the opinion term, and they also make wrong sentiment prediction on the aspect term “aluminum”. IMN learns that “scratches” is sentiment-relevant through knowledge from the sentiment-relevant attention weights aggregated via previous iterations of message passing, and is thus able to extract “scratches easily”. Since the opinion predictions from AE are sent to the self-attention layer in the AS component, correct opinion predictions further help to infer the correct sentiment towards “aluminum”. 5 Conclusion We propose an interactive multi-task learning network IMN for jointly learning aspect and opinion term co-extraction, and aspect-level sentiment classification. The proposed IMN introduces a novel message passing mechanism that allows informative interactions between tasks, enabling the correlation to be better exploited. In addition, IMN is able to learn from multiple training data sources, allowing fine-grained token-level tasks to benefit from document-level labeled corpora. The proposed architecture can potentially be applied to similar tasks such as relation extraction, semantic role labeling, etc. Acknowledgments This research is supported by the National Research Foundation Singapore under its AI Singapore Programme grant AISG-RP-2018-006. 513 References Stefanos Angelidis and Mirella Lapata. 2018. 
Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Conference on Empirical Methods in Natural Language Processing. Anurag Arnab, Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, M˚ans Larsson, Alexander Kirillov, Bogdan Savchynskyy, Carsten Rother, Fredrik Kahl, and Philip HS Torr. 2018. Conditional random fields meet deep neural networks for semantic segmentation: Combining probabilistic graphical models with deep learning for structured prediction. IEEE Signal Processing Magazine, 35(1):37–52. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Conference on Empirical Methods in Natural Language Processing. Jiajun Cheng, Shenglin Zhao, Jiani Zhang, Irwin King, Xin Zhang, and Hui Wang. 2017. Aspect-level sentiment classification with heat (hierarchical attention) network. In ACM on Conference on Information and Knowledge Management. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent Twitter sentiment classification. In Annual Meeting of the Association for Computational Linguistics. Matthew R Gormley, Mark Dredze, and Jason Eisner. 2015. Approximation-aware dependency parsing by belief propagation. Transactions of the Association for Computational Linguistics, 3:489–501. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Annual Meeting of the Association for Computational Linguistics. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018a. Effective attention modeling for aspect-level sentiment classification. In International Conference on Computational Linguistics. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018b. Exploiting document knowledge for aspect-level sentiment classification. In Annual Meeting of the Association for Computational Linguistics. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018a. Transformation networks for target-oriented sentiment classification. In Annual Meeting of the Association for Computational Linguistics. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In AAAI Conference on Artificial Intelligence. Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018b. Aspect term extraction with history attention and selective transformation. In International Joint Conference on Artificial Intelligence. Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Conference on Empirical Methods in Natural Language Processing. Jiangming Liu and Yue Zhang. 2017. Attention modeling for target sentiment. In Conference of the European Chapter of the Association for Computational Linguistics. Yang Liu, Sujian Li, Xiaodong Zhang, and Zhifang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In AAAI Conference on Artificial Intelligence. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. In International Conference on Learning Representation. Yukun Ma, Haiyun Peng, and Erik Cambira. 2018. 
Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM. In AAAI Conference on Artificial Intelligence. Thien Hai Nguyen and Kiyoaki Shirai. 2015. PhraseRNN: Phrase recursive neural network for aspect-based sentiment analysis. In Conference on Empirical Methods in Natural Language Processing. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing. Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In International Workshop on Semantic Evaluation. Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In International Workshop on Semantic Evaluation. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9–27. 514 Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective LSTMs for target-dependent sentiment classification. In International Conference on Computational Linguistics. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In AAAI Conference on Artificial Intelligence. Duy-Tin Vo and Yue Zhang. 2015. Target-dependent Twitter sentiment classification with rich automatic features. In International Joint Conference on Artificial Intelligence. Feixiang Wang, Man Lan, and Wenting Wang. 2018. Towards a one-stop solution to both aspect extraction and sentiment analysis tasks with neural multitask learning. In International Joint Conference on Neural Networks. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive neural conditional random fields for aspect-based sentiment analysis. In Conference on Empirical Methods in Natural Language Processing. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In AAAI Conference on Artificial Intelligence. Yequan Wang, Minlie Huang, Li Zhao, and Xiaoyan Zhu. 2016b. Attention-based LSTM for aspect-level sentiment classification. In Conference on Empirical Methods in Natural Language Processing. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. Double embeddings and CNN-based sequence labeling for aspect extraction. In Annual Meeting of the Association for Computational Linguistics. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In International Joint Conference on Artificial Intelligence. Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2015. Neural networks for open domain targeted sentiment. In Conference on Empirical Methods in Natural Language Processing. Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2016. Gated neural networks for targeted sentiment analysis. In AAAI Conference on Artificial Intelligence. A Implementation Details CNN-based Encoder We adopt the multi-layer-CNN structure from (Xu et al., 2018) as the CNN-based encoders for both the shared CNNs and the task-specific ones in the proposed network. 
Each CNN layer has many 1Dconvolution filters, and each filter has a fixed kernel size k = 2c + 1, such that each filter performs convolution operation on a window of k word representations, and compute the representation for the ith word along with 2c nearby words in its context. Following the settings in the original paper, the first CNN layer in the shared encoder has 128 filters with kernel sizes k = 3 and 128 filters with kernel sizes k = 5. The other CNN layers in the shared encoder and the CNN layers in each task-specific encoder have 256 filters with kernel sizes k = 5 per layer. ReLu is used as the activation function for each CNN layer. Dropout with p = 0.5 is employed after the embedding layer and each CNN layer. Opinion Transmission To alleviate the problem of unreliable predictions of opinion labels in the early stage of training, we adopt scheduled sampling for opinion transmission at training phase. We send gold opinion labels rather than the predicted ones generated by AE to AS in the probability of ϵi. The probability ϵi depends on the number of epochs i during training, for which we employ an inverse sigmoid decay ϵi = 5/(5 + exp(i/5)). B Model Comparison Details For CMLA8, ALSTM9, dTrans10, and INABSA11, we use the officially released source codes for experiments. For MNN, we re-implement the model following the descriptions in the paper as the source code is not available. We run each baseline multiple times with random initializations and save their predicted results. We use an unified evaluation script for measuring the outputs from different baselines as well as the proposed method. The proposed IMN performs co-extraction of aspect terms and opinion terms in AE, which utilizes additional opinion term labels during model training. In the baselines, the two integrated methods MNN and INABSA, and the pipeline methods with DECNN as the AE component do not 8https://github.com/happywwy/ Coupled-Multi-layer-Attentions 9https://www.wangyequan.com/ publications/ 10https://github.com/ruidan/ Aspect-level-sentiment 11https://github.com/lixin4ever/ E2E-TBSA 515 Methods D1 D2 D3 F1-a acc-s F1-s F1-I F1-a acc-s F1-s F1-I F1-a acc-s F1-s F1-I DECNN-ALSTM 83.33 77.63 70.09 64.32 80.28 69.98 66.20 55.92 68.72 79.22 54.40 54.22 DECNN-dTrans 83.33 79.45 73.08 66.15 80.28 71.51 68.03 57.28 68.72 82.09 68.35 56.08 PIPELINE 83.33 79.39 69.45 65.96 80.28 72.12 68.56 57.29 68.72 81.85 58.74 56.04 MNN 83.20 77.57 68.19 64.26 76.33 70.62 65.44 53.77 69.29 80.86 55.45 55.93 INABSA 83.12 79.06 68.77 65.94 77.67 71.72 68.36 55.95 68.79 80.96 57.10 55.45 IMN−d 83.89 80.69 72.09 67.27∗ 78.43 72.49 69.71 57.13 70.35∗ 81.86 56.88 57.86∗ IMN 83.04 83.05∗ 73.30 68.71∗ 77.69 75.12∗ 71.35∗ 58.04∗ 69.25 84.53∗ 70.85∗ 58.18∗ Table 7: Model comparison in a setting without opinion term labels. Average results over 5 runs with random initialization are reported. ∗indicates the proposed method is significantly better than the other baselines (p < 0.05) based on one-tailed unpaired t-test. take take opinion information during training. To make fair comparison, we add labels {BP, IP} to the original label sets of MNN, INABSA, and DECNN, indicating the beginning of and inside of an opinion term. We train those models on training sets with both aspect and opinion term labels to perform co-extraction as well. In addition, for pipeline methods, we also make the gold opinion terms available to the AS models (ALSTM and dTrans) during training. 
To make ALSTM and dTrans utilize the opinion label information, we modify their attention layer to assign higher weights to tokens that are more likely to be part of an opinion term. This is reasonable since the objective of the attention mechanism in an AS model is to find the relevant opinion context. The attention weight of the ith token before applying softmax normalization in an input sentence is modified as: a′ i = ai ∗P op i (7) where ai denotes the attention weight computed by the original attention layer, pop i denotes the probability that the ith token belongs to any opinion term. a′ i denotes the modified attention weights. At the training phase, since the gold opinion terms are provided, pop i = 1 for the tokens that are part of the gold opinion terms, while pop i = 0 for the other tokens. At the testing phase, pop i is computed based on the predictions from the AE model in the pipeline method. It is computed by summing up the predicted probabilities on the opinion-related labels BP and IP for the ith token. We also present the comparison results in a setting without using opinion term labels in Table 712. In this setting, we modify the proposed IMN and IMN−d to recognize aspect terms only 12We exclude the results of the pipeline methods with CMLA, as CMLA relies on opinion term labels during training. It is difficult to modify it. in AE. The opinion transmission operation, which sends the opinion term predictions from AE to AS, is omitted as well. Both IMN−d and IMN still significantly outperform other baselines in most cases under this setting. In addition, when compare the results in Table 7 and Table 3, we observe that IMN−d and IMN consistently yield better F1-I scores on all datasets in Table 3, when opinion term extraction is also considered. Consistent improvements are not observed in other baseline methods when trained with opinion term labels. These findings suggest that knowledge obtained from learning opinion term extraction is indeed beneficial, however, a carefully-designed network structure is needed to utilize such information. IMN is designed to exploit task correlations by explicitly modeling interactions between tasks, and thus it better integrates knowledge obtained from training different tasks.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4853–4862 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4853 Interconnected Question Generation with Coreference Alignment and Conversation Flow Modeling Yifan Gao1∗ Piji Li2 Irwin King1 Michael R. Lyu1 1 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong 2 Tencent AI Lab 1{yfgao,king,lyu}@cse.cuhk.edu.hk [email protected] Abstract We study the problem of generating interconnected questions in question-answering style conversations. Compared with previous works which generate questions based on a single sentence (or paragraph), this setting is different in two major aspects: (1) Questions are highly conversational. Almost half of them refer back to conversation history using coreferences. (2) In a coherent conversation, questions have smooth transitions between turns. We propose an end-to-end neural model with coreference alignment and conversation flow modeling. The coreference alignment modeling explicitly aligns coreferent mentions in conversation history with corresponding pronominal references in generated questions, which makes generated questions interconnected to conversation history. The conversation flow modeling builds a coherent conversation by starting questioning on the first few sentences in a text passage and smoothly shifting the focus to later parts. Extensive experiments show that our system outperforms several baselines and can generate highly conversational questions. The code implementation is released at https://github.com/ Evan-Gao/conversaional-QG. 1 Introduction Question Generation (QG) aims to create humanlike questions from a range of inputs, such as natural language text (Heilman and Smith, 2010), knowledge base (Serban et al., 2016) and image (Mostafazadeh et al., 2016). QG is helpful for the knowledge testing in education, i.e., the intelligence tutor system, where an instructor can actively ask questions to students given reading comprehension materials (Heilman and Smith, 2010; Du et al., 2017). Besides, raising good questions ∗This work was partially done when Yifan Gao was an intern at Tencent AI Lab. Passage: Incumbent Democratic President Bill Clinton was ineligible to serve a third term due to term limitations in the 22nd Amendment of the Constitution, and Vice President Gore was able to secure the Democratic nomination with relative ease. Bush was seen as the early favorite for the Republican nomination and, despite a contentious primary battle with Senator John McCain and other candidates, secured the nomination by Super Tuesday. Bush chose ... Q1: What political party is Clinton a member of? A1: Democratic Q2: What was he ineligible to serve? A2: third term Q3: Why? A3: term limitations Q4: Based on what amendment? A4: 22nd Q5: Of what document? A5: Constitution Q6: Who was his vice president? A6: Gore Q7: Who was the early Republican favorite for A7: Bush the nomination? Q8: Who was the primary battle with? A8: John McCain Q9: What is his title? A9: Senator Q10: When did Bush secure the nomination by? A10: Tuesday Table 1: An example for conversational question generation from a conversational question answering dataset CoQA (Reddy et al., 2019). Each turn contains a question Qi and an answer Ai. in a conversational can enhance the interactiveness and persistence of human-machine interactions (Wang et al., 2018). 
Recent works on question generation for knowledge testing are mostly formalized as a standalone interaction (Yuan et al., 2017; Song et al., 2018), while it is a more natural way for human beings to test knowledge or seek information through conversations involving a series of interconnected questions (Reddy et al., 2019). Furthermore, the inability for virtual assistants to ask questions based on previous discussions often leads to unsatisfying user experiences. In this paper, we consider a new setting called Conversational Question Generation (CQG). In this scenario, a system needs to ask a series of interconnected questions grounded in a passage through a questionanswering style conversation. Table 1 provides an example under this scenario. In this dialogue, a questioner and an answerer chat about the above passage. Every question after the first turn is dependent on the conversation history. Considering that the goal of the task is to generate interconnected questions in conversational 4854 1 Turn Chunk Passage Chunk 2 3 4 5 6 7 8 9 10 Figure 1: Passage chunks of interest for each turn chunks. Each row contains 10 bands distinguished by different colors. Each band represents a passage chunk. The width of a passage chunk indicates the concentration of conversation in that turn. The y-axis indicates turn chunk number. Same passage chunks share the same color across different turn chunks. (Best viewed in color) question answering, CQG is challenging in a few aspects. Firstly, a model should learn to generate conversational interconnected questions depending on the conversation so far. As shown in Table 1, Q3 is a single word ‘Why?’, which should be ‘Why was he ineligible to serve a third term?’ in a standalone interaction. Moreover, many questions in this conversation refer back to the conversation history using coreferences (e.g., Q2, Q6, Q9), which is the nature of questions in a human conversation. Secondly, a coherent conversation must have smooth transitions between turns (each turn contains a question-answer pair). We expect the narrative structure of passages can influence the conversation flow of our interconnected questions. We further investigate this point by conducting an analysis on our experiment dataset CoQA (Reddy et al., 2019). We first split passages and turns of QA pairs into 10 uniform chunks and identify passage chunks of interest for each turn chunk. Figure 1 portrays the conversation flow between passage chunks and turn chunks. We see that in Figure 1, a question-answering style conversation usually starts focusing on the first few chunks in the passage and as the conversation advances, the focus shifts to the later passage chunks. Previous works on question generation employ attentional sequence-to-sequence models on the crowd-sourced machine reading comprehension dataset SQuAD (Rajpurkar et al., 2016). They mainly focus on generating questions based on a single sentence (or paragraph) and an answer phrase (Du et al., 2017; Sun et al., 2018; Zhao et al., 2018), while in our setting, our model needs to not only ask a question on the given passage (paragraph) but also make the questions conversational by considering the conversation history. Meanwhile, some researchers study question generation in dialogue systems to either achieve the correct answer through interactions (Li et al., 2017) or enhance the interactiveness and persistence of conversations (Wang et al., 2018). 
Although questions in our setting are conversational, our work is different from these because our conversations are grounded in the given passages rather than open-domain dialogues. We propose a framework based on the attentional encoder-decoder model (Luong et al., 2015) to address this task. To generate conversational questions (first challenge), we propose a multi-source encoder to jointly encode the passage and the conversation so far. At each decoding timestep, our model can learn to focus more on the passage to generate content words or on the conversation history to make the question succinct. Furthermore, our coreference alignment modeling explicitly aligns coreferent mentions in conversation history (e.g. Clinton in Q1 Table 1) with corresponding pronominal references in generated questions (e.g. he in Q2), which makes generated questions interconnected to conversation history. The coreference alignment is implemented by adding extra supervision to bias the attention probabilities through a loss function. The loss function explicitly guides our model to resolve to the correct non-pronominal coreferent mentions in the attention distribution and generate the correct pronominal references in target questions. To make the conversations coherent (second challenge), we propose to model the conversation flow to transit focus inside the passage smoothly across turns. The conversation flow modeling achieves this goal via a flow embedding and a flow loss. The flow embedding conveys the correlations between number of turns and narrative structure of passages. The flow loss explicitly encourages our model to focus on sentences contain key information to generate the current turn question and ignore sentences questioned several turns ago. In evaluations on a conversational question answering dataset CoQA (Reddy et al., 2019), we find that our proposed framework outperforms several baselines in both automatic and human evaluations. Moreover, the coreference alignment can greatly improve the precision and recall of generated pronominal references. The conversation flow modeling can learn the smooth transition of conversation flow across turns. 4855 Current Evidence Sentence Passage Encoder !" !# !$ … %" %# %$ … &" &# &$ … '" '# '$ … Word Emb Answer Pos. Emb Turn Num. Emb Chunk Emb Conversation Encoder Conversation History … … Clinton he Coreference Alignment History Evidence Sentence Conversation Flow Modeling () *",) *,-.,) *,-",) Passage Attention Conversation Attention Vocabulary Distribution /,-" 0,-" /# 0# /" 0" … … Flow Emb <BOS> What Decoder with Attention & Copy was … … … P2 Attention 345 '4 Figure 2: The framework of our proposed model. For clarity, we omit to plot the copy mechanism in the figure. (Best viewed in color) 2 Problem Setting In this section, we define the Conversation Question Generation (CQG) task. Given a passage P, a conversation history Ci−1 = {(Q1, A1), ..., (Qi−1, Ai−1)} and the aspect to ask (the current answer Ai), the task of CQG is to generate a question Qi for the next turn: Qi = arg max Qi Prob(Qi|P, Ai, Ci−1), (1) in which the generated question should be as conversational as possible. Note that we formalize this setting as an answeraware QG problem (Zhao et al., 2018), which assumes answer phrases are given before generating questions. Moreover, answer phrases are shown as text fragments in passages. Similar problems have been addressed in (Du and Cardie, 2018; Zhao et al., 2018; Sun et al., 2018). 
Our problem setting can also be generalized to the answerignorant case. Models can identify which answers to ask first by combining question-worthy phrases extraction methods (Du and Cardie, 2018; Wang et al., 2019). 3 Model Description As shown in Figure 2, our framework consists of four components: (1) multi-source encoder; (2) decoder with copy mechanism; (3) coreference alignment; (4) conversation flow modeling. 3.1 Multi-Source Encoder Since a conversational question is dependent on a certain aspect of the passage P and the conversation context Ci−1 so far, we jointly encode information from two sources via a passage encoder and a conversation encoder. Passage Encoder. The passage encoder is a bidirectional-LSTM (bi-LSTM) (Hochreiter and Schmidhuber, 1997), which takes the concatenation of word embeddings w and answer position embeddings a as input xi = [wi; ai]. We denote the answer span using the typical BIO tagging scheme and map each token in the paragraph into the corresponding answer position embedding (i.e., B ANS, I ANS, O). Then the whole passage can be represented using the hidden states of the bi-LSTM encoder, i.e., (hp 1, ..., hp m), where m is the sequence length. Conversation Encoder. The conversation history Ci−1 is a sequence of question-answer pairs {(Q1, A1), ..., (Qi−1, Ai−1)}. We use segmenters <q><a>to concatenate each question answer pair (Q, A) into a sequence of tokens (<q>, q1, ..., qm; <a>, a1, ..., am). We design a hierarchical structure to conduct conversation history modeling. We first employ a token level bi-LSTM to get contextualized representation of questionanswer pairs (hw i−k,1, ..., hw i−k,m), where i −k is the turn number and k ∈[1, i). To model the dependencies across turns in the conversation history, we adopt a context level bi-LSTM to learn the contextual dependency (hc 1, ..., hc i−1) across different turns (denoted in the subscript 1, ..., i −1) of question-answer pairs. 3.2 Decoder with Attention & Copy The decoder is another LSTM to predict the word probability distribution. At each decoding timestep t, it reads the word embedding wt and the 4856 hidden state of previous timestep hd t−1 to generate the current hidden state hd t = LSTM(wt, hd t−1). To generate a conversational question grounded in the passage, the decoder itself should decide to focus more on passage hidden states hp j or the hidden states of conversation history hw i−k,j at each decoding timestep. Therefore, we flat token level conversation hidden states hw i,j and aggregate the passage hidden states hp j with token level conversation hidden states hw i,j into a unified memory: (hp 1, ..., hp m; hw 1,1, ..., hw 1,m; ... ; hw i−1,1, ..., hw i−1,m), where hw i,j denotes the jth token of the i-th turn in token level conversation hidden states. Then we attend the unified memory with the standard attention mechanism (Luong et al., 2015) for the passage attention (α1, ..., αm) and the hierarchical attention mechanism for the conversation attention (β1,1, ..., β1,m; ...; βi−1,1, ..., βi−1,m): ep j = hp j ⊤Wphd t , (2) ew i−k,j = hw i−k,j ⊤Wwhd t , (3) ec i−k = hc i−k ⊤Wchd t , (4) αj = ep j etotal , βi−k,j = ew i−k,j ∗ec i−k etotal , (5) where etotal = Σjep j + Σk,jew i−k,j ∗ec i−k and Wp, Ww, Wc are learnable weights. Finally, we derive the context vector ct and the final vocabulary distribution PV : ct = Σjαjhp j + Σj,kβi−k,jhw i−k,j, PV = softmax(Wv(tanh(Wa[hd t ; ct]) + bv), where Wv, Wa are learnable weights. Please refer to See et al. 
(2017) for more details on the copy mechanism. 3.3 Coreference Alignment Using coreferences to refer back is an essential property of conversational questions. Almost half of the questions contains explicit coreference markers such as he, she, it in CoQA (Reddy et al., 2019). Therefore, we propose the coreference alignment to enable our model such ability. Take Q2 in Table 1 as an example, traditional question generation system can only generate question like “What was Clinton ineligible to serve?”, while our system with coreference alignment can align the name “Clinton” to its pronominal reference “he” and generate a more conversational question “What was he ineligible to serve?”. The coreference alignment modeling tells the decoder to look at the correct non-pronominal coreferent mention in the conversation attention distribution to produce the pronominal reference word. We achieve this via two stages. In the preprocessing stage, given the conversation history Ci−1 and the question Qi which has a pronominal reference (e.g., he for Q2 in Table 1), we first run a coreference resolution system (Clark and Manning, 2016) to find its coreferent mention (wc 1, ...wc m) (e.g. Clinton) in the conversation history Ci−1, where the superscript c denotes tokens identified as the coreferent mention. During training, we introduce a novel loss function built on the conversation attention of coreferent mentions βc i and the output word probability of its pronominal reference word pcoref ∈PV . As shown in Figure 2, when our model need to refer back to the coreferent mention, we ask the model focus correctly on the antecedent (e.g. Clinton) and maximize the probability of its pronominal reference (e.g. he) pcoref in the output vocabulary distribution PV , Lcoref = −(λ1log Σjβc j Σk,jβi−k,j + λ2logpcoref) ∗sc, where λ1, λ2 are hyperparameters, sc is the confidence score between the non-pronominal coreferent mention and the pronoun obtained during the pre-processing stage. 3.4 Conversation Flow Modeling Another key challenge in CQG is that a coherent conversation must have smooth transitions between turns. As illustrated in Figure 1, we find that as the conversations go on, most of the questioners transit their focus from the beginning of passages to the end. Following this direction, we model the conversation flow to learn smooth transitions across turns of the conversation. Flow Embedding. As shown in Figure 2, we feed our model with the current turn number indicator in the conversation and the relative position for each token in the passage, which, intuitively, are useful for modeling the conversation flow. We achieve this goal via two additional embeddings. The turn number embedding is a learned lookup table [t1, ..., tn] to map the turn number i into its feature embedding space, where n is the maximum turn we consider. For encoding the relative position of each token, we split the passage 4857 into L uniform chunks. Each token in the passage is mapped to its corresponding chunk embedding [c1, ..., cL]. The final input to the passage encoder is the concatenation of word embedding, answer position embedding (introduced in Section 3.1) and these two additional embeddings: xi = [wi; ai; ti; ci]. We further add a gated self-attention modeling mechanism (Zhao et al., 2018) in the passage encoder. Motivating our use of self-attention we consider two desiderata. One is self-attention with answer position embedding can aggregate answer-relevant information from the whole passage for question generation. 
Another is we want to learn the latent alignment between the turn number embedding and the chunk embedding for better modeling the conversation flow. We first match the rich-feature enhanced passage representation Hp = [hp 1; ...; hp m] with itself hp j to compute the self-matching representation up j, and then combine it with the original representation hp j: ap j = softmax(Hp⊤Wshp j), up j = Hpap j (6) fp j = tanh(Wf[hp j; up j]), (7) The final representation ˜hp j is derived via a gated summation through a learnable gate vector gp j, gp t = sigmoid(Wg[hp j; up j]) (8) ˜hp j = gp t ⊙fp j + (1 −gp t ) ⊙hp j (9) where Ws, Wf, Wg are learnable weights, ⊙ is the element-wise multiplication. Self matching enhanced representation ˜hp j takes the place of the passage representation hp j for calculating the passage attention. Flow Loss. In Section 3.1, our answer position embedding can help model the conversation flow by showing the position of answer fragments inside the passage. However, it is still helpful to tell the model explicitly which sentences around the answer are of high informativity to generate the current turn question. The flow loss is designed to help our model to locate the evidence sentences correctly. Firstly, we define two kinds of sentences in the passage. If a sentence is informative to the current question, we call it Current Evidence Sentence (CES). If a sentence is informative to questions in the conversation history and irrelevant to the current question, we call it History Evidence Sentence (HES). Then our model is taught to focus on current evidence sentences and ignore the history evidence sentences in the passage attention αj via the following flow loss: Lflow = −λ3logΣj:wj∈CESαj Σjαj + λ4 Σj:wj∈HESαj Σjαj where λ3, λ4 are hyperparameters, and wj ∈ CES/HES indicates the token wj is inside the sentence with a CES/HES label. 3.5 Joint Training Considering all the aforementioned components, we define a joint loss function as: L = Lnll + Lcoref + Lflow, (10) in which Lnll = −log Prob(Qi|P, Ai, Ci−1) is the the negative log-likelihood loss in the sequence to sequence learning (Sutskever et al., 2014). 4 Experiments 4.1 Dataset Preparation We conduct experiments on the CoQA dataset (Reddy et al., 2019). It is a large-scale conversational question answering dataset for measuring the ability of machines to participate in a questionanswering style conversation. The authors employ Amazon Mechanical Turk to collect 8k conversations with 127k QA pairs. Specifically, they pair two crowd-workers: a questioner and an answerer to chat about a passage. The answerers are asked to firstly highlight extractive spans in the passage as rationales and then write the free-form answers. We first extract each data sample as a quadruple of passage, question, answer and conversation history (previous n turns of QA pairs) from CoQA. Then we filter out QA pairs with yes, no or unknown as answers (28.7% of total QA pairs) because there is too little information to generate the question to the point. Finally, we randomly split the dataset into a training set (80%, 66298 samples), a validation set (10%, 8409 samples) and a testing set (10%, 8360 samples). The average passage, question and answer lengths are 332.9, 6.3 and 3.2 tokens respectively. 4.2 Implementation Details Locating Extractive Answer Spans. As studied by Yatskar (2018), abstractive answers in CoQA are mostly small modifications to spans occurring in the context. 
The maximum achievable 4858 performance by a model that predicts spans from the context is 97.8 F1 score. Therefore, we find the extractive spans from the passage which have the maximum F1 score with answers and treat them as answers for our answer position embedding. Number of Turns in Conversation History. Reddy et al. (2019) find that in CoQA dataset, most questions in a conversation have a limited dependency within a bound of two turns. Therefore, we choose the number of history turns as n = 3 to ensure the target questions have enough conversation history information to generate and avoid introducing too much noise from all turns of QA pairs. Labeling Evidence Sentences. As mentioned in Section 4.1, the crowd-workers label the extractive spans in the passage as rationales for actual answers. We treat sentences containing the rationale as Current Evidence Sentence. Model Settings. We employ the teacher-forcing training, and in the generating stage, we set the maximum length for output sequence as 15 and block unigram repeated token, the beam size k is set to 5. All hyperparameters and models are selected on the validation set and the results are reported on the test set. 4.3 Baselines and Ablations We compare with the state-of-the-art baselines and conduct ablations as follows: PGNet is the pointer-generator network (See et al., 2017). We concatenate the passage P, the conversation history Ci−1 and the current answer Ai as a sequence for the input. NQG (Du and Cardie, 2018) is similar to the previous one but it takes current answer features concatenated with the word embeddings during encoding. MSNet is our base model Multi-Source encoder decoder network (Section 3.1 & 3.2). CorefNet is our proposed Coreference alignment model (Section 3.3). FlowNet is our proposed conversation Flow model (Section 3.4). CFNet is the model with both the Coreference alignment and the conversation Flow modeling. 5 Results and Analysis 5.1 Main Results Since the average length of questions is 6.3 tokens only, we employ BLEU (1-3) (Papineni et al., 2002) and ROUGE-L (R-L) (Lin, 2004) scores B1 B2 B3 R-L PGNet 28.84* 13.74* 8.16* 39.18* NQG 35.56* 21.14* 14.84* 45.58* MSNet 36.27* 21.92* 15.51* 46.01* CorefNet 36.89 22.28 15.77 46.53 FlowNet 36.87 22.49 15.98 46.64 CFNet 37.38 22.81 16.25 46.90 Table 2: Main results of baselines and our models. t-test is conducted between our CFNet and baselines/ablations. (underline: p-value <0.05, *: p-value <0.01). to evaluate n-gram similarity between the generated questions with the ground truth. We evaluate baselines and our models by predicting the current question given a passage, the current answer, and the ground truth conversation history. Table 2 shows the main results, and we have the following observations: • NQG outperforms PGNet by a large margin. The improvement shows that the answer position embedding (Zhou et al., 2017) is helpful for asking questions to the point. • Our base model MSNet outperforms NQG, which reveals that the hierarchical encoding and the hierarchical attention to conversation history can model the dependency across different turns in conversations. • Both our CorefNet and FlowNet outperform our base model. We will analyze the effectiveness of our coreference alignment and conversation flow modeling in the following two sections respectively. • Our CFNet is significantly better than two baselines (PGNet, NQG), our MSNet, and our CorefNet. However, the difference between our CFNet and our FlowNet is not significant. 
This is because the conversation flow modeling improves all test samples while the coreference alignment contributes only to questions containing pronominal references. 5.2 Coreference Alignment Analysis As we discussed in Section 3.3, it is the nature of conversational questions to use coreferences to refer back. In order to demonstrate the effectiveness of the proposed coreference alignment, we evaluate models on a subset of the test set called coreference set. Each sample in the coreference set requires a pronoun resolution between the conversation history and the current question (e.g., Q2, Q6, 4859 B1 B2 B3 R-L P R F PGNet 27.66* 13.82* 8.96* 38.40* 26.87* 25.17* 25.68* NQG 34.75* 21.52* 15.96* 45.04* 34.46* 32.97* 33.25* MSNet 36.31* 22.92 17.07 45.97* 35.34* 33.80* 34.07* CorefNet 37.51 24.14 18.44 47.45 42.09 40.35 40.64 Table 3: Evaluation results on the coreference test set. Precision (P), Recall (R) and F-score (F) of predicted pronouns are also reported. Significant tests with t-test are conducted between CorefNet and models without the coreference alignment. (underline: p-value <0.05, *: p-value <0.01). Passage: … however , mccain has a very different life story . he grew up in a navy family and was a pilot during the vietnam war in the 1960s … Conversation History: <q> what war was mccain in ? 0.0000 0.0001 0.0049 0.0138 0.7710 0.0055 0.0069 <a> vietnam war 0.0000 0.0140 0.0095 <q> was he in the army ? 0.0000 0.0045 0.1303 0.0005 0.0139 0.0001 0.0250 <a> no 0.0000 0.0000 Question (Human): what was his job ? Question (Our Model): what was his job ? Passage: … incumbent democratic president bill clinton was ineligible to serve a third term due to term limitations in the 22nd amendment of the constitution … Conversation History: <q> what political party is clinton a 0.0000 0.0000 0.0002 0.0063 0.0045 0.9260 0.0430 member of ? <a> democratic 0.0008 0.0006 0.0026 0.0000 0.0160 Question (Human): what was he ineligible to serve ? Question (Our Model): what was he ineligible for ? Figure 3: Examples for the coreference alignment model. We show the attention probability (renormalize to 1) when the CorefNet predicts a pronoun (red color in Question). The current answers are underlined in the passages. (Best viewed in color) Q9 in Table 1). In additional to the BLEU(1-3) and ROUGE-L metrics, we also calculate the Precision (P), Recall (R) and F-score (F) of pronouns in the generated questions with regard to pronouns in the ground truth questions. The results are depicted in Table 3. With the help of the coreference alignment, CorefNet significantly improves the precision, recall, and fscore of the predicted pronouns. Moreover, the performance on n-gram overlapping metrics is also boosted. To gain more insights into how the coreference alignment model influence the generation process, in Figure 3, we visualize the conversation attention distribution βj at the timestep the model predicts a pronoun. The conversation history distribution βj is renormalized to Σjβj = 1. All two examples show that our model put the highest attention probability on the coreferent mentions (i.e. McCain/Clinton) when it generates the pronominal references (his/he). We can conclude that our coreference alignment model can align correct coreferent mentions to generate corresponding pronouns. 5.3 Conversation Flow Modeling Analysis As discussed in Section 3.4, a coherent conversation should have smooth transitions between turns, and we design our model to follow the narrative structure of the passage. 
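The flow loss of Section 3.4 and the attention-mass statistics used in the analysis below both reduce to sums of passage attention over evidence-sentence tokens. A minimal NumPy sketch, with illustrative names and arbitrary hyperparameter values:

```python
import numpy as np

def flow_loss(alpha, ces_mask, hes_mask, lambda3=1.0, lambda4=1.0, eps=1e-12):
    """Flow loss from Section 3.4: encourage passage attention on Current Evidence
    Sentence (CES) tokens and penalize attention on History Evidence Sentence (HES)
    tokens. alpha: attention over passage tokens, shape [m]; masks: boolean, shape [m]."""
    total = alpha.sum() + eps
    ces_mass = alpha[ces_mask].sum() / total   # fraction of attention on CES tokens
    hes_mass = alpha[hes_mask].sum() / total   # fraction of attention on HES tokens
    loss = -lambda3 * np.log(ces_mass + eps) + lambda4 * hes_mass
    return loss, ces_mass, hes_mass

# Toy 6-token passage: tokens 0-2 form the CES, tokens 3-5 the HES.
alpha = np.array([0.5, 0.3, 0.1, 0.05, 0.03, 0.02])
ces = np.array([True, True, True, False, False, False])
loss, ces_mass, hes_mass = flow_loss(alpha, ces, ~ces)
```

The same CES/HES attention masses are what the quantitative check below reports on the test set.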
Figure 4 shows an example illustrating the transition of passage attention distribution aj (normalize to 1) during first 11 turns of a conversation. We see that the model transits its focus smoothly across the first 11 turns from the first sentence in the passage to later parts. Sometimes the model drills down with two questions for the same sentence such as turn 2 & 3, 4 & 5 and 10 & 11. To quantitatively validate the effectiveness of our conversation flow modeling, we study the alignment between passage attention αj and sentences of interest in the passage. Ideally, a successful model should focus on sentences of interest (i.e., Current Evidence Sentence) and ignore sentences questioned several turns ago (i.e., History Evidence Sentence). We validate this intuition by calculating Σj:wj∈CESαj and Σj:wj∈HESαj for all examples in test set. Results show that Σj:wj∈CESαj and Σj:wj∈HESαj for our model with conversation flow modeling are 0.9966 and 0.0010 on average, which demonstrates that our conversation flow modeling can locate the current evidence sentences precisely and ignore the history evidence sentence. For the model without the flow modeling (CorefNet), Σj:wj∈CESαj = 0.4093, Σj:wj∈HESαj = 0.1778, which proves our intuition in Section 3.4 that the answer position embedding cannot have comparable effects on the conversation flow modeling. 5.4 Human Evaluation We randomly sample 93 questions with the associated passage and conversation history to conduct human evaluation. We hire 5 workers to evaluate the questions generated by PGNet, MSNet, and our CFNet. All models are evaluated in terms of following 3 metrics: “Grammaticality”, “Answerability” and “Interconnectedness”. “Grammaticality” measures the grammatical correctness and fluency of the generated questions. “Answerability” evaluates whether the generated question can be 4860 annie s sister , julia , was having a birthday party in the afternoon . annie 's mother was going to bake the cake for the party . mother asked annie to help her bake the cake . they chose to make a chocolate cake with chocolate frosting . annie got the bowls and the ingredients they would need for the cake . she helped measure the flour , the sugar and the cocoa . : 2nd & 3rd Turn number: : 4th &5th : 6th : 7th &8th : 9th : 10th &11th Figure 4: The transition of passage attention distribution between turns. Different colors are correspond to different turns. To show attention probability of different turns in one place, we only draw attention probability αj >0.1 here. If two turns focus on the same sentence, we average the attention probability between them. (Best viewed in color) Grammaticality Answerability Interconnectedness PGNet 2.74 1.39 1.59 MSNet 2.85 2.39 1.74 CFNet 2.89 2.74* 2.67* Table 4: Manual evaluation results. All metrics are rated on a 1-3 scale (3 for the best). Two-tailed ttest results are shown for our CFNet compared to PGNet/MSNet. * indicates p-value <0.01. answered by the current answer. “Interconnectedness” measures whether the generated questions are conversational or not. If a question refers back to the conversation history using coreference or is dependent on the conversation history such as incomplete questions ‘Why?’, ‘Of what?’, we define it as a conversational question. All metrics are rated on a 1-3 scale (3 for the best). The results are shown in Table 4. All models achieve high scores on “Grammaticality”, owing to the strong language modeling capability of neural models. 
MSNet and our CFNet perform well on “Answerability” while PGNet does not. This demonstrates our base model MSNet and our CFNet can ask questions to the point. Finally, our CFNet outperforms the other two models in terms of “Interconnectedness” by a large gap, which proves that the proposed coreference alignment and conversation flow modeling can effectively make questions conversational. 6 Related Work The task of Question Generation (QG) aims at generating natural questions from given input contexts. Some template-based approaches (Vanderwende, 2007; Heilman and Smith, 2010) were proposed initially, where well-designed rules and heavy human labor are required for declarativeto-interrogative sentence transformation. With the rise of data-driven learning approach and sequence to sequence (seq2seq) framework (Sutskever et al., 2014), Du et al. (2017) first formulate QG as a seq2seq problem with attention mechanism. They extract sentences and pair them with questions from SQuAD (Rajpurkar et al., 2016), a largescale reading comprehension dataset. Recent works along this line focus on how to utilize the answer information better to generate questions to the point (Zhou et al., 2017; Gao et al., 2019b; Sun et al., 2018), how to generate questions with specific difficulty levels (Gao et al., 2019a) and how to effectively use the contexts in paragraphs to generate questions that cover context beyond a single sentence (Zhao et al., 2018; Du and Cardie, 2018). In parallel to question generation for reading comprehension, some researchers recently investigate question generation in dialogue systems. Li et al. (2017) show that asking questions through interactions can receive useful feedbacks to reach the correct answer. Wang et al. (2018) consider asking questions in open-domain conversational systems with typed decoders to enhance the interactiveness and persistence of conversations. In this paper, we propose a new setting which is related to the above two lines of research. We consider asking questions grounded in a passage via a question-answering style conversation. Since the questions and answers are in the format of a conversation, questions in our setting are highly conversational and interconnected to conversation history. This setting is challenging because we need to jointly model the attention shifting in the passage and the structure of a conversation (Grosz and Sidner, 1986). A limitation of the conversation in our setting is that we can only generate a series of interconnected questions according to predefined answers but in a real dialog the questioner can ask different questions according to the answers’ response. 4861 7 Conclusion and Future Work In this paper, we study the problem of questionanswering style Conversational Question Generation (CQG), which has never been investigated before. We propose an end-to-end neural model with coreference alignment and conversation flow modeling to solve this problem. The coreference alignment enables our framework to refer back to the conversation history using coreferences. The conversation flow modeling builds a coherent conversation between turns. Experiments show that our proposed framework achieves the best performance in automatic and human evaluations. There are several future directions for this setting. First, the presented system is still contingent on highlighting answer-like nuggets in the declarative text. Integrating answer span identification into the presented system is a promising direction. 
Second, in our setting, the roles of the questioner and the answerer are fixed. However, questions can be raised by either part in real scenario. Acknowledgments This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We thank Department of Computer Science and Engineering, The Chinese University of Hong Kong for the conference grant support. We would like to thank Wang Chen and Jingjing Li for their comments. References Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2256–2262, Austin, Texas. Association for Computational Linguistics. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907–1917, Melbourne, Australia. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342– 1352, Vancouver, Canada. Association for Computational Linguistics. Yifan Gao, Lidong Bing, Wang Chen, Michael R. Lyu, and Irwin King. 2019a. Difficulty controllable generation of reading comprehension questions. In Proceedings of the Twenty-Eightth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Conferences on Artificial Intelligence Organization. Yifan Gao, Lidong Bing, Piji Li, Irwin King, and Michael R. Lyu. 2019b. Generating distractors for reading comprehension questions from real examinations. In AAAI Conference on Artificial Intelligence. Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175–204. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617, Los Angeles, California. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735– 1780. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2017. Learning through dialogue interactions by asking questions. In ICLR. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1802–1813, Berlin, Germany. 
Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 4862 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Iulian Vlad Serban, Alberto Garc´ıa-Dur´an, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30M factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 588–598, Berlin, Germany. Association for Computational Linguistics. Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 569–574, New Orleans, Louisiana. Association for Computational Linguistics. Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3930– 3939, Brussels, Belgium. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Lucy Vanderwende. 2007. Answering and questioning for machine reading. In AAAI Spring Symposium: Machine Reading. Siyuan Wang, Zhongyu Wei, Zhihao Fan, Yang Liu, and Xuanjing Huang. 2019. A multi-agent communication framework for question-worthy phrase extraction and question generation. In AAAI Conference on Artificial Intelligence. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in opendomain conversational systems with typed decoders. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2193–2203, Melbourne, Australia. Association for Computational Linguistics. Mark Yatskar. 2018. A qualitative comparison of coqa, squad 2.0 and quac. CoRR, abs/1809.10735. Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Saizheng Zhang, Sandeep Subramanian, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. 
In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 15–25, Vancouver, Canada. Association for Computational Linguistics. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In Proceedings of the 6th CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC), pages 662–671, Dalian, China.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4863–4872 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4863 Cross-Lingual Training for Automatic Question Generation Vishwajeet Kumar1,2, Nitish Joshi2, Arijit Mukherjee2, Ganesh Ramakrishnan2, and Preethi Jyothi2 1IITB-Monash Research Academy, Mumbai, India 2IIT Bombay, Mumbai, India {vishwajeet, nitishj, ganesh, pjyothi}@cse.iitb.ac.in {arijitmukh007}@gmail.com Abstract Automatic question generation (QG) is a challenging problem in natural language understanding. QG systems are typically built assuming access to a large number of training instances where each instance is a question and its corresponding answer. For a new language, such training instances are hard to obtain making the QG problem even more challenging. Using this as our motivation, we study the reuse of an available large QG dataset in a secondary language (e.g. English) to learn a QG model for a primary language (e.g. Hindi) of interest. For the primary language, we assume access to a large amount of monolingual text but only a small QG dataset. We propose a cross-lingual QG model which uses the following training regime: (i) Unsupervised pretraining of language models in both primary and secondary languages and (ii) joint supervised training for QG in both languages. We demonstrate the efficacy of our proposed approach using two different primary languages, Hindi and Chinese. We also create and release a new question answering dataset for Hindi consisting of 6555 sentences. 1 Introduction Automatic question generation from text is an important yet challenging problem especially when there is limited training data (i.e., pairs of sentences and corresponding questions). Standard sequence to sequence models for automatic question generation have been shown to perform reasonably well for languages like English, for which hundreds of thousands of training instances are available. However, training sets of this size are not available for most languages. Manually curating a dataset of comparable size for a new language will be tedious and expensive. Thus, it would be desirable to leverage existing question answering datasets to help build QG models for a Sentence : िवा के ये सभी प हमारे राट्रीय ान के िविवध अंग ह (All these forms of education are diverse aspects of our national knowledge system.) Question (ground truth) : िवा के सभी प हमारे राट्रीय ान के या ह  (What is the relationship between different forms of education and our national knowledge systems?) Question (predicted) : िवा के सभी प या ह  (What are all the forms of education?) Sentence : सयता का अथ है संपि! की िनरंतर वृि# , $यव%था और र'ा अपनी संपि! की र'ा औजारों के *ारा की जाती है (Civilization means continuous growth of prosperity, the system and its security are facilitated by the defense mechanism of the civilization.) Question (ground truth) : सयता का या अथ है  (What is the meaning of civilization?) Question (predicted) : सयता का या अथ है  (What is the meaning of civilization?) 1. . 2. Figure 1: Automatic QG from Hindi text. new language. This is the overarching idea that motivates this work. In this paper, we present a cross-lingual model for leveraging a large question answering dataset in a secondary language (such as English) to train models for QG in a primary language (such as Hindi) with a significantly smaller question answering dataset. We chose Hindi to be one of our primary languages. 
There is no established dataset available for Hindi that can be used to build question answering or question generation systems, making it an appropriate choice as a primary language. We create a new question answering dataset for Hindi (named HiQuAD): https://www.cse.iitb.ac.in/ ˜ganesh/HiQuAD/clqg/. Figure 1 shows two examples of sentence-question pairs from HiQuAD along with the questions predicted by our best model. We also experimented with Chinese as a primary language. This choice was informed by our desire to use a language that was very different from Hindi. We use the same secondary language – English – with both choices of our primary language. Drawing inspiration from recent work on unsupervised neural machine translation (Artetxe et al., 4864 2018; Yang et al., 2018), we propose a crosslingual model to leverage resources available in a secondary language while learning to automatically generate questions from a primary language. We first train models for alignment between the primary and secondary languages in an unsupervised manner using monolingual text in both languages. We then use the relatively larger QG dataset in a secondary language to improve QG on the primary language. Our main contributions can be summarized as follows: • We present a cross-lingual model that effectively exploits resources in a secondary language to improve QG for a primary language. • We demonstrate the value of cross-lingual training for QG using two primary languages, Hindi and Chinese. • We create a new question answering dataset for Hindi, HiQuAD. 2 Related Work Prior work in QG from text can be classified into two broad categories. Rule-based: Rule-based approaches (Heilman, 2011) mainly rely on manually curated rules for transforming a declarative sentence into an interrogative sentence. The quality of the questions generated using rule-based systems highly depends on the quality of the handcrafted rules. Manually curating a large number of rules for a new language is a tedious and challenging task. More recently, Zheng et al. (2018) propose a template-based technique to construct questions from Chinese text, where they rank generated questions using a neural model and select the topranked question as the final output. Neural Network Based: Neural network based approaches do not rely on hand-crafted rules, but instead use an encoder-decoder architecture which can be trained in an end-to-end fashion to automatically generate questions from text. Several neural network based approaches (Du et al., 2017; Kumar et al., 2018a,b) have been proposed for automatic question generation from text. Du et al. (2017) propose a sequence to sequence model for automatic question generation from English text. Kumar et al. (2018a) use a rich set of linguistic features and encode pivotal answers predicted using a pointer network based model to automatically generate a question for the encoded WEpri WEshared WDshared Denoising Autoencoding Back Translation Supervised Training WEpri, WDpri WEsec, WDsec WEpri, WDsec WEsec, WDpri All Training Phases WEshared, WDshared WEsec WDpri WDsec WEpri, WDpri WEsec, WDsec Figure 2: Schematic diagram of our cross-lingual QG system. WEpri and WEsec refer to parameters of the encoder layers specific to the primary and secondary languages; WDpri and WDsec are the weights of the corresponding decoder layers. WEshared and WDshared refer to weights of the encoder and decoder layers shared across both languages, respectively. 
Weights updated in each training phase are explicitly listed. answer. All existing models optimize a crossentropy based loss function, that suffers from exposure bias (Ranzato et al., 2016). Further, existing methods do not directly address the problem of handling important rare words and word repetition in QG. Kumar et al. (2018b) propose a reinforcement learning based framework which addresses the problem of exposure bias, word repetition and rare words. Tang et al. (2017) and Wang et al. (2017) propose a joint model to address QG and the question answering problem together. All prior work on QG assumed access to a sufficiently large number of training instances for a language. We relax this assumption in our work as we only have access to a small question answering dataset in the primary language. We show how we can improve QG performance on the primary language by leveraging a larger question answering dataset in a secondary language. (Similarly in spirit, cross-lingual transfer learning based approaches have been recently proposed for other NLP tasks such as machine translation (Schuster et al., 2019; Lample and Conneau, 2019).) 3 Our Approach We propose a shared encoder-decoder architecture that is trained in two phases. The first, is an unsupervised pretraining phase, consisting of denoising autoencoding and back-translation. This pretraining phase only requires sentences in both the primary and secondary languages. This is followed by a supervised question generation training phase that uses sentence-question pairs in both languages to fine-tune the pretrained weights. 4865 1 Unsupervised Pretraining while not converged do 2 Train autoencoder to generate sentence xp from noisy sentence ˜xp in primary language and similarly xs from ˜xs in the secondary language. 3 Back Translation: Generate sentences x ′ p and xs ′ in primary and secondary 4 languages from xs and xp respectively, using the current translation model. 5 Train a new translation model using x ′ p and xs ′ where xs and xp are used for supervision, respectively. end 6 Supervised Question Generation 7 Initialize with pretrained weights while not converged do 8 Train sequence to sequence models for question generation in both the primary and secondary languages. end Algorithm 1: Cross-lingual Training Algorithm for QG In Algorithm 1, we outline our training procedure and Figure 2 illustrates the overall architecture of our QG system. Our cross-lingual QG model consists of two encoders and two decoders specific to each language. We also enforce shared layers in both the encoder and the decoder whose weights are updated using data in both languages. (This weight sharing is discussed in more detail in Section 3.3.) For the encoder and decoder layers, we use the newly released Transformer (Vaswani et al., 2017) model that has shown great success compared to recurrent neural network-based models in neural machine translation. Encoders and decoders consist of a stack of four identical layers, of which two layers are independently trained and two are trained in a shared manner. Each layer of the transformer consists of a multi-headed selfattention model followed by a position-wise fully connected feed-forward network. 3.1 Unsupervised Pretraining We use monolingual corpora available in the primary (Hindi/Chinese) and secondary (English) languages for unsupervised pretraining. Similar to Artetxe et al. 
(2018), we use denoising autoencoders along with back-translation (described in Section 3.1.1) for pretraining the language models in both the primary and secondary languages. Specifically, we first train the model to reconstruct their inputs, which will expose the model to the grammar and vocabulary specific to each language while enforcing a shared latent-space with the help of the shared encoder and decoder layers. To prevent the model from simply learning to copy every word, we randomly permute the word order in the input sentences so that the model learns meaningful structure in the language. If xp denotes the true input sentence to be generated from the sentence with permuted word order ˜xp for the primary language, then during each pass of the autoencoder training we update the weights WEpri, WEshared, WDshared and WDpri. For the secondary language, we analogously update WEsec, WDsec and the weights in the shared layers as shown in Figure 2. 3.1.1 Back translation In addition to denoising autoencoders, we utilize back-translation (Sennrich et al., 2016a). This further aids in enforcing the shared latent space assumption by generating a pseudo-parallel corpus (Imankulova et al., 2017).1 Back translation has been demonstrated to be very important for unsupervised NMT (Yang et al., 2018; Lample et al., 2018). Given a sentence in the secondary language xs, we generate a translated sentence in the primary language, ˜xp. We then use the translated sentence ˜xp to generate the original xs back, while updating the weights WEsec, WEshared, WDshared and WDpri as shown in Figure 2. Note that we utilize denoising autoencoding and back-translation for both languages in each step of training. 3.2 Supervised Question Generation We formulate the QG problem as a sequence to sequence modeling task where the input is a sentence and the output is a semantically consistent, syntactically correct and relevant question in the same language that corresponds to the sentence. Each encoder receives a sentence x (from the corresponding language) as input and the decoder generates a question ¯y such that ¯y = arg maxy P(y|x), and P(y|x) = |y| Y t=1 P(yt|x, y<t), where probability of each subword yt is predicted conditioned on all the subwords generated previously y<t and the input sentence x. We initialize the encoder and decoder weights using unsupervised pretraining and finetune these weights further during the supervised 1A pseudo-parallel corpus consists of pairs of translated sentences using the current state of the model along with the original sentences. 4866 QG model training. Specifically, in each step of training, we update the weights WEsec, WEshared, WDshared and WDsec using QG data in the secondary language and WEpri, WEshared, WDshared and WDpri using QG data in the primary language. 3.3 More Architectural Details We make three important design choices: 1. Use of positional masks: Shen et al. (2018) point out that transformers are not capable of capturing within the attention, information about order of the sequence. Following Shen et al. (2018), we enable our encoders to use directional self attention so that temporal information is preserved. We use positional encodings which are essentially sine and cosine functions of different frequencies. More formally, positional encoding (PE) is defined as: PE(pos,2i) = sin pos m 2i dmodel ! (1) PE(pos,2i+1) = cos pos m 2i dmodel ! (2) where m is a hyper-parameter, pos is the position, dmodel is the dimensionality of the transformer and i is the dimension. 
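A generic NumPy re-implementation of these sinusoidal encodings is sketched below; it is not the paper's code, and the dimensions in the example call are only illustrative.

```python
import numpy as np

def positional_encoding(max_len, d_model, m=10000.0):
    """PE[pos, 2i] = sin(pos / m**(2i/d_model)), PE[pos, 2i+1] = cos(pos / m**(2i/d_model))."""
    pos = np.arange(max_len)[:, None]            # shape [max_len, 1]
    two_i = np.arange(0, d_model, 2)[None, :]    # even dimension indices 2i
    angles = pos / np.power(m, two_i / d_model)  # shape [max_len, d_model // 2]
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(max_len=50, d_model=300)  # d_model matches the 300-dim transformer
```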
Following Vaswani et al. (2017), we set m to 10000 in all our experiments. Directional self attention uses positional masks to inject temporal order information. Based on Shen et al. (2018), we define a forward positional mask (Mf) and a backward positional mask (Mb), Mf ij = ( 0, i < j. −∞, otherwise. Mb ij = ( 0, i > j. −∞, otherwise. that processes the sequence in the forward and backward direction, respectively. 2. Weight sharing: Based on the assumption that sentences and questions in two languages are similar in some latent space, in order to get a shared language independent representation, we share the last few layers of the encoder and the first few layers of the decoder (Yang et al., 2018). Unlike Artetxe et al. (2018); Lample et al. (2018), we do not share the encoder completely across the two languages, thus allowing the encoder layers private to each language to capture languagespecific information. We found this to be useful in our experiments. 3. Subword embeddings: We represent data using BPE (Byte Pair Encoding) (Gage, 1994) embeddings. We use BPE embeddings for both unsupervised pretraining as well as the supervised QG training phase. This allows for more fine-grained control over input embeddings compared to word-level embeddings (Sennrich et al., 2016b). This also has the advantage of maintaining a relatively smaller vocabulary size.2 4 Experimental Setup We first describe all the datasets we used in our experiments, starting with a detailed description of our new Hindi question answering dataset, “HiQuAD”. We will then describe various implementation-specific details relevant to training our models. We conclude this section with a description of our evaluation methods. 4.1 Datasets 4.1.1 HiQuAD HiQuAD (Hindi Question Answering dataset) is a new question answering dataset in Hindi that we developed for this work. This dataset contains 6555 question-answer pairs from 1334 paragraphs in a series of books called Dharampal Books. 3 Similar to SQuAD (Rajpurkar et al., 2016), an English question answering dataset that we describe further in Section 4.1.2, HiQuAD also consists of a paragraph, a list of questions answerable from the paragraph and answers to those questions. To construct sentence-question pairs, for a given question, we identified the first word of the answer in the paragraph and extracted the corresponding sentence to be paired along with the question. We curated a total of 6555 sentencequestion pairs. We tokenize the sentence-question pairs to remove any extra white spaces. For our experiments, we randomly split the HiQuAD dataset into train, 2Using word embeddings across pretraining and the main QG task makes the vocabulary very large, thus leading to large memory issues. 3HiQuAD can be downloaded from: https://www. cse.iitb.ac.in/˜ganesh/HiQuAD/clqg/ 4867 #pairs (Train set) 4000 #pairs (Dev set) 1300 #pairs (Test set) 1255 Text: avg tokens 28.64 Question: avg tokens 14.13 Table 1: HiQuAD dataset details development and test sets as shown in Table 1. All model hyperparameters are optimized using the development set and all results are reported on the test set. 4.1.2 Other Datasets We briefly describe all the remaining datasets used in our experiments. (The relevant primary or secondary language is mentioned in parenthesis, alongside the name of the datasets.) IITB Hindi Monolingual Corpus (Primary language: Hindi) We extracted 93,000 sentences from the IITB Hindi monolingual corpus4 , where each sentence has between 4 and 25 tokens. 
These sentences were used for unsupervised pretraining. IITB Parallel Corpus (Primary language: Hindi) We selected 100,000 English-Hindi sentence pairs from IITB parallel corpus (Kunchukuttan et al., 2018) where the number of tokens in the sentence was greater than 10 for both languages. We used this dataset to further fine-tune the weights of the encoder and decoder layers after unsupervised pretraining. DuReader (He et al., 2018) Chinese Dataset: (Primary language: Chinese) This dataset consists of question-answer pairs along with the question type. We preprocessed and used “DESCRIPTION” type questions for our experiments, resulting in a total of 8000 instances. From this subset, we created a 6000/1000/1000 split to construct train, development and test sets for our experiments. We also preprocessed and randomly extracted 100,000 descriptions to be used as a Chinese monolingual corpus for the unsupervised pretraining stage. News Commentary Dataset: (Primary language: Chinese) This is a parallel corpus of 4http://www.cfilt.iitb.ac.in/ iitb_parallel/iitb_corpus_download/ monolingual.hi.tgz news commentaries provided by WMT.5 It contains roughly 91000 English sentences along with their Chinese translations. We preprocessed this dataset and used this parallel data for fine-tuning the weights of the encoder and decoder layers after unsupervised pretraining. SQuAD Dataset: (Secondary language: English) This is a very popular English question answering dataset (Rajpurkar et al., 2016). We used the train split of the pre-processed QG data released by Du et al. (2017) for supervised QG training. This dataset consists of 70,484 sentencequestion pairs in English. 4.2 Implementation Details We implemented our model in TensorFlow.6 We used 300 hidden units for each layer of the transformer with the number of attention heads set to 6. We set the size of BPE embeddings to 300. Our best model uses two independent encoder and decoder layers for both languages, and two shared encoder and decoder layers each. We used a residual dropout set to 0.2 to prevent overfitting. During both the unsupervised pretraining and supervised QG training stages, we used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e−5 and batch size of 64. 4.2.1 Unsupervised Pretraining For Hindi as the primary language, we use 93000 Hindi sentences from the IITB Hindi Monolingual Corpus and around 70000 English sentences from the preprocessed SQuAD dataset for unsupervised pretraining. We pretrain the denoising autoencoders over 15 epochs. For Chinese, we use 100000 Chinese sentences from the DuReader dataset for this stage of training. 4.2.2 Supervised Question Generation Training We used 73000 sentence-question pairs from SQuAD and 4000 sentence-question pairs from HiQuAD (described in Section 4.1.1) to train the supervised QG model in Hindi. We used 6000 Chinese sentence-question pairs from the DuReader dataset to train the supervised QG model in Chinese. We initialize all the weights, including the BPE embeddings, from the pretraining phase and fine-tune them until convergence. 
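Putting Sections 3.1-3.2 and the implementation details above together, the sketch below summarizes which parameter groups of Figure 2 receive gradient updates in each training step; the helper and its names are illustrative, not the released code.

```python
def trainable_groups(phase, language=None, direction=None):
    """Parameter groups (cf. Figure 2) updated in one training step.
    direction: (encoder_language, decoder_language) for a back-translation step,
    e.g. ("sec", "pri") for the step in Section 3.1.1 that reconstructs a
    secondary-language sentence from its generated primary-language translation."""
    shared = ["W_E_shared", "W_D_shared"]
    lang = {"primary": "pri", "secondary": "sec"}.get(language)
    if phase == "autoencoding":           # denoising autoencoder on one language
        return [f"W_E_{lang}", f"W_D_{lang}"] + shared
    if phase == "back_translation":
        enc_lang, dec_lang = direction
        return [f"W_E_{enc_lang}", f"W_D_{dec_lang}"] + shared
    if phase == "supervised_qg":          # sentence-to-question training in one language
        return [f"W_E_{lang}", f"W_D_{lang}"] + shared
    raise ValueError(phase)

print(trainable_groups("back_translation", direction=("sec", "pri")))
# -> ['W_E_sec', 'W_D_pri', 'W_E_shared', 'W_D_shared']
```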
5http://opus.nlpl.eu/ News-Commentary-v11.php 6Code available at https://github.com/vishwajeet93/clqg 4868 Language Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L Hindi Transformer 28.414 18.493 12.356 8.644 23.803 29.893 Transformer+pretraining 41.059 29.294 21.403 16.047 28.159 39.395 CLQG 41.034 29.792 22.038 16.598 27.581 39.852 CLQG+parallel 42.281 32.074 25.182 20.242 29.143 40.643 Chinese Transformer 25.52 9.22 5.14 3.25 7.64 27.40 Transformer+pretraining 30.38 14.01 8.37 5.18 10.46 32.71 CLQG 30.69 14.51 8.82 5.39 10.44 31.82 CLQG+parallel 30.30 13.93 8.43 5.51 10.26 31.58 Table 2: BLEU, METEOR and ROUGE-L scores on the test set for Hindi and Chinese question generation. Best results for each metric (column) are highlighted in bold. 4.3 Evaluation Methods We evaluate our systems and report results on widely used BLEU (Papineni et al., 2002), ROUGE-L and METEOR metrics. We also performed a human evaluation study to evaluate the quality of the questions generated. Following Kumar et al. (2018a), we measure the quality of questions in terms of syntactic correctness, semantic correctness and relevance. Syntactic correctness measures the grammatical correctness of a generated question, semantic correctness measures naturalness of the question, and relevance measures how relevant the question is to the text and answerability of the question from the sentence. 5 Results We present our automatic evaluation results in Table 2, where the primary language is Hindi or Chinese and the secondary language in either setting is English. We do not report on Chinese as a secondary language owing to the relatively poor quality of the Chinese dataset. Here are all the models we compare and evaluate: • Transformer: We train a transformer model (Vaswani et al., 2017) using the QG dataset in the primary language. This serves as a natural baseline for comparison.7 This model consists of a two-layer encoder and a two-layer decoder. • Transformer+pretraining: The abovementioned Transformer model undergoes an additional step of pretraining. The encoder and decoder layers are pretrained using monolingual data from the primary language. This model will help further demonstrate the value of cross-lingual training. 7We also trained a sequence-to-sequence model by augmenting HiQuAD with SQuAD sentences translated into Hindi using Google Translate. This did not perform well giving a BLEU-4 score of 7.54. • CLQG: This is our main cross-lingual question generation model (described in Section 3) where the encoder and decoder layers are initialized in an unsupervised pretraining phase using primary and secondary language monolingual corpora, followed by a joint supervised QG training using QG datasets in the primary and secondary languages. • CLQG+parallel: The CLQG model undergoes further training using a parallel corpus (with primary language as source and secondary language as target). After unsupervised pretraining, the encoder and decoder weights are fine-tuned using the parallel corpus. This fine-tuning further refines the language models for both languages and helps enforce the shared latent space across both languages. We observe in Table 2 that CLQG+parallel outperforms all the other models for Hindi. For Chinese, parallel fine-tuning does not give significant improvements over CLQG; this could be attributed to the parallel corpus being smaller in size (when compared to Hindi) and domain-specific (i.e. the news domain). 
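For reference, corpus-level BLEU-n of the kind reported in Table 2 can be computed with NLTK as sketched below; this is a generic illustration rather than the exact evaluation script behind the reported numbers.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_n(references, hypotheses, n):
    """Corpus BLEU-n with uniform n-gram weights over tokenized questions.
    references: one gold question (token list) per example; hypotheses: generated questions."""
    weights = tuple([1.0 / n] * n)
    smooth = SmoothingFunction().method1
    return corpus_bleu([[ref] for ref in references], hypotheses,
                       weights=weights, smoothing_function=smooth)

refs = ["what was he ineligible to serve ?".split()]
hyps = ["what was he ineligible for ?".split()]
bleu4 = 100 * bleu_n(refs, hyps, 4)   # toy example with a single sentence pair
```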
Model Syntax Semantics Relevance Score Kappa Score Kappa Score Kappa Transformer 71 0.239 62.5 0.46 32 0.75 CLQG 72 0.62 68.5 0.82 54 0.42 +parallel Table 3: Human evaluation results as well as inter-rater agreement (column “Kappa”) for each model on the Hindi test set. The scores are between 0-100, 0 being the worst and 100 being the best. Best results for each metric (column) are in bold. The three evaluation criteria are: (1) syntactic correctness (Syntax), (2) semantic correctness (Semantics), and (3) relevance to the paragraph (Relevance). 4869 Sentence : आज देश म जो हो रहा है वह तो एक बहुत िनचले तर का यूरोप व अमरीका का अनुकरण हो रहा है (What is happening in the country today is a very low level emulation of Europe and America.) Question (human generated) : आज भारत देश म जो हो रहा है वह या है (How do you describe whatever is happening in India today?) Question (predicted) : आज भारत देश म जो कुछ हो रहा है वह या है (How do you describe whatever is happening in India today?) (a) Sentence : लेफेयर ने कहा िक गुवाकषण िसांत एवं इटीग्रल केलकुलस के गिणतीय िसातों के ान के िबना भारतीय गिणत इतना अचूक गिणत %योितषीय आकलन कर ही नहीं सकते थे (Playfair said that without the knowledge of the mathematical principles of ....) Question (human generated) : लेफेयर ने (या कहा ) (What did Playfair say?) Question (predicted) : लेफेयर ने (या कहा ) (What did Playfair say?) (b) Sentence : इस गाथा के अनुसार ब्र के तप व संकप से सृिट का सजन होता है , और िफर यह अनेकानेक आवतनों से होती हुई , वापस ब्र म$ लीन हो जाती है (According to this narrative, the universe is created by tenacity and resolution of...) Question (human generated) : इस गाथा के अनुसार िकससे सृिट का सजन होता है & (According to this narrative, how is the universe created?) Question (predicted) : िकस चीज़ के अनुसार सृिट का सजन होता है & (According to what the universe is created?) (c) Figure 3: Three examples of correctly generated Hindi questions by our model, further analyzed in Section 6.2. Sentence : इसी ईसाईकरण का दूसरा नाम पिचमीकरण है , िजसे करने के प्रयन वतंत्र भारत की सरकार भी करती चली आ रही ह (The second name of this Christianization is Westernization, which independent India's governments has been trying to do.) Question (human generated) : ईसाईकरण का दूसरा नाम !या है " (What is the second name of Christianization?) Question (predicted) : िव#ान का दूसरा नाम !या है " (What is the second name of science?) (a) Sentence : हम जानते ह िक अरब बहुत बड़ा िवदेश यापार करते थे (We know that the Arabs used to very big foreign trade.) Question (human generated) : अरब या करते थे  (What did Arab people used to do?) Question (predicted) : अरब लोग िकस तरह के थे  (What kind of people were the Arabs?) (b) Figure 4: Two examples of incorrectly generated Hindi questions by our model, further analyzed in Section 6.2. 6 Discussion and Analysis We closely inspect our cross-lingual training paradigm using (i) a human evaluation study in Section 6.1 (ii) detailed error analysis in Section 6.2 and (iii) ablation studies in Section 6.3. All the models analyzed in this section used Hindi as the primary language.8 6.1 Human evaluation We conduct a human evaluation study comparing the questions generated by the Transformer and CLQG+parallel models. We randomly selected a subset of 100 sentences from the Hindi test set and generated questions using both models. We presented these sentence-question pairs for each model to three language experts and asked for a binary response on three quality parameters namely syntactic correctness, semantic correctness and relevance. 
The responses from all the experts for each parameter was averaged for 8Figure 5 shows two examples of correctly generated Chinese questions. Sentence : 打开 微信 , 点击 “ 我 ” , 选择 通⽤ , 点击 功能 , 选择 群发 助⼿ , 点 开始 群发 , 如果 被 对⽅ 删 了 发布 出去. (Open WeChat, click "I", select General, click on function, select the group assistant, click to start the group, if it is deleted by the other party, release it.) Question (human generated) : 怎么 知道 对⽅ 微信 是否 把 我 删 了 ? (How do I know if I have been deleted by the other person's Wechat?) Question (predicted) : 怎样 知道 微信 好友 是否 删除 ⾃己 ? (How do I know if my WeChat friends deleted me? ) (a) Sentence : 放置 在 冰箱 ⾥ ; 把 百⾹果 洗⼲净 切成 条 放在 太 阳 底下 晒 成果 ⼲. (Put them in the refrigerator; wash and cut them into strips and dry them in the sun.) Question (human generated) : 百⾹果 怎么 保存 得 久 ⼀点 ? (How can fruit be stored for longer ?)) Question (predicted) : 樱桃 怎么 保存 ? (How to store cherries? ) (b) Figure 5: Automatic QG from Chinese text. each model to get the final numbers shown in Table 3. Although we perform comparably to the baseline model on syntactic correctness scores, we obtain significantly higher agreement across annotators using our cross-lingual model. Our crosslingual model performs significantly better than the Transformer model on “Relevance” at the cost of agreement. On semantic correctness, we perform signficantly better both in terms of the score and agreement statistics. 6.2 Error Analysis Correct examples: We show several examples where our model is able to generate semantically and syntactically correct questions in Figure 3. Figure 3b shows our model is able to generate questions that are identical to human-generated questions. Fig. 3c demonstrates that our model can generate new questions which clearly differ from the human-generated questions but are syntactically correct, semantically correct and relevant to the text. Fig. 3a shows a third question which differs from the human-generated question in only a 4870 Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CLQG (no pretraining) 31.707 20.727 13.954 9.862 24.209 32.332 CLQG 41.034 29.792 22.038 16.598 27.581 39.852 CLQG+ parallel 42.281 32.074 25.182 20.242 29.143 40.643 Table 4: Ablation study showing the importance of both unsupervised and unsupervised pretraining for Hindi Dataset BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L Hindi QG only 41.66 31.576 24.572 19.538 28.665 40.765 Hindi QG + English QG 42.281 32.074 25.182 20.242 29.143 40.643 Table 5: Ablation study showing the importance of using English QG data for Hindi QG single word but does not alter its quality. Incorrect examples: We also present a couple of examples where our model is unable to generate good questions and analyze possible reasons for the same. In Fig. 4a, the model captures the type of question correctly but gets the main subject of the sentence wrong. On the other hand, Fig. 4b shows a question which is syntactically correct and relevant to the main subject, but is not consistent with the given sentence. 6.3 Ablation Studies We performed two experiments to better understand the role of each component in our model towards automatic QG from Hindi text. 6.3.1 Importance of unsupervised pretraining We construct a model which does not employ any unsupervised or supervised pretraining but uses the same network architecture. This helps in studying the importance of pretraining in our model. We present our results in Table 4. We observe that our shared architecture does not directly benefit from the English QG dataset with simple weight sharing. 
Unsupervised pretraining (with back-translation) helps the shared encoder and decoder layers capture higher-level languageindependent information giving an improvement of approximately 7 in BLEU-4 scores. Additionally, the use of parallel data for fine-tuning unsupervised pretraining aids this process further by improving BLEU-4 scores by around 3 points. 6.3.2 Importance of secondary language resources To demonstrate the improvement in Hindi QG from the relatively larger English SQuAD dataset, we show results of using only HiQuAD during the Figure 6: Trade-off between HiQuAD training dataset size and BLEU scores. main task in Table 5; unsupervised and supervised pretraining are still employed. We obtain modest performance improvements on the standard evaluation metrics (except ROUGE-L) by using English SQuAD data in the main task. These improvements (albeit small) demonstrate that our proposed cross-lingual framework is a step in the right direction towards leveraging information from a secondary language. 6.4 How many sentence-question pairs are needed in the primary language? To gain more insight into how much data is required to be able to generate questions of high quality, Fig. 6 presents a plot of BLEU scores when the number of Hindi sentence-question pairs is varied. Here, both unsupervised and supervised pretraining are employed but the English SQuAD dataset is not used. After significant jumps in BLEU-4 performance using the first 2000 sentences, we see a smaller but steady improvement in performance with the next set of 2000 sentences. 4871 7 Conclusion Neural models for automatic question generation using the standard sequence to sequence paradigm have been shown to perform reasonably well for languages such as English, which have a large number of training instances. However, large training sets are not available for most languages. To address this problem, we present a crosslingual model that leverages a large QG dataset in a secondary language (along with monolingual data and parallel data) to improve QG performance on a primary language with a limited number of QG training pairs. In future work, we will explore the use of cross-lingual embeddings to further improve performance on this task. Acknowledgments The authors thank the anonymous reviewers for their insightful comments that helped improve this paper. The authors also gratefully acknowledge support from IBM Research, India (specifically the IBM AI Horizon Networks - IIT Bombay initiative). References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In ICLR. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In ACL. Philip Gage. 1994. A new algorithm for data compression. The C Users Journal. Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2018. DuReader: a Chinese machine reading comprehension dataset from real-world applications. In Workshop on Machine Reading for Question Answering. Michael Heilman. 2011. Automatic factual question generation from text. Language Technologies Institute School of Computer Science Carnegie Mellon University. Aizhan Imankulova, Takayuki Sato, and Mamoru Komachi. 2017. Improving low-resource neural machine translation with filtered pseudo-parallel corpus. In 4th Workshop on Asian Translation (WAT2017). Diederik P. Kingma and Jimmy Ba. 2015. 
Adam: A method for stochastic optimization. In ICLR. Vishwajeet Kumar, Kireeti Boorla, Yogesh Meena, Ganesh Ramakrishnan, and Yuan-Fang Li. 2018a. Automating reading comprehension by generating question and answer pairs. In PAKDD. Vishwajeet Kumar, Ganesh Ramakrishnan, and YuanFang Li. 2018b. A framework for automatic question generation from text using deep reinforcement learning. arXiv preprint arXiv:1808.04961. Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The iit bombay english-hindi parallel corpus. In LREC. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In EMNLP. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR. Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In NAACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In ACL. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In AAAI. Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, pages 5998–6008. Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. arXiv preprint arXiv:1706.01450. 4872 Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In ACL. Hai-Tao Zheng, JX Han, JY Chen, and Arun Kumar Sangaiah. 2018. A novel framework for automatic chinese question generation based on multi-feature neural network model. Comput. Sci. Inf. Syst.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4873–4883 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4873 A Hierarchical Reinforced Sequence Operation Method for Unsupervised Text Style Transfer Chen Wu1∗, Xuancheng Ren2∗, Fuli Luo2, Xu Sun2,3 1Department of Foreign Languages and Literatures, Tsinghua University 2MOE Key Laboratory of Computational Linguistics, School of EECS, Peking University 3Center for Data Science, Beijing Institute of Big Data Research, Peking University [email protected] {renxc, luofuli, xusun}@pku.edu.cn Abstract Unsupervised text style transfer aims to alter text styles while preserving the content, without aligned data for supervision. Existing seq2seq methods face three challenges: 1) the transfer is weakly interpretable, 2) generated outputs struggle in content preservation, and 3) the trade-off between content and style is intractable. To address these challenges, we propose a hierarchical reinforced sequence operation method, named Point-Then-Operate (PTO), which consists of a high-level agent that proposes operation positions and a lowlevel agent that alters the sentence. We provide comprehensive training objectives to control the fluency, style, and content of the outputs and a mask-based inference algorithm that allows for multi-step revision based on the single-step trained agents. Experimental results on two text style transfer datasets show that our method significantly outperforms recent methods and effectively addresses the aforementioned challenges. 1 1 Introduction Text style transfer aims to convert a sentence of one style into another while preserving the style-independent content (Shen et al., 2017; Fu et al., 2018). In most cases, aligned sentences are not available, which requires learning from nonaligned data. Previous work mainly learns disentangled content and style representations using seq2seq (Sutskever et al., 2014) models and decomposes the transfer into neutralization and stylization steps. Although impressive results have been achieved, three challenges remain: 1) the interpretability of the transfer procedure is still weak in seq2seq models, 2) generated sentences are usually highly stylized with poor content preserva∗Equal Contributions. 1 Our code is available at https://github.com/ ChenWu98/Point-Then-Operate. I will be going back and enjoying this great place ! I will be going back and enjoying this horrible place ! I will be going back and avoid this horrible place ! I will not be going back and avoid this horrible place ! Replace(great,horrible)  Replace(enjoying,avoid)  InsertBefore(be,not)  [Input]  [Iteration 1]  [Iteration 2]  [Iteration 3]  Figure 1: Our proposed Point-Then-Operate (PTO) applied to a real test sample. A high-level agent (red squares) iteratively proposes operation positions, and a low-level agent (arrows) alters the sentence based on the high-level proposals. Compared with seq2seq methods, PTO is more interpretable and better preserves style-independent contents. tion, and 3) the trade-off between content preservation and style polarity is intractable. To address these challenges, we propose a sequence operation-based method within the hierarchical reinforcement learning (HRL) framework, named Point-Then-Operate (PTO). It consists of a hierarchy of a high-level agent that proposes operation positions and a low-level agent that alters the sentence based on high-level proposals. 
We propose a policy-based training algorithm to model the key aspects in text style transfer, i.e., fluency, style polarity, and content preservation. For fluency, we use a language model reward; for style polarity, we introduce a classification confidence reward and an auxiliary classification task; for content preservation, we adopt a reconstruction reward and a self-supervised reconstruction loss. We introduce a mask-based inference algorithm that applies multi-step sequence operations to the input sentence, allowing for singlestep training which is more stable. Figure 1 shows an example of our method applied to a real test sample from Yelp. Compared with existing seq2seq methods, our 4874 sequence operation method has three merits. 1) Interpretability: our method explicitly models where and how to transfer. 2) Content preservation: sequence operations are targeted at stylized parts; thus, style-independent content can be better preserved. 3) Controllable trade-off: the trade-off between content preservation and style polarity could be tuned in our method. Specifically, we tune it by biasing the number of operation steps. We conduct extensive experiments on two text style transfer datasets, i.e., Yelp and Amazon. We show that our proposed method outperforms recent methods and that it addresses the challenges of existing seq2seq methods. The contributions of this paper are: • We propose a sequence operation method, i.e., Point-Then-Operate, for unsupervised text style transfer. The transfer procedure is modeled as explicit revisions on the input sentences, which improves interpretability, content preservation, and controllable stylecontent trade-off. • The method is interpreted and trained in the HRL framework with a high-level agent that proposes operation positions and a low-level agent that applies explicit operations. We design comprehensive learning objectives to capture three important aspects of text style transfer and propose a mask-based inference algorithm that allows for multi-step revision based on the single-step trained agents. • Experiments on Yelp and Amazon show that our method significantly improves BLEU, fluency, and content preservation compared with recent methods and effectively addresses the aforementioned challenges. 2 Related Work Text Style Transfer Most work on text style transfer learns disentangled representations of style and content. We categorize them based on how they represent content. Hidden vector approaches represent content as hidden vectors, e.g., Hu et al. (2017) adversarially incorporate a VAE and a style classifier; Shen et al. (2017) propose a cross-aligned AE that adversarially aligns the hidden states of the decoder; Fu et al. (2018) design a multi-decoder model and a style-embedding model for better style representations; Yang et al. (2018) use language models as style discriminators; John et al. (2018) utilize bagof-words prediction for better disentanglement of style and content. Deletion approaches represent content as the input sentence with stylized words deleted, e.g., Li et al. (2018) delete stylized ngrams based on corpus-level statistics and stylize it based on similar, retrieved sentences; Xu et al. (2018) jointly train a neutralization module and a stylization module the with reinforcement learning; Zhang et al. (2018a) facilitate the stylization step with a learned sentiment memory. As far as we know, there are two work that avoid disentangled representations. Zhang et al. 
(2018b) construct a pseudo-aligned dataset with an SMT model and then learn two NMT models jointly and iteratively. A concurrent work, Luo et al. (2019), propose to learn two dual seq2seq models between two styles via reinforcement learning, without disentangling style and content. Sequence Operation Methods Our work is also closely related to sequence operation methods, which are widely used in SMT (Durrani et al., 2011, 2015; Pal et al., 2016) and starts to attract attention in NMT (Stahlberg et al., 2018). Compared with methods based on seq2seq models, sequence operation methods are inherently more interpretable (Stahlberg et al., 2018). Notably, our method is revision-based, i.e., it operates directly on the input sentence and does not generate from scratch as in machine translation systems. Hierarchical Reinforcement Learning In this work, we adopt the Options Framework (Sutton et al., 1999) in HRL, in which a high-level agent learns to determine more abstract options and a low-level agent learns to take less abstract actions given the option. Recent work has shown that HRL is effective in various tasks, e.g., Atari games (Kulkarni et al., 2016), relation classification (Feng et al., 2018), relation extraction (Takanobu et al., 2018), and video captioning (Wang et al., 2018). 3 Formulation We start by formalizing the problem of our interest. Given two non-aligned sets of sentences X1 = {x(1) 1 , · · · , x(n) 1 } of style s1 and X2 = {x(1) 2 , · · · , x(m) 2 } of style s2. Unsupervised text style transfer aims to learn two conditional distributions p(x1→2|x1) and p(x2→1|x2) which alter the style of a sentence and preserve the style4875 independent content. However, defining content is not trivial. Different from previous text style transfer methods that explicitly model contents with disentangled representations, we implicitly model content with reconstruction, similar to the idea proposed adopted in CycleGAN (Zhu et al., 2017). Given the discreteness nature of natural language texts, we use sequence operations to approximate p(x1→2|x1) and p(x2→1|x2). In our notations, x1→2 and x2→1 are transferred sentences, which are the outputs of a text style transfer system; ˆx2 and ˆx1 are operated sentences, which are not necessarily fully transferred. 4 Our Approach Our proposed sequence operation-based method, Point-Then-Operate (PTO), decomposes style transfer into two steps: 1) finding where to transfer and 2) determining how to transfer. It could be naturally formulated as an HRL problem, in which a high-level agent (i.e., pointer) proposes operation positions and a low-level agent (i.e., operators) alters the sentence based on high-level proposals. In this section, we first briefly review the Options Framework in HRL. Then we introduce the proposed pointer module (§4.2) and operator modules (§4.3). The training algorithm is in §4.4, in which two extrinsic rewards, an intrinsic reward, and a self-supervised loss are proposed for fluency, style polarity, and content preservation. The inference algorithm is in §4.5, in which a mask mechanism is proposed to iteratively and dynamically apply sequence operations to the input. 4.1 Review: The Options Framework in HRL The Options framework (Sutton et al., 1999) is a well-known formulation in HRL. We denote the state space as S; the option space, O; the action space, A. The high-level agent learns a stochastic policy µ : S × O →[0, 1]. The low-level agent learns a stochastic policy πo : S×A →[0, 1], conditioned on an option o ∈O. 
Additionally, each option o ∈ O has a low-level stochastic termination condition β_o : S → [0, 1] which indicates whether the current option should end. In each episode, the high-level agent executes a trajectory (o_1, · · · , o_L) based on µ; once an option o_t is sampled, the low-level agent executes a trajectory (a_t^1, · · · , a_t^{l_t}) based on π_{o_t}, where l_t is dependent on β_{o_t}. Intuitively, the flattened trajectory for one episode is (o_1, a_1^1, · · · , a_1^{l_1}, · · · , o_L, a_L^1, · · · , a_L^{l_L}).

Module    Operation
IF_φ1     Insert a word ŵ in Front of the position
IB_φ2     Insert a word ŵ Behind the position
Rep_φ3    Replace it with another word ŵ
DC        Delete the Current word
DF        Delete the word in Front of the position
DB        Delete the word Behind the position
Skip      Do not change anything
Table 1: Operator modules. Parameters φ1, φ2, and φ3 are meant to generate their corresponding ŵ.

4.2 High-Level Agent: Pointer
The high-level policy µ aims to propose operation positions; thus, we model it as an attention-based (Bahdanau et al., 2015) pointer network, which assigns a normalized probability to each position.

Option Given a sentence x = {x_1, · · · , x_T}, the option space is O = {1, · · · , T}. Note that T changes within an episode, since operations may change the length of the sentence.

State The state is represented by the sentence representation h_T and each position representation h_i, where {h_1, · · · , h_T} is mapped from the sentence x by a bi-LSTM encoder.

Policy We adopt an attention-based policy µ:

\mu(i \mid x) = \frac{\exp(a(h_T, h_i))}{\sum_{t=1}^{T} \exp(a(h_T, h_t))}    (1)

where a(·, ·) is the scoring function for attention, and i ∈ {1, · · · , T} denotes each position in the input sentence.

4.3 Low-Level Agent: Operators
The low-level policy π alters the sentence around the position i (i.e., the option) sampled from µ. We restrict the operations to those listed in Table 1. Note that this operation set is complete: all natural language sentences can be generated in multiple steps.

Action Given the sentence x = {x_1, · · · , x_T} and the operation position i, the action of the low-level agent can be decomposed into two steps, i.e.,
1. Operator selection. Select an operator module from Table 1.
2. Word generation (optional). Generate a word, if necessary, as specified in Table 1.

State Compared with the high-level agent, our low-level agent focuses on features that are more local. We map x to {h_1, · · · , h_T}² through a bi-LSTM encoder and take h_i as the state representation.

Low-Level Termination Condition Different from the original Options Framework, in which a stochastic termination condition β_o is learned, we adopt a deterministic termination condition: the low-level agent takes one action in each option and terminates, which makes training easier and more stable. Notably, it does not harm the expressiveness of our method, since multiple options can be executed.

Policy for Operator Selection For training, we adopt a uniform policy for operator selection, i.e., we uniformly sample an operator module from Table 1. In preliminary experiments, we explored a learned policy for operator selection. However, we observed that the learned policy quickly collapses to a nearly deterministic choice of Rep_φ3. Our explanation is that, in many cases, replacing a stylized word is the optimal choice for style transfer. Thus, the uniform policy assures that all operators are trained on sufficient and diversified data. For inference, we adopt a heuristic policy based on fluency and style polarity, detailed in §4.5.3.
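The operator set in Table 1 can be made concrete with a small sketch. The function and operator names below are illustrative assumptions, not the authors' released code, and the parameterized operators (IF, IB, Rep) are shown taking the generated word as an argument instead of producing it with Eq. 2:

```python
# Hypothetical sketch of the low-level operators in Table 1 applied to a
# tokenized sentence; position i is 0-indexed and boundary checks are omitted.
from typing import List, Optional

def apply_operator(tokens: List[str], op: str, i: int,
                   new_word: Optional[str] = None) -> List[str]:
    out = list(tokens)
    if op == "IF":        # insert new_word in front of position i
        out.insert(i, new_word)
    elif op == "IB":      # insert new_word behind position i
        out.insert(i + 1, new_word)
    elif op == "Rep":     # replace the word at position i with new_word
        out[i] = new_word
    elif op == "DC":      # delete the current word
        del out[i]
    elif op == "DF":      # delete the word in front of position i
        del out[i - 1]
    elif op == "DB":      # delete the word behind position i
        del out[i + 1]
    elif op == "Skip":    # leave the sentence unchanged
        pass
    return out

# First step of Figure 1: Replace("great" -> "horrible") at position 8.
sent = "i will be going back and enjoying this great place !".split()
print(" ".join(apply_operator(sent, "Rep", 8, "horrible")))
# -> "i will be going back and enjoying this horrible place !"
```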
Policy for Word Generation As shown in Table 1, three operators are parameterized and are tasked with generating a proper word ŵ to complete the action. For each parameterized operator M, the probability of generating ŵ is

M(\hat{w} \mid x, i) = \mathrm{softmax}_{\hat{w}}(W h_i)    (2)

Notably, for each M we train two sets of parameters, one for s1 → s2 and one for s2 → s1. For readability, we omit the direction subscripts, as they can be inferred from context; parameters of the opposite direction are denoted as φ′1, φ′2, and φ′3.

4.4 Hierarchical Policy Learning
We introduce comprehensive training objectives to model the key aspects of text style transfer, i.e., fluency, style polarity, and content preservation. For fluency, we use an extrinsic language model reward; for style polarity, we use an extrinsic classification confidence reward and incorporate an auxiliary style classification task; for content preservation, we use a self-supervised reconstruction loss and an intrinsic reconstruction reward. In the following parts, we only illustrate equations related to x1 → x̂2 operations and x̂2 → x1 reconstructions for brevity; the opposite direction can be derived by swapping 1 and 2. The training algorithm is presented in Algorithm 1. A graphical overview is shown in Figure 2.

²We reuse the h and W notations for all modules for brevity.

[Figure 2: Graphical overview for the training algorithm, which consists of a transfer step (left) and a reconstruction step (right). Solid lines denote the forward pass; dotted lines denote rewards or losses. Blue / red items belong to the source / target styles; yellow items denote the agents. Best viewed in color.]

Algorithm 1 Point-Then-Operate Training
1: Input: Non-aligned sets of sentences X1,2
2: Initialize θ, φ1,2,3
3: Train language models LM2 on X2
4: Pre-train θ by optimizing L^θ_cls            ▷ Eq. 6
5: for each iteration i = 1, 2, · · · , m do
6:     Sample x1 from X1
7:     Sample i from µθ(i|x1)                   ▷ Eq. 1
8:     Uniformly sample M                       ▷ Table 1
9:     x̂2 ← Transfer(x1, M, i)                 ▷ Table 1
10:    Compute Rconf and Rlm                    ▷ Eq. 3 and 4
11:    Update θ based on L^θ_cls and ∇θ J(θ)    ▷ Eq. 6 and 9
12:    Get M′ and i′                            ▷ Table 2
13:    if M′ is parameterized by φ′ then
14:        x̄1 ← Reconstruct(x̂2, M′, i′)        ▷ Table 1
15:        Update φ′ by optimizing L^{φ′}_rec   ▷ Eq. 7
16:    end if
17:    if M is parameterized by φ then
18:        Compute Rrec if M is Rep_φ3          ▷ Eq. 8
19:        Update φ with ∇φ J(φ)                ▷ Eq. 11
20:    end if
21: end for

4.4.1 Modeling Fluency
Language Model Reward To improve fluency, we adopt a language model reward. Let LM1 and LM2 denote the language models for s1 and s2, respectively. Given the generated word ŵ in the operated sentence x̂2, the language model reward is defined as

R_{\mathrm{lm}} = \lambda_{\mathrm{lm}} \, \mathrm{LM}_2(\hat{w} \mid \hat{x}_2)    (3)

where LM2(ŵ | x̂2) denotes the probability of ŵ given the other words in x̂2. In our experiments, the probability is computed by averaging a forward LSTM-LM and a backward LSTM-LM.

M          M′                    i′
Rep_φ3     Rep_φ′3               i
DC         IF_φ′1 or IB_φ′2      i or i − 1
DF         IF_φ′1 or IB_φ′2      i − 1 or i − 2
DB         IF_φ′1 or IB_φ′2      i + 1 or i
Table 2: Construction of self-supervised data.

4.4.2 Modeling Style Polarity
Classification Confidence Reward We observe that language models are not adequate to capture style polarity; thus, we encourage a larger change in the confidence of a style classifier by adopting a classification confidence reward, i.e.,

R_{\mathrm{conf}} = \lambda_{\mathrm{conf}} \left[ p(s_2 \mid \hat{x}_2) - p(s_2 \mid x_1) \right]    (4)

where we reuse the classifier defined in Eq. 5.
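A minimal sketch of how the two extrinsic rewards above (Eqs. 3 and 4) might be computed. The forward_lm, backward_lm, and style_classifier interfaces are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical reward computation for the word generated at position i of the
# operated sentence x_hat2 (a list of tokens), following Eqs. 3 and 4.

def lm_reward(forward_lm, backward_lm, x_hat2, i, lambda_lm=1.0):
    """Eq. 3: probability of the generated word given its context, averaged
    over a forward and a backward LSTM language model (assumed interfaces)."""
    w = x_hat2[i]
    p_fwd = forward_lm.word_prob(w, context=x_hat2[:i])
    p_bwd = backward_lm.word_prob(w, context=list(reversed(x_hat2[i + 1:])))
    return lambda_lm * 0.5 * (p_fwd + p_bwd)

def conf_reward(style_classifier, x1, x_hat2, target_style, lambda_conf=1.0):
    """Eq. 4: change in classifier confidence for the target style s2."""
    return lambda_conf * (style_classifier.prob(target_style, x_hat2)
                          - style_classifier.prob(target_style, x1))
```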
Auxiliary Task: Style Classification In HRL, the high-level policy usually suffers from the high variance of gradients, since the estimated gradients are dependent on the poorly trained low-level policy. To stabilize the high-level policy learning, we introduce auxiliary supervision to the pointer. Specifically, we extend the pointer to an attention-based classifier, i.e.,

p(s_j \mid x) = \mathrm{softmax}_j\left( W \sum_{i=1}^{T} \mu(i \mid x)\, h_i \right)    (5)

for j = 1, 2. Let θ denote the parameters of the pointer. The auxiliary classification loss for θ is

L^{\theta}_{\mathrm{cls}} = \sum_{j=1,2} \mathbb{E}_{x_j \sim X_j}\left[ -\log p_{\theta}(s_j \mid x_j) \right]    (6)

The underlying assumption is that positions with larger attention weights for classification are more likely to be critical to style transfer.

4.4.3 Modeling Content Preservation
Self-Supervised Reconstruction Loss To improve content preservation, we propose a reconstruction loss that guides the operator modules with self-supervision. Suppose the word w at the i-th position is deleted or replaced by operator M; we identify the reconstruction operator M′ and reconstruction position i′ in Table 2. Then M′ is updated with MLE, by operating on position i′ in x̂2 with w as the gold output. For those with two (M′, i′) pairs, we uniformly sample one for training. Formally, the reconstruction loss is defined as

L^{\phi'}_{\mathrm{rec}} = -\log M'(w \mid \hat{x}_2, i')    (7)

Reconstruction Reward One-to-one transfer (e.g., {delicious↔bland, caring↔unconcerned}) is usually preferable to many-to-one transfer (e.g., {delicious→bad, caring→bad}). Thus, we introduce a reconstruction reward for Rep_φ3 to encourage one-to-one transfer, i.e.,

R_{\mathrm{rec}} = -\lambda_{\mathrm{rec}} L^{\phi'_3}_{\mathrm{rec}}    (8)

where L^{\phi'_3}_{\mathrm{rec}} is the reconstruction loss in Eq. 7.

4.4.4 Training with Single-Option Trajectory
Instead of executing multi-option trajectories, we only allow the high-level agent to execute a single option per episode during training, and leave the multi-option scenario to the inference algorithm (§4.5). We have two motivations for executing single-option trajectories: 1) executing multi-option trajectories is less tractable and stable, especially in the case of style transfer, which is sensitive to nuances in the sentence; 2) self-supervised reconstruction is ambiguous in a multi-option trajectory, i.e., the gold trajectory for reconstruction is not deterministic.

High-Level Policy Gradients Since the language model reward is more local and increases the variance of estimated gradients, we only use the classification confidence reward for the high-level policy. The policy gradient is

\nabla_{\theta} J(\theta) = \mathbb{E}_i\left[ R_{\mathrm{conf}} \cdot \nabla_{\theta} \log \mu_{\theta}(i \mid x_1) \right]    (9)

where gradients are detached from R_conf.

Low-Level Policy Gradients All the extrinsic and intrinsic rewards are used for low-level policy learning. Specifically, the rewards for φ1,2,3 are

R_{1,2} = R_{\mathrm{lm}} + R_{\mathrm{conf}}, \qquad R_3 = R_{\mathrm{lm}} + R_{\mathrm{conf}} + R_{\mathrm{rec}}    (10)

For φ = φ1, φ2, φ3, the policy gradient is

\nabla_{\phi} J(\phi) = \mathbb{E}_{\hat{w}}\left[ R \cdot \nabla_{\phi} \log M_{\phi}(\hat{w} \mid x_1, i) \right]    (11)

Overall Objectives The overall objectives for θ are the classification loss in Eq. 6 and the policy gradient in Eq. 9. The overall objectives for φ1,2,3 are the reconstruction loss in Eq. 7 and the policy gradients in Eq. 11.
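A minimal sketch of the single-option training step for the high-level pointer, combining the REINFORCE gradient of Eq. 9 with the auxiliary classification loss of Eq. 6. The pointer interface (returning position probabilities and style logits) and the apply_operation callback are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical single-episode update of the pointer parameters theta, assuming
# `pointer(x)` returns (position_probs, style_logits) as in Eqs. 1 and 5.
# R_conf (Eq. 4) is measured after the sampled operation has been applied,
# so it enters here as a plain number and is not differentiated through.
import torch
import torch.nn.functional as F

def pointer_step(pointer, optimizer, x1, style_label, apply_operation):
    pos_probs, style_logits = pointer(x1)
    dist = torch.distributions.Categorical(pos_probs)
    i = dist.sample()                       # operation position (the option)
    r_conf = apply_operation(x1, i.item())  # transfer step, returns R_conf

    pg_loss = -r_conf * dist.log_prob(i)    # Eq. 9: REINFORCE with detached reward
    cls_loss = F.cross_entropy(             # Eq. 6: auxiliary style classification
        style_logits.unsqueeze(0), torch.tensor([style_label]))

    optimizer.zero_grad()
    (pg_loss + cls_loss).backward()
    optimizer.step()
    return i.item(), r_conf
```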
4878 Algorithm 2 Point-Then-Operate Inference 1: Input: Input sentence x1, additional classifier padd 2: Initialize ˆx2 ←x1, ˆxm 2 ←x1, j ←1 3: while padd(s1|ˆxm 2 ) > pstop and j ≤jmax do 4: Mask the options in µθ(i|ˆx2) ▷§4.5.1 5: Select i that maximizes the masked µθ(i|ˆx2) 6: Select the best M from Table 1 ▷§4.5.3 7: Update ˆx2 ←Transfer(ˆx2, M, i) ▷§4.3 8: Update ˆxm 2 ▷§4.5.2 9: j ←j + 1 10: end while 11: The output is x1→2 ←ˆx2 12: return x1→2 4.5 Inference The main problems in applying single-step trained modules to the multi-step scenario are 1) previous steps of operations may influence later steps, and 2) we need to dynamically decide when the trajectory should terminate. We leverage a mask mechanism to address these problems. The basic idea is that given an input sentence, the high-level agent iteratively proposes operation positions for the low-level agent to operate around. In each iteration, the high-level agent sees the whole sentence but with some options (i.e., positions) masked in its policy. The trajectory termination condition is modeled by an additional pre-trained classifier. The algorithm for style transfer from s1 to s2 is detailed in Algorithm 2. 4.5.1 Masked Options To tackle the first problem, we mask the options (i.e., positions) in the high-level policy which appear in the contexts in which any words are inserted, replaced, or skipped (but not for deleted words). Note that we only mask the options in the policy but do not mask the words in the sentence (i.e., both agents still receive the complete sentence), since we cannot bias the state representations (§4.2 and §4.3) with masked tokens. We set the window size as 1 (i.e., three words are masked in each step). We find the use of window size necessary, since in many cases, e.g., negation and emphasis, the window size of 1 is capable of covering a complete semantic unit. 4.5.2 Termination Condition A simple solution to the second problem is to terminate the trajectory if the operated sentence is confidently classified as the target style. The problem with this simple solution is that the highly stylized part may result in too early termination. For example, Otherwise a terrible experience and we will go again may be classified as negative with high confidence. Thus, we propose to mask words in the operated sentence for the termination condition. The masking strategy is the same as §4.5.1 and masked words are replaced by ⟨unk⟩. To tackle the excessive number of ⟨unk⟩, we train an additional classifier as defined in §4.4.2, but trained on sentences with words randomly replaced as ⟨unk⟩. 4.5.3 Inference Policy for Operator Selection As discussed in §4.3, we adopt a heuristic inference policy for operator selection. Specifically, we enumerate each operator and select the operated sentence ˆx2 which maximizes the criterion: c(ˆx2) = LM2(ˆx2) · p(s2|ˆx2)η (12) where LM2(ˆx2) denotes the probability of ˆx2 computed by the language model LM2, p(sj|·) is the classifier defined in §4.4.2, and η is a balancing hyper-parameter. 5 Experiments 5.1 Datasets We conduct experiments on two commonly used datasets for unsupervised text style transfer, i.e., Yelp and Amazon, following the split of datasets in Li et al. (2018). Dataset statistics are shown in Table 3. For each dataset, Li et al. (2018) provided a gold output for each entry in the test set written by crowd-workers on Amazon Mechanical Turk. 
Since gold outputs are not written for development sets, we tune the hyper-parameters on the development sets based on our intuition of English. Yelp The Yelp dataset consists of business reviews and their labeled sentiments (from 1 to 5) from Yelp. Those labeled greater than 3 are considered as positive samples and those labeled smaller than 3 are negative samples. Amazon The Amazon dataset consists of product reviews and labeled sentiments from Amazon (He and McAuley, 2016). Positive and negative samples are defined in the same way as Yelp. We observe that the Amazon dataset contains many neutral or wrongly labeled sentences, which greatly harms our HRL-based sequence operation method. Thus, on the Amazon dataset, we adopt a cross-domain setting, i.e., we train the modules 4879 Dataset Attributes Train Dev Test Yelp Positive 270K 2000 500 Negative 180K 2000 500 Amazon Positive 277K 985 500 Negative 278K 1015 500 Table 3: Dataset statistics. on the Yelp training set using the Amazon vocabulary and test the method on Amazon test set. Experimental results show the effectiveness of our method under this cross-domain setting. 5.2 Evaluation Metrics Automatic Evaluation Following previous work (Shen et al., 2017; Xu et al., 2018), we pre-train a style classifier TextCNN (Kim, 2014) on each dataset and measure the style polarity of system outputs based on the classification accuracy. Also, based on the human references provided by Li et al. (2018), we adopt a caseinsensitive BLEU metric, which is computed using the Moses multi-bleu.perl script. Human Evaluation Following previous work (Shen et al., 2017; Xu et al., 2018), we also conduct human evaluations. For each input sentence and corresponding output, each participant is asked to score from 1 to 5 for fluency, content preservation, and style polarity. If a transfer gets scores of 4 or 5 on all three aspects, it is considered as a successful transfer. We count the success rate over the test set for each system, which is denoted as Suc in Table 5. 5.3 Baselines We make a comprehensive comparison with stateof-the-art style transfer methods. CrossAligned (Shen et al., 2017) aligns decoder hidden states adversarially. MultiDecoder (Fu et al., 2018) adopts multiple decoders for different styles. StyleEmbedding (Fu et al., 2018) adopts a single decoder conditioned on learned style embeddings. TemplateBased (Li et al., 2018) retrieves and replaces stylized words. DeleteOnly (Li et al., 2018) only deletes the stylized words in the input sentence. Del-Ret-Gen (Li et al., 2018) is the same as TemplateBased except that an RNN is adopted to generate the output. BackTranslate (Prabhumoye et al., 2018) stylizes the back-translated input. UnpairedRL (Xu et al., 2018) deletes stylized words and generates with a denoising AE. UnsuperMT Yelp Amazon Acc BLEU Acc BLEU CrossAligned 74.7 9.06 75.1 1.90 MultiDecoder 50.6 14.54 69.9 9.07 StyleEmbedding 8.4 21.06 38.2 15.07 TemplateBased 81.2 22.57 64.3 34.79 DeleteOnly 86.0 14.64 47.0 33.00 Del-Ret-Gen 88.6 15.96 51.0 30.09 BackTranslate 94.6 2.46 76.7 1.04 UnpairedRL 57.5 18.81 56.3 15.93 UnsuperMT 97.8 22.75 72.4 33.95 Human 74.7 43.2 Point-Then-Operate 91.5 29.86 40.2 41.86 Table 4: Automatic evaluation results for classification accuracy and BLEU with human reference. Human denotes human references. Note that Acc for human references are relatively low; thus, we do not consider it as a valid metric for comparison. (Zhang et al., 2018b) produces pseudo-aligned data and iteratively learns two NMT models. 
The outputs of the first six baselines are made public by Li et al. (2018). The outputs of BackTranslate and UnpairedRL are obtained by running the publicly available codes. We get the outputs of UnsuperMT from the authors of Zhang et al. (2018b). 5.4 Evaluation Results Table 4 shows the results of automatic evaluation. It should be noted that the classification accuracy for human reference is relatively low (74.7% on Yelp and 43.2% on Amazon); thus, we do not consider it as a valid metric for comparison. For BLEU score, our method outperforms recent systems by a large margin, which shows that our outputs have higher overlap with reference sentences provided by humans. To lighten the burden on human participants, we compare our proposed method to only four of the previous methods, selected based on their performance in automatic evaluation. Given the observation discussed in §5.1, we remove the wrongly labeled test samples for human evaluation. Table 5 shows the results of human evaluation. Our proposed method achieves the highest fluency and content preservation on Yelp and performs the best on all human evaluation metrics on Amazon. 5.5 Controllable Trade-Off Figure 3 shows how classification accuracy and BLEU change when we manually set pstop. When 4880 Yelp Amazon Fluency Content Style Suc Fluency Content Style Suc TemplateBased 3.47 3.76 3.25 68.0 % 3.46 4.08 2.15 9.0 % Del-Ret-Gen 3.82 3.73 3.52 70.3 % 4.02 4.31 2.69 21.0 % UnpairedRL 3.54 3.59 2.90 53.8 % 2.58 2.55 2.44 4.5 % UnsuperMT 4.26 4.24 4.03 82.5 % 4.24 4.13 3.05 35.5 % Point-Then-Operate 4.39 4.56 3.78 81.5 % 4.28 4.47 3.31 47.0 % Table 5: Human evaluation results. Methods are selected based on automatic evaluation. Style: style polarity; Content: content preservation; Fluency: fluency; Suc: the proportion of successful transfer (refer to §5.2) 0.0 0.2 0.4 0.6 0.8 1.0 pstop 0 20 40 60 80 100 value (%) Metric BLEU Acc (a) Yelp 0.0 0.2 0.4 0.6 0.8 1.0 pstop 25 30 35 40 45 50 value (%) Metric BLEU Acc (b) Amazon Figure 3: The controllable trade-off between content preservation and style polarity. The x-axis is pstop (defined in Algorithm 2). The y-axis is the value of different automatic metrics, i.e., BLEU (the blue lines) and classification accuracy (the orange lines). pstop is larger, classification accuracy drops and BLEU increases. Based on our observation of human references, we find that humans usually make minimal changes to the input sentence; thus, BLEU computed with human references can be viewed as an indicator of content preservation. From this perspective, Figure 3 shows that if we stop earlier, i.e., when the current style is closer to the source style, more content will be preserved and more weakly stylized words may be kept. Thus, controllable trade-off is achieved by manually setting pstop. 5.6 Ablation Studies We conduct several ablation studies to show the effect of different components in our method: Ablations of Operators To show that incorporating various operators is essential, we evaluate the performance of the following ablations: InsertOnly, ReplaceOnly, and DeleteOnly, in which operator choices are restricted to subsets of Table 1. Ablation of Reconstruction Reward and Reconstruction Loss To show the effectiveness of our reconstruction-based objectives, we remove the reconstruction reward and the reconstruction loss as an ablation. 
Yelp Amazon Acc BLEU Acc BLEU InsertOnly 68.6 23.93 48.2 36.77 ReplaceOnly 93.8 26.41 47.8 37.39 DeleteOnly 37.6 25.70 25.0 41.68 w/o Rrec and Lrec 39.1 27.80 46.3 40.52 Human 74.7 43.2 Full 91.5 29.86 40.2 41.86 Table 6: Ablation Studies. Table 6 shows the ablation results. It shows that BLEU drops if operators are restricted to a fixed set, showing the necessity of cooperating operator modules. It also shows that BLEU drops if we remove the reconstruction loss and the reconstruction reward, indicating the generated words overlap less with human references in this ablation case. As discussed in §5.4, we ignore Acc since it is low on human references. 5.7 Qualitative Study Figure 1 is an example of our method applied to a test sample. The transfer starts from more stylized parts and ends at less stylized parts, while keeping neutral parts intact. It also shows that our method learns lexical substitution and negation in an unsupervised way. Table 7 displays some comparisons of different systems. It shows that our proposed method is better at performing local changes to reverse the style of the input sentence while preserving most style-independent parts. 6 Discussions We study the system outputs and observe two cases that our method cannot properly handle: Neutral Input The reconstruction nature of our method prefers stylized input to neutral input. We observe that it fails to convert some neutral inputs, e.g., I bought this toy for my daughter about 4881 Original (Yelp, negative) staffed primarily by teenagers that do n’t understand customer service . TemplateBased staffed primarily by teenagers that huge portions and customer service are pretty good . Del-Ret-Gen staffed , the best and sterile by flies , how fantastic customer service . UnpairedRL staffed established each tech feel when great customer service professional . UnsuperMT staffed distance that love customer service . Point-Then-Operate staffed by great teenagers that do delightfully understand customer service . Original (Yelp, positive) i will be going back and enjoying this great place ! TemplateBased i will be going back and enjoying this i did not @unk Del-Ret-Gen i will be going back and will not be returning into this UnpairedRL i will be going back and enjoying this great place . UnsuperMT i wo n’t be going back and sitting this @num . Point-Then-Operate i will not be going back and avoid this horrible place ! Original (Amazon, negative) i could barely get through it they taste so nasty . TemplateBased beautifully through it they taste so nasty . Del-Ret-Gen i have used it through and it is very sharp and it was very nasty . UnpairedRL i could barely get through it they taste so nasty . UnsuperMT i can perfect get through it they taste so delicious . Point-Then-Operate i could get through it they taste so good . Original (Amazon, positive) i also prefered the blade weight and thickness of the wustof . TemplateBased i also prefered the blade weight and thickness of the wustof toe . Del-Ret-Gen i also prefered the blade and was very disappointed in the weight and thickness of the wustof . UnpairedRL i also sampled the comfortable base and follow of the uk . UnsuperMT i also encounter the blade weight and width of the guitar . Point-Then-Operate i only prefered the weight and thickness of the wustof . Table 7: Sampled system outputs. The dataset and the original style for each input sentence are parenthesized. 
We mark improperly generated or preserved words in blue, and mark words that show target style and are grammatical in the context in red. Best viewed in color. @num months ago., which shows that the highlevel policy is not well learned for some neutral sentences. Adjacent Stylized Words We introduce a window size of 1 in §4.5.1 to deal with most semantic units. However, we observe in some cases two adjacent stylized words occur, e.g., poor watery food. If the first step is to replace one of them, then the other will be masked in later iterations, leading to incomplete transfer; if the first step is deletion, our method performs well, since we do not mask the context of deletion as stated in §4.5.1. Notably, phrases like completely horrible is not one of these cases, since completely itself is not stylized. Experiments in this work show the effectiveness of our proposed method for positive-negative text style transfer. Given its sequence operation nature, we see potentials of the method for other types of transfers that require local changes, e.g., politeimpolite and written-spoken, while further empirical verification is needed. 7 Conclusions We identify three challenges of existing seq2seq methods for unsupervised text style transfer and propose Point-Then-Operate (PTO), a sequence operation-based method within the hierarchical reinforcement learning (HRL) framework consisting of a hierarchy of agents for pointing and operating respectively. We show that the key aspects of text style transfer, i.e., fluency, style polarity, and content preservation, can be modeled by comprehensive training objectives. To make the HRL training more stable, we provide an efficient mask-based inference algorithm that allows for single-option trajectory during training. Experimental results show the effectiveness of our method to address the challenges of existing methods. Acknowledgments We would like to thank the anonymous reviewers for their thorough and helpful comments. We are grateful to the authors of Zhang et al. (2018b) for providing the UnsuperMT results. Xu Sun is the corresponding author of this paper. 4882 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Nadir Durrani, Helmut Schmid, and Alexander M. Fraser. 2011. A joint sequence translation model with integrated reordering. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 1045–1054. Nadir Durrani, Helmut Schmid, Alexander M. Fraser, Philipp Koehn, and Hinrich Sch¨utze. 2015. The operation sequence model - combining n-grambased and phrase-based statistical machine translation. Computational Linguistics, 41(2):185–214. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5779– 5786. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. 
In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 663–670. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Montreal, Canada, April 11 - 15, 2016, pages 507–517. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1587–1596. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2018. Disentangled representation learning for text style transfer. CoRR, abs/1808.04339. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, pages 1746–1751. Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. 2016. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3675–3683. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1865–1874. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. CoRR, abs/1905.10060. Santanu Pal, Marcos Zampieri, and Josef van Genabith. 2016. USAAR: An operation sequential model for automatic statistical post-editing. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 1112, Berlin, Germany, pages 759–763. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W. Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 866–876. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 49 December 2017, Long Beach, CA, USA, pages 6833–6844. Felix Stahlberg, Danielle Saunders, and Bill Byrne. 2018. An operation sequence model for explainable neural machine translation. In Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, Brussels, Belgium, November 1, 2018, pages 175–186. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 813 2014, Montreal, Quebec, Canada, pages 3104– 3112. 4883 Richard S. Sutton, Doina Precup, and Satinder P. Singh. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artif. Intell., 112(1-2):181–211. Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2018. A hierarchical framework for relation extraction with reinforcement learning. CoRR, abs/1811.03925. Xin Wang, Wenhu Chen, Jiawei Wu, Yuan-Fang Wang, and William Yang Wang. 2018. Video captioning via hierarchical reinforcement learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 4213–4222. Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 979–988. Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montr´eal, Canada., pages 7298– 7309. Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018a. Learning sentiment memories for sentiment modification without parallel data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1103–1108. Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018b. Style transfer as unsupervised machine translation. CoRR, abs/1808.07894. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 2229, 2017, pages 2242–2251.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884–4895 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4884 Handling Divergent Reference Texts when Evaluating Table-to-Text Generation Bhuwan Dhingra† ∗ Manaal Faruqui‡ Ankur Parikh‡ Ming-Wei Chang‡ Dipanjan Das‡ William W. Cohen†‡ † Carnegie Mellon University ‡ Google Research [email protected] {mfaruqui,aparikh,mingweichang,dipanjand,wcohen}@google.com Abstract Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio (Lebret et al., 2016), often contain reference texts that diverge from the information in the corresponding semistructured data. We show that metrics which rely solely on the reference texts, such as BLEU and ROUGE, show poor correlation with human judgments when those references diverge. We propose a new metric, PARENT, which aligns n-grams from the reference and generated texts to the semi-structured data before computing their precision and recall. Through a large scale human evaluation study of table-to-text models for WikiBio, we show that PARENT correlates with human judgments better than existing text generation metrics. We also adapt and evaluate the information extraction based evaluation proposed in Wiseman et al. (2017), and show that PARENT has comparable correlation to it, while being easier to use. We show that PARENT is also applicable when the reference texts are elicited from humans using the data from the WebNLG challenge.1 1 Introduction The task of generating natural language descriptions of structured data (such as tables) (Kukich, 1983; McKeown, 1985; Reiter and Dale, 1997) has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them (Lebret et al., 2016; Wiseman et al., 2017; Novikova et al., 2017b; Gardent et al., 2017). For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply ∗Work done during an internship at Google. 1Code and Data: http://www.cs.cmu.edu/ ~bdhingra/pages/parent.html repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models (Bangalore et al., 2000). For table-to-text generation, automatic evaluation has largely relied on BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure 1 shows an example from the WikiBio dataset (Lebret et al., 2016). Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table. We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§5.4). For many table-to-text generation tasks, the tables themselves are in a pseudonatural language format (e.g., WikiBio, WebNLG (Gardent et al., 2017), and E2E-NLG (Dušek et al., 2019)). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. 
We develop a new metric, PARENT (Precision And Recall of Entailed Ngrams from the Table) (§3). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table.2 We 2Here “entailed” means can be reasonably inferred from 4885 Figure 1: A table from the WikiBio dataset (right), its reference description and three hypothetical generated texts with scores assigned to them by automatic evaluation metrics. Text which cannot be inferred from the table is in red, and text which can be inferred but isn’t present in the reference is in green. PARENT is our proposed metric. show that this method is more effective than using the table as an additional reference. Our main contributions are: • We conduct a large-scale human evaluation of the outputs from 16 table-to-text models on 1100 examples from the WikiBio dataset, many of which have divergent references (§5.2). • We propose a new metric, PARENT (§3), and show that it improves correlation with human judgments over existing metrics, both when comparing similar systems (such as different hyperparameters of a neural network) and when comparing vastly different systems (such as template-based and neural models). • We also develop information extraction based metrics, inspired from Wiseman et al. (2017), by training a model to extract tables from the reference texts (§4). We find that these metrics have comparable correlation to PARENT, with the latter being easier to use out of the box. • We analyze the sensitivity of the metrics to divergence by collecting labels for which references contain only information also present in the tables. We show that PARENT maintains high correlation as the number of such examples is varied. (§5.5). • We also demonstrate the applicability of PARENT on the data released as part of the WebNLG challenge (Gardent et al., 2017), where the references are elicited from humans, and hence are of high quality (§5.4). 2 Table-to-Text Generation We briefly review the task of generating natural language descriptions of semi-structured data, which we refer to as tables henceforth (Barzilay the corresponding table. In practice, we use simple lexical entailment models to determine this. and Lapata, 2005; Liang et al., 2009). Tables can be expressed as set of records T = {rk}K k=1, where each record is a tuple (entity, attribute, value). When all the records are about the same entity, we can truncate the records to (attribute, value) pairs. For example, for the table in Figure 1, the records are {(Birth Name, Michael Dahlquist), (Born, December 22 1965), ...}. The task is to generate a text G which summarizes the records in a fluent and grammatical manner.3 For training and evaluation we further assume that we have a reference description R available for each table. We let DM = {(T i, Ri, Gi)}N i=1 denote an evaluation set of tables, references and texts generated from a model M, and Ri n, Gi n denote the collection of n-grams of order n in Ri and Gi, respectively. We use #Rin(g) to denote the count of n-gram g in Ri n, and #Gin,Rin(g) to denote the minimum of its counts in Ri n and Gi n. 
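These count definitions can be made concrete with a short sketch; the helper name and the example strings below are illustrative, not taken from the released implementation:

```python
# Hypothetical helpers for #_{R_n}(g), the count of n-gram g in the reference,
# and #_{G_n,R_n}(g), the minimum of its counts in the generation and reference.
from collections import Counter
from typing import List

def ngram_counts(tokens: List[str], n: int) -> Counter:
    return Counter(tuple(tokens[j:j + n]) for j in range(len(tokens) - n + 1))

# Toy example loosely modeled on Figure 1.
reference = "michael dahlquist was a drummer in the band silkworm".split()
generated = "michael dahlquist was a drummer".split()

R2, G2 = ngram_counts(reference, 2), ngram_counts(generated, 2)
g = ("a", "drummer")
count_in_ref = R2[g]               # #_{R_n}(g)
count_in_both = min(G2[g], R2[g])  # #_{G_n,R_n}(g)
print(count_in_ref, count_in_both)  # -> 1 1
```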
Our goal is to assign a score to the model which correlates highly with human judgments of the quality of that model.

Divergent References. In this paper we are interested in the case where reference texts diverge from the tables. In Figure 1, the reference, though technically correct and fluent, mentions information which cannot be gleaned from the associated table. It also fails to mention useful information which a generation system might correctly include (e.g. candidate 3 in the figure). We call such references divergent from the associated table. This phenomenon is quite common – in WikiBio we found that 62% of the references mention extra information (§5.5). Divergence is common in human-curated translation datasets as well (Carpuat et al., 2017; Vyas et al., 2018).

How does divergence affect automatic evaluation? As a motivating example, consider the three candidate generations shown in Figure 1. Clearly, candidate 1 is the worst since it "hallucinates" false information, and candidate 3 is the best since it is correct and mentions more information than candidate 2. However, BLEU and ROUGE, which only compare the candidates to the reference, penalize candidate 3 for both excluding the divergent information in the reference (in red) and including correct information from the table (in green). (BLEU is usually computed at the corpus level; here we show its value for a single sentence purely for illustration purposes. The remaining BLEU scores in this paper are all at the corpus level.) PARENT, which compares to both the table and reference, correctly ranks the three candidates.

3 PARENT

PARENT evaluates each instance $(T^i, R^i, G^i)$ separately, by computing the precision and recall of $G^i$ against both $T^i$ and $R^i$.

Entailment Probability. The table is in a semi-structured form, and hence not directly comparable to the unstructured generated or reference texts. To bridge this gap, we introduce the notion of entailment probability, which we define as the probability that the presence of an n-gram g in a text is "correct" given the associated table. We denote this probability as $w(g) = \Pr(g \Leftarrow T^i)$. Estimating this probability is in itself a challenging language understanding task, since the information in the table may be expressed in varied forms in text. Here, we describe two simple models of lexical entailment, inspired by work on the Recognizing Textual Entailment Challenge (Dagan et al., 2006). We found these simple models to be effective; while more sophisticated models may be used if there are complex inferences between the table and text, they are beyond the scope of this paper.

1. Word Overlap Model: Let $\bar{T}^i$ denote all the lexical items present in the table $T^i$, including both attribute names and their values. Then, $w(g) = \sum_{j=1}^{n} \mathbb{1}(g_j \in \bar{T}^i) / n$, where n is the length of g, and $g_j$ is the jth token in g.

2. Co-occurrence Model (Glickman and Dagan, 2005): Originally proposed for the RTE task, this model computes the probability of a term $g_j$ in the n-gram being entailed by the table as the maximum of its probabilities of being entailed by each lexical item v in the table:

$\Pr(g_j \Leftarrow T^i) = \max_{v \in \bar{T}^i} \Pr(g_j \Leftarrow v).$   (1)

$\Pr(g_j \Leftarrow v)$ is estimated using co-occurrence counts from a training set of table-reference pairs. The overall probability of the n-gram being entailed is then taken as the geometric mean $w(g) = \big[\prod_{j=1}^{n} \Pr(g_j \Leftarrow T^i)\big]^{1/n}$. (Glickman and Dagan (2005) used a product instead of a geometric mean; we use a geometric mean to ensure that n-grams of different lengths have comparable probabilities of being entailed.)
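The two lexical entailment models can be sketched as follows. The word-overlap model follows the formula above directly; for the co-occurrence model, `pr_token_given_item` stands in for the probabilities estimated from co-occurrence counts on training table-reference pairs and is a hypothetical lookup, not an implemented estimator.

```python
def table_lexical_items(table):
    """All tokens appearing in attribute names or values of the table (the set T-bar)."""
    items = set()
    for attribute, value in table:
        items.update(attribute.split())
        items.update(value.split())
    return items

def word_overlap_w(ngram, table):
    """Word Overlap Model: w(g) = (1/n) * sum_j 1[g_j in T-bar]."""
    items = table_lexical_items(table)
    return sum(1.0 for token in ngram if token in items) / len(ngram)

def cooccurrence_w(ngram, table, pr_token_given_item):
    """Co-occurrence Model: Pr(g_j <= T) = max_v Pr(g_j <= v), combined over the
    tokens of the n-gram with a geometric mean."""
    items = table_lexical_items(table)
    product = 1.0
    for token in ngram:
        product *= max(pr_token_given_item(token, v) for v in items)
    return product ** (1.0 / len(ngram))
```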
We note that these models are not sensitive to paraphrases between the table and text. For tasks where this is important, embedding-based similarities may be used, but those are beyond the scope of this paper. Next we discuss how to compute the precision and recall of the generation.

Entailed Precision. When computing precision, we want to check what fraction of the n-grams in $G^i_n$ are correct. We consider an n-gram g to be correct either if it occurs in the reference $R^i_n$, or if it has a high probability of being entailed by the table (i.e. w(g) is high). (It is unlikely that an automated system produces the same extra n-gram as present in the reference, thus a match with a reference n-gram is considered positive. For example, in Figure 1, it is highly unlikely that a system would produce "Silkworm" when it is not present in the table.) Let $\Pr(g \in R^i_n) = \#_{G^i_n,R^i_n}(g) / \#_{G^i_n}(g)$ denote the probability that an n-gram in $G^i_n$ also appears in $R^i_n$. Then, the entailed precision $E^n_p$ for n-grams of order n is given by:

$E^n_p = \frac{\sum_{g \in G^i_n} \big[\Pr(g \in R^i_n) + \Pr(g \notin R^i_n)\, w(g)\big]\, \#_{G^i_n}(g)}{\sum_{g \in G^i_n} \#_{G^i_n}(g)} = \frac{\sum_{g \in G^i_n} \#_{G^i_n}(g)\, w(g) + \#_{G^i_n,R^i_n}(g)\,\big[1 - w(g)\big]}{\sum_{g \in G^i_n} \#_{G^i_n}(g)}.$   (2)

In words, an n-gram receives a reward of 1 if it appears in the reference, with probability $\Pr(g \in R^i_n)$, and otherwise it receives a reward of w(g). Both numerator and denominator are weighted by the count of the n-gram in $G^i_n$. $\Pr(g \in R^i_n)$ rewards an n-gram for appearing as many times as it appears in the reference, not more. We combine precisions for n-gram orders 1-4 using a geometric average, similar to BLEU:

$E_p = \exp\Big(\sum_{n=1}^{4} \tfrac{1}{4} \log E^n_p\Big)$   (3)

Entailed Recall. We compute recall against both the reference ($E_r(R^i)$), to ensure proper sentence structure in the generated text, and the table ($E_r(T^i)$), to ensure that texts which mention more information from the table get higher scores (e.g. candidate 3 in Figure 1). These are combined using a geometric average:

$E_r = E_r(R^i)^{1-\lambda}\, E_r(T^i)^{\lambda}$   (4)

The parameter λ trades off how much the generated text should match the reference, versus how much it should cover information from the table. The geometric average, which acts as an AND operation, ensures that the overall recall is high only when both the components are high. We found this necessary to assign low scores to bad systems which, for example, copy values from the table without phrasing them in natural language.

When computing $E_r(R^i)$, divergent references will have n-grams with low w(g). We want to exclude these from the computation of recall, and hence their contributions are weighted by w(g):

$E^n_r(R^i) = \frac{\sum_{g \in R^i_n} \#_{G^i_n,R^i_n}(g)\, w(g)}{\sum_{g \in R^i_n} \#_{R^i_n}(g)\, w(g)}.$   (5)

Similar to precision, we combine recalls for n = 1-4 using a geometric average to get $E_r(R^i)$.

For computing $E_r(T^i)$, note that a table is a set of records $T^i = \{r_k\}_{k=1}^{K}$. For a record $r_k$, let $\bar{r}_k$ denote its string value (such as "Michael Dahlquist" or "December 22 1965"). Then:

$E_r(T^i) = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{|\bar{r}_k|}\, \mathrm{LCS}(\bar{r}_k, G^i),$   (6)

where $|\bar{r}_k|$ denotes the number of tokens in the value string, and LCS(x, y) is the length of the longest common subsequence between x and y. The LCS function, borrowed from ROUGE, ensures that entity names in $\bar{r}_k$ appear in the same order in the text as in the table. Higher values of $E_r(T^i)$ indicate that more records are likely to be mentioned in $G^i$.

The entailed precision and recall are combined into an F-score to give the PARENT metric for one instance. The system-level PARENT score for a model M is the average of instance-level PARENT scores across the evaluation set:

$\frac{1}{N} \sum_{i=1}^{N} \mathrm{PARENT}(G^i, R^i, T^i)$   (7)

Smoothing & Multiple References. The danger with geometric averages is that if any of the components being averaged become 0, the average will also be 0. Hence, we adopt a smoothing technique from Chen and Cherry (2014) that assigns a small positive value ϵ to any of $E^n_p$, $E^n_r(R^i)$ and $E_r(T^i)$ which are 0. When multiple references are available for a table, we compute PARENT against each reference and take the maximum as its overall score, similar to METEOR (Denkowski and Lavie, 2014).

Choosing λ and ϵ. To set the value of λ we can tune it to maximize the correlation of the metric with human judgments, when such data is available. When such data is not available, we can use the recall of the reference against the table, computed using Eq. 6, as the value of 1 − λ. The intuition here is that if the recall of the reference against the table is high, it already covers most of the information, and we can assign it a high weight in Eq. 4. This leads to a separate value of λ automatically set for each instance (for WikiBio, λ = 0.6 on average using this heuristic). ϵ is set to $10^{-5}$ for all experiments.
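Putting Eqs. (2)–(6) together, a compact per-instance sketch is given below. Here `gen` and `ref` are token lists, `table` is a list of (attribute, value) string pairs, and `w` is any entailment-probability function over n-gram tuples with the table already bound (e.g. the word-overlap model above). The F-score is taken as a plain harmonic mean and the multiple-reference maximum is omitted; this is an illustration of the definitions, not the released implementation.

```python
import math
from collections import Counter

EPSILON = 1e-5  # smoothing value used in the paper

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def lcs_length(x, y):
    """Length of the longest common subsequence, as in Eq. (6)."""
    prev = [0] * (len(y) + 1)
    for xi in x:
        cur = [0]
        for j, yj in enumerate(y, 1):
            cur.append(prev[j - 1] + 1 if xi == yj else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def entailed_precision(gen, ref, w, max_n=4):
    """Eqs. (2)-(3): per-order precisions combined by a geometric average."""
    precisions = []
    for n in range(1, max_n + 1):
        g_n, r_n = ngrams(gen, n), ngrams(ref, n)
        numer = sum(c * w(g) + min(c, r_n[g]) * (1 - w(g)) for g, c in g_n.items())
        denom = sum(g_n.values())
        precisions.append(max(numer / denom if denom else 0.0, EPSILON))
    return math.exp(sum(math.log(p) for p in precisions) / max_n)

def reference_recall(gen, ref, w, max_n=4):
    """Eq. (5), combined over n = 1..4 with a geometric average."""
    recalls = []
    for n in range(1, max_n + 1):
        g_n, r_n = ngrams(gen, n), ngrams(ref, n)
        numer = sum(min(g_n[g], c) * w(g) for g, c in r_n.items())
        denom = sum(c * w(g) for g, c in r_n.items())
        recalls.append(max(numer / denom if denom else 0.0, EPSILON))
    return math.exp(sum(math.log(r) for r in recalls) / max_n)

def table_recall(tokens, table):
    """Eq. (6): average LCS-based coverage of each record's value string."""
    scores = [lcs_length(value.split(), tokens) / len(value.split())
              for _, value in table]
    return max(sum(scores) / len(scores) if scores else 0.0, EPSILON)

def parent_instance(gen, ref, table, w, lam=None):
    """Per-instance PARENT: F-score of entailed precision and entailed recall."""
    e_p = entailed_precision(gen, ref, w)
    if lam is None:
        lam = 1.0 - table_recall(ref, table)  # heuristic: 1 - lambda = recall of ref vs. table
    e_r = (reference_recall(gen, ref, w) ** (1 - lam)) * (table_recall(gen, table) ** lam)
    return 2 * e_p * e_r / (e_p + e_r) if e_p + e_r else 0.0
```

For example, `parent_instance(generation, reference, table, w=functools.partial(word_overlap_w, table=table))` returns the per-instance score that Eq. (7) averages over the evaluation set.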
4 Evaluation via Information Extraction

Wiseman et al. (2017) proposed to use an auxiliary model, trained to extract structured records from text, for evaluation. However, the extraction model presented in that work is limited to the closed-domain setting of basketball game tables and summaries. In particular, they assume that each table has exactly the same set of attributes for each entity, and that the entities can be identified in the text via string matching. These assumptions are not valid for the open-domain WikiBio dataset, and hence we train our own extraction model to replicate their evaluation scheme.

Our extraction system is a pointer-generator network (See et al., 2017), which learns to produce a linearized version of the table from the text (all (attribute, value) pairs are merged into one long string using special separator tokens between them). The network learns which attributes need to be populated in the output table, along with their values. It is trained on the training set of WikiBio. At test time we parse the output strings into a set of (attribute, value) tuples and compare them to the ground truth table. The F-score of this text-to-table system was 35.1%, which is comparable to other challenging open-domain settings (Huang et al., 2017). More details are included in Appendix A.1.

Given this information extraction system, we consider the following metrics for evaluation, along the lines of Wiseman et al. (2017). Content Selection (CS): F-score for the (attribute, value) pairs extracted from the generated text compared to those extracted from the reference. Relation Generation (RG): Precision for the (attribute, value) pairs extracted from the generated text compared to those in the ground truth table. RG-F: Since our task emphasizes the recall of information from the table as well, we consider another variant which computes the F-score of the extracted pairs against those in the table.
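Once (attribute, value) pairs have been extracted from the generated text and the reference, the three extraction-based metrics reduce to set precision/recall computations, sketched below; exact string matching over pairs is assumed, as in Appendix A.1.

```python
def prf(predicted, gold):
    """Precision, recall and F-score between two collections of (attribute, value) pairs."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    p = true_positives / len(predicted) if predicted else 0.0
    r = true_positives / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def ie_metrics(pairs_from_generation, pairs_from_reference, table_pairs):
    cs = prf(pairs_from_generation, pairs_from_reference)[2]  # CS: F-score vs. reference extraction
    rg = prf(pairs_from_generation, table_pairs)[0]           # RG: precision vs. the table
    rg_f = prf(pairs_from_generation, table_pairs)[2]         # RG-F: F-score vs. the table
    return {"CS": cs, "RG": rg, "RG-F": rg_f}
```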
We omit the content ordering metric, since our extraction system does not align records to the input text. 5 Experiments & Results In this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given l models M1, . . . , Ml, and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models, ¯h = (h1, . . . , hl) (§5.2). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, ¯a = (a1, . . . , al), and check the Pearson correlation between ¯h and ¯a (Graham and Baldwin, 2014).9 5.1 Data & Models Our main experiments are on the WikiBio dataset (Lebret et al., 2016), which is automatically constructed and contains many divergent references. In §5.6 we also present results on the data released as part of the WebNLG challenge. We developed several models of varying quality for generating text from the tables in WikiBio. This gives us a diverse set of outputs to evaluate the automatic metrics on. Table 1 lists the models along with their hyperparameter settings and their scores from the human evaluation (§5.2). Our focus is primarily on neural sequence-to-sequence methods since these are most widely used, but we 9We observed similar trends for Spearman correlation. Name Beam Size Length Penalty Beam Rescoring Human Eval References – – – 0.20 ± 0.03 Template – – – -0.19 ± 0.04 Seq2Seq 1 0 No -0.28 ± 0.03 Seq2Seq + Att 1 0 No -0.12 ± 0.03 PG-Net 1,4,8 0,1,2,3 No,Yes 0.40 ± 0.03 Table 1: Models used for WikiBio, with the human evaluation scores for these model outputs and the reference texts. PG-Net: Pointer-Generator network. Human scores computed using Thurstone’s method (Tsukida and Gupta, 2011). also include a template-based baseline. All neural models were trained on the WikiBio training set. Training details and sample outputs are included in Appendices A.2 & A.3. We divide these models into two categories and measure correlation separately for both the categories. The first category, WikiBio-Systems, includes one model each from the four families listed in Table 1. This category tests whether a metric can be used to compare different model families with a large variation in the quality of their outputs. The second category, WikiBioHyperparams, includes 13 different hyperparameter settings of PG-Net (See et al., 2017), which was the best performing system overall. 9 of these were obtained by varying the beam size and length normalization penalty of the decoder network (Wu et al., 2016), and the remaining 4 were obtained by re-scoring beams of size 8 with the information extraction model described in §4. All the models in this category produce high quality fluent texts, and differ primarily on the quantity and accuracy of the information they express. Here we are testing whether a metric can be used to compare similar systems with a small variation in performance. This is an important use-case as metrics are often used to tune hyperparameters of a model. 5.2 Human Evaluation We collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure 2 shows the instructions they were given. 
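The system-level protocol from the start of this section—score each model with a metric, average over the evaluation set, then correlate with the aggregated human scores—can be sketched as follows; the data layout and the `metric_fn` signature are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import pearsonr

def metric_correlation(human_scores, metric_fn, outputs_by_model, models):
    """Compute a = (a_1, ..., a_l) by averaging instance-level metric scores for each
    model, then return the Pearson correlation with h = (h_1, ..., h_l).
    `outputs_by_model[m]` is assumed to be a list of (table, reference, generation)."""
    metric_scores = [
        np.mean([metric_fn(gen, ref, table) for table, ref, gen in outputs_by_model[m]])
        for m in models
    ]
    return pearsonr(human_scores, metric_scores)[0]
```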
Paired comparisons have been shown to be superior to rating scales for comparing generated texts 4889 Figure 2: Instructions to crowd-workers for comparing two generated texts. (Callison-Burch et al., 2007). However, for measuring correlation the comparisons need to be aggregated into real-valued scores, ¯h = (h1, . . . , hl), for each of the l = 16 models. For this, we use Thurstone’s method (Tsukida and Gupta, 2011), which assigns a score to each model based on how many times it was preferred over an alternative. The data collection was performed separately for models in the WikiBio-Systems and WikiBioHyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table 1). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio. 5.3 Compared Metrics Text only: We compare BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Denkowski and Lavie, 2014), CIDEr and CIDErD (Vedantam et al., 2015) using their publicly available implementations. Information Extraction based: We compare the CS, RG and RG-F metrics discussed in §4. Text & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. BLEUT draws inspiration from iBLEU (Sun and Zhou, 2012) but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single λ is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C. Metric WikiBio Systems WikiBio Hyperparams Avg ROUGE 0.518±0.07C,W -0.585±0.15C,W -0.034 CIDEr 0.674±0.06C,W -0.516±0.15C,W 0.079 CIDEr-D 0.646±0.06C,W -0.372±0.16C,W 0.137 METEOR 0.697±0.06C,W -0.079±0.24C,W 0.309 BLEU 0.548±0.07C,W 0.407±0.15C,W 0.478 CS 0.735±0.06W -0.604±0.16C,W 0.066 BLEU-T 0.688±0.11W 0.587±0.14C,W 0.638 RG 0.645±0.07C,W 0.749±0.12 0.697 RG-F 0.753±0.06W 0.763±0.12 0.758 PARENT-C 0.776±0.05W 0.755±0.12 0.766 PARENT-W 0.912±0.03 0.763±0.12 0.838 PARENT*-C 0.976±0.01 0.793±0.11 0.885 PARENT*-W 0.982±0.01 0.844±0.10 0.913 Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1. 5.4 Correlation Comparison We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table 2. The distribution of correlations for the best performing metrics are shown in Figure 3. Table 2 also indicates whether PARENT is significantly better than a baseline metric. 
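A sketch of the bootstrap comparison behind the significance markers in Table 2, assuming instance-level scores are stored as (num_tables × num_models) arrays and that, for simplicity, human scores are re-aggregated by a mean rather than Thurstone's method.

```python
import numpy as np
from scipy.stats import pearsonr

def bootstrap_correlation_difference(human, metric_a, metric_b, iters=500, alpha=0.1, seed=0):
    """Resample tables with replacement, recompute system-level correlations of two
    metrics with the human scores, and return a (1 - alpha) interval for the
    difference; metric A is significantly better if the lower bound is above 0."""
    rng = np.random.default_rng(seed)
    num_tables = human.shape[0]
    diffs = []
    for _ in range(iters):
        idx = rng.choice(num_tables, size=num_tables, replace=True)
        h = human[idx].mean(axis=0)
        corr_a = pearsonr(h, metric_a[idx].mean(axis=0))[0]
        corr_b = pearsonr(h, metric_b[idx].mean(axis=0))[0]
        diffs.append(corr_a - corr_b)
    lower, upper = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper
```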
Graham and Baldwin (2014) suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a 1 − α confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 (Wilcox, 2016).

Figure 3: Distribution of metric correlations across 500 bootstrap samples. PRT = PARENT.

Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T). Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation. On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In §5.6 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting λ is sufficient to produce high correlations for PARENT; however, if human annotations are available, it can be tuned to produce significantly higher correlations (PARENT*-W/C).

Figure 4: Correlation of the metrics to human judgment as the percentage of entailed examples in WikiBio is varied.

5.5 Analysis

In this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table 2.

Effect of Divergence. To study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only 38% of the references were labeled as entailed by the table. Fleiss' κ was 0.30, which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table.
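The agreement statistic reported above can be reproduced with a standard Fleiss' kappa computation; a minimal sketch assuming every table-reference pair received the same number of binary judgments (three here).

```python
from collections import Counter

def fleiss_kappa(labels_per_item):
    """Fleiss' kappa; `labels_per_item[i]` is the list of labels (e.g. 0/1 entailment
    judgments) assigned to item i, with the same number of raters per item."""
    num_items = len(labels_per_item)
    num_raters = len(labels_per_item[0])
    category_totals = Counter()
    mean_agreement = 0.0
    for labels in labels_per_item:
        counts = Counter(labels)
        category_totals.update(counts)
        mean_agreement += (sum(c * c for c in counts.values()) - num_raters) / (
            num_raters * (num_raters - 1))
    mean_agreement /= num_items
    total = num_items * num_raters
    expected = sum((n / total) ** 2 for n in category_totals.values())
    return (mean_agreement - expected) / (1 - expected)
```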
Figure 4 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference. 10The trends were similar for PARENT-C. 4891 BLEU BLEU-T RG-F PARENT-W PARENT-C 0.556 0.567∗ 0.588∗ 0.598‡ 0.606† Table 3: Accuracy on making the same judgments as humans between pairs of generated texts. p < 0.01∗/0.05†/0.10‡: accuracy is significantly higher than the next best accuracy to the left using a paired McNemar’s test. Ablation Study. We check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability w(g) of an ngram g being entailed by the table from Eqs. 2 and 5.11 The average correlation for PARENT-W drops to 0.168 in this case. We also try a variant of PARENT with λ = 0, which removes the contribution of Table Recall (Eq. 4). The average correlation is 0.328 in this case. With these components, the correlation is 0.838, showing that they are crucial to the performance of PARENT. Sentence Level Discrimination. Chaganty et al. (2018) point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table 3 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only 60% for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation. 5.6 WebNLG Dataset To check how PARENT correlates with human judgments when the references are elicited from humans (and less likely to be divergent), we check its correlation with the human ratings provided for the systems competing in the WebNLG challenge (Gardent et al., 2017). The task is to generate text describing 1-5 RDF triples (e.g. John E Blaha, birthPlace, San Antonio), and human ratings were collected for the outputs of 9 participating systems on 223 instances. These systems include a mix of pipelined, statistical and neural methods. Each instance has upto 3 reference texts associated with 11When computing precision we set w(g) = 0, and when computing recall we set w(g) = 1 for all g. 
Metric Grammar Fluency Semantics Avg METEOR 0.788±0.04 0.792±0.04 0.576±0.06 0.719 ROUGE 0.788±0.04 0.792±0.04 0.576±0.06 0.719 CIDEr 0.804±0.03 0.753±0.04 0.860±0.02 0.806 BLEU 0.858±0.02 0.811±0.03 0.775±0.03 0.815 BLEU-T 0.849±0.02 0.801±0.03 0.816±0.02 0.822 CIDErD 0.838±0.04 0.796±0.04 0.853±0.02 0.829 PARENT-W 0.821±0.03 0.768±0.04 0.887±0.02 0.825 PARENT-C 0.851±0.03 0.809±0.04 0.877±0.02 0.846 Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge. the RDF triples, which we use for evaluation. The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table 4.12 Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well. While BLEU has the highest correlation for the grammar and fluency aspects, PARENT does best for semantics. This suggests that the inclusion of source tables into the evaluation orients the metric more towards measuring the fidelity of the content of the generation. A similar trend is seen comparing BLEU and BLEU-T. As modern neural text generation systems are typically very fluent, measuring their fidelity is of increasing importance. Between the two entailment models, PARENTC is better due to its higher correlation with the grammaticality and fluency aspects. Distribution of λ. The λ parameter in the calculation of PARENT decides whether to compute recall against the table or the reference (Eq. 4). Figure 5 shows the distribution of the values taken by 1 −λ using the heuristic described in §3 for instances in both WikiBio and WebNLG. For WikiBio, the recall of the references against the table is generally low, and hence the recall of the generated text relies more on the table. For WebNLG, where the references are elicited from humans, this recall is much higher (often 1.0), and hence 12 We omit extractive evaluation metrics since no extraction systems are publicly available for this dataset, and developing one is beyond the scope of this work. 4892 0.0 0.2 0.4 0.6 0.8 1.0 1 −λ 0 50 100 150 200 250 300 350 400 Frequency WikiBio WebNLG Figure 5: Histogram of the recall of the references against the table (Eq. 6), which is used to set 1 −λ. Lower values indicate that the metric relies more on the table and less on the reference. the recall of the generated text relies more on the reference. 6 Related Work Over the years several studies have evaluated automatic metrics for measuring text generation performance (Callison-Burch et al., 2006; Stent et al., 2005; Belz and Reiter, 2006; Reiter, 2018; Liu et al., 2016; Kilickaya et al., 2017; Gatt and Krahmer, 2018). The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST (Doddington, 2002) are not suitable for judging content quality in NLG. Recently, Novikova et al. (2017a) did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. 
Ours is the first study which checks the quality of metrics when tableto-text references are divergent. We show that in this case even system level correlations can be unreliable. Hallucination (Rohrbach et al., 2018; Lee et al., 2018) refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table. PARENT draws inspiration from iBLEU (Sun and Zhou, 2012), a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content (Liu et al., 2010). Similar to SARI for text simplification (Xu et al., 2016) and Q-BLEU for question generation (Nema and Khapra, 2018), PARENT falls under the category of task-specific metrics. 7 Conclusions We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics (Wiseman et al., 2017), and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data. Acknowledgements Bhuwan Dhingra is supported by a fellowship from Siemens, and by grants from Google. We thank Maruan Al-Shedivat, Ian Tenney, Tom Kwiatkowski, Michael Collins, Slav Petrov, Jason Baldridge, David Reitter and other members of the Google AI Language team for helpful discussions and suggestions. We thank Sam Wiseman for sharing data for an earlier version of this paper. We also thank the anonymous reviewers for their feedback. References Srinivas Bangalore, Owen Rambow, and Steve Whittaker. 2000. Evaluation metrics for generation. In Proc. of INLG. Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proc. of EMNLP. Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of nlg systems. In Proc. of EACL. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. In Proc. of WMT. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluation the role of bleu in machine translation research. In Proc. of EACL. 4893 Marine Carpuat, Yogarshi Vyas, and Xing Niu. 2017. Detecting cross-lingual semantic divergence for neural machine translation. In Proc.of Workshop on NMT. Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In Proc. of ACL. Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentencelevel bleu. In Proc. of WMT. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177–190, Berlin, Heidelberg. Springer Berlin Heidelberg. Michael Denkowski and Alon Lavie. 2014. 
Meteor universal: Language specific translation evaluation for any target language. In Proc. of WMT. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proc. of HLT. Ondˇrej Dušek, Jekaterina Novikova, and Verena Rieser. 2019. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge. arXiv preprint arXiv:1901.11528. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for micro-planners. In Proc. of ACL. Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65–170. Oren Glickman and Ido Dagan. 2005. A probabilistic setting and lexical cooccurrence model for textual entailment. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 43–48. Association for Computational Linguistics. Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proc. of EMNLP. Lifu Huang, Avirup Sil, Heng Ji, and Radu Florian. 2017. Improving slot filling performance with attentive neural networks on dependency structures. Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In Proc. of EACL. Karen Kukich. 1983. Design of a knowledge-based report generator. In Proc. of ACL. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proc. of EMNLP. Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proc. of ACL. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proc. of Workshop on Text Summarization Branches Out. Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. PEM: A paraphrase evaluation metric exploiting parallel texts. In Proc. of EMNLP. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proc. of EMNLP. Kathleen R. McKeown. 1985. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press, New York, NY, USA. Preksha Nema and Mitesh M Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proc. of EMNLP. Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017a. Why we need new evaluation metrics for NLG. In Proc. of EMNLP. Jekaterina Novikova, Ondˇrej Dušek, and Verena Rieser. 2017b. The E2E dataset: New challenges for endto-end generation. In Proc. of SIGDIAL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. of ACL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Ehud Reiter. 2018. A structured review of the validity of bleu. Computational Linguistics, pages 1–12. 
Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Nat. Lang. Eng., 3(1):57–87. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. EMNLP. 4894 Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proc. of ACL. Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proc. of CICLing. Hong Sun and Ming Zhou. 2012. Joint learning of a dual smt system for paraphrase generation. In Proc. of ACL. Kristi Tsukida and Maya R Gupta. 2011. How to analyze paired comparison data. Technical report, Washington University Seattle Dept of Electrical Engineering. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Yogarshi Vyas, Xing Niu, and Marine Carpuat. 2018. Identifying semantic divergences in parallel text without annotations. In Proc. of NAACL. Rand R Wilcox. 2016. Comparing dependent robust correlations. British Journal of Mathematical and Statistical Psychology, 69(3):215–224. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document generation. In Proc. of EMNLP. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Å ˛ Aukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. A Appendices A.1 Information Extraction System For evaluation via information extraction (Wiseman et al., 2017) we train a model for WikiBio which accepts text as input and generates a table as the output. Tables in WikiBio are open-domain, without any fixed schema for which attributes may be present or absent in an instance. Hence we Text: michael dahlquist ( december 22 , 1965 – july 14 , 2005 ) was a drummer in the seattle band silkworm . Table: name <C> michael dahlquist <R> birth date <C> 22 december 1965 <R> birth place <C> seattle , washington <R> death date <C> 14 july 2005 <R> death place <C> skokie , illinois <R> genres <C> male <R> occupation(s) <C> drummer <R> instrument <C> drums Figure 6: An input-output pair for the information extraction system. <R> and <C> are special symbols used to separate (attribute, value) pairs and attributes from values, respectively. Precision Recall F-score 0.428 0.310 0.351 Table 5: Performance of the Information Extraction system. employ the Pointer-Generator Network (PG-Net) (See et al., 2017) for this purpose. Specifically, we use a sequence-to-sequence model, whose encoder and decoder are both single-layer bi-directional LSTMs. The decoder is augmented with an attention mechanism over the states of the encoder. 
Further, it also uses a copy mechanism to optionally copy tokens directly from the source text. We do not use the coverage mechanism of See et al. (2017) since that is specific to the task of summarization they study. The decoder is trained to produce a linearized version of the table where the rows and columns are flattened into a sequence, and separate by special tokens. Figure 6 shows an example. Clearly, since the references are divergent, the model cannot be expected to produce the entire table, and we see some false information being hallucinated after training. Nevertheless, as we show in §5.4, this system can be used for evaluating generated texts. After training, we can parse the output sequence along the special tokens <R> and <C> to get a set of (attribute, value) pairs. Table 5 shows the precision, recall and F-score of these extracted pairs against the ground truth tables, where the attributes and values are compared using an exact string match. A.2 Hyperparameters After tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs 4895 Reference vedran nikÅ ˛aiÄ ˘G ( born 5 may 1987 in osijek ) is a croatian football striker . [STOP] Prediction vedran nikÅ ˛aiÄ ˘G ( born 5 may 1987 ) is a croatian football forward who is currently a free agent . [STOP] Reference adam whitehead ( born 28 march 1980 ) is a former breaststroke swimmer from coventry , england , who competed at the 2000 summer olympics in sydney , australia . [STOP] Prediction adam whitehead ( born 28 march 1980 ) is an english swimmer . [STOP] Reference chris fortier is an american dj and founder of the balance record pool as well as co-founder and owner of fade records . [STOP] Prediction chris fortier ( born in melbourne , florida ) is an american disc jockey and record producer from melbourne , florida . [STOP] Reference pretty balanced was an american band based in columbus , ohio . [STOP] Prediction pretty balanced is an american piano band from columbus , ohio . [STOP] Reference ben street ( born february 13 , 1987 ) is a canadian professional ice hockey player who is a member within the colorado avalanche organization of the national hockey league . [STOP] Prediction ben street ( born february 13 , 1987 ) is a canadian professional ice hockey centre currently playing for the colorado avalanche of the national hockey league ( nhl ) . [STOP] Table 6: Sample references and predictions from PG-Net with beam size 8. Information which is absent from the reference, but can be inferred from the table is in bold. Information which is present in the reference, but cannot be inferred from the table is in italics. was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove (Pennington et al., 2014). Learning rate of 0.0003 was used during training, with the Adam optimizer, and a dropout of 0.2 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-totext system, which are listed in the main paper. 
For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set. A.3 Sample Outputs Table 6 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio.
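To make the linearization in Figure 6 concrete, a decoded output can be parsed back into (attribute, value) pairs by splitting on the separator tokens; the helper below is a sketch under that assumption, without the error handling a real evaluation script would need.

```python
def parse_linearized_table(decoded):
    """Parse e.g. 'name <C> michael dahlquist <R> birth date <C> 22 december 1965'
    into a set of (attribute, value) tuples."""
    pairs = set()
    for record in decoded.split("<R>"):
        if "<C>" in record:
            attribute, value = record.split("<C>", 1)
            pairs.add((attribute.strip(), value.strip()))
    return pairs
```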
2019
483
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4896–4910 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4896 Unsupervised Question Answering by Cloze Translation Patrick Lewis Facebook AI Research University College London [email protected] Ludovic Denoyer Facebook AI Research [email protected] Sebastian Riedel Facebook AI Research University College London [email protected] Abstract Obtaining training data for Question Answering (QA) is time-consuming and resourceintensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA. We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically. To generate such triples, we first sample random context paragraphs from a large corpus of documents and then random noun phrases or named entity mentions from these paragraphs as answers. Next we convert answers in context to “fill-in-the-blank” cloze questions and finally translate them into natural questions. We propose and compare various unsupervised ways to perform cloze-tonatural question translation, including training an unsupervised NMT model using nonaligned corpora of natural questions and cloze questions as well as a rule-based approach. We find that modern QA models can learn to answer human questions surprisingly well using only synthetic training data. We demonstrate that, without using the SQuAD training data at all, our approach achieves 56.4 F1 on SQuAD v1 (64.5 F1 when the answer is a Named entity mention), outperforming early supervised models. 1 Introduction Extractive Question Answering (EQA) is the task of answering questions given a context document under the assumption that answers are spans of tokens within the given document. There has been substantial progress in this task in English. For SQuAD (Rajpurkar et al., 2016), a common EQA benchmark dataset, current models beat human The London Sevens is a rugby tournament held at Twickenham Stadium in London. It is part of the World Rugby Sevens Series. For many years the London Sevens was the last tournament of each season but the Paris Sevens became the last stop on the calendar in 2018. Question Answering Cloze Translation Cloze Generation QA Model the Paris sevens become the last stop on the calendar in MASK Question Generation 2018 Answer Extraction Context Cloze  Question Natural Question Answer   When did the Paris Sevens become the last stop on   the calendar? Figure 1: A schematic of our approach. The right side (dotted arrows) represents traditional EQA. We introduce unsupervised data generation (left side, solid arrows), which we use to train standard EQA models performance; For SQuAD 2.0 (Rajpurkar et al., 2018), ensembles based on BERT (Devlin et al., 2018) now match human performance. Even for the recently introduced Natural Questions corpus (Kwiatkowski et al., 2019), human performance is already in reach. In all these cases, very large amounts of training data are available. But, for new domains (or languages), collecting such training data is not trivial and can require significant resources. What if no training data was available at all? 
In this work we address the above question by exploring the idea of unsupervised EQA, a setting in which no aligned question, context and answer data is available. We propose to tackle this by reduction to unsupervised question generation: If we had a method, without using QA supervision, to generate accurate questions given a context document, we could train a QA system using the generated questions. This approach allows us to directly 4897 leverage progress in QA, such as model architectures and pretraining routines. This framework is attractive in both its flexibility and extensibility. In addition, our method can also be used to generate additional training data in semi-supervised settings. Our proposed method, shown schematically in Figure 1, generates EQA training data in three steps. 1) We first sample a paragraph in a target domain—in our case, English Wikipedia. 2) We sample from a set of candidate answers within that context, using pretrained components (NER or noun chunkers) to identify such candidates. These require supervision, but no aligned (question, answer) or (question, context) data. Given a candidate answer and context, we can extract “fillthe-blank” cloze questions 3) Finally, we convert cloze questions into natural questions using an unsupervised cloze-to-natural question translator. The conversion of cloze questions into natural questions is the most challenging of these steps. While there exist sophisticated rule-based systems (Heilman and Smith, 2010) to transform statements into questions (for English), we find their performance to be empirically weak for QA (see Section 3). Moreover, for specific domains or other languages, a substantial engineering effort will be required to develop similar algorithms. Also, whilst supervised models exist for this task, they require the type of annotation unavailable in this setting (Du et al. 2017; Du and Cardie 2018; Hosking and Riedel 2019, inter alia). We overcome this issue by leveraging recent progress in unsupervised machine translation (Lample et al., 2018, 2017; Lample and Conneau, 2019; Artetxe et al., 2018). In particular, we collect a large corpus of natural questions and an unaligned corpus of cloze questions, and train a seq2seq model to map between natural and cloze question domains using a combination of online back-translation and de-noising auto-encoding. In our experiments, we find that in conjunction with the use of modern QA model architectures, unsupervised QA can lead to performances surpassing early supervised approaches (Rajpurkar et al., 2016). We show that forms of cloze “translation” that produce (unnatural) questions via word removal and flips of the cloze question lead to better performance than an informed rule-based translator. Moreover, the unsupervised seq2seq model outperforms both the noise and rule-based system. We also demonstrate that our method can be used in a few-shot learning setting, for example obtaining 59.3 F1 with 32 labelled examples, compared to 40.0 F1 without our method. 
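As a rough illustration of the three-step pipeline just described, the sketch below samples a named entity mention as the answer and masks it within its sentence to form a cloze; spaCy is used for NER (the model name is our assumption), and the cloze-to-natural-question translation step is deliberately left out.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # NER + sentence segmentation (assumed model choice)

def make_cloze_example(paragraph, mask_token="MASK"):
    """Steps 1-2 of the pipeline: pick an answer candidate and build a cloze question."""
    doc = nlp(paragraph)
    if not doc.ents:
        return None
    answer = random.choice(doc.ents)
    sentence = answer.sent
    start = answer.start_char - sentence.start_char
    end = answer.end_char - sentence.start_char
    cloze = sentence.text[:start] + mask_token + sentence.text[end:]
    return {"context": paragraph, "answer": answer.text, "cloze": cloze}
```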
To summarize, this paper makes the following contributions: i) The first approach for unsupervised QA, reducing the problem to unsupervised cloze translation, using methods from unsupervised machine translation ii) Extensive experiments testing the impact of various cloze question translation algorithms and assumptions iii) Experiments demonstrating the application of our method for few-shot learning in EQA.1 2 Unsupervised Extractive QA We consider extractive QA where we are given a question q and a context paragraph c and need to provide an answer a = (b, e) with beginning b and end e character indices in c. Figure 1 (right-hand side) shows a schematic representation of this task. We propose to address unsupervised QA in a two stage approach. We first develop a generative model p(q, a, c) using no (QA) supervision, and then train a discriminative model pr(a|q, c) using p as training data generator. The generator p(q, a, c) = p(c)p(a|c)p(q|a, c) will generate data in a “reverse direction”, first sampling a context via p(c), then an answer within the context via p(a|c) and finally a question for the answer and context via p(q|a, c). In the following we present variants of these components. 2.1 Context and Answer Generation Given a corpus of documents our context generator p(c) uniformly samples a paragraph c of appropriate length from any document, and the answer generation step creates answer spans a for c via p(a|c). This step incorporates prior beliefs about what constitutes good answers. We propose two simple variants for p(a|c): Noun Phrases We extract all noun phrases from paragraph c and sample uniformly from this set to generate a possible answer span. This requires a chunking algorithm for our language and domain. Named Entities We can further restrict the possible answer candidates and focus entirely on named entities. Here we extract all named entity 1Synthetic EQA training data and models that generate it will be made publicly available at https://github. com/facebookresearch/UnsupervisedQA 4898 mentions using an NER system and then sample uniformly from these. Whilst this reduces the variety of questions that can be answered, it proves to be empirically effective as discussed in Section 3.2. 2.2 Question Generation Arguably, the core challenge in QA is modelling the relation between question and answer. This is captured in the question generator p(q|a, c) that produces questions from a given answer in context. We divide this step into two steps: cloze generation q′ = cloze(a, c) and translation, p(q|q′). 2.2.1 Cloze Generation Cloze questions are statements with the answer masked. In the first step of cloze generation, we reduce the scope of the context to roughly match the level of detail of actual questions in extractive QA. A natural option is the sentence around the answer. Using the context and answer from Figure 1, this might leave us with the sentence “For many years the London Sevens was the last tournament of each season but the Paris Sevens became the last stop on the calendar in ”. We can further reduce length by restricting to subclauses around the answer, based on access to an English syntactic parser, leaving us with “the Paris Sevens became the last stop on the calendar in ”. 2.2.2 Cloze Translation Once we have generated a cloze question q′ we translate it into a form closer to what we expect in real QA tasks. We explore four approaches here. Identity Mapping We consider that cloze questions themselves provide a signal to learn some form of QA behaviour. 
To test this hypothesis, we use the identity mapping as a baseline for cloze translation. To produce “questions” that use the same vocabulary as real QA tasks, we replace the mask token with a wh* word (randomly chosen or with a simple heuristic described in Section 2.4). Noisy Clozes One way to characterize the difference between cloze and natural questions is as a form of perturbation. To improve robustness to pertubations, we can inject noise into cloze questions. We implement this as follows. First we delete the mask token from cloze q′, apply a simple noise function from Lample et al. (2018), and prepend a wh* word (randomly or with the heuristic in Section 2.4) and append a question mark. The noise function consists of word dropout, word order permutation and word masking. The motivation is that, at least for SQuAD, it may be sufficient to simply learn a function to identify a span surrounded by high n-gram overlap to the question, with a tolerance to word order perturbations. Rule-Based Turning an answer embedded in a sentence into a (q, a) pair can be understood as a syntactic transformation with wh-movement and a type-dependent choice of wh-word. For English, off-the-shelf software exists for this purpose. We use the popular statement-to-question generator from Heilman and Smith (2010) which uses a set of rules to generate many candidate questions, and a ranking system to select the best ones. Seq2Seq The above approaches either require substantial engineering and prior knowledge (rulebased) or are still far from generating naturallooking questions (identity, noisy clozes). We propose to overcome both issues through unsupervised training of a seq2seq model that translates between cloze and natural questions. More details of this approach are in Section 2.4. 2.3 Question Answering Extractive Question Answering amounts to finding the best answer a given question q and context c. We have at least two ways to achieve this using our generative model: Training a separate QA system The generator is a source of training data for any QA architecture at our disposal. Whilst the data we generate is unlikely to match the quality of real QA data, we hope QA models will learn basic QA behaviours. Using Posterior Another way to extract the answer is to find a with the highest posterior p(a|c, q). Assuming uniform answer probabilities conditioned on context p(a|c), this amounts to calculating arg maxa′ p(q|a′, c) by testing how likely each possible candidate answer could have generated the question, a similar method to the supervised approach of Lewis and Fan (2019). 2.4 Unsupervised Cloze Translation To train a seq2seq model for cloze translation we borrow ideas from recent work in unsupervised Neural Machine Translation (NMT). At the heart of most these approaches are nonparallel corpora 4899 of source and target language sentences. In such corpora, no source sentence has any translation in the target corpus and vice versa. Concretely, in our setting, we aim to learn a function which maps between the question (target) and cloze question (source) domains without requiring aligned corpora. For this, we need large corpora of cloze questions C and natural questions Q. Cloze Corpus We create the cloze corpus C by applying the procedure outlined in Section 2.2.2. 
Specifically we consider Noun Phrase (NP) and Named Entity mention (NE) answer spans, and cloze question boundaries set either by the sentence or sub-clause that contains the answer.2 We extract 5M cloze questions from randomly sampled wikipedia paragraphs, and build a corpus C for each choice of answer span and cloze boundary technique. Where there is answer entity typing information (i.e. NE labels), we use type-specific mask tokens to represent one of 5 high level answer types. See Appendix A.1 for further details. Question Corpus We mine questions from English pages from a recent dump of common crawl using simple selection criteria:3 We select sentences that start in one of a few common wh* words, (“how much”, “how many”, “what”, “when”, “where” and “who”) and end in a question mark. We reject questions that have repeated question marks or “?!”, or are longer than 20 tokens. This process yields over 100M english questions when deduplicated. Corpus Q is created by sampling 5M questions such that there are equal numbers of questions starting in each wh* word. Following Lample et al. (2018), we use C and Q to train translation models ps→t(q|q′) and pt→s(q′|q) which translate cloze questions into natural questions and vice-versa. This is achieved by a combination of in-domain training via denoising autoencoding and cross-domain training via online-backtranslation. This could also be viewed as a style transfer task, similar to Subramanian et al. (2018). At inference time, ‘natural’ questions are generated from cloze questions as arg maxq ps→t(q|q′).4 Further experimental detail 2We use SpaCy for Noun Chunking and NER, and AllenNLP for the Stern et al. (2017) parser. 3http://commoncrawl.org/ 4We also experimented with language model pretraining in a method similar to Lample and Conneau (2019). Whilst generated questions were generally more fluent and wellformed, we did not observe significant changes in QA performance. Further details in Appendix A.6 can be found in Appendix A.2. Wh* heuristic In order to provide an appropriate wh* word for our “identity” and “noisy cloze” baseline question generators, we introduce a simple heuristic rule that maps each answer type to the most appropriate wh* word. For example, the “TEMPORAL” answer type is mapped to “when”. During experiments, we find that the unsupervised NMT translation functions sometimes generate inappropriate wh* words for the answer entity type, so we also experiment with applying the wh* heuristic to these question generators. For the NMT models, we apply the heuristic by prepending target questions with the answer type token mapped to their wh* words at training time. E.g. questions that start with “when” are prepended with the token “TEMPORAL”. Further details on the wh* heuristic are in Appendix A.3. 3 Experiments We want to explore what QA performance can be achieved without using aligned q, a data, and how this compares to supervised learning and other approaches which do not require training data. Furthermore, we seek to understand the impact of different design decisions upon QA performance of our system and to explore whether the approach is amenable to few-shot learning when only a few q,a pairs are available. Finally, we also wish to assess whether unsupervised NMT can be used as an effective method for question generation. 
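The selection criteria for mining the natural-question corpus Q, and the answer-type-to-wh* mapping used by the heuristic above, can be sketched as follows; whitespace tokenization is assumed, and since only the TEMPORAL-to-"when" mapping is stated explicitly, the remaining entries are illustrative guesses.

```python
WH_PREFIXES = ("how much", "how many", "what", "when", "where", "who")

# Wh* heuristic: map high-level answer types to question words.  Only
# TEMPORAL -> "when" is given explicitly; the other entries are illustrative.
ANSWER_TYPE_TO_WH = {"TEMPORAL": "when", "PERSON": "who", "LOCATION": "where",
                     "NUMERIC": "how many", "THING": "what"}

def keep_question(sentence):
    """Selection criteria for mining corpus Q from raw web text: starts with a
    common wh* phrase, ends in a single question mark, and has at most 20 tokens."""
    s = sentence.strip().lower()
    if not s.startswith(WH_PREFIXES):
        return False
    if not s.endswith("?") or "??" in s or "?!" in s:
        return False
    return len(s.split()) <= 20
```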
3.1 Unsupervised QA Experiments For the synthetic dataset training method, we consider two QA models: finetuning BERT (Devlin et al., 2018) and BiDAF + Self Attention (Clark and Gardner, 2017).5 For the posterior maximisation method, we extract cloze questions from both sentences and sub-clauses, and use the NMT models to estimate p(q|c, a). We evaluate using the standard Exact Match (EM) and F1 metrics. As we cannot assume access to a development dataset when training unsupervised models, the QA model training is halted when QA performance on a held-out set of synthetic QA data plateaus. We do, however, use the SQuAD development set to assess which model components are 5We use the HuggingFace implementation of BERT, available at https://github.com/huggingface/ pytorch-pretrained-BERT, and the documentQA implementation of BiDAF+SA, available at https:// github.com/allenai/document-qa 4900 Unsupervised Models EM F1 BERT-Large Unsup. QA (ens.) 47.3 56.4 BERT-Large Unsup. QA (single) 44.2 54.7 BiDAF+SA (Dhingra et al., 2018) 3.2† 6.8† BiDAF+SA (Dhingra et al., 2018)‡ 10.0* 15.0* BERT-Large (Dhingra et al., 2018)‡ 28.4* 35.8* Baselines EM F1 Sliding window (Rajpurkar et al., 2016) 13.0 20.0 Context-only (Kaushik and Lipton, 2018) 10.9 14.8 Random (Rajpurkar et al., 2016) 1.3 4.3 Fully Supervised Models EM F1 BERT-Large (Devlin et al., 2018) 84.1 90.9 BiDAF+SA (Clark and Gardner, 2017) 72.1 81.1 Log. Reg. + FE (Rajpurkar et al., 2016) 40.4 51.0 Table 1: Our best performing unsupervised QA models compared to various baselines and supervised models. * indicates results on SQuAD dev set. † indicates results on non-standard test set created by Dhingra et al. (2018). ‡ indicates our re-implementation important (Section 3.2). To preserve the integrity of the SQuAD test set, we only submit our best performing system to the test server. We shall compare our results to some published baselines. Rajpurkar et al. (2016) use a supervised logistic regression model with feature engineering, and a sliding window approach that finds answers using word overlap with the question. Kaushik and Lipton (2018) train (supervised) models that disregard the input question and simply extract the most likely answer span from the context. To our knowledge, ours is the first work to deliberately target unsupervised QA on SQuAD. Dhingra et al. (2018) focus on semi-supervised QA, but do publish an unsupervised evaluation. To enable fair comparison, we re-implement their approach using their publicly available data, and train a variant with BERT-Large.6 Their approach also uses cloze questions, but without translation, and heavily relies on the structure of wikipedia articles. Our best approach attains 54.7 F1 on the SQuAD test set; an ensemble of 5 models (different seeds) achieves 56.4 F1. Table 1 shows the result in context of published baselines and supervised results. Our approach significantly outperforms baseline systems and Dhingra et al. (2018) and surpasses early supervised methods. 3.2 Ablation Studies and Analysis To understand the different contributions to the performance, we undertake an ablation study. All ablations are evaluated using the SQUAD development set. We ablate using BERT-Base and BiDAF+SA, and our best performing setup is then used to fine-tune a final BERT-Large model, which is the model in Table 1. All experiments with BERT-Base were repeated with 3 seeds to account for some instability encountered in training; we report mean results. 
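For reference, the posterior-maximisation decision rule evaluated above can be written compactly as below. The seq2seq scorer is passed in as a callback standing in for the trained cloze-to-question translation model; the exact scoring interface is an assumption, since it is not spelled out in the paper.

```python
import math
from typing import Callable, Iterable, Tuple

def posterior_max_answer(
    question: str,
    candidates: Iterable[Tuple[str, str]],             # (answer span, cloze built around it)
    log_p_q_given_cloze: Callable[[str, str], float],  # log p(q | cloze), e.g. from the NMT model
) -> str:
    """Return arg max_a p(q | a, c) under a uniform answer prior p(a | c):
    each candidate answer is scored by how likely the observed question is
    under the question-generation model applied to that candidate's cloze."""
    best_answer, best_score = None, -math.inf
    for answer, cloze in candidates:
        score = log_p_q_given_cloze(question, cloze)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer
```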
Results are shown in Table 2, and observations and aggregated trends are highlighted below. Posterior Maximisation vs. Training on generated data Comparing Posterior Maximisation with BERT-Base and BiDAF+SA columns in Table 2 shows that training QA models is more effective than maximising question likelihood. As shown later, this could partly be attributed to QA models being able to generalise answer spans, returning answers at test-time that are not always named entity mentions. BERT models also have the advantage of linguistic pretraining, further adding to generalisation ability. Effect of Answer Prior Named Entities (NEs) are a more effective answer prior than noun phrases (NPs). Equivalent BERT-Base models trained with NEs improve on average by 8.9 F1 over NPs. Rajpurkar et al. (2016) estimate 52.4% of answers in SQuAD are NEs, whereas (assuming NEs are a subset of NPs), 84.2% are NPs. However, we found that there are on average 14 NEs per context compared to 33 NPs, so using NEs in training may help reduce the search space of possible answer candidates a model must consider. Effect of Question Length and Overlap As shown in Figure 2, using sub-clauses for generation leads to shorter questions and shorter common subsequences to the context, which more closely match the distribution of SQuAD questions. Reducing the length of cloze questions helps the translation components produce simpler, more precise questions. Using sub-clauses leads to, on average +4.0 F1 across equivalent sentencelevel BERT-Base models. The “noisy cloze” generator produces shorter questions than the NMT model due to word dropout, and shorter common subsequences due to the word perturbation noise. 6http://bit.ly/semi-supervised-qa 4901 Cloze Answer Cloze Boundary Cloze Translation Wh* Heuristic BERT-Base BiDAF+SA Posterior Max. EM F1 EM F1 EM F1 NE Sub-clause UNMT ✓ 38.6 47.8 32.3 41.2 17.1 21.7 NE Sub-clause UNMT × 36.9 46.3 30.3 38.9 15.3 19.8 NE Sentence UNMT × 32.4 41.5 24.7 32.9 14.8 19.0 NP Sentence UNMT × 19.8 28.4 18.0 26.0 12.9 19.2 NE Sub-clause Noisy Cloze ✓ 36.5 46.1 29.3 38.7 NE Sub-clause Noisy Cloze × 32.9 42.1 26.8 35.4 NE Sentence Noisy Cloze × 30.3 39.5 24.3 32.7 NP Sentence Noisy Cloze × 19.5 29.3 16.6 25.7 NE Sub-clause Identity ✓ 24.2 34.6 12.6 21.5 NE Sub-clause Identity × 21.9 31.9 16.1 26.8 NE Sentence Identity × 18.1 27.4 12.4 21.2 NP Sentence Identity × 14.6 23.9 6.6 13.5 Rule-Based (Heilman and Smith, 2010) 16.0 37.9 13.8 35.4 Table 2: Ablations on the SQuAD development set. “Wh* Heuristic” indicates if a heuristic was used to choose sensible Wh* words during cloze translation. NE and NP refer to named entity mention and noun phrase answer generation. Figure 2: Lengths (blue, hashed) and longest common subsequence with context (red, solid) for SQuAD questions and various question generation methods. Effect of Cloze Translation Noise acts as helpful regularization when comparing the “identity” cloze translation functions to “noisy cloze”, (mean +9.8 F1 across equivalent BERT-Base models). Unsupervised NMT question translation is also helpful, leading to a mean improvement of 1.8 F1 on BERT-Base for otherwise equivalent “noisy cloze” models. The improvement over noisy clozes is surprisingly modest, and is discussed in more detail in Section 5. Effect of QA model BERT-Base is more effective than BiDAF+SA (an architecture specifically designed for QA). BERT-Large (not shown in Table 2) gives a further boost, improving our best configuration by 6.9 F1. 
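The answer-prior comparison above rests on counting candidate answer spans per context; a minimal way to reproduce that count, assuming spaCy as the tagger and chunker, is:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed stand-in for the paper's SpaCy pipeline

def candidate_counts(context):
    """Count candidate answer spans per context under the two answer priors:
    named entity mentions (NE) vs. noun phrase chunks (NP)."""
    doc = nlp(context)
    return {"NE": len(doc.ents), "NP": len(list(doc.noun_chunks))}

# Averaging these counts over many contexts gives the kind of comparison made
# above (roughly 14 NEs vs. 33 NPs per context on the paper's data).
```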
Effect of Rule-based Generation QA models trained on QA datasets generated by the RuleQuestion Generation EM F1 Rule Based 16.0 37.9 Rule Based (NE filtered) 28.2 41.5 Ours 38.6 47.8 Ours (filtered for c,a pairs in Rule Based) 38.5 44.7 Table 3: Ablations on SQuAD development set probing the performance of the rule based system. based (RB) system of Heilman and Smith (2010) do not perform favourably compared to our NMT approach. To test whether this is due to different answer types used, we a) remove questions of their system that are not consistent with our (NE) answers, and b) remove questions of our system that are not consistent with their answers. Table 3 shows that while answer types matter in that using our restrictions help their system, and using their restrictions hurts ours, they cannot fully explain the difference. The RB system therefore appears to be unable to generate the variety of questions and answers required for the task, and does not generate questions from a sufficient variety of contexts. Also, whilst on average, question lengths are shorter for the RB model than the NMT model, the distribution of longest common sequences are similar, as shown in Figure 2, perhaps suggesting that the RB system copies a larger proportion of its input. 3.3 Error Analysis We find that the QA model predicts answer spans that are not always detected as named entity mentions (NEs) by the NER tagger, despite being trained with solely NE answer spans. In fact, 4902 Figure 3: Breakdown of performance for our best QA model on SQuAD for different question types (left) and different NE answer categories (right) when we split SQuAD into questions where the correct answer is an automatically-tagged NE, our model’s performance improves to 64.5 F1, but it still achieves 47.9 F1 on questions which do not have automatically-tagged NE answers (not shown in our tables). We attribute this to the effect of BERT’s linguistic pretraining allowing it to generalise the semantic role played by NEs in a sentence rather than simply learning to mimic the NER system. An equivalent BiDAF+SA model scores 58.9 F1 when the answer is an NE but drops severely to 23.0 F1 when the answer is not an NE. Figure 3 shows the performance of our system for different kinds of question and answer type. The model performs best with “when” questions which tend to have fewer potential answers, but struggles with “what” questions, which have a broader range of answer semantic types, and hence more plausible answers per context. The model performs well on “TEMPORAL” answers, consistent with the good performance of “when” questions. 3.4 UNMT-generated Question Analysis Whilst our main aim is to optimise for downstream QA performance, it is also instructive to examine the output of the unsupervised NMT cloze translation system. Unsupervised NMT has been used in monolingual settings (Subramanian et al., 2018), but cloze-to-question generation presents new challenges – The cloze and question are asymmetric in terms of word length, and successful translation must preserve the answer, not just superficially transfer style. Figure 4 shows that without the wh* heuristic, the model learns to generate questions with broadly appropriate wh* words for the answer type, but can struggle, particularly with Person/Org/Norp and Numeric answers. Table 4 shows representative examples from the NE unsupervised NMT model. The model generally copies large segments of the input. 
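The degree of copying can be quantified as the longest contiguous run of tokens shared between a generated question and its context, which is the overlap statistic plotted in Figure 2; the sketch below is our reading of that statistic and may differ from the exact implementation.

```python
def longest_common_token_run(question_tokens, context_tokens):
    """Length of the longest contiguous token sub-sequence shared by the
    question and the context (longest-common-substring DP over tokens)."""
    q, c = list(question_tokens), list(context_tokens)
    best = 0
    prev = [0] * (len(c) + 1)
    for qi in range(1, len(q) + 1):
        cur = [0] * (len(c) + 1)
        for ci in range(1, len(c) + 1):
            if q[qi - 1] == c[ci - 1]:
                cur[ci] = prev[ci - 1] + 1
                best = max(best, cur[ci])
        prev = cur
    return best

# e.g. longest_common_token_run("when did the siege of Paris end ?".split(),
#                               "the siege of Paris ended in 1871 .".split())  # -> 4
```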
Also shown in Figure 2, generated questions have, on average, a 9.1 token contiguous sub-sequence from the context, corresponding to 56.9% of a generated question copied verbatim, compared to 4.7 tokens (46.1%) for SQuAD questions. This is unsurprising, as the backtranslation training objective is to maximise the reconstruction of inputs, encouraging conservative translation. The model exhibits some encouraging, nontrivial syntax manipulation and generation, particularly at the start of questions, such as example 7 in Table 4, where word order is significantly modified and “sold” is replaced by “buy”. Occasionally, it hallucinates common patterns in the question corpus (example 6). The model can struggle with lists (example 4), and often prefers present tense and second person (example 5). Finally, semantic drift is an issue, with generated questions being relatively coherent but often having different answers to the inputted cloze questions (example 2). We can estimate the quality and grammaticality of generated questions by using the well-formed question dataset of Faruqui and Das (2018). This dataset consists of search engine queries annotated with whether the query is a well-formed question or not. We train a classifier on this task, and then measure how many questions are classified as “well-formed” for our question generation methods. Full details are given in Appendix A.5. We find that 68% of questions generated by UNMT model are classified as well-formed, compared to 75.6% for the rule-based system and 92.3% for SQuAD questions. We also note that using language model pretraining improves the quality of questions generated by UNMT model, with 78.5% classified as well-formed, surpassing the rule-based system (see Appendix A.6). 3.5 Few-Shot Question Answering Finally, we consider a few-shot learning task with very limited numbers of labelled training examples. We follow the methodology of Dhingra et al. (2018) and Yang et al. (2017), training on a small number of training examples and using a development set for early stopping. We use the splits made 4903 # Cloze Question Answer Generated Question 1 they joined with PERSON/NORP/ORG to defeat him Rom Who did they join with to defeat him? 2 the NUMERIC on Orchard Street remained open until 2009 second How much longer did Orchard Street remain open until 2009? 3 making it the third largest football ground in PLACE Portugal Where is it making the third football ground? 4 he speaks THING, English, and German Spanish What are we , English , and German? 5 Arriving in the colony early in TEMPORAL 1883 When are you in the colony early? 6 The average household size was NUMERIC 2.30 How much does a Environmental Engineering Technician II in Suffolk , CA make? 7 WALA would be sold to the Des Moines-based PERSON/NORP/ORG for $86 million Meredith Corp Who would buy the WALA Des Moines-based for $86 million? Table 4: Examples of cloze translations for the UNMT model using the wh* heuristic and subclause cloze extraction. More examples can be found in appendix A.7 Figure 4: Wh* words generated by the UNMT model for cloze questions with different answer types. available by Dhingra et al. (2018), but switch the development and test splits, so that the test split has n-way annotated answers. We first pretrain a BERT-large QA model using our best configuration from Section 3, then fine-tune with a small amount of SQuAD training data. We compare this to our re-implementation of Dhingra et al. 
(2018), and training the QA model directly on the available data without unsupervised QA pretraining. Figure 5 shows performance for progressively larger amounts of training data. As with Dhingra et al. (2018), our numbers are attained using a development set for early stopping that can be larger than the training set. Hence this is not a true reflection of performance in low data regimes, but does allow for comparative analysis between models. We find our approach performs best in very data poor regimes, and similarly to Dhingra et al. (2018) with modest amounts of data. We also note BERT-Large itself is remarkably efficient, reaching ∼60% F1 with only 1% of the available data. 4 Related Work Unsupervised Learning in NLP Most representation learning approaches use latent variables (Hofmann, 1999; Blei et al., 2003), or language Figure 5: F1 score on the SQuAD development set for progressively larger training dataset sizes model-inspired criteria (Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014; Radford et al., 2018; Devlin et al., 2018). Most relevant to us is unsupervised NMT (Conneau et al., 2017; Lample et al., 2017, 2018; Artetxe et al., 2018) and style transfer (Subramanian et al., 2018). We build upon this work, but instead of using models directly, we use them for training data generation. Radford et al. (2019) report that very powerful language models can be used to answer questions from a conversational QA task, CoQA (Reddy et al., 2018) in an unsupervised manner. Their method differs significantly to ours, and may require “seeding” from QA dialogs to encourage the language model to generate answers. Semi-supervised QA Yang et al. (2017) train a QA model and also generate new questions for greater data efficiency, but require labelled data. Dhingra et al. (2018) simplify the approach and remove the supervised requirement for question generation, but do not target unsupervised QA or attempt to generate natural questions. They also make stronger assumptions about the text used for question generation and require Wikipedia summary paragraphs. Wang et al. (2018) consider 4904 semi-supervised cloze QA, Chen et al. (2018) use semi-supervision to improve semantic parsing on WebQuestions (Berant et al., 2013), and Lei et al. (2016) leverage semi-supervision for question similarity modelling. Finally, injecting external knowledge into QA systems could be viewed as semi-supervision, and Weissenborn et al. (2017) and Mihaylov and Frank (2018) use Conceptnet (Speer et al., 2016) for QA tasks. Question Generation has been tackled with pipelines of templates and syntax rules (Rus et al., 2010). Heilman and Smith (2010) augment this with a model to rank generated questions, and Yao et al. (2012) and Olney et al. (2012) investigate symbolic approaches. Recently there has been interest in question generation using supervised neural models, many trained to generate questions from c, a pairs in SQuAD (Du et al., 2017; Yuan et al., 2017; Zhao et al., 2018; Du and Cardie, 2018; Hosking and Riedel, 2019) 5 Discussion It is worth noting that to attain our best performance, we require the use of both an NER system, indirectly using labelled data from OntoNotes 5, and a constituency parser for extracting subclauses, trained on the Penn Treebank (Marcus et al., 1994).7 Moreover, a language-specific wh* heuristic was used for training the best performing NMT models. 
This limits the applicability and flexibility of our best-performing approach to domains and languages that already enjoy extensive linguistic resources (named entity recognition and treebank datasets), as well as requiring some human engineering to define new heuristics. Nevertheless, our approach is unsupervised from the perspective of requiring no labelled (question, answer) or (question, context) pairs, which are usually the most challenging aspects of annotating large-scale QA training datasets. We note the “noisy cloze” system, consisting of very simple rules and noise, performs nearly as well as our more complex best-performing system, despite the lack of grammaticality and syntax associated with questions. The questions generated by the noisy cloze system also perform poorly on the “well-formedness” analysis mentioned in Sec7Ontonotes 5: https://catalog.ldc.upenn. edu/LDC2013T19 tion 3.4, with only 2.7% classified as well-formed. This intriguing result suggests natural questions are perhaps less important for SQuAD and strong question-context word matching is enough to do well, reflecting work from Jia and Liang (2017) who demonstrate that even supervised models rely on word-matching. Additionally, questions generated by our approach require no multi-hop or multi-sentence reasoning, but can still be used to achieve non-trivial SQuAD performance. Indeed, Min et al. (2018) note 90% of SQuAD questions only require a single sentence of context, and Sugawara et al. (2018) find 76% of SQuAD has the answer in the sentence with highest token overlap to the question. 6 Conclusion In this work, we explore whether it is possible to to learn extractive QA behaviour without the use of labelled QA data. We find that it is indeed possible, surpassing simple supervised systems, and strongly outperforming other approaches that do not use labelled data, achieving 56.4% F1 on the popular SQuAD dataset, and 64.5% F1 on the subset where the answer is a named entity mention. However, we note that whilst our results are encouraging on this relatively simple QA task, further work is required to handle more challenging QA elements and to reduce our reliance on linguistic resources and heuristics. Acknowledgments The authors would like to thank Tom Hosking, Max Bartolo, Johannes Welbl, Tim Rockt¨aschel, Fabio Petroni, Guillaume Lample and the anonymous reviewers for their insightful comments and feedback. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In EMNLP, pages 3632–3642. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics. 4905 David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching Word Vectors with Subword Information. arXiv:1607.04606 [cs]. ArXiv: 1607.04606. Bo Chen, Bo An, Le Sun, and Xianpei Han. 2018. Semi-Supervised Lexicon Learning for WideCoverage Semantic Parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 892–904, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Christopher Clark and Matt Gardner. 2017. 
Simple and Effective Multi-Paragraph Reading Comprehension. arXiv:1710.10723 [cs]. ArXiv: 1710.10723. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. CoRR, abs/1710.04087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs]. ArXiv: 1810.04805. Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and Effective SemiSupervised Question Answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 582–587, New Orleans, Louisiana. Association for Computational Linguistics. Xinya Du and Claire Cardie. 2018. Harvesting Paragraph-level Question-Answer Pairs from Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907–1917, Melbourne, Australia. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to Ask: Neural Question Generation for Reading Comprehension. Manaal Faruqui and Dipanjan Das. 2018. Identifying Well-formed Natural Language Questions. arXiv:1808.09419 [cs]. ArXiv: 1808.09419. Michael Heilman and Noah A. Smith. 2010. Good Question! Statistical Ranking for Question Generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 609–617, Stroudsburg, PA, USA. Association for Computational Linguistics. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22Nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’99, pages 50–57, New York, NY, USA. ACM. Tom Hosking and Sebastian Riedel. 2019. Evaluating Rewards for Question Generation Models. arXiv:1902.11049 [cs]. ArXiv: 1902.11049. Robin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. Divyansh Kaushik and Zachary C. Lipton. 2018. How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks. arXiv:1808.04926 [cs, stat]. ArXiv: 1808.04926. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180. Association for Computational Linguistics. Event-place: Prague, Czech Republic. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. 
Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a Benchmark for Question Answering Research. Transactions of the Association of Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Cross-lingual Language Model Pretraining. arXiv:1901.07291 [cs]. ArXiv: 1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised Machine Translation Using Monolingual Corpora Only. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-Based & Neural Unsupervised Machine Translation. In Proceedings of the 2018 Conference 4906 on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi Jaakkola, Kateryna Tymoshenko, Alessandro Moschitti, and Llus Mrquez. 2016. Semi-supervised Question Retrieval with Gated Convolutions. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1279–1289, San Diego, California. Association for Computational Linguistics. Mike Lewis and Angela Fan. 2019. Generative question answering: Learning to answer the whole question. In International Conference on Learning Representations. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn Treebank: annotating predicate argument structure. In Proceedings of the workshop on Human Language Technology HLT ’94, page 114, Plainsboro, NJ. Association for Computational Linguistics. Todor Mihaylov and Anette Frank. 2018. Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 821–832, Melbourne, Australia. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and Robust Question Answering from Minimal Context over Documents. arXiv:1805.08092 [cs]. ArXiv: 1805.08092. Andrew M. Olney, Arthur C. Graesser, and Natalie K. Person. 2012. Question Generation from Concept Maps. Dialogue & Discourse, 3(2):75–99–99. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In In EMNLP. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Dont Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. 
SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. CoQA: A Conversational Question Answering Challenge. arXiv:1808.07042 [cs]. ArXiv: 1808.07042 Citation Key: reddyCoQAConversationalQuestion2018. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The First Question Generation Shared Task Evaluation Challenge. In Proceedings of the 6th International Natural Language Generation Conference, INLG ’10, pages 251–257, Stroudsburg, PA, USA. Association for Computational Linguistics. Eventplace: Trim, Co. Meath, Ireland. Robyn Speer, Joshua Chin, and Catherine Havasi. 2016. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge. arXiv:1612.03975 [cs]. ArXiv: 1612.03975. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A Minimal Span-Based Neural Constituency Parser. arXiv:1705.03919 [cs]. ArXiv: 1705.03919. Sandeep Subramanian, Guillaume Lample, Eric Michael Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and Y.-Lan Boureau. 2018. Multiple-Attribute Text Style Transfer. arXiv:1811.00552 [cs]. ArXiv: 1811.00552. Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What Makes Reading Comprehension Questions Easier? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4208–4219, Brussels, Belgium. Association for Computational Linguistics. Liang Wang, Sujian Li, Wei Zhao, Kewei Shen, Meng Sun, Ruoyu Jia, and Jingming Liu. 2018. Multi-Perspective Context Aggregation for Semisupervised Cloze-style Reading Comprehension. In Proceedings of the 27th International Conference on Computational Linguistics, pages 857–867, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Dirk Weissenborn, Tom Koisk, and Chris Dyer. 2017. Dynamic Integration of Background Knowledge in 4907 Neural NLU Systems. arXiv:1706.02596 [cs]. ArXiv: 1706.02596. Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-Supervised QA with Generative Domain-Adaptive Nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1040–1050, Vancouver, Canada. Association for Computational Linguistics. Xuchen Yao, Gosse Bouma, and Yi Zhang. 2012. Semantics-based Question Generation and Implementation. D&D, 3:11–42. Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, and Adam Trischler. 2017. Machine Comprehension by Text-to-Text Neural Question Generation. arXiv:1705.02012 [cs]. ArXiv: 1705.02012. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics. 4908 Supplementary Materials for ACL 2019 Paper: Unsupervised Question Answering by Cloze Translation A Appendices A.1 Cloze Question Featurization and Translation Cloze questions are featurized as follows. Assume we have a cloze question extracted from a paragraph “the Paris Sevens became the last stop on the calendar in .”, and the answer “2018”. 
We first tokenize the cloze question, and discard it if it is longer than 40 tokens. We then replace the “blank” with a special mask token. If the answer was extracted using the noun phrase chunker, there is no specific answer entity typing so we just use a single mask token "MASK". However, when we use the named entity answer generator, answers have a named entity label, which we can use to give the cloze translator a high level idea of the answer semantics. In the example above, the answer “2018” has the named entity type "DATE". We group fine grained entity types into higher level categories, each with its own masking token as shown in Table 5, and so the mask token for this example is "TEMPORAL". A.2 Unsupervised NMT Training Setup Details Here we describe experimental details for unsupervised NMT setup. We use the English tokenizer from Moses (Koehn et al., 2007), and use FastBPE (https://github.com/ glample/fastBPE) to split into subword units, with a vocabulary size of 60000. The architecture uses a 4-layer transformer encoder and 4-layer transformer decoder, where one layer is language specific for both the encoder and decoder, the rest are shared. We use the standard hyperparameter settings recommended by Lample et al. (2018). The models are initialised with random weights, and the input word embedding matrix is initialised using FastText vectors (Bojanowski et al., 2016) trained on the concatenation of the C and Q corpora. Initially, the auto-encoding loss and backtranslation loss have equal weight, with the autoencoding loss coefficient reduced to 0.1 by 100K steps and to 0 by 300k steps. We train using 5M cloze questions and natural questions, and cease training when the BLEU scores between backtranslated and input questions stops improving, usually around 300K optimisation steps. When generating, we decode greedily, and note that decoding with a beam size of 5 did not significantly change downstream QA performance, or greatly change the fluency of generations. A.3 Wh* Heuristic We defined a heuristic to encourage appropriate wh* words for the inputted cloze question’s answer type. This heuristic is used to provide a relevant wh* word for the “noisy cloze” and “identity” baselines, as well as to assist the NMT model to produce more precise questions. To this end, we map each high level answer category to the most appropriate wh* word, as shown on the right hand column of Table 5 (In the case of NUMERIC types, we randomly choose between “How much” and “How many”). Before training, we prepend the high level answer category masking token to the start of questions that start with the corresponding wh* word, e.g. the question “Where is Mount Vesuvius?” would be transformed into “PLACE Where is Mount Vesuvius ?”. This allows the model to learn a much stronger association between the wh* word and answer mask type. A.4 QA Model Setup Details We train BiDAF + Self Attention using the default settings. We evaluate using a synthetic development set of data generated from 1000 context paragraphs every 500 training steps, and halt when the performance has not changed by 0.1% for the last 5 evaluations. We train BERT-Base and BERT-Large with a batch size of 16, and the default learning rate hyperparameters. For BERT-Base, we evaluate using a synthetic development set of data generated from 1000 context paragraphs every 500 training steps, and halt when the performance has not changed by 0.1% for the last 5 evaluations. 
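A minimal sketch of this stopping rule, interpreting “has not changed by 0.1%” as the score staying within an absolute range of 0.1 points over the last five evaluations (one possible reading):

```python
def should_stop(scores, window=5, tolerance=0.1):
    """Halt training once the synthetic dev-set score has stayed within
    `tolerance` points over the last `window` evaluations."""
    if len(scores) < window:
        return False
    recent = scores[-window:]
    return max(recent) - min(recent) <= tolerance

# Usage inside the training loop (every 500 steps):
#   scores.append(evaluate_on_synthetic_dev(model))  # hypothetical helper
#   if should_stop(scores):
#       break
```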
For BERT-Large, due to larger model size, training takes longer, so we manually halt training when the synthetic development set performance plateaus, rather than using the automatic early stopping. A.5 Question Well-Formedness We can estimate how well-formed the questions generated by various configurations of our model are using the Well-formed query dataset of Faruqui and Das (2018). This dataset consists of 25,100 4909 High Level Answer Category Named Entity labels Most appropriate wh* PERSON/NORP/ORG PERSON, NORP, ORG Who PLACE GPE, LOC, FAC Where THING PRODUCT, EVENT, WORKOFART, LAW, LANGUAGE What TEMPORAL TIME, DATE When NUMERIC PERCENT, MONEY, QUANTITY, ORDINAL, CARDINAL How much/How many Table 5: High level answer categories for the different named entity labels Cloze Answer Cloze Boundary Cloze Translation Wh* Heuristic % Wellformed NE Sub-clause UNMT ✓ 68.0 NE Sub-clause UNMT × 65.3 NE Sentence UNMT × 61.3 NP Sentence UNMT × 61.9 NE Sub-clause Noisy Cloze ✓ 2.7 NE Sub-clause Noisy Cloze × 2.4 NE Sentence Noisy Cloze × 0.7 NP Sentence Noisy Cloze × 0.8 NE Sub-clause Identity ✓ 30.8 NE Sub-clause Identity × 20.0 NE Sentence Identity × 49.5 NP Sentence Identity × 48.0 NE Sub-clause UNMT* ✓ 78.5 Rule-Based (Heilman and Smith, 2010) 75.6 SQuAD Questions (Rajpurkar et al., 2016) 92.3 Table 6: Fraction of questions classified as ”wellformed” by a classifier trained on the dataset of Faruqui and Das (2018) for different question generation models. * indicates MLM pretraining was applied before UNMT training search engine queries, annotated with whether the query is a well-formed question. We train a BERTBase classifier on the binary classification task, achieving a test set accuracy of 80.9% (compared to the previous state of the art of 70.7%). We then use this classifier to measure what proportion of questions generated by our models are classified as “well-formed”. Table 6 shows the full results. Our best unsupervised question generation configuration achieves 68.0%, demonstrating the model is capable of generating relatively well-formed questions, but there is room for improvement, as the rule-based generator achieves 75.6%. MLM pretraining (see Appendix A.6) greatly improves the well-formedness score. The classifier predicts that 92.3% of SQuAD questions are well-formed, suggesting it is able to detect high quality questions. The classifier appears to be sensitive to fluency and grammar, with the “identity” cloze translation models scoring much higher than their “noisy cloze” counterparts. A.6 Language Model Pretraining We experimented with Masked Language Model (MLM) pretraining of the translation models, ps→t(q|q′) and pt→s(q′|q). We use the XLM implementation (https://github. com/facebookresearch/XLM) and use default hyperparameters for both MLM pretraining and and unsupervised NMT fine-tuning. The UNMT encoder is initialized with the MLM model’s parameters, and the decoder is randomly initialized. We find translated questions to be qualitatively more fluent and abstractive than the those from the models used in the main paper. Table 6 supports this observation, demonstrating that questions produced by models with MLM pretraining are classified as well-formed 10.5% more often than those without pretraining, surpassing the rule-based question generator of Heilman and Smith (2010). 
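The well-formedness percentages in Table 6 come from running the binary classifier over each system's generated questions; a minimal scoring sketch is below. It uses the current Hugging Face transformers API rather than the pytorch-pretrained-BERT implementation used in the paper, and model_dir is a placeholder for a BERT-Base model fine-tuned on the Faruqui and Das (2018) data, with label index 1 assumed to mean “well-formed”.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def fraction_well_formed(questions, model_dir, batch_size=32):
    """Fraction of `questions` labelled well-formed by a fine-tuned binary classifier."""
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir).eval()
    well_formed = 0
    with torch.no_grad():
        for i in range(0, len(questions), batch_size):
            batch = tokenizer(questions[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            well_formed += int((model(**batch).logits.argmax(dim=-1) == 1).sum())
    return well_formed / max(len(questions), 1)
```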
However, using MLM pretraining did not lead to significant differences for question answering performance (the main focus of this paper), so we leave a thorough investigation into language model pretraining for unsupervised question answering as future work. A.7 More Examples of Unsupervised NMT Cloze Translations Table 4 shows examples of cloze question translations from our model, but due to space constraints, only a few examples can be shown there. Table 7 shows many more examples. 4910 Cloze Question Answer Generated Question to record their sixth album in TEMPORAL 2005 When will they record their sixth album ? Redline management got word that both were negotiating with THING Trek/Gary Fisher What Redline management word got that both were negotiating ? Reesler to suspect that Hitchin murdered PERSON/NORP/ORG Wright Who is Reesler to suspect that Hitchin murdered ? joined PERSON/NORP/ORG in the 1990s to protest the Liberals’ long-gun registry the Reform Party Who joined in the 1990s to protest the Liberals ’ long-gun registry ? to end the TEMPORAL NLCS, and the season, for the New York Mets 2006 When will the NLCS end , and the season , for the New York Mets ? NUMERIC of the population concentrated in the province of Lugo about 75% How many of you are concentrated in the province of Lugo ? placed NUMERIC on uneven bars and sixth on balance beam fourth How many bars are placed on uneven bars and sixth on balance beam ? to open a small branch in PLACE located in Colonia Escalon in San Salvador La Casona Where do I open a small branch in Colonia Escalon in San Salvador ? they finished outside the top eight when considering only THING events World Cup What if they finished outside the top eight when considering only events ? he obtained his Doctor of Law degree in 1929.Who’s who in PLACE America Where can we obtain our Doctor of Law degree in 1929.Who ’ s who ? to establish the renowned Paradise Studios in PLACE in 1979 Sydney Where is the renowned Paradise Studios in 1979 ? Ukraine came out ahead NUMERIC four to three How much did Ukraine come out ahead ? their rule over these disputed lands was cemented after another Polish victory, in THING the PolishSoviet War What was their rule over these disputed lands after another Polish victory , anyway ? sinking PERSON/NORP/ORG 35 before being driven down by depth charge attacks Patrol Boat Who is sinking 35 before being driven down by depth charge attacks ? to hold that PLACE was the sole or primary perpetrator of human rights abuses North Korea Where do you hold that was the sole or primary perpetrator of human rights abuses ? to make it 21 to the Hungarians, though PLACE were quick to equalise Italy Where do you make it 2-1 to the Hungarians , though quick equalise ? he was sold to Colin Murphy’s Lincoln City for a fee of NUMERIC 15,000 How much do we need Colin Murphy ’ s Lincoln City for a fee ? Bierut is the co-founder of the blog PERSON/NORP/ORG Design Observer Who is the Bierut co-founder of the blog ? the Scotland matches at the 1982 THING being played in a ”family atmosphere” FIFA World Cup What are the Scotland matches at the 1982 being played in a ” family atmosphere ” ? Tom realizes that he has finally conquered both ”THING” and his own stage fright La Cinquette What happens when Tom realizes that he has finally conquered both ” and his own stage fright ? it finished first in the PERSON/NORP/ORG ratings in April 1990 Arbitron Who finished it first in the ratings in April 1990 ? 
his observer to destroy NUMERIC others two How many others can his observer destroy ? Martin had recorded some solo songs (including ”Never Back Again”) in 1984 in PLACE the United Kingdom Where have Martin recorded some solo songs ( including ” Never Back Again ” ) in 1984 ? the NUMERIC occurs under stadium lights second How many lights occurs under stadium ? PERSON/NORP/ORG had made a century in the fourth match Poulton Who had made a century in the fourth match ? was sponsored by the national liberal politician PERSON/NORP/ORG Valentin Zarnik Who was sponsored by the national liberal politician ? Woodbridge also shares the PERSON/NORP/ORG with the neighboring towns of Bethany and Orange. Amity Regional High School Who else shares the Woodbridge with the neighboring towns of Bethany and Orange ? A new Standard TEMPORAL benefit was introduced for university students tertiary When was a new Standard benefit for university students ? mentions the Bab and THING Bbs What are the mentions of Bab ? Table 7: Further cloze translations from the UNMT model (with subclause boundaries and wh* heuristic applied)
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911–4921 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4911 MULTIQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension Alon Talmor1,2 Jonathan Berant1,2 1School of Computer Science, Tel-Aviv University 2Allen Institute for Artificial Intelligence {alontalmor@mail,joberant@cs}.tau.ac.il Abstract A large number of reading comprehension (RC) datasets has been created recently, but little analysis has been done on whether they generalize to one another, and the extent to which existing datasets can be leveraged for improving performance on new ones. In this paper, we conduct such an investigation over ten RC datasets, training on one or more source RC datasets, and evaluating generalization, as well as transfer to a target RC dataset. We analyze the factors that contribute to generalization, and show that training on a source RC dataset and transferring to a target dataset substantially improves performance, even in the presence of powerful contextual representations from BERT (Devlin et al., 2019). We also find that training on multiple source RC datasets leads to robust generalization and transfer, and can reduce the cost of example collection for a new RC dataset. Following our analysis, we propose MULTIQA, a BERTbased model, trained on multiple RC datasets, which leads to state-of-the-art performance on five RC datasets. We share our infrastructure for the benefit of the research community. 1 Introduction Reading comprehension (RC) is concerned with reading a piece of text and answering questions about it (Richardson et al., 2013; Berant et al., 2014; Hermann et al., 2015; Rajpurkar et al., 2016). Its appeal stems both from the clear application it proposes, but also from the fact that it allows to probe many aspects of language understanding, simply by posing questions on a text document. Indeed, this has led to the creation of a large number of RC datasets in recent years. While each RC dataset has a different focus, there is still substantial overlap in the abilities required to answer questions across these datasets. Nevertheless, there has been relatively little work (Min et al., 2017; Chung et al., 2018; Sun et al., 2018) that explores the relations between the different datasets, including whether a model trained on one dataset generalizes to another. This research gap is highlighted by the increasing interest in developing and evaluating the generalization of language understanding models to new setups (Yogatama et al., 2019; Liu et al., 2019). In this work, we conduct a thorough empirical analysis of generalization and transfer across 10 RC benchmarks. We train models on one or more source RC datasets, and then evaluate their performance on a target test set, either without any additional target training examples (generalization) or with additional target examples (transfer). We experiment with DOCQA (Clark and Gardner, 2018), a standard and popular RC model, as well as a model based on BERT (Devlin et al., 2019), which provides powerful contextual representations. Our generalization analysis confirms findings that current models over-fit to the particular training set and generalize poorly even to similar datasets. Moreover, BERT representations substantially improve generalization. 
However, we find that the contribution of BERT is much more pronounced on Wikipedia (which BERT was trained on) and Newswire, but quite moderate when documents are taken from web snippets. We also analyze the main causes for poor generalization: (a) differences in the language of the text document, (b) differences in the language of the question, and (c) the type of language phenomenon that the dataset explores. We show how generalization is related to these factors (Figure 1) and that performance drops as more of these factors accumulate. Our transfer experiments show that pre-training on one or more source RC datasets substantially improves performance when fine-tuning on a tar4912 get dataset. An interesting question is whether such pre-training improves performance even in the presence of powerful language representations from BERT. We find the answer is a conclusive yes, as we obtain consistent improvements in our BERT-based RC model. We find that training on multiple source RC datasets is effective for both generalization and transfer. In fact, training on multiple datasets leads to the same performance as training from the target dataset alone, but with roughly three times fewer examples. Moreover, we find that when using the high capacity BERT-large, one can train a single model on multiple RC datasets, and obtain close to or better than state-of-the-art performance on all of them, without fine-tuning to a particular dataset. Armed with the above insights, we train a large RC model on multiple RC datasets, termed MULTIQA. Our model leads to new state-of-the-art results on five datasets, suggesting that in many language understanding tasks the size of the dataset is the main bottleneck, rather than the model itself. Last, we have developed infrastructure (on top of AllenNLP (Gardner et al., 2018)), where experimenting with multiple models on multiple RC datasets, mixing datasets, and performing finetuning, are trivial. It is also simple to expand the infrastructure to new datasets and new setups (abstractive RC, multi-choice, etc.). We will open source our infrastructure, which will help researchers evaluate models on a large number of datasets, and gain insight on the strengths and shortcoming of their methods. We hope this will accelerate progress in language understanding. To conclude, we perform a thorough investigation of generalization and transfer in reading comprehension over 10 RC datasets. Our findings are: • An analysis of generalization on two RC models, illustrating the factors that influence generalization between datasets. • Pre-training on a RC dataset and fine-tuning on a target dataset substantially improves performance even in the presence of contextualized word representations (BERT). • Pre-training on multiple RC datasets improves transfer and generalization and can reduce the cost of example annotation. • A new model, MULTIQA, that improves state-ofthe-art performance on five datasets. • Infrastructure for easily performing experiments on multiple RC datasets. Dataset Size Context Question Multi-hop SQUAD 108K Wikipedia crowd No NEWSQA 120K Newswire crowd No SEARCHQA 140K Snippets trivia No TRIVIAQA 95K Snippets trivia No HOTPOTQA 113K Wikipedia crowd Yes CQ 2K Snippets Web queries/KB No CWQ 35K Snippets crowd/KB Yes COMQA 11K Snippets WikiAnswers No WIKIHOP 51K Wikipedia KB Yes DROP 96K Wikipedia crowd Yes Table 1: Characterization of different RC datasets. The top part corresponds to large datasets, and the bottom to small datasets. 
The uniform format datasets can be downloaded from www.tau-nlp.org/multiqa. The code for the AllenNLP models is available at http://github.com/alontalmor/ multiqa. 2 Datasets We describe the 10 datasets used for our investigation. Each dataset provides question-contextanswer triples {(qi, ci, ai)}N i=1 for training, and a model maps an unseen question-context pair (q, c) to an answer a. For simplicity, we focus on the single-turn extractive setting, where the answer a is a span in the context c. Thus, we do not evaluate abstractive (Nguyen et al., 2016) or conversational datasets (Choi et al., 2018; Reddy et al., 2018). We broadly distinguish large datasets that include more than 75K examples, from small datasets that contain less than 75K examples. In §4, we will fix the size of the large datasets to control for size effects, and always train on exactly 75K examples per dataset. We now shortly describe the datasets, and provide a summary of their characteristics in Table 1. The table shows the original size of each dataset, the source for the context, how questions were generated, and whether the dataset was specifically designed to probe multi-hop reasoning. The large datasets used are: 1. SQUAD (Rajpurkar et al., 2016): Crowdsourcing workers were shown Wikipedia paragraphs and were asked to author questions about their content. Questions mostly require soft matching of the language in the question to a local context in the text. 2. NEWSQA (Trischler et al., 2017): Crowdsourcing workers were shown a CNN article (longer than SQUAD) and were asked to au4913 thor questions about its content. 3. SEARCHQA (Dunn et al., 2017): Trivia questions were taken from Jeopardy! TV show, and contexts are web snippets retrieved from Google search engine for those questions, with an average of 50 snippets per question. 4. TRIVIAQA (Joshi et al., 2017): Trivia questions were crawled from the web. In one variant of TRIVIAQA (termed TQA-W), Wikipedia pages related to the questions are provided for each question. In another, web snippets and documents from Bing search engine are given. For the latter variant, we use only the web snippets in this work (and term this TQA-U). In addition, we replace Bing web snippets with Google web snippets (and term this TQA-G). 5. HOTPOTQA (Yang et al., 2018): Crowdsourcing workers were shown pairs of related Wikipedia paragraphs and asked to author questions that require multi-hop reasoning over the paragraphs. There are two versions of HOTPOTQA: the first where the context includes the two gold paragraphs and eight “distractor” paragraphs, and a second, where 10 paragraphs retrieved by an information retrieval (IR) system are given. Here, we use the latter version. The small datasets are: 1. CQ (Bao et al., 2016): Questions are real Google web queries crawled from Google Suggest, originally constructed for querying the KB Freebase (Bollacker et al., 2008). However, the dataset was also used as a RC task with retrieved web snippets (Talmor et al., 2017). 2. CWQ (Talmor and Berant, 2018c): Crowdsourcing workers were shown compositional formal queries against Freebase and were asked to re-phrase them in natural language. Thus, questions require multi-hop reasoning. The original work assumed models contain an IR component, but the authors also provided default web snippets, which we use here. The repartitioned version 1.1 was used. (Talmor and Berant, 2018a) 3. 
WIKIHOP (Welbl et al., 2017) Questions are entity-relation pairs from Freebase, and are not phrased in natural language. Multiple Wikipedia paragraphs are given as context, and the dataset was constructed such that multi-hop reasoning is needed for answering the question. 4. COMQA (Abujabal et al., 2018): Questions are real user questions from the WikiAnswers community QA platform. No contexts are provided, and thus we augment the questions with web snippets retrieved from Google search engine. 5. DROP (Dua et al., 2019): Contexts are Wikipedia paragraphs and questions are authored by crowdsourcing workers. This dataset focuses on quantitative reasoning. Because most questions are not extractive, we only use the 33,573 extractive examples in the dataset (but evaluate on the entire development set). 3 Models We carry our empirical investigation using two models. The first is DOCQA (Clark and Gardner, 2018), and the second is based on BERT (Devlin et al., 2019), which we term BERTQA. We now describe the pre-processing on the datasets, and provide a brief description of the models. We emphasize that in all our experiments we use exactly the same training procedure for all datasets, with minimal hyper-parameter tuning. Pre-processing Examples in all datasets contain a question, text documents, and an answer. To generate an extractive example we (a) Split: We define a length L and split every paragraph whose length is > L into chunks using a few manual rules. (b) Sort: We sort all chunks (paragraphs whose length is ≤L or split paragraphs) by cosine similarity to the question in tf-idf space, as proposed by Clark and Gardner (2018). (c) Merge: We go over the sorted list of chunks and greedily merge them to the largest possible length that is at most L, so that the RC model will be exposed to as much context as possible. The final context is the merged list of chunks c = (c1, . . . , c|c|) (d) We take the gold answer and mark all spans that match the answer. DOCQA (Clark and Gardner, 2018): A widelyused RC model, based on BIDAF (Seo et al., 2016), that encodes the question and document with bidirectional RNNs, performs attention between the question and document, and adds selfattention on the document side. We run DOCQA on each chunk ci, where the input is a sequence of up to L(= 400) tokens represented as GloVE embeddings (Pennington et al., 2014). The output is a distribution over the start and end positions of the predicted span, and we output the span with highest probability across all chunks. At training time, DOCQA uses a shared4914 norm objective that normalizes the probability distribution over spans from all chunks. We define the gold span to be the first occurrence of the gold answer in the context c. BERTQA (Devlin et al., 2019): For each chunk, we apply the standard implementation, where the input is a sequence of L = 512 wordpiece tokens composed of the question and chunk separated by special tokens [CLS] <question> [SEP] <chunk> [SEP]. A linear layer with softmax over the top-layer [CLS] outputs a distribution over start and end span positions. We train over each chunk separately, backpropagating into BERT’s parameters. We maximize the log-likelihood of the first occurrence of the gold answer in each chunk that contains the gold answer. At test time, we output the span with the maximal logit across all chunks. 4 Controlled Experiments We now present controlled experiments aiming to explore generalization and transfer of models trained on a set of RC datasets to a new target. 
4.1 Do models generalize to unseen datasets?

We first examine generalization – whether models trained on one dataset generalize to examples from a new distribution. While different datasets differ substantially, there is overlap between them in terms of: (i) the language of the question, (ii) the language of the context, and (iii) the type of linguistic phenomena the dataset aims to probe. Our goal is to answer: (a) Do models over-fit to a particular dataset? How much does performance drop when generalizing to a new dataset? (b) Which datasets generalize better to which datasets? What properties determine generalization?

We train DOCQA and BERTQA (we use BERT-base) on six large datasets (for TRIVIAQA we use TQA-G and TQA-W), taking 75K examples from each dataset to control for size. We also create MULTI-75K, which contains 15K examples from each of the five large datasets (using TQA-G only for TRIVIAQA), resulting in another dataset of 75K examples. We evaluate performance on all datasets that the model was not trained on.

Table 2 shows exact match (EM) performance (does the predicted span exactly match the gold span) on the development set. The row SELF corresponds to training and testing on the target itself, and is provided for reference (for DROP, we train on questions where the answer is a span in the context, but evaluate on the entire development set). The top part shows DOCQA, while the bottom shows BERTQA.

At a high level, we observe three trends. First, models generalize poorly in this zero-shot setup: comparing SELF to the best zero-shot number shows a performance reduction of 31.5% on average. This confirms the finding that models over-fit to the particular dataset. Second, BERTQA substantially improves generalization compared to DOCQA owing to the power of large-scale unsupervised learning – performance improves by 21.2% on average. Last, MULTI-75K performs almost as well as the best source dataset, reducing performance by only 3.7% on average. Hence, training on multiple datasets results in robust generalization. We further investigate training on multiple datasets in §4.2 and §5.

Taking a closer look, the pair SEARCHQA and TQA-G exhibits the smallest performance drop, since both use trivia questions and web snippets. SQUAD and NEWSQA also generalize well (especially with BERTQA), probably because they contain questions on a single document, focusing on predicate-argument structure. While HOTPOTQA and WIKIHOP both examine multi-hop reasoning over Wikipedia, performance dramatically drops from HOTPOTQA to WIKIHOP. This is due to the difference in the language of the questions (WIKIHOP questions are synthetic). The best generalization to DROP is from HOTPOTQA, since both require multi-hop reasoning. Performance on DROP is overall low, showing that our models struggle with quantitative reasoning. For the small datasets, COMQA, CQ, and CWQ, generalization is best with TQA-G, as the contexts in these datasets are web snippets. For CQ, whose training set has 1,300 examples, zero-shot performance is even higher than SELF. Interestingly, BERTQA improves performance substantially compared to DOCQA on NEWSQA, SQUAD, TQA-W and WIKIHOP, but only moderately on HOTPOTQA, SEARCHQA, and TQA-G. This hints that BERT is effective when the context is similar to (or even part of) its training corpus, but degrades over web snippets. This is most evident when comparing TQA-G to TQA-W, as the difference between them is the type of context.
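To make the zero-shot protocol concrete, the following sketch builds MULTI-75K and fills a Table 2-style matrix. The dataset names and the training and evaluation routines (`train_model`, `evaluate_em`) are illustrative, user-supplied pieces rather than functions from the paper's codebase:

```python
import random

LARGE = ["SQuAD", "NewsQA", "SearchQA", "TriviaQA-G", "HotpotQA"]  # TriviaQA counted once

def build_multi_75k(train_sets, per_dataset=15_000, seed=0):
    """MULTI-75K: an equal mix of 15K examples from each of the five large datasets."""
    rng = random.Random(seed)
    mixed = []
    for name in LARGE:
        mixed.extend(rng.sample(train_sets[name], per_dataset))
    rng.shuffle(mixed)
    return mixed

def zero_shot_table(train_sets, dev_sets, train_model, evaluate_em):
    """Fill a Table 2-style matrix: train on one source (75K examples),
    evaluate EM on every other dev set. `train_model` and `evaluate_em`
    are assumed callables supplied by the user."""
    sources = {name: examples[:75_000] for name, examples in train_sets.items()}
    sources["MULTI-75K"] = build_multi_75k(train_sets)
    results = {}
    for source_name, data in sources.items():
        model = train_model(data)
        for target_name, dev in dev_sets.items():
            if target_name != source_name:
                results[(source_name, target_name)] = evaluate_em(model, dev)
    return results
```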
Train \ Eval   CQ    CWQ   COMQA  WIKIHOP  DROP  SQUAD  NEWSQA  SEARCHQA  TQA-G  TQA-W  HOTPOTQA
DOCQA:
SQUAD          18.0  10.1  16.1   4.2      2.4   -      23.4    9.5       32.0   20.9   7.6
NEWSQA         14.9  8.2   13.5   4.8      3.0   41.9   -       7.7       25.3   19.9   5.3
SEARCHQA       29.2  16.1  24.6   8.1      2.3   17.4   10.8    -         50.3   28.9   4.5
TQA-G          30.3  17.8  29.4   9.2      3.0   30.2   15.5    38.5      -      -      7.2
TQA-W          24.6  14.5  17.9   8.4      2.9   24.8   15.0    20.5      -      -      6.5
HOTPOTQA       24.6  14.9  21.2   8.5      7.7   38.3   16.9    13.5      36.8   26.0   -
MULTI-75K      32.8  17.9  26.7   7.4      4.3   -      -       -         -      -      -
SELF           24.1  24.9  45.2   41.7     15.6  68.0   36.5    51.3      58.9   41.6   22.5
BERTQA:
SQUAD          23.6  12.0  20.0   4.6      5.5   -      31.8    8.4       37.8   33.4   11.8
NEWSQA         24.1  12.4  18.9   7.1      4.4   60.4   -       10.1      37.6   28.4   8.0
SEARCHQA       30.3  18.5  25.8   12.4     2.8   23.3   12.7    -         53.2   35.4   5.2
TQA-G          35.4  19.7  28.6   6.3      3.6   36.3   18.8    39.2      -      -      8.8
TQA-W          30.3  16.5  23.6   12.6     5.1   35.5   19.4    27.8      -      -      8.7
HOTPOTQA       27.7  15.5  22.1   10.2     9.1   54.5   25.6    19.6      37.3   34.9   -
MULTI-75K      34.0  18.2  30.9   11.7     8.6   -      -       -         -      -      -
SELF           30.8  27.1  51.6   52.9     17.9  78.0   46.0    52.2      60.7   50.1   24.2

Table 2: Exact match on the development set for all datasets in a zero-shot training setup (no training on the target dataset). The top of the table shows results for DOCQA, while the bottom for BERTQA. Rows correspond to the training dataset and columns to the evaluated dataset. Large datasets are on the right side, and small datasets on the left side; see text for details of all rows. Datasets used for training were not evaluated. In MULTI-75K these comprise all large datasets, and thus these cases are marked by “-”.

Global structure: To view the global structure of the datasets, we visualize them with the force-directed placement algorithm (Fruchterman and Reingold, 1991). The input is a set of nodes (datasets), and a set of undirected edges representing springs in a mechanical system pulling nodes towards one another. Edges specify the pulling force, and a physical simulation places the nodes in a final minimal energy state in 2D-space. Let P_{12} be the performance when training BERTQA on dataset D_1 and evaluating on D_2, and let P_1 be the performance when training and evaluating on D_1. The force between an unordered pair of datasets is F(D_1, D_2) = P_{12}/P_2 + P_{21}/P_1 when we train and evaluate in both directions, and F(D_1, D_2) = 2 · P_{12}/P_2 if we train on D_1 and evaluate on D_2 only.

[Figure 1: A 2D-visualization of the similarity between different datasets using the force-directed placement algorithm. We mark datasets that use web snippets as context with triangles, Wikipedia with circles, and Newswire with squares. We color multi-hop reasoning datasets in red, trivia datasets in blue, and factoid RC datasets in green.]

Figure 1 shows this visualization, where we observe that datasets cluster naturally according to shape and color. Focusing on the context, datasets with web snippets are clustered (triangles), while datasets that use Wikipedia are also near one another (circles). Considering the question language, TQA-G, SEARCHQA, and TQA-U are very close (blue triangles), as all contain trivia questions over web snippets. DROP, HOTPOTQA, NEWSQA and SQUAD generate questions with crowd workers, and all are at the top of the figure. WIKIHOP uses synthetic questions that prevent generalization, and is far from other datasets – however this gap will be closed during transfer learning (§4.2). DROP is far from all datasets because it requires quantitative reasoning that is missing from other datasets. However, it is relatively close to HOTPOTQA and WIKIHOP, which target multi-hop reasoning. DROP is also close to SQUAD, as both have similar contexts and question language, but the linguistic phenomena they target differ.
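For readers who want to reproduce a similar map from their own cross-dataset numbers, a minimal sketch using networkx's Fruchterman–Reingold (spring) layout is given below; it illustrates the force definition above and is not the authors' plotting code:

```python
import networkx as nx

def dataset_layout(P_cross, P_self):
    """P_cross[(i, j)]: EM when training on i and evaluating on j.
    P_self[i]: EM when training and evaluating on i (the SELF row)."""
    G = nx.Graph()
    G.add_nodes_from(P_self)
    for (i, j), p_ij in P_cross.items():
        if i == j:
            continue
        # F(D1, D2) = P12/P2 + P21/P1 if both directions exist, else 2 * P12/P2.
        if (j, i) in P_cross:
            force = p_ij / P_self[j] + P_cross[(j, i)] / P_self[i]
        else:
            force = 2.0 * p_ij / P_self[j]
        # If the pair was already added from the other direction, keep the larger force.
        if G.has_edge(i, j):
            force = max(force, G[i][j]["weight"])
        G.add_edge(i, j, weight=force)
    # Spring layout: higher edge weight pulls the two datasets closer together.
    return nx.spring_layout(G, weight="weight", seed=0)
```

When both directions are present, the two passes over the pair compute the same value, so the max is only needed for one-directional pairs.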
Does generalization improve with more data? So far we trained on datasets with 75K examples. To examine generalization as the training set size increases, we evaluate performance as the number of examples from the five large datasets grows. Table 3 shows that generalization improves by 26% on average when increasing the number of examples from 37K to 375K.

             CQ    CWQ   COMQA  WIKIHOP  DROP
MULTI-37K    30.9  17.7  28.4   12.3     6.3
MULTI-75K    34.0  18.2  30.9   11.7     8.6
MULTI-150K   35.0  17.6  30.0   12.4     9.1
MULTI-250K   35.6  20.2  31.1   11.9     11.0
MULTI-300K   37.6  18.8  31.5   13.5     10.4
MULTI-375K   36.1  20.7  31.3   13.3     11.3

Table 3: Exact match on the development set of all small datasets, as we increase the number of examples taken from the five large datasets (zero-shot setup).

4.2 Does pre-training improve results on small datasets?

We now consider transfer learning, assuming access to a small number of examples (≤15K) from a target dataset. We pre-train a model on a source dataset, and then fine-tune on the target. In all models, pre-training and fine-tuning are identical and performed until no improvement is seen on the development set (early stopping). Our goal is to analyze whether pre-training improves performance compared to training on the target alone. This is particularly interesting with BERTQA, as BERT already contains substantial knowledge that might render pre-training unnecessary.

How to choose the dataset to pre-train on? Table 4 shows exact match (EM) on the development set of all datasets (rows are the trained datasets and columns the evaluated datasets). First, pre-training on a source RC dataset and transferring to the target improves performance by 21% on average for DOCQA (improving on 8 out of 11 datasets), and by 7% on average for BERTQA (improving on 10 out of 11 datasets). Thus, pre-training on a related RC dataset helps even given representations from a model like BERTQA. Second, MULTI-75K obtains good performance in almost all setups. Performance of MULTI-75K is 3% lower than the best source RC dataset on average for DOCQA, and 0.3% lower for BERTQA. Hence, one can pre-train a single model on a mixed dataset, rather than choose the best source dataset for every target. Third, in 4 datasets (COMQA, DROP, HOTPOTQA, WIKIHOP) the best source dataset uses web snippets in DOCQA, but Wikipedia in BERTQA. This strengthens our finding that BERTQA performs better given Wikipedia text. Last, we see a dramatic improvement in performance compared to §4.1. This highlights that current models over-fit to the data they are trained on, and small amounts of data from the target distribution can overcome this generalization gap. This is clearest for WIKIHOP, where synthetic questions preclude generalization, but fine-tuning improves performance from 12.6 EM to 50.5 EM. Thus, low performance was not due to a modeling issue, but rather a mismatch in the question language.

An interesting question is whether performance in the generalization setup is predictive of performance in the transfer setup. Average performance across target datasets in Table 4, when choosing the best source dataset from Table 4, is 39.3 (DOCQA) and 43.8 (BERTQA). Average performance across datasets in Table 4, when choosing the best source dataset from Table 2, is 38.9 (DOCQA) and 43.5 (BERTQA). Thus, one can select a dataset to pre-train on based on generalization performance and suffer a minimal hit in accuracy, without fine-tuning on each dataset. However, training on MULTI-75K also yields good results without selecting a source dataset at all.
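A bare-bones sketch of this pre-train-then-fine-tune protocol follows; `fit_epoch` and `evaluate_em` are assumed, user-supplied callables, and the patience and epoch limits are illustrative rather than the values used in the paper:

```python
def train_until_no_improvement(model, train_data, dev_data, fit_epoch, evaluate_em,
                               patience=1, max_epochs=30):
    """Run training epochs until dev EM stops improving (simple early stopping)."""
    best_em, bad_epochs = -1.0, 0
    for _ in range(max_epochs):
        fit_epoch(model, train_data)       # one pass over the training data
        em = evaluate_em(model, dev_data)
        if em > best_em:
            best_em, bad_epochs = em, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_em

def pretrain_then_finetune(model, source, target, fit_epoch, evaluate_em):
    """Pre-train on a source RC dataset, then fine-tune on <=15K target examples,
    with early stopping on the corresponding development set in each stage."""
    train_until_no_improvement(model, source["train"], source["dev"], fit_epoch, evaluate_em)
    return train_until_no_improvement(model, target["train"][:15_000], target["dev"],
                                      fit_epoch, evaluate_em)
```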
How much target data is needed? We saw that with 15K training examples from the target dataset, pre-training improves performance. We now ask whether this effect holds given a larger training set. To examine this, we measure the performance on each of the large datasets (Figure 2) when pre-training on its nearest dataset (according to F(·, ·)), for both DOCQA (top row) and BERTQA (bottom row). The orange curve corresponds to training on the target dataset only, while the blue curve describes pre-training on 75K examples from a source dataset, and then fine-tuning on an increasing number of examples from the target dataset. In 5 out of 10 curves, pre-training improves performance even given access to all 75K examples from the target dataset. In the other 5, using only the target dataset is better after 30-50K examples. To estimate the savings in annotation costs through pre-training, we measure how many examples are needed, when doing pre-training, to reach 95% of the performance obtained when training on all examples from the target dataset. We find that with pre-training we only need 49% of the examples to reach 95% performance, compared to 86% without pre-training.

[Figure 2: Learning curves for the five large datasets (top is DOCQA and bottom is BERTQA). The x-axis corresponds to the number of examples from the target dataset, and the y-axis is EM. The orange curve refers to training on the target dataset only, and the blue curve refers to pre-training on 75K examples from the nearest source dataset and fine-tuning on the target dataset. The green curve is training on a fixed number of examples from all 5 large datasets without fine-tuning (MULTIQA).]

To further explore pre-training on multiple datasets, we plot a curve (green) for BERTQA, where at each point we train on a fixed number of examples from all five large datasets (no fine-tuning). We observe that more data from multiple datasets improves performance in almost all cases. In this case, we reach 95% of the final performance using only 30% of the examples. We will use this observation further in §5 to reach new state-of-the-art performance on several datasets.

Train \ Eval   CQ    CWQ   COMQA  WIKIHOP  DROP  SQUAD  NEWSQA  SEARCHQA  TQA-G  TQA-W  HOTPOTQA
DOCQA:
SQUAD          29.7  25.3  37.1   39.2     14.5  -      33.3    39.2      49.2   34.5   17.8
NEWSQA         16.9  26.1  34.7   38.1     14.3  59.6   -       41.6      44.2   33.9   16.5
SEARCHQA       30.8  28.8  41.3   39.0     15.0  57.0   31.4    -         57.5   39.6   19.2
TQA-G          41.5  30.1  42.6   42.0     14.0  57.7   31.8    49.5      -      41.4   19.1
TQA-W          31.3  27.0  38.0   41.4     13.3  57.6   31.7    44.4      50.7   -      17.2
HOTPOTQA       40.0  27.7  39.5   40.4     14.6  59.8   32.4    46.3      54.6   37.4   -
MULTI-75K      43.1  27.6  39.1   38.9     14.5  59.8   33.0    47.5      56.4   40.4   19.2
SELF           24.1  24.9  45.2   41.7     15.6  56.5   30.0    35.9      41.2   27.7   13.8
BERTQA:
SQUAD          36.9  29.0  52.2   48.2     18.6  -      41.2    47.8      55.2   45.4   20.8
NEWSQA         36.9  29.4  52.2   48.4     17.8  72.1   -       47.4      55.9   45.2   20.6
SEARCHQA       40.5  30.0  53.4   50.6     17.6  70.2   40.2    -         57.3   45.5   20.4
TQA-G          40.0  30.6  53.4   49.5     17.6  69.9   41.2    50.0      -      46.2   20.8
TQA-W          39.0  30.3  54.0   50.0     17.3  71.0   39.2    48.4      55.7   -      20.9
HOTPOTQA       34.4  30.2  53.0   49.3     17.2  71.2   39.5    48.6      56.6   45.6   -
MULTI-75K      42.6  30.6  53.3   50.5     17.9  71.5   42.1    48.5      56.6   46.5   20.4
SELF           30.8  27.1  51.6   52.9     17.1  70.1   37.9    46.0      54.4   41.9   18.9

Table 4: Exact match on the development set for all datasets with transfer learning. Fine-tuning is done on ≤15K examples. The top of the table shows results for DOCQA, while the bottom for BERTQA. Rows are the trained datasets and columns are the evaluated datasets for which fine-tuning was performed. Large datasets are on the right, and small datasets are on the left side.

4.3 Does context augmentation improve performance?

For TRIVIAQA, we have contexts from three different sources for all questions – Wikipedia (TQA-W), Bing web snippets (TQA-U), and Google web snippets (TQA-G).
Thus, we can explore whether combining the three datasets improves performance. Moreover, because questions are identical across the datasets, we can see the effect on generalization due to the context language only. Table 5 shows the results. In the first 3 rows we train on 75K examples from each dataset, and in the last we train on the combined 225K examples. First, we observe that context augmentation substantially improves performance (especially for TQA-G and TQA-W). Second, generalization is sensitive to the context type: performance substantially drops when training on one context type and evaluating on another (60.7 → 48.4 for TQA-G, 53.1 → 44.6 for TQA-U, and 50.1 → 43.3 for TQA-W).

               TQA-G  TQA-U  TQA-W
TQA-G          60.7   53.6   43.3
TQA-U          57.2   53.1   39.9
TQA-W          48.4   44.6   50.1
ALLCONTEXTS    67.7   54.4   54.7

Table 5: EM on the development set, where each row uses the same question with a different context, and ALLCONTEXTS is a union of the other 3 datasets.

5 MULTIQA

We now present MULTIQA, a BERT-based model, trained on multiple RC datasets, that obtains new state-of-the-art results on several datasets.

Does training on multiple datasets improve BERTQA? MULTIQA trains BERTQA on the MULTI-375K dataset presented above, which contains 75K examples from 5 large datasets, but uses BERT-large rather than BERT-base. For small target datasets, we fine-tune the model on these datasets, since they were not observed when training on MULTI-375K. For large datasets, we do not fine-tune. We found that fine-tuning on datasets that are already part of MULTI-375K does not improve performance (we assume this is due to the high capacity of BERT-large), and thus we use one model for all the large datasets. We train on MULTI-375K, and thus our model does not use all examples in the original datasets, which contain more than 75K examples. We use the official evaluation script for any dataset that provides one, and the SQUAD evaluation script for all other datasets.

               BERT-large Dev   MULTIQA Dev     MULTIQA Test    SOTA
Dataset        EM     tok. F1   EM     tok. F1  EM     tok. F1  EM     tok. F1
NEWSQA         51.5   66.2      53.9   68.2     52.3   67.4     53.1   66.3
SEARCHQA       59.2   66.4      60.7   67.1     59.0   65.1     58.8   64.5
TQA-U          56.8   62.6      58.4   64.3     -      -        52.0   61.7
CWQ            30.8   -         35.4   -        34.9   -        34.2   -
HOTPOTQA       27.9   37.7      30.6   40.3     30.7   40.2     37.1   48.9

Table 6: Results for datasets where the official evaluation metric is EM and token F1. The CWQ evaluation script provides only the EM metric. We did not find a public evaluation script for the hidden test set of TQA-U.
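For datasets without an official script, the SQuAD-style metrics used above are simple to compute. The sketch below mirrors the usual SQuAD evaluation logic but omits its answer-normalization details (punctuation and article stripping), so it is an approximation rather than any dataset's official script:

```python
from collections import Counter

def exact_match(prediction, gold):
    """1.0 if the (lightly normalized) prediction equals the gold answer."""
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens = prediction.strip().lower().split()
    gold_tokens = gold.strip().lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```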
Table 6 shows results for datasets where the evaluation metric is EM or token F1 (the harmonic mean of precision and recall over the tokens of the predicted vs. gold span). Table 7 shows results for datasets where the evaluation metric is average recall/precision/F1 between the list of predicted answers and the list of gold answers. We compare MULTIQA to BERT-large, a model that does not train on MULTI-375K, but only fine-tunes BERT-large on the target dataset. We also show the state-of-the-art (SOTA) result for all datasets for reference (state-of-the-art results were found in Tay et al. (2018) for NEWSQA, Lin et al. (2018) for SEARCHQA, Das et al. (2019) for TQA-U, Talmor and Berant (2018b) for CWQ, Ding et al. (2019) for HOTPOTQA, Abujabal et al. (2018) for COMQA, and Bao et al. (2016) for CQ).

MULTIQA improves state-of-the-art performance on five datasets, although it does not even train on all examples in the large datasets. We compare only to models for which we found a publication: for TQA-U, Figure 4 in Clark and Gardner (2018) shows roughly 67 F1 on the development set, but no exact number; for CQ, we compare against the SOTA achieved on the web snippets context (on the Freebase context, the SOTA is 42.8 F1; Luo et al., 2018). MULTIQA improves performance compared to BERT-large in all cases. This improvement is especially noticeable in small datasets such as COMQA, CWQ, and CQ. Moreover, on NEWSQA, MULTIQA surpasses human performance as measured by the creators of the dataset (46.5 EM, 69.4 F1; Trischler et al., 2017), improving upon previous state-of-the-art by a large margin. To conclude, MULTIQA is able to improve state-of-the-art performance on multiple datasets. Our results suggest that in many NLU tasks the size of the dataset is the main bottleneck rather than the model itself.

Does training on multiple datasets improve resiliency against adversarial attacks? Finally, we evaluated MULTIQA on the adversarial SQUAD dataset (Jia and Liang, 2017), where a misleading sentence is appended to each context (ADDSENT variant). MULTIQA obtained 66.7 EM and 73.1 F1, outperforming BERT-large (60.4 EM, 66.3 F1) by a significant margin, and also substantially improving state-of-the-art results (56.0 EM, 61.3 F1, Hu et al., 2018; and 52.1 EM, 62.7 F1, Wang et al., 2018).

               BERT-large Dev         MULTIQA Dev            MULTIQA Test           SOTA
Dataset        Prec.  Rec.   F1       Prec.  Rec.   F1       Prec.  Rec.   F1       Prec.  Rec.   F1
COMQA          45.8   42.0   42.9     51.9   47.2   48.2     44.4   40.0   40.8     21.2   38.4   22.4
CQ             -      -      32.8     -      -      46.6     -      -      42.4     -      -      39.7

Table 7: Results for datasets where the evaluation metric is average recall/precision/F1. CQ evaluates with F1 only.

6 Related Work

Prior work has shown that RC performance can be improved by training on a large dataset and transferring to a smaller one, but at a small scale (Min et al., 2017; Chung et al., 2018). Sun et al. (2018) have recently shown this in a larger experiment for multi-choice questions, where they first fine-tuned BERT on RACE (Lai et al., 2017) and then fine-tuned on several smaller datasets. Interest in learning general-purpose representations for natural language through unsupervised, multi-task and transfer learning has been skyrocketing lately (Peters et al., 2018; Radford et al., 2018; McCann et al., 2018; Chronopoulou et al., 2019; Phang et al., 2018; Wang et al., 2019; Xu et al., 2019). In parallel to our work, studies that focus on generalization have appeared on publication servers, empirically studying generalization to multiple tasks (Yogatama et al., 2019; Liu et al., 2019).
Our work is part of this research thread on generalization in natural language understanding, focusing on reading comprehension, which we view as an important and broad language understanding task.

7 Conclusions

In this work we performed a thorough empirical investigation of generalization and transfer over 10 RC datasets. We characterized the factors affecting generalization and obtained several state-of-the-art results by training on 375K examples from 5 RC datasets. We open source our infrastructure for easily performing experiments on multiple RC datasets, for the benefit of the community. We highlight several practical take-aways:

• Pre-training on multiple source RC datasets consistently improves performance on a target RC dataset, even in the presence of BERT representations. It also leads to a substantial reduction in the number of training examples necessary for a fixed performance.
• Training the high-capacity BERT-large representations over multiple RC datasets leads to good performance on all of the trained datasets without having to fine-tune on each dataset separately.
• BERT representations improve generalization, but their effect is moderate when the source of the context is web snippets compared to Wikipedia and newswire.
• Performance over an RC dataset can be improved by retrieving web snippets for all questions and adding them as examples (context augmentation).

Acknowledgments

We thank the anonymous reviewers for their constructive feedback. This work was completed in partial fulfillment for the PhD degree of Alon Talmor. This research was partially supported by The Israel Science Foundation grant 942/16, The Blavatnik Computer Science Research Fund and The Yandex Initiative for Machine Learning.

References

A. Abujabal, R. S. Roy, M. Yahya, and G. Weikum. 2018. Comqa: A community-sourced dataset for complex factoid question answering with paraphrase clusters. arXiv preprint arXiv:1809.09528.
J. Bao, N. Duan, Z. Yan, M. Zhou, and T. Zhao. 2016. Constraint-based question answering with knowledge graph. In International Conference on Computational Linguistics (COLING).
J. Berant, V. Srikumar, P. Chen, A. V. Linden, B. Harding, B. Huang, P. Clark, and C. D. Manning. 2014. Modeling biological processes for reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP).
K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In International Conference on Management of Data (SIGMOD), pages 1247–1250.
E. Choi, H. He, M. Iyyer, M. Yatskar, W. Yih, Y. Choi, P. Liang, and L. Zettlemoyer. 2018. Quac: Question answering in context. In Empirical Methods in Natural Language Processing (EMNLP).
A. Chronopoulou, C. Baziotis, and A. Potamianos. 2019. An embarrassingly simple approach for transfer learning from pretrained language models. arXiv preprint arXiv:1902.10547.
Y. Chung, H. Lee, and J. Glass. 2018. Supervised and unsupervised transfer learning for question answering. In North American Association for Computational Linguistics (NAACL).
C. Clark and M. Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Association for Computational Linguistics (ACL).
R. Das, S. Dhuliawala, M. Zaheer, and A. McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations (ICLR).
J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019.
Bert: Pre-training of deep bidirectional transformers for language understanding. In North American Association for Computational Linguistics (NAACL). M. Ding, C. Zhou, Q. Chen, H. Yang, and J. Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In Association for Computational Linguistics (ACL). D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner. 2019. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Association for Computational Linguistics (NAACL). M. Dunn, , L. Sagun, M. Higgins, U. Guney, V. Cirik, and K. Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv. T. M. Fruchterman and E. M. Reingold. 1991. Graph drawing by force-directed placement. Software: Practice and experience, 21(11):1129–1164. M. Gardner, J. Grus, M. Neumann, O. Tafjord, P. Dasigi, N. Liu, M. Peters, M. Schmitz, and L. Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640. K. M. Hermann, T. Koisk, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NeurIPS). M. Hu, Y. Peng, F. Wei, Z. Huang, D. Li, N. Yang, and M. Zhou. 2018. Attention-guided answer distillation for machine reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP). R. Jia and P. Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP). M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Association for Computational Linguistics (ACL). G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. Y. Lin, H. Ji, Z. Liu, and M. Sun. 2018. Denoising distantly supervised open-domain question answering. In Association for Computational Linguistics (ACL), volume 1, pages 1736–1745. X. Liu, P. He, W. Chen, and J. Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. K. Luo1, F. Lin1, X., L. Kenny, and Q.Zhu1. 2018. Knowledge base question answering via encoding of complex query graphs. In Empirical Methods in Natural Language Processing (EMNLP). B. McCann, N. S. Keskar, C. Xiong, and R. Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. S. Min, M. Seo, and H. Hajishirzi. 2017. Question answering through transfer learning from large finegrained supervision data. In Association for Computational Linguistics (ACL). T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop on Cognitive Computing at NIPS. J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep contextualized word representations. In North American Association for Computational Linguistics (NAACL). J. Phang, T. Fevry, and S. R. Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088. A. 
Radford, K. Narasimhan, T. Salimans, and I. Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. 4921 P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). S. Reddy, D. Chen, and C. D. Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. M. Richardson, C. J. Burges, and E. Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 193–203. M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv. K. Sun, D. Yu, D. Yu, and C. Cardie. 2018. Improving machine reading comprehension with general reading strategies. arXiv preprint arXiv:1810.13441. A. Talmor and J. Berant. 2018a. Repartitioning of the complexwebquestions dataset. arXiv preprint arXiv:1807.09623. A. Talmor and J. Berant. 2018b. Repartitioning of the complexwebquestions dataset. arXiv preprint arXiv:1807.09623. A. Talmor and J. Berant. 2018c. The web as knowledge-base for answering complex questions. In North American Association for Computational Linguistics (NAACL). A. Talmor, M. Geva, and J. Berant. 2017. Evaluating semantic parsing against a simple web-based question answering model. In *SEM. Y. Tay, L. Tuan, S. Hui, and J. Su. 2018. Densely connected attention propagation for reading comprehension. In Advances in Neural Information Processing Systems (NeurIPS). A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. 2017. NewsQA: A machine comprehension dataset. In Workshop on Representation Learning for NLP. A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations (ICLR). W. Wang, M. Yan, and C. Wu. 2018. Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. In Association for Computational Linguistics (ACL). J. Welbl, P. Stenetorp, and S. Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. arXiv preprint arXiv:1710.06481. Y. Xu, X. Liu, Y. Shen, J. Liu, and J. Gao. 2019. Multitask learning with sample re-weighting for machine reading comprehension. In North American Association for Computational Linguistics (NAACL). Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Empirical Methods in Natural Language Processing (EMNLP). D. Yogatama, C. de M. d’Autume, J. Connor, T. Kocisky, M. Chrzanowski, L. Kong, A. Lazaridou, W. Ling, L. Yu, C. Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373.
2019
485
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4922–4931 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4922 Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives 1Yi Tay, 2Shuohang Wang, 3Luu Anh Tuan, 4Jie Fu, 5Minh C. Phan 6Xingdi Yuan, 7Jinfeng Rao∗, 8Siu Cheung Hui, 9Aston Zhang 1,5,8Nanyang Technological University 2Singapore Management University 3MIT CSAIL 4Mila, Polytechnic Montr´eal 6Microsoft Research Montr´eal 7Facebook AI 9Amazon AI [email protected] Abstract This paper tackles the problem of reading comprehension over long narratives where documents easily span over thousands of tokens. We propose a curriculum learning (CL) based Pointer-Generator framework for reading/sampling over large documents, enabling diverse training of the neural model based on the notion of alternating contextual difficulty. This can be interpreted as a form of domain randomization and/or generative pretraining during training. To this end, the usage of the Pointer-Generator softens the requirement of having the answer within the context, enabling us to construct diverse training samples for learning. Additionally, we propose a new Introspective Alignment Layer (IAL), which reasons over decomposed alignments using block-based self-attention. We evaluate our proposed method on the NarrativeQA reading comprehension benchmark, achieving state-of-the-art performance, improving existing baselines by 51% relative improvement on BLEU-4 and 17% relative improvement on Rouge-L. Extensive ablations confirm the effectiveness of our proposed IAL and CL components. 1 Introduction Teaching machines to read and comprehend is a fundamentally interesting and challenging problem in AI research (Hermann et al., 2015; Trischler et al., 2016; Rajpurkar et al., 2016). While there have been considerable and broad improvements in reading and understanding textual snippets, the ability for machines to read/understand complete stories and novels is still in infancy (Koˇcisk`y et al., 2018). The challenge becomes insurmountable in lieu of not only the large context but also the intrinsic challenges of ∗Work done while at University of Maryland. narrative text which arguably requires a larger extent of reasoning. As such, this motivates the inception of relevant, interesting benchmarks such as the NarrativeQA Reading Comprehension challenge1 (Koˇcisk`y et al., 2018). The challenges of having a long context have been traditionally mitigated by a two-step approach - retrieval first and then reading second (Chen et al., 2017; Wang et al., 2018; Lin et al., 2018). This difficulty mirrors the same challenges of open domain question answering, albeit introducing additional difficulties due to the nature of narrative text (stories and retrieved excerpts need to be coherent). While some recent works have proposed going around by training retrieval and reading components end-to-end, this paper follows the traditional paradigm with a slight twist. We train our models to be robust regardless of whatever is retrieved. This is in similar spirit to domain randomization (Tobin et al., 2017). In order to do so, we propose a diverse curriculum learning scheme (Bengio et al., 2009) based on two concepts of difficulty. The first, depends on whether the answer exists in the context (answerability), aims to bridge the gap between training time and inference time retrieval. 
On the other hand, and the second, depends on the size of retrieved documents (coherence and understandability). While conceptually simple, we found that these heuristics help improve performance of the QA model. To the best of our knowledge, we are the first to incorporate these notions of difficulty in QA reading models. All in all, our model tries to learn to generate the answer even if the correct answer does not appear as evidence which acts as a form of generative pretraining during training. As such, this is akin to learning to guess, largely motivated by how 1We tackle the full story setting instead of the summary setting which, inherently, is a much harder task. 4923 humans are able to extrapolate/guess even when given access to a small fragment of a film/story. In this case, we train our model to generate answers, making do with whatever context it was given. To this end, a curriculum learning scheme controls the extent of difficulty of the context given to the model. At this juncture, it would be easy to realize that standard pointer-based reading comprehension models would not adapt well to this scheme, as they fundamentally require the golden label to exist within the context (Wang and Jiang, 2016b; Seo et al., 2016). As such, our overall framework adopts a pointer-generator framework (See et al., 2017) that learns to point and generate, conditioned on not only the context but also the question. This relaxes this condition, enabling us to train our models with diverse views of the same story which is inspired by domain randomization (Tobin et al., 2017). For our particular task at hand, the key idea is that, even if the answer is not found in the context, we learn to generate the answer despite the noisy context. Finally, our method also incorporates a novel Introspective Alignment Layer (IAL). The key idea of the IAL mechanism is to introspect over decomposed alignments using block-style local self-attention. This not only imbues our model with additional reasoning capabilities but enables a finer-grained (and local-globally aware) comparison between soft-aligned representations. All in all, our IAL mechanism can be interpreted as learning a matching over matches. Our Contributions All in all, the prime contributions of this work is summarized as follows: • We propose a curriculum learning based Pointer-Generator model for reading comprehension over narratives (long stories). For the first time, we propose two different notions of difficulty for constructing diverse views of long stories for training. We show that this approach achieves better results than existing models adapted for open-domain question answering. • Our proposed model incorporates an Introspective Alignment Layer (IAL) which uses block-based self-attentive reasoning over decomposed alignments. Ablative experiments show improvements of our IAL layer over the standard usage of vanilla self-attention. • Our proposed framework (IAL-CPG) achieves state-of-the-art performance on the NarrativeQA reading comprehension challenge. On metrics such as BLEU-4 and Rouge-L, we achieve a 17% relative improvement over prior state-of-the-art and a 10 times improvement in terms of BLEU-4 score over BiDAF, a strong span prediction based model. • We share two additional contributions. Firstly, we share negative results on using Reinforcement Learning to improve the quality of generated answers (Paulus et al., 2017; Bahdanau et al., 2016). 
Secondly, we show that the evaluation scheme in NarrativeQA is flawed and models can occasionally generate satisfactory (correct) answers but score zero points during evaluation. 2 Our Proposed Framework This section outlines the components of our proposed architecture. Since our problem is mainly dealing with extremely long sequences, we employ an initial retrieval2 phrase by either using the answer or question as a cue (query for retrieving relevant chunks/excerpts). The retrieval stage is controlled by our curriculum learning process in which the details are deferred to subsequent sections. The overall illustration of this framework is depicted in Figure 1. 2.1 Introspective Alignment Reader This section introduces our proposed Introspective Alignment Reader (IAL-Reader). Input and Context Encoding Our model accepts two inputs, (context C and question Q). Each input is a sequence of words. We pass each sequence into a shared Bidirectional LSTM layer. Hc = BiLSTM(C) , Hq = BiLSTM(Q) where Hc ∈Rℓc×d and Hq ∈Rℓq×d are the hidden representations for C and Q respectively. Introspective Alignment Next, we pass Hc, Hq into an alignment layer. Firstly, we compute a soft attention affinity matrix between Hc and Hq as follows: Eij = F(hc i)⊤F(hq j) (1) 2This is unavoidable since supporting up to 20K-30K words in computational graphs is still not manageable even with top-grade GPUs. 4924 Question Context Easy Hard Story IAL Reader Pointer Generator He lives in Russia Where does john live? 100 200 500 50 Curriculum Reading IR Figure 1: Illustration of our proposed IAL-CPG framework. where hc i is the i-th word in the context and hq j is the j-th word in the question. F(·) is a standard nonlinear transformation function (i.e., F(x) = σ(Wx + b), where σ indicates non-linearity function), and is shared between context and question. E ∈Rℓc×ℓq is the soft matching matrix. To learn alignments between context and question, we compute: A = Softmax(E) Hq where A ∈Rℓc×d is the aligned representation of Hc. Reasoning over Alignments Next, to reason over alignments, we compute a self-attentive reasoning over decomposed alignments: Gij = Fs([Ai; Hc i ; Ai −Hc i , Ai ⊙Hc i ])⊤· Fs([Aj; Hc j; Aj −Hc j, Aj ⊙Hc j]) (2) where square brackets [·; ·] denote vector concatenation, Fs(·) is another nonlinear transformation layer which projects onto 4d dimensions. i is the positional index of each word token. Intuitively, Ai comprises of softly aligned question representations with respect to the context. The usage of the Hadamard and Subtraction operators helps to enhance the degree of comparison/matching. Hence, by including an additional local reasoning over these enhanced alignment vectors, our model can be interpreted as introspecting over alignment matches. Local Block-based Self-Attention Since ℓc is large in our case (easily ≥2000), computing the above Equation (2) may become computationally prohibitive. As such, we compute the scoring function for all cases where |i−j| ≤b, in which, b is a predefined hyperparameter and also the block size. Intuitively, the initial alignment layer (i.e., Equation 1) already considers a global view. As such, this self-attention layer can be considered as a local-view perspective, confining the affinity matrix computation to a local window of b. Finally, to compute the introspective alignment representation, we compute: B = Softmax(G) [A; Hc; A −Hc; A ⊙Hc] where Bℓc×4d is the introspective aligned representation of A. 
Finally, we use another d dimensional BiLSTM layer to aggregate the aligned representations: Y = BiLSTM([B; A; Hc; A−Hc; A⊙Hc]) (3) where Y ∈Rℓc×2d is the final contextual representation of context C. 2.2 Pointer-Generator Decoder Motivated by recent, seminal work in neural summarization, our model adopts a pointer-generator architecture (See et al., 2017). Given Y (the question infused contextual representation), we learn to either generate a word from vocabulary, or point to a word from the context. The decision to generate or point is controlled by an additive blend of several components such as the previous decoder state and/or question representation. The pointer-generator decoder in our framework uses an LSTM decoder3 with a cell state ct ∈Rn and hidden state vector ht ∈Rn. At 3To initialize the LSTM, we use an additional projection layer over the mean pooled representation of Y similar to (Xu et al., 2015). 4925 each decoding time step t, we compute an attention over Y as follows: gi = tanh(Fa(yi) + Fh(ht−1) + Fq(Hq)), (4) ai = g⊤ i wa , yt = ℓc X i=0 ai · yi (5) where Fa(·) and Fh(·) are nonlinear transformations projecting to n dimensions. i is the position index of the input sequence. Fq(·) is an additional attentive pooling operator over the question representation Hq (after the context encoding layer). The semantics of the question may be lost after the alignment based encoding. As such, this enables us to revisit the question representation to control the decoder. yt ∈Rn is the context representation at decoding time step t and a ∈Rℓc is an attention distribution over the context words which is analogous to the final probability distributions that exist in typical span prediction models. Next, we compute the next hidden state via: ht, ct = LSTM([yt; wt−1], ht−1, ct−1) where wt−1 is the (t −1)th token in the ground truth answer (teacher forcing). To learn to generate, we compute: vt = Wv(ht) + bv (6) where vt ∈R|Vg|, Vg is the global vocabulary size. The goal of the pointer-generator decoder is to choose between the abstractive distribution vt over the vocabulary (see Equation 6) and the extractive distribution at (see Equation 5) over the context text tokens. To this end, we learn a scalar switch pt ∈R: pt = sigmoid(Fpc(ct) + Fph(ht) + Fpy(yt)) where Fpc(·), Fph(·), Fpy(·) are linear transformation layers (without bias) which project ct, ht and yt into scalar values. To control the blend between the attention context and the generated words, we use a linear interpolation between at and vt. The predicted word wt at time step t is therefore: wt = argmax(pt · at + (1 −pt)vt) Note that we scale (append and prepend) at and vt with zeros to make them the same length (i.e., ℓc+ |Vg|). The LSTM decoder runs for a predefined fix answer length. During inference, we simply use greedy decoding to generate the output answer. 2.3 Curriculum Reading A key advantage of the pointer-generator is that it allows us to generate answers even if the answers do not exist in the context. This also enables us to explore multiple (diverse) views of contexts to train our model. However, to this end, we must be able to identify effectively the most useful retrieved context evidences for the training. For that purpose, we propose to use a diverse curriculum learning scheme which is based on two intuitive notions of difficulty: Answerability - It is regarded as common practice to retrieve excerpts based by using the correct answer as a cue (during training). 
This establishes an additional gap between training and inference since during inference, correct answers are not available. This measure aims to bridge the gap between question and answer (as a query prompt for passage retrieval). In this case, we consider the set of documents retrieved based on questions as the hard setting, H. Conversely, the set of retrieved documents using answers is regarded as the easy setting, E. Understandability - This aspect controls how understandable the overall retrieved documents are as a whole. The key idea of this setting is to control the paragraph/chunk size. Intuitively, a small paragraph/chunk size would enable more relevant components to be retrieved from the document. However, its understandability might be affected if paragraph/chunk size is too small. Conversely, a larger chunk size would be easier to be understood. To control the level of understandability, we pre-define several options of chunk sizes (e.g., {50, 100, 200, 500}) which will be swapped and determined during training. To combine the two measures described above, we comprise an easy-hard set pair for each chunk size, i.e., {Ek, Hk}, where: k ∈{50, 100, 200, 500}, En ←F(corpus, answer, n), Hn ←F(corpus, question, n) (7) F(.) is an arbitrary ranking function which may or may not be parameterized, and n is the size of each retrieved chunk. Two-layer Curriculum Reading Algorithm. As our model utilizes two above measures of difficulty, there lies a question on which whether we 4926 Algorithm 1 Curriculum Reading 1: chunk list ←{50, 100, 200, 500} 2: n ←sample i in chunk list 3: chunk list ←chunk list \ {n} 4: En ←F(Corpus, Answers, n) 5: Hn ←F(Corpus, Questions, n) 6: D ←En ▷initial training set 7: count ←0 ▷number of swaps within a chunk size 8: for i ←1 to numEpochs do 9: Train(D) 10: score ←Evaluate(Dev set) 11: if score < bestDev then 12: if count <= 1/δ then 13: D ←Swap(D, En, Hn, δ) ▷Swap δ percent of easy set in D with the hard set 14: count ←count + 1 15: else 16: Repeat step 3 to 8 ▷Replace training set with new easy set of another chunk size 17: else 18: bestDev = score should swap one measure at a time or swap both whenever the model meets the failure criterion. In our case, we find that prioritizing answerability over understandability is a better choice. More concretely, at the beginning of the training, we start with an easy set Ek of a random chunk size k. When the failure criterion is met (e.g. the model score does not improve on the validation set), we randomly swap a small percent δ (e.g., 5% in our experiments4) of the easy set Ek with the hard set Hk within its own chunk size group k to improve the answerability. In this case, after 1 δ failures, the model runs out of easy set Ek and is completely based on the hard set Hk. At this junction, we swap the model for understandability, replacing the training set with a completely new easy set El of another chunk size l, and repeat the above process. The formal description of our proposed curriculum reading is introduced in Algorithm 1. 3 Experiments We conduct our experiments on the NarrativeQA reading comprehension challenge. 3.1 Experimental Setup This section introduces our experimental setups. Model Hyperparameters We implement our model in Tensorflow. Our model is trained with Adadelta (Zeiler, 2012). The initial learning rate is tuned amongst {0.1, 0.2, 0.5}. The L2 regularization is tuned amongst {10−8, 10−6, 10−5}. The 4In early experiments, we found that 5% −10% works best. 
size of the LSTM at the encoder layer is set to 128 and the decoder size is set to 256. The block size b for the Introspective Alignment Layer is set to 200. We initialize our word embeddings with pretrained GloVe vectors (Pennington et al., 2014) which are not updated5 during training. Implementation Details Text is lowercased and tokenized with NLTK6. For retrieval of paragraphs, we use the cosine similarity between TF-IDF vector representations. TF-IDF representations are vectorized by Scikit-Learn using an N-gram range of [1, 3] with stopword filtering. The maximum context size is tuned amongst {2000, 4000} and reported accordingly. The paragraph/chunk size is dynamic and configured amongst {50, 100, 200, 500}. The retrieved excerpts are retrieved based on similarity match between context chunks and answer or question depending on the curriculum learning scheme. We tune the maximum answer length amongst {6, 8, 12} and the maximum question length is set to 30. Since two answers are provided for each question, we train on both sets of answers. During construction of the golden labels, first perform an n-gram search of the answer in the context. The largest n-gram match is allocated indices belonging to the context (i.e., [1,ℓc]). For the remainder words, stopwords are automatically allocated indices in the global vocabulary and non-stopwords are assigned context indices. If an answer word is not found, it is ignored. To construct the global vocabulary for the pointer generator decoder and avoid story-specific words, we use words that appear in at least 10 stories. Evaluation During evaluation, we (1) remove the full stop at the end of answers and (2) lowercase both answers. We use the BLEU, Rouge and METEOR scorers provided at https:// github.com/tylin/coco-caption. Baselines As baselines, we compare the proposed model with reported results in (Koˇcisk`y et al., 2018).. Additionally, we include several baselines which we implement by ourselves. This is in the spirit of providing better (and fairer) com5In our early experiments, we also masked entities following the original work (Koˇcisk`y et al., 2018), however, we did not observe obvious difference in performance. This is probably because we do not update word embeddings during training. 6https://www.nltk.org/ 4927 Dev Set Test Set Model ℓ BLEU-1 BLEU-4 Meteor Rouge BLEU-1 BLEU-4 Meteor Rouge IR (BLEU) 6.73 0.30 3.58 6.73 6.52 0.34 3.35 6.45 IR (ROUGE) 5.78 0.25 3.71 6.36 5.69 0.32 3.64 6.26 IR (Cosine) 6.40 0.28 3.54 6.50 6.33 0.29 3.28 6.43 BiDAF 5.82 0.22 3.84 6.33 5.68 0.25 3.72 6.22 ASR 200 16.95 1.26 3.84 1.12 16.08 1.08 3.56 11.94 ASR 400 18.54 0.00 4.2 13.5 17.76 1.10 4.01 12.83 ASR 1K 18.91 1.37 4.48 14.47 18.36 1.64 4.24 13.4 ASR 2K 20.00 2.23 4.45 14.47 19.09 1.81 4.29 14.03 ASR 4K 19.79 1.79 4.60 14.86 19.06 2.11 4.37 14.02 ASR (Ours) 4K 12.03 1.06 3.10 8.87 11.26 0.65 2.66 8.68 R3 16.40 0.50 3.52 11.40 15.70 0.49 3.47 11.90 RNET-PG 4K 17.74 0.00 3.95 14.56 16.89 0.00 3.84 14.35 RNET-CPG 4K 19.71 2.05 4.91 15.05 19.27 1.45 4.87 15.50 IAL-CPG 4K 23.31 2.70 5.68 17.33 22.92 2.47 5.59 17.67 Rel. Gain +31% +51% +23% +17% +20% +17% +28% +26% Table 1: Results on NarrativeQA reading comprehension dataset (Full story setting). Results are reported from (Koˇcisk`y et al., 2018) .The numbers besides the model name denote the total context size. Rel. Gain reports the relative improvement of our model and the best baseline reported in (Koˇcisk`y et al., 2018) on a specific context size setting. parisons. 
The compared baselines are listed below: • Attention Sum Reader (ASR) (Kadlec et al., 2016) is a simple baseline for reading comprehension. Aside from our the results on (Koˇcisk`y et al., 2018), we report our own implementation of the ASR model. Our implementation follows (Koˇcisk`y et al., 2018) closely. • Reinforced Reader Ranker (R3) (Wang et al., 2018) is a state-of-the-art model for open domain question answering, utilizing reinforcement learning to select relevant passages to train the reading comprehension model. Our objective is to get a sense of how well do open-domain models work on understanding narratives. • RNET + PG / CPG (Wang et al., 2017b) is a strong, competitive model for paragraph level reading comprehension. We replace the span7 prediction layer in RNET with a pointer generator (PG) model with the exact setup as our model. We also investigate equipping RNET + PG with our curriculum 7The performance of the RNET + span predictor is similar to the BiDAF model reported in (Koˇcisk`y et al., 2018). learning mechanism (curriculum pointer generator). 3.2 Experimental Results Table 1 reports the results of our approach on the NarrativeQA benchmark. Our approach achieves state-of-the-art results as compared to prior work (Koˇcisk`y et al., 2018). When compared to the best ASR model in (Koˇcisk`y et al., 2018), the relative improvement across all metrics are generally high, ranging from +17% to 51%. The absolute improvements range from approximately +1% to +3%. Pertaining to the models benchmarked by us, we found that our re-implementation of ASR (Ours) leaves a lot to be desired. Consequently, our proposed IAL-CPG model almost doubles the score on all metrics compared to ASR (Ours). The R3 model, which was proposed primarily for open-domain question answering does better than ASR (Ours) but still fall shorts. Our RNET-PG model performs slightly better than R3 but fails to get a score on BLEU-4. Finally, RNET-CPG matches the state-of-the-art performance of (Koˇcisk`y et al., 2018). However, we note that there might be distinct implementation differences8 with the primary retrieval mechanism 8This is made clear from how our ASR model performs 4928 and environment/preprocessing setup. A good fair comparison to observe the effect of our curricum reading is the improvement between RNET-PG and RNET-CPG. 3.3 Ablation Study In this section, we provide an extensive ablation study on all the major components and features of our proposed model. Table 2 reports results of our ablation study. Attention ablation In ablations (1-3), we investigate the effectiveness of the self-attention layer. In (1), we remove the entire IAL layer, piping the context-query layer directly to the subsequent layer. In (2), we replace block-based self-attention with the regular self-attention. Note that the batch size is kept extremely small (e.g., 2), to cope with the memory requirements. In (3), we remove the multiplicative and subtractive features in the IAL layer. Results show that replacing the block-based self-attention with regular self-attention hurts performance the most. However, this may be due to the requirement of reducing the batch size significantly. Removing the IAL layer only sees a considerable drop while removing the enhancement also reduces performance considerably. Curriculum ablation In ablations (4-8), we investigate various settings pertaining to curriculum learning. In (4), we remove the pointer generator (PG) completely. 
Consequently, there is also no curriculum reading in this setting. Performance drops significantly in this setting and demonstrates that the pointer generator is completely essential to good performance. In (5-6), we remove one component from our curriculum reading mechanism. Results show that the answerabiity heuristic is more important than the understandability heuristic. In (7-8), we focus on non curriculum approaches training on the easy or hard set only. It is surprising that training on the hard set alone gives considerablely decent performance which is comparable to the easy set. However, varying them in a curriculum setting has significant benefits. RL ablation In ablation (9), we investigated techniques that pass the BLEU-score back as a reward for the model and train the model jointly using Reinforcement learning. We follow the setting much worse than (Koˇcisk`y et al., 2018). We spend a good amount of time trying to reproduce the results of ASR on the original paper. of (Paulus et al., 2017), using the mixed training objective and setting λ to 0.05. We investigated using BLEU-1,BLEU-4 and Rouge-L (and combinations of these) as a reward for our model along with varying λ rates. Results in Table 2 reports the best result we obtained. We found that while RL does not significantly harm the performance of the model, there seem to be no significant benefit in using RL for generating answers, as opposed to other sequence transduction problems (Bahdanau et al., 2016; Paulus et al., 2017). Understandability ablation From ablations (10-16), we study the effect of understandability and alternating paragraph sizes. We find that generally starting from a smaller paragraph and moving upwards performs better and moving the reverse direction may have adverse effects on performance. This is made evident by ablations (10-11). We also note that a curriculum approach beats a static approach often. 3.4 Qualitative Error Analysis Table 3 provides some examples of the output of our best model. First, we discuss some unfortunate problems with the evaluation in generation based QA. In examples (1), the model predicts a semantically correct answer but gets no credit due to a different form. In (2), no credit is given for wordlevel evaluation. In (3), the annotators provide a more general answer and therefore, a highly specific answer (e.g., moscow) do not get any credit. Second, we observe that our model is occasionally able to get the correct (exact match) answer. This is shown in example (4) and (7). However, there are frequent inability to generate phrases that make sense, even though it seems like the model is trudging along the right direction (e.g., “to wants to be a love of john” versus “because he wants her to have the baby” and “in the york school” versus “east harlem in new york”). In (9), we also note a partially correct anwer, even though it fails to realize that the question is about a male and generates “she is a naval”. 4 Related Work The existing work on open domain QA (Chen et al., 2017) has distinct similarities with our problem, largely owing to the overwhelming large corpus that a machine reader has to reason over. In recent years, a multitude of techniques have been developed. 
(Wang et al., 2018) proposed reinforce4929 Ablation BLEU-1 BLEU-4 Meteor Rouge Original Full Setting 23.31 2.70 5.68 17.33 (1) Remove IAL layer 18.93 1.94 4.52 14.51 (2) Replace regular Self-Attention 19.61 0.96 4.38 15.24 (3) Remove Enhancement 20.25 1.76 4.92 15.14 (4) Remove PG + CR 15.30 0.91 3.85 11.36 (5) Remove CR (understandability) 20.13 2.30 4.94 16.96 (6) Remove CR (answerability) 20.13 1.82 4.92 15.77 (7) Train Easy Only 20.75 1.52 4.65 15.42 (8) Train Hard Only 19.18 1.49 4.60 14.19 (9) Add RL 21.85 2.70 5.31 16.73 (10) 50 ) 100 ) 200 23.31 2.70 5.68 17.33 (11) 50 ) 100 ) 200 ) 500 21.07 2.86 5.33 16.78 (12) 100 ) 200 ) 500 ) 50 20.18 2.60 5.50 18.14 (13) 500 ) 50 ) 100 ) 200 20.95 2.51 5.41 17.05 (14) 500 ) 200 ) 100 ) 50 17.13 2.38 4.60 15.56 (15) 50 (static) 20.91 2.57 5.35 18.78 (16) 500 (static) 19.36 2.45 4.94 16.00 Table 2: Ablation results on NarrativeQA development set. (1-3) are architectural ablations. (4-8) are curriculum reading based ablations. (9) investigates RL-based generation. (10-16) explores the understandability/paragraph size heuristic. Note that (10) was the optimal scheme reported in the original setting. Moreover, more permutations were tested but only representative example are reported due to lack of space. Question Model Answer Ground Truth (1) how many phases did the court compliment competition have? two 2 (2) who suffers from a crack addiction? dick dicky (3) where did john and sophia go to from the airport? moscow russia (4) what country did nadia’s cousin and friend visit her from? russia russia (5) why is nadia kidnapped by alexei? to wants be a love of john because he now wants her to have the baby (6) who does mary marry? charles who is her charles (7) what instrument does roberta guaspari play? violin violin (8) where is the school located where roberta takes a position as a substitute violin teacher? in the york school east harlem in new york city (9) what is the profession of roberta’s husband? she is a naval he is in the us navy Table 3: Qualitative analysis on NarrativeQA development set. ment learning to select passages using the reader as the reward. (Min et al., 2018) proposed ranking the minimal context required to answer the question. (Clark and Gardner, 2017) proposed shared norm method for predicting spans in the multiparagraph reading comprehension setting. (Lin et al., 2018) proposed ranking and de-noising techniques. (Wang et al., 2017a) proposed evidence aggregation based answer re-ranking. Most techniques focused on constructing a conducive and less noisy context for the neural reader. Our work provides the first evidence of diverse sampling for training neural reading comprehension models. Our work draws inspiration from curriculum learning (CL) (Bengio et al., 2009). One key difficulty in CL is to determine which samples are easy or hard. Self-paced learning (Jiang et al., 2015) is a recently popular form of curriculum learning that treats this issue as an optimization problem. To this end, (Sachan and Xing, 2016) applies selfpaced learning for neural question answering. Automatic curriculum learning (Graves et al., 2017), similarly, extracts signals from the learning process to infer progress. State-of-the-art neural question answering models are mainly based on cross-sentence attention (Seo et al., 2016; Wang and Jiang, 2016b; Xiong et al., 2016; Tay et al., 2018c). Self-attention (Vaswani et al., 2017; Wang et al., 2017b) has also been popular for reading comprehension (Wang et al., 2018; Clark and Gardner, 2017). 
However, its memory complexity makes it a chal4930 lenge for reading long context. Notably, the truncated/summary setting of the NarrativeQA benchmark have been attempted recently (Tay et al., 2018c,b; Hu et al., 2018; Tay et al., 2018a). However, this summary setting bypasses the difficulties of long context reading comprehension, reverting to the more familiar RC setup. While most of the prior work in this area has mainly focused on span prediction models (Wang and Jiang, 2016b) and/or multiple choice QA models (Wang and Jiang, 2016a), there have been recent interest in generation based QA (Tan et al., 2017). S-NET (Tan et al., 2017) proposed a twostage retrieve then generate framework. Flexible neural mechanisms that learn to point and/or generate have been also popular across many NLP tasks. Our model incorporates PointerGenerator networks (See et al., 2017) which learns to copy or generate new words within the context of neural summarization. Prior to Pointer Generators, CopyNet (Gu et al., 2016) incorporates a copy mechanism for sequence to sequence learning. Pointer generators have also been recently adopted for learning a universal multi-task architecture for NLP (McCann et al., 2018). 5 Conclusion We proposed curriculum learning based Pointergenerator networks for reading long narratives. Our proposed IAL-CPG model achieves stateof-the-art performance on the challenging NarrativeQA benchmark. We show that sub-sampling diverse views of a story and training them with a curriculum scheme is potentially more effective than techniques designed for open-domain question answering. We conduct extensive ablation studies and qualitative analysis, shedding light on the task at hand. 6 Acknowledgements The authors would like to thank the anonymous reviewers of ACL 2019 for their comments and time to review our paper. References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051. Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723. Alex Graves, Marc G Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. 2017. Automated curriculum learning for neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1311–1320. JMLR. org. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Minghao Hu, Yuxing Peng, Furu Wei, Zhen Huang, Dongsheng Li, Nan Yang, and Ming Zhou. 2018. Attention-guided answer distillation for machine reading comprehension. arXiv preprint arXiv:1808.07644. Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. 2015. Self-paced curriculum learning. 
In Twenty-Ninth AAAI Conference on Artificial Intelligence. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547. Tom´aˇs Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´aabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association of Computational Linguistics, 6:317–328. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised opendomain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1736–1745. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. 4931 Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. arXiv preprint arXiv:1805.08092. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Mrinmaya Sachan and Eric Xing. 2016. Easy questions first? a case study on curriculum learning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 453–463. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Chuanqi Tan, Furu Wei, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou. 2017. S-net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815. Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018a. Multi-granular sequence encoding via dilated compositional units for reading comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2141– 2151, Brussels, Belgium. Association for Computational Linguistics. Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018b. Recurrently controlled recurrent networks. In Advances in Neural Information Processing Systems, pages 4731–4743. Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018c. Densely connected attention propagation for reading comprehension. In Advances in Neural Information Processing Systems, pages 4906–4917. Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. 2017. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 23–30. IEEE. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. 
arXiv preprint arXiv:1611.09830. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Shuohang Wang and Jing Jiang. 2016a. A compareaggregate model for matching text sequences. arXiv preprint arXiv:1611.01747. Shuohang Wang and Jing Jiang. 2016b. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R 3: Reinforced ranker-reader for open-domain question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2017a. Evidence aggregation for answer re-ranking in open-domain question answering. arXiv preprint arXiv:1711.05116. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017b. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4932 Explain Yourself! Leveraging Language Models for Commonsense Reasoning Nazneen Fatema Rajani Bryan McCann Caiming Xiong Richard Socher Salesforce Research Palo Alto, CA, 94301 {nazneen.rajani,bmccann,cxiong,rsocher}@salesforce.com Abstract Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of worldknowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. CAGE improves the state-of-the-art by 10% on the challenging CommonsenseQA task. We further study commonsense reasoning in DNNs using both human and auto-generated explanations including transfer to out-of-domain tasks. Empirical results indicate that we can effectively leverage language models for commonsense reasoning. 1 Introduction Commonsense reasoning is a challenging task for modern machine learning methods (Zhong et al., 2018; Talmor et al., 2019). Explanations are a way to verbalize the reasoning that the models learn during training. Common sense Question Answering (CQA) is a multiple-choice question answering dataset proposed for developing natural language processing (NLP) models with commonssense reasoning capabilities (Talmor et al., 2019). Although these efforts have led to progress, it is still unclear how these models perform reasoning and to what extent that reasoning is based on world knowledge. We collect human explanations for commonsense reasoning built on top of CQA and introduce them as Common Sense Explanations (CoS-E)1. CoS-E contains human explanations in 1https://github.com/nazneenrajani/CoS-E Question: While eating a hamburger with friends, what are people trying to do? Choices: have fun, tasty, or indigestion CoS-E: Usually a hamburger with friends indicates a good time. Question: After getting drunk people couldn’t understand him,it was because of his what? Choices: lower standards,slurred speech, or falling down CoS-E: People who are drunk have difficulty speaking. Question: People do what during their time off from work? Choices: take trips, brow shorter, or become hysterical CoS-E: People usually do something relaxing, such as taking trips,when they don’t need to work. Table 1: Examples from our CoS-E dataset. the form of both open-ended natural language explanations as well as highlighted span annotations that represent words selected by humans as important for predicting the right answer (see Table 1). Talmor et al. (2019) show that using Google search to extract context from top 100 result snippets for each of the question and answer choices does not help much in improving the accuracy on CQA trained using even the state-of-the-art reading comprehension model BiDAF++ (Seo et al., 2017) augmented with a self-attention layer and ELMo representations (Peters et al., 2018). In contrast, we leverage a pretrained language model to generate explanations that are useful for commonsense reasoning. 
We propose Commonsense Auto-Generated Explanations (CAGE) as a framework for generating explanations for CQA. We break down the task of commonsense reasoning into two phases. In the first phase, we provide a CQA example alongside the corresponding CoS-E explanation to a language model. The language model conditions on the question and answer choices from the example and is trained to generate the CoS-E explanation. In the second phase, we use the language model 4933 … (a) One time-step of training a CAGE language model to generate explanations from CoS-E. It is conditioned on the question tokens Q concatenated with the answer choice tokens A1, A2, A3 and previously generated tokens E1, . . . , Ei−1. It is trained to generate token Ei. … CSRM (b) A trained CAGE language model is used to generate explanations for a downstream commonsense reasoning model (CSRM), which itself predicts one of the answer choices. Figure 1: An overview of CAGE trained on CoS-E and CQA. to generate explanations for each example in the training and validation sets of CQA. These CAGE explanations are provided to a second commonsense reasoning model by concatenating it to the end of the original question, answer choices, and output of the language model. The two-phase CAGE framework obtains state-of-the-art results outperforming the best reported baseline by 10% and also produces explanations to justify its predictions. Figure 1 shows an overview of our proposed approach. In summary, we introduce a new Common Sense Explanations (CoS-E) dataset to study neural commonsense reasoning and provide a new method, CAGE for automatically generating explanations that achieve a state-of-the-art accuracy of approximately 65% on CQA v1.0. We demonstrate explanation transfer on two out-of-domain datasets. Note that before our final submission, the organizers released a more challenging v1.11 of CQA with 5 answer choices instead of 3 and so we also included the new version in our results and discussions. 2 Background and Related Work Commonsense reasoning Datasets that require models to learn to predict relations between situations or events in natural language have been introduced in the recent past. The Story Cloze (also referred to as ROC Stories) involves predicting the correct story ending from a set of plausible endings (Mostafazadeh et al., 2016) while the Situations with Adversarial Generations (SWAG) involves predicting the next scene based on an initial event (Zellers et al., 2018). Language Modeling based techniques such as the GPT and BERT models get human-level performance on these datasets (Radford et al., 2018; Devlin et al., 2019). They have been less successful on tasks that require clear understanding of how pronouns resolve between sentences and how that interacts with world knowledge. For example, the Winograd Schemas (Winograd, 1972) and challenges derived from that format (Levesque et al., 2012; McCann et al., 2018; Wang et al., 2018) have proven difficult for even the most modern machine learning methods (Trinh and Le, 2018) to achieve near-human performance, but the emphasis on pronoun resolution in those challenges leaves room for exploration of other kinds of commonsense reasoning. CQA is a new dataset that consists of 9500 questions with one correct answer and two distractor answers (Talmor et al., 2019). 
The authors claim that because all the answer choices are drawn from the same source concept, the dataset requires models to actually infer from the question rather than take advantage of distributional biases. We, however, observed that the current state of this dataset has gender disparity with higher proportion of feminine pronouns used in negative context. The authors show that the state-of-the-art language models perform very poorly compared to human participants on their dataset. Although, CQA introduces a benchmark for evaluating commonsense reasoning capabilities of models, it is still unclear how and to what extent do models actually do common-sense reasoning. CoS-E builds on top of their benchmark, on the other hand, provides data in the form of explanations that can be used to study and analyze as well as evaluate a model’s reasoning capabilities. Natural language explanations Lei et al. (2016) proposed an approach for rationale generation for sentiment analysis by highlighting complete phrases in the input text that by itself is sufficient to predict the desired output. Humangenerated natural language explanations for classification data have been used in the past to train a semantic parser that in turn generates more noisy 4934 labeled data which can used to train a classifier (Hancock et al., 2018). Camburu et al. (2018) generate explanations and predictions for the natural language inference problem (Camburu et al., 2018). However, the authors report that interpretability comes at the cost of loss in performance on the popular Stanford Natural Language Inference (Bowman et al., 2015) dataset. We find that, unlike for e-SNLI, explanations for CQA lead to improved performance in what Camburu et al. (2018) would call the explain-predict setting. In the multi-modal setting, Rajani and Mooney (2018) showed that visual explanations can be leveraged to improve performance of VQA (Antol et al., 2015) and that an ensemble explanation is significantly better than individual explanations using both automated and human evaluations (Rajani and Mooney, 2017). Knowledge Transfer in NLP Natural language processing has often relied on the transfer of world-knowledge through pretrained word vectors like Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). Contextualized word vectors (McCann et al., 2017; Peters et al., 2018) refined these representations for particular inputs by using different forms of general encoding. Language models trained from scratch on large amounts of data have made groundbreaking success in this direction by carefully finetuning for specific tasks (Dai and Le, 2015; Radford et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019). These models have the advantage that only a few parameters need to be learned from scratch and thus perform surprisingly well even on small amounts of supervised data. Fine-tuned language models do not however work as well for directly predicting answers for CQA (Talmor et al., 2019). In our work, we show how these finetuned language models are more effective when leveraged to generate explanations and empirically prove that they also linguistically capture common sense. 3 Common Sense Explanations (CoS-E) We used Amazon Mechanical Turk (MTurk) to collect explanations for our Common Sense Explanations (CoS-E) dataset. The CQA dataset consists of two splits – the question token split and the random split. 
Our CoS-E dataset and all our experiments use the more difficult random split, which is the main evaluation split according to Tal0 20 40 60 80 100 Percent of Examples Question Trigram Question Bigram Answer or Distractor Distractor Answer 38.1 60.0 58.5 35.0 30.0 Figure 2: Analysis of the CoS-E v1.0 dataset. Percent of the dataset that contains the answer, a distractor, either, at least one bigram from the question, and at least one trigram from the question. mor et al. (2019). We also release CoS-E for CQA v1.11. Human participants are given the question and answer choices along with the ground-truth answer choice. Turkers are prompted with the following question: “Why is the predicted output the most appropriate answer?” Annotators were instructed to highlight relevant words in the question that justifies the ground-truth answer choice and to provide a brief open-ended explanation based on the highlighted justification could serve as the commonsense reasoning behind the question. We collected these explanations for the CQA trainrandom-split and dev-random-split, which have a size of 7610 and 950 for v1.0 and 9741 and 1221 for v1.11 respectively. Table 1 shows a random sample of examples from our CoS-E dataset with both free-form explanations and highlighted text. From here on, we refer to the highlighted words as CoS-E-selected and the free-form explanation as CoS-E-open-ended. In MTurk, it is difficult to control the quality of open-ended annotations. So, we do some inbrowser checks to avoid obviously bad explanations. Annotators cannot move forward if they do not highlight any relevant words in the question or if the length of explanations is less than 4 words. We also check that the explanation is not a substring of the question or the answer choices without any other extra words. We collect these explanations from only one annotator per example, so we also perform some post-collection checks to catch examples that are not caught by our previous filters. We filter out explanations that could be classified as a template. For example, explanations of the form “<answer> is the only option that is [correct|obvious]” are deleted and then reannotated. Figure 2 shows the distribution of explanations collected in the CoS-E v1.0 dataset. 58% of expla4935 nations from CoS-E contain the ground truth, but the effectiveness of CoS-E is not constrained only to those examples. Our model obtains state-of-theart results by using CoS-E only during training. Empirical results show that even when using only those explanations that do not have any word overlap with any of the answer choices, performance exceeds that of baselines that do not use CoS-E at all. We also observed that a significant proportion of the distractor choices are also present in the CoS-E dataset and on further analysis we found that for those examples, annotators resorted to explaining by eliminating the wrong choices. This indicates that it is difficult even for humans to reason about many of the examples in CQA. Because CoS-E uses crowd-sourcing, it also adds diversity of perspective and in particular diverse reasoning on world knowledge to the CQA dataset. Even though many explanations remain noisy after quality-control checks, we find that they are of sufficient quality to train a language model that generates commonsense reasoning. We refer to Section 5 for more details on empirical results and ablation analysis on CoS-E. 4 Algorithm We present Commonsense Auto-Generated Explanations (CAGE) and apply it to the CQA task. 
CAGE explanations are generated by a language model and are used as supplementary inputs to a classification model. Each example in CQA consists of a question q, three answer choices c0, c1, c2, and a labeled answer a. Our CoS-E dataset adds a human explanation e_h for why a is the most appropriate choice. The output of CAGE is a language-model-generated explanation e that is trained to be close to e_h.

4.1 Commonsense Auto-Generated Explanations (CAGE)

In order to supply CAGE to a classification model, we fine-tune a language model (LM) to generate explanations from our CoS-E dataset. Our LM is the large, pre-trained OpenAI GPT (Radford et al., 2018), which is a multi-layer transformer (Vaswani et al., 2017) decoder. GPT is fine-tuned on the combination of the CQA and CoS-E datasets, as shown in the left half of Figure 1. We explore explanation generation in two settings: 1) explain-and-then-predict (reasoning) (Figure 1) and 2) predict-and-then-explain (rationalization).

Reasoning This is our main approach; here, the LM is fine-tuned conditioned on the question, the answer choices and the human-generated explanation, but not on the actual label. So, the input context during training is defined as follows:

C_RE = "q, c0, c1, or c2? commonsense says "

The model is trained to generate explanations e according to a conditional language modeling objective. The objective is to maximize

Σ_i log P(e_i | e_{i−k}, ..., e_{i−1}, C_RE; Θ)

where k is the size of the context window (in our case k is always greater than the length of e, so that the entire explanation is within the context). The conditional probability P is modeled by a neural network with parameters Θ, conditioned on C_RE and the previous explanation tokens. We call this kind of explanation reasoning because the explanations can be automatically generated during inference to provide additional context for commonsense question answering. In Section 5, we show that this approach outperforms the reported state of the art on CQA by 10%. For completeness, we also experimented with the reverse of this approach, wherein the model first makes the prediction and then generates an explanation based on that label; we call this rationalization and discuss it below.

Rationalization In rationalization, the LM conditions on the predicted labels along with the input to generate post-hoc rationalizations. So, during the fine-tuning step, the input context contains the output label and is constructed as follows:

C_RA = "q, c0, c1, or c2? a because "

The training objective for the LM in rationalization is similar to that in reasoning, except that in this case the model has access to the ground-truth labels for the input questions during training. Because the language model is conditioned on the predicted label, the explanations cannot be considered common-sense reasoning. Instead, they offer a rationalization that makes the model more accessible and interpretable. We find that this approach outperforms the current best model by 6% and also produces surprisingly good-quality explanations, as discussed in Section 5.

For CAGE, we generate sequences of maximum length 20, use a batch size of 36, and train for a maximum of 10 epochs, selecting the best model based on validation BLEU and perplexity scores. The learning rate was set to 1e−6, warmed up linearly with proportion 0.002, and with weight decay 0.01.
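To make the conditioning scheme concrete, the sketch below shows one way the reasoning context C_RE and the masked conditional LM objective could be implemented in PyTorch. This is not the authors' released code: the model interface, the use of -100 as an ignore index, and the whitespace handling in the context string are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def build_reasoning_context(question, choices):
    # C_RE = "q, c0, c1, or c2? commonsense says "
    c0, c1, c2 = choices
    return f"{question} {c0}, {c1}, or {c2}? commonsense says "

def conditional_lm_loss(lm, context_ids, explanation_ids):
    """Negative of the objective sum_i log P(e_i | e_<i, C_RE; Theta).

    `lm` is assumed to be any left-to-right language model that maps token
    ids of shape (batch, seq_len) to next-token logits of shape
    (batch, seq_len, vocab).
    """
    input_ids = torch.cat([context_ids, explanation_ids], dim=1)
    logits = lm(input_ids)                      # (batch, seq_len, vocab)
    targets = input_ids[:, 1:].clone()          # next-token targets
    ctx_len = context_ids.size(1)
    # Mask positions whose target is still a context token, so that only the
    # explanation tokens e_1 ... e_n contribute to the loss.
    targets[:, : ctx_len - 1] = -100
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=-100,
    )
```

For rationalization, the same loss applies with the context string replaced by "q, c0, c1, or c2? a because "; at inference time, the fine-tuned LM decodes the explanation from the context alone, up to the 20-token limit mentioned above.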
4.2 Commonsense Predictions with Explanations Given either a human explanation from CoS-E or reasoning from a language model, we can then learn to perform predictions on the CQA task. For the classification module of our proposed approach, we adopt the widely popular BERT model (Devlin et al., 2019) which we refer to as just BERT. BERT can be fine-tuned for multiple choice question answering by adding a simple binary classifier that takes as input the final state corresponding to the the special [CLS] token placed at the start of all inputs to BERT models (Devlin et al., 2019). We apply this same approach to the CQA task. For each example in the dataset, we construct three input sequences for fine-tuning BERT. Each sequence is the concatenation of the question, a separator token [SEP], and one of the answer choices. If the approach requires explanation from either CoS-E or automatically generated as in the CAGE, we concatenate the question, [SEP], the explanation, [SEP], and an answer choice. For BERT, the explanations share the same input representation as that of the questions. We also experimented with the explanation sharing the same representation as that of the answer choice but found that the performance decreased slightly. When explanations are used only during training, the explanation variable is optional and the answer choices directly follow the question during evaluation. For all our experiments we used a train batch size of 24, test batch size of 12, 10 training epochs and maximum sequence length of 50 for the baseline and 175 for all experiments involving explanations. The right part of Figure 1 gives an overview of the classification module of our proposed approach. 4.3 Transfer to out-of-domain datasets Transfer without fine-tuning to out-of-domain NLP datasets is known to exhibit poor performance. For example, for the comparatively easier natural langauge inference task with fixed labels, Bowman et al. (2015) show that the accuracy dropped by 25% when training on SNLI and evaluating on SICK-E (Marelli et al., 2014). We study transfer of natural language explanations from the CQA to SWAG (Zellers et al., 2018) and Story Cloze Test (Mostafazadeh et al., 2016). Both the datasets are multiple-choice like CQA and the authors publicize them as commonsense reasoning and inference tasks. We use the GPT language model fine-tuned on CQA train and dev sets to generate explanations on the SWAG train and val sets (with 73546 and 20006 instances respectively) and the Story Cloze Spring 2016 val and test sets (with 1870 instances each). We then train a BERT classifier using the input instances and generated explanations and evaluate on the SWAG and Story Cloze test sets. 5 Experimental Results We present results on the CQA dataset using variations of our proposed Commonsense AutoGenerated Explanations (CAGE). All our models are based on BERT, which also serves as our baseline without any CoS-E or CAGE. All our ablation analysis is conducted on the CQA dev-randomsplit. We also show results for key models on the final test split.2 Method Accuracy (%) BERT (baseline) 63.8 CoS-E-open-ended 65.5 CAGE-reasoning 72.6 Table 2: Results on CQA dev-random-split with CoS-E used during training. Table 2 shows results that compare a BERT baseline that uses only the CQA inputs and the same architecture but trained using inputs that contain explanations from CoS-E during training. 
The BERT baseline model reaches 64% accuracy and adding open-ended human explanations (CoS-E-open-ended) alongside the questions during training results in a 2% boost in accuracy. By generating explanations as described in Section 4.1, we can give the commonsense question answering model access to an explanation that is not conditioned on the ground truth. These explanations (CAGE-reasoning) can be provided during both training and validation and increases the accuracy to 72%. Table 3 shows the results obtained on the CQA test split. We report our two best models that represent using human explanations (CoS-E-openended) for training only and using language model explanations (CAGE-reasoning) during both train and test. We compare our approaches to the best reported models for the CQA task (Talmor et al., 2https://www.tau-nlp.org/csqa-leaderboard 4937 Method Accuracy (%) RC (Talmor et al., 2019) 47.7 GPT (Talmor et al., 2019) 54.8 CoS-E-open-ended 60.2 CAGE-reasoning 64.7 Human (Talmor et al., 2019) 95.3 Table 3: Test accuracy on CQA v1.0. The addition of CoS-E-open-ended during training dramatically improves performance. Replacing CoS-E during training with CAGE reasoning during both training and inference leads to an absolute gain of 10% over the previous state-of-the-art. Method Accuracy (%) CoS-E-selected w/o ques 53.0 CoS-E-limited-open-ended 67.6 CoS-E-selected 70.0 CoS-E-open-ended w/o ques 84.5 CoS-E-open-ended* 89.8 Table 4: Oracle results on CQA dev-random-split using different variants of CoS-E for both training and validation. * indicates CoS-E-open-ended used during both training and validation to contrast with CoS-E-openended used only during training in Table 2. 2019). We observe that using CoS-E-open-ended during training improves the state-of-the-art by approximately 6%. Talmor et al. (2019) experimented with using Google search of “question + answer choice” for each example in the dataset and collected 100 top snippets per answer choice to be used as context for their Reading Comprehension (RC) model. They found that providing such extra data does not improve accuracy. On the other hand, using CAGE-reasoning resulted in a gain of 10% accuracy over the previous state-of-the-art. This suggests that our CoS-E-open-ended and CAGEreasoning explanations provide far more useful information than what can be achieved through simple heuristics like using Google search to find relevant snippets. We observed that our models’ performance on test is lower than those on validation and this trend was confirmed by the organizers of the task. To establish an oracle upper-bound on the performance, we also explored an experimental setting in which human-generated explanations from CoS-E are provided during both training and validation. These results are summarized in Table 4. We note that this is an unfair setting because the human that provided the explanation had access to the ground truth answer; these results merely serve as an oracle for how much potential benefit can come from using CoS-E-open-ended. If the openended human explanations (CoS-E-open-ended) are provided at inference time, performance jumps to approximately 90%. These results also motivate an attempt to automatically generate explanations that establish the world knowledge needed to solve CQA. CAGE-reasoning is our attempt towards this goal. Table 4 also contains results that use only the explanation and exclude the original question from CQA denoted by ‘w/o question’. 
These variants also use explanation during both train and validation. For these experiments we give the explanation in place of the question followed by the answer choices as input to the model. When the explanation consists of words humans selected as justification for the answer (CoS-E-selected), the model was able to obtain 53% in contrast to the 85% achieved by the open-ended human explanations (CoS-E-open-ended). Adding the question boosts performance for CoS-E-selected to 70%, again falling short of almost 90% achieved by CoS-E-open-ended. We conclude then that our full, open-ended CoS-E thus supply a significant source of information beyond simply directing the model towards the most useful information already in the question. Method Accuracy (%) CAGE-reasoning 55.7 BERT baseline 56.7 CoS-E-open-ended 58.2 Table 5: Test results on CQA v1.11. We experimented with one final setting in which we only used open-ended explanations that did not contain any word from any answer choices (23%. In this setting, we call these “CoS-E-limited-openended” explanations because these explanations are limited in the choice of words allowed. We observe that even using these limited kind of explanations improves over the BERT baseline in Table 4, which suggests that the explanations are providing useful information beyond just mentioning the correct or incorrect answers. We also evaluated our key models – CoS-Eopen-ended used during training only and the CAGE reasoning on the v1.11 of CQA that was released before the final submission. Table 5 shows the results obtained on the more challenging CQA v1.11. Camburu et al. (2018) empirically show that 4938 transferring explanations on the natural language inference (NLI) problem from SNLI to MultiNLI performs very poorly and is still an open challenging problem. We study transfer of explanations on commonsense reasoning tasks. The NLI problem has a small fixed set of pre-defined labels unlike the commonsense reasoning tasks such as CQA, SWAG and Story Cloze. Table 6 shows the results obtained by the BERT baseline without explanations and using our transferred explanations from CQA to SWAG and Story Cloze. We observed that adding explanations led to a very small decrease (< 0.6%) in the performance compared to the baseline for both tasks. Method SWAG Story Cloze BERT 84.2 89.8 + expl transfer 83.6 89.5 Table 6: Results for explanation transfer from CQA to out-of-domain SWAG and Sotry Cloze tasks. 6 Analysis and Discussion In Table 2, using CAGE-reasoning at both train and validation resulted in an accuracy of 72%, but Table 4 shows that if CAGE-reasoning truly captured all information provided in CoS-E-openended, performance would be 90%. This gap between CAGE and CoS-E prompted further analysis. We measure quality of CAGE using human evaluation and automated metrics. One of the metrics is the BLEU score (Papineni et al., 2002), which measures syntactical precision by n-gram overlap. We also report perplexity, which provides a token-level measure of how well the language models predict the next word. We obtained a peak BLEU score of 4.1 between CAGEreasoning and CoS-E-open-ended and perplexity of 32. Language models that are not fine-tuned achieve BLEU score of only 0.8. Though it is clearly beneficial to fine-tune the LM and empirical results suggested that CAGE increased performance, these scores suggest that humans and LMs have widely varying ways of providing useful explanations. 
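For reference, the sketch below shows one way such automatic scores could be computed: corpus-level BLEU of generated explanations against the CoS-E references using NLTK, and perplexity derived from per-token log-probabilities. The whitespace tokenization, smoothing method and natural-log base here are assumptions for illustration; the exact evaluation setup behind the numbers above may differ.

```python
import math
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_against_cose(generated, references):
    """Corpus BLEU of generated explanations against CoS-E explanations.

    `generated`: list of hypothesis token lists, one per example.
    `references`: parallel list where each entry is a list of reference
    token lists (here a single human explanation per example).
    """
    smooth = SmoothingFunction().method1  # avoid zero scores for short texts
    return corpus_bleu(references, generated, smoothing_function=smooth)

def perplexity(token_log_probs):
    """Perplexity from per-token natural-log probabilities assigned by the LM."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical usage with whitespace tokenization:
gen = ["people talk to people".split()]
refs = [["confession is the only vocal action".split()]]
print(bleu_against_cose(gen, refs))
```

Smoothing matters here because many generated explanations are short enough that higher-order n-gram matches are absent, which would otherwise drive BLEU toward zero.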
Error analysis on the baseline BERT model that does not use any explanations indicates that the model performs poorly on questions that are longer on an average and are more compositional. The average length of such questions is 14 words as opposed to the average length of 13 words for questions that the model using CAGE predicts inQuestion: What could people do that involves talking? Choices: confession, carnival, state park CoS-E: confession is the only vocal action. Reason people talk to each other Rationale: people talk to people Question: A child wants to play, what would they likely want? Choices: play tag, breathe, fall down CoS-E: A child to play tag Reason Children want to play tag, and they want to play tag with their friends. Rationale: Children want to play tag, what would they want to do? Question: They were getting ready for a really long hike, he put the food in his what? Choices: recycling center, house, backpack CoS-E: Backpacks are used on hikes Reason a backpack is a place to store food and supplies. Rationale: a backpack is used to carry food and supplies Question: You can do knitting to get the feeling of what? Choices: relaxation, your, arthritis CoS-E: Your are focusing on a repetitive task. Reason knitting is the only thing that is relaxing. Rationale: you can do knitting to get the feeling of what? Table 7: Random sample of explanations generated by humans from CoS-E and our CAGE framework’s reasoning and rationalization approaches. Boldface indicates gold label. All the typos and grammatical errors are as they appear in the actual output sequence. correctly. Therefore, we can conclude that explanations help elucidate the longer and more complicated compositional questions. Table 7 shows a collection of examples from CQA, CoS-E, and CAGE samples. We observe that CAGE-reasoning typically employs a much simpler construction than CoS-E-openended. Nonetheless, this simple declarative mode can sometimes be more informative than CoS-Eopen-ended. CAGE achieves this by either providing more explicit guidance (as in the final example of Table 7) or by adding meaningful context (as in the third example by introducing the word ‘friends’). We observe that CAGE-reasoning contains at least one of the answer choices 43% of the time, out of which it contains the model’s actual predicted answer choice 21% of the time. This suggests that there is more to the effectiveness of CAGE-reasoning than directly pointing to the answer. Question: What is the main purpose of having a bath? Choices: cleanness, use water, exfoliation, hygiene, wetness Explanation: the only purpose of having a bath is to clean yourself. Question: Where can you store you spare linens near your socks? Choices: cabinet, chest, hospital, dresser drawers, home Explanation: dresser drawer is the only place that you can store linens. Question: Where do you find the most amount of leafs?, Choices: forrest, floral arrangement, compost pile, field, ground Explanation: the most likely place to find leafs is in a garden. Table 8: Random sample of incorrectly predicted instances by CAGE-reasoning on CQA v1.11 dev-set. Bold indicated ground-truth and underline indicates our CAGE’s prediction. 4939 We also carried out human evaluations to compare 400 examples of CoS-E and CAGEreasoning. We asked human participants on Mechanical Turk to guess the most appropriate answer choice based on only the explanation without the question. 
This tests whether the explanation by itself is sufficient for a human to arrive at the same answer as the neural network. We found that Turkers were able to arrive at the same answer as the model based on CAGE-reasoning 42% of the time. This initially seemed low, but Turkers could only arrive at the same answer as humans using only CoS-E-open-ended 52% of the time From Table 7, we observed that CAGErationalization and CAGE-reasoning were often identical or differed only in word ordering or by replacing one of the answer choices with another. Humans could predict the answer based on just CAGE-rationalization 42% of the time, same as CAGE-reasoning. Although CAGErationalizations seem to be better than CAGEreasoning, we find that it does not drastically improve the model’s language generating behavior which is what humans judge while trying to guess the right answer without the actual question. Even though CoS-E and CAGE are noisy, they empirically perform well when used by downstream models for CQA, but this is not the case for misleading explanations. If we manually changed a random sample of 50 examples to have adversarial misleading explanations, performance dropped from 60% to 30%, well below the baseline of 50% validation accuracy. For example, we changed the explanation from “being able to use“ to “buying more will alleviate stress“ for the question “If a couple is having financial issues, buying products can lead to what“ with answer choices “economic boom”, “disagreements”, “being able to use”. Of the 70% of the errors made by a model trained on misleading explanations, 57% of them were instead correctly answered by our model trained with true CoS-E explanations. This demonstrates the effectiveness of having well-informing explanations. Camburu et al. (2018) use human explanations to train a neural network model on the SNLI dataset (Bowman et al., 2015). However, they obtain explanations at the cost of accuracy. The authors use the InferSent (Conneau et al., 2017) model for classification and add a one-layer LSTM as the explanation decoder. They report a slight drop in performance (< 1%) when training on human explanations and testing by first predicting an answer and then generating explanations. There is a further drop of approximately 2% accuracy when their model generates explanations prior to predicting an answer based only on that explanations. However, they also show that a bidirectional encoder with MLP-classifier obtains 96.83% accuracy when given only human explanations. CQA experiences a lift from explanations when e-SNLI performance appears to degrade with explanations. For CQA, humans are able to predict the right answer only about 52% of the time using only human explanations from CoS-E. On the more challenging CQA v1.11, we observed that our CoS-E model trained on human explanations but evaluated without explanations obtains state-of-the-art performance, beating the BERT baseline by 1.5%. Surprisingly, we found that our CAGE-reasoning model performs slightly worse than the baseline. However, during error analysis we found that the language model explanations do not exhibit any obvious problems. Table 8 shows some samples that CAGE predicts incorrectly. We observed that many of the incorrectly predicted instances had the correct answer in the generated explanation, such as “dresser drawer” and “cleanness” in the first two examples, but this information is not properly used by the BERT classifier. 
A more explicit method of guiding attention towards the relevant information in the explanations might be necessary for such cases. The model also frequently errs when the choices seem semantically close such as “forest” and “compost pile” in the third example. In these cases, the classifier often predicts the incorrect choice on v1.11, but was able to predict the correct choice on v1.0 when only 3 choices were presented. This suggests that simply concatenating explanations is unable to make sufficiently clear the more difficult cases of the newer version of CQA. Transferring the language model used to generate commonsense explanations to out-of-domain datasets, SWAG and Story Cloze, led to slight decrease in performance. Upon inspection, the generated explanations exhibited little grammatical or syntactical errors and often contained apparently relevant information. Table 9 shows examples from both datasets and the corresponding gen4940 SWAG Question: Men are standing on motorbikes getting ready for a motocross competition. Choices: man places the ladders onto a fence and winds up a marching wall, high with hammer and a stone., man is talking to the camera and standing on a podium., man stands outside in the field going at arms of people and leading a long jumping calf in front., man drops the javelin to the ground and jumps it very high. Explanation: man is talking to the camera and not the crowd. Question: The man examines the instrument in his hand. Choices: The person studies a picture of the man playing the violin., The person holds up the violin to his chin and gets ready., The person stops to speak to the camera again., The person puts his arm around the man and backs away. Explanation: the person is holding the instrument in his hand. Question: The woman is seated facing the camera while another woman styles her hair. Choices: The woman in purple is wearing a blue dress and blue headband, using the pits to style her hair., The woman begins to cut the hair with her hair then serves it and begins brushing her hair and styling it., The woman puts some right braids on his., The woman continues to have her hair styled while turned away from the camera. Explanation: the woman is using the braids to trim her hair. Story Cloze (ROCStories) Question: My friends all love to go to the club to dance. They think it’s a lot of fun and always invite. I finally decided to tag along last Saturday. I danced terribly and broke a friend’s toe. Choices: My friends decided to keep inviting me out as I am so much fun., The next weekend, I was asked to please stay home. Explanation: the next weekend, i would be asked to stay home Question: Ari spends $20 a day on pickles. He decides to make his own to save money. He puts the pickles in brine. Ari waits 2 weeks for his pickles to get sour. Choices: Ari opens the jar to find perfect pickles., Ari’s pickles are sweet. Explanation: pickles are the only thing that can be found in a jar. Question: Gina sat on her grandpa’s bed staring outside. It was winter and his garden was dead until spring. Her grandpa had passed away so there would be no one to tend it. The weeds would take over and strangle the flowers. Choices: Gina asked her grandpa what kind of flowers he liked best., Gina decided to go outside and pick some of the weeds. Explanation: the weeds would take over and strangle the flowers. Table 9: Random sample of explanations generated by the language model fine-tuned on CQA and transferred without further training to SWAG and Story Cloze. 
Bold indicates ground-truth. erated explanations. In the SWAG dataset, each question is a video caption from activity recognition videos with choices about what might happen next and the correct answer is the video caption of the next scene. Generated explanations for SWAG appear to be grounded in the given images even though the language model was not at all trained on SWAG. Similarly, we found that for the Story Cloze dataset, the explanations had information pointing to the correct ending. Nonetheless, the classifier was unable to make use of this information to improve performance. 7 Conclusion and Future Work We introduced the Common Sense Explanations (CoS-E) dataset built on top of the existing CommonsenseQA dataset. We also proposed the novel Commonsense Auto-Generated Explanations (CAGE) framework that trains a language model to generate useful explanations when finetuned on the problem input and human explanations These explanations can then be used by a classifier model to make predictions. We empirically show that such an approach not only results in state-of-the-art performance on a difficult commonsense reasoning task, but also opens further avenues for studying explanation as it relates to interpretable commonsense reasoning. We also performed comprehensive error analyses of language model explanations and evaluated explanation transfer to out-of-domain datasets. While CAGE focuses on generating explanations prior to predicting an answer, language models for explanation might also be jointly trained to predict the answer. They might also be extended to a broader set of tasks. With a sufficient dataset of explanations (analogous to CoS-E) for many tasks, it might be possible to fine-tune a more general explanatory language model that generates more useful explanations for unseen tasks. With deferral of explanation to neural models, it will be crucial in the future to study the ethical implications of biases that are accumulated during pretraining or fine-tuning. Explanations must be carefully monitored to ensure that they do not reinforce negative or otherwise harmful reasoning that might then propagate into downstream models. For example, in CQA we observed significant gender disparity and bias with higher proportion of female pronouns used in negative contexts. This kind of bias has inevitably propagated into CoSE and advise these datasets and trained models be used with that in mind. Acknowledgements We would like to thank Melvin Gruesbeck for the illustration of CAGE in Figure 1. We also thank the anonymous reviewers for their feedback. 4941 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP2015), pages 632–642. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In Advances in Neural Information Processing Systems (NeurIPS2018), pages 9560– 9572. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP2017), pages 670–680. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS2015), pages 3079–3087. MIT Press. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL2018), volume 1, pages 1884–1895. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL2018), pages 328–339. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP2016), pages 107–117. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA). Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL2016), pages 839– 849, San Diego, California. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual meeting on Association for Computational Linguistics (ACL2002), pages 311–318. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP2014), pages 1532–1543. 
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/ research-covers/language-unsupervised/ language understanding paper.pdf. 4942 Nazneen Fatema Rajani and Raymond Mooney. 2018. Stacking with auxiliary features for visual question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2217–2226. Nazneen Fatema Rajani and Raymond J. Mooney. 2017. Ensembling visual explanations for vqa. In Proceedings of the NIPS 2017 workshop on Visually-Grounded Interaction and Language (ViGIL). Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS2017), pages 5998–6008. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Terry Winograd. 1972. Understanding natural language. Cognitive psychology, 3(1):1–191. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP2018), pages 93–104. Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018. Improving question answering by commonsense-based pre-training. arXiv preprint arXiv:1809.03568.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4943–4951 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4943 Interpretable Question Answering on Knowledge Bases and Text Alona Sydorova iteratec GmbH Munich, Germany [email protected] Nina Poerner & Benjamin Roth Center for Information and Language Processing LMU Munich, Germany {poerner,beroth}@cis.lmu.de Abstract Interpretability of machine learning (ML) models becomes more relevant with their increasing adoption. In this work, we address the interpretability of ML based question answering (QA) models on a combination of knowledge bases (KB) and text documents. We adapt post hoc explanation methods such as LIME and input perturbation (IP) and compare them with the self-explanatory attention mechanism of the model. For this purpose, we propose an automatic evaluation paradigm for explanation methods in the context of QA. We also conduct a study with human annotators to evaluate whether explanations help them identify better QA models. Our results suggest that IP provides better explanations than LIME or attention, according to both automatic and human evaluation. We obtain the same ranking of methods in both experiments, which supports the validity of our automatic evaluation paradigm. 1 Introduction Question answering (QA) is an important task in natural language processing and machine learning with a wide range of applications. QA systems typically use either structured information in the form of knowledge bases (KBs), or raw text. Recent systems have successfully combined both types of knowledge (Das et al., 2017). Nowadays, due to the changing legal situation and growing application in critical domains, ML based systems are increasingly required to provide explanations of their output. Lipton (2018), Poursabzi-Sangdeh et al. (2018) and Doshi-Velez and Kim (2017) point out that there is no complete agreement on the definition, measurability and evaluation of interpretability in ML models. Nevertheless, a number of explanation methods have been proposed in the recent literature, with the aim of making ML models more transparent for humans. To the best of our knowledge, the problem of explanations for deep learning based QA models working on a combination of structured and unstructured data has not yet been researched. Also, there is a lack of evaluation paradigms to compare different explanation methods in the context of QA. Contributions - We explore interpretability in the context of QA on a combination of KB and text. In particular, we apply attention, LIME and input perturbation (IP). - In order to compare these methods, we propose a novel automatic evaluation scheme based on “fake facts”. - We evaluate whether explanations help humans identify the better out of two QA models. - We show that the results of automatic and human evaluation agree. - Our results suggest that IP performs better than attention and LIME in this context. 2 Question Answering on Knowledge Bases and Text The combination of knowledge bases and text data is of particular interest in the context of QA. While knowledge bases provide a collection of facts with a rigid structure, the semantic information contained in text documents has the potential to enrich the knowledge base. In order to exploit different information sources within one QA system, Das et al. 
(2017) introduce the TextKBQA model, which works on a universal schema representation 4944 Figure 1: Overview of the TextKBQA model architecture. (Riedel et al., 2013) of a KB and text documents. They state that “individual data sources help fill the weakness of the other, thereby improving overall performance” and conclude that “the amalgam of both text and KB is superior than KB alone.” Their model solves the so-called cloze questions task, i.e., filling in blanks in sentences. For example, the answer to “Chicago is the third most populous city in blank .” would be the entity the USA. The model has a KB and a number of raw text sentences at its disposal. Das et al. (2017) use Freebase (Bollacker et al., 2008) as KB (8.0M facts) and ClueWeb (Gabrilovich et al., 2013) as raw text source (0.3M sentences). They test on question-answer pairs from SPADES (Bisk et al., 2016) (93K queries). The TextKBQA model (Figure 1) is a key-value memory network that uses distributed representations for all entities, relations, textual facts and input questions. Every memory cell corresponds to one KB fact or one textual fact, which are encoded as key-value pairs (Miller et al., 2016). Every KB fact is a triple consisting of a subject s, an object o and the relation r between these entities. s, r, o are embedded into real-valued vectors s, r, o. The memory key is the concatenation of subject and relation embedding: k = [s; r] ∈R2d. The memory value is the embedding of the object: v = o ∈Rd. Textual facts are sentences that contain at least two entities. They are also represented as triples, where the relation is a token sequence: (s, [w1, ..., arg1, ..., arg2, ..., wn], o). To convert the sentence into a vector, arg1 and arg2 are replaced by s and blank respectively. Then, the sequence is processed by a bidirectional LSTM. Its last states are concatenated to form the memory key k = [−−−−→ LSTM([w1, ..., wn]); ←−−−− LSTM([w1, ..., wn])] ∈ R2d. The memory value is v = o, as before. A question q = [w1, ..., e, ..., blank , ..., wn] is transformed into a distributed representation q ∈ R2d using the same bidirectional LSTM as before. In this way, KB and textual facts as well as queries are in the same R2d space. Given q and a set of relevant facts, represented by key-value pairs (k, v), TextKBQA performs multi-hop attention. More specifically, the context vector c0 is set to q. In every iteration (hop) t, a new context vector ct is computed as: ct = Wt ct−1 + Wp X (k,v)∈M softmax(ct−1 · k)v  (1) where Wp, Wt are weight matrices. In practice, M contains only facts that share an entity with the query. The result of the last hop is fed into a fullyconnected layer to produce a vector b ∈Rd. Then, the inner product between b and all entity embeddings is taken. The entity with the highest inner product is chosen as the model’s answer aq. We train the TextKBQA model using the datasets described above. We limit the number of textual facts per query to 500, since only 35 out of 1.8M entities in the dataset have more than 500 textual facts. Apart from this modification, we use the exact same implementation and training setup as in Das et al. (2017). Our final model achieves an F1 score of 41.59 on the dev dataset and 40.27 4945 on the test dataset, which is slightly better than the original paper (41.1 and 39.9, respectively). 3 Explanation methods We first present some important notation and give a working definition of an explanation method. 
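Before doing so, the multi-hop update in Equation 1 is compact enough to sketch directly. The NumPy snippet below is purely illustrative: the memory keys, values, entity matrix and weight matrices are random placeholders rather than trained TextKBQA parameters, and the names are ours.

```python
# Minimal sketch of the multi-hop key-value attention in Equation 1.
# Keys and contexts live in R^{2d}, values in R^d, as in the paper.
import numpy as np

rng = np.random.default_rng(0)
d = 50                                      # entity embedding size (hypothetical)
n_facts, n_entities, hops = 8, 100, 3

K = rng.normal(size=(n_facts, 2 * d))       # memory keys, e.g. [s; r] or BiLSTM states
V = rng.normal(size=(n_facts, d))           # memory values (object embeddings)
E = rng.normal(size=(n_entities, d))        # entity embedding matrix
q = rng.normal(size=2 * d)                  # encoded query

W_t = rng.normal(size=(2 * d, 2 * d)) * 0.1
W_p = rng.normal(size=(2 * d, d)) * 0.1
W_out = rng.normal(size=(d, 2 * d)) * 0.1   # fully connected layer producing b in R^d

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

c = q                                       # c_0 = q
for _ in range(hops):
    attn = softmax(K @ c)                   # one attention weight per stored fact
    summary = attn @ V                      # weighted sum of memory values, in R^d
    c = W_t @ c + W_p @ summary             # Equation 1

b = W_out @ c
answer = int(np.argmax(E @ b))              # entity with the highest inner product
print("predicted entity id:", answer)
```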
Formally, let F be a database consisting of all KB and textual facts: F = FKB ∪Ftext. Furthermore, let E be a set of entities that are objects and subjects in F, and let R be a set of relations from FKB. In the following we will use a general notation f for a fact from F, distinguishing between KB and textual facts only when necessary. Let q be a query, and F ⊆F the corresponding set of facts, such that for ∀f ∈F holds: subjectf ∈q. Let TextKBQA be a function computed by the TextKBQA model and aq = TextKBQA(q, F), aq ∈E, the predicted answer to the query q. Note that aq is not necessarily the ground truth answer for q. Analogously to Poerner et al. (2018), we give the following definition: an explanation method is a function φ(f, aq, q, F) that assigns real-valued relevance scores to facts f from F given an input query q and a target entity aq. If φ(f1, aq, q, F) > φ(f2, aq, q, F) then fact f1 is of a higher relevance for aq given q and F than fact f2. 3.1 Attention Weights The attention mechanism provides an explanation method which is an integral part of the TextKBQA architecture. We formally define the explanation method attention weights as: φaw(f, aq, q, F) = softmax(KF · q)f (2) where KF is a matrix whose rows are key vectors of facts in F. Since the TextKBQA model takes three attention hops per query, φaw can be extended as follows: On the one hand, we can take attention weights from the first, second or third (=last) hops. Intuitively, attention weights from the first hop reflect the similarity of fact keys with the original query, while attention weights from the last hop reflect the similarity of fact keys with the summarized context from all previous iterations. On the other hand, some aggregation of attention weights could also be a plausible explanation method. For every fact, we take the mean attention weight over hops to be its average relevance in the reasoning process. Taking into account the above considerations we redefine φaw: −attention weights at hop j: φawj(f, aq, q, F) = softmax(KF · cj−1)f (3) −average attention weights: φawavg(f, aq, q, F) = 1 h h X j=1 softmax(KF · cj−1)f (4) where h is the number of hops. 3.2 LIME LIME (Local Interpretable Model-Agnostic Explanations) is a model-agnostic explanation method (Ribeiro et al., 2016). It approximates behavior of the model in the vicinity of an input sample with the help of a less complex, interpretable model. LIME requires a mapping from original features (used by TextKBQA) to an interpretable representation (used by LIME). For this purpose we use binary “bag of facts” vectors, analogously to the idea of bag of words: a vector z ∈{0, 1}|F| indicates presence or absence of a fact f from F. The reverse mapping is straightforward. We first turn the original fact set F into an interpretable representation z. Every entry of this vector represents a fact from F. Then we sample vectors z′ of the same length |F| by drawing facts from F using the Bernoulli distribution with p = 0.5. In every z′ vector, the presence or absence of facts is encoded as 1 or 0, respectively. We set the number of samples to 1000 in our experiments. For every z′, we obtain the corresponding original representation F′ and give this reduced input to the TextKBQA model. Note that the query q remains unchanged. We are interested in the probability that aq is still the predicted answer to the query q, given facts F′ instead of F. 
In the TextKBQA model, this probability is obtained from the inner product of b and the entity embedding matrix E at position aq. We define this step as a function logit(q, F, aq) = (E · b)aq. We gather the outputs of logit(q, F′, aq) for all sampled instances, together with the corresponding binary vectors, into a dataset Z. Then, we train 4946 a linear model on Z by optimizing the following equation: ξ(q, F) = argmin g∈G L(logit, g) (5) where L is ordinary least squares and G is the class of linear models, such that g(z′) = wg · z′.1 From the linear model g, we extract a weight vector wg ∈R|F|. This vector contains LIME relevance scores for facts in F given aq and q. We formally define the LIME explanation method for the TextKBQA model as follows: φlime(f, aq, q, F) = wg,f (6) 3.3 Input Perturbation Method Another explanation method is input perturbation (IP), originally proposed by Li et al. (2016), who apply it on a sentiment analysis task. They compute relevance scores for every word in a dictionary as the average relative log-likelihood difference that arises when the word is replaced with a baseline value. This method cannot be directly applied to QA, because the same fact can be highly relevant for one query and irrelevant for another. Therefore, we constrain the computation of loglikelihood differences to a single data sample (i.e., a single query). We formally define the input perturbation (IP) explanation method as follows: φip(f, aq, q, F) = logit(q, F, aq) −logit(q, F \ {f}, aq) logit(q, F, aq) (7) where logit is the same logit function that we used for LIME. A positive difference means that if we remove fact f when processing query q, the model’s hidden vector b is less similar to the entity aq, suggesting that the fact is relevant. 4 Automatic evaluation using fake facts This section presents our automatic evaluation approach, which is an extension of the hybrid document paradigm (Poerner et al., 2018). The major advantage of automatic evaluation in the context of explanation methods is that it does not require manual annotation. 1We do not use a proximity measure, because, unlike the original LIME, we only sample from the facts currently present in F, and not from the whole F set. 4.1 Definition of automatic evaluation Poerner et al. (2018) create hybrid documents by randomly concatenating fragments of different documents. We adapt this paradigm to our use case in the following way: Let q be a query and F the corresponding set of facts. We define the corresponding hybrid fact set ˆF as the union of F with another disjoint fact set F′: ˆF = F ∪F′, where F ∩F′ = ∅. (8) Conceptually, F′ are “fake facts”. We discuss how they are created below; for now, just assume that TextKBQA is unable to correctly answer q using only F′. Note that we only consider queries that are correctly answered by the model based on their hybrid fact set ˆF = F ∪F′. The next step is to obtain predictions aq for the hybrid instances and to explain them with the help of an explanation method φ. Recall that φ produces one relevance score per fact. The fact with the highest relevance score, rmax( ˆF, q, φ), is taken to be the most relevant fact given query q, answer aq and facts ˆF, according to φ. We assume that φ made a reasonable choice if rmax( ˆF, q, φ) stems from the original fact set F and not from the set of fake facts F′. Formally, a “hit point” is assigned to φ if: hit(φ, q, ˆF) = ( 1, if rmax( ˆF, q, φ) ∈F, 0, if rmax( ˆF, q, φ) ∈F′. 
(9) The pointing game accuracy of explanation method φ is simply its number of hit points divided by the maximally possible number of hit points. 4.2 Creating Fake Facts To create fake facts for query q, we randomly sample a different query q′ that has the same number of entities and gather its fact set F′. We then replace subject entities in facts from F′ with subject entities from F. We call these “fake facts” because they do not exist in F, unless by coincidence. For example, let q be “ blank was chosen to portray Patrick Bateman, a Wall Street serial killer.” and q′ be “This year Philip and blank divided Judea into four kingdoms.” Then replace subject entities Philip and Judea in facts of F′ by subject entities Patrick Bateman and Wall Street, respectively. E.g., the KB 4947 Facts in hybrid fact set ˆF Facts from F Disney award.award honor.award winner award.award honor.honored for Bambi (real facts) Disney’s Steamboat Willie premiered on November 18th 1928 at the Broadway. Disney film.performance.actor film.performance.character Mickey Disney film.film.directed by.2 film.film.directed by.1 The Opry House Facts from F′ But in the summer of 2007, Apple rocked Disney by launching the iPhone. (fake facts) Disney fashion.clothing size.region fashion.clothing size.person Frankie Rayder The Libertarian is a Disney political party created in 1971. eBay is the largest marketplace in the Disney. Table 1: An example of a hybrid instance. Query: “Walt Disney himself was the original voice of blank .”. Answer: Mickey. Green underlined: fact with the maximal relevance score assigned by IP. Red italics: fact with the maximal relevance score assigned by average attention weights. fact Philip people.person.gender.1 Males is turned into Patrick Bateman people.person.gender.1 Males, the textual fact "This year Herod divided Judea into four kingdoms." becomes "This year Herod divided Wall Street into four kingdoms." Our assumption is as follows: If the model is still able to predict the correct answer despite these fake facts, then this should be due to a fact contained in F and not in F′. This assumption fails when we accidentally sample a fact that supports the correct answer. Therefore, we validate F′ by testing whether the model is able to predict the correct answer to q using just F′. If this is the case, a different query q′ and a different fake fact set F′ are sampled and the validation step is applied again. This procedure goes on until a valid F′ is found. Table 1 contains an example of a query with real and fake facts for which explanations were obtained by average attention weights and IP. IP assigns maximal relevance to a real fact from F, which means that φip receives one hit point for this instance. The average attention weight method considers a fake fact from F′ to be the most important fact and thus does not get a hit point. 4.3 Experiments and results We perform the automatic evaluation experiment on the test set, which contains 9309 questionanswer pairs in total. Recall that we discard queries that cannot be answered correctly, which leaves us with 2661 question-answer pairs. We evaluate the following explanation methods: • φaw1 - attention weights at first hop • φaw3 - attention weights at third (last) hop • φawavg - average attention weights • φlime - LIME with 1000 samples per instance • φip - input perturbation (IP) A baseline that samples a random fact for rmax(...) is used for reference. 
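As a purely illustrative reference, the sketch below strings together the attention-weight and input-perturbation relevance scores (Equations 3, 4 and 7) and the hit function (Equation 9) on a toy hybrid fact set. The `logit` function and the per-hop context vectors are random stand-ins, not a trained TextKBQA model, and all names are ours.

```python
# Toy sketch of the relevance scores (Sections 3.1 and 3.3) and the
# pointing game hit (Section 4.1). Nothing here is learned.
import numpy as np

rng = np.random.default_rng(1)
d2, hops = 100, 3
real_facts = [f"real_{i}" for i in range(5)]
fake_facts = [f"fake_{i}" for i in range(5)]
hybrid = real_facts + fake_facts

keys = {f: rng.normal(size=d2) for f in hybrid}        # memory key per fact
contexts = [rng.normal(size=d2) for _ in range(hops)]  # contexts[j] stands in for c_{j-1}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def phi_aw(hop):
    """Attention-weight relevance at one hop (Eq. 3)."""
    scores = softmax(np.array([keys[f] @ contexts[hop] for f in hybrid]))
    return dict(zip(hybrid, scores))

def phi_aw_avg():
    """Average attention weights over all hops (Eq. 4)."""
    per_hop = [phi_aw(j) for j in range(hops)]
    return {f: np.mean([p[f] for p in per_hop]) for f in hybrid}

def logit(fact_subset):
    """Stand-in for logit(q, F, a_q): here simply higher when more real facts remain."""
    return 1.0 + 0.3 * sum(f.startswith("real") for f in fact_subset) + rng.normal(0, 0.01)

def phi_ip():
    """Input-perturbation relevance (Eq. 7): relative logit drop when a fact is removed."""
    full = logit(hybrid)
    return {f: (full - logit([g for g in hybrid if g != f])) / full for f in hybrid}

def hit(relevance):
    """Pointing-game hit (Eq. 9): does the top-scored fact come from the real set F?"""
    top = max(relevance, key=relevance.get)
    return int(top in real_facts)

print("hit(avg attention):", hit(phi_aw_avg()))
print("hit(IP):           ", hit(phi_ip()))
```

Pointing game accuracy is then just the mean of these hits over all correctly answered queries.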
Table 2 shows pointing game accuracies and the absolute number of hit points achieved by all five explanation methods and the baseline. All methods beat the random baseline. IP is the most successful explanation method with a pointing game accuracy of 0.97, and LIME comes second. Note that we did not tune the number of samples per query drawn by LIME, but set it to 1000. It is possible that as a consequence, queries with large fact sets are not sufficiently explored by LIME. On the other hand, a high number of samples is computationally prohibitive, as TextKBQA has to perform one inference step per sample. Attention weights at hop 3 performs best among the attention-based methods, but worse than LIME and IP. We suspect that the last hop is especially relevant for selecting the answer entity. The poor performance of attention is in line with recent work by Jain and Wallace (2019), who also question the validity of attention as an explanation method. We perform significance tests by means of binomial tests (with α = 0.05). Our null hypothesis is that there is no significant difference in hit scores between a given method and the nexthighest method in the ranking in Table 2. Differences are statistically significant in all cases, except for the difference between attention weights at hop 3 and average attention weights (p = 0.06). 4948 Explanation method Hit points Pointing game acc. attention weights at hop 1 1849 0.69 attention weights at hop 3 2116 0.80 average attention weights 2081 0.78 LIME 2271 0.85 IP 2570 0.97 random baseline 1458 0.55 Table 2: Hit points and pointing game accuracy. 2661 out of 9309 test set questions were used. 5 Evaluation with human annotators The main goal of explanation methods is to make machine learning models more transparent for humans. That is why we conduct a study with human annotators. Our experiment is based on the trust evaluation study conducted by Selvaraju et al. (2017) which, in turn, is motivated by the following idea: An important goal of interpretability is increasing users’ trust in ML models, and trust is directly impacted by how much a model is understood (Ribeiro et al., 2016). Selvaraju et al. (2017) develop a method to visualize explanations for convolutional neural networks on an image classification task, and evaluate this method in different ways. One of their experiments is conducted as follows: Given two models, one of which is known to be better (e.g., to have higher accuracy), instances are chosen that are classified correctly by both models. Visual explanations for these instances are produced by the evaluated explanation methods, and human annotators are given the task of rating the reliability of the models relative to each other, based on the predicted label and the visualizations. Since the annotators see only instances where the classifiers agree, judgments are based purely on the visualizations. An explanation method is assumed to be successful if it helps annotators identify the better model. The study confirmed that humans are able to identify the better classifier with the help of good explanations. We perform a similar study for our use case, but modify it as described below. 5.1 Experimental setup We use two TextKBQA Models, which are trained differently: • model A is the model used above, with a test set F1 of 40 • model B is a TextKBQA model with a test set F1 of 23. The lower score was obtained by training the model for fewer epochs and without pre-training in ONLYKB mode (see (Das et al., 2017)). 
We only present annotators with query instances for which both models output the same answer. However, we do not restrict these answers to be the ground truth. We perform the study with three explanation methods: average attention weights, LIME and IP. We apply each of them to the same question-answer pairs, so that the explanation methods are equally distributed among tasks. Every task contains one query and its predicted answer (which is the same for both models), and explanations for both models by the same explanation method. In contrast to image classification, it would not be human-friendly to show participants all input components (i.e., all facts), since their number can be up to 5500. Hence, we show the top5 facts with the highest relevance score. The order in which model A and model B appear on the screen (i.e., which is “left” and which is “right” in Figure 2) is random to avoid biasing annotators. Annotators are asked to compare both lists of top5 facts and decide which of them explains the answer better. This decision is not binary, but five options are given: definitely left, rather left, difficult to say, rather right and definitely right. The interface is presented in Figure 2. 25 computer science students, researchers and IT professionals took part in our study and annotated 600 tasks in total. 5.2 Results As shown in Table 3, the answer difficult to say is the most frequent one for all explanation methods. For attention weights and LIME there is a clear trend that, against expectations, users found fact lists coming from model B to be a better explanation. The total share of votes for definitely 4949 Figure 2: Interface for the human annotation study. model B and rather model B makes up 49.5% for attention weights and 29% for LIME, while definitely model A and rather model A gain 19.5% and 23.5%, respectively. In contrast to that, IP achieves a higher share of votes for model A than for model B: 16.5% vs. 10.5%. Analogously to Selvaraju et al. (2017), we compute an aggregate score that expresses how much an explanation method helps users to identify the better model. Votes are weighted in the following way: definitely model A +1, definitely model A +0.75, difficult to say +0.5, rather model B +0.25 and definitely model B +0. We then compute a weighted average of votes for all tasks per explanation method. In this way, scores are bounded in [0, 1] like the values of the hit score function used for the automatic evaluation. Values smaller than 0.5 indicate that the less accurate model B was trusted more, while values larger than 0.5 represent a higher level of trust in the more accurate model A. According to this schema, attention weights achieve a score of 0.386 and LIME achieves a score of 0.476. The score of the IP method is 0.524, which means that participants were able to identify the better model A when explanations were given by IP. Significance tests show that while attention weights perform significantly worse than other methods, the difference between LIME and IP is insignificant, with p = 0.07. A larger sample of data and/or more human participants may be necessary in this case. We also collected feedback from participants and performed qualitative analysis on the evaluated fact lists. The preference for the difficult to say option can be explained by the fact that in many cases, both models were explained with the same or very similar fact lists. In particular, we found that IP provided identical top five fact lists in 120 out of 200 tasks. 
In the case of attention weights and LIME, this occurs only in 9 and 10 cases out of 200 tasks. Another problem mentioned by annotators was that KB facts are not intuitive or easy to read for humans that have not dealt with such representations before. It would be interesting to explore if some additional preprocessing of facts would lead to different results. For example, KB facts could be converted into natural language sentences, while textual facts could be presented with additional context like the previous and the next sentences from the original document. We leave such preprocessing to future work. 6 Related work Rychalska et al. (2018) estimate relevance of words in queries with LIME to test the robustness of QA models. However, they do not analyze the importance of the facts used by these QA systems. Abujabal et al. (2017) present a QA system called QUINT that provides a visualization of how a natural language query is transformed into formal language and how the answer is derived. However, this system works only with knowledge bases and the explanatory system is its integral part, i.e., it cannot be reused for other models. Zhou et al. (2018) propose an out-of-the-box interpretable QA model that is able to answer multirelation questions. This model is explicitly designed to work only with KBs. Another approach 4950 avg. attention weights LIME IP definitely model A 6.0% 6.5% 5.0% rather model A 13.5% 17.0% 11.5% difficult to say 31.0% 47.5% 73.0% rather model B 28.0% 18.5% 9.0% definitely model B 21.5% 10.5% 1.5% aggregate score 0.386 0.476 0.524 Table 3: Percentage distribution of votes, and aggregate score, from the human annotation study. for interpretable QA with multi-hop reasoning on knowledge bases is introduced by Murugan et al. (2018). They claim that the transparent nature of attention distributions across reasoning steps allows humans to understand the model’s behavior. To the best of our knowledge, the interpretability of QA models that combine structured and unstructured data has not been addressed yet. Even in the context of KB-only QA models, no comprehensive evaluation of different explanation methods has been performed. The above-mentioned approaches also lack empirical evaluation with human annotators, to estimate how useful the explanations are to non-experts. 7 Conclusions We performed the first evaluation of different explanation methods for a QA model working on a combination of KB and text. The evaluated methods are attention, LIME and input perturbation. To compare their performance, we introduced an automatic evaluation paradigm with fake facts, which does not require manual annotations. We validated the ranking obtained with this paradigm through an experiment with human participants, where we observed the same ranking. Based on the outcomes of our experiments, we recommend the IP method for the TextKBQA model, rather than the model’s self-explanatory attention mechanism or LIME. References Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. 2017. Quint: Interpretable question answering over knowledge bases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 61–66. Association for Computational Linguistics. Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, and Mark Steedman. 2016. Evaluating induced ccg parsers on grounded semantic parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX. Kurt D. 
Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD Conference. Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. 2017. Question answering on knowledge bases and text using universal schema and memory networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 358– 365. Association for Computational Linguistics. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint. ArXiv:1702.08608v2. Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. Facc1: Freebase annotation of clueweb corpora, version 1 (release date 2013-0626, format version 1, correction level 0). Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. arXiv preprint. ArXiv:1902.10186. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint. ArXiv:1612.08220. Zachary Chase Lipton. 2018. The mythos of model interpretability. Queue, 16(3):30:31–30:57. Alexander H. Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409. Association for Computational Linguistics. Selvakumar Murugan, Suriyadeepan Ramamoorthy, Vaidheeswaran Archana, and Malaikannan Sankarasubbu. 2018. Compositional attention networks for interpretability in natural language question answering. arXiv preprint. ArXiv:1810.12698. 4951 Nina Poerner, Benjamin Roth, and Hinrich Sch¨utze. 2018. Evaluating neural network explanation methods using hybrid documents and morphological agreement. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 340–350, Melbourne, Australia. Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna M. Wallach. 2018. Manipulating and measuring model interpretability. arXiv preprint. ArXiv:1802.07810. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. ”why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 1135–1144, New York, NY, USA. ACM. Sebastian Riedel, Limin Yao, Andrew Mccallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. Proceedings of NAACL-HLT 2013, pages 74–84. Barbara Rychalska, Dominika Basaj, and Przemyslaw Biecek. 2018. Are you tough enough? framework for robustness validation of machine comprehension systems. In Interpretability and Robustness for Audio, Speech and Language Workshop, Montreal, Canada. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 618–626, Venice, Italy. Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multirelation question answering. In COLING, pages 2010–2022, Sante Fe, USA.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4952–4962 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4952 A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity Yoshinari Fujinuma Computer Science University of Colorado [email protected] Jordan Boyd-Graber CS, iSchool, UMIACS, LSC University of Maryland [email protected] Michael J. Paul Information Science University of Colorado [email protected] Abstract Cross-lingual word embeddings encode the meaning of words from different languages into a shared low-dimensional space. An important requirement for many downstream tasks is that word similarity should be independent of language—i.e., word vectors within one language should not be more similar to each other than to words in another language. We measure this characteristic using modularity, a network measurement that measures the strength of clusters in a graph. Modularity has a moderate to strong correlation with three downstream tasks, even though modularity is based only on the structure of embeddings and does not require any external resources. We show through experiments that modularity can serve as an intrinsic validation metric to improve unsupervised cross-lingual word embeddings, particularly on distant language pairs in low-resource settings.1 1 Introduction The success of monolingual word embeddings in natural language processing (Mikolov et al., 2013b) has motivated extensions to cross-lingual settings. Cross-lingual word embeddings—where multiple languages share a single distributed representation—work well for classification (Klementiev et al., 2012; Ammar et al., 2016) and machine translation (Lample et al., 2018; Artetxe et al., 2018b), even with few bilingual pairs (Artetxe et al., 2017) or no supervision at all (Zhang et al., 2017; Conneau et al., 2018; Artetxe et al., 2018a). Typically the quality of cross-lingual word embeddings is measured with respect to how well they improve a downstream task. However, sometimes it is not possible to evaluate embeddings for a specific downstream task, for example a future task 1Our code is at https://github.com/akkikiki/ modularity_metric that does not yet have data or on a rare language that does not have resources to support traditional evaluation. In such settings, it is useful to have an intrinsic evaluation metric: a metric that looks at the embedding space itself to know whether the embedding is good without resorting to an extrinsic task. While extrinsic tasks are the ultimate arbiter of whether cross-lingual word embeddings work, intrinsic metrics are useful for low-resource languages where one often lacks the annotated data that would make an extrinsic evaluation possible. However, few intrinsic measures exist for crosslingual word embeddings, and those that do exist require external linguistic resources (e.g., sensealigned corpora in Ammar et al. (2016)). The requirement of language resources makes this approach limited or impossible for low-resource languages, which are the languages where intrinsic evaluations are most needed. Moreover, requiring language resources can bias the evaluation toward words in the resources rather than evaluating the embedding space as a whole. Our solution involves a graph-based metric that considers the characteristics of the embedding space without using linguistic resources. 
To sketch the idea, imagine a cross-lingual word embedding space where it is possible to draw a hyperplane that separates all word vectors in one language from all vectors in another. Without knowing anything about the languages, it is easy to see that this is a problematic embedding: the representations of the two languages are in distinct parts of the space rather than using a shared space. While this example is exaggerated, this characteristic where vectors are clustered by language often appears within smaller neighborhoods of the embedding space, we want to discover these clusters. To measure how well word embeddings are mixed across languages, we draw on concepts from network science. Specifically, some cross4953 .83 .71 eat ᷣΏΡ(eat) 䠷Ρ (take) 汯Ζ (drink) drink ჅΗΡ (warm up) nutritious ᅗͥ (cook) 汯ΗΡ (drinkable) consume ᷣΏ (eat) ͪ汬 (meal, rice) ͪ΅Ω (meal, rice) eating .77 .75 .72 .72 .71 .71 .71 .70 .82 .76 .75 .75 .71 .71 .70 (a) low modularity firefox chrome lollipop ubuntu mozilla android ios ϹϐςЄϞЀύ (firefox) ઊṛ (Yamataka) 揷᝛૝ (Kamo River) 䋠उᰀ (Miyagino) ช෭ᰀ (Kasugano) उઊ (Shiroyama) ኸៗ (Rumoi) .84 .82 .82 .81 .81 .80 .75 .73 .72 .72 .72 .71 (b) high modularity Figure 1: An example of a low modularity (languages mixed) and high modularity cross-lingual word embedding lexical graph using k-nearest neighbors of “eat” (left) and “firefox” (right) in English and Japanese. lingual word embeddings are modular by language: vectors in one language are consistently closer to each other than vectors in another language (Figure 1). When embeddings are modular, they often fail on downstream tasks (Section 2). Modularity is a concept from network theory (Section 3); because network theory is applied to graphs, we turn our word embeddings into a graph by connecting nearest-neighbors—based on vector similarity—to each other. Our hypothesis is that modularity will predict how useful the embedding is in downstream tasks; low-modularity embeddings should work better. We explore the relationship between modularity and three downstream tasks (Section 4) that use cross-lingual word embeddings differently: (i) cross-lingual document classification; (ii) bilingual lexical induction in Italian, Japanese, Spanish, and Danish; and (iii) low-resource document retrieval in Hungarian and Amharic, finding moderate to strong negative correlations between modularity and performance. Furthermore, using modularity as a validation metric (Section 5) makes MUSE (Conneau et al., 2018), an unsupervised model, more robust on distant language pairs. Compared to other existing intrinsic evaluation metrics, modularity captures complementary properties and is more predictive of downstream performance despite needing no external resources (Section 6). 2 Background: Cross-Lingual Word Embeddings and their Evaluation There are many approaches to training crosslingual word embeddings. This section reviews the embeddings we consider in this paper, along with existing work on evaluating those embeddings. 2.1 Cross-Lingual Word Embeddings We focus on methods that learn a cross-lingual vector space through a post-hoc mapping between independently constructed monolingual embeddings (Mikolov et al., 2013a; Vuli´c and Korhonen, 2016). Given two separate monolingual embeddings and a bilingual seed lexicon, a projection matrix can map translation pairs in a given bilingual lexicon to be near each other in a shared embedding space. 
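As a rough illustration of this mapping step (not taken from any of the systems evaluated later), the snippet below fits such a projection on synthetic vectors, both as an unconstrained least-squares problem and with an orthogonality constraint; the seed pairs and embeddings are random placeholders.

```python
# Sketch: learn a projection W that sends seed-lexicon source vectors close
# to their target translations. Embeddings here are random toys.
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 100, 500
X = rng.normal(size=(n_pairs, dim))                         # source vectors of seed pairs
true_rot, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
Y = X @ true_rot + 0.01 * rng.normal(size=(n_pairs, dim))   # target vectors of seed pairs

# Unconstrained least squares: argmin_W ||XW - Y||^2
W_mse, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Orthogonal constraint: solution of the Procrustes problem via SVD of X^T Y
U, _, Vt = np.linalg.svd(X.T @ Y)
W_orth = U @ Vt

print("least-squares fit error:", np.linalg.norm(X @ W_mse - Y))
print("orthogonal fit error:   ", np.linalg.norm(X @ W_orth - Y))
```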
A key assumption is that cross-lingually coherent words have “similar geometric arrangements” (Mikolov et al., 2013a) in the embedding space, enabling “knowledge transfer between languages” (Ruder et al., 2017). We focus on mapping-based approaches for two reasons. First, these approaches are applicable to low-resource languages because they do not requiring large bilingual dictionaries or parallel corpora (Artetxe et al., 2017; Conneau et al., 2018).2 Second, this focus separates the word embedding task from the cross-lingual mapping, which allows us to focus on evaluating the specific multilingual component in Section 4. 2.2 Evaluating Cross-Lingual Embeddings Most work on evaluating cross-lingual embeddings focuses on extrinsic evaluation of downstream tasks (Upadhyay et al., 2016; Glavas et al., 2019). However, intrinsic evaluations are crucial since many low-resource languages lack annotations needed for downstream tasks. Thus, our goal is to develop an intrinsic measure that correlates with downstream tasks without using any external resources. This section summarizes existing work on intrinsic methods of evaluation for cross-lingual embeddings. One widely used intrinsic measure for evaluating the coherence of monolingual embeddings is QVEC (Tsvetkov et al., 2015). Ammar et al. (2016) extend QVEC by using canonical correlation analysis (QVEC-CCA) to make the scores comparable across embeddings with different dimensions. However, while both QVEC and QVEC-CCA can be extended to cross-lingual word embeddings, they are limited: they require external annotated corpora. This is problematic in cross-lingual settings since this requires annotation to be consistent across languages (Ammar et al., 2016). Other internal metrics do not require external 2Ruder et al. (2017) offers detailed discussion on alternative approaches. 4954 resources, but those consider only part of the embeddings. Conneau et al. (2018) and Artetxe et al. (2018a) use a validation metric that calculates similarities of cross-lingual neighbors to conduct model selection. Our approach differs in that we consider whether cross-lingual nearest neighbors are relatively closer than intra-lingual nearest neighbors. Søgaard et al. (2018) use the similarities of intralingual neighbors and compute graph similarity between two monolingual lexical subgraphs built by subsampled words in a bilingual lexicon. They further show that the resulting graph similarity has a high correlation with bilingual lexical induction on MUSE (Conneau et al., 2018). However, their graph similarity still only uses intra-lingual similarities but not cross-lingual similarities. These existing metrics are limited by either requiring external resources or considering only part of the embedding structure (e.g., intra-lingual but not cross-lingual neighbors). In contrast, our work develops an intrinsic metric which is highly correlated with multiple downstream tasks but does not require external resources, and considers both intraand cross-lingual neighbors. Related Work A related line of work is the intrinsic evaluation measures of probabilistic topic models, which are another low-dimensional representation of words similar to word embeddings. Metrics based on word co-occurrences have been developed for measuring the monolingual coherence of topics (Newman et al., 2010; Mimno et al., 2011; Lau et al., 2014). Less work has studied evaluation of cross-lingual topics (Mimno et al., 2009). 
Some researchers have measured the overlap of direct translations across topics (Boyd-Graber and Blei, 2009), while Hao et al. (2018) propose a metric based on co-occurrences across languages that is more general than direct translations. 3 Approach: Graph-Based Diagnostics for Detecting Clustering by Language This section describes our graph-based approach to measure the intrinsic quality of a cross-lingual embedding space. 3.1 Embeddings as Lexical Graphs We posit that we can understand the quality of cross-lingual embeddings by analyzing characteristics of a lexical graph (Pelevina et al., 2016; Hamilton et al., 2016). The lexical graph has words as nodes and edges weighted by their similarity in the 1 0.5 0 0.5 1 9 9.5 10 10.5 stronger sluggish slower slows slow (worse) (faster) (slow, late) (slowly, late) (congestion) Figure 2: Local t-SNE (van der Maaten and Hinton, 2008) of an EN-JA cross-lingual word embedding, which shows an example of “clustering by language”. embedding space. Given a pair of words (i, j) and associated word vectors (vi, vj), we compute the similarity between two words by their vector similarity. We encode this similarity in a weighted adjacency matrix A: Aij ≡max(0, cos_sim(vi, vj)). However, nodes are only connected to their knearest neighbors (Section 6.2 examines the sensitivity to k); all other edges become zero. Finally, each node i has a label gi indicating the word’s language. 3.2 Clustering by Language We focus on a phenomenon that we call “clustering by language”, when word vectors in the embedding space tend to be more similar to words in the same language than words in the other. For example in Figure 2, the intra-lingual nearest neighbors of “slow” have higher similarity in the embedding space than semantically related cross-lingual words. This indicates that words are represented differently across the two languages, thus our hypothesis is that clustering by language degrades the quality of cross-lingual embeddings when used in downstream tasks. 3.3 Modularity of Lexical Graphs With a labeled graph, we can now ask whether the graph is modular (Newman, 2010). In a crosslingual lexical graph, modularity is the degree to which words are more similar to words in the same language than to words in a different language. This is undesirable, because the representation of words is not transferred across languages. If the nearest neighbors of the words are instead within the same language, then the languages are not mapped into the cross-lingual space consis4955 tently. In our setting, the language l of each word defines its group, and high modularity indicates embeddings are more similar within languages than across languages (Newman, 2003; Newman and Girvan, 2004). In other words, good embeddings should have low modularity. Conceptually, the modularity of a lexical graph is the difference between the proportion of edges in the graph that connect two nodes from the same language and the expected proportion of such edges in a randomly connected lexical graph. If edges were random, the number of edges starting from node i within the same language would be the degree of node i, di = P j Aij for a weighted graph, following Newman (2004), times the proportion of words in that language. Summing over all nodes gives the expected number of edges within a language, al = 1 2m X i di1 [gi = l] , (1) where m is the number of edges, gi is the label of node i, and 1 [·] is an indicator function that evaluates to 1 if the argument is true and 0 otherwise. 
Next, we count the fraction of edges ell that connect words of the same language: ell = 1 2m X ij Aij1 [gi = l] 1 [gj = l] . (2) Given L different languages, we calculate overall modularity Q by taking the difference between ell and a2 l for all languages: Q = L X l=1 (ell −a2 l ). (3) Since Q does not necessarily have a maximum value of 1, we normalize modularity: Qnorm = Q Qmax , where Qmax = 1 − L X l=1 (a2 l ). (4) The higher the modularity, the more words from the same language appear as nearest neighbors. Figure 1 shows the example of a lexical subgraph with low modularity (left, Qnorm = 0.143) and high modularity (right, Qnorm = 0.672). In Figure 1b, the lexical graph is modular since “firefox” does not encode same sense in both languages. Our hypothesis is that cross-lingual word embeddings with lower modularity will be more successful in downstream tasks. If this hypothesis holds, then modularity could be a useful metric for cross-lingual evaluation. Language Corpus Tokens English (EN) News 23M Spanish (ES) News 25M Italian (IT) News 23M Danish (DA) News 20M Japanese (JA) News 28M Hungarian (HU) News 20M Amharic (AM) LORELEI 28M Table 1: Dataset statistics (source and number of tokens) for each language including both Indo-European and non-Indo-European languages. 4 Experiments: Correlation of Modularity with Downstream Success We now investigate whether modularity can predict the effectiveness of cross-lingual word embeddings on three downstream tasks: (i) cross-lingual document classification, (ii) bilingual lexical induction, and (iii) document retrieval in low-resource languages. If modularity correlates with task performance, it can characterize embedding quality. 4.1 Data To investigate the relationship between embedding effectiveness and modularity, we explore five different cross-lingual word embeddings on six language pairs (Table 1). Monolingual Word Embeddings All monolingual embeddings are trained using a skip-gram model with negative sampling (Mikolov et al., 2013b). The dimension size is 100 or 200. All other hyperparameters are default in Gensim ( ˇReh˚uˇrek and Sojka, 2010). News articles except for Amharic are from Leipzig Corpora (Goldhahn et al., 2012). For Amharic, we use documents from LORELEI (Strassel and Tracey, 2016). MeCab (Kudo et al., 2004) tokenizes Japanese sentences. Bilingual Seed Lexicon For supervised methods, bilingual lexicons from Rolston and Kirchhoff (2016) induce all cross-lingual embeddings except for Danish, which uses Wiktionary.3 4.2 Cross-Lingual Mapping Algorithms We use three supervised (MSE, MSE+Orth, CCA) and two unsupervised (MUSE, VECMAP) crosslingual mappings:4 3https://en.wiktionary.org/ 4We use the implementations from original authors with default parameters unless otherwise noted. 4956 Mean-squared error (MSE) Mikolov et al. (2013a) minimize the mean-squared error of bilingual entries in a seed lexicon to learn a projection between two embeddings. We use the implementation by Artetxe et al. (2016). MSE with orthogonal constraints (MSE+Orth) Xing et al. (2015) add length normalization and orthogonal constraints to preserve the cosine similarities in the original monolingual embeddings. Artetxe et al. (2016) further preprocess monolingual embeddings by mean centering.5 Canonical Correlation Analysis (CCA) Faruqui and Dyer (2014) maps two monolingual embeddings into a shared space by maximizing the correlation between translation pairs in a seed lexicon. Conneau et al. 
(2018, MUSE) use languageadversarial learning (Ganin et al., 2016) to induce the initial bilingual seed lexicon, followed by a refinement step, which iteratively solves the orthogonal Procrustes problem (Schönemann, 1966; Artetxe et al., 2017), aligning embeddings without an external bilingual lexicon. Like MSE+Orth, vectors are unit length and mean centered. Since MUSE is unstable (Artetxe et al., 2018a; Søgaard et al., 2018), we report the best of five runs. Artetxe et al. (2018a, VECMAP) induce an initial bilingual seed lexicon by aligning intra-lingual similarity matrices computed from each monolingual embedding. We report the best of five runs to address uncertainty from the initial dictionary. 4.3 Modularity Implementation We implement modularity using random projection trees (Dasgupta and Freund, 2008) to speed up the extraction of k-nearest neighbors,6 tuning k = 3 on the German Rcv2 dataset (Section 6.2). 4.4 Task 1: Document Classification We now explore the correlation of modularity and accuracy on cross-lingual document classification. We classify documents from the Reuters Rcv1 and Rcv2 corpora (Lewis et al., 2004). Documents have one of four labels (Corporate/Industrial, Economics, Government/Social, Markets). We follow Klementiev et al. (2012), except we use all EN training documents and documents in each target 5One round of iterative normalization (Zhang et al., 2019) 6https://github.com/spotify/annoy 0.2 0.4 0.6 0.8 Classification Accuracy 0.4 0.5 0.6 0.7 Modularity Dim 100 200 Lang DA ES IT JA Figure 3: Classification accuracy and modularity of cross-lingual word embeddings (ρ = −0.665): less modular cross-lingual mappings have higher accuracy. Method Acc. Modularity MSE 0.399 0.529 Supervised CCA 0.502 0.513 MSE+Orth 0.628 0.452 Unsupervised MUSE 0.711 0.431 VECMAP 0.643 0.432 Table 2: Average classification accuracy on (EN → DA, ES, IT, JA) along with the average modularity of five cross-lingual word embeddings. MUSE has the best accuracy, captured by its low modularity. language (DA, ES, IT, and JA) as tuning and test data. After removing out-of-vocabulary words, we split documents in target languages into 10% tuning data and 90% test data. Test data are 10,067 documents for DA, 25,566 for IT, 58,950 for JA, and 16,790 for ES. We exclude languages Reuters lacks: HU and AM. We use deep averaging networks (Iyyer et al., 2015, DAN) with three layers, 100 hidden states, and 15 epochs as our classifier. The DAN had better accuracy than averaged perceptron (Collins, 2002) in Klementiev et al. (2012). Results We report the correlation value computed from the data points in Figure 3. Spearman’s correlation between modularity and classification accuracy on all languages is ρ = −0.665. Within each language pair, modularity has a strong correlation within EN-ES embeddings (ρ = −0.806), EN-JA (ρ = −0.794), EN-IT (ρ = −0.784), and a moderate correlation within EN-DA embeddings (ρ = −0.515). MUSE has the best classification accuracy (Table 2), reflected by its low modularity. Error Analysis A common error in EN →JA classification is predicting Corporate/Industrial for documents labeled Markets. One cause is documents with 終値“closing price”; this has few market-based English neighbors (Table 3). As a result, the model fails to transfer across languages. 
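For reference, the normalized modularity of Section 3 (Equations 1-4) can be reproduced in a few lines. The sketch below is not the implementation used in the experiments: it relies on synthetic vectors and an exact k-nearest-neighbor search instead of the random projection trees mentioned in Section 4.3, and serves only to make the quantity concrete.

```python
# Normalized modularity (Eqs. 1-4) of a k-NN lexical graph, on synthetic data.
import numpy as np

def knn_graph(vecs, k=3):
    """Weighted adjacency: A_ij = max(0, cos_sim), kept only for k nearest neighbors."""
    normed = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = np.clip(normed @ normed.T, 0.0, None)
    np.fill_diagonal(sims, 0.0)
    A = np.zeros_like(sims)
    for i, row in enumerate(sims):
        for j in np.argsort(-row)[:k]:
            A[i, j] = A[j, i] = row[j]        # keep the graph symmetric
    return A

def normalized_modularity(A, labels):
    two_m = A.sum()                           # 2m for a weighted graph
    degrees = A.sum(axis=1)
    Q, sum_a_sq = 0.0, 0.0
    for l in set(labels):
        mask = np.array([g == l for g in labels])
        e_ll = A[mask][:, mask].sum() / two_m     # Eq. 2
        a_l = degrees[mask].sum() / two_m         # Eq. 1
        Q += e_ll - a_l ** 2                      # Eq. 3
        sum_a_sq += a_l ** 2
    return Q / (1.0 - sum_a_sq)                   # Eq. 4

rng = np.random.default_rng(0)
# Two synthetic "languages": well-mixed vectors give modularity near zero,
# while a language-specific offset raises it (clustering by language).
vecs = rng.normal(size=(200, 50))
labels = ["en"] * 100 + ["ja"] * 100
print("Q_norm (mixed):    ", round(normalized_modularity(knn_graph(vecs), labels), 3))
vecs[100:] += 3.0                                 # separate the second language
print("Q_norm (clustered):", round(normalized_modularity(knn_graph(vecs), labels), 3))
```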
4957 市場“market” 終値“closing price” 新興“new coming” 上げ幅“gains” market 株価“stock price” markets 年初来“yearly” 軟調“bearish” 続落“continued fall” マーケット“market” 月限“contract month” 活況“activity” 安値“low price” 相場“market price” 続伸“continuous rise” 底入“bottoming” 前日“previous day” 為替“exchange” 先物“futures” ctoc 小幅“narrow range” Table 3: Nearest neighbors in an EN-JA embedding. Unlike the JA word “market”, the JA word “closing price” has no EN vector nearby. 5 10 15 20 Precision@1 0.4 0.5 0.6 0.7 Modularity Dim 100 200 Lang DA ES IT JA Figure 4: Bilingual lexical induction results and modularity of cross-lingual word embeddings (ρ = −0.789): lower modularity means higher precision@1. 4.5 Task 2: Bilingual Lexical Induction (BLI) Our second downstream task explores the correlation between modularity and bilingual lexical induction (BLI). We evaluate on the test set from Conneau et al. (2018), but we remove pairs in the seed lexicon from Rolston and Kirchhoff (2016). The result is 2,099 translation pairs for ES, 1,358 for IT, 450 for DA, and 973 for JA. We report precision@1 (P@1) for retrieving cross-lingual nearest neighbors by cross-domain similarity local scaling (Conneau et al., 2018, CSLS). Results Although this task ignores intra-lingual nearest neighbors when retrieving translations, modularity still has a high correlation (ρ = −0.785) with P@1 (Figure 4). MUSE and VECMAP beat the three supervised methods, which have the lowest modularity (Table 4). P@1 is low compared to other work on the MUSE test set (e.g., Conneau et al. (2018)) because we filter out translation pairs which appeared in the large training lexicon compiled by Rolston and Kirchhoff (2016), and the raw corpora used to train monolingual embeddings (Table 1) are relatively small compared to Wikipedia. Method P@1 Modularity MSE 7.30 0.529 Supervised CCA 3.06 0.513 MSE+Orth 10.57 0.452 Unsupervised MUSE 11.83 0.431 VECMAP 12.92 0.432 Table 4: Average precision@1 on (EN →DA, ES, IT, JA) along with the average modularity of the crosslingual word embeddings trained with different methods. VECMAP scores the best P@1, which is captured by its low modularity. 4.6 Task 3: Document Retrieval in Low-Resource Languages As a third downstream task, we turn to an important task for low-resource languages: lexicon expansion (Gupta and Manning, 2015; Hamilton et al., 2016) for document retrieval. Specifically, we start with a set of EN seed words relevant to a particular concept, then find related words in a target language for which a comprehensive bilingual lexicon does not exist. We focus on the disaster domain, where events may require immediate NLP analysis (e.g., sorting SMS messages to first responders). We induce keywords in a target language by taking the n nearest neighbors of the English seed words in a cross-lingual word embedding. We manually select sixteen disaster-related English seed words from Wikipedia articles, “Natural hazard” and “Anthropogenic hazard”. Examples of seed terms include “earthquake” and “flood”. Using the extracted terms, we retrieve disaster-related documents by keyword matching and assess the coverage and relevance of terms by area under the precision-recall curve (AUC) with varying n. Test Corpora As positively labeled documents, we use documents from the LORELEI project (Strassel and Tracey, 2016) containing any disaster-related annotation. There are 64 disasterrelated documents in Amharic, and 117 in Hungarian. 
We construct a set of negatively labeled documents from the Bible; because the LORELEI corpus does not include negative documents and the Bible is available in all our languages (Christodouloupoulos and Steedman, 2015), we take the chapters of the gospels (89 documents), which do not discuss disasters, and treat these as non-disaster-related documents. Results Modularity has a moderate correlation with AUC (ρ = −0.378, Table 5). While modularity focuses on the entire vocabulary of cross-lingual 4958 Lang. Method AUC Mod. AM MSE 0.578 0.628 CCA 0.345 0.501 MSE+Orth 0.606 0.480 MUSE 0.555 0.475 VECMAP 0.592 0.506 HU MSE 0.561 0.598 CCA 0.675 0.506 MSE+Orth 0.612 0.447 MUSE 0.664 0.445 VECMAP 0.612 0.432 Spearman Correlation ρ −0.378 Table 5: Correlation between modularity and AUC on document retrieval. It shows a moderate correlation to this task. word embeddings, this task focuses on a small, specific subset—disaster-relevant words—which may explain the low correlation compared to BLI or document classification. 5 Use Case: Model Selection for MUSE A common use case of intrinsic measures is model selection. We focus on MUSE (Conneau et al., 2018) since it is unstable, especially on distant language pairs (Artetxe et al., 2018a; Søgaard et al., 2018; Hoshen and Wolf, 2018) and therefore requires an effective metric for model selection. MUSE uses a validation metric in its two steps: (1) the language-adversarial step, and (2) the refinement step. First the algorithm selects an optimal mapping W using a validation metric, obtained from language-adversarial learning (Ganin et al., 2016). Then the selected mapping W from the language-adversarial step is passed on to the refinement step (Artetxe et al., 2017) to re-select the optimal mapping W using the same validation metric after each epoch of solving the orthogonal Procrustes problem (Schönemann, 1966). Normally, MUSE uses an intrinsic metric, CSLS of the top 10K frequent words (Conneau et al., 2018, CSLS-10K). Given word vectors s, t ∈Rn from a source and a target embedding, CSLS is a cross-lingual similarity metric, CSLS(Ws, t) = 2 cos(Ws, t)−r(Ws)−r(t) (5) where W is the trained mapping after each epoch, and r(x) is the average cosine similarity of the top 10 cross-lingual nearest neighbors of a word x. What if we use modularity instead? To test modularity as a validation metric for MUSE, we compute modularity on the lexical graph of 10K most frequent words (Mod-10K; we use 10K for consistency with CSLS on the same words) after each Family Lang. CSLS-10K Mod-10K Avg. Best Avg. Best Germanic DA 52.62 60.27 52.18 60.13 DE 75.27 75.60 75.16 75.53 Romance ES 74.35 83.00 74.32 83.00 IT 78.41 78.80 78.43 78.80 IndoIranian FA 27.79 33.40 27.77 33.40 HI 25.71 33.73 26.39 34.20 BN 0.00 0.00 0.09 0.87 Others FI 4.71 47.07 4.71 47.07 HU 52.55 54.27 52.35 54.73 JA 18.13 49.69 36.13 49.69 ZH 5.01 37.20 10.75 37.20 KO 16.98 20.68 17.34 22.53 AR 15.43 33.33 15.71 33.67 ID 67.69 68.40 67.82 68.40 VI 0.01 0.07 0.01 0.07 Table 6: BLI results (P@1 ×100%) from EN to each target language with different validation metrics for MUSE: default (CSLS-10K) and modularity (Mod-10K). We report the average (Avg.) and the best (Best) from ten runs with ten random seeds for each validation metric. Bold values are mappings that are not shared between the two validation metrics. Mod-10K improves the robustness of MUSE on distant language pairs. epoch of the adversarial step and the refinement step and select the best mapping. 
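For concreteness, equation (5) can be computed in a few lines; the same scores drive both the Task 2 translation retrieval and the CSLS-10K validation. In the sketch below, mapped_src is assumed to already hold Ws for every source word, and the variable names in the usage comments are placeholders rather than part of any released code.

```python
import numpy as np

def csls_matrix(mapped_src, tgt, k=10):
    """Pairwise CSLS(Ws, t) = 2*cos(Ws, t) - r(Ws) - r(t), as in equation (5).
    r(x) is the mean cosine similarity of x to its k nearest cross-lingual
    neighbors; k = 10 follows Conneau et al. (2018)."""
    def unit(m):
        m = np.asarray(m, dtype=float)
        return m / np.clip(np.linalg.norm(m, axis=1, keepdims=True), 1e-12, None)
    cos = unit(mapped_src) @ unit(tgt).T                  # (n_src, n_tgt) cosine matrix
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # r(Ws): source-side hubness penalty
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # r(t):  target-side hubness penalty
    return 2 * cos - r_src[:, None] - r_tgt[None, :]

# Task 2 retrieval and P@1 (variable names are placeholders):
#   scores = csls_matrix(src_vectors @ W.T, tgt_vectors)
#   p_at_1 = np.mean(scores.argmax(axis=1) == gold_target_indices)
# CSLS-10K restricts the same computation to the 10K most frequent words
# and is recomputed after every epoch.
```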
The important difference between these two metrics is that Mod-10K considers the relative similarities between intra- and cross-lingual neighbors, while CSLS-10K only considers the similarities of cross-lingual nearest neighbors.7 Experiment Setup We use the pre-trained fastText vectors (Bojanowski et al., 2017) to be comparable with the prior work. Following Artetxe et al. (2018a), all vectors are unit length normalized, mean centered, and then unit length normalized. We use the test lexicon by Conneau et al. (2018). We run ten times with the same random seeds and hyperparameters but with different validation metrics. Since MUSE is unstable on distant language pairs (Artetxe et al., 2018a; Søgaard et al., 2018; Hoshen and Wolf, 2018), we test it on English to languages from diverse language families: Indo-European languages such as Danish (DA), German (DE), Spanish (ES), Farsi (FA), Italian (IT), Hindi (HI), Bengali (BN), and non-Indo-European languages such as Finnish (FI), Hungarian (HU), Japanese (JA), Chinese (ZH), Korean (KO), Arabic (AR), Indonesian (ID), and Vietnamese (VI). 7Another difference is that k-nearest neighbors for CSLS10K is k = 10, whereas Mod-10K uses k = 3. However, using k = 3 for CSLS-10K leads to worse results; we therefore only report the result on the default metric i.e., k = 10. 4959 0 0.25 0.50 0.75 1 Predicted Accuracy R2 = 0.770 R2 = 0.770 Omit Avg. cos_sim dim 100 200 lang DA IT R2 = 0.797 R2 = 0.797 Omit CSLS-10K 0 0.25 0.50 0.75 1 0 0.25 0.50 0.75 1 R2 = 0.554 R2 = 0.554 Omit Modularity 0 0.25 0.50 0.75 1 Actual Classification Accuracy R2 = 0.703 R2 = 0.703 Omit QVEC-CCA Figure 5: We predict the cross-lingual document classification results for DA and IT from Figure 3 using three out of four evaluation metrics. Ablating modularity causes by far the largest decrease (R2 = 0.814 when using all four features) in R2, showing that it captures information complementary to the other metrics. Results Table 6 shows P@1 on BLI for each target language using English as the source language. Mod-10K improves P@1 over the default validation metric in diverse languages, especially on the average P@1 for non-Germanic languages such as JA (+18.00%) and ZH (+5.74%), and the best P@1 for KO (+1.85%). These language pairs include pairs (EN-JA and EN-HI), which are difficult for MUSE (Hoshen and Wolf, 2018). Improvements in JA come from selecting a better mapping during the refinement step, which the default validation misses. For ZH, HI, and KO, the improvement comes from selecting better mappings during the adversarial step. However, modularity does not improve on all languages (e.g., VI) that are reported to fail by Hoshen and Wolf (2018). 6 Analysis: Understanding Modularity as an Evaluation Metric The experiments so far show that modularity captures whether an embedding is useful, which suggests that modularity could be used as an intrinsic evaluation or validation metric. Here, we investigate whether modularity can capture distinct information compared to existing evaluation measures: QVEC-CCA (Ammar et al., 2016), CSLS (Conneau et al., 2018), and cosine similarity between translation pairs (Section 6.1). We also analyze the effect of the number of nearest neighbors k (Section 6.2). 
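Before turning to that analysis, it is worth writing the selection procedure of Section 5 down explicitly, since it only changes which score is monitored after each epoch. In the minimal sketch below, train_epoch and modularity_10k are stand-ins for MUSE's adversarial or refinement step and for a modularity computation restricted to the lexical graph of the 10K most frequent words.

```python
def select_mapping(train_epoch, modularity_10k, n_epochs):
    """Model selection with Mod-10K: keep the mapping whose 10K-word lexical
    graph has the lowest modularity (MUSE keeps the highest CSLS-10K instead)."""
    best_w, best_score = None, float("inf")
    for epoch in range(n_epochs):
        w = train_epoch(epoch)         # one adversarial or refinement epoch (placeholder)
        score = modularity_10k(w)      # lower is better
        if score < best_score:
            best_w, best_score = w, score
    return best_w
```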
6.1 Ablation Study Using Linear Regression We fit a linear regression model to predict the classification accuracy given four intrinsic measures: QVEC-CCA, CSLS, average cosine similarity of 0 50 100 150 200 k 0.80 0.85 0.90 Absolute Correlation Pearson Correlation Spearman Correlation Figure 6: Correlation between modularity and classification performance (EN→DE) with different numbers of neighbors k. Correlations are computed on the same setting as Figure 3 using supervised methods. We use this to set k = 3. translations, and modularity. We ablate each of the four measures, fitting linear regression with standardized feature values, for two target languages (IT and DA) on the task of cross-lingual document classification (Figure 3). We limit to IT and DA because aligned supersense annotations to EN ones (Miller et al., 1993), required for QVECCCA are only available in those languages (Montemagni et al., 2003; Martínez Alonso et al., 2015; Martınez Alonso et al., 2016; Ammar et al., 2016). We standardize the values of the four features before training the regression model. Omitting modularity hurts accuracy prediction on cross-lingual document classification substantially, while omitting the other three measures has smaller effects (Figure 5). Thus, modularity complements the other measures and is more predictive of classification accuracy. 6.2 Hyperparameter Sensitivity While modularity itself does not have any adjustable hyperparameters, our approach to constructing the lexical graph has two hyperparameters: the number of nearest neighbors (k) and the number of trees (t) for approximating the k-nearest neighbors using random projection trees. We conduct a grid search for k ∈{1, 3, 5, 10, 50, 100, 150, 200} and t ∈ {50, 100, 150, 200, 250, 300, 350, 400, 450, 500} using the German Rcv2 corpus as the held-out language to tune hyperparameters. The nearest neighbor k has a much larger effect on modularity than t, so we focus on analyzing the effect of k, using the optimal t = 450. Our 4960 earlier experiments all use k = 3 since it gives the highest Pearson’s and Spearman’s correlation on the tuning dataset (Figure 6). The absolute correlation between the downstream task decreases when setting k > 3, indicating nearest neighbors beyond k = 3 are only contributing noise. 7 Discussion: What Modularity Can and Cannot Do This work focuses on modularity as a diagnostic tool: it is cheap and effective at discovering which embeddings are likely to falter on downstream tasks. Thus, practitioners should consider including it as a metric for evaluating the quality of their embeddings. Additionally, we believe that modularity could serve as a useful prior for the algorithms that learn cross-lingual word embeddings: during learning prefer updates that avoid increasing modularity if all else is equal. Nevertheless, we recognize limitations of modularity. Consider the following cross-lingual word embedding “algorithm”: for each word, select a random point on the unit hypersphere. This is a horrible distributed representation: the position of words’ embedding has no relationship to the underlying meaning. Nevertheless, this representation will have very low modularity. Thus, while modularity can identify bad embeddings, once vectors are well mixed, this metric—unlike QVEC or QVEC-CCA—cannot identify whether the meanings make sense. Future work should investigate how to combine techniques that use both word meaning and nearest neighbors for a more robust, semisupervised cross-lingual evaluation. 
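To make the ablation of Section 6.1 concrete, the sketch below fits the regression on standardized features and reports R2 with each intrinsic measure omitted in turn. The feature matrix would hold one row per cross-lingual embedding from Figure 3; random stand-in values are used here, constructed so that the fourth measure carries most of the signal.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

MEASURES = ["QVEC-CCA", "CSLS", "avg. cos_sim", "modularity"]

def ablated_r2(X, y):
    """R^2 of the full regression and of each regression with one measure left out."""
    X = StandardScaler().fit_transform(X)
    out = {"all": LinearRegression().fit(X, y).score(X, y)}
    for i, name in enumerate(MEASURES):
        X_i = np.delete(X, i, axis=1)
        out["omit " + name] = LinearRegression().fit(X_i, y).score(X_i, y)
    return out

# Stand-in data: one row per embedding, one column per intrinsic measure.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))
y = 0.5 * X[:, 3] + 0.1 * rng.normal(size=16)   # accuracy driven mostly by the last measure
print(ablated_r2(X, y))                          # omitting "modularity" lowers R^2 the most
```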
Acknowledgments This work was supported by NSF grant IIS-1564275 and by DARPA award HR0011-15-C-0113 under subcontract to Raytheon BBN Technologies. The authors would like to thank Sebastian Ruder, Akiko Aizawa, the members of the CLIP lab at the University of Maryland, the members of the CLEAR lab at the University of Colorado, and the anonymous reviewers for their feedback. The authors would like to also thank Mozhi Zhang for providing the deep averaging network code. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. References Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. Computing Research Repository, arXiv:1602.01925. Version 2. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of Empirical Methods in Natural Language Processing. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In Proceedings of the International Conference on Learning Representations. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics. Jordan Boyd-Graber and David M. Blei. 2009. Multilingual topic models for unaligned text. In Proceedings of Uncertainty in Artificial Intelligence. Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: The Bible in 100 languages. Proceedings of the Language Resources and Evaluation Conference. Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of Empirical Methods in Natural Language Processing. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the International Conference on Learning Representations. Sanjoy Dasgupta and Yoav Freund. 2008. Random projection trees and low dimensional manifolds. In Proceedings of the annual ACM symposium on Theory of computing. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the European Chapter of the Association for Computational Linguistics. 4961 Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17. Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. Computing Research Repository, arXiv:1902.00508. Version 1. Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. 
Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In Proceedings of the Language Resources and Evaluation Conference. Sonal Gupta and Christopher D. Manning. 2015. Distributed representations of words to guide bootstrapped entity classifiers. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of Empirical Methods in Natural Language Processing. Shudong Hao, Jordan Boyd-Graber, and Michael J. Paul. 2018. From the Bible to Wikipedia: adapting topic model evaluation to multilingual and lowresource settings. In Conference of the North American Chapter of the Association for Computational Linguistics. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of Empirical Methods in Natural Language Processing. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of International Conference on Computational Linguistics. Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of Empirical Methods in Natural Language Processing. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of the International Conference on Learning Representations. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the European Chapter of the Association for Computational Linguistics. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5. Héctor Martínez Alonso, Anders Johannsen, Sussi Olsen, Sanni Nimb, Nicolai Hartvig Sørensen, Anna Braasch, Anders Søgaard, and Bolette Sandford Pedersen. 2015. Supersense tagging for Danish. In Proceedings of the Nordic Conference of Computational Linguistics. Héctor Martınez Alonso, Anders Johannsen, Sussi Olsen, Sanni Nimb, and Bolette Sandford Pedersen. 2016. An empirically grounded expansion of the supersense inventory. In Proceedings of the Global Wordnet Conference. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. Computing Research Repository, arXiv:1309.4168. Version 1. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems. George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the Human Language Technology Conference. David M. Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. 
In Proceedings of Empirical Methods in Natural Language Processing. David M. Mimno, Hanna M. Wallach, Edmund M. Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of Empirical Methods in Natural Language Processing. Simonetta Montemagni, Francesco Barsotti, Marco Battista, Nicoletta Calzolari, Ornella Corazzari, Alessandro Lenci, Antonio Zampolli, Francesca Fanciulli, Maria Massetani, Remo Raffaelli, Roberto Basili, Maria Teresa Pazienza, Dario Saracino, Fabio Zanzotto, Nadia Mana, Fabio Pianesi, and Rodolfo Delmonte. 2003. Building the Italian syntactic-semantic treebank. In Treebanks: Building and Using Parsed Corpora. Springer. 4962 David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Conference of the North American Chapter of the Association for Computational Linguistics. Mark E. J. Newman. 2003. Mixing patterns in networks. Physical Review E, 67(2). Mark E. J. Newman. 2004. Analysis of weighted networks. Physical Review E, 70(5). Mark E. J. Newman. 2010. Networks: An introduction. Oxford university press. Mark E. J. Newman and Michelle Girvan. 2004. Finding and evaluating community structure in networks. Physical Review E, 69(2). Maria Pelevina, Nikolay Arefiev, Chris Biemann, and Alexander Panchenko. 2016. Making sense of word embeddings. In Proceedings of the 1st Workshop on Representation Learning for NLP. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. Leanne Rolston and Katrin Kirchhoff. 2016. Collection of bilingual data for lexicon transfer learning. UWEE Technical Report. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2017. A survey of cross-lingual embedding models. Computing Research Repository, arXiv:1706.04902. Version 2. Peter H. Schönemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1). Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the Association for Computational Linguistics. Stephanie Strassel and Jennifer Tracey. 2016. LORELEI language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Language Resources and Evaluation Conference. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of Empirical Methods in Natural Language Processing. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9. Ivan Vuli´c and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. 
In Proceedings of the Association for Computational Linguistics. Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber. 2019. Are girls neko or sh¯ojo? Cross-lingual alignment of non-isomorphic embeddings with iterative normalization. In Proceedings of the Association for Computational Linguistics.
2019
489
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 516–526 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 516 Decompositional Argument Mining: A General Purpose Approach for Argument Graph Construction Debela Gemechu and Chris Reed Centre for Argument Technology, School of Science & Engineering University of Dundee, Dundee, UK [email protected], [email protected] Abstract This work presents an approach decomposing propositions into four functional components and identify the patterns linking those components to determine argument structure. The entities addressed by a proposition are target concepts and the features selected to make a point about the target concepts are aspects. A line of reasoning is followed by providing evidence for the points made about the target concepts via aspects. Opinions on target concepts and opinions on aspects are used to support or attack the ideas expressed by target concepts and aspects. The relations between aspects, target concepts, opinions on target concepts and aspects are used to infer the argument relations. Propositions are connected iteratively to form a graph structure. The approach is generic in that it is not tuned for a specific corpus and evaluated on three different corpora from the literature: AAEC, AMT, US2016G1tv and achieved an F score of 0.79, 0.77 and 0.64, respectively. 1 Introduction Argument mining is the process of identifying argumentative structure contained within a text. It involves segmenting arguments into elementary discourse units (EDUs), distinguishing argumentative units from non-argumentative units, classifying argument components into classes such as premise and claim, identifying and labeling argument relations between the components, and identifying argument schemes. We are here aimed at mining argument structure from text segmented into EDUs (or, more precisely for argument mining, Argumentative Discourse Units, ADUs (Peldszus and Stede, 2015)). Several argument mining approaches use features identified from individual EDUs and apply classifiers (Moens et al., 2007); others use features that span EDUs and apply dependency parsing (Muller et al., 2012), similarity (Lawrence et al., 2014), linguistic indicators (Villalba and SaintDizier, 2012) and their combinations (Lawrence and Reed, 2015). Recently, a neural end-to-end method for argument mining shows that dependency parsing outperforms an EDU-level classifier (Eger et al., 2017). Stab and Gurevych (2014b) use both EDU-level and cross-EDU features to improve performance. The EDU-spanning features used by these latter approaches include syntactic dependency and lexical overlap between the EDUs. For instance, Eger et al. (2017) applied token level syntactic dependency to learn the relations between EDUs. Even though cross-EDU tokens are used for argument mining, the nature of such tokens is not studied well. Following the same line of reasoning, similarity approaches use EDU level similarity to determine argument structure. Lawrence et al. (2014) use Latent Dirichlet Allocation (LDA) topic modeling; Lawrence and Reed (2015) use WordNet1 Synset hierarchy to determine similarity between propositions. Such approaches start from a conclusion and determine the most related proposition to create hierarchical graph structure based on the assumption that a conclusion is similar to a premise. Similarity, however, does not necessarily entail an argument relation and vice-versa. 
In this work, we aim to detect argument relations (AR) and their category (support vs attack) based on the nature of the relations existing among the functional components of propositions. The functional components of propositions are: target concepts (C), aspects (A), opinions on aspects (OA) and opinions on target concepts (OC). In order to identify ARs and their category, we train classifiers using the relations between the four components. The classifiers provide an output pre1 http://wordnet.princeton.edu/ 517 dicting whether any pair of propositions involve an AR or not, and categorize the AR. To the best of our knowledge there is no approach that decomposes propositions into finegrained components and uses them to determine argument structure. Our Decompositional Argument Mining (DAM) identifies argument structure by exploiting similarity (between C and A) and relations between the polarities of OC and OA. Our first hypothesis is then the AR between EDUs is governed by the relations between their functional components. For instance, the support relation between (2) and (9) from Table 1 is a function of the similarity between C of (9) “cooking; potato; burger” and A of (2) “food” and the agreement between the polarities of their respective opinion expressions (i.e. the opinions “have an opportunity; interesting” and “better” are both positive). Similarly the support relation between (6) and (7) is the function of the similarity between A of (6) “job” and C of (7) “job” and the agreement between the polarities of their respective opinion expressions (i.e. “are losing” and “are fleeing” are both negative). The attack relation between (10) and (11) is the function of the similarity between C of (10) “advertising” and A of (11) “advertising” and the contradiction between the polarities of the opinion on A of (10) “should be prohibited” and the opinion on C of (11) “needs”. Our second hypothesis is that automatic recognition of argument structure can be substantially enhanced by using the relations between the four functional components of propositions as compared to other features like discourse indicators which are rare to find. For instance, none of the propositions presented in the example are linked via discourse indicators, and yet the relations between the four components can be used as a basis for identifying their ARs. The third hypothesis is that fine-grained similarity is more reliable and accurate than EDU level similarity. The similarity between the entirety of propositions is not a good indicator of AR. For instance the similarity between (3) and (8) is 0.737 (as provided by ADW (Pilehvar et al., 2013)) and yet does not involve an AR, but (8) and (1) has a similarity score of 0.45 and involves an AR since there is a strong similarity between the aspect of (1) “family” and target concept of (8) “family”. The contribution of this work is three-fold: (a) a model to identify components linking propositions; (b) directional similarity indicating the direction of AR between propositions; (c) an approach determining the entire argument structure based on just the relations between the four functional components of proposition across three heterogeneous corpora of which two are monological and the other is dialogical (see Section 3). 2 Argument Graph Model A proposition in the Frege’s sense, is decomposed into four functional components: C, A, OC and OA. 
C and A are used to link a premise and a conclusion; the polarity of OC and OA is used to identify the type of relations (inference vs conflict). 2.1 Functional Decomposition of a Proposition and their relations We define the four functional components of a proposition before formalizing the representation of proposition in terms of the components. Examples (4) to (7) in Table 1 are taken from the first US 2016 presidential election television debate corpus (US2016G1tv) (Lawrence and Reed, 2017; Visser et al., 2019) and (1) to (3), (8) to (13) are taken from the Argument Annotated Essay Corpus (AAEC) (Stab and Gurevych, 2014a) to illustrate the components. 2.1.1 Target Concept (C) A proposition makes a point about (at least one) concept: an idea, physical or abstract entity, following (Lima et al, 2010): “Concepts, also known as classes, are used in a broad sense. They can be abstract or concrete, elementary or composite, real or fict[it]ious. In short, a concept can be anything about which something is said, and, therefore, could also be the description of a task, function, action, strategy, reasoning process, etc.” (Lima et al., 2010, p:428). The set of concepts addressed by a proposition are referred to as target concepts, (C). The examples in Table 1 are annotated to show C (segmented with [], and marked by the subscript c and also shown in bold for convenience). (1) and (2) address the target concept (after stemming) “camp”, whilst the targets concepts in (3) are “family” and “camp”. The target concept is analogous to a topic of a propositions and usually presented as a subject of a proposition. Aspects specialize the topic of a proposition by providing specific angle of reasoning. 518 No Example 1 [Camping]c [is a great way]oc to [bring]oa [families]a [together]oa 2 [Campers]c [have an opportunity]oc to try some [interesting]oa [food]a 3 When [families]c go [camping]c, they put the [jobs]a and [sporting events]a [on hold]oa 4 [Housing]c [did collapse]oc 5 [These countries, especially China]c, [are taking]oc [Americans’ jobs]a 6 [We]c, [are losing]oc [our]a [good]oa [jobs]a so many of them 7 [Our jobs]c, [are fleeing]oc [the country]a 8 By putting aside these events, the [family]c [has an opportunity]oc to [bond]oa their [relationships]a 9 [Cooking]c over a fire makes [burgers]c and [potatoes]c [taste better]oc than can be found at [fast]a[food]a[place]a 10 [Advertising]c [alcohol]a, [cigarettes]a, [goods]a and [services]a with [adult content]a [should be prohibited]oc 11 [Modern society]c [needs]oc [advertising]a 12 [Ads]c will [keep] us [well informed]oc about [new]oa [products]a and [services]a 13 [advertising]c [cigarettes]a and [alcohol]a [will definitely affect]oc our children [in negative way]oc Table 1: Examples to illustrate the four functional components of a proposition: C, A, OC and OA. (In the online version, positive and negative polarity is indicated in blue and red, respectively). 2.1.2 Aspect (A) Often, a specific angle of reasoning is selected to make a point about C. The concepts providing such angles of reasoning are denoted as aspects (A). For instance, (1) and (2) address the target concept “camp” with respect to the aspects “family” and “food” (in bold) respectively. Similarly, the aspects of (3) are “job, sporting event”. The difference between C and A is not an ontological distinction, it is rather the syntactic and semantic role they play in the respective propositions. An aspect in one proposition can be a target concept in another (see (1) and (3)). 
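The decomposition illustrated in Table 1 maps naturally onto a small data structure, and the sketch below is one possible encoding: each proposition holds its target concepts, each paired with an optional opinion and a list of (aspect, opinion-on-aspect) pairs, mirroring the set-of-tuples formalization given later in this section. The type aliases, field names and lemmatized forms are ours.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

AspectEntry = Tuple[str, Optional[str]]                      # (A, OA)
ConceptEntry = Tuple[str, Optional[str], List[AspectEntry]]  # (C, OC, aspects)

@dataclass
class Proposition:
    text: str
    components: List[ConceptEntry] = field(default_factory=list)

# Example (2) from Table 1, re-encoded (lemmatized forms are ours):
p2 = Proposition(
    text="Campers have an opportunity to try some interesting food",
    components=[("camper", "have an opportunity", [("food", "interesting")])],
)
```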
2.1.3 Opinion on Target Concept (OC) OC is an opinion expressed on C to express positive or negative attitudes. The opinionated words in a proposition are usually ambiguous and do not fall into the conventional opinionated words category. For instance, in (5), the opinion “are taking”, which is expressed on the target concept “country, china”, does not fall into the conventional opinionated word category. 2.1.4 Opinion on Aspect (OA) OA is an opinion expressed on an A to provide positive or negative attitudes. For instance, in (2) the opinion “interesting” is expressed on the aspect “food”. Since we have defined the four components of a proposition, we can now formalise the representation of a proposition in terms of the components. Hence, a proposition, p, can be represented as a set of tuples, P = {⟨C0, oC0, {⟨A0, oA0⟩, · · · , ⟨Ai, oAi⟩}⟩, ⟨C1, oC1, {⟨A1, oA1⟩, ..., ⟨Aj, oAj⟩}⟩, · · · ⟨Cn, oCn, {⟨Aj, oAj⟩, · · · , ⟨Ak, oAk⟩}⟩} (1) Where, Ci, Ai, oCi, oAi represents C, A, OC and OA, respectively. 2.1.5 The Relations Between the Four Functional Components The relations between the four components fall into two categories: similarity and agreement. The relation between C and A is similarity whereas agreement (or contradiction) between OC and OA. The relations between C and A are further categorized into four: (a) similarity between C of a premise and a conclusion, (b) similarity between A of a premise and a conclusion, (c) similarity between A of a premise and C of a conclusion, and (d) similarity between A of a conclusion and C of a premise. The relations between OC and OA are also categorized into four: (a) the agreement between OC of a conclusion and a premise, (b) the agreement between OA of a premise and a conclusion, (c) the agreement between OC of a conclusion and OA of a premise, and (d) the agreement between OA of a conclusion and OC of a premise. 2.2 Argument Relation The argument relation (AR) between a premise and a conclusion is a function of the relations between the four components. A classifier is trained on the relations between the four components to identify the patterns encoded by the type of AR: 519 Inference relations: A pair of propositions involving support relation. Conflict relations: A pair of propositions involving attack relation. To mention, when a premise develops one or more aspects of a conclusion, the aspects of a conclusion form C of a premise (i.e are highly similar). For instance, (8) supports (1) in relation to the aspect “family”; (9) supports (2) in relation to the aspect “food”. The relation between OC and OA is identified through matching the polarity of the opinions. For instance the polarity of the opinions on (1 and 8) matches (both are positive), since the propositions involve support relation. Similarly, the attack relation between (10) and (11) is indicated by the similarity between C of (10) and A of (11) and the contradiction between the polarities of the opinions on OC of (10) and OC of (11). Accordingly, the AR between propositions is defined by, AR =      S if rel(C, A, OC, OA) = θ AT if rel(C, A, OC, OA) = β N otherwise (2) where, S stands for support, AT for attack and N for none, while θ, β representing the result of a classifier (θ for support and β for attack). A graph structure is formed to represent an argument by linking proposition whose components are related via the valid relations encoded by AR. Propositions and the relations between them are nodes, the connections between the nodes form the edges. 
Figure 1 shows an argument structure for a portion of propositions in Table 1, where (11) is attacking (10), (12) is supporting (11), (13) is attacking (11), and (13) is supporting (10) based on the similarity between C and A and the agreement between the polarities of the opinion expressions on C and A. 3 Methodology In this section, we present the data-sets and the major components of our approach. 3.1 Data We aim to cover varieties of data-sets (though not comprehensive), annotated based on the underlying set of argumentation theory to see how our approach behaves across heterogeneous data-sets without tuning to a specific data-set. we use three Figure 1: Argument structure for propositions (10), (11), (12), and (13) from Table 1 corpora, with different types of source material (monologue, dialogue), different creation rubrics (naturally occurring, created under direction), different argument structure conventions (recursive, limited), different notions of inference (typed, untyped) and different notions of conflict (rebut-only, rebut and undercut). The first is Argument Annotated Essay Corpus (AAEC) (Stab and Gurevych, 2014a) which has a total of 90 arguments. Propositions under each argument are labelled as premise, claim or major claim. The corpus has 31,194 tokens, 1,552 propositions and 1214 Argument relations (AR). The second corpus is the Argumentative Micro Text (AMT) (Peldszus and Stede, 2013) which is a collection of 112 short texts collected from human subjects in German and were translated into English. It is annotated following the argumentation structure outlined by Peldszus and Stede (2013) and attain high inter-annotator agreement score. The structure consists of a central claim, and support/attack propositions. It has a total of 8,007 tokens, 576 propositions and 272 argument relations. We have also used dialogical corpus from the first US 2016 presidential election television debate between the candidates Clinton and Trump (US2016G1tv) (Lawrence and Reed, 2017; Visser et al., 2019) which is annotated based on AIF (Chesnevar et al., 2006) using the OVA+ annotation tool (Janier et al., 2014)2 and stored in the AIFdb database (Lawrence et al., 2015). The corpus has a total of 15,805 tokens, 1,473 propositions and 505 inferences. In addition to the original annotation, we anno2http://ova.arg-tech.org 520 tate C, A, OC and OA. We obtain the total of 3,455, 4,113, 4,359, and 2,987 C, A, OC and OA respectively. For the corpus evaluation, a second annotator analysed 10% of claim-premise pairs form the combined corpora. To this end, we combine the three corpora and randomly select 10% of claimpremise pairs and provide it to the second annotator after removing the annotation labels of the first annotation. The annotation of the four components is compared against the original annotation to calculate the inter-annotators agreement. This gave a Cohen’s kappa score κ = 0.86, κ = 0.82, κ = 0.81, and κ = 0.80 on C, A, OC and OA, respectively. The annotation of the second annotator is discarded after calculating the Cohen’s kappa score. The description of the annotation process and guideline is available online 3. 3.2 Identifying Argument Structure Our approach involves a pipeline of four steps: Given segmented argumentative text, the first step identifies C, A, OC and OA. The similarity component determines the degree of similarity between C and A. The next step identifies the polarity of the opinions to determine if they contradict or agree. 
The last component uses the similarity between C and A, and the relation between OC and OA (contradiction or agreement) to link propositions and iteratively construct a graph. The details are provided below. 3.2.1 Identifying Aspects, Target Concepts and Opinions We formulate the task in two ways: relation extraction task adapted from information extraction, and a sequence labeling task adapted from aspect based opinion mining. C, A, OC and OA identification as a relation extraction task. We model it as a relation extraction task since C, A, OC and OA are syntactically interdependent. Relation extraction has been studied extensively in natural language processing using supervised methods (Kambhatla, 2004; Zhao and Grishman, 2005) and semi-supervised methods (Etzioni et al., 2005; Banko et al., 2007). Supervised methods use classification techniques: Maximum Entropy Models (Borthwick et al., 1998), Hidden Markov Models (Bikei et al., 1997), Support Vector Machines 3http://arg.tech/˜debela/Guidelines. pdf (Asahara and Matsumoto, 2003), and Conditional Random Fields (McCallum and Li, 2003). Following the same line of reasoning, we train four classifiers (Naive Bayes, CRF, bag of features based SVM, and tree kernel based SVM) to classify the words in a proposition as C, A, OC or OA. The first three classifiers use frequency, part of speech category and universal dependency as classification features. The tree kernel SVM is trained using the portion of the dependency tree connecting the four components as positive examples and the rest as negative examples. C, A, OC and OA identification as a sequence labeling task. The sequence labeling model is adapted from aspect based opinion mining. Aspect based opinion mining identifies opinions expressed on a target object and specific aspects of the object (Zhang and Liu, 2014). Taking the analogy of target object:aspects in opinionated text to C:A:OC:OA in argumentative text, we apply similar techniques for identifying C, A, OC and OA. The underlying idea behind the model is that C, A, OC and OA are interdependent and occur in a sequence in a sentence. The model is based on the Inside-Outside-Begin (IOB) labelling schema (Ramshaw and Marcus, 1999). Accordingly, we use the IOB labeling schema where, B-Concept denotes the beginning of a concept; I-Concept, denotes that the token is inside the concept, and O for other (non C, A, OC or OA) tokens. Hidden Markov Models (HMM) (Jin et al., 2009), Conditional Random Fields (CRF) (Sminchisescu et al., 2006) and recently, convolutional neural networks (CNN) (Poria et al., 2016) are common techniques employed. The assumption that an observation only depends on the current state and that a given state depends on its immediate predecessor state made HMM approaches less applicable for relations involving long distance dependencies. CRF is also a linear model and suffers from the same criticism as HMM. CNN on the other hand can encode long distance relations existing between concepts. As a result, we use CNN to train the model since C, A, OC and OA can appear a long way away from each other. 3.2.2 Identifying the contradiction between opinions Our aim here is to compare the polarities between OC and OA to check if they match or contradict. The opinionated words in our case are context dependent (“are taking our jobs” vs “are taking our 521 presents”) and often the contexts are fine-grained (see example 5). We aim to disambiguate the sentiment orientation of the words via identifying constrained synonyms (CS). 
Constrained synonyms are subset of synonyms expressing similar sense to the current opinion word in a fine-grained context. For instance, among the synonyms of “taking” in (5), we are interested to identify synonyms like “robbing” and “stealing”, constrained by a given context like “China, Americans jobs”. Our hypothesis is then the use of such CS expressing a similar opinion can improve the estimation of the polarity of ambiguous opinion words by aggregating the information coming from multiple words expressing a similar opinion to the current opinion. In order to identify the CS, we enhance word embedding to enforce the encoding of finegrained contexts. Our Context Sensitive Polarity Prediction (CSPP) technique consists of two main components: identifying CS and predicting polarity using the CS. To identify the CS, we extend CBOW based Word2Vec (Tomas et al., 2013) (see Equation 3). Accordingly, given a fine-grained context, the extended CBOW predict CS for an opinion word in the context. We use C and A as a fine-grained context of the opinion and encode them in the representation of words. The embedding is extended by introducing an additional output layer (called the constrained context, CC, output layer) to update the embedding based on the fine-grained contexts. The two output layers are connected to the previous layer in the network and the cost function is the loss of the first plus the second output. Given a sequence of words W={w1, ..., wN}, the Constrained Embedding (CE) objective function is defined by the formula in Equation 4. CBOW(W) = 1 N N X i=1 log P(wi | gcwi) (3) CE(W) = 1 N N X i=1 log P(wi | gcwi) + log P(wi | ccwi) (4) where d is the number of fine-grained context which is equivalent to the number of target concepts and aspects; gcwi indicates the global contexts identified by taking d/2 words to the left and right of wi (d/2 words to the left and right of the current word is taken to equalize the number of global context with the number of fine-grained context); ccwi is given by the aggregation of finegrained and global context (gcwi) using Equation 5. Given an input sequence wi, wi+1, ...wn, and fine-grained context cj, cj+1, ...cd, the function which aggregates both contexts to produce (ccwi) for the current word wi is given by: ccwi = [ ⃗ew T i−d/2([⃗ecT j , ⃗ecT j+1, ⃗ecT j+2..., ⃗ecT d ]), ... ⃗ew T i−1([⃗ecT j , ⃗ecT j+1, ⃗ecT j+2..., ⃗ecT d ]), ⃗ewT i+1([⃗ecT j , ⃗ecT j+1, ⃗ecT j+2..., ⃗ecT d ]), ..., ⃗ewT i+d/2([⃗ecT j , ⃗ecT j+1, ⃗ecT j+2..., ⃗ecT d ])] (5) where, ⃗ew T i−d/2, ..., ⃗ew T i−1, ⃗ewT i+1, ..., ⃗ewT i+d/2 are the transpose of pre-trained vectors of the global contexts of the current word wi and ⃗ecT j , ⃗ecT j+1, ⃗ecT j+2, ..., ⃗ecT d are the transpose of pretrained vectors of the d sized fine-grained contexts. Once the CS are identified for the current opinion word using the extended word embedding, we train a classifier to categorize the polarity, given a classification feature including the initial list of opinion words generated by Hu and Liu (Hu and Liu, 2004), the current opinion word, the CS and paragraphs containing the opinion words and the CS. 3.2.3 Computing Similarity Similarity between C and A is used to connect propositions. In addition to aspect based, we have tried proposition level similarity for comparison: 1. Proposition level similarity. Computes similarity between the entirety of propositions. 2. Aspect Based Similarity. Computes the similarity between aspects and target concepts. 
We used two state of the art similarity approaches allowing to measure the similarity between any text fragment at various linguistic levels: Align Disambiguate Walk (ADW) (Pilehvar et al., 2013) and Doc2vec (Le and Mikolov, 2014). ADW is a graph-based approach for measuring the semantic similarity of linguistic items at various levels (word senses, texts). To measure the similarity between words, ADW starts by disambiguating them using the context in which the words are used based on their WordNet representation. Doc2vec (Le and Mikolov, 2014) is an enhanced version of word2vec (Mikolov et al., 2013) that 522 allows for computing similarity between phrases, sentences, paragraphs or documents. 3.2.4 Identify Argument Relations and Category A classifier is trained to learn the relations between the four components in order to link propositions. The classification features are: the similarity between C and A; the relation between OC and OA. To facilitate the training, we convert the continues similarity values (which ranges from 0.0 to 1.0) to a discrete value by tuning a threshold α on a development set to categorize them into two: unrelated or similar. Likewise, the relation between OC and OA holds discreet values: agreement, disagreement or neutral. 3.2.5 Iterative Graph Construction Given a set of propositions, we build a structure consisting the valid ARs holding between the propositions. Propositions and ARs are nodes and the links between them form edges. We start with any arbitrary proposition Pi and then identify the associated functional components. The similarity between C and A of Pi and all the other propositions (Pi+1...n); the agreement between OC and OA of Pi and all the other remaining propositions (Pi+1...n) are identified. A classifier is then used to identify the AR between the propositions based on the relations between their components. Accordingly, a proposition whose functional components are related with the functional components of Pi is connected to Pi to form an edge (Pi+1 →Pi). Once all the child nodes (all the premises) are connected, the proposition is marked as visited. Continuing with the next unvisited proposition, the same procedure is applied until all the propositions in the entire argument are visited. 4 Experiments Four machine learning approaches are trained to detect C, A, OC and OA. Two similarity approaches are tried to identify similarity between C and A. CSPP is tried to identify the polarity of OC and OA. Our DAM combines the best performing component identifier, similarity and the CSPP to train a classifier in order to identify AR existing between proposition. The implementation of our approach is available online 4. It takes argumenta4http://ws.arg.tech/ tive text as an input and returns the argument structure using AIF-JSON (Chesnevar et al., 2006) format. 4.1 Evaluation technique and setup We use ten-fold cross-validation, where the dataset is randomly divided into ten groups. Arguments are randomly split into 80% training and 20% test sets with the same class distribution. To balance the class distribution (composition of premise, conclusion, attack relation, and support relation), we follow the unitization in the respective corpus. For instance, AAEC is originally presented as 90 self contained essays consisting of conclusions, premises and the associated argument relations. Hence, we consider an argument as a unit to take all the constituted elements at a time. 
We report average precision, recall and F-measure computed by ten-fold cross-validation over these units. 4.2 Results and Discussions We present the results of the individual components separately: C, A, OC and OA extraction. The four classifiers are evaluated on the three corpora as presented in Table 2. We use the class distribution of the components as a baseline. We divide the number of C and A by the total number of concepts (C and A) to obtain the class distribution for C and A. The same procedure is followed for the opinions (OC and OA). The sequential labeling approach out-performed all the classifiers and the baseline across the corpora. The syntactic dependency existing between C, A, OC and OA, regardless of the distance existing between them, is recognized by the CNN more reliably than the other classifiers. The kernel-based SVM outperformed the feature based SVM which is again attributed to its ability of encoding the syntactic dependency linking the target concepts and the aspects. CSPP. We use SemEval data-sets (Rosenthal et al., 2017) to evaluate CSPP. We compare the result against an implementation using conventional word embedding as a baseline. CSPP achieves an overall F-measure of 0.79 while the baseline achieves 0.71. The strength of CSPP is founded on its use of multiple words expressing similar senses as the current opinion (in similar context) to gather several instances of the current ambiguous words to increases the chance of prediction. 523 Data-Sets AAEC AMT US2016G1tv Approaches C A OC OA C A OC OA C A OC OA Baseline 0.45 0.55 0.57 0.43 0.48 0.52 0.6 0.4 0.43 0.57 0.61 0.39 SVM-kernel 0.82 0.71 0.81 0.62 0.78 0.65 0.69 0.65 0.77 0.69 0.69 0.66 SVM-feature 0.81 0.70 0.81 0.65 0.75 0.68 0.67 0.66 0.76 0.69 0.67 0.66 CNN-Sequence 0.83 0.72 0.82 0.7 0.77 0.69 0.7 0.67 0.78 0.71 0.68 0.67 CRF 0.80 0.69 0.72 0.65 0.78 0.67 0.66 0.69 0.76 0.67 0.67 0.67 Naive Bayes 0.79 0.69 0.76 0.66 0.75 0.62 0.62 0.62 0.75 0.66 0.65 0.64 Table 2: The performance (F-measure) of C, A, OC and OA extraction on AAEC, AMT and US2016G1tv corpus Approaches S&G2014b P&S2016 PLS DAM Data-Sets Components P R F P R F P R F P R F AAEC Para Propositions 0.77 0.68 0.73 n/a n/a 0.81 0.77 0.79 AR 0.74 0.71 0.72 0.62 0.67 0.64 0.82 0.76 0.79 ARC 0.74 0.71 0.72 n/a 0.81 0.74 0.77 AAEC Essay Propositions n/a n/a n/a 0.76 0.73 0.74 AR 0.58 0.7 0.63 0.73 0.75 0.74 ARC n/a 0.73 0.74 0.74 AMT Propositions n/a n/a n/a 0.9 0.67 0.77 AR n/a n/a 0.76 0.61 0.64 0.62 0.91 0.66 0.77 ARC n/a n/a 0.88 0.66 0.75 US2016G1tv Propositions n/a n/a n/a 0.66 0.62 0.64 Inference 0.51 0.62 0.56 0.65 0.63 0.64 ARC n/a 0.63 0.61 0.62 Table 3: The performance of Stab and Gurevyech’s technique (2014b) (SG2014b), Peldszus and Stede’s technique (2016) (PS2016), PLS and DAM in extracting the components of an argument, AR and the category of AR (ARC) (inference vs conflict) on AAEC (paragraph and essay level), AMT and US2016G1tv. AR identification. The performance of our approach in identifying premises, conclusions, AR and the category of AR (inference vs conflict) is presented in Table 3. Since the AR between a premise and conclusion depends on the similarity between the C and A, we tune the value of α to 0.4 on a development set (similar components have a similarity measure greater than 0.4). Following the evaluation strategy of Stab and Gurevych (2014b), we first evaluate our approach on AAEC at paragraph and essay levels where we achieve F measures of 0.79 and 0.74, respectively. 
We have also achieved an F measure of 0.77 on the AMT corpus and 0.64 on US2016G1tv corpus. The performance of our approach tends to confirm our initial hypothesis: the AR between propositions is indeed governed by the relation between their functional components. The performance varies across the three corpora with the lowest performance observed on the US2016G1tv corpus. We have inspected the three corpora to identify the possible factors and identified three issues: (a) similarity is dependent on the information presented in the propositions alone, yet US2016G1tv is particularly demanding in that understanding many of the utterances depends upon (external) context in addition to what is present in the discourse; (b) since US2016G1tv corpus is dialogical, unlike the others, it includes the speakers’ text in the construction of propositions and hence their representation is more complex than the monological corpora. The complex representations of propositions make the formalization and the extraction of target concepts and aspects difficult; (c) the AMT corpus has a high proportion of co-reference to represent C and A resulting in poor similarity, since the similarity between a word and its co-reference is low. 4.3 Error Analysis Two major error types are observed. The first is related to propagation of the errors encountered during C and A extraction to the similarity identifier and AR identifier affecting the overall performance. Specifically, when a word is incorrectly identified as part of C or A, their similarity measure is affected and then the decision about the AR. The second error type is related to the similarity module which provides incorrect result for certain words. For instance, ADW provides comparable similarity values between “food” and “meal”, 524 and between “food” and “family”. Yet the first pair is more closely related as compared to the later. Moreover, propositions involving two or more categories of aspects (where each category is supported or attacked by different propositions) present a challenge, since it requires grouping of the aspects and consider each group as a unit to compute similarity. 4.4 Comparison Systems We have compared our approach against the leading techniques in the field including Stab and Gurevych’s work (2014b), Peldszus and Stede’s (2016) work, and proposition level similarity. We re-implement proposition level similarity and use the results reported by the authors for the remaining approaches. Stab and Gurevych (2014b) propose a classifier which identify argument components and AR category using a multiclass classification on (AAEC) (Stab and Gurevych, 2014a). Instead of considering the entirety of essay, they connect propositions within the same paragraph. They use Weka implementation of four different classifiers: SVM, Naive Bayes, C4.5 Decision Tree and Random Forest (Hall et al., 2009). SVM scored the best result with an overall accuracy of 0.73 and 0.72 in identifying argument components and AR respectively on AAEC (Stab and Gurevych, 2014a) at paragraph level. Peldszus and Stede (2016) aim to map RST trees to argumentation structures (Taboada and Mann, 2006) using subgraph matching and an evidence graph model. They evaluate several features of their system on AMT (Peldszus and Stede, 2013). We are concerned with one of the features in order to make direct comparison: identifying if two EDUs are connected on which they achieve an overall F-measure of 0.76. 
Most related to our work is an approach using proposition level similarity (PLS) as an integral component to determine argument structure (Lawrence and Reed, 2015). They use similarity to indicate the AR existing between EDUs and supplement other features to identify the entire argument structure. Since the similarity component alone can not induce the direction of the relation between the EDUs, we compared its performance in terms of detecting the existence of AR between EDUs. PLS provides a challenge to identify among different relations, since a pair of propositions in a given argument can score strong similarity without involving AR. PLS does not identify the direction of relation (claim vs premise) and hence these values are listed as n/a in Tables 3. We also use n/a to indicate that the evaluation result for the respective evaluation criteria (identifying premise, conclusion and AR) is not available for the comparison approaches. Table 3 shows the performance of DAM, PLS, Stab and Gurevych’s approach (2014b), and Peldszus and Stede’s (2016) approach on the three data-sets. DAM outperformed all the approaches across the three corpora achieving the highest precision, recall and F-measure. The decrease in recall on AMT is attributed to the fact that coreferences are productive in the corpus affecting similarity output, since similarity techniques are dependent on the lexicon choice (i.e the similarity between a word and its co-reference is low). 5 Conclusion In this work, we have presented an approach for linking premises and conclusions that uses the similarity of target concepts and aspects, and the agreement between the opinions on target concepts and aspects of EDUs. We have demonstrated that the argument relations existing between propositions are largely dependent on the relations existing between the individual components (target concepts, aspects, opinions on target concepts and opinions on aspects) of the propositions. It would also be nice to explore about more fine-grained functional components and grammatical entities in the future works. Not only does our DAM approach outperform the current state of the art, most importantly, it is shown to work without modification across heterogeneous corpora (AAEC, AMT and US2016G1tv) which are substantially different in kind. This generality is an important milestone in the development of argument mining techniques and suggests that a combination of structural and distributional techniques, as employed here, offers the potential for robust, domain-independent performance in this extremely demanding task. Acknowledgments This research was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) in the United Kingdom under grant EP/N014871/1. 525 References Masayuki Asahara and Yuji Matsumoto. 2003. Japanese named entity extraction with redundant morphological analysis. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 8–15. Association for Computational Linguistics. Michele Banko, Michael J Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI, pages 2670–2676. Daniel M. Bikei, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a highperformance learning name-finder. In Proceedings of the fifth conference on Applied natural language processing, pages 194–201. Association for Computational Linguistics. 
Andrew Borthwick, John Sterling, Eugene Agichtein, and Ralph Grishman. 1998. Exploiting diverse knowledge sources via maximum entropy in named entity recognition. In Sixth Workshop on Very Large Corpora, pages 152–160. Carlos Chesnevar, Jarred McGinnis, Sanjay Modgil, Iyad Rahwan, Chris Reed, Guillermo Simari, Matthew South, Gerard Vreeswijk, and Steven Willmott. 2006. Towards an argument interchange format. The knowledge engineering review, 21(4):293– 316. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 11–22. Association for Computational Linguistics. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial intelligence, 165(1):91–134. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2004). Mathilde Janier, John Lawrence, and Chris Reed. 2014. OVA+: an argument analysis interface. In Computational Models of Argument: Proceedings of COMMA, volume 266, page 463. Wei Jin, Hung Hay Ho, and Rohini K. Srihari. 2009. Opinionminer: A novel machine learn-ing system for web opinion mining and extraction. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1195–1204. ACM. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 22. Association for Computational Linguistics. John Lawrence, Mathilde Janier, and Chris Reed. 2015. Working with open argument corpora. In European Conference on Argumentation, pages 367–380. John Lawrence and Chris Reed. 2015. Combining argument mining techniques. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 127–136. John Lawrence and Chris Reed. 2017. Using complex argumentative interactions to reconstruct the argumentative structure of large-scale debates. In Proceedings of the 4th International ACL/EMNLP Workshop on Argument Mining, pages 108–117. John Lawrence, Chris Reed, Colin Allen, Simon McAlister, and Andrew Ravenscroft. 2014. Mining arguments from 19th century philosophical texts using topic based modelling. In Proceedings of the First Workshop on Argumentation Mining, pages 79–87. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196. Joselice Ferreira Lima, Clia M. Gomes Amaral, and Lus Fernando R. Molinaro. 2010. Alternation. CENTERIS, 2(11):426–435. Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL, pages 188–191. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Marie-Francine Moens, Eric Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. 
In Proceedings of the 11th international conference on Artificial intelligence and law, pages 225–230. ACM. Philippe Muller, Stergos D. Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained decoding for text-level discourse parsing. Proceedings of COLING 2012, pages 1883–1900. 526 Andreas Peldszus and Manfred Stede. 2013. Ranking the annotators: An agreement study on argumentation structure. In Proceedings of the 7th linguistic annotation workshop and interoperability with discourse, pages 196–204. Andreas Peldszus and Manfred Stede. 2015. Joint prediction in mst-style discourse parsing for argumentation mining. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 938–948. Andreas Peldszus and Manfred Stede. 2016. Rhetorical structure and argumentation structure in monologue text. In Proceedings of the Third Workshop on Argument Mining, pages 103–112. Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, disambiguate andwalk: A unified approach for measuring semantic similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1341–1351. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems, 108:42–49. Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large corpora, pages 157–176. Springer. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. Semeval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation,SemEval ’17,Association for Computational Linguistics. Cristian Sminchisescu, Atul Kanaujia, and Dimitris Metaxas. 2006. Conditional models for contextual human motion recognition. Computer Vision and Image Understanding, 104(2-3):210–220. Christian Stab and Iryna Gurevych. 2014a. Annotating argument components and relations in persuasive essays. pages 1501–1510. Christian Stab and Iryna Gurevych. 2014b. Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 46–56. Maite Taboada and William Mann. 2006. Rhetorical structure theory: Looking back and moving ahead. Discourse studies, 8(3):423–459. Mikolov Tomas, Chen Kai, Corrado Greg, and Dean Jeffrey. 2013. Efficient estimation of word representations in vector space. In arXiv preprint arXiv, pages 1301–3781. Maria P.G. Villalba and Patrick Saint-Dizier. 2012. Some facets of argument mining for opinion analysis. COMMA, 245:23–34. Jacky Visser, Barbara Konat, Rory Duthie, Marcin Koszowy, Katarzyna Budzynska, and Chris Reed. 2019. Argumentation in the 2016 US presidential elections: annotated corpora of television debates and social media reaction. Language Resources and Evaluation. Lei Zhang and Bing Liu. 2014. Aspect and entity extraction for opinion mining. In Data mining and knowledge discovery for big data, pages 1–40. Springer. Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 419–426. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4963–4974 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4963 Multilingual and Cross-Lingual Graded Lexical Entailment Ivan Vuli´c1, Simone Paolo Ponzetto2, Goran Glavaš2 1 PolyAI Ltd., London, United Kingdom 2 Data and Web Science Group, University of Mannheim, Germany [email protected] {simone, goran}@informatik.uni-mannheim.de Abstract Grounded in cognitive linguistics, graded lexical entailment (GR-LE) is concerned with finegrained assertions regarding the directional hierarchical relationships between concepts on a continuous scale. In this paper, we present the first work on cross-lingual generalisation of GR-LE relation. Starting from HyperLex, the only available GR-LE dataset in English, we construct new monolingual GR-LE datasets for three other languages, and combine those to create a set of six cross-lingual GR-LE datasets termed CL-HYPERLEX. We next present a novel method dubbed CLEAR (Cross-Lingual Lexical Entailment AttractRepel) for effectively capturing graded (and binary) LE, both monolingually in different languages as well as across languages (i.e., on CLHYPERLEX). Coupled with a bilingual dictionary, CLEAR leverages taxonomic LE knowledge in a resource-rich language (e.g., English) and propagates it to other languages. Supported by cross-lingual LE transfer, CLEAR sets competitive baseline performance on three new monolingual GR-LE datasets and six cross-lingual GR-LE datasets. In addition, we show that CLEAR outperforms current state-ofthe-art on binary cross-lingual LE detection by a wide margin for diverse language pairs. 1 Introduction Word-level lexical entailment (LE), also known as the TYPE-OF or hyponymy-hypernymy relation, is a fundamental asymmetric lexical relation (Collins and Quillian, 1972; Beckwith et al., 1991). It is a key principle behind the hierarchical structure found in semantic networks such as WordNet (Fellbaum, 1998) or ConceptNet (Speer et al., 2017). As opposed to simpler discrete and binary LE detection (e.g., oregano is a TYPE-OF food), graded lexical entailment (GR-LE) measures the strength of the LE relation between two concepts on a continuous scale (Vuli´c et al., 2017; Rei et al., 2018). GR-LE is concerned with fine-grained directional assertions of hierarchical arrangements between concepts. The notion of graded LE is rooted in theories of concept (proto)typicality and category vagueness from cognitive science (Rosch, 1973, 1975; Kamp and Partee, 1995). 
Instead of answering the simpler (discrete) question “Is X a type of Y?”, as in standard LE detection tasks (Kotlerman et al., 2010; Turney and Mohammad, 2015), GR-LE aims at answering the following question: “To what degree is X a type of Y?” The concept of LE gradience is also empirically confirmed by human judgements elicited for HyperLex (Vuli´c et al., 2017), a GR-LE resource in English.1 Furthermore, while simpler binary LE detection has been predominantly studied in monolingual settings only (Geffet and Dagan, 2005; Weeds et al., 2014; Santus et al., 2014; Kiela et al., 2015; Shwartz et al., 2016, 2017; Glavaš and Ponzetto, 2017; Roller et al., 2018, inter alia), more general reasoning over cross-lingual and multilingual LE relationships can improve language understanding in multilingual contexts, e.g., in cases when translations are ambiguous or not equivalent to the source concept (Vyas and Carpuat, 2016; Upadhyay et al., 2018).2 The ability to reason over cross-lingual LE is pivotal for a variety of cross-lingual tasks such as recognising cross-lingual textual entailment (Negri et al., 2012, 2013; Conneau et al., 2018b), constructing multilingual taxonomies (Ehrmann et al., 2014; Fu et al., 2014), cross-lingual event coreference (Song et al., 2018), machine translation in1For instance, the strength of LE association hamburger →food is on average judged by humans with 5.85/60. In comparison, oregano is seen as a less typical instance of the category/concept food, with the pair’s average rating of 3.58/6.0. In contrast, the pair food →pie receives the average rating of only 0.92/6, which confirms the inherent asymmetry of the GR-LE relation. 2For instance, translating the Italian word calcio to calcium prevents identifying sport as a hypernym of calcio. 4964 x y z en beagle es perro es mam´ıfero en animal es organismo en roadster es coche en vehicle es transporte Figure 1: A toy example of Euclidean shared crosslingual word vector space specialised for the asymmetric LE relation. The symmetric similarity of true LE pairs, irrespective of their actual language (the example shows English and Spanish words with the respective prefixes en_ and es_) is reflected by their small cosine distances (e.g., the small angle between −−−−−−→ en_beagle and −−−−−−→ es_perro and −−−−−−−→ en_animal), while simultaneously higher-level concepts are assigned larger norms to enforce the LE arrangement in the vector space. An asymmetric distance that takes into account the vector direction as well as the vector magnitude can be used to grade the LE relation strength between any two concepts in the shared cross-lingual vector space. terpretability (Padó et al., 2009), and cross-lingual lexical substitution (Mihalcea et al., 2010). In this work, we introduce the first set of benchmarks and methods that target cross-lingual and multilingual graded lexical entailment. We make several important contributions related to GR-LE in multilingual settings. First, we extend the research on GR-LE beyond English (Vuli´c et al., 2017; Rei et al., 2018) and provide new human-annotated GR-LE datasets in three other languages: German, Italian, and Croatian. Second, following an established methodology for constructing evaluation datasets for cross-lingual lexico-semantic relations (Camacho-Collados et al., 2015, 2017), we automatically derive a collection of six cross-lingual GR-LE datasets: CL-HYPERLEX. 
We analyse in detail the cross-lingual datasets (e.g., by comparing the scores to human-elicited ratings), demonstrating their robustness and reliability. In order to provide a competitive baseline on new monolingual and cross-lingual datasets, we next introduce a cross-lingual specialisation/retrofitting method termed CLEAR (Cross-Lingual Lexical Entailment Attract-Repel): starting from any two monolingual distributional spaces, CLEAR induces a bilingual cross-lingual space that reflects the asymmetric nature of the LE relation. Such a crosslingual LE-specialised space is illustrated in Figure 1. CLEAR is an extension of the monolingual LEAR specialisation method (Vuli´c and Mrkši´c, 2018). The key idea of CLEAR is to leverage external lexical knowledge (i.e., information on word relations from WordNet, BabelNet, or ConceptNet) to rescale vector norms which reflect the concept hierarchy, while simultaneously pushing (i.e., “attracting”) desirable word pairs closer (by vector direction) to reflect their semantic similarity in the cross-lingual LE-specialised space. Crucially, as shown later in Figure 3, CLEAR relies on a curated semantic resource only in the resource-rich source language (e.g., English): coupled with a bilingual dictionary it propagates the LE knowledge to the target (resource-poor) language and constructs a shared cross-lingual LE-specialised space. This cross-lingual LE-specialised space, depicted in Figure 1 and empirically validated in §4, is then used to reason over GR-LE in the target language, and for making cross-lingual GR-LE assertions. Our experiments demonstrate that CLEAR is a strong benchmark on all GR-LE datasets. It can effectively transfer LE knowledge to a spectrum of target languages. What is more, through multilingual training via a resource-rich pivot language (e.g., English) CLEAR supports cross-lingual GRLE for language pairs without any semantic resources. Finally, we report state-of-the-art scores in the ungraded (i.e., binary) cross-lingual LE detection for three diverse language pairs on standard evaluation sets (Upadhyay et al., 2018). Annotation guidelines and created datasets for all languages and language pairs are available online at: https://github.com/ivulic/ xling-grle/, and as the supplemental material. We also make available the code and CLEARspecialised vector spaces. 2 Graded LE Evaluation Datasets Graded lexical entailment is an asymmetric relation formulated by the intuitive question “To what degree is X a type of Y?”: it comprises two distinct phenomena studied in cognitive science (Hampton, 2007). First, it captures the measure of typicality in graded cognitive categorisation (Rosch, 1975; Medin et al., 1984): some instances of a category are more central than others (e.g., basketball will 4965 often be cited as a more typical sport than biathlon). Second, it covers the measure of vagueness (also referred to as graded membership): it measures the graded applicability of a concept to different instances.3 Despite the fact that GR-LE should not be bound to any particular surface realisation of concepts (i.e., it is not tied to a particular language), a graded LE repository has so far been created only for English: it is the HyperLex dataset of Vuli´c et al. (2017). Starting from the established data creation protocol for HyperLex, in this work we compile similar HyperLex datasets in three other languages and introduce novel multilingual and cross-lingual GR-LE tasks. Graded LE in English. 
HyperLex (Vuli´c et al., 2017) comprises 2,616 English (EN) word pairs (2,163 noun pairs and 453 verb pairs) annotated for the GR-LE relation. Unlike in symmetric similarity datasets (Hill et al., 2015; Gerz et al., 2016), word order in each pair (X, Y ) is important: this means that pairs (X, Y ) and (Y, X) can obtain drastically different graded LE ratings. The word pairs were first sampled from WordNet to represent a spectrum of different word relations (e.g., hyponymyhypernymy, meronymy, co-hyponymy, synonymy, antonymy, no relation). The ratings in the [0, 6] interval were then collected through crowdsourcing by posing the GR-LE “To what degree...” question to human subjects, with each pair rated by at least 10 raters: the score of 6 indicates strong LE relation between the concepts X and Y (in that order), and 0 indicates absence of the LE relation. The final score was averaged across individual ratings. The final EN HyperLex dataset reveals that gradience effects are indeed present in human annotations: it contains word pairs with ratings distributed across the entire [0, 6] rating interval. What is more, high inter-annotator agreement scores (see Table 3), suggest that even non-expert annotators consistently reason about the degree of LE between words.4 Word Pair Translation. Monolingual HyperLex datasets in three target languages: German (DE), Italian (IT), and Croatian (HR) were constructed by translating word pairs from the EN HyperLex and re-scoring the translated pairs in the target language. The translation approach has been selected 3Following Vuli´c et al. (2017), it is not clear to which extent a washing machine is an instance of the category chair despite the fact that “one can sit on washing machines”. 4For more details on guidelines and creation of EN HyperLex we refer the reader to the original work. because: 1) the original EN HyperLex pairs were already carefully selected through a controlled sampling procedure to ensure a wide coverage of diverse WordNet relations; 2) we want to ensure as comparable datasets as possible across different languages in terms of semantic coverage; 3) the approach has been extensively validated in related work on creating multilingual semantic similarity datasets (Leviant and Reichart, 2015; CamachoCollados et al., 2017). Most importantly, the translation approach allows for the automatic construction of cross-lingual GR-LE datasets. We have followed the standard word pair translation procedure (Leviant and Reichart, 2015; Camacho-Collados et al., 2017). Each EN HyperLex pair was first translated independently by two native speakers of the target language. The translation agreement was in the range of 85%-90% across the three target languages. Translation disagreements were resolved by a third annotator who selected the correct (or better) translation following discussions with both translators. To account for polysemy, each word pair was shown along with its EN HyperLex score, helping annotators to preserve word sense during translation. We allowed for multi-word translations only if there was no appropriate single word translation (e.g., typewriter →macchina da scrivere). Guidelines and Concept Pair Scoring. EN HyperLex annotation guidelines were translated to all three target languages (see the supplementary). The resulting 2,616 concept pairs in each language were annotated using a procedure analogous to that for EN HyperLex: the rating interval was [0, 6], and each word pair was rated by 4 native speakers.5 Cross-Lingual Datasets. 
The cross-lingual CLHYPERLEX datasets were then constructed automatically, leveraging word pair translations and scores in three target languages. To this end, we follow the methodology of Camacho-Collados et al. (2015, 2017), used previously for creating cross-lingual semantic similarity datasets. In short, we first intersect aligned concept pairs (obtained through translation) in two languages: e.g., father-ancestor in English and padre-antenato in Italian are used 5As opposed to (Hill et al., 2015; Gerz et al., 2016; Vuli´c et al., 2017), but similar to (Camacho-Collados et al., 2017; Pilehvar et al., 2018) we did not divide the dataset into smaller tranches; each annotator scored the entire target-language dataset instead. The target languages were selected based on the availability of native speakers; the total number of annotations was restricted by the annotation budget. 4966 Monolingual Datasets EN portrait picture 5.90 DE Idol Person 4.0 DE Motorrad Fahrrad 0.25 IT origano cibo 3.25 HR tenis rekreacija 5.75 Cross-Lingual Datasets (CL-HYPERLEX) EN-DE dinosaur Kreatur 4.75 EN-IT eye viso 0.6 EN-HR religija belief 4.92 DE-IT Medikation trattamento 5.38 DE-HR Form prizma 0.0 IT-HR aritmetica matematika 5.5 Table 1: Example pairs with ratings from monolingual and cross-lingual graded LE datasets. Note that for cross-lingual datasets words from each language can be placed as the first or the second word in the pair. EN DE IT HR EN 2,616 3,029 3,338 3,514 DE – 2,616 3,424 3,522 IT – – 2,616 3,671 HR – – – 2,616 Table 2: The sizes of all monolingual (main diagonal) and cross-lingual graded LE datasets. to create cross-lingual pairs father-antenato and padre-ancestor. The GR-LE scores of cross-lingual pairs are computed as averages of corresponding monolingual scores. Finally, we retain only crosslingual pairs for which the corresponding monolingual scores differ by ≤1.0. This heuristic (Camacho-Collados et al., 2017) mitigates the undesirable inter-language semantic shift. We refer the reader to (Camacho-Collados et al., 2015) for full (technical) description of the procedure. Score Distributions. Table 1 displays example pairs from monolingual and cross-lingual GR-LE datasets, whereas Table 2 lists the total number of pairs for each of them. The constructed datasets are comprehensive and on a par with or larger than semantic similarity benchmarks: SimLex (Hill et al., 2015) contains 999 word pairs; multilingual and cross-lingual datasets of Camacho-Collados et al. (2017) contain < 1, 000 pairs each. The only word similarity dataset comparable in size is SimVerb (Gerz et al., 2016) with 3,500 verb pairs. This dataset magnitude can even support supervised learning (Vuli´c et al., 2017; Rei et al., 2018). We verify that all score ranges are represented by a sufficient number of concept pairs. The score distributions are shown in Figure 2. As in EN HyperLex, a large number of concept pairs is placed within the two outer sub-intervals (i.e., [0, 1) [1, 2) [2, 3) [3, 4) [4, 5) [5, 6] Rating Interval 0 10 20 30 40 50 Percentage [%] EN DE IT HR [0, 1) [1, 2) [2, 3) [3, 4) [4, 5) [5, 6] Rating Interval 0 10 20 30 40 50 Percentage [%] EN-DE EN-IT EN-HR Figure 2: Rating distributions in monolingual and (a selection of) cross-lingual graded LE datasets. y axes plot percentages; the data sizes provided in Table 2. EN DE IT HR Pairwise-IAA 0.854 0.741 0.736 0.840 Mean-IAA 0.864 0.803 0.809 0.882 Table 3: Inter-annotator agreement (Spearman’s ρ correlation) for monolingual GR-LE datasets. 
IAA scores for the original EN HyperLex provided for reference. [0, 1) and [5, 6]): this is an artefact of having WordNet synonyms as trivial LE pairs on the one side, whereas antonyms, no-relation, and reverse hyponymy-hypernymy pairs are found on the other side of the scoring spectrum. Nonetheless, the inner interval (i.e., [1, 5)) covers a significant portion (≈30%) of (evenly distributed) word pairs, confirming the gradience of the LE relation. Inter-Annotator Agreement. Following prior work on word pair dataset creation (Silberer and Lapata, 2014; Hill et al., 2015; Gerz et al., 2016; Vuli´c et al., 2017, inter alia), we report two interannotator agreement (IAA) measures for the three new monolingual datasets. Pairwise-IAA is the average pairwise Spearman’s ρ correlation between any two raters. Mean-IAA compares the average correlation of an annotator with the average of all the other annotators: it is a human ’upper bound’ for the performance of automatic systems. The IAA scores in Table 3 show that humans quantify graded 4967 Monolingual vectors Target: L2 Monolingual vectors Source: L1 es_guerra en_war en_peace es_paz es_conflicto en_warfare en_army en_ejercito CLEAR specialisation Bilingual Dictionary D: L1-L2 (en_war, es_guerra); (en_peace, es_paz);... L1 synonyms (war, warfare) L1 LE pairs (war, conflict) L1 antonyms (war, peace) Cross-lingual LE-specialised vectors en_conflict Figure 3: High-level overview (with toy examples) of the CLEAR specialisation procedure resulting in a shared cross-lingual word vector space that accentuates the LE relation between the concepts. LE consistently across languages.6 High MeanIAA scores are challenging upper bounds that justify our automatic construction of CL-HYPERLEX. We further validate CL-HYPERLEX by comparing automatically induced scores with human judgements. For each EN-{DE,IT,HR} dataset we let two annotators fluent in both languages judge 333 randomly sampled pairs. We report high average Spearman’s ρ correlation between automatically induced scores and human judgements: 0.896 (EN-DE), 0.909 (EN-IT), and 0.905 (EN-HR). 3 Methodology In order to provide benchmarking graded LE scores on new monolingual and cross-lingual evaluation sets, we now introduce a novel method that can capture GR-LE cross-lingually. CLEAR ( CrossLingual Lexical Entailment Attract-Repel) is a cross-lingual extension of the monolingual LEAR specialisation method (Vuli´c and Mrkši´c, 2018), a state-of-the-art vector space fine-tuning method which specialises any input distributional vector 6Similarity benchmarks report much lower Pairwise-IAA scores: 0.61 on SimVerb-3500 (Gerz et al., 2016; Pilehvar et al., 2018), and 0.67 on SimLex-999 (Hill et al., 2015) and on WordSim-353 (Finkelstein et al., 2002) space to accentuate the asymmetric LE relation in the transformed space. We show that, coupled with a bilingual dictionary, CLEAR can learn vector rearrangements that reflect lexical entailment also in the target language for which no external lexical knowledge concerning the LE relation is available, and it can also quantify the degree of cross-lingual LE. The core idea is to simultaneously capture the hierarchy of concepts (through vector norms) and their similarity (through their cosine distance), irrespective of the actual language (see Figure 1). CLEAR Specialisation. A high-level overview of the CLEAR specialisation method is provided in Figure 3. 
The input to the method is as follows: 1) two independently trained monolingual word vector spaces in two languages L1 and L2; 2) sets of external lexical constraints in the resource-rich language L1 (e.g., English) extracted from an external lexical resource such as WordNet (Fellbaum, 1998) or BabelNet (Ehrmann et al., 2014); and 3) a bilingual L1-L2 dictionary D. The goal is to fine-tune input word vectors in both languages using the L1 lexical constraints and the dictionary D, and obtain a shared cross-lingual space specialised for LE. CLEAR uses a set of external linguistic constraints C = S ∪A ∪Le in language L1 for finetuning. The set comprises synonymy pairs S such as (clever, smart), antonymy pairs A such as (war, peace), and lexical entailment (i.e., hyponymyhypernymy) pairs Le such as (dog, animal). For the Le pairs, the word order is important: we assume that the left word is always the hyponym. Further, we treat pairs from the dictionary D such as (war, guerra) as another distinct set of (cross-lingual) synonymy pairs. The D pairs are L1-L2 pairs, while all the remaining word pairs are L1 pairs: this creates a true cross-lingual transfer setup. Similar to LEAR and the ATTRACT-REPEL model for symmetric similarity specialisation (Mrkši´c et al., 2017), CLEAR defines two types of symmetric objectives for the L1 pairs: 1) the ATTRACT (Att) objective aims to bring closer together in the vector space words that are semantically similar (i.e., synonyms and hyponym-hypernym pairs); 2) the REPEL (Rep) objective pushes apart vectors of dissimilar words (i.e., antonyms). We denote as B = {(x(k) l , x(k) r )}K k=1 the set of K word vector pairs for which the Att or Rep score is to be computed: we refer to these pairs as the positive examples. The set of corresponding negative examples T is created by coupling each positive AT4968 TRACT example (xl, xr) with a negative example pair (tl, tr), where tl is the vector closest (within the current batch in terms of cosine similarity) to xl, and tr the vector closest to xr. The Att objective Att(BAtt, TAtt) for a batch of ATTRACT constraints BAtt is then formulated as the max-margin learning problem as follows: K X k=1  τ  δatt + cos  x(k) l , t(k) l  −cos  x(k) l , x(k) r  + τ  δatt + cos  x(k) r , t(k) r  −cos  x(k) l , x(k) r   . (1) τ(x) = max(0, x) is the ramp function and δatt is the similarity margin imposed between the negative and positive vector pairs. The Rep objective is designed in a similar fashion: for each positive REPEL example, the negative example (tl, tr) couples the vector tl that is most distant from xl (cosine similarity in the current batch) and tr, most distant from xr. The goal of the Rep objective Rep(BRep, TRep) for a batch of REPEL word pairs BRep and the corresponding negative examples TRep is then to push REPEL pairs away from each other by the “repel” margin δrep. The exact formulation is analogous to the Att objective, and not shown for brevity. Crucially, similar to LEAR, CLEAR forces specialised vectors to reflect the asymmetry of the LE relation with an asymmetric distance-based objective. Starting from the Le (hyponymy-hypernymy) pairs, the goal is to rearrange vectors of words in these pairs, that is, to preserve the cosine distances in the specialised space while steering vectors of more general concepts to take larger norms, as shown in Figure 1 and 3. 
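Before the asymmetric LE term is formalised next, a short sketch may help make Eq. (1) concrete: the ATTRACT objective is a max-margin (hinge) loss that pulls each positive pair together relative to its in-batch nearest-neighbour negatives. This is an illustrative NumPy re-statement under simplifying assumptions (the negative pool is restricted here to same-side batch vectors, and no gradients are computed), not the released ATTRACT-REPEL/LEAR code; the margin default of 0.6 follows the hyperparameters reported later in the experimental setup.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def attract_loss(left, right, delta_att=0.6):
    """Hinge-style ATTRACT objective of Eq. (1) for one batch of positive
    pairs (left[k], right[k]). Negatives are the in-batch vectors closest
    to each member of the pair (for simplicity, searched only among the
    same-side vectors of the other pairs). Sketch only: a real
    implementation would be vectorised and differentiated automatically."""
    K = len(left)
    total = 0.0
    for k in range(K):
        xl, xr = left[k], right[k]
        tl = max((left[j] for j in range(K) if j != k), key=lambda t: cos(xl, t))
        tr = max((right[j] for j in range(K) if j != k), key=lambda t: cos(xr, t))
        pos = cos(xl, xr)
        total += max(0.0, delta_att + cos(xl, tl) - pos)   # tau(.) = max(0, .)
        total += max(0.0, delta_att + cos(xr, tr) - pos)
    return total

rng = np.random.default_rng(0)
batch_left = [rng.normal(size=300) for _ in range(4)]
batch_right = [rng.normal(size=300) for _ in range(4)]
print(attract_loss(batch_left, batch_right))
```

The REPEL objective is constructed analogously, with negatives chosen as the most distant in-batch vectors and the loss pushing each REPEL pair apart by the margin δ_rep.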
We adopt the bestperforming asymmetric objective from Vuli´c and Mrkši´c (2018) and use it with L1 Le word pairs: LE(BLe) = K X k=1 ∥x(k) l ∥−∥x(k) r ∥ ∥x(k) l ∥+ ∥x(k) r ∥ . (2) The objectives described so far cover S, A, and Le word pairs. The translation pairs from the dictionary D are also “attracted” to each other, but using a different objective. We define the AttD(BD) objective on a batch of translation pairs BD as the simple ℓ2-distance between two words in each pair: AttD(BD) = λD K X k=1 ∥x(k) l −x(k) r ∥. (3) x(k) l is the vector of an L1 word from the source language vector space and x(k) r the vector of its L2 translation from the target language space. λD is the cross-lingual regularisation factor. The rationale behind this design is as follows: in order to rearrange word vectors of both languages as shown in Figure 1, we have to allow for the adjustment of vector norms also for L2 word vectors. The previous Att objective from Eq. (1) relies on the cosine similarity and captures only the vector direction. Finally, CLEAR defines a regularisation term for all word pairs in the sets S, A, Le, and D in order to preserve the useful semantic information from the original distributional spaces. Let V (B) denote the set of distinct words in a constraint batch B; the regularisation term is then: Reg(B) = λreg P x∈V (B) ∥y −x∥2, where y is the CLEAR-transformed vector of any distributional vector x, and λreg is the regularisation factor. The full CLEAR objective is then defined as follows: J = Att(BS, TS) + Rep(BA, TA) + Att(BLe, TLe) + LE(BLe) + AttD(BD) + Reg(BS, BA, BLe, BD) (4) This joint objective rearranges vectors from both input monolingual vector spaces (see Figure 3) and enables the transfer of LE signal from the resourcerich language L1 to the target language (i.e., CLEAR does not rely on any explicit LE knowledge in L2). Asymmetric LE Distance. Monolingual and crosslingual LE strength can be inferred directly from the CLEAR-specialised cross-lingual space. It is done by a distance function that reflects both the cosine distance between the vectors (semantic similarity) as well as the asymmetric difference between the vectors’ norms (Vuli´c and Mrkši´c, 2018): ILE(x, y) = dcos(x, y) + ∥x∥−∥y∥ ∥x∥+ ∥y∥ (5) x and y are vectors of any two words x and y in the cross-lingual space. For less expressive ungraded LE detection tasks ILE distances are trivially transformed into binary LE predictions using a binarisation threshold t: if ILE(x, y) < t, we predict that the LE relation holds between words x and y. CLEAR-specialized vectors of general concepts obtain larger norms than vectors of specific concepts. Strong LE pairs should display both small cosine distances and negative norm differences. 4 Results and Discussion We run experiments with representative baseline models and CLEAR-specialised vectors on new 4969 monolingual and cross-lingual graded LE datasets, as well as on established ungraded cross-lingual LE detection datasets (Vyas and Carpuat, 2016; Upadhyay et al., 2018). The goal of reported experiments is twofold: besides providing baseline scores on new evaluation sets, we also analyse the usefulness of cross-lingual graded LE specialisation performed by CLEAR, and analyse its performance in comparison with distributional word vectors and non-specialised cross-lingual word embeddings. 4.1 Experimental Setup Distributional Vectors. 
Graded LE is evaluated on EN, DE, IT, and HR (see §2); we also evaluate CLEAR on ungraded cross-lingual LE (Upadhyay et al., 2018) for the following language pairs: ENFR (French); EN-RU (Russian); EN-AR (Arabic). All results are reported with English Skip-Gram with Negative Sampling (SGNS-BOW2) vectors (Mikolov et al., 2013) trained by Levy and Goldberg (2014) on the Polyglot Wikipedia (Al-Rfou et al., 2013) with bag-of-words context (window size of 2).7 Input vectors for other languages come from various sources: AR vectors are fastText vectors trained on the Common Crawl data by Grave et al. (2018). RU vectors are obtained by Kutuzov and Andreev (2015). FR, IT, DE, and HR word vectors are large SGNS vectors trained on the standard frWaC, itWaC, and deWaC corpora (Baroni et al., 2009), and the hrWaC corpus (Ljubeši´c and Klubiˇcka, 2014), also used in prior work (Vuli´c et al., 2017). All word vectors are 300-dim.8 Linguistic Constraints and Dictionaries. We use the same set of monolingual constraints as LEAR (Vuli´c and Mrkši´c, 2018): synonymy and antonymy constraints from (Zhang et al., 2014; Ono et al., 2015) are extracted from WordNet and Roget’s Thesaurus (Kipfer, 2009). As in other work on LE specialisation (Nguyen et al., 2017; Nickel and Kiela, 2017), asymmetric LE constraints are extracted from WordNet, and we collect both direct and indirect LE pairs (i.e., (beagle, dog), (dog, an7The proposed CLEAR method is by design agnostic of input distributional vectors and its main purpose is to support fine-tuning of a wide spectrum of input vectors. We have experimented with other standard distributional spaces in English such as fastText (Bojanowski et al., 2017; Grave et al., 2018), type-based ELMo embeddings (Peters et al., 2018), Context2Vec (Melamud et al., 2016) and Glove (Pennington et al., 2014), but the obtained results follow similar trends. We do not report these results for brevity. 8Vectors of multi-word expressions in CL-HYPERLEX are obtained by averaging over their constituent words’ vectors. imal), and (beagle, animal) are in the Le set) In total, we work with 1,023,082 pairs of synonyms, 380,873 pairs of antonyms, and 1,545,630 LE pairs. Bilingual dictionaries are derived from PanLex (Kamholz et al., 2014), which was used in prior work on cross-lingual word embeddings (Duong et al., 2016; Adams et al., 2017; Vuli´c et al., 2017). PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers support also to low-resource transfer settings.9 Training Setup. CLEAR hyperparameters are adopted from the original Attract-Repel work (Mrkši´c et al., 2017): δatt = 0.6, δrep = 0.0, λreg = λD = 10−9. All batches are of size 128 (see Eq. (4)), and the model is trained for 5 epochs with Adagrad (Duchi et al., 2011). Baseline Models. In monolingual evaluation, we compare CLEAR to original non-specialised distributional vectors in each language. Another instructive baseline is the TRANS baseline which uses exactly the same amount of information as CLEAR. Instead of performing joint CLEAR specialisation as described in §3, TRANS is a two-step process that: 1) runs the monolingual LEAR specialisation of the English distributional space, and then 2) translates all test examples in the target language to English relying on the bilingual dictionary D.10 All LE reasoning is then conducted monolingually in English. The TRANS baseline is also used in cross-lingual graded LE evaluation. 
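CLEAR (and, via LEAR-specialised English vectors, the TRANS baseline) ultimately scores word pairs with the asymmetric distance in Eq. (5); a minimal sketch of this inference step may be a useful reference when reading the results. The toy vectors and the threshold value below are placeholders, not values taken from the released CLEAR-specialised embeddings.

```python
import numpy as np

def ile(x, y):
    """Asymmetric LE distance of Eq. (5): cosine distance plus the normalised
    norm difference. Lower values indicate a stronger x -> y
    (hyponym -> hypernym) relation."""
    d_cos = 1.0 - float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
    norm_diff = (np.linalg.norm(x) - np.linalg.norm(y)) / (np.linalg.norm(x) + np.linalg.norm(y))
    return d_cos + norm_diff

def predict_le(x, y, t=0.3):
    """Ungraded (binary) LE detection: predict that x entails y whenever the
    asymmetric distance falls below a threshold t tuned on held-out data
    (0.3 is an arbitrary placeholder)."""
    return ile(x, y) < t

# toy LE-specialised vectors: similar direction, hypernym with the larger norm
beagle = np.array([1.0, 0.2, 0.1])
animal = 3.0 * np.array([0.9, 0.3, 0.1])
print(ile(beagle, animal), ile(animal, beagle))               # first value is much smaller
print(predict_le(beagle, animal), predict_le(animal, beagle))  # True, False
```

With CLEAR-cos, only the d_cos term would be used, which is exactly the ablation compared in the results below.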
For cross-lingual datasets without English (e.g., DE-IT), we again translate all words to English and use the English specialised space for graded LE assertions. In addition, for each language pair we also report results of two stateof-the-art cross-lingual word embedding models (Smith et al., 2017; Artetxe et al., 2018), showing the better scoring one in each run (XEMB). For ungraded LE evaluation, in addition to TRANS, we compare CLEAR to two bestperforming baselines from (Upadhyay et al., 2018): they couple two methods for inducing syntactic cross-lingual vectors: 1) BI-SPARSE (Vyas and Carpuat, 2016) and 2) CL-DEP (Vuli´c, 2017) with an LE scorer based on the distributional inclusion hypothesis (Geffet and Dagan, 2005). For more details we refer the reader to (Upadhyay et al., 2018). 9The translations in PanLex were derived from various sources (e.g., glossaries, dictionaries, automatic inference). This results in high-coverage but noisy lexicons. 10In cases where one word has more than one EN translation, we randomly sample a single translation from D. 4970 DE IT HR 0.2 0.3 0.4 0.5 Spearman´s ρ correlation 0.469 0.514 0.499 DIST TRANS CLEAR-cos CLEAR-asym (a) Monolingual EN-DE EN-IT EN-HR 0.3 0.4 0.5 0.6 Spearman´s ρ correlation 0.594 0.586 0.596 XEMB TRANS CLEAR-cos CLEAR-asym (b) CL-HYPERLEX: EN-L2 DE-IT DE-HR HR-IT 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 Spearman´s ρ correlation 0.520 0.527 0.544 XEMB TRANS CLEAR-cos CLEAR-asym (c) CL-HYPERLEX: Other Figure 4: Summary of monolingual and cross-lingual graded LE results (Spearman’s ρ correlation scores). (a) Monolingual evaluation on target languages; (b) Cross-lingual evaluation with EN included in each pair; (c) Crosslingual evaluation: the scores are obtained via multilingual training of a joint EN-DE-IT-HR CLEAR model. 4.2 Results and Discussion Graded LE Evaluation. First, we evaluate the transfer capability of CLEAR: we make graded LE assertions monolingually in each target language without seeing a single hyponymy-hypernymy pair in the target, and evaluate the method on newly constructed monolingual HyperLex datasets. The results (Spearman’s ρ) are summarised in Figure 4a. They suggest that the CLEAR transfer is a viable strategy for LE-specialising target language vector spaces. Non-specialised input distributional vectors are not suitable for capturing graded LE. Importantly, CLEAR outperforms the direct translation approach (TRANS). Furthermore, the comparison between two CLEAR configurations reveals that the asymmetric distance (see Eq. (5)) is indeed crucial for improved performance: we observe consistent gains with the CLEAR-asym model, which uses full ILE from Eq. (5) for inference, over CLEAR-cos, which relies only on the symmetric cosine distance dcos, without leveraging vector norms. The results on three EN-{DE, IT, HR} crosslingual graded LE datasets are provided in Figure 4b. They largely follow the patterns already established in the monolingual graded LE task: non-specialised cross-lingual word vectors cannot match performance of other models, and CLEARasym is the strongest model across the board. To verify that CLEAR is not to tied to any specific dictionary, we have also experimented with crosslingual BabelNet synsets (Ehrmann et al., 2014), and combined BabelNet+PanLex dictionaries leading to very similar trends in results, with PanLex showing a slight edge over BabelNet. 
Furthermore, we leave experiments with dictionaries induced by unsupervised and weakly supervised cross-lingual word embeddings (Conneau et al., 2018a; Artetxe et al., 2018; Glavaš et al., 2019) for future work. We also provide results on other cross-lingual datasets relying on multilingual training: we fix EN as the single source language and propagate LE information to multiple target languages. To this end, we train a four-lingual EN-DE-IT-HR model. The main finding from Figure 4c is that multilingual training can effectively LE-specialise target language vector spaces and enable reasoning over the cross-lingual graded LE relation even in settings with limited or no target lexico-semantic resources. Finally, additional multilingual knowledge introduced through dictionaries D and distributional spaces of target languages is also beneficial for monolingual GR-LE in the resource-rich language. Previous best results on the EN HyperLex were 0.686 on the entire dataset and 0.703 on its noun portion (Vuli´c and Mrkši´c, 2018). All bilingual EN-L2 CLEAR models surpass these scores: e.g., the EN-IT model scores 0.691 on the entire dataset (0.712 on noun pairs). The best result on EN HyperLex is reported with the four-lingual CLEAR EN-DE-IT-HR model: 0.701 (0.719 on nouns). Ungraded Cross-Lingual LE Evaluation. We further demonstrate the effectiveness of CLEAR on ungraded cross-lingual LE benchmarks from Upadhyay et al. (2018). The models are evaluated on two types of test sets: HYPO – where LE pairs need to be distinguished from inverse LE (i.e., hypernymhyponym) pairs and COHYP in which LE pairs are to be differentiated from cohyponyms. Each test set has a corresponding train portion, which we use to tune the binarisation threshold t for ILE scores. The ungraded cross-lingual LE performance of CLEAR for three diverse language pairs (EN-FR, ENRU, EN-AR) is shown in Table 4. The results prove CLEAR’s robustness for cross-lingual LE modeling: 4971 Model EN-FR EN-RU EN-AR HYPO CL-DEP 0.538 0.602 0.567 BI-SPARSE 0.566 0.590 0.526 TRANS 0.766 0.764 0.690 CLEAR 0.821 0.791 0.783 COHYP CL-DEP 0.610 0.562 0.631 BI-SPARSE 0.667 0.636 0.668 TRANS 0.759 0.751 0.696 CLEAR 0.885 0.871 0.814 Table 4: Cross-lingual ungraded LE detection accuracy scores on cross-lingual HYPO and COHYP evaluation sets from Upadhyay et al. (2018). it substantially outperforms (by 22% on average) the current state-of-the-art models BI-SPARSE and CL-DEP (Upadhyay et al., 2018) in both HYPO and COHYP tasks, and for all language pairs. CLEAR again shows that it can LE-specialise target vectors without any target-language LE knowledge. It displays highest performance for EN-FR, but the drop in performance for EN-RU and EN-AR, is not large (especially for the HYPO setting). Extending CLEAR. As the main goal of this work is to validate the cross-lingual transfer potential and wide portability of the CLEAR model, we do not leverage any target language constraints. However, note that further improvements are expected by explicitly injecting symmetric and asymmetric linguistic constraints in the target language, if these are available, e.g., from BabelNet or multilingual WordNet (Bond and Foster, 2013). We also stress that the CLEAR method inherits the main “retrofitting” property of the underlying monolingual LEAR method: it updates (i.e., LE-specialises) only the vectors of words which are observed in the sets of external linguistic constraints. 
We believe that further improvements of the CLEAR transfer method can be achieved by LE-specialising the full distributional spaces through recently proposed post-specialisation methods which learn a global specialisation function (Ponti et al., 2018; Kamath et al., 2019; Glavaš and Vuli´c, 2018; Glavaš and Vuli´c, 2019). 5 Conclusion and Future Work We have proposed a novel graded cross-lingual lexical entailment (LE) task, introducing new monolingual and cross-lingual graded LE datasets that hold promise to support future research on this topic. We have then proposed a transfer-based method that can reason over graded LE across languages. We have demonstrated its robustness and usefulness for graded and ungraded LE in monolingual and cross-lingual settings. In the future, we will work on cross-lingual extensions of monolingual hyperbolic embedding models (Nickel and Kiela, 2017; Ganea et al., 2018). We will also experiment with other sources of bilingual information (e.g., cross-lingual word embeddings) and port the transfer approach to more language pairs, with a particular focus on resource-poor languages. Evaluation data for multilingual and crosslingual graded LE is available online at: github. com/ivulic/xling-grle/. Acknowledgments We thank our annotators for helping us create multilingual and cross-lingual HyperLex resources, and the three anonymous reviewers for their helpful suggestions. Goran Glavaš is supported by the Baden-Württemberg Stiftung’s Eliteprogramm grant AGREE (“Algebraic Reasoning over Events from Text and External Knowledge”). References Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In Proceedings of EACL, pages 937–947. Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of CoNLL, pages 183–192. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of ACL, pages 789–798. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: A collection of very large linguistically processed webcrawled corpora. Language Resources and Evaluation, 43(3):209–226. Richard Beckwith, Christiane Fellbaum, Derek Gross, and George A. Miller. 1991. WordNet: A lexical database organized on psycholinguistic principles. Lexical acquisition: Exploiting on-line resources to build a lexicon, pages 211–231. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135–146. 4972 Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual Wordnet. In Proceedings of ACL, pages 1352–1362. Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. Semeval2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of SEMEVAL, pages 15–26. José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In Proceedings of ACL, pages 1–7. Allan M. Collins and Ross M. Quillian. 1972. Experiments on semantic memory and language comprehension. Cognition in Learning and Memory. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. 
In Proceedings of ICLR (Conference Track). Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of EMNLP, pages 2475–2485. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proceedings of EMNLP, pages 1285–1295. Maud Ehrmann, Francesco Cecconi, Daniele Vannella, John Philip McCrae, Philipp Cimiano, and Roberto Navigli. 2014. Representing multilingual data as linked data: the case of BabelNet 2.0. In Proceedings of LREC, pages 401–408. Christiane Fellbaum. 1998. WordNet. MIT Press. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In Proceedings of ACL, pages 1199–1209. Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. 2018. Hyperbolic entailment cones for learning hierarchical embeddings. In Proceedings of ICML, pages 1632–1641. Maayan Geffet and Ido Dagan. 2005. The distributional inclusion hypotheses and lexical entailment. In Proceedings of ACL, pages 107–114. Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A largescale evaluation set of verb similarity. In Proceedings of EMNLP, pages 2173–2182. Goran Glavaš and Ivan Vuli´c. 2018. Explicit retrofitting of distributional word vectors. In Proceedings of ACL, pages 34–45. Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of ACL. Goran Glavaš and Simone Paolo Ponzetto. 2017. Dual tensor model for detecting asymmetric lexicosemantic relations. In Proceedings of EMNLP, pages 1758–1768. Goran Glavaš and Ivan Vuli´c. 2019. Generalized tuning of distributional word vectors for monolingual and cross-lingual lexical entailment. In Proceedings of ACL. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of LREC, pages 3483–3487. James A. Hampton. 2007. Typicality, graded membership, and vagueness. Cognitive Science, 31(3):355– 384. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695. Aishwarya Kamath, Jonas Pfeiffer, Edoardo Maria Ponti, Goran Glavaš, and Ivan Vuli´c. 2019. Specializing distributional vectors of all words for lexical entailment. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP. David Kamholz, Jonathan Pool, and Susan M. Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In Proceedings of LREC, pages 3145–3150. Hans Kamp and Barbara Partee. 1995. Prototype theory and compositionality. Cognition, 57(2):129– 191. Douwe Kiela, Laura Rimell, Ivan Vuli´c, and Stephen Clark. 2015. Exploiting image generality for lexical entailment detection. 
In Proceedings of ACL, pages 119–124. Barbara Ann Kipfer. 2009. Roget’s 21st Century Thesaurus (3rd Edition). Philip Lief Group. 4973 Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering, 16(4):359–389. Andrey Kutuzov and Igor Andreev. 2015. Texts in, meaning out: neural language models in semantic similarity task for Russian. In Proceedings of DIALOG. Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment language informed vector space modeling. CoRR, abs/1508.00106. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of ACL, pages 302–308. Nikola Ljubeši´c and Filip Klubiˇcka. 2014. {bs,hr,sr}WaC – Web corpora of Bosnian, Croatian and Serbian. In Proceedings of the 9th Web as Corpus Workshop, pages 29–35, Gothenburg, Sweden. Association for Computational Linguistics. Douglas L. Medin, Mark W. Altom, and Timothy D. Murphy. 1984. Given versus induced category representations: Use of prototype and exemplar information in classification. Journal of Experimental Psychology, 10(3):333–352. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL-HLT, pages 1030– 1040. Rada Mihalcea, Ravi Som Sinha, and Diana McCarthy. 2010. Semeval-2010 task 2: Cross-lingual lexical substitution. In Proceedings of SEMEVAL, pages 9– 14. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111– 3119. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of ACL, pages 1777–1788. Nikola Mrkši´c, Ivan Vuli´c, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gaši´c, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, 5:309–324. Matteo Negri, Alessandro Marchetti, Yashar Mehdad, Luisa Bentivogli, and Danilo Giampiccolo. 2012. Semeval-2012 task 8: Cross-lingual textual entailment for content synchronization. In Proceedings of SEMEVAL, pages 399–407. Matteo Negri, Alessandro Marchetti, Yashar Mehdad, Luisa Bentivogli, and Danilo Giampiccolo. 2013. Semeval-2013 task 8: Cross-lingual textual entailment for content synchronization. In Proceedings of SEMEVAL, pages 25–33. Kim Anh Nguyen, Maximilian Köper, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical embeddings for hypernymy detection and directionality. In Proceedings of EMNLP, pages 233–243. Maximilian Nickel and Douwe Kiela. 2017. Poincaré embeddings for learning hierarchical representations. In Proceedings of NIPS, pages 6341–6350. Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word Embedding-based Antonym Detection using Thesauri and Distributional Information. In Proceedings of NAACL, pages 984–989. Sebastian Padó, Michel Galley, Dan Jurafsky, and Christopher D. Manning. 2009. Robust machine translation evaluation with entailment features. In Proceedings of ACL, pages 297–305. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543. 
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237.
Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, and Nigel Collier. 2018. Card-660: Cambridge rare word dataset - a reliable benchmark for infrequent word representation models. In Proceedings of EMNLP, pages 1391–1401.
Edoardo Maria Ponti, Ivan Vulić, Goran Glavaš, Nikola Mrkšić, and Anna Korhonen. 2018. Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization. In Proceedings of EMNLP, pages 282–293.
Marek Rei, Daniela Gerz, and Ivan Vulić. 2018. Scoring lexical entailment with a supervised directional similarity network. In Proceedings of ACL, pages 638–643.
Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018. Hearst patterns revisited: Automatic hypernym detection from large text corpora. In Proceedings of ACL, pages 358–363.
Eleanor H. Rosch. 1973. Natural categories. Cognitive Psychology, 4(3):328–350.
Eleanor H. Rosch. 1975. Cognitive representations of semantic categories. Journal of Experimental Psychology, 104(3):192–233.
Enrico Santus, Alessandro Lenci, Qin Lu, and Sabine Schulte im Walde. 2014. Chasing hypernyms in vector spaces with entropy. In Proceedings of EACL, pages 38–42.
Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of ACL, pages 2389–2398.
Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. In Proceedings of EACL, pages 65–75.
Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of ACL, pages 721–732.
Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR (Conference Track).
Zhiyi Song, Ann Bies, Justin Mott, Xuansong Li, Stephanie M. Strassel, and Christopher Caruso. 2018. Cross-document, cross-language event coreference annotation using event hoppers. In Proceedings of LREC, pages 3535–3540.
Robert Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI, pages 4444–4451.
Peter D. Turney and Saif M. Mohammad. 2015. Experiments with three approaches to recognizing lexical entailment. Natural Language Engineering, 21(3):437–476.
Shyam Upadhyay, Yogarshi Vyas, Marine Carpuat, and Dan Roth. 2018. Robust cross-lingual hypernymy detection using dependency context. In Proceedings of NAACL-HLT, pages 607–618.
Ivan Vulić. 2017. Cross-lingual syntactically informed distributed word representations. In Proceedings of EACL, pages 408–414.
Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. Hyperlex: A large-scale evaluation of graded lexical entailment. Computational Linguistics, 43(4):781–835.
Ivan Vulić and Nikola Mrkšić. 2018. Specialising word vectors for lexical entailment. In Proceedings of NAACL-HLT, pages 1134–1145.
Ivan Vulić, Nikola Mrkšić, and Anna Korhonen. 2017. Cross-lingual induction and transfer of verb classes based on word vector space specialisation. In Proceedings of EMNLP, pages 2536–2548.
Yogarshi Vyas and Marine Carpuat. 2016. Sparse bilingual word representations for cross-lingual lexical entailment. In Proceedings of NAACL-HLT, pages 1187–1197.
Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of COLING, pages 2249–2259.
Jingwei Zhang, Jeremy Salwen, Michael Glass, and Alfio Gliozzo. 2014. Word semantic representations using Bayesian probabilistic tensor factorization. In Proceedings of EMNLP, pages 1522–1531.
What Kind of Language Is Hard to Language-Model?

Sabrina J. Mielke1 Ryan Cotterell1 Kyle Gorman2,3 Brian Roark3 Jason Eisner1
1 Department of Computer Science, Johns Hopkins University
2 Program in Linguistics, Graduate Center, City University of New York
3 Google
{sjmielke@,ryan.cotterell@}jhu.edu [email protected] [email protected] [email protected]

Abstract

How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that “translationese” is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.

1 Introduction

Do current NLP tools serve all languages? Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task. However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world’s languages (Bender, 2009), we do not have a clear idea how well models perform cross-linguistically in a controlled setting. In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.

Figure 1: Jointly estimating the information n_i present in each multi-text intent i and the difficulty d_j of each language j. At left, gray text indicates translations of the original (white) sentence in the same row. At right, darker cells indicate higher surprisal/difficulty. Empty cells indicate missing translations. English (en) is missing a hard sentence and Bulgarian (bg) is missing an easy sentence, but this does not mislead our method into estimating English as easier than Bulgarian.

One of the oldest tasks in NLP (Shannon, 1951) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language. Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018).
Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization. It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018)—though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.

In prior work (Cotterell et al., 2018), we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus. Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse. Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons. In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared. Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European Parliament proceedings.

We suppose that a language model’s surprisal on a sentence—the negated log of the probability it assigns to the sentence—reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language. Given language models of diverse languages, we jointly recover each language’s difficulty parameter. Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.

Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty. Our results suggest that simple properties of a language—the word inventory and (to a lesser extent) the raw character sequence length—are statistically significant indicators of modeling difficulty within our large set of languages. In contrast, we fail to reproduce our earlier results from Cotterell et al. (2018),[1] which suggested morphological complexity as an indicator of modeling complexity. In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources. Additionally, exploiting our model’s ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012). We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.

Footnote 1: We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations. However, we did not reproduce the results under new conditions (Drummond, 2009). Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and—perhaps crucially—improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).

We conclude with a recommendation: The world
being small, typology is in practice a small-data problem: there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate. We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.

2 The Surprisal of a Sentence

When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text. To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible. The datasets should all contain the same content, the only difference being the language in which it is expressed.

2.1 Multitext for a Fair Comparison

To attempt a fair comparison, we make use of multitext—sentence-aligned[2] translations of the same content in multiple languages. Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.[3]

In what follows, we will distinguish between the i-th sentence in language j, which is a specific string s_ij, and the i-th intent, the shared abstract thought that gave rise to all the sentences s_i1, s_i2, . . .. For simplicity, suppose for now that we have a fully parallel corpus. We select, say, 80% of the intents.[4] We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents. We will later drop the assumption of a fully parallel corpus (§3), which will help us to estimate the effects of translationese (§6).

Footnote 2: Both corpora we use align small paragraphs instead of sentences, but for simplicity we will call them “sentences.”
Footnote 3: A translator might add or remove information out of helpfulness, sloppiness, showiness, consideration for their audience’s background knowledge, or deference to the conventions of the target language. For example, English conventions make it almost obligatory to express number (via morphological inflection), but make it optional to express evidentiality (e.g., via an explicit modal construction); other languages are different.
Footnote 4: In practice, we use 2/3 of the raw data to train our models, 1/6 to tune them and the remaining 1/6 to test them.

2.2 Comparing Surprisal Across Languages

Given some test sentence s_ij, a language model p defines its surprisal: the negative log-likelihood NLL(s_ij) = −log₂ p(s_ij). This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits. Long or unusual sentences tend to have high surprisal—but high surprisal can also reflect a language model’s failure to anticipate predictable words. In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy). Cotterell et al. (2018) similarly compared language models for different languages, using a multitext corpus.

Concretely, recall that s_ij and s_ij′ should contain, at least in principle, the same information for two languages j and j′—they are translations of each other.
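To make the quantity concrete, the following minimal Python sketch (with hypothetical per-token probabilities, not taken from the paper) computes a sentence's surprisal in bits from a model's per-token probabilities, so that translations of the same intent can be compared.

```python
import math

def sentence_surprisal_bits(token_probs):
    """Surprisal NLL(s) = -log2 p(s), where p(s) is the product of the
    model's per-token probabilities; an open-vocabulary model must assign
    a probability to every prediction step, including an end-of-sentence marker."""
    return -sum(math.log2(p) for p in token_probs)

# Hypothetical per-token probabilities for two translations of the same intent:
p_en = [0.20, 0.10, 0.30, 0.25]        # e.g., an English sentence, 4 prediction steps
p_de = [0.15, 0.05, 0.20, 0.10, 0.30]  # e.g., its German translation, 5 steps

print(sentence_surprisal_bits(p_en))  # ~9.38 bits
print(sentence_surprisal_bits(p_de))  # ~14.44 bits
```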
But, if we find that NLL(s_ij) > NLL(s_ij′), we must assume that either s_ij contains more information than s_ij′, or that our language model was simply able to predict it less well.[5] If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).

Footnote 5: The former might be the result of overt marking of, say, evidentiality or gender, which adds information. We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.

2.3 Our Language Models

Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units. For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (“unknown”) for some words of the language.[6]

Footnote 6: We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol ^, as is common and easily defensible in open-vocabulary language modeling (Mielke and Eisner, 2018). We make an exception for Chinese, where we only require each character to appear at least twice. These thresholds result in negligible “out-of-alphabet” rates for all languages.

Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al. (2011), whose model generates a sentence, not word by word, but rather character by character. An obvious drawback of the model is that it has no explicit representation of reusable substrings (Mielke and Eisner, 2018), but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study. We use a more current version based on LSTMs (Hochreiter and Schmidhuber, 1997), using the implementation of Merity et al. (2018) with the char-PTB parameters.

BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages (Mielke and Eisner, 2018). Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units. The RNN is then trained over the sequence of units, which looks like this: “The |ex|os|kel|eton |is |gener|ally |blue”. The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.[7] One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges). As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.[8] However, as we will see in Figure 3, tuning this parameter does not substantially influence our results. We therefore will refer to the model with 0.4|V| merges as BPE-RNNLM.

3 Aggregating Sentence Surprisals

Cotterell et al. (2018) evaluated the model for language j simply by its total surprisal Σ_i NLL(s_ij).
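As a minimal sketch of this total-surprisal comparison (with made-up per-sentence values, not the paper's data), note that summing per-sentence surprisals per language is only fair when every language covers exactly the same intents—which motivates the regression model introduced next.

```python
from collections import defaultdict

# nll[(intent_id, language)] = surprisal of that translation, in bits (hypothetical values)
nll = {
    (1, "en"): 220.0, (1, "de"): 250.0,
    (2, "en"): 180.0, (2, "de"): 205.0,
    (3, "de"): 400.0,          # the English translation of intent 3 is missing
}

totals, counts = defaultdict(float), defaultdict(int)
for (intent, lang), bits in nll.items():
    totals[lang] += bits
    counts[lang] += 1

# Total surprisal per language: only comparable if the intent sets match.
# Here "de" looks worse partly because it also covers the (long) intent 3.
for lang in sorted(totals):
    print(lang, totals[lang], "bits over", counts[lang], "sentences")
```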
This comparative measure required a complete multitext corpus containing every sentence s_ij (the expression of the intent i in language j). We relax this requirement by using a fully probabilistic regression model that can deal with missing data (Figure 1).[9] Our model predicts each sentence’s surprisal y_ij = NLL(s_ij) using an intent-specific “information content” factor n_i, which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d_j. This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).

Footnote 7: In practice, in both training and testing, we only evaluate the probability of the canonical segmentation of the held-out string, rather than the total probability of all segmentations (Kudo, 2018; Mielke and Eisner, 2018, Appendix D.2).
Footnote 8: Figure 2 shows the 21 languages of the Europarl dataset. Optimal values: 0.2 (et); 0.3 (fi, lt); 0.4 (de, es, hu, lv, sk, sl); 0.5 (da, fr, pl, sv); 0.6 (bg, ru); 0.7 (el); 0.8 (en); 0.9 (it, pt).
Footnote 9: Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process. More discussion on this can be found in Appendix A.

Figure 2: Top: For each language, total NLL of the dev corpus varies with the number of BPE merges, which is expressed on the x-axis as a fraction of the number of observed word types |V|.[8] Bottom: Averaging over all 21 languages motivates a global value of 0.4.

3.1 Model 1: Multiplicative Mixed-effects

Model 1 is a multiplicative mixed-effects model:

    y_ij = n_i · exp(d_j) · exp(ε_ij)    (1)
    ε_ij ~ N(0, σ²)    (2)

This says that each intent i has a latent size of n_i—measured in some abstract “informational units”—that is observed indirectly in the various sentences s_ij that express the intent. Larger n_i tend to yield longer sentences. Sentence s_ij has y_ij bits of surprisal; thus the multiplier y_ij/n_i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j. Our mixed-effects model assumes that this multiplier is log-normally distributed over the sentences i: that is, log(y_ij/n_i) ~ N(d_j, σ²), where mean d_j is the difficulty of language j. That is, y_ij/n_i = exp(d_j + ε_ij) where ε_ij ~ N(0, σ²) is residual noise, yielding equations (1)–(2).[10] We jointly fit the intent sizes n_i and the language difficulties d_j.

Footnote 10: It is tempting to give each language its own σ²_j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one language’s σ²_j to 0.

3.2 Model 2: Heteroscedasticity

Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n_i will not only have larger y_ij values but these values will vary more widely.
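Before refining the model, here is a minimal Python sketch of fitting Model 1 (equations (1)–(2)) in log space, where it becomes additive; the numbers are made up and this is not the authors' implementation. Missing translations are simply skipped, and a sum-to-zero constraint on the difficulties is one possible identifiability choice (the paper's "simplex" variant instead constrains their sum to the number of languages).

```python
import numpy as np

# y[i, j] = observed surprisal of intent i in language j (np.nan = missing); toy data.
y = np.array([[100., 120.,  90.],
              [200., 260., 180.],
              [ 50.,  65., np.nan]])
log_y = np.log(y)

# Model 1 in log space: log y_ij = log n_i + d_j + eps_ij, eps_ij ~ N(0, sigma^2).
# Alternate closed-form (block-coordinate) updates of the two sets of factors.
log_n = np.nanmean(log_y, axis=1)      # initialize intent sizes
d = np.zeros(y.shape[1])               # initialize language difficulties
for _ in range(200):
    d = np.array([np.nanmean(log_y[:, j] - log_n) for j in range(y.shape[1])])
    d -= d.mean()                      # identifiability: difficulties sum to zero
    log_n = np.array([np.nanmean(log_y[i, :] - d) for i in range(y.shape[0])])

resid = log_y - (log_n[:, None] + d[None, :])
sigma2 = np.nanmean(resid ** 2)        # residual variance estimate
print("difficulties d_j:", d)
print("intent sizes n_i:", np.exp(log_n))
```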
However, Model 1 is homoscedastic: the variance σ² of log(y_ij/n_i) is assumed to be independent of the independent variable n_i, which predicts that the distribution of y_ij should spread out linearly as the information content n_i increases: e.g., p(y_ij ≥ 13 | n_i = 10) = p(y_ij ≥ 26 | n_i = 20). That assumption is questionable, since for a longer sentence, we would expect log y_ij/n_i to come closer to its mean d_j as the random effects of individual translational choices average out.[11] We address this issue by assuming that y_ij results from n_i ∈ ℕ independent choices:

    y_ij = exp(d_j) · Σ_{k=1}^{n_i} exp(ε_ijk)    (3)
    ε_ijk ~ N(0, σ²)    (4)

The number of bits for the k-th informational unit now varies by a factor of exp ε_ijk that is log-normal and independent of the other units. It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960),[12] yielding Model 2:

    y_ij = n_i · exp(d_j) · exp(ε_ij)    (1)
    σ_i² = ln(1 + (exp(σ²) − 1) / n_i)    (5)
    ε_ij ~ N((σ² − σ_i²)/2, σ_i²),    (6)

in which the noise term ε_ij now depends on n_i. Unlike (4), this formula no longer requires n_i ∈ ℕ; we allow any n_i ∈ ℝ_{>0}, which will also let us use gradient descent in estimating n_i.

In effect, fitting the model chooses each n_i so that the resulting intent-specific but language-independent distribution of n_i · exp(ε_ij) values,[13] after it is scaled by exp(d_j) for each language j, will assign high probability to the observed y_ij. Notice that in Model 2, the scale of n_i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ_i falls off with n_i. This contrasts with Model 1, where doubling all the n_i values could be compensated for by halving all the exp(d_j) values.

Footnote 11: Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √2 heads. Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.
Footnote 12: There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to non-integral n_i values that we will obtain for the Fenton-Wilkinson approximation.
Footnote 13: The distribution of ε_ij is the same for every j. It no longer has mean 0, but it depends only on n_i.

3.3 Model 2L: An Outlier-Resistant Variant

One way to make Model 2 more outlier-resistant is to use a Laplace distribution[14] instead of a Gaussian in (6) as an approximation to the distribution of ε_ij. The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals. We choose its mean and variance just as in (6). This heavy-tailed ε_ij distribution can be viewed as approximating a version of Model 2 in which the ε_ijk themselves follow some heavy-tailed distribution.

3.4 Estimating model parameters

We fit each regression model’s parameters by L-BFGS. We then evaluate the model’s fitness by measuring its held-out data likelihood—that is, the probability it assigns to the y_ij values for held-out intents i. Here we use the previously fitted d_j and σ parameters, but we must newly fit n_i values for the new i using MAP estimates or posterior means. A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows. On Europarl data (which has fewer languages), Model 2 performs best.
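For concreteness, the following sketch writes down Model 2's negative log-likelihood—with the per-intent variance from equation (5) and the mean shift from equation (6)—and fits it with L-BFGS as in §3.4. The toy numbers and the centering of the difficulties for display are assumptions of this sketch, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: y[i, j] = surprisal of intent i in language j (np.nan = missing).
y = np.array([[100., 120.,  90.],
              [200., 260., 180.],
              [ 50.,  65., np.nan]])
obs = ~np.isnan(y)
I, J = y.shape
log_y = np.log(np.where(obs, y, 1.0))

def unpack(theta):
    log_n, d, log_s2 = theta[:I], theta[I:I + J], theta[-1]
    return log_n, d, np.exp(log_s2)            # sigma^2 > 0 via log-parametrization

def neg_log_lik(theta):
    log_n, d, s2 = unpack(theta)
    n = np.exp(log_n)
    s2_i = np.log1p(np.expm1(s2) / n)          # equation (5): sigma_i^2 shrinks as n_i grows
    mean = log_n[:, None] + d[None, :] + ((s2 - s2_i) / 2.0)[:, None]   # E[log y_ij], cf. (6)
    var = s2_i[:, None]
    ll = -0.5 * (np.log(2 * np.pi * var) + (log_y - mean) ** 2 / var)
    return -ll[obs].sum()

theta0 = np.concatenate([np.nanmean(np.log(y), axis=1), np.zeros(J), [np.log(0.5)]])
fit = minimize(neg_log_lik, theta0, method="L-BFGS-B")
log_n_hat, d_hat, s2_hat = unpack(fit.x)
print("difficulties d_j (centered for display):", d_hat - d_hat.mean())
```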
On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets. We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.[15]

3.5 A Note on Bayesian Inference

As our model of y_ij values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference. We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them. Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017), a toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation. Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C). We therefore work with the MAP estimates in the rest of this paper.

Footnote 14: One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.
Footnote 15: Further enhancements are possible: we discuss our “Model 3” in Appendix B, but it did not seem to fit better.

4 The Difficulties of 69 Languages

Having outlined our method for estimating language difficulty scores d_j, we now seek data to do so for all our languages. If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages. Yet this short document is far too small to train state-of-the-art language models. In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005), but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)’s corpus.

Although our regression models of the surprisals y_ij can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing. To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages. For this, we seek complete multitext.

4.1 Europarl: 21 Languages

The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages. It was previously used by Cotterell et al. (2018) for its size and stability. In §6, we will also exploit the fact that each intent’s original language is known. To simplify our access to this information, we will use the “Corrected & Structured Europarl Corpus” (CoStEP) corpus (Graën et al., 2014). From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8. The full extraction process and corpus statistics are detailed in Appendix D.

4.2 The Bible: 62 Languages

The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016).

Figure 3: The Europarl language difficulties appear more similar, and are ordered differently, when the RNN models use BPE units instead of character units. Tuning BPE per-language has a small additional effect.

Concretely, we use the
tokenized[16] and aligned collection assembled by Mayer and Cysouw (2014). We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2. Some of the Bibles in the dataset are incomplete. As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A). We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E). This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages,[17] spanning 13 language families.[18] We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d_j for each one.

Footnote 16: The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization. It is possible that our y_ij values for each language j depend to a small degree on the tokenizer that was chosen for that language.
Footnote 17: afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom
Footnote 18: 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran. For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009), manually fixing tlh ↦ Constructed language. It is unfortunate not to have more families or more languages per family. A broader sample could be obtained by taking only the New Testament—but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller than the usually considered tiny PTB dataset (see details in Appendix E).

4.3 Results

The estimated difficulties are visualized in Figure 4. We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest. As we can see in Figure 3 for Europarl, the difficulty estimates are hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments. A bigger difference seems to be the choice of char-RNNLM vs. BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles. We still see German as the hardest language, but almost all other languages switch places. Specifically, we can see that the variance of the char-RNNLM is much higher.

4.4 Are All Translations the Same?

Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language. The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style. The sample standard deviation of d_j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM. Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.
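A minimal sketch of the spread computation just described (with illustrative values, not the paper's estimates): group the per-Bible difficulty estimates by language and take the sample standard deviation within each group.

```python
from collections import defaultdict
from statistics import stdev

# Hypothetical per-Bible difficulty estimates d_j, keyed by language.
d_by_bible = [("deu", 0.11), ("deu", 0.16), ("deu", 0.09),
              ("fra", 0.02), ("fra", 0.07),
              ("eng", -0.05), ("eng", -0.03)]

groups = defaultdict(list)
for lang, d in d_by_bible:
    groups[lang].append(d)

for lang, ds in sorted(groups.items()):
    if len(ds) > 1:
        # Sample standard deviation (ddof = 1): variation attributable to translation style.
        print(lang, round(stdev(ds), 3))
```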
We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a,b,c,d,e,f,g, 2013b,a). We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.

5 What Correlates with Difficulty?

Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity? Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016), but the languages of our Bibles are often not. We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora. Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995)’s procedure: only p ≤ .05 · 5/28 ≈ 0.009 is significant.

Figure 4: Difficulties of 21 Europarl languages (left) and 106 Bibles (right), comparing difficulties when estimated from BPE-RNNLMs vs. char-RNNLMs. Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research. In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.

Morphological Counting Complexity Cotterell et al. (2018) suspected that inflectional morphology (i.e., the grammatical requirement to choose among forms like “talk,” “talks,” “talking”) was mainly responsible for difficulty in modeling. They found a language’s Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty. We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models. Comparing the scatterplots for both language models in Figure 5 with Cotterell et al. (2018)’s Figure 1, we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.
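For reference, here is a minimal sketch of the Benjamini–Hochberg step-up procedure behind the multiple-test correction used throughout this section; the p-values below are made up, and the assumption that m = 28 tests with 5 passing ones yields the quoted cutoff is taken from the threshold stated in the text.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the indices rejected at FDR level alpha (BH step-up procedure)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:     # compare k-th smallest p to k * alpha / m
            k_max = rank
    return set(order[:k_max])

# Made-up p-values for m = 28 correlation tests; with 5 small ones the working
# cutoff is 5 * 0.05 / 28 ~ 0.009, matching the threshold quoted above.
pvals = [0.001, 0.002, 0.004, 0.006, 0.008] + [0.2] * 23
print(sorted(benjamini_hochberg(pvals)))   # -> [0, 1, 2, 3, 4]
print(5 * 0.05 / 28)                       # -> 0.00892...
```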
We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty. Perhaps finer measures of morphological complexity would be more predictive.

Head-POS Entropy Dehouck and Denis (2018) propose an alternative measure of morphosyntactic complexity. Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token’s parent, conditioned on the token’s type. In a language where this HPE-mean metric is low, most tokens can predict the POS of their parent even without context. We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Straková, 2017). HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018), which is the entropy of the POS tag of a token’s parent given that particular token’s type. We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens. We remark that in each language, HPE is 0 for most tokens. As predictors of language difficulty, HPE-mean has a Spearman’s ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman’s ρ = .032/.158 (p > .8/.4), so this is not a positive result.

Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008). Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017). On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average. Do language models find short dependencies easier? We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees. We again use the automatically-parsed Europarl data and compute dependency lengths using the Futrell et al. (2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers). Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman’s ρ = .196/.092 (p = .394/.691), Pearson’s r = .486/.522 (p = .032/.015). However, after correcting for multiple comparisons, this is also non-significant.[19]

Figure 5: MCC does not predict difficulty on Europarl. Spearman’s ρ is .091 / .110 with p > .6 for BPE-RNNLM (left) / char-RNNLM (right).

WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages. Similarly to the Bible situation, not all features are present for all languages—and for some of our Bibles, no information can be found at all.
We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A “Prefixing vs. Suffixing in Inflectional Morphology” and 81A “Order of Subject, Object and Verb.” The results are again not quite as striking as we would hope. In particular, in Mood’s median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1). We therefore turn our attention to much simpler, yet strikingly effective heuristics.

Raw character sequence length An interesting correlation emerges between language difficulty for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2). On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction. The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017).

Footnote 19: We also caution that the significance test for Pearson’s r assumes that the two variables are bivariate normal. If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs. 1–2, §5).

Raw word inventory Our most predictive feature, however, is the size of the word inventory. To obtain this number, we count the number of distinct types |V| in the (tokenized) training set of a language (detailed results in Appendix F.3).[20] While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features—but only on the BPE model (p < 1e−11). Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V| (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries). Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages. A similarly predictive feature on Bibles—whose numerator is this word inventory size—is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.

An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson’s ρ = .693 at p = .0005, Spearman’s ρ = .666 at p = .0009), so the original claim in Cotterell et al. (2018) about MCC may very well hold true after all. Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.[21] Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.

Footnote 20: A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance. We did calculate the entropy of the unigram distribution over words in a language, but we found that it is strongly correlated with the size of the word inventory and not any more predictive.
Footnote 21: Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018), a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.

6 Evaluating Translationese

Our previous experiments treat translated sentences just like natively generated sentences. But since Europarl contains information about which language an intent was originally expressed in,[22] here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell? We tackle this question by splitting each language j into two sub-languages, “native” j and “translated” j, resulting in 42 sub-languages with 42 difficulties.[23] Each intent is expressed in at most 21 sub-languages, so this approach requires a regression method that can handle missing data, such as the probabilistic approach we proposed in §3. Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n_i factors. Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.

In a first attempt, we simply use our already-trained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limiting ourselves to only splitting the eight languages that have at least 500 native sentences[24] (to ensure stable results). Indeed we seem to find that native sentences are slightly more difficult: their d_j is 0.027 larger (± 0.023, averaged over our selected 8 languages).

But are they? This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese). Thus, translationese might merely be different (Rabinovich and Wintner, 2015)—not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable. To remove this confound, we must train our language models on equal parts translationese and native text. We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents. We thus choose to balance only one language—we train all models for all languages, making sure that the training set for one language is balanced—and then perform our regression, reporting the translationese and native difficulties only for the balanced language. We repeat this process for every language that has enough intents. We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).

Footnote 22: It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016), one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.
Footnote 23: This method would also allow us to study the effect of source language, yielding d_{j←j′} for sentences translated from j′ into j. Similarly, we could have included surprisals from both models, jointly estimating d_{j,char-RNN} and d_{j,BPE} values.
Footnote 24: en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594)
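A minimal sketch of this balancing step (the sentence records and proportions below are hypothetical; the real pipeline operates over Europarl intents): sample equal numbers of native and translated training sentences for the one language being balanced.

```python
import random

random.seed(0)

# Hypothetical training pool for the language being balanced: (sentence_id, is_native).
pool = [(i, i % 5 == 0) for i in range(5000)]        # roughly 20% native sentences
native = [s for s, nat in pool if nat]
translated = [s for s, nat in pool if not nat]

# Balance: equal parts native and translationese, capped by the smaller side,
# so that neither variety is over-represented in the language model's training data.
k = min(len(native), len(translated))
balanced = random.sample(native, k) + random.sample(translated, k)
random.shuffle(balanced)
print(len(balanced), "training sentences:", k, "native and", k, "translated")
```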
To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages[25] instead of 21, thus ensuring that each of these 10 languages has enough native sentences. On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is “easier” to model (Baker, 1993).[26]

Footnote 25: da, de, en, es, fi, fr, it, nl, pt, sv
Footnote 26: Of course we cannot claim that it is just as hard to read or translate as native text—those are different claims altogether—but only that it is as easy to monolingually language-model.

7 Conclusion

There is a real danger in cross-linguistic studies of over-extrapolating from limited data. We reevaluated the conclusions of Cotterell et al. (2018) on a larger set of languages, requiring new methods to select fully parallel data (§4.2) or handle missing data. We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora. Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors. However, a language’s vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages. Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.

Acknowledgments

This work was supported by the National Science Foundation under Grant No. 1718846.

References

Željko Agić, Anders Johannsen, Barbara Plank, Héctor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301–312.
Mona Baker. 1993. Corpus linguistics and translation studies: Implications and applications. Text and Technology: In Honour of John Sinclair, pages 233–250.
Emily M. Bender. 2009. Linguistically naïve != language independent: Why NLP needs linguistic typology. In EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics, pages 26–32.
Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289–300.
Bob Carpenter, Andrew Gelman, Matthew Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: A probabilistic programming language. Journal of Statistical Software, Articles, 76(1):1–32.
Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of NAACL, pages 536–541.
Mathieu Dehouck and Pascal Denis. 2018. A framework for understanding the role of morphology in universal dependency parsing. In Proceedings of EMNLP, pages 2864–2870.
Chris Drummond. 2009. Replicability is not reproducibility: Nor is it good science. In Proceedings of the Evaluation Methods for Machine Learning Workshop at the 26th ICML.
Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Lawrence Fenton. 1960.
The sum of log-normal probability distributions in scatter transmission systems. IRE Transactions on Communications Systems, 8(1):57–67.
Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015. Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences, 112(33):10336–10341.
Johannes Graën, Dolores Batinic, and Martin Volk. 2014. Cleaning the Europarl corpus for linguistic applications. In Konvens, pages 222–227.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Arya McCarthy, Sabrina J. Mielke, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal morphology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC). European Language Resources Association (ELRA).
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, pages 79–86.
Charles J. Kowalski. 1972. On the effects of non-normality on the distribution of the sample product-moment correlation coefficient. Journal of the Royal Statistical Society. Series C (Applied Statistics), 21(1):1–12.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia.
Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2012. Adapting translation models to translationese improves SMT. In Proceedings of EACL, pages 255–265.
Haitao Liu. 2008. Dependency distance as a metric of language comprehension difficulty. Journal of Cognitive Science, 9(2):159–191.
Haitao Liu, Chunshan Xu, and Junying Liang. 2017. Dependency distance: A new perspective on syntactic patterns in natural languages. Physics of Life Reviews, 21:171–193.
Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel Bible corpus. In Proceedings of LREC, pages 3158–3163.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240.
Bart van Merriënboer, Amartya Sanyal, Hugo Larochelle, and Yoshua Bengio. 2017. Multiscale sequence modeling with a learned dictionary. arXiv preprint arXiv:1707.00762.
Sabrina J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. arXiv preprint arXiv:1804.08205.
NIST Multimodal Information Group. 2010a. NIST 2002 Open Machine Translation (OpenMT) evaluation LDC2010T10.
NIST Multimodal Information Group. 2010b. NIST 2003 Open Machine Translation (OpenMT) evaluation LDC2010T11.
NIST Multimodal Information Group. 2010c. NIST 2004 Open Machine Translation (OpenMT) evaluation LDC2010T12.
NIST Multimodal Information Group. 2010d. NIST 2005 Open Machine Translation (OpenMT) evaluation LDC2010T14.
NIST Multimodal Information Group. 2010e. NIST 2006 Open Machine Translation (OpenMT) evaluation LDC2010T17.
NIST Multimodal Information Group. 2010f. NIST 2008 Open Machine Translation (OpenMT) evaluation LDC2010T21.
NIST Multimodal Information Group. 2010g. NIST 2009 Open Machine Translation (OpenMT) evaluation LDC2010T23.
NIST Multimodal Information Group. 2013a. NIST 2008-2012 Open Machine Translation (OpenMT) progress test sets LDC2013T07.
NIST Multimodal Information Group. 2013b. NIST 2012 Open Machine Translation (OpenMT) evaluation LDC2013T03.
Lewis M. Paul, Gary F. Simons, Charles D. Fennig, et al. 2009. Ethnologue: Languages of the world, 19 edition. SIL International, Dallas.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL, pages 2227–2237.
Ella Rabinovich and Shuly Wintner. 2015. Unsupervised identification of translationese. Transactions of the Association for Computational Linguistics, 3:419–432.
Ella Rabinovich, Shuly Wintner, and Ofek Luis Lewinsohn. 2016. A parallel corpus of translationese. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 140–155. Springer.
Philip Resnik, Mari Broman Olsen, and Mona Diab. 1999. The bible as a parallel corpus: Annotating the ‘book of 2000 tongues’. Computers and the Humanities, 33(1):129–153.
Benoît Sagot. 2013. Comparing complexity measures. In Computational Approaches to Morphological Complexity.
S. C. Schwartz and Y. S. Yeh. 1982. On the distribution function and moments of power sums with log-normal components. The Bell System Technical Journal, 61(7):1441–1462.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL, pages 1715–1725.
Claude E. Shannon. 1951. Prediction and entropy of printed English. Bell Labs Technical Journal, 30(1):50–64.
Milan Straka, Jan Hajič, and Jana Straková. 2016. UDPipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In Proceedings of LREC, pages 4290–4297.
Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In CoNLL 2017 Shared Task: Multilingual parsing from raw text to Universal Dependencies, pages 88–99.
Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of ICML, pages 1017–1024.
David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research, pages 1–8.

A A Note on Missing Data

We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates. Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped. This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from—or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation (i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself). For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.[27]

B Regression, Model 3: Handling outliers cleverly

Consider the problem of outliers.
In some cases, sloppy translation will yield a y_ij that is unusually high or low given the y_ij′ values of other languages j′. Such a y_ij is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation. However, under Model 1 or 2, we could not simply explain this corrupted y_ij with the random residual ε_ij since large |ε_ij| is highly unlikely under the Gaussian assumption of those models. Rather, y_ij would have significant influence on our estimate of the per-language effect d_j. This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.[28]

How can we include this idea into our models? First let us identify two failure modes:

(a) part of a sentence was omitted (or added) during translation, changing the n_i additively; thus we should use a noisy n_i + ν_ij in place of n_i in equations (1) and (5)

(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n_i · exp ν_ij instead of n_i in equations (1) and (5)

In both cases ν_ij ~ Laplace(0, b), i.e., ν_ij specifies sparse additive or multiplicative noise in ν_ij (on language j only).[29]

Footnote 27: Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.
Footnote 28: An alternative would be to use a method like RANSAC to discard y_ij values that do not appear to fit.
Footnote 29: However, version (a) is then deficient since it then incorrectly allocates some probability mass to n_i + ν_ij < 0 and thus y_ij < 0 is possible. This could be fixed by using a different sparsity-inducing distribution.

Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6)):

    y_ij = (n_i · exp ν_ij) · exp(d_j) · exp(ε_ij) = n_i · exp(d_j) · exp(ε_ij + ν_ij)    (7)
    ν_ij ~ Laplace(0, b)    (8)
    σ_i² = ln(1 + (exp(σ²) − 1) / (n_i · exp ν_ij))    (9)
    ε_ij ~ N((σ² − σ_i²)/2, σ_i²)    (10)

Comparing equation (7) to equation (1), we see that we are now modeling the residual error in log y_ij as a sum of two noise terms a_ij = ν_ij + ε_ij and penalizing it by (some multiple of) the weighted sum of |ν_ij| and ε_ij², where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.[30] The weighting of the two terms is a tunable hyperparameter. We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.

Footnote 30: The cheapest penalty or explanation of the weighted sum δ|ν_ij| + ½ε_ij² for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = a + δ if a ≤ −δ (found by minimizing δ|ν| + ½(a − ν)², a convex function of ν). This implies that we incur a quadratic penalty ½a² if |a| ≤ δ, and a linear penalty δ(|a| − ½δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d_j will be something between a mean and a median.

C Goodness of fit of our difficulty estimation models

Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d_j (and sometimes also the estimated variance σ²) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of the other parameters, in particular n_i for the new sentences i.
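As a quick numerical check of the closed-form minimizer in footnote 30 above (a sketch, not taken from the paper), the soft-threshold solution can be compared against a brute-force grid search over ν.

```python
import numpy as np

def soft_threshold(a, delta):
    """Closed-form minimizer of  delta*|nu| + 0.5*(a - nu)**2  (cf. footnote 30)."""
    if abs(a) <= delta:
        return 0.0
    return a - delta if a > 0 else a + delta

def brute_force(a, delta):
    grid = np.linspace(-10, 10, 200001)                 # step 1e-4
    obj = delta * np.abs(grid) + 0.5 * (a - grid) ** 2
    return grid[np.argmin(obj)]

for a in (-2.0, -0.3, 0.0, 0.4, 3.0):
    delta = 0.5
    print(a, round(soft_threshold(a, delta), 4), round(brute_force(a, delta), 4))
```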
Figure 6: Achieved log-likelihoods on held-out data. Top: Europarl (BPE), Bottom: Bibles, Left: MAP inference, Right: HMC inference (posterior mean).

The error bars are the standard deviations when running the model over different subsets of data. The “simplex” versions of regression in Figure 6 force all d_j to add up to the number of languages (i.e., encouraging each one to stay close to 1). This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation). For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate. For stability, we in all cases take the best result when initializing the new parameters randomly or “sensibly,” i.e., the n_i of an intent i is initialized as the average of the corresponding sentences’ y_ij.

D Data selection: Europarl

In the “Corrected & Structured Europarl Corpus” (CoStEP) corpus (Graën et al., 2014), sessions are grouped into turns, each turn has one speaker (that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext. We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents. After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8). Since we want a fair comparison, we use the aforementioned 14% of Europarl, giving us 78169 intents that are represented in all 21 languages.

Figure 7: In how many languages are the intents in Europarl translated? (intents from ill-fitting turns included in 100%, but not plotted)

Figure 8: How many sentences are there per Europarl language?

Figure 9: How many of the Europarl sentences in one language are “native”?

Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text. We tokenize it using the reversible language-agnostic tokenizer of Mielke and Eisner (2018)[31] and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set. This way we ensure uniform division over sessions of the parliament and sizes of 2/3, 1/6, and 1/6, respectively.

D.1 How are the source languages distributed?

An obvious question we should ask is: how many “native” sentences can we actually find in Europarl?
One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with “unknown” as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences). Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented. Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how “natively spoken” the language is in Europarl, shown in Figure 9. E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence. Turning to the collection assembled 31http://sjmielke.com/papers/tokenize/ (a) All 1174 Bibles, in packets of 20 verses, Bibles sorted by number of verses present, verses in chronological order. The New Testament (third quarter of verses) is present in almost every Bible. (b) The 131 Bibles with at least 20000 verses, in packets of 150 verses (this time, both sorted). The optimization task is to remove rows and columns in this picture until only black remains. Figure 10: Presence (black) of verses (y-axis) in Bibles (x-axis). Both pictures are downsampled, resulting in grayscale values for all packets of N values. by Mayer and Cysouw (2014), we see that it has over 1000 New Testaments, but far fewer complete Bibles. Despite being a fairly standardized book, not all Bibles are fully parallel. Some verses and sometimes entire books are missing in some Bibles— some of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations. For us, this means that we can neither simply take all translations that have “the entire thing” (in fact, no single Bible in the set covers the union of all others’ verses), nor can we take all Bibles and work with the verses that they all share (because, again, no single verse is shared over all given Bibles). The whole situation is visualized in Figure 10. We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible. Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models. 
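To make this selection objective concrete, one possible linearization of the product of "number of Bibles kept" and "number of verses they all share" is sketched below. It uses the open-source PuLP library and a hypothetical presence matrix, is only an illustration of the shape of the problem rather than the formulation actually used, and ignores the tie-breaking rule in favor of more Bibles.

```python
import pulp

def select_bibles(present, min_verses=20000):
    """present[b][v] is True iff Bible b contains verse v.

    Maximize (number of selected Bibles) * (number of verses shared by all
    of them), linearized with pair variables z[b, v].  For realistically
    sized data this creates very many variables; the point here is only
    the structure of the formulation.
    """
    bibles = range(len(present))
    verses = range(len(present[0]))

    prob = pulp.LpProblem("bible_selection", pulp.LpMaximize)
    x = {b: pulp.LpVariable(f"x_{b}", cat="Binary") for b in bibles}  # Bible b selected
    y = {v: pulp.LpVariable(f"y_{v}", cat="Binary") for v in verses}  # verse v counted
    z = {(b, v): pulp.LpVariable(f"z_{b}_{v}", cat="Binary")
         for b in bibles for v in verses}

    # Objective: total number of (selected Bible, shared verse) pairs.
    prob += pulp.lpSum(z.values())

    for b in bibles:
        for v in verses:
            prob += z[b, v] <= x[b]
            prob += z[b, v] <= y[v]
            if not present[b][v]:
                # A verse may only be counted if every selected Bible has it.
                prob += y[v] + x[b] <= 1

    prob += pulp.lpSum(y.values()) >= min_verses  # at least 20000 shared verses

    prob.solve()
    return [b for b in bibles if x[b].value() == 1]
```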
This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours. The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages,[32] spanning 13 language families.[33] The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1. We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set. This way we ensure uniform division over books of the Bible and sizes of 2/3, 1/6, and 1/6, respectively.

Figure 11: Tokens and characters (as reported by wc -w/-m) of the 106 Bibles. Equal languages share a color, all others are shown in faint gray. Most Bibles have around 700k tokens and 3.6M characters; outliers like Mandarin Chinese (cmn) are not surprising.

English corpus                 lines      words       chars
WikiText-103                 1809468  101880752   543005627
Wikipedia (text8, ∈ [a-z ]*)       1   17005207   100000000
Europarl                       78169    6411731    37388604
WikiText-2                     44836    2507005    13378183
PTB                            49199    1036580     5951345
62/106-parallel Bible          25996    ∼700000    ∼3600000

Table 1: Sizes of various language modeling datasets, numbers estimated using wc.

[32] afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom

[33] 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) for all languages, manually fixing tlh ↦ Constructed language.

F Detailed regression results

F.1 WALS

We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2:

26A (Inflectional Morphology)        BPE                chars
1 Little affixation (5)          -0.0263 (± .034)    0.0131 (± .033)
2 Strongly suffixing (22)         0.0037 (± .049)   -0.0145 (± .049)
3 Weakly suffixing (2)            0.0657 (± .007)   -0.0317 (± .074)
6 Strong prefixing (1)            0.1292            -0.0057

81A (Order of S, O and V)            BPE                chars
1 SOV (7)                         0.0125 (± .106)    0.0029 (± .099)
2 SVO (18)                        0.0139 (± .058)   -0.0252 (± .053)
3 VSO (5)                        -0.0241 (± .041)   -0.0129 (± .089)
4 VOS (2)                         0.0233 (± .026)    0.0353 (± .078)
7 No dominant order (4)           0.0252 (± .059)    0.0206 (± .029)

Table 2: Average difficulty for languages with certain WALS features (with number of languages).
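The per-feature averages in Table 2 are straightforward to reproduce once one has a table of difficulty estimates and WALS annotations; a minimal sketch follows, in which the column names and the numeric values are placeholders rather than the estimates reported above.

```python
import pandas as pd

# Hypothetical input: one row per language with its estimated difficulty
# under the BPE and character models, plus a WALS feature value.
df = pd.DataFrame({
    "language": ["deu", "fin", "eng", "vie"],
    "d_bpe":    [0.021, 0.004, -0.026, 0.015],      # placeholder values
    "d_char":   [-0.012, -0.020, 0.010, 0.003],     # placeholder values
    "wals_26A": ["Strongly suffixing", "Strongly suffixing",
                 "Little affixation", "Little affixation"],
})

# Mean and sample standard deviation per WALS category, as in Table 2.
summary = (df.groupby("wals_26A")[["d_bpe", "d_char"]]
             .agg(["mean", "std", "count"]))
print(summary)
```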
F.2 Raw character sequence length

We report correlation measures and significance values when regressing on raw character sequence length in Table 3:

                           BPE                 char
dataset     statistic      ρ       p           ρ       p
Europarl    Pearson        .509    .0185       .621    .00264
            Spearman       .423    .0558       .560    .00832
Bibles      Pearson        .015    .917        .527    .000013
            Spearman       .014    .915        .434    .000481

Table 3: Correlations and significances when regressing on raw character sequence length. Significant correlations are boldfaced.

F.3 Raw word inventory

We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4:

                           BPE                 char
dataset     statistic      ρ       p           ρ       p
Europarl    Pearson        .040    .862        .107    .643
            Spearman       .005    .982        .008    .973
Bibles      Pearson        .742    8e-12       .034    .792
            Spearman       .751    3e-12      -.025    .851

Table 4: Correlations and significances when regressing on the size of the raw word inventory.
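The numbers in Tables 3 and 4 are plain Pearson and Spearman coefficients between per-language difficulty estimates and a per-language covariate. A sketch of how such values are obtained is given below; the variable names and the randomly generated data are illustrative only.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def report(difficulties, covariate, label):
    """Pearson and Spearman correlation of difficulties against a covariate."""
    r_p, p_p = pearsonr(difficulties, covariate)
    r_s, p_s = spearmanr(difficulties, covariate)
    print(f"{label}: Pearson rho={r_p:.3f} (p={p_p:.3g}), "
          f"Spearman rho={r_s:.3f} (p={p_s:.3g})")

# Illustrative placeholder data: one value per language.
rng = np.random.default_rng(0)
difficulties = rng.normal(0, 0.05, size=62)
avg_char_len = rng.normal(3_600_000, 200_000, size=62)
report(difficulties, avg_char_len, "char-level difficulty vs. corpus length")
```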
2019
491
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4990–4995 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4990 Analyzing the Limitations of Cross-lingual Word Embedding Mappings Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, Eneko Agirre IXA NLP Group University of the Basque Country (UPV/EHU) [email protected] {mikel.artetxe, gorka.labaka, a.soroa, e.agirre}@ehu.eus Abstract Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states that word embeddings in different languages have approximately the same structure, it is not clear whether this is an inherent limitation of mapping approaches or a more general issue when learning crosslingual embeddings. So as to answer this question, we experiment with parallel corpora, which allows us to compare offline mapping to an extension of skip-gram that jointly learns both embedding spaces. We observe that, under these ideal conditions, joint learning yields to more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in bilingual lexicon induction. We thus conclude that current mapping methods do have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal. 1 Introduction Cross-lingual word embeddings have attracted a lot of attention in recent times. Existing methods can be broadly classified into two categories: joint methods, which simultaneously learn word representations for multiple languages on parallel corpora (Gouws et al., 2015; Luong et al., 2015), and mapping methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations (Mikolov et al., 2013a; Artetxe et al., 2018a). While early work in cross-lingual word embeddings was dominated by joint approaches, recent research has almost exclusively focused on mapping methods, which have the advantage of requiring little or no cross-lingual signal (Zhang et al., 2017; Conneau et al., 2018; Artetxe et al., 2018b). For mapping methods to work, it is necessary that embedding spaces in different languages have a similar structure (i.e. are approximately isomorphic), as it would otherwise be hopeless to find a linear map from one space to another. Several authors have questioned this assumption, showing that linguistic and domain divergences cause strong mismatches in embedding spaces, which in turn heavily hinders the performance of these methods (Søgaard et al., 2018; Patra et al., 2019). Nevertheless, it is not clear whether this mismatch is a consequence of separately training both embedding spaces, and thus an inherent limitation of mapping approaches, or an insurmountable obstacle that arises from the linguistic divergences across languages, and hence a more general issue when learning cross-lingual word embeddings. The goal of this paper is to shed light on this matter so as to better understand the nature and extension of these limitations. For that purpose, we experiment with parallel corpora, which allows us to compare mapping methods and joint methods under the exact same conditions, and analyze the properties of the resulting embeddings. 
Our results show that, under these conditions, joint learning yields to more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in Bilingual Lexicon Induction (BLI). This suggests that, despite the advantage of requiring weaker cross-lingual signal, current mapping methods do have strong limitations, as they are not able to leverage the available evidence as effectively as joint methods under ideal conditions. We thus conclude that future research should try to combine the best of both worlds, exploring joint methods to learn cross-lingual word embeddings with weaker supervision. 4991 2 Related work Cross-lingual word embeddings represent words from multiple languages in a common vector space. So as to train them, joint methods simultaneously learn the embeddings in the different languages, which requires some form of cross-lingual supervision. This supervision usually comes from parallel corpora, which can be aligned at the word level (Luong et al., 2015), or only at the sentence level (Gouws et al., 2015). In addition to that, methods that rely on comparable corpora (Vuli´c and Moens, 2016) or large bilingual dictionaries (Duong et al., 2016) have also been proposed. For a more detailed survey, the reader is referred to Ruder et al. (2017). In contrast, offline mapping approaches work by aligning separately trained word embeddings in different languages. For that purpose, early methods required a training dictionary, which was used to learn a linear transformation that mapped these embeddings into a common space (Mikolov et al., 2013a; Artetxe et al., 2018a). The amount of required supervision was later reduced through selflearning methods (Artetxe et al., 2017), and then completely eliminated through adversarial training (Zhang et al., 2017; Conneau et al., 2018) or more robust iterative approaches combined with initialization heuristics (Artetxe et al., 2018b; Hoshen and Wolf, 2018). There are several authors that have discussed the potential limitations of these mapping approaches. For instance, Søgaard et al. (2018) observe that the assumption that separately trained embeddings are approximately isomorphic is not true in general, showing that the performance of mapping methods is conditioned by the language pair, the comparability of the training corpora, and the parameters of the word embedding algorithms. Similarly, Patra et al. (2019) show that the isomorphism assumption weakens as the languages involved become increasingly etymologically distant. Finally, Nakashole and Flauger (2018) argue that embedding spaces in different languages are linearly equivalent only at local regions, but their global structure is different. Nevertheless, neither of these works does systematically analyze the extent to which these limitations are inherent to mapping approaches. To the best of our knowledge, ours is the first work comparing joint and mapping methods in the exact same conditions, characterizing the nature and impact of such limitations. 3 Experimental design We next describe the cross-lingual embedding methods, evaluation measures and datasets used in our experiments. 3.1 Cross-lingual embedding methods We use the following procedure to learn crosslingual embeddings, which are representative of the state-of-the-art in mapping and joint methods: Mapping: We first train 300-dimensional skipgram embeddings for each language using word2vec (Mikolov et al., 2013b) with 10 negative samples, a sub-sampling threshold of 1e-5 and 5 training iterations. 
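For reference, the monolingual step with the hyperparameters listed above could be reproduced with, for example, gensim's skip-gram implementation; this is a sketch under that assumption (the original work used the word2vec tool itself, and the corpus path is a placeholder).

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# corpus.tok.txt is a placeholder: one tokenized sentence per line.
sentences = LineSentence("corpus.tok.txt")

model = Word2Vec(
    sentences,
    vector_size=300,        # 300-dimensional embeddings
    sg=1,                   # skip-gram
    negative=10,            # 10 negative samples
    sample=1e-5,            # sub-sampling threshold
    epochs=5,               # 5 training iterations
    max_final_vocab=200000, # keep the 200,000 most frequent words
    workers=4,
)
model.wv.save_word2vec_format("embeddings.vec")
```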
Having done that, we map the resulting monolingual embeddings to a cross-lingual space using the unsupervised mode in VecMap1 (Artetxe et al., 2018b), which builds an initial solution based on heuristics and iteratively improves it through self-learning. Joint learning: We use the BiVec2 tool proposed by Luong et al. (2015), an extension of skip-gram that, given a word aligned parallel corpus, learns to predict the context of both the source word and the target word aligned with it. For that purpose, we first word align our training corpus using FastText (Dyer et al., 2013). Given that BiVec is a natural extension of skip-gram, we use the exact same hyperparameters as for the mapping method. In both cases, we restrict the vocabulary to the most frequent 200,000 words. 3.2 Evaluation measures We use the following measures to characterize cross-lingual embeddings: Isomorphism. Intuitively, the notion of isomorphism captures the idea of how well the embeddings in both languages fit together (i.e. the degree of their structural similarity). So as to measure it, we use the eigenvalue similarity metric proposed by Søgaard et al. (2018). For that purpose, we first center and normalize the embeddings, calculate the nearest neighbor graphs of the 10, 000 most frequent words in each language, and compute their Laplacian matrices L1 and L2. We then find the smallest k1 such that the sum of the largest k1 eigenvalues of L1 is at least 90% of the sum of all its eigenvalues, and analogously for k2 and L2. Finally we set k = min(k1, k2), and define the eigenvalue similarity of the two spaces as the sum 1https://github.com/artetxem/vecmap 2https://github.com/lmthang/bivec 4992 Eig. Hub. NN (↑) Hub. CSLS (↑) P@1 Eparl (↑) P@1 MUSE (↑) sim. (↓) 10% 100% 10% 100% NN CSLS NN CSLS FI-EN Joint learning 28.9 0.45 52.8 1.13 57.5 65.2 68.3 83.4 85.2 Mapping 115.9 0.12 33.8 0.38 46.1 26.3 34.8 44.6 56.8 ES-EN Joint learning 31.2 0.65 66.0 1.40 71.3 68.7 69.3 91.9 92.4 Mapping 47.8 0.58 63.1 1.31 69.1 65.4 67.0 87.1 89.0 DE-EN Joint learning 32.8 0.58 58.8 1.29 65.2 70.6 70.4 90.1 89.2 Mapping 39.4 0.60 58.7 1.33 64.8 65.3 66.4 82.4 83.1 IT-EN Joint learning 26.5 0.75 69.7 1.61 74.2 71.5 71.8 90.6 90.0 Mapping 43.9 0.65 63.9 1.53 70.8 64.1 67.2 84.4 85.9 Table 1: Evaluation measures for the two cross-lingual embedding approaches. Arrows indicate whether lower (↓) or higher (↑) is better. See text for further details. of the squared differences between the k largest eigenvalues of L1 and L2, ∆= Pk i=1(λ1i −λ2i)2. Hubness. Cross-lingual word embeddings are known to suffer from the hubness problem (Radovanovi´c et al., 2010a,b; Dinu et al., 2015), which causes a few points (known as hubs) to be the nearest neighbors of many other points in high-dimensional spaces. So as to quantify it, we measure the minimum percentage of target words HN that are the nearest neighbor of at least N% of the source words, where N is a parameter of the metric.3 For instance, a hubness value of H10% = 0.3% would indicate that 0.3% of the target words are the nearest neighbors of 10% of the source words. This way, lower values of HN are indicative of a higher level of hubness, and the parameter N serves to get a more complete picture of the distribution of hubs. For brevity, we report results for N = 10% and 100%. While the nearest neighbor retrieval is usually done according to cosine similarity, Conneau et al. 
(2018) proposed an alternative measure, called Cross-domain Similarity Local Scaling (CSLS), that penalizes the similarity scores of hubs, which in turn reduces the hubness level. So as to better understand its effect, we report results for both CSLS and standard nearest neighbor with cosine similarity (NN). 3Some previous work uses an alternative hubness metric that computes the hubness level N(t) of each target word t (i.e. the number of source words whose nearest neighbor is t) and measures the skewness of the resulting distribution. However, we find this metric to have two important drawbacks: 1) its magnitude is not easily interpretable, and 2) it is invariant to the variance of the distribution, even if higher variances are indicative of a higher hubness level. For instance, we observed that two very similar spaces (produced running word2vec twice over the same corpora) mapped to each other produced unusually high skewness scores, caused by the scale normalization done in skewness (division by the standard deviation). Bilingual Lexicon Induction (BLI). Following common practice, we induce a bilingual dictionary by linking each word in the source language with its nearest neighbor in the target language. So as to evaluate the quality of the induced translations, we compare them to a gold standard dictionary, and measure the precision at 1. We report results for both nearest neighbor with cosine similarity (NN) and the aforementioned CSLS retrieval. Note that, in addition to having a practical application, BLI performance is an informative measure of the quality of the embeddings, as a good cross-lingual representation should place equivalent words close to each other. 3.3 Datasets We experiment with 4 language pairs with English as the target language, covering 3 relatively close languages (German, Spanish and Italian) and a non-indoeuropean agglutinative language (Finnish). All embeddings were trained on the BiCleaner v3.0 version of the ParaCrawl corpus,4 a parallel corpus collected through crawling and filtered according to Sánchez-Cartagena et al. (2018). The size of this corpus changes from one language to another: German and Spanish are the largest (503 and 492 million tokens in the English side, respectively), followed by Italian (308 million tokens), and Finnish (55 million tokens). As for the evaluation dictionaries for BLI, we use two datasets that have been widely used in the literature. The first one, which we call Eparl, was first introduced by Dinu et al. (2015) and subsequently extended by Artetxe et al. (2017) and Artetxe et al. (2018a), and consists of 1,500 test entries extracted from Europarl word alignments 4https://paracrawl.eu/ 4993 and uniformly distributed in 5 frequency bins. The second one, which we call MUSE, consists of another 1,500 test entries, and was compiled by Conneau et al. (2018) using internal translation tools. 4 Results Table 1 reports the results of all the evaluation measures for both cross-lingual embedding approaches. The eigenvalue similarity metric shows that joint learning obtains substantially more isomorphic embedding spaces than the mapping approach, indicating that the representations it learns for different languages have a more similar structure. At the same time, it is remarkable that the eigenvalue similarity for the four language pairs is very close in the case of joint learning, with values that range between 26.5 and 32.8. 
In contrast, the degree of isomorphism for Finnish-English is substantially lower than the rest in the case of the mapping approach, which is likely caused by the typological differences between these languages and the smaller size of the training corpus. This suggests that joint learning is able to appropriately fit divergent languages together, which is troublesome when the embedding spaces are learned separately and then mapped together. When it comes to hubness, our results show that joint learning is generally less sensitive to this problem, although differences greatly vary depending on the language pair. This way, both approaches have a similar behavior in German, while joint learning does moderately better for Spanish and Italian, and the difference becomes very large for Finnish. Once again, this suggests that mapping methods are more severely affected by linguistic divergences. At the same time, we observe that CSLS is very effective at reducing the hubness level, specially for offline mapping. Finally, we observe that joint learning outperforms offline mapping in BLI. This difference is particularly pronounced for Finnish-English (e.g. 26.3% vs 65.2% for NN on Eparl), which is in line with the general behavior observed so far. At the same time, our results show that CSLS is most helpful with offline mapping, but it even has a negative impact with joint learning for some language pairs. This can be partly explained by the fact that the latter approach is less sensitive to hubness, which CSLS tries to address. 5 Discussion Our analysis reveals that, when trained on parallel corpora under the exact same conditions, joint learning obtains substantially better cross-lingual representations than offline mapping, yielding to more isomorphic embeddings that are less sensitive to hubness and obtain stronger results on BLI. Moreover, our results show that divergences across languages can be effectively mitigated by jointly learning their representations, whereas trying to align separately trained embeddings is troublesome when such divergences exist. Note that this should not be interpreted as a claim that existing joint methods are superior to existing mapping methods. In fact, we believe that both families serve different purposes in that they require a different degree of supervision (e.g. mapping methods can exploit monolingual corpora, which is useful in practical settings), so the choice to use one approach or the other should depend on the resources that are available in each particular case. Nevertheless, our results do show that offline mapping has fundamental limitations that, given the available evidence, seem specific to this particular approach. For that reason, we argue that, while recent research on cross-lingual word embeddings has almost exclusively focused on mapping methods, future work should consider alternative approaches to try to overcome the limitations of this paradigm. In particular, we believe that an interesting direction would be to adapt joint methods so they can work with monolingual corpora. 6 Conclusions and future work In this work, we compare the properties of crosslingual word embeddings trained through joint learning and offline mapping on parallel corpora. We observe that, under these ideal conditions, joint learning yields to more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in bilingual lexicon induction, concluding that current mapping methods have strong limitations. 
This analysis calls for further research on alternatives to current mapping methods, which have been very successful on unsupervised settings. In particular, we would like to explore new methods to jointly learn cross-lingual embeddings on monolingual corpora. 4994 Acknowledgments This research was partially supported by the Spanish MINECO (UnsupNMT TIN2017-91692EXP and DOMINO PGC2018-102041-B-I00, cofunded by EU FEDER), the BigKnowledge project (BBVA foundation grant 2018), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe was supported by a doctoral grant from the Spanish MECD. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 5012–5019. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), workshop track. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1285– 1295, Austin, Texas. Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed representations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning, pages 748–756. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 469–478, Brussels, Belgium. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151–159. Association for Computational Linguistics. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. 
arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Ndapa Nakashole and Raphael Flauger. 2018. Characterizing departures from linearity in word translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 221–227, Melbourne, Australia. Association for Computational Linguistics. Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R Gormley, and Graham Neubig. 2019. BLISS in non-isometric embedding spaces. Miloš Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010a. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11(Sep):2487–2531. Milos Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010b. On the existence of obstinate results in vector space models. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 186–193. ACM. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2017. A survey of cross-lingual word embedding models. arXiv preprint arXiv:1706.04902. Víctor M. Sánchez-Cartagena, Marta Bañón, Sergio Ortiz Rojas, and Gema Ramírez. 2018. Prompsit’s submission to wmt 2018 parallel corpus filtering shared task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, 4995 pages 955–962. Association for Computational Linguistics. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788. Association for Computational Linguistics. Ivan Vuli´c and Marie-Francine Moens. 2016. Bilingual distributed word representations from documentaligned comparable data. Journal of Artificial Intelligence Research, 55(1):953–994. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics.
2019
492
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 4996 How multilingual is Multilingual BERT? Telmo Pires∗ Eva Schlinger Dan Garrette Google Research {telmop,eschling,dhgarrette}@google.com Abstract In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2019) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs. 1 Introduction Deep, contextualized language models provide powerful, general-purpose linguistic representations that have enabled significant advances among a wide range of natural language processing tasks (Peters et al., 2018b; Devlin et al., 2019). These models can be pre-trained on large corpora of readily available unannotated text, and then fine-tuned for specific tasks on smaller amounts of supervised data, relying on the induced language model structure to facilitate generalization beyond the annotations. Previous work on model probing has shown that these representations are able to encode, among other things, syntactic and named entity information, but they have heretofore focused on what models trained on English capture about English (Peters et al., 2018a; Tenney et al., 2019b,a). ∗Google AI Resident. In this paper, we empirically investigate the degree to which these representations generalize across languages. We explore this question using Multilingual BERT (henceforth, M-BERT), released by Devlin et al. (2019) as a single language model pre-trained on the concatenation of monolingual Wikipedia corpora from 104 languages.1 M-BERT is particularly well suited to this probing study because it enables a very straightforward approach to zero-shot cross-lingual model transfer: we fine-tune the model using task-specific supervised training data from one language, and evaluate that task in a different language, thus allowing us to observe the ways in which the model generalizes information across languages. Our results show that M-BERT is able to perform cross-lingual generalization surprisingly well. More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages improves transfer, M-BERT is also able to transfer between languages written in different scripts— thus having zero lexical overlap—indicating that it captures multilingual representations. 
We further show that transfer works best for typologically similar languages, suggesting that while M-BERT’s multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order. 2 Models and Data Like the original English BERT model (henceforth, EN-BERT), M-BERT is a 12 layer transformer (Devlin et al., 2019), but instead of be1https://github.com/google-research/bert 4997 Fine-tuning \ Eval EN DE NL ES EN 90.70 69.74 77.36 73.59 DE 73.83 82.00 76.25 70.03 NL 65.46 65.68 89.86 72.10 ES 65.38 59.40 64.39 87.18 Table 1: NER F1 results on the CoNLL data. ing trained only on monolingual English data with an English-derived vocabulary, it is trained on the Wikipedia pages of 104 languages with a shared word piece vocabulary. It does not use any marker denoting the input language, and does not have any explicit mechanism to encourage translationequivalent pairs to have similar representations. For NER and POS, we use the same sequence tagging architecture as Devlin et al. (2019). We tokenize the input sentence, feed it to BERT, get the last layer’s activations, and pass them through a final layer to make the tag predictions. The whole model is then fine-tuned to minimize the cross entropy loss for the task. When tokenization splits words into multiple pieces, we take the prediction for the first piece as the prediction for the word. 2.1 Named entity recognition experiments We perform NER experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German (Tjong Kim Sang, 2002; Sang and Meulder, 2003); and an in-house dataset with 16 languages,2 using the same CoNLL categories. Table 1 shows M-BERT zero-shot performance on all language pairs in the CoNLL data. 2.2 Part of speech tagging experiments We perform POS experiments using Universal Dependencies (UD) (Nivre et al., 2016) data for 41 languages.3 We use the evaluation sets from Zeman et al. (2017). Table 2 shows M-BERT zeroshot results for four European languages. We see that M-BERT generalizes well across languages, achieving over 80% accuracy for all pairs. 2Arabic, Bengali, Czech, German, English, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Turkish, and Chinese. 3Arabic, Bulgarian, Catalan, Czech, Danish, German, Greek, English, Spanish, Estonian, Basque, Persian, Finnish, French, Galician, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Marathi, Dutch, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, Tamil, Telugu, Turkish, Urdu, and Chinese. Fine-tuning \ Eval EN DE ES IT EN 96.82 89.40 85.91 91.60 DE 83.99 93.99 86.32 88.39 ES 81.64 88.87 96.71 93.71 IT 86.79 87.82 91.28 98.11 Table 2: POS accuracy on a subset of UD languages. Figure 1: Zero-shot NER F1 score versus entity word piece overlap among 16 languages. While performance using EN-BERT depends directly on word piece overlap, M-BERT’s performance is largely independent of overlap, indicating that it learns multilingual representations deeper than simple vocabulary memorization. 3 Vocabulary Memorization Because M-BERT uses a single, multilingual vocabulary, one form of cross-lingual transfer occurs when word pieces present during fine-tuning also appear in the evaluation languages. 
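Before turning to these questions, one detail of the tagging setup in Section 2 is worth making concrete: word-piece predictions are mapped back to words by keeping only the tag of each word's first piece. A minimal illustration (pure Python, assuming BERT's "##" continuation convention; not the authors' code):

```python
def word_predictions(pieces, piece_tags):
    """Keep the tag predicted for the first word piece of every word.

    pieces:     word pieces as produced by a BERT-style tokenizer
    piece_tags: one predicted tag per word piece
    """
    tags = []
    for piece, tag in zip(pieces, piece_tags):
        if piece in ("[CLS]", "[SEP]"):
            continue
        if piece.startswith("##"):      # continuation piece: ignore its tag
            continue
        tags.append(tag)
    return tags

pieces = ["[CLS]", "Jo", "##han", "##son", "lives", "in", "Stock", "##holm", "[SEP]"]
piece_tags = ["O", "B-PER", "I-PER", "I-PER", "O", "O", "B-LOC", "I-LOC", "O"]
print(word_predictions(pieces, piece_tags))   # ['B-PER', 'O', 'O', 'B-LOC']
```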
In this section, we present experiments probing M-BERT’s dependence on this superficial form of generalization: How much does transferability depend on lexical overlap? And is transfer possible to languages written in different scripts (no overlap)? 3.1 Effect of vocabulary overlap If M-BERT’s ability to generalize were mostly due to vocabulary memorization, we would expect zero-shot performance on NER to be highly dependent on word piece overlap, since entities are often similar across languages. To measure this effect, we compute Etrain and Eeval, the sets of word pieces used in entities in the training and evaluation datasets, respectively, and define overlap as the fraction of common word pieces used in the entities: overlap = |Etrain∩Eeval| / |Etrain∪Eeval|. Figure 1 plots NER F1 score versus entity overlap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both M-BERT and EN-BERT.4 We can see that 4Results on CoNLL data follow the same trends, but those trends are more apparent with 16 languages than with 4. 4998 Model EN DE NL ES Lample et al. (2016) 90.94 78.76 81.74 85.75 EN-BERT 91.07 73.32 84.23 81.84 Table 3: NER F1 results fine-tuning and evaluating on the same language (not zero-shot transfer). performance using EN-BERT depends directly on word piece overlap: the ability to transfer deteriorates as word piece overlap diminishes, and F1 scores are near zero for languages written in different scripts. M-BERT’s performance, on the other hand, is flat for a wide range of overlaps, and even for language pairs with almost no lexical overlap, scores vary between 40% and 70%, showing that M-BERT’s pretraining on multiple languages has enabled a representational capacity deeper than simple vocabulary memorization.5 To further verify that EN-BERT’s inability to generalize is due to its lack of a multilingual representation and not an inability of its Englishspecific word piece vocabulary to represent data in other languages, we evaluate on non-cross-lingual NER and see that it performs comparably to a previous state of the art model (see Table 3). 3.2 Generalization across scripts M-BERT’s ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table 4 shows a sample of POS results for transfer across scripts. Among the most surprising results, an M-BERT model that has been fine-tuned using only POSlabeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single POStagged Devanagari word. This provides clear evidence of M-BERT’s multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data. However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-BERT’s multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section 4.2, is typological similarity. English and Japanese have a different order of subject, verb 5Individual language trends are similar to aggregate plots. 
HI UR HI 97.1 85.9 UR 91.1 93.8 EN BG JA EN 96.8 87.1 49.4 BG 82.2 98.9 51.6 JA 57.4 67.2 96.5 Table 4: POS accuracy on the UD test set for languages with different scripts. Row=fine-tuning, column=eval. and object, while English and Bulgarian have the same, and M-BERT may be having trouble generalizing across different orderings. 4 Encoding Linguistic Structure In the previous section, we showed that M-BERT’s ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation. In this section, we present probing experiments that investigate the nature of that representation: How does typological similarity affect M-BERT’s ability to generalize? Can M-BERT generalize from monolingual inputs to code-switching text? Can the model generalize to transliterated text without transliterated language model pretraining? 4.1 Effect of language similarity Following Naseem et al. (2012), we compare languages on a subset of the WALS features (Dryer and Haspelmath, 2013) relevant to grammatical ordering.6 Figure 2 plots POS zero-shot accuracy against the number of common WALS features. As expected, performance improves with similarity, showing that it is easier for M-BERT to map linguistic structures when they are more similar, although it still does a decent job for low similarity languages when compared to EN-BERT. 4.2 Generalizing across typological features Table 5 shows macro-averaged POS accuracies for transfer between languages grouped according to two typological features: subject/object/verb order, and adjective/noun order7 (Dryer and Haspelmath, 2013). The results reported include only zero-shot transfer, i.e. they do not include cases 681A (Order of Subject, Object and Verb), 85A (Order of Adposition and Noun), 86A (Order of Genitive and Noun), 87A (Order of Adjective and Noun), 88A (Order of Demonstrative and Noun), and 89A (Order of Numeral and Noun). 7SVO languages: Bulgarian, Catalan, Czech, Danish, English, Spanish, Estonian, Finnish, French, Galician, Hebrew, Croatian, Indonesian, Italian, Latvian, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, and Chinese. SOV Languages: Basque, Farsi, Hindi, Japanese, Korean, Marathi, Tamil, Telugu, Turkish, and Urdu. 4999 Figure 2: Zero-shot POS accuracy versus number of common WALS features. Due to their scarcity, we exclude pairs with no common features. SVO SOV SVO 81.55 66.52 SOV 63.98 64.22 (a) Subj./verb/obj. order. AN NA AN 73.29 70.94 NA 75.10 79.64 (b) Adjective/noun order. Table 5: Macro-average POS accuracies when transferring between SVO/SOV languages or AN/NA languages. Row = fine-tuning, column = evaluation. training and testing on the same language. We can see that performance is best when transferring between languages that share word order features, suggesting that while M-BERT’s multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order. 4.3 Code switching and transliteration Code-switching (CS)—the mixing of multiple languages within a single utterance—and transliteration—writing that is not in the language’s standard script—present unique test cases for M-BERT, which is pre-trained on monolingual, standard-script corpora. 
Generalizing to codeswitching is similar to other cross-lingual transfer scenarios, but would benefit to an even larger degree from a shared multilingual representation. Likewise, generalizing to transliterated text is similar to other cross-script transfer experiments, but has the additional caveat that M-BERT was not pre-trained on text that looks like the target. We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script. Table 6 shows the results for modCorrected Transliterated Train on monolingual HI+EN M-BERT 86.59 50.41 Ball and Garrette (2018) — 77.40 Train on code-switched HI/EN M-BERT 90.56 85.64 Bhat et al. (2018) — 90.53 Table 6: M-BERT’s POS accuracy on the code-switched Hindi/English dataset from Bhat et al. (2018), on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch POS. els fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version). For script-corrected inputs, i.e., when Hindi is written in Devanagari, M-BERT’s performance when trained only on monolingual corpora is comparable to performance when training on codeswitched data, and it is likely that some of the remaining difference is due to domain mismatch. This provides further evidence that M-BERT uses a representation that is able to incorporate information from multiple languages. However, M-BERT is not able to effectively transfer to a transliterated target, suggesting that it is the language model pre-training on a particular language that allows transfer to that language. M-BERT is outperformed by previous work in both the monolingual-only and code-switched supervision scenarios. Neither Ball and Garrette (2018) nor Bhat et al. (2018) use contextualized word embeddings, but both incorporate explicit transliteration signals into their approaches. 5 Multilingual characterization of the feature space In this section, we study the structure of M-BERT’s feature space. If it is multilingual, then the transformation mapping between the same sentence in 2 languages should not depend on the sentence itself, just on the language pair. 5.1 Experimental Setup We sample 5000 pairs of sentences from WMT16 (Bojar et al., 2016) and feed each sentence (separately) to M-BERT with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [CLS] and [SEP], to get a vector for each sentence, at each layer l, v(l) LANG. For each pair of sentences, 5000 Figure 3: Accuracy of nearest neighbor translation for EN-DE, EN-RU, and HI-UR. e.g. (v(l) ENi, v(l) DEi), we compute the vector pointing from one to the other and average it over all pairs: ¯v(l) EN→DE = 1 M P i  v(l) DEi −v(l) ENi  , where M is the number of pairs. Finally, we translate each sentence, v(l) ENi, by ¯v(l) EN→DE, find the closest German sentence vector8, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”. 5.2 Results In Figure 3, we plot the nearest neighbor accuracy for EN-DE (solid line). 
It achieves over 50% accuracy for all but the bottom layers,9 which seems to imply that the hidden representations, although separated in space, share a common subspace that represents useful linguistic information, in a language-agnostic way. Similar curves are obtained for EN-RU, and UR-HI (in-house dataset), showing this works for multiple languages. As to the reason why the accuracy goes down in the last few layers, one possible explanation is that since the model was pre-trained for language modeling, it might need more language-specific information to correctly predict the missing word. 6 Conclusion In this work, we showed that M-BERT’s robust, often surprising, ability to generalize crosslingually is underpinned by a multilingual representation, without being explicitly trained for it. The model handles transfer across scripts and to code-switching fairly well, but effective transfer to typologically divergent and transliterated targets 8In terms of ℓ2 distance. 9Our intuition is that the lower layers have more “token level” information, which is more language dependent, particularly for languages that share few word pieces. will likely require the model to incorporate an explicit multilingual training objective, such as that used by Lample and Conneau (2019) or Artetxe and Schwenk (2018). As to why M-BERT generalizes across languages, we hypothesize that having word pieces used in all languages (numbers, URLs, etc) which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space. It is our hope that these kinds of probing experiments will help steer researchers toward the most promising lines of inquiry by encouraging them to focus on the places where current contextualized word representation approaches fall short. 7 Acknowledgements We would like to thank Mark Omernick, Livio Baldini Soares, Emily Pitler, Jason Riesa, and Slav Petrov for the valuable discussions and feedback. References Mikel Artetxe and Holger Schwenk. 2018. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. arXiv preprint arXiv:1812.10464. Kelsey Ball and Dan Garrette. 2018. Part-of-speech tagging for code-switched, transliterated texts without explicit language identification. In Proceedings of EMNLP. Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2018. Universal dependency parsing for Hindi-English code-switching. In Proceedings of NAACL. Ondˇrej Bojar, Yvette Graham, Amir Kamran, and Miloˇs Stanojevi´c. 2016. Results of the WMT16 metrics shared task. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. https://wals.info/. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. 5001 Neural architectures for named entity recognition. In Proceedings of NAACL. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of ACL. 
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of LREC. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. In Proceedings of NAACL. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of ACL. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proceedings of ICLR. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In Proceedings of CoNLL. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, V´aclava Kettnerov´a, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missil¨a, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, H´ector Mart´ınez Alonso, C¸ a˘grı C¸ ¨oltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadov´a, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of CoNLL. A Model Parameters All models were fine-tuned with a batch size of 32, and a maximum sequence length of 128 for 3 epochs. We used a learning rate of 3e−5 with learning rate warmup during the first 10% of steps, and linear decay afterwards. We also applied 10% dropout on the last layer. No parameter tuning was performed. We used the BERT-Base, Multilingual Cased checkpoint from https://github. com/google-research/bert. B CoNLL Results for EN-BERT Fine-tuning \Eval EN DE NL ES EN 91.07 24.38 40.62 49.99 DE 55.36 73.32 54.84 50.80 NL 59.36 27.57 84.23 53.15 ES 55.09 26.13 48.75 81.84 Table 7: NER results on the CoNLL test sets for EN-BERT. The row is the fine-tuning language, the column the evaluation language. There is a big gap between this model’s zero-shot performance and M-BERT’s, showing that the pre-training is helping in cross-lingual transfer. 
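The fine-tuning schedule described in Appendix A (linear warmup over the first 10% of steps, then linear decay) corresponds to a learning-rate multiplier like the following sketch, which is only an illustration and not the authors' code.

```python
def learning_rate(step, total_steps, base_lr=3e-5, warmup_frac=0.1):
    """Linear warmup for the first 10% of steps, then linear decay to zero."""
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total = 1000
for s in (0, 50, 100, 500, 1000):
    print(s, round(learning_rate(s, total), 8))
```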
C Some POS Results for EN-BERT Fine-tuning \Eval EN DE ES IT EN 96.94 38.31 50.38 46.07 DE 28.62 92.63 30.23 25.59 ES 28.78 46.15 94.36 71.50 IT 52.48 48.08 76.51 96.41 Table 8: POS accuracy on the UD test sets for a subset of European languages using EN-BERT. The row specifies a fine-tuning language, the column the evaluation language. There is a big gap between this model’s zeroshot performance and M-BERT’s, showing the pretraining is helping learn a useful cross-lingual representation for grammar.
2019
493
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5002–5007 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5002 Bilingual Lexicon Induction through Unsupervised Machine Translation Mikel Artetxe, Gorka Labaka, Eneko Agirre IXA NLP Group University of the Basque Country (UPV/EHU) {mikel.artetxe, gorka.labaka, e.agirre}@ehu.eus Abstract A recent research line has obtained strong results on bilingual lexicon induction by aligning independently trained word embeddings in two languages and using the resulting crosslingual embeddings to induce word translation pairs through nearest neighbor or related retrieval methods. In this paper, we propose an alternative approach to this problem that builds on the recent work on unsupervised machine translation. This way, instead of directly inducing a bilingual lexicon from cross-lingual embeddings, we use them to build a phrasetable, combine it with a language model, and use the resulting machine translation system to generate a synthetic parallel corpus, from which we extract the bilingual lexicon using statistical word alignment techniques. As such, our method can work with any word embedding and cross-lingual mapping technique, and it does not require any additional resource besides the monolingual corpus used to train the embeddings. When evaluated on the exact same cross-lingual embeddings, our proposed method obtains an average improvement of 6 accuracy points over nearest neighbor and 4 points over CSLS retrieval, establishing a new state-of-the-art in the standard MUSE dataset. 1 Introduction Cross-lingual word embedding mappings have attracted a lot of attention in recent times. These methods work by independently training word embeddings in different languages, and mapping them to a shared space through linear transformations. While early methods required a training dictionary to find the initial alignment (Mikolov et al., 2013), fully unsupervised methods have managed to obtain comparable results based on either adversarial training (Conneau et al., 2018) or selflearning (Artetxe et al., 2018b). A prominent application of these methods is Bilingual Lexicon Induction (BLI), that is, using the resulting cross-lingual embeddings to build a bilingual dictionary. For that purpose, one would typically induce the translation of each source word by taking its corresponding nearest neighbor in the target language. However, it has been argued that this basic approach suffers from the hubness problem1, which has motivated alternative retrieval methods like inverted nearest neighbor2 (Dinu et al., 2015), inverted softmax (Smith et al., 2017), and Cross-domain Similarity Local Scaling (CSLS) (Conneau et al., 2018). In this paper, we go one step further and, rather than directly inducing the bilingual dictionary from the cross-lingual word embeddings, we use them to build an unsupervised machine translation system, and extract a bilingual dictionary from a synthetic parallel corpus generated with it. This allows us to take advantage of a strong language model and naturally extract translation equivalences through statistical word alignment. At the same time, our method can be used as a drop-in replacement of traditional retrieval techniques, as it can work with any cross-lingual word embeddings and it does not require any additional resource besides the monolingual corpus used to train them. 
Our experiments show the effectiveness of this alternative approach, which outperforms the previous best retrieval method by 4 accuracy points on average, establishing a new state-of-the-art in the standard MUSE dataset. As such, we conclude that, contrary to the recent trend, future research in BLI should not focus exclusively on direct retrieval methods.

1 Hubness (Radovanović et al., 2010a,b) refers to the phenomenon of a few points being the nearest neighbors of many other points in high-dimensional spaces, which has been reported to severely affect cross-lingual embedding mappings (Dinu et al., 2015).
2 The original paper refers to this method as globally corrected retrieval.

2 Proposed method

The input of our method is a set of cross-lingual word embeddings and the monolingual corpora used to train them. In our experiments, we use fastText embeddings (Bojanowski et al., 2017) mapped through VecMap (Artetxe et al., 2018b), but the algorithm described next can also work with any other word embedding and cross-lingual mapping method. The general idea of our method is to build an unsupervised phrase-based statistical machine translation system (Lample et al., 2018; Artetxe et al., 2018c, 2019), and use it to generate a synthetic parallel corpus from which to extract a bilingual dictionary.

For that purpose, we first derive phrase embeddings from the input word embeddings by taking the 400,000 most frequent bigrams and the 400,000 most frequent trigrams in each language, and assigning them the centroid of the words they contain. Having done that, we use the resulting cross-lingual phrase embeddings to build a phrase-table as described in Artetxe et al. (2018c). More concretely, we extract translation candidates by taking the 100 nearest neighbors of each source phrase, and score them with the softmax function over their cosine similarities:

$$\phi(\bar{f} \mid \bar{e}) = \frac{\exp\big(\cos(\bar{e}, \bar{f})/\tau\big)}{\sum_{\bar{f}'} \exp\big(\cos(\bar{e}, \bar{f}')/\tau\big)}$$

where the temperature τ is estimated using maximum likelihood estimation over a dictionary induced in the reverse direction. In addition to the phrase translation probabilities in both directions, we also estimate the forward and reverse lexical weightings by aligning each word in the target phrase with the one in the source phrase most likely generating it, and taking the product of their respective translation probabilities.

We then combine this phrase-table with a distortion model and a 5-gram language model estimated in the target language corpus, which results in a phrase-based machine translation system. So as to optimize the weights of the resulting model, we use the unsupervised tuning procedure proposed by Artetxe et al. (2019), which combines a cyclic consistency loss and a language modeling loss over a subset of 2,000 sentences from each monolingual corpus.

Having done that, we generate a synthetic parallel corpus by translating the source language monolingual corpus with the resulting machine translation system.3 We then word-align this corpus using FastAlign (Dyer et al., 2013) with default hyperparameters and the grow-diag-final-and symmetrization heuristic. Finally, we build a phrase-table from the word-aligned corpus, and extract a bilingual dictionary from it by discarding all non-unigram entries. For words with more than one entry, we rank translation candidates according to their direct translation probability.
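To make the candidate extraction and scoring step concrete, here is a minimal sketch of the temperature softmax over cosine similarities described above; the array shapes, the fixed toy value of τ, and the helper name are illustrative assumptions — in the paper τ is fitted by maximum likelihood over a dictionary induced in the reverse direction, and the resulting scores feed into a full SMT phrase-table rather than being used in isolation.

```python
import numpy as np

def phrase_translation_probs(src_phrase_emb, trg_phrase_emb, tau, n_candidates=100):
    """For each source phrase, return its n_candidates nearest target phrases and
    phi(f|e): a softmax with temperature tau over the cosine similarities of the
    candidate set. Embeddings are assumed to be in a shared cross-lingual space."""
    src = src_phrase_emb / np.linalg.norm(src_phrase_emb, axis=1, keepdims=True)
    trg = trg_phrase_emb / np.linalg.norm(trg_phrase_emb, axis=1, keepdims=True)
    sims = src @ trg.T                                            # cos(e, f)
    # Indices of the n_candidates most similar target phrases per source phrase.
    cand = np.argpartition(-sims, n_candidates - 1, axis=1)[:, :n_candidates]
    cand_sims = np.take_along_axis(sims, cand, axis=1)
    # Softmax with temperature, normalized over the candidate set only.
    scores = np.exp(cand_sims / tau)
    probs = scores / scores.sum(axis=1, keepdims=True)
    return cand, probs

# Toy usage with random vectors standing in for phrase centroids.
rng = np.random.default_rng(1)
e, f = rng.normal(size=(50, 300)), rng.normal(size=(80, 300))
cand, probs = phrase_translation_probs(e, f, tau=0.1, n_candidates=10)
print(cand[0], probs[0].round(3))
```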
3 Experimental settings

In order to compare our proposed method head-to-head with other BLI methods, the experimental setting needs to fix the monolingual embedding training method, as well as the cross-lingual mapping algorithm and the evaluation dictionaries. In addition, in order to avoid any advantage, our method should not see any further monolingual corpora than those used to train the monolingual embeddings. Unfortunately, existing BLI datasets distribute pre-trained word embeddings alone, but not the monolingual corpora used to train them. For that reason, we decide to use the evaluation dictionaries from the standard MUSE dataset (Conneau et al., 2018) but, instead of using the pre-trained Wikipedia embeddings distributed with it, we extract monolingual corpora from Wikipedia ourselves and train our own embeddings trying to be as faithful as possible to the original settings. This allows us to compare our proposed method to previous retrieval techniques in the exact same conditions, while keeping our results as comparable as possible to previous work reporting results for the MUSE dataset.

More concretely, we use WikiExtractor4 to extract plain text from Wikipedia dumps, and preprocess the resulting corpus using standard Moses tools (Koehn et al., 2007) by applying sentence splitting, punctuation normalization, tokenization with aggressive hyphen splitting, and lowercasing. We then train word embeddings for each language using the skip-gram implementation of fastText (Bojanowski et al., 2017) with default hyperparameters, restricting the vocabulary to the 200,000 most frequent tokens. The official embeddings in the MUSE dataset were trained using these exact same settings, so our embeddings only differ in the Wikipedia dump used to extract the training corpus and the pre-processing applied to it, which is not documented in the original dataset. Having done that, we map these word embeddings to a cross-lingual space using the unsupervised mode in VecMap (Artetxe et al., 2018b), which builds an initial solution based on the intralingual similarity distribution of the embeddings and iteratively improves it through self-learning. Finally, we induce a bilingual dictionary using our proposed method and evaluate it in comparison to previous retrieval methods (standard nearest neighbor, inverted nearest neighbor, inverted softmax5 and CSLS). Following common practice, we use precision at 1 as our evaluation measure.6

3 For efficiency purposes, we restricted the size of the synthetic parallel corpus to a maximum of 10 million sentences, and use cube-pruning for faster decoding. As such, our results could likely be improved by translating the full monolingual corpus with standard decoding.
4 https://github.com/attardi/wikiextractor

                                           en-es       en-fr       en-de       en-ru      avg.
                                           →     ←     →     ←     →     ←     →     ←
Nearest neighbor                           81.9  82.8  81.6  81.7  73.3  72.3  44.3  65.6  72.9
Inv. nearest neighbor (Dinu et al., 2015)  80.6  77.6  81.3  79.0  69.8  69.7  43.7  54.1  69.5
Inv. softmax (Smith et al., 2017)          81.7  82.7  81.7  81.7  73.5  72.3  44.4  65.5  72.9
CSLS (Conneau et al., 2018)                82.5  84.7  83.3  83.4  75.6  75.3  47.4  67.2  74.9
Proposed method                            87.0  87.9  86.0  86.2  81.9  80.2  50.4  71.3  78.9
Table 1: P@1 of proposed system and previous retrieval methods, using the same cross-lingual embeddings.

4 Results and discussion

Table 1 reports the results of our proposed system in comparison to previous retrieval methods.
As it can be seen, our method obtains the best results in all language pairs and directions, with an average improvement of 6 points over nearest neighbor and 4 points over CSLS, which is the best performing previous method. These results are very consistent across all translation directions, with an absolute improvement between 2.7 and 6.3 points over CSLS. Interestingly, neither inverted nearest neighbor nor inverted soft5Inverted softmax has a temperature hyperparameter T, which is typically tuned in the training dictionary. Given that we do not have any training dictionary in our fully unsupervised settings, we use a fixed temperature of T = 30, which was also used by some previous authors (Lample et al., 2018). While we tried other values in our preliminary experiments, but we did not observe any significant difference. 6We find a few out-of-vocabularies in the evaluation dictionary that are likely caused by minor pre-processing differences. In those cases, we use copying as a back-off strategy (i.e. if a given word is not found in our induced dictionary, we simply leave it unchanged). In any case, the percentage of out-of-vocabularies is always below 1%, so this has a negligible effect in the reported results. max are able to outperform standard nearest neighbor, presumably because our cross-lingual embeddings are less sensitive to hubness thanks to the symmetric re-weighting in VecMap (Artetxe et al., 2018a). At the same time, CSLS obtains an absolute improvement of 2 points over nearest neighbor, only a third of what our method achieves. This suggests that, while previous retrieval methods have almost exclusively focused on addressing the hubness problem, there is a substantial margin of improvement beyond this phenomenon. So as to put these numbers into perspective, Table 2 compares our method to previous results reported in the literature.7 As it can be seen, our proposed method obtains the best published results in all language pairs and directions, outperforming the previous state-of-the-art by a substantial margin. Note, moreover, that these previous systems mostly differ in their cross-lingual mapping algorithm and not the retrieval method, so our improvements are orthogonal. We believe that, beyond the substantial gains in this particular task, our work has important implications for future research in cross-lingual word embedding mappings. While most work in this topic uses BLI as the only evaluation task, Glavas et al. (2019) recently showed that BLI results do not always correlate well with downstream performance. In particular, they observe that some mapping methods that are specifically designed for BLI perform poorly in other tasks. Our work shows that, besides their poor performance in those tasks, these BLI-centric mapping methods might not even be the optimal approach to BLI, as our alternative method, which relies on unsupervised machine translation instead of direct 7Note that previous results are based on the pre-trained embeddings of the MUSE dataset, while we had to train our embeddings to have a controlled experiment (see Section 3). In any case, our embeddings are trained following the official dataset setting, using Wikipedia, the same system and hyperparameters, so our results should be roughly comparable. 5005 en-es en-fr en-de en-ru avg. → ← → ← → ← → ← Conneau et al. (2018) 81.7 83.3 82.3 82.1 74.0 72.2 44.0 59.1 72.3 Hoshen and Wolf (2018) 82.1 84.1 82.3 82.9 74.7 73.0 47.5 61.8 73.6 Grave et al. 
(2018) 82.8 84.1 82.6 82.9 75.4 73.3 43.7 59.1 73.0 Alvarez-Melis and Jaakkola (2018) 81.7 80.4 81.3 78.9 71.9 72.8 45.1 43.7 69.5 Yang et al. (2018) 79.9 79.3 78.4 78.9 71.5 70.3 Mukherjee et al. (2018) 84.5 79.2 Alvarez-Melis et al. (2018) 81.3 81.8 82.9 81.6 73.8 71.1 41.7 55.4 71.2 Xu et al. (2018) 79.5 77.8 77.9 75.5 69.3 67.0 Proposed method 87.0 87.9 86.0 86.2 81.9 80.2 50.4 71.3 78.9 Table 2: Results of the proposed method in comparison to previous work (P@1). All systems are fully unsupervised and use fastText embeddings trained on Wikipedia with the same hyperparameters. retrieval over mapped embeddings, obtains substantially better results without requiring any additional resource. As such, we argue that 1) future work in cross-lingual word embeddings should consider other evaluation tasks in addition to BLI, and 2) future work in BLI should consider other alternatives in addition to direct retrieval over crosslingual embedding mappings. 5 Related work While BLI has been previously tackled using count-based vector space models (Vuli´c and Moens, 2013) and statistical decipherment (Ravi and Knight, 2011; Dou and Knight, 2012), these methods have recently been superseded by crosslingual embedding mappings, which work by aligning independently trained word embeddings in different languages. For that purpose, early methods required a training dictionary, which was used to learn a linear transformation that mapped these embeddings into a shared crosslingual space (Mikolov et al., 2013; Artetxe et al., 2018a). The resulting cross-lingual embeddings are then used to induce the translations of words that were missing in the training dictionary by taking their nearest neighbor in the target language. The amount of required supervision was later reduced through self-learning methods (Artetxe et al., 2017), and then completely eliminated through adversarial training (Zhang et al., 2017a; Conneau et al., 2018) or more robust iterative approaches combined with initialization heuristics (Artetxe et al., 2018b; Hoshen and Wolf, 2018). At the same time, several recent methods have formulated embedding mappings as an optimal transport problem (Zhang et al., 2017b; Grave et al., 2018; Alvarez-Melis and Jaakkola, 2018). In addition to that, a large body of work has focused on addressing the hubness problem that arises when directly inducing bilingual dictionaries from cross-lingual embeddings, either through the retrieval method (Dinu et al., 2015; Smith et al., 2017; Conneau et al., 2018) or the mapping itself (Lazaridou et al., 2015; Shigeto et al., 2015; Joulin et al., 2018). While all these previous methods directly induce bilingual dictionaries from cross-lingually mapped embeddings, our proposed method combines them with unsupervised machine translation techniques, outperforming them all by a substantial margin. 6 Conclusions and future work We propose a new approach to BLI which, instead of directly inducing bilingual dictionaries from cross-lingual embedding mappings, uses them to build an unsupervised machine translation system, which is then used to generate a synthetic parallel corpus from which to extract bilingual lexica. Our approach does not require any additional resource besides the monolingual corpora used to train the embeddings, and outperforms traditional retrieval techniques by a substantial margin. 
We thus conclude that, contrary to recent trend, future work in BLI should not focus exclusively in direct retrieval approaches, nor should BLI be the only evaluation task for cross-lingual embeddings. Our code is available at https://github.com/ artetxem/monoses. In the future, we would like to further improve our method by incorporating additional ideas from unsupervised machine translation such as joint refinement and neural hybridization (Artetxe et al., 2019). In addition to that, we would like to integrate our induced dictionaries in other downstream 5006 tasks like unsupervised cross-lingual information retrieval (Litschko et al., 2018). Acknowledgments This research was partially supported by the Spanish MINECO (UnsupNMT TIN2017-91692EXP and DOMINO PGC2018-102041-B-I00, cofunded by EU FEDER), the BigKnowledge project (BBVA foundation grant 2018), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe was supported by a doctoral grant from the Spanish MECD. References David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1881–1890, Brussels, Belgium. Association for Computational Linguistics. David Alvarez-Melis, Stefanie Jegelka, and Tommi S Jaakkola. 2018. Towards optimal transport with global invariances. arXiv preprint arXiv:1806.09277. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 5012–5019. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018c. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. arXiv preprint arXiv:1902.01313. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), workshop track. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 266–275, Jeju Island, Korea. Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. arXiv preprint arXiv:1902.00508. Edouard Grave, Armand Joulin, and Quentin Berthet. 2018. Unsupervised alignment of embeddings with wasserstein procrustes. arXiv preprint arXiv:1805.11222. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 469–478, Brussels, Belgium. Association for Computational Linguistics. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herve Jegou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2979–2984, Brussels, Belgium. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of 5007 the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180. Association for Computational Linguistics. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 270– 280. Association for Computational Linguistics. Robert Litschko, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vuli´c. 2018. Unsupervised crosslingual information retrieval using monolingual data only. arXiv preprint arXiv:1805.00879. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tanmoy Mukherjee, Makoto Yamada, and Timothy Hospedales. 2018. Learning unsupervised word translations without adversaries. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 627–632, Brussels, Belgium. Association for Computational Linguistics. Miloš Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010a. Hubs in space: Popular nearest neighbors in high-dimensional data. 
Journal of Machine Learning Research, 11(Sep):2487–2531. Milos Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010b. On the existence of obstinate results in vector space models. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 186–193. ACM. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12– 21, Portland, Oregon, USA. Association for Computational Linguistics. Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, and Yuji Matsumoto. 2015. Ridge Regression, Hubness, and Zero-Shot Learning, pages 135– 151. Springer International Publishing. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In 5th International Conference on Learning Representations (ICLR 2017). Ivan Vuli´c and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1613–1624, Seattle, Washington, USA. Association for Computational Linguistics. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465–2474, Brussels, Belgium. Association for Computational Linguistics. Pengcheng Yang, Fuli Luo, Shuangzhi Wu, Jingjing Xu, Dongdong Zhang, and Xu Sun. 2018. Learning unsupervised word mapping by maximizing mean discrepancy. arXiv preprint arXiv:1811.00275. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945, Copenhagen, Denmark. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5008–5019 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5008 Automatically Identifying Complaints in Social Media Daniel Preot¸iuc-Pietro Bloomberg LP [email protected] Mihaela G˘aman Politehnica University of Bucharest [email protected] Nikolaos Aletras University of Sheffield [email protected] Abstract Complaining is a basic speech act regularly used in human and computer mediated communication to express a negative mismatch between reality and expectations in a particular situation. Automatically identifying complaints in social media is of utmost importance for organizations or brands to improve the customer experience or in developing dialogue systems for handling and responding to complaints. In this paper, we introduce the first systematic analysis of complaints in computational linguistics. We collect a new annotated data set of written complaints expressed in English on Twitter.1 We present an extensive linguistic analysis of complaining as a speech act in social media and train strong feature-based and neural models of complaints across nine domains achieving a predictive performance of up to 79 F1 using distant supervision. 1 Introduction Complaining is a basic speech act used to express a negative mismatch between reality and expectations towards a state of affairs, product, organization or event (Olshtain and Weinbach, 1987). Understanding the expression of complaints in natural language and automatically identifying them is of utmost importance for: (a) linguists to obtain a better understanding of the context, intent and types of complaints on a large scale; (b) psychologists to identify human traits underpinning complaint behavior and expression; (c) organizations and advisers to improve the customer service by identifying and addressing client concerns and issues effectively in real time, especially on social media; (d) developing downstream natural language processing (NLP) applications, such as 1Data and code is available here: https: //github.com/danielpreotiuc/ complaints-social-media Tweet C S @FC Help hi, I ordered a necklace over a week ago and it still hasn’t arrived (...)  @BootsUK I love Boots! Shame you’re introducing a man tax of 7% in 2018 :(   You suck  Table 1: Examples of tweets annotated for complaint (C) and sentiment (S). dialogue systems that aim to automatically identify complaints. However, complaining has yet to be studied using computational approaches. The speech act of complaining, as previously defined in linguistics research (Olshtain and Weinbach, 1987) and adopted in this study, has as its core the concept of violated or breached expectations i.e., the person posting the complaint had their favorable expectations breached by a party, usually the one to which the complaint is addressed. Complaints have been previously analyzed by linguists (V´asquez, 2011) as distinctly different from expressing negative sentiment towards an entity. Key to the definition of complaints is the expression of the breach of expectations. Table 1 shows examples of tweets highlighting the differences between complaints and sentiment. The first example expresses the writer’s breach of expectations about an item that was expected to arrive, but does not express negative sentiment toward the entity, while the second shows mixed sentiment and expresses a complaint about a tax that was introduced. 
The third statement is an insult that implies negative sentiment, but there are not enough cues to indicate any breach of expectations; hence, this cannot be categorized as a complaint. This paper presents the first extensive analysis of complaints in computational linguistics. Our contributions include: 1. The first publicly available data set of complaints extracted from Twitter with expert annotations spanning nine domains (e.g., software, 5009 transport); 2. An extensive quantitative analysis of the syntactic, stylistic and semantic linguistic features distinctive of complaints; 3. Predictive models using a broad range of features and machine learning models, which achieve high predictive performance for identifying complaints in tweets of up to 79 F1; 4. A distant supervision approach to collect data combined with domain adaptation to boost predictive performance. 2 Related Work Complaints have to date received significant attention in linguistics and marketing research. Olshtain and Weinbach (1987) provide one of the early definitions of a complaint as when a speaker expects a favorable event to occur or an unfavorable event to be prevented and these expectations are breached. Thus, the discrepancy between the expectations of the complainer and the reality is the key component of identifying complaints. Complaining is considered to be a distinct speech act, as defined by speech act theory (Austin, 1975; Searle, 1969) which is central to the field of pragmatics. Complaints are either addressed to the party responsible for enabling the breach of expectations (direct complaints) or indirectly mention the party (indirect complaints) (Boxer, 1993b). Complaints are widely considered to be among the face-threatening acts (Brown and Levinson, 1987) – acts that aim to damage the face or self-esteem of the person or entity the act is directed at. The concept of face (Goffman, 1967) represents the public image specific of each person or entity and has two aspects: positive (i.e., the desire to be liked) and negative face (i.e., the desire to not be imposed upon). Complaints can intrinsically threaten both positive and negative face. Positive face of the responsible party is affected by having enabled the breach of expectations. Usually, when a direct complaint is made, the illocutionary function of the complaint is to request for a correction or reparation for these events. Thus, this aims to affect negative face by aiming to impose an action to be undertaken by the responsible party. Complaints usually co-occur with other speech acts such as warnings, threats, suggestions or advice (Olshtain and Weinbach, 1987; Cohen and Olshtain, 1993). Previous linguistics research has qualitatively examined the types of complaints elicited via discourse completion tests (DCT) (Trosborg, 1995) and in naturally occurring speech (Laforest, 2002). Differences in complaint strategies and expression were studied across cultures (Cohen and Olshtain, 1993) and socio-demographic traits (Boxer, 1993a). In naturally occurring text, the discourse structure of complaints has been studied in letters to editors (Hartford and Mahboob, 2004; RanosaMadrunio, 2004). 
In the area of linguistic studies on computer mediated communication, V´asquez (2011) performed an analysis of 100 negative reviews on TripAdvisor, which showed that complaints in this medium often co-occur with other speech acts including positive and negative remarks, frequently make explicit references to expectations not being met and directly demand a reparation or compensation. Meinl (2013) studied complaints in eBay reviews by annotating 200 reviews in English and German with the speech act sequence that makes up each complaint e.g., warning, annoyance (the annotations are not available publicly or after contacting the authors). Mikolov et al. (2018) analyze which financial complaints submitted to the Consumer Financial Protection Bureau will receive a timely response. Most recently, Yang et al. (2019) studied customer support dialogues and predicted if these complaints will be escalated with a government agency or made public on social media. To the best of our knowledge, the only previous work that tackles a concept defined as a complaint with computational methods is by Zhou and Ganesan (2016) which studies Yelp reviews. However, they define a complaint as a ‘sentence with negative connotation with supplemental information’. This definition is not aligned with previous research in linguistics (as presented above) and represents only a minor variation on sentiment analysis. They introduce a data set of complaints, unavailable at the time of this submission, and only perform a qualitative analysis, without building predictive models for identifying complaints. 3 Data To date, there is no available data set with annotated complaints as previously defined in linguistics (Olshtain and Weinbach, 1987). Thus, we create a new data set of written utterances annotated with whether they express a complaint. We use Twitter as the data source because (1) it represents 5010 a platform with high levels of self-expression; and (2) users directly interact with other users or corporate brand accounts. Tweets are openly available and represent a popular option for data selection in other related tasks such as predicting sentiment (Rosenthal et al., 2017), affect (Mohammad et al., 2018), emotion analysis (Mohammad and Kiritchenko, 2015), sarcasm (Gonz´alezIb´anez et al., 2011; Bamman and Smith, 2015), stance (Mohammad et al., 2016), text-image relationship (Vempala and Preot¸iuc-Pietro, 2019) or irony (Van Hee et al., 2016; Cervone et al., 2017; Van Hee et al., 2018). 3.1 Collection We choose to manually annotate tweets in order to provide a solid benchmark to foster future research on this task. Complaints represent a minority of the total written posts on Twitter. We use a data sampling method that increases the hit rate of complaints, following previous work on labeling infrequent linguistic phenomena such as irony (Mohammad et al., 2018). Numerous companies use Twitter to provide customer service and address user complaints. We select tweets directed to these accounts as candidates for complaint annotation. We manually assembled a list of 93 customer service handles. Using the Twitter API,2 we collected all the tweets that are available to download (the most recent 3,200). We then identified all the original tweets to which the customer support handle responded. We randomly sample an equal number of tweets addressed to each customer support handle for annotation. Using this method, we collected 1,971 tweets to which the customer support handles responded. 
Further, we have also manually grouped the customer support handles in several high-level domains based on their industry type and area of activity. We have done this to enable analyzing complaints by domain and assess transferability of classifiers across domains. In related work on sentiment analysis, reviews for products from four different domains were collected across domains in a similar fashion (Blitzer et al., 2007). All customer support handles grouped by category are presented in Table 2. We add to our data set randomly sampled tweets to ensure that there is a more representative and 2https://developer.twitter.com/ diverse set of tweets for feature analysis and to ensure that the evaluation does not disproportionally contain complaints. We thus additionally sampled 1,478 tweets consisting of two groups of 739 tweets: the first group contains random tweets addressed to any other Twitter handle (at-replies) to match the initial sample, while the second group contains tweets not addressed to a Twitter handle. As preprocessing, we anonymize all usernames present in the tweet and URLs and replace them with placeholder tokens. To extract the unigrams used as features, we use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017). Tweets were filtered for English using langid.py (Lui and Baldwin, 2012) and retweets were excluded. 3.2 Annotation We create a binary annotation task for identifying if a tweet contains a complaint or not. Tweets are short and usually express a single thought. Therefore, we consider the entire tweet as a complaint if it contains at least one complaint speech act. For annotation, we adopt as the guideline a complaint definition similar to that from previous linguistic research (Olshtain and Weinbach, 1987; Cohen and Olshtain, 1993): “A complaint presents a state of affairs which breaches the writer’s favorable expectation”. Each tweet was labeled by two independent annotators, authors of the paper, with significant experience in linguistic annotation. After an initial calibration run of 100 tweets (later discarded from the final data set), each annotator labeled all 1,971 tweets independently. The two annotators achieved a Cohen’s Kappa κ = 0.731, which is in the upper part of the substantial agreement band (Artstein and Poesio, 2008). Disagreements were discussed and resolved between the annotators. In total, 1,232 tweets (62.4%) are complaints and 739 are not complaints (37.6%). The statistics for each category is in Table 3. 
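For readers who want to reproduce an agreement figure like the one reported above, the snippet below shows how Cohen's κ can be computed with scikit-learn from two annotators' label vectors; the toy labels are placeholders for illustration, not the released annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical label vectors: 1 = complaint, 0 = not a complaint,
# one entry per tweet and per annotator.
annotator_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.3f}")

# Indices where the two annotators disagree, i.e., labels that would need
# to be discussed and resolved in an adjudication pass.
disagreements = [i for i, (a, b) in enumerate(zip(annotator_a, annotator_b)) if a != b]
print("Tweets to adjudicate:", disagreements)
```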
4 Features In our analysis and predictive experiments, we use the following groups of features: generic linguistic features proven to perform well in text classification tasks (Preot¸iuc-Pietro et al., 2015; Preot¸iuc-Pietro et al., 2017; Volkova and Bell, 2017; Preot¸iuc-Pietro and Ungar, 2018) (unigrams, LIWC, word clusters), methods for predict5011 Food & Beverage Apparel Retail Cars Services Software & Online Services Transport Electronics Other ABCustomerCare NeimanMarcus HarrodsService HondaCustSvc GEICO Service YelpSupport AirAsiaSupport AskPlayStation BlackandDecker ArbysCares FC Help BN Care VWCares Safaricom Care UbisoftSupport SEPTA Social XBoxSupport WhirlpoolCare KFC UKI Help Zara Care WalmartHelp ChryslerCares VirginMedia SqSupportUK FreaterAnglia LenovoSupport NYTCare McDonalds NBaStoreSupport BootsHelp SubaruCustCare ThreeUKSupport AWSSupport RailMinIndia AppleSupport WashPostHelp PizzaHut HM CustServ WholeFoods AlfaRomeoCares KenyaPower Care SHO Help VirginTrains Moto Support MACCosmetics SupportAtTommy BestBuySupport GeorgiaPower TeamTurboTax Delta OnePlus Support HolidayInn BurberyService IKEAUSSupport UPShelp DropboxSupport British Airways SamsungSupport Nordstrom AmazonHelp ComcastCares AdobeCare JetBlue FitbitSupport DSGsupport AskEBay AOLSupportHelp Uber Support United BeatsSupport TopmanAskUs EE NortonSupport AmericanAir NvidiaCC SuperDry Care VodafoneIN MediumSupport SouthwestAir HPSupport ASOS HereToHelp BTcare TwitterSupport NikeSupport HMRCCustomers Hulu Support DirecTVService MicrosoftHelps Table 2: List of customer support handles by domain. The domain is chosen based on the most frequent product or service the account usually receives complaints about (e.g., NikeSupport receives most complaints about the Nike Fitness Bands). Category Complaints Not Complaints Food & Beverage 95 35 Apparel 141 117 Retail 124 75 Cars 67 25 Services 207 130 Software & Online Services 189 103 Transport 139 109 Electronics 174 112 Other 96 33 Total 1232 739 Table 3: Number of tweets annotated as complaints across the nine domains. ing sentiment or emotion which have an overlap with complaints and complaint specific features which capture linguistic aspects typical of complaints (Meinl, 2013; Danescu-Niculescu-Mizil et al., 2013): Unigrams. We use the bag-of-words approach to represent each tweet as a TF-IDF weighted distribution over the vocabulary consisting of all words present in at least two tweets (2,641 words). LIWC. Traditional psychology studies use dictionary-based approaches to representing text. The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including parts-of-speech, topical or stylistic categories. Each tweet is thus represented as a distribution over these categories. Word2Vec Clusters. An alternative to LIWC for identifying semantic themes in a tweet is to use automatically generated word clusters. These clusters can be thought of as topics i.e., groups of words that are semantically and/or syntactically similar. The clusters help reduce the feature space and provide good interpretability (Lampos et al., 2014; Preot¸iuc-Pietro et al., 2015; Preot¸iuc-Pietro et al., 2015; Lampos et al., 2016; Aletras and Chamberlain, 2018). We follow Preot¸iuc-Pietro et al. 
(2015) to compute clusters using spectral clustering (Shi and Malik, 2000) applied to a word-word similarity matrix weighted with the cosine similarity of the corresponding word embedding vectors (Mikolov et al., 2013). The clusters help reduce the feature space and provide good interpretability.3 For brevity and clarity, we present experiments using 200 clusters as in (Preot¸iucPietro et al., 2015). We aggregated all the words in a tweet and represent each tweet as a distribution of the fraction of tokens belonging to each cluster. Part-of-Speech Tags. We analyze part-of-speech tag usage to quantify the syntactic patterns associated with complaints and to enhance the representation of unigrams. We part-of-speech tag all tweets using the Twitter model of the Stanford Tagger (Derczynski et al., 2013). In prediction experiments we supplement each unigram feature with their POS tag (e.g., I PRP, bought VBN). For feature analysis, we represent each tweet as a bag-of-words distribution over part-of-speech unigrams and bigrams in order to uncover regular syntactic patterns specific of complaints. Sentiment & Emotion Models. We use existing sentiment and emotion analysis models to study their relationship to complaint annotations and to measure their predictive power on our complaint data set. If the concepts of negative sentiment and complaint were to coincide, standard sentiment prediction models that have access to larger sets of training data should be very competitive on predicting complaints. We test the following models: • MPQA: We use the MPQA sentiment lexicon (Wiebe et al., 2005) to assign a positive and negative score to each tweet based on the ratio of tokens in a tweet which appear in the positive and negative MPQA lists respectively. These scores are used as features. • NRC: We use the word lexicon derived using 3We have tried other alternatives to building clusters: using NPMI (Bouma, 2009), GloVe (Pennington et al., 2014) and LDA (Blei et al., 2003). 5012 crowd-sourcing from (Mohammad and Turney, 2010, 2013) for assigning to each tweet the proportion of tokens that have positive, negative and neutral sentiment, as well as one of eight emotions that include the six basic emotions of Ekman (Ekman, 1992) (anger, disgust, fear, joy, sadness and surprise) plus trust and anticipation. All scores are used as features in prediction in order to maximize their predictive power. • Volkova & Bachrach (V&B): We quantify positive, negative and neutral sentiment as well as the six Ekman emotions for each message using the model made available in (Volkova and Bachrach, 2016) and use them as features in predicting complaints. The sentiment model is trained on a data set of 19,555 tweets that combine all previously annotated tweets across seven public data sets. • VADER: We use the outcome of the rule-based sentiment analysis model which has shown very good predictive performance on predicting sentiment in tweets (Gilbert and Hutto, 2014). • Stanford: We quantify sentiment using the Stanford sentiment prediction model as described in (Socher et al., 2013). Complaint Specific Features. The features in this category are inspired by linguistic aspects specific to complaints (Meinl, 2013): • Request. The illocutionary function of complaints is often that of requesting for a correction or reparation for the event that caused the breach of expectations (Olshtain and Weinbach, 1987). We explicitly predict if an utterance is a request using the model introduced in (Danescu-NiculescuMizil et al., 2013). 
• Intensifiers. In order to increase the facethreatening effect a complaint has on the complainee, intensifiers are usually used by the person expressing the complaint (Meinl, 2013). We use features derived from: (1) capitalization patterns often used online as an equivalent to shouting (e.g., number/percentage of capitalized words, number/percentage of words starting with capitals, number/percentage of capitalized letters); and (2) repetitions of exclamation marks, question marks or letters within the same token. • Downgraders and Politeness Markers. In contrast to intensifiers, downgrading modifiers are used to reduce the face-threat involved when voicing a complaint, usually as part of a strategy to obtain a reparation for the breach of expectation (Meinl, 2013). Downgraders are coded by several dictionaries: play down (e.g., i wondered if), understaters (e.g., one little), disarmers (e.g., but), downtoners (e.g., just) and hedges (e.g., somewhat). Politeness markers have a similar effect to downgraders and include apologies (e.g., sorry), greetings at the start, direct questions, direct start (e.g., so), indicative modals (e.g., can you), subjunctive modals (e.g., could you), politeness markers (e.g., please) (Svarova, 2008) and politeness maxims (e.g., i must say). Finally, we directly predict the politeness score of the tweet using the model presented in (DanescuNiculescu-Mizil et al., 2013). • Temporal References. Temporal references are often used in complaints to stress how long a complainer has been waiting for a correction or reparation from the addressee or to provide context for their complaint (e.g., mentioning the date in which they have bought an item) (Meinl, 2013). We identify time expressions in tweets using SynTime, which achieved state-of-the-art results across on several benchmark data sets (Zhong et al., 2017). We represent temporal expressions both as days elapsed relative to the day of the post and in buckets of different granularities (one day, week, month, year). • Pronoun Types. Pronouns are used in complaints to reveal the personal involvement or opinion of the complainer and intensify or reduce the face-threat of the complaint based on the person or type of the pronoun (Claridge, 2007; Meinl, 2013). We split pronouns using dictionaries into: first person, second person, third person, demonstrative (e.g., this) and indefinite (e.g., everybody). 5 Linguistic Feature Analysis This section presents a quantitative analysis of the linguistic features distinctive of tweets containing complains in order to gain linguistic insight into this task and data. We perform analysis of all previously described feature sets using univariate Pearson correlation (Schwartz et al., 2013). We compute correlations independently for each feature between its distribution across messages (features are first normalized to sum up to unit for each message) and a variable encoding if the tweet was annotated as a complaint or not. Top unigrams and part-of-speech features specific of complaints and non-complaints are presented in Table 4. The top features for the LIWC 5013 Complaints Not Complaints Feature r Feature r Unigrams not .154 <URL> .150 my .131 ! .082 working .124 he .069 still .123 thank .067 on .119 , .064 can’t .113 love .064 service .112 lol .061 customer .109 you .060 why .108 great .058 website .107 win .058 no .104 ’ .058 ? 
.098 she .054 fix .093 : .053 won’t .092 that .053 been .090 more .052 issue .089 it .052 days .088 would .051 error .087 him .047 is .084 life .046 charged .083 good .046 POS (Unigrams and Bigrams) VBN .141 UH .104 $ .118 NNP .098 VBZ .114 PRP .076 NN VBZ .114 HT .076 PRP$ .107 PRP . .076 PRP$ NN .105 PRP RB .067 VBG .093 NNP NNP .062 CD .092 VBP PRP .054 WRB VBZ .084 JJ .053 VBZ VBN .084 DT JJ .051 Table 4: Features associated with complaint and noncomplaint tweets, sorted by Pearson correlation (r) computed between the normalized frequency of each feature and the complaint label across all tweets. All correlations are significant at p < .01, two-tailed t-test, Simes corrected. categories and Word2Vec topics are presented in Table 5. All correlations shown in these tables are statistically significant at p < .01, with Simes correction for multiple comparisons. Negations. Negations are uncovered through unigrams (not, no, won’t) and the top LIWC category (NEGATE). Central to complaining is the concept of breached expectations. Hence the complainers use negations to express this discrepancy and to describe their experience with the product or service that caused this. Issues. Several unigrams (error, issue, working, fix) and a cluster (Issues) contain words referring to issues or errors. However, words regularly describing negative sentiment or emotions are not one of the most distinctive features for complaints. On the other hand, the presence of terms that show positive sentiment or emotions (good, great, win, POSEMO, AFFECT, ASSENT) are among the top most distinctive features for a tweet not being labeled as a complaint. In addition, other words and clusters expressing positive states such as gratitude (thank, great, love) or laughter (lol) are also distinctive for tweets that are not complaints. Linguistics research on complaints in longer documents identified that complaints are likely to co-occur with other speech acts, including with expressions of positive or negative emotions (V´asquez, 2011). In our data set, perhaps due to the particular nature of Twitter communication and the character limit, complainers are much more likely to not express positive sentiment in a complaint and do not regularly post negative sentiment. Instead, they choose to focus more on describing the issue regarding the service or product in an attempt to have it resolved. Pronouns. Across unigrams, part-of-speech patterns and word clusters, we see a distinctive pattern emerging around pronoun usage. Complaints use more possessive pronouns, indicating that the user is describing personal experiences. A distinctive part-of-speech pattern common in complaints is possessive pronouns followed by nouns (PRP$ NN) which refer to items of services possessed by the complainer (e.g., my account, my order). Complaints tend to not contain personal pronouns (he, she, it, him, you, SHEHE, MALE, FEMALE), as the focus on expressing the complaint is on the self and the party the complaint is addressed to and not other third parties. Punctuation. Question marks are distinctive of complaints, as many complaints are formulated as questions to the responsible party (e.g., why is this not working?, when will I get my response?). Complaints are not usually accompanied by exclamation marks. Although exclamation marks are regularly used for emphasis in the context of complaints, most complainers in our data set prefer not to use them perhaps in an attempt to address them in a less confrontational manner. Temporal References. 
Mentions of time are specific of complaints (been, still, on, days, Temporal References cluster). Their presence is usually needed to provide context for the event that caused the breach of expectations. Another role of temporal references is to express dissatisfaction towards non-responsiveness of the responsible party in addressing their previous requests. In addition, the presence of verbs in past participle (VBN) is the most distinctive part-of-speech pattern of complaints. These are used to describe actions com5014 Complaints Not Complaints Label Words r Label Words r LIWC Features NEGATE not, no, can’t, don’t, never, nothing, doesn’t, won’t .271 POSEMO thanks, love, thank, good, great, support, lol, win .185 RELATIV in, on, when, at, out, still, now, up, back, new .225 AFFECT thanks, love, thank, good, great, support, lol .111 FUNCTION the, i, to, a, my, and, you, for, is, in .204 SHEHE he, his, she, her, him, he’s, himself .105 TIME when, still, now, back, new, never, after, then, waiting .186 MALE he, his, man, him, sir, he’s, son .086 DIFFER not, but, if, or, can’t, really, than, other, haven’t .169 FEMALE she, her, girl, mom, ma, lady, mother, female, mrs .084 COGPROC not, but, how, if, all, why, or, any, need .132 ASSENT yes, ok, awesome, okay, yeah, cool, absolutely, agree .080 Word2Vec Clusters Cust. Service service, customer, contact, job, staff, assist, agent .136 Gratitude thanks, thank, good, great, support, everyone, huge, proud .089 Order order, store, buy, free, delivery, available, package .128 Family old, friend, family, mom, wife, husband, younger .063 Issues delayed, closed, between, outage, delay, road, accident .122 Voting favorite, part, stars, model, vote, models, represent .060 Time Ref. been, yet, haven’t, long, happened, yesterday, took .122 Contests Christmas, gift, receive, entered, giveaway, enter, cards .058 Tech Parts battery, laptop, screen, warranty, desktop, printer .100 Pets dogs, cat, dog, pet, shepherd, fluffy, treats .054 Access use, using, error, password, access, automatically, reset .098 Christian god, shall, heaven, spirit, lord, belongs, soul, believers .053 Table 5: Group text features associated with tweets that are complaints and not complaints. Features are sorted by Pearson correlation (r) between their each feature’s normalized frequency and the outcome. We restrict to only the top six categories for each feature type. All correlations are significant at p < .01, two-tailed t-test, Simes corrected. Within each cluster, words are sorted by frequency in our data set. Labels for Word2Vec clusters are assigned by the authors. pleted in the past (e.g., i’ve bought, have come) in order to provide context for the complaint. Verbs. Several part-of-speech patterns distinctive of complaints involve present verbs in third person singular (VBZ). In general, these verbs are used in complaints to reference an action that the author expects to happen, but his expectations are breached (e.g., nobody is answering). Verbs in gerund or present participle are used as a complaint strategy to describe things that just happened to a user (e.g., got an email saying my service will be terminated). Topics. General topics typical of complaint tweets include requiring assistance or customer support. Several groups of words are much more likely to appear in a complaint, although not used to express complaints per se: about orders or deliveries (in the retail domain), about access (in complaints to service providers) and about parts of tech products (in tech). 
This is natural, as people are more likely to deliberately tweet about an order or tech parts if they want to complain about them. This is similar to sentiment analysis, where not only emotionally valenced words are predictive of sentiment.

6 Predicting Complaints

In this section, we experiment with different approaches to build predictive models of complaints from text content alone. We first experiment with feature-based approaches including Logistic Regression classification with Elastic Net regularization (LR) (Zou and Hastie, 2005).4 We train the classifiers with all individual feature types.

Neural Methods. For reference, we experiment with two neural architectures. In both architectures, tweets are represented as sequences of one-hot word vectors which are first mapped into embeddings. A multi-layer perceptron (MLP) network (Hornik et al., 1989) feeds the embedded representation (E = 200) of the tweet (the mean embedding of its constituent words) into a dense hidden layer (D = 100) followed by a ReLU activation function and dropout (0.2). The output layer is a one-dimensional dense layer with a sigmoid activation function. The second architecture, a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network, processes the tweet sequentially by modeling one word (embedding) at each time step, followed by the same output layer as in the MLP. The size of the hidden state of the LSTM is L = 50. We train the networks using the Adam optimizer (Kingma and Ba, 2014) (the learning rate is set to 0.01) by minimizing the binary cross-entropy.

Experimental Setup. We conduct experiments using a nested stratified 10-fold cross-validation, where nine folds are used for training and one for testing (i.e., the outer loop). In the inner loop, we choose the model parameters5 using a 3-fold cross-validation on the tweets from the nine folds of training data (from the outer loop). Train/dev/test splits for each experiment are released together with the data for replicability. We report predictive performance of the models as the mean accuracy, F1 (macro-averaged) and ROC AUC over the 10 folds (Dietterich, 1998).

4 We use the Scikit Learn implementation (Pedregosa et al., 2011).
5 We tune the regularization term α and the mixing parameter of the LR model. For the neural networks, we tune the size of the embedding E, the dense hidden layer D, the LSTM cells L and the learning rate of the optimizer.

Model                       Acc    F1     AUC
Most Frequent Class         64.2   39.1   0.500
Logistic Regression
  Sentiment – MPQA          64.2   39.1   0.499
  Sentiment – NRC           63.9   42.2   0.599
  Sentiment – V&B           68.9   60.0   0.696
  Sentiment – VADER         66.0   54.2   0.654
  Sentiment – Stanford      68.0   55.6   0.696
  Complaint Specific (all)  65.7   55.2   0.634
  Request                   64.2   39.1   0.583
  Intensifiers              64.5   47.3   0.639
  Downgraders               65.4   49.8   0.615
  Temporal References       64.2   43.7   0.535
  Pronoun Types             64.1   39.1   0.545
  POS Bigrams               72.2   66.8   0.756
  LIWC                      71.6   65.8   0.784
  Word2Vec Clusters         67.7   58.3   0.738
  Bag-of-Words              79.8   77.5   0.866
  All Features              80.5   78.0   0.873
Neural Networks
  MLP                       78.3   76.2   0.845
  LSTM                      80.2   77.0   0.864
Table 6: Complaint prediction results using logistic regression (with different types of linguistic features), neural network approaches and the most frequent class baseline. Best results are in bold.

Results. Results are presented in Table 6. Most sentiment analysis models show accuracy above chance in predicting complaints. The best results are obtained by the Volkova & Bachrach model (Sentiment – V&B) which achieves 60 F1.
However, models trained using linguistic features on the training data obtain significantly higher predictive accuracy. Complaint specific features are predictive of complaints, but to a smaller extent than sentiment, reaching an overall 55.2 F1. From this group of features, the most predictive groups are intensifiers and downgraders. Syntactic part-of-speech features alone obtain higher performance than any sentiment or complaint feature group, showing that the syntactic patterns discussed in the previous section hold high predictive accuracy for the task. The topical features such as the LIWC dictionaries (which combine syntactic and semantic information) and Word2Vec topics perform in the same range as the part-of-speech tags. However, the best predictive performance is obtained using bag-of-word features, reaching an F1 of up to 77.5 and AUC of 0.866. Further, combining all features boosts predictive accuracy to 78 F1 and 0.864 AUC. We notice that neural network approaches are comparable, but do not outperform the best performing feature-based model, likely in part due to the training data size.

Model                              Acc    F1     AUC
Most Frequent Class                64.2   39.1   0.500
LR-All Features – Original Data    80.5   78.0   0.873
Dist. Supervision + Pooling        77.2   75.7   0.853
Dist. Supervision + EasyAdapt      81.2   79.0   0.885
Table 7: Complaint prediction results using the original data set and distantly supervised data. All models are based on logistic regression with bag-of-word and part-of-speech tag features.

Distant Supervision. We explore the idea of identifying extra complaint data using distant supervision to further boost predictive performance. Previous work has demonstrated improvements on related tasks relying on weak supervision, e.g., in the form of tweets with related hashtags (Bamman and Smith, 2015; Volkova and Bachrach, 2016; Cliche, 2017). Following the same procedure, seven hashtags were identified with the help of the training data to likely correspond to complaints: #appallingcustomercare, #badbusiness, #badcustomerserivice, #badservice, #lostbusiness, #unhappycustomer, #worstbrand. Tweets containing these hashtags were collected from a combination of the 1% Twitter archive between 2012 and 2018 and by filtering tweets with these hashtags in real time from the Twitter REST API for three months. We collected in total 18,218 tweets (excluding retweets and duplicates) treated as complaints. As negative examples, the same number of tweets were sampled randomly from the same time interval. All hashtags were removed and the data was preprocessed identically to the annotated data set.

We experiment with two techniques for combining distantly supervised data with our annotated data. First, the tweets obtained through distant supervision are simply added to the annotated training data in each fold (Pooling). Secondly, as important signal may be washed out if the features are joined across both domains, we experiment with domain adaptation using the popular EasyAdapt algorithm (Daumé III, 2007) (EasyAdapt). Experiments use logistic regression with bag-of-word features enhanced with part-of-speech tags, because these performed best in the previous experiment. Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.5) and ROC AUC by 0.012. However, simply pooling the data actually hurts predictive performance, leading to a drop of more than 2 points in F1.
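For reference, the EasyAdapt combination used above amounts to the feature-augmentation trick of Daumé III (2007): every example keeps a shared copy of its features plus a copy specific to its domain, with the other domains' copies zeroed out. The sketch below is a minimal illustration under assumptions (toy arrays, scikit-learn's default logistic regression rather than the paper's exact Elastic Net setup).

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, hstack
from sklearn.linear_model import LogisticRegression

def easyadapt(X, domain_ids, n_domains):
    """Feature augmentation of Daume III (2007): one shared copy of the
    features plus one copy per domain, where each domain-specific copy
    zeroes out the rows belonging to the other domains."""
    X = csr_matrix(X)
    blocks = [X]  # shared ("general domain") copy
    for d in range(n_domains):
        row_mask = diags((domain_ids == d).astype(float))
        blocks.append(row_mask @ X)  # keeps only rows from domain d
    return hstack(blocks).tocsr()

# Toy usage: domain 0 = distantly supervised tweets, domain 1 = annotated tweets.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(8, 5)).astype(float)  # stand-in for bag-of-words + POS features
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])             # 1 = complaint
domains = np.array([0, 0, 0, 0, 1, 1, 1, 1])

X_aug = easyadapt(X, domains, n_domains=2)          # shape (8, 15)
clf = LogisticRegression(max_iter=1000).fit(X_aug, y)
print(clf.predict(X_aug))
```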
5016 Domain In-Domain Pooling EasyAdapt Food & Beverage 63.9 60.9 83.1 Apparel 76.2 71.1 72.5 Retail 58.8 79.7 79.7 Cars 41.5 77.8 80.9 Services 65.2 75.9 76.7 Software 61.3 73.4 78.7 Transport 56.4 73.4 69.8 Electronics 66.2 73.0 76.2 Other 42.4 82.8 82.8 Table 8: Performance of models in Macro F1 on tweets from each domain. Domain Experiments We assess the performance of models trained using the best method and features by using in training: (1) using only indomain data (In-Domain); (2) adding out-ofdomain data into the training set (Pooling); and (3) combining in- and out-of-domain data with EasyAdapt domain adaptation (EasyAdapt). The experimental setup is identical to the one described in the previous experiments. Table 8 shows the model performance in macro-averaged F1 using the best performing feature set. Results show that, in all but one case, adding out-of-domain data helps predictive performance. The apparel domain is qualitatively very different from the others as a large number of complaints are about returns or the company not stocking items, hence leading to different features being important for prediction. Domain adaptation is beneficial the majority of domains, lowering performance on a single domain compared to data pooling. This highlights the differences in expressing complaints across domains. Overall, predictive performance is high across all domains, with the exception of transport. Cross Domain Experiments Finally, Table 9 presents the results of models trained on tweets from one domain and tested on all tweets from other domains, with additional models trained on tweets from all domains except the one that the model is tested on. We observe that predictive performance is relatively consistent across all domains with two exceptions (‘Food & Beverage’ consistently shows lower performance, while ‘Other’ achieves higher performance) when using all the data available from the other domains. 7 Conclusions & Future Work We presented the first computational approach using methods from computational linguistics and machine learning to modeling complaints as deTest F&B A R Ca Se So T E O Train Food & Bev. – 58.1 52.5 66.4 59.7 58.9 54.1 61.4 53.7 Apparel 63.9 – 74.4 65.1 70.8 71.2 68.5 76.9 85.6 Retail 58.8 74.4 – 70.1 72.6 69.9 68.7 69.6 82.7 Cars 68.7 61.1 65.1 – 58.8 67. 59.3 62.9 68.2 Services 65. 74.2 75.8 74. – 68.8 74.2 77.9 77.9 Software 62. 74.2 68. 67.9 72.8 – 72.8 72.1 80.6 Transport 59.3 71.7 72.4 67. 74.6 75. – 72.6 81.7 Electronics 61.6 75.2 71. 68. 75. 69.9 68.2 – 78.7 Other 56.1 71.3 72.4 70.2 73.5 67.2 68.5 71. – All 70.3 77.7 79.5 82.0 79.6 80.1 76.8 81.7 88.2 Table 9: Performance of models trained with tweets from one domain and tested on other domains. All results are reported in ROC AUC. The All line displays results on training on all categories except the category in testing. fined in prior studies in linguistics and pragmatics (Olshtain and Weinbach, 1987). To this end, we introduced the first data set consisting of English Twitter posts annotated with complaints across nine domains. We analyzed the syntactic patterns and linguistic markers specific of complaints. Then, we built predictive models of complaints in tweets using a wide range of features reaching up to 79% Macro F1 (0.885 AUC) and conducted experiments using distant supervision and domain adaptation to boost predictive performance. 
We studied performance of complaint prediction models on each individual domain and presented results with a domain adaptation approach which overall improves predictive accuracy. All data and code is available to the research community to foster further research on complaints. A predictive model for identification of complaints is useful to companies that wish to automatically gather and analyze complaints about a particular event or product. This would allow them to improve efficiency in customer service or to more cheaply gauge popular opinion in a timely manner in order to identify common issues around a product launch or policy proposal. In the future, we plan to identify the target of the complaint in a similar way to aspect-based sentiment analysis (Pontiki et al., 2016). We plan to use additional context and conversational structure to improve performance and identify the sociodemographic covariates of expressing and phrasing complaints. Another research direction is to study the role of complaints in personal conversation or in the political domain, e.g., predicting political stance in elections (Tsakalidis et al., 2018). Acknowledgments Nikolaos Aletras is supported by an Amazon AWS Cloud Credits for Research award. 5017 References Nikolaos Aletras and Benjamin Paul Chamberlain. 2018. Predicting Twitter User Socioeconomic Attributes with Network and Language Information. In Proceedings of the 29th on Hypertext and Social Media, HT, pages 20–24. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for Computational Linguistics. Computational Linguistics, 34(4):555–596. John Langshaw Austin. 1975. How to do Things with Words. Oxford University Press. David Bamman and Noah A Smith. 2015. Contextualized Sarcasm Detection on Twitter. In Proceedings of the 9th International Conference on Weblogs and Social Media, ICWSM, pages 574–577. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-Boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, ACL, pages 440–447. Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, pages 31–40. Diana Boxer. 1993a. Complaints as Positive Strategies: What the Learner Needs to Know. Tesol Quarterly, 27(2):277–299. Diana Boxer. 1993b. Social Distance and Speech Behavior: The Case of Indirect Complaints. Journal of Pragmatics, 19(2):103–125. Penelope Brown and Stephen C Levinson. 1987. Politeness: Some Universals in Language Usage, volume 4. Cambridge University Press. Alessandra Cervone, Evgeny A Stepanov, Fabio Celli, and Giuseppe Riccardi. 2017. Irony detection: from the twittersphere to the news space. In CLiC-it 2017-Italian Conference on Computational Linguistics, volume 2006. Claudia Claridge. 2007. Constructing a Corpus from the Web: Message Boards. Language and Computers, 59(87). Mathieu Cliche. 2017. BB twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2017), *SEM, pages 573–580. Andrew D Cohen and Elite Olshtain. 1993. The Production of Speech Acts by EFL Learners. Tesol Quarterly, 27(1):33–56. Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. 
A Computational Approach to Politeness with Application to Social Factors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL, pages 250–259. Hal Daum´e III. 2007. Frustratingly Easy Domain Adaptation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, ACL, pages 256–263. Leon Derczynski, Alan Ritter, Sam Clark, and Kalina Bontcheva. 2013. Twitter Part-of-Speech Tagging for All: Overcoming Sparse and Noisy Data. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP, pages 198–206. Thomas G Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural computation, 10(7):1895–1923. Paul Ekman. 1992. An Argument for Basic Emotions. Cognition & Emotion, 6(3-4):169–200. CJ Gilbert and Eric Hutto. 2014. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. In Proceedings of the 8th International Conference on Weblogs and Social Media, ICWSM, pages 216–225. Erving Goffman. 1967. Interaction Ritual: Essays on Face-to-Face Interaction. Aldine. Roberto Gonz´alez-Ib´anez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying Sarcasm in Twitter: A Closer Look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2, ACL, pages 581–586. Beverly Hartford and Ahmar Mahboob. 2004. Models of Discourse in the Letter of Complaint. World Englishes, 23(4):585–600. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735– 1780. Kurt Hornik, Maxwell Stinchcombe, and Halbert White. 1989. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Marty Laforest. 2002. Scenes of Family Life: Complaining in Everyday Conversation. Journal of Pragmatics, 34(10-11):1595–1620. Vasileios Lampos, Nikolaos Aletras, Jens K. Geyti, Bin Zou, and Ingemar J. Cox. 2016. Inferring the Socioeconomic Status of Social Media Users Based on Behaviour and Language. In Advances in Information Retrieval, pages 689–695. 5018 Vasileios Lampos, Nikolaos Aletras, Daniel Preot¸iucPietro, and Trevor Cohn. 2014. Predicting and Characterising User Impact on Twitter. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL, pages 405–413. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf Language Identification Tool. In Proceedings of the ACL 2012 system demonstrations, ACL, pages 25–30. Marja E Meinl. 2013. Electronic Complaints: An Empirical Study on British English and German Complaints on eBay, volume 18. Frank & Timme GmbH. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 746–751. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2018. Deconfounded Lexicon Induction for Interpretable Social Science. In Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 746–751. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. 
Semeval-2016 task 6: Detecting Stance in Tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), *SEM, pages 31–41. Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2018), *SEM, pages 1–17. Saif M Mohammad and Svetlana Kiritchenko. 2015. Using Hashtags to Capture Fine Emotion Categories from Tweets. Computational Intelligence, 31(2):301–326. Saif M. Mohammad and Peter D. Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In Proceedings of the Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, NAACL, pages 26–34. Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. Computational Intelligence, 29(3):436–465. Elite Olshtain and Liora Weinbach. 1987. Complaints: A Study of Speech Act Behavior among Native and Non-native Speakers of Hebrew. Bertuccelli-Papi, M. (Eds.), The Pragmatic Perspective: Selected Papers from the 1985 International Pragmatics Conference, pages 195–208. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine Learning in Python. JMLR, 12. James W. Pennebaker, Roger J. Booth, Ryan L. Boyd, and Martha E. Francis. 2015. Linguistic Inquiry and Word Count: LIWC2015. Austin, TX: Pennebaker Conglomerates. James W. Pennebaker, Martha E. Francis, and Roger J. Booth. 2001. Linguistic Inquiry and Word Count. Mahway: Lawrence Erlbaum Associates. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1532–1543. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, AL-Smadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph´ee De Clercq, et al. 2016. SemEval2016 Task 5: Aspect based Sentiment Analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 19–30. Daniel Preot¸iuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015. An Analysis of the User Occupational Class through Twitter Content. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, ACL, pages 1754–1764. Daniel Preot¸iuc-Pietro and Lyle Ungar. 2018. UserLevel Race and Ethnicity Predictors from Twitter Text. In Proceedings of the 27th International Conference on Computational Linguistics, COLING, pages 1534–1545. Daniel Preot¸iuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL, pages 729–740. Daniel Preot¸iuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015. Studying user income through language, behaviour and affect in social media. PloS one, 10(9):e0138717. Marilu Ranosa-Madrunio. 2004. The Discourse Organization of Letters of Complaint to Editors in Philippine English and Singapore English. Philippine Journal of Linguistics, 35(2):67–97. Sara Rosenthal, Noura Farra, and Preslav Nakov. 
2017. SemEval-2017 Task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), *SEM, pages 502–518. 5019 H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, and Martin EP Seligman. 2013. Personality, Gender, and Age in the Language of Social Media: The Open-vocabulary Approach. PloS ONE, 8(9). H. Andrew Schwartz, Salvatore Giorgi, Maarten Sap, Patrick Crutchley, Johannes Eichstaedt, and Lyle Ungar. 2017. DLATK: Differential Language Analysis ToolKit. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP, pages 55–60. John R Searle. 1969. Speech Acts: An Essay in the Philosophy of Language, volume 626. Cambridge University Press. Jianbo Shi and Jitendra Malik. 2000. Normalized Cuts and Image Segmentation. Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1631–1642. Jana Svarova. 2008. Politeness Markers in Spoken Language. Ph.D. thesis, Masarykova Univerzita. Anna Trosborg. 1995. Interlanguage Pragmatics: Requests, Complaints, and Apologies, volume 7. Walter de Gruyter. Adam Tsakalidis, Nikolaos Aletras, Alexandra I Cristea, and Maria Liakata. 2018. Nowcasting the stance of social media users in a sudden vote: The case of the Greek Referendum. CIKM, pages 367– 376. Cynthia Van Hee, Els Lefever, and V´eronique Hoste. 2016. Monday Mornings are my Fave:)# not Exploring the Automatic Recognition of Irony in English tweets. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2730–2739. Cynthia Van Hee, Els Lefever, and Veronique Hoste. 2018. Semeval-2018 Task 3: Irony detection in English Tweets. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2018), *SEM, pages 39–50. Camilla V´asquez. 2011. Complaints Online: The case of TripAdvisor. Journal of Pragmatics, 43(6):1707– 1717. Alakananda Vempala and Daniel Preot¸iuc-Pietro. 2019. Categorizing and Inferring the Relationship between the Text and Image of Twitter Posts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL. Svitlana Volkova and Yoram Bachrach. 2016. Inferring Perceived Demographics from User Emotional Tone and User-Environment Emotional Contrast. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL, pages 1567–1578. Svitlana Volkova and Eric Bell. 2017. Identifying Effective Signals to Predict Deleted and Suspended Accounts on Twitter across Languages. ICWSM, pages 290–298. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating Expressions of Opinions and Emotions in Language. Language Resources and Evaluation, 39(2-3):165–210. Wei Yang, Luchen Tan, Chunwei Lu, Anqi Cui, Han Li, Xi Chen, Kun Xiong, Muzi Wang, Ming Li, Jian Pei, and Jimmy Lin. 2019. Detecting Customer Complaint Escalation with Recurrent Neural Networks and Manually-Engineered Features. 
In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Industry Track), NAACL, pages 56–63. Xiaoshi Zhong, Aixin Sun, and Erik Cambria. 2017. Time expression analysis and recognition using syntactic token types and general heuristic rules. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL, pages 420– 429. Guangyu Zhou and Kavita Ganesan. 2016. Linguistic Understanding of Complaints and Praises in User Reviews. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), NAACL, pages 109–114. Hui Zou and Trevor Hastie. 2005. Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320.
2019
495
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5020–5031 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5020 TWEETQA: A Social Media Focused Question Answering Dataset Wenhan Xiong†, Jiawei Wu†, Hong Wang†, Vivek Kulkarni†, Mo Yu∗, Shiyu Chang∗, Xiaoxiao Guo∗, William Yang Wang† † University of California, Santa Barbara ∗IBM Research {xwhan, william}@cs.ucsb.edu, [email protected], {shiyu.chang, xiaoxiao.guo}@ibm.com Abstract With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on realtime knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, we allow the answers to be abstractive. We show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset. In addition, even the finetuned BERT model is still lagging behind human performance with a large margin. Our results thus point to the need of improved QA systems targeting social media text. 1 1 Introduction Social media is now becoming an important realtime information source, especially during natural disasters and emergencies. It is now very common for traditional news media to frequently probe users and resort to social media platforms to obtain real-time developments of events. According to a recent survey by Pew Research Center2, in 2017, more than two-thirds of Americans read some of their news on social media. Even for American people who are 50 or older, 55% of them report getting news from social media, 1The Dataset can be found at https://tweetqa. github.io/. 2http://www.journalism.org/2017/09/07/news-useacross-social-media-platforms-2017/ Passage: Oh man just read about Paul Walkers death. So young. Ugggh makes me sick especially when it’s caused by an accident. God bless his soul. – Jay Sean (@jaysean) December 1, 2013 Q: why is sean torn over the actor’s death? A: walker was young Table 1: An example showing challenges of TWEETQA. Note the highly informal nature of the text and the presence of social media specific text like usernames which need to be comprehended to accurately answer the question. which is 10% points higher than the number in 2016. Among all major social media sites, Twitter is most frequently used as a news source, with 74% of its users obtaining their news from Twitter. All these statistical facts suggest that understanding user-generated noisy social media text from Twitter is a significant task. In recent years, while several tools for core natural language understanding tasks involving syntactic and semantic analysis have been developed for noisy social media text (Gimpel et al., 2011; Ritter et al., 2011; Kong et al., 2014; Wang et al., 2014), there is little work on question answering or reading comprehension over social media, with the primary bottleneck being the lack of available datasets. We observe that recently proposed QA datasets usually focus on formal domains, e.g. 
CNN/DAILYMAIL (Hermann et al., 2015) and NewsQA (Trischler et al., 2016) on news articles; SQuAD (Rajpurkar et al., 2016) and WIKIMOVIES (Miller et al., 2016) that use Wikipedia. In this paper, we propose the first large-scale dataset for QA over social media data. Rather than naively obtaining tweets from Twitter using 5021 the Twitter API3 which can yield irrelevant tweets with no valuable information, we restrict ourselves only to tweets which have been used by journalists in news articles thus implicitly implying that such tweets contain useful and relevant information. To obtain such relevant tweets, we crawled thousands of news articles that include tweet quotations and then employed crowd-sourcing to elicit questions and answers based on these event-aligned tweets. Table 1 gives an example from our TWEETQA dataset. It shows that QA over tweets raises challenges not only because of the informal nature of oral-style texts (e.g. inferring the answer from multiple short sentences, like the phrase “so young” that forms an independent sentence in the example), but also from tweet-specific expressions (such as inferring that it is “Jay Sean” feeling sad about Paul’s death because he posted the tweet). Furthermore, we show the distinctive nature of TWEETQA by comparing the collected data with traditional QA datasets collected primarily from formal domains. In particular, we demonstrate empirically that three strong neural models which achieve good performance on formal data do not generalize well to social media data, bringing out challenges to developing QA systems that work well on social media domains. In summary, our contributions are: • We present the first question answering dataset, TWEETQA, that focuses on social media context; • We conduct extensive analysis of questions and answer tuples derived from social media text and distinguish it from standard question answering datasets constructed from formaltext domains; • Finally, we show the challenges of question answering on social media text by quantifying the performance gap between human readers and recently proposed neural models, and also provide insights on the difficulties by analyzing the decomposed performance over different question types. 2 Related Work Tweet NLP Traditional core NLP research typically focuses on English newswire datasets such as the Penn Treebank (Marcus et al., 1993). In recent 3https://developer.twitter.com/ years, with the increasing usage of social media platforms, several NLP techniques and datasets for processing social media text have been proposed. For example, Gimpel et al. (2011) build a Twitter part-of-speech tagger based on 1,827 manually annotated tweets. Ritter et al. (2011) annotated 800 tweets, and performed an empirical study for partof-speech tagging and chunking on a new Twitter dataset. They also investigated the task of Twitter Named Entity Recognition, utilizing a dataset of 2,400 annotated tweets. Kong et al. (2014) annotated 929 tweets, and built the first dependency parser for tweets, whereas Wang et al. (2014) built the Chinese counterpart based on 1,000 annotated Weibo posts. To the best of our knowledge, question answering and reading comprehension over short and noisy social media data are rarely studied in NLP, and our annotated dataset is also an order of magnitude large than the above public social-media datasets. Reading Comprehension Machine reading comprehension (RC) aims to answer questions by comprehending evidence from passages. 
This direction has recently drawn much attention due to the fast development of deep learning techniques and large-scale datasets. The early development of the RC datasets focuses on either the cloze-style (Hermann et al., 2015; Hill et al., 2015) or quiz-style problems (Richardson et al., 2013; Lai et al., 2017). The former one aims to generate single-token answers from automatically constructed pseudo-questions while the latter requires choosing from multiple answer candidates. However, such unnatural settings make them fail to serve as the standard QA benchmarks. Instead, researchers started to ask human annotators to create questions and answers given passages in a crowdsourced way. Such efforts give the rise of large-scale human-annotated RC datasets, many of which are quite popular in the community such as SQuAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016), NewsQA (Trischler et al., 2016). More recently, researchers propose even challenging datasets that require QA within dialogue or conversational context (Reddy et al., 2018; Choi et al., 2018). According to the difference of the answer format, these datasets can be further divided to two major categories: extractive and abstractive. In the first category, the answers are in text spans of the given passages, while in the latter case, the answers may 5022 not appear in the passages. It is worth mentioning that in almost all previously developed datasets, the passages are from Wikipedia, news articles or fiction stories, which are considered as the formal language. Yet, there is little effort on RC over informal one like tweets. 3 TweetQA In this section, we first describe the three-step data collection process of TWEETQA: tweet crawling, question-answer writing and answer validation. Next, we define the specific task of TWEETQA and discuss several evaluation metrics. To better understand the characteristics of the TWEETQA task, we also include our analysis on the answer and question characteristics using a subset of QA pairs from the development set. 3.1 Data Collection Tweet Crawling One major challenge of building a QA dataset on tweets is the sparsity of informative tweets. Many users write tweets to express their feelings or emotions about their personal lives. These tweets are generally uninformative and also very difficult to ask questions about. Given the linguistic variance of tweets, it is generally hard to directly distinguish those tweets from informative ones. In terms of this, rather than starting from Twitter API Search, we look into the archived snapshots4 of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets that are posted by the official Twitter accounts of news media. However, these tweets are often just the summaries of news articles, which are written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach. After we extracted tweets from archived news articles, we observed that there is still a portion of tweets that have very simple semantic structures and thus are very difficult to raise meaningful questions. 
An example of such tweets can be like: 4https://archive.org/ Figure 1: An example we use to guide the crowdworkers when eliciting question answer pairs. We elicit question that are neither too specific nor too general, do not require background knowledge. “Wanted to share this today - @IAmSteveHarvey”. This tweet is actually talking about an image attached to this tweet. Some other tweets with simple text structures may talk about an inserted link or even videos. To filter out these tweets that heavily rely on attached media to convey information, we utilize a state-of-the-art semantic role labeling model trained on CoNLL-2005 (He et al., 2017) to analyze the predicate-argument structure of the tweets collected from news articles and keep only the tweets with more than two labeled arguments. This filtering process also automatically filters out most of the short tweets. For the tweets collected from CNN, 22.8% of them were filtered via semantic role labeling. For tweets from NBC, 24.1% of the tweets were filtered. Question-Answer Writing We then use Amazon Mechanical Turk to collect question-answer pairs for the filtered tweets. For each Human Intelligence Task (HIT), we ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure the quality, we require the workers to be located in major English speaking countries (i.e. Canada, US, and UK) and have an acceptance rate larger than 95%. Since we use tweets as context, lots of important information are contained in hashtags or even emojis. Instead of only showing the text to the workers, we use javascript to directly embed the whole tweet into 5023 each HIT. This gives workers the same experience as reading tweets via web browsers and help them to better compose questions. To avoid trivial questions that can be simply answered by superficial text matching methods or too challenging questions that require background knowledge. We explicitly state the following items in the HIT instructions for question writing: • No Yes-no questions should be asked. • The question should have at least five words. • Videos, images or inserted links should not be considered. • No background knowledge should be required to answer the question. To help the workers better follow the instructions, we also include a representative example showing both good and bad questions or answers in our instructions. Figure 1 shows the example we use to guide the workers. As for the answers, since the context we consider is relatively shorter than the context of previous datasets, we do not restrict the answers to be in the tweet, otherwise, the task may potentially be simplified as a classification problem. The workers are allowed to write their answers in their own words. We just require the answers to be brief and can be directly inferred from the tweets. After we retrieve the QA pairs from all HITs, we conduct further post-filtering to filter out the pairs from workers that obviously do not follow instructions. We remove QA pairs with yes/no answers. Questions with less than five words are also filtered out. This process filtered 13% of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. The collected QA pairs will be directly available to the public, and we will provide a script to download the original tweets and detailed documentation on how we build our dataset. 
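The SRL-based filtering step described above can be sketched as follows. Here `srl_parse` is a stand-in for a pretrained CoNLL-2005 semantic role labeler (He et al., 2017) returning PropBank-style BIO tags per predicate; applying the more-than-two-arguments threshold per predicate is an assumption about the paper's exact criterion.

```python
# A sketch of the SRL-based tweet filter described above. `srl_parse` is a
# placeholder for a pretrained CoNLL-2005 SRL model (He et al., 2017); it is
# assumed to return, for each predicate in the tweet, a tag sequence such as
# ["B-ARG0", "I-ARG0", "B-V", "B-ARG1", ...]. Whether the threshold is applied
# per predicate or over the whole tweet is an assumption.
from typing import Callable, List

def count_arguments(tags: List[str]) -> int:
    """Count distinct labeled arguments (ARG* spans) for one predicate."""
    return sum(1 for t in tags if t.startswith("B-ARG"))

def keep_tweet(tweet: str, srl_parse: Callable[[str], List[List[str]]]) -> bool:
    """Keep tweets whose predicate-argument structure has more than two arguments."""
    predicate_tags = srl_parse(tweet)            # one tag sequence per predicate
    if not predicate_tags:
        return False                             # no predicates: likely uninformative
    return max(count_arguments(tags) for tags in predicate_tags) > 2

# informative_tweets = [t for t in crawled_tweets if keep_tweet(t, srl_parse)]
```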
Also note that since we keep the original news article and news titles for each tweet, our dataset can also be used to explore more challenging generation tasks. Table 2 shows the statistics of our current collection, and the frequency of different types of questions is shown in Table 3. All QA pairs were written by 492 individual workers. Dataset Statistics # of Training triples 10,692 # of Development triples 1,086 # of Test triples 1,979 Average question length (#words) 6.95 Average answer length (#words) 2.45 Table 2: Basic statistics of TWEETQA Question Type Percentage What 42.33% Who 29.36% How 7.79% Where 7.00% Why 2.61% Which 2.43% When 2.16% Others 6.32% Table 3: Question Type statistics of TWEETQA Answer Validation For the purposes of human performance evaluation and inter-annotator agreement checking, we launch a different set of HITs to ask workers to answer questions in the test and development set. The workers are shown with the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label the questions as “NA” if they think the questions are not answerable. We find that 3.1% of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is 2.6%). Since the answers collected at this step and previous step are written by different workers, the answers can be written in different text forms even they are semantically equal to each other. For example, one answer can be “Hillary Clinton” while the other is “@HillaryClinton”. As it is not straightforward to automatically calculate the overall agreement, we manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that 90% of the answers pairs are semantically equivalent, 2% of them are partially equivalent (one of them is incomplete) and 8% are totally inconsistent. The answers collected at this step are also used to measure the human performance. We have 59 individual workers participated in this process. 3.2 Task and Evaluation As described in the question-answer writing process, the answers in our dataset are different from those in some existing extractive datasets. 5024 Thus we consider the task of answer generation for TWEETQA and we use several standard metrics for natural language generation to evaluate QA systems on our dataset, namely we consider BLEU-15 (Papineni et al., 2002), Meteor (Denkowski and Lavie, 2011) and RougeL (Lin, 2004) in this paper. To evaluate machine systems, we compute the scores using both the original answer and validation answer as references. For human performance, we use the validation answers as generated ones and the original answers as references to calculate the scores. 3.3 Analysis In this section, we analyze our dataset and outline the key properties that distinguish it from standard QA datasets like SQuAD (Rajpurkar et al., 2016). First, our dataset is derived from social media text which can be quite informal and user-centric as opposed to SQuAD which is derived from Wikipedia and hence more formal in nature. We observe that the shared vocabulary between SQuAD and TWEETQA is only 10.79%, suggesting a significant difference in their lexical content. Figure 2 shows the 1000 most distinctive words in each domain as extracted from SQuAD and TWEETQA. Note the stark differences in the words seen in the TWEETQA dataset, which include a large number of user accounts with a heavy tail. 
Examples include @realdonaldtrump, @jdsutter, @justinkirkland and #cnnworldcup, #goldenglobes. In contrast, the SQuAD dataset rarely has usernames or hashtags that are used to signify events or refer to the authors. It is also worth noting that the data collected from social media can not only capture events and developments in real-time but also capture individual opinions and thus requires reasoning related to the authorship of the content as is illustrated in Table 1. In addition, while SQuAD requires all answers to be spans from the given passage, we do not enforce any such restriction and answers can be free-form text. In fact, we observed that 43% of our QA pairs consists of answers which do not have an exact substring matching with their corresponding passages. All of the above distinguishing factors have implications to existing models 5The answer phrases in our dataset are relatively short so we do not consider other BLEU scores in our experiments which we analyze in upcoming sections. We conduct analysis on a subset of TWEETQA to get a better understanding of the kind of reasoning skills that are required to answer these questions. We sample 150 questions from the development set, then manually label their reasoning categories. Table 4 shows the analysis results. We use some of the categories in SQuAD (Rajpurkar et al., 2016) and also proposes some tweet-specific reasoning types. Our first observation is that almost half of the questions only require the ability to identify paraphrases. Although most of the “paraphrasing only” questions are considered as fairly easy questions, we find that a significant amount (about 3/4) of these questions are asked about event-related topics, such as information about “who did what to whom, when and where”. This is actually consistent with our motivation to create TWEETQA, as we expect this dataset could be used to develop systems that automatically collect information about real-time events. Apart from these questions, there are also a group of questions that require understanding common sense, deep semantics (i.e. the answers cannot be derived from the literal meanings of the tweets), and relations of sentences6 (including coreference resolution), which are also appeared in other RC datasets (Rajpurkar et al., 2016). On the other hand, the TWEETQA also has its unique properties. Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data: • Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets related to the authors. • Oral English & Tweet English: Tweets are often oral and informal. QA over tweets requires the understanding of common oral English. Our TWEETQA also requires understanding some tweet-specific English, like conversation-style English. • Understanding of user IDs & hashtags: Tweets often contains user IDs and hashtags, which are single special tokens. Understanding these special tokens is important to answer person- or event-related questions. 6There are more instances of this reasoning type compared to formal datasets since tweets are usually short sentences. 5025 Type Fraction (%) Example Paraphrasing only 47.3 P: Belgium camp is 32 miles from canceled game at US base. Surprised Klinsmann didn’t offer to use his helicopter pilot skills to give a ride. – Grant Wahl (@GrantWahl) Q: what expertise does klinsmann possess? 
A: helicopter pilot skills Types Beyond Paraphrasing Sentence relations 10.7 P: My heart is hurting. You were an amazing tv daddy! Proud and honored to have worked with one of the best. Love and Prayers #DavidCassidy— Alexa PenaVega (@alexavega) November 22, 2017 Q: who was an amazing tv daddy? A: #davidcassidy Authorship 17.3 P: Oh man just read about Paul Walkers death. So young. Ugggh makes me sick especially when it’s caused by an accident. God bless his soul. – Jay Sean (@jaysean) Q: why is sean torn over the actor’s death? A: walker was young Oral/Tweet English habits 10.7 P: I got two ways to watch the OLYMPICS!! CHEAH!! USA!! Leslie Jones (@Lesdoggg) August 6, 2016 Q: who is being cheered for? A: usa UserIDs & Hashtags 12.0 P: Started researching this novel in 2009. Now it is almost ready for you to read. Excited! #InTheUnlikelyEvent – Judy Blume (@judyblume) Q: what is the name of the novel? A: in the unlikely event. Other commonsense 6.7 P: Don’t have to be Sherlock Holmes to figure out what Russia is up to ... – Lindsey Graham (@LindseyGrahamSC) Q: what literary character is referenced? A: sherlock holmes. Deep semantic 3.3 P: @MayorMark its all fun and games now wait until we are old enough to vote #lastlaugh – Dylan (@DFPFilms1) Q: when does the author suggest a change? A: when he’s of voting age. Ambiguous 5.3 P: The #endangeredriver would be a sexy bastard in this channel if it had water. Quick turns. Narrow. (I’m losing it) – John D. Sutter (@jdsutter) (Meaningless questions) Q: what is this user ”losing” A: he is losing it Table 4: Types of reasoning abilities required by TWEETQA. Underline indicates tweet-specific reasoning types, which are common in TWEETQA but are rarely observed in previous QA datasets. Note that the first type represents questions that only require the ability of paraphrasing, while the rest of the types require some other more salient abilities besides paraphrasing. Overlaps could exist between different reasoning types in the table. For example, the second example requires both the understanding of sentences relations and tweet language habits to answer the question; and the third example requires both the understanding of sentences relations and authorship. 5026 Figure 2: Visualization of vocabulary differences between SQuAD (left) and TWEETQA (right). Note the presence of a heavy tail of hash-tags and usernames on TWEETQA that are rarely found on SQuAD. The color range from red to gray indicates the frequency (red the highest and gray the lowest). 4 Experiments To show the challenge of TweetQA for existing approaches, we consider four representative methods as baselines. For data processing, we first remove the URLs in the tweets and then tokenize the QA pairs and tweets using NLTK.7 This process is consistent for all baselines. 4.1 Query Matching Baseline We first consider a simple query matching baseline similar to the IR baseline in Kocisk´y et al. (2017). But instead of only considering several genres of spans as potential answers, we try to match the question with all possible spans in the tweet context and choose the span with the highest BLEU-1 score as the final answer, which follows the method and implementation8 of answer span selection for open-domain QA (Wang et al., 2017). We include this baseline to show that TWEETQA is a nontrivial task which cannot be easily solved with superficial text matching. 4.2 Neural Baselines We then explore three typical neural models that perform well on existing formal-text datasets. 
One takes a generative perspective and learns to decode the answer conditioned on the question and context, while the others learn to extract a text span from the context that best answers the question.

Generative QA. RNN-based encoder-decoder models (Cho et al., 2014; Bahdanau et al., 2014) have been widely used for natural language generation tasks. Here we consider a recently proposed generative model (Song et al., 2017) that first encodes the context and question into a multi-perspective memory via four different neural matching layers, then decodes the answer using an attention-based model equipped with both copy and coverage mechanisms. The model is trained on our dataset for 15 epochs and we choose the model parameters that achieve the best BLEU-1 score on the development set.

7 http://www.nltk.org    8 https://github.com/shuohangwang/mprc

BiDAF. Unlike the aforementioned generative model, the Bi-Directional Attention Flow (BiDAF) (Seo et al., 2016) network learns to directly predict the answer span in the context. BiDAF first utilizes multi-level embedding layers to encode both the question and context, then uses bi-directional attention flow to get a query-aware context representation, which is further modeled by an RNN layer to make the span predictions. Since TWEETQA does not have labeled answer spans as in SQuAD, we need to use the human-written answers to retrieve answer-span labels for training. To get the approximate answer spans, we consider the same matching approach as in the query matching baseline, but instead of using the questions for matching, we use the human-written answers to find the spans that achieve the best BLEU-1 scores.

Fine-Tuning BERT. This is another extractive RC model that benefits from recent advances in pretrained general language encoders (Peters et al., 2018; Devlin et al., 2018). In our work, we select the BERT model (Devlin et al., 2018), which has achieved the best performance on SQuAD. In our experiments, we use the PyTorch reimplementation9 of the uncased base model. The batch size is set to 12 and we fine-tune the model for 2 epochs with learning rate 3e-5.

Evaluation on Dev/Test Data
Models            BLEU-1       METEOR       ROUGE-L
HUMAN             76.4|78.2    63.7|66.7    70.9|73.5
EXTRACT-UB        79.5|80.3    68.8|69.8    74.3|75.6
Query-Matching    30.3|29.4    12.0|12.1    17.0|17.4
Neural Baselines
  BiDAF           48.3|48.7    31.6|31.4    38.9|38.6
  Generative      53.4|53.7    32.1|31.8    39.5|39.0
  BERT            67.3|69.6    56.9|58.6    62.6|64.1

Table 5: Overall performance of baseline models. EXTRACT-UB refers to our estimation of the upper bound of extractive methods.

5 Evaluation

5.1 Overall Performance

We test the performance of all baseline systems using the three generative metrics mentioned in Section 3.2. As shown in Table 5, there is a large performance gap between human performance and all baseline methods, including BERT, which has achieved superhuman performance on SQuAD. This confirms that TWEETQA is more challenging than formal-text RC tasks. We also show the upper bound of the extractive models (denoted as EXTRACT-UB). In this upper bound method, the answers are defined as the n-grams from the tweets that maximize BLEU-1/METEOR/ROUGE-L with respect to the annotated ground truth. From the results, we can see that the BERT model still lags behind this upper bound significantly, showing great potential for future research. It is also interesting to see that the HUMAN performance is slightly worse compared to the upper bound.
This indicates that (1) the difficulty of our problem also exists for human beings, and (2) in the answer verification process, the workers tend to also extract text from the tweets as answers. Comparing the two non-pretraining baselines, our generative baseline yields better results than BiDAF. We believe this is largely due to the abstractive nature of our dataset, since the workers can sometimes write the answers in their own words.

9 https://github.com/huggingface/pytorch-pretrained-BERT

5.2 Performance Analysis over Human-Labeled Question Types

To better understand the difficulty of the TWEETQA task for current neural models, we analyze the decomposed model performance on the different kinds of questions that require different types of reasoning (we tested on the subset which has been used for the analysis in Table 4). Table 6 shows the results of the best-performing non-pretraining and pretraining approaches, i.e., the generative QA baseline and the fine-tuned BERT. Our full comparison, including the BiDAF performance and evaluation on more metrics, can be found in Appendix A. Following previous RC research, we also include an analysis on automatically-labeled question types in Appendix B.

Reasoning Types      METEOR (Generative|BERT)   ROUGE-L (Generative|BERT)
Paraphrasing         37.6|73.4                  44.1|81.8
Sentence relations   34.0|46.1                  42.2|51.1
Authorship           38.4|55.9                  46.1|61.9
Oral/Tweet habits    37.2|50.3                  40.7|51.0†
UserIDs & Hashtags   3.8⋄|13.0†                 9.9⋄|16.2†
Commonsense          20.1|63.5                  33.1|67.1
Deep semantics       7.19⋄|7.1†                 13.4⋄|10.3†
Ambiguous            4.1⋄|25.0†                 11.0⋄|67.1

Table 6: The Generative and BERT models' performance on questions that require different types of reasoning. ⋄ and † denote the three most difficult reasoning types for the Generative and the BERT models, respectively.

As indicated by the results on METEOR and ROUGE-L (and also by a third metric, BLEU-1, as shown in Appendix A), both baselines perform worse on questions that require the understanding of deep semantics and of userIDs & hashtags. The former kind of question also appears in other benchmarks and is known to be challenging for many current models. The second kind is tweet-specific and is related to specific properties of social media data. Since both models are designed for formal-text passages and include no special treatment for understanding user IDs and hashtags, their performance is severely limited on questions requiring such reasoning abilities. We believe that good segmentation, disambiguation and linking tools developed by the social media community for processing userIDs and hashtags will significantly help on these question types.

On the non-pretraining model. Besides the easy questions requiring mainly paraphrasing skill, we also find that the questions requiring the understanding of authorship and oral/tweet English habits are not very difficult. We think this is because, apart from these tweet-specific tokens, the rest of such questions is rather simple and may require only simple reasoning skills (e.g. paraphrasing).

On the pretraining model. Although BERT has been demonstrated to be a powerful tool for reading comprehension, this is the first time a detailed analysis has been done on its reasoning skills. From the results, the large improvement of BERT mainly comes from two types. The first is paraphrasing, which is not surprising because a well-pretrained language model is expected to encode sentences better. Thus the derived embedding space could work better for sentence comparison.
The second type is commonsense, which is consistent with the good performance of BERT (Devlin et al., 2018) on SWAG (Zellers et al., 2018). We believe that this provides further evidence about the connection between largescaled deep neural language model and certain kinds of commonsense. 6 Conclusion We present the first dataset for QA on social media data by leveraging news media and crowdsourcing. The proposed dataset informs us of the distinctiveness of social media from formal domains in the context of QA. Specifically, we find that QA on social media requires systems to comprehend social media specific linguistic patterns like informality, hashtags, usernames, and authorship. These distinguishing linguistic factors bring up important problems for the research of QA that currently focuses on formal text. We see our dataset as a first step towards enabling not only a deeper understanding of natural language in social media but also rich applications that can extract essential real-time knowledge from social media. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036. Michael J. Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In WMT@EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Kevin Gimpel, Nathan Schneider, Brendan T. O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Partof-speech tagging for twitter: Annotation, features, and experiments. In ACL. Luheng He, Kenton Lee, Mike Lewis, and Luke S. Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In ACL. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proc. of Conf. on Advances in NIPS. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Tom´as Kocisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2017. The narrativeqa reading comprehension challenge. CoRR, abs/1712.07040. Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In EMNLP. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. Proc. of Conf. on EMNLP. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. 
5029 Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. EMNLP. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Kishore Papineni, Salim E. Roucos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proc. of Conf. on EMNLP. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proc. of Conf. on EMNLP. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the conference on empirical methods in natural language processing, pages 1524–1534. Association for Computational Linguistics. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603. Linfeng Song, Zhiguo Wang, and Wael Hamza. 2017. A unified query-based generative model for question generation and question answering. CoRR, abs/1709.01058. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2017. R3: Reinforced reader-ranker for open-domain question answering. arXiv preprint arXiv:1709.00023. William Yang Wang, Lingpeng Kong, Kathryn Mazaitis, and William W. Cohen. 2014. Dependency parsing for weibo: An efficient probabilistic logic programming approach. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), Doha, Qatar. ACL. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. 5030 A Full results of Performance Analysis over Human-Labeled Question Types Table 7 gives our full evaluation on human annotated question types. Compared with the BiDAF model, one interesting observation is that the generative baseline gets much worse results on ambiguous questions. We conjecture that although these questions are meaningless, they still have many words that overlapped with the contexts. This can give BiDAF potential advantage over the generative baseline. B Performance Analysis over Automatically-Labeled Question Types Besides the analysis on different reasoning types, we also look into the performance over questions with different first tokens in the development set, which provide us an automatic categorization of questions. 
According to the results in Table 8, the three neural baselines all perform the best on “Who” and “Where” questions, to which the answers are often named entities. Since the tweet contexts are short, there are only a small number of named entities to choose from, which could make the answer pattern easy to learn. On the other hand, the neural models fail to perform well on the “Why” questions, and the results of neural baselines are even worse than that of the matching baseline. We find that these questions generally have longer answer phrases than other types of questions, with the average answer length being 3.74 compared to 2.13 for any other types. Also, since all the answers are written by humans instead of just spans from the context, these abstractive answers can make it even harder for current models to handle. We also observe that when people write “Why” questions, they tend to copy word spans from the tweet, potentially making the task easier for the matching baseline. 5031 BLEU-1 METEOR ROUGE-L Reasoning Types BiDAF|Generative|BERT Paraphrasing 49.1|56.8|81.7 35.4|37.6|73.4 44.5|44.1|81.8 Sentence relations 43.3|53.4|50.0 26.8|34.0|46.1 32.8|42.2|51.1 Authorship 52.5|65.4|63.0 30.5|38.4|55.9 42.3|46.1|61.9 Oral/Tweet habits 45.8|60.8|60.4 34.8|37.2|50.3 35.1|40.7|51.0† UserIDs&Hashtags 30.0⋆|41.5⋄|29.3† 8.30⋆|3.81⋄|13.0† 13.7⋆|9.88⋄|16.2† Commonsense 27.6⋆|38.1⋄|72.9 22.4⋆|20.1|63.5 31.0⋆|33.1|67.1 Deep semantics 34.8⋆|53.8|25.0† 7.85⋆|7.19⋄|7.1† 17.5⋆|13.4⋄|10.3† Ambiguous 35.1|18.1⋄|31.6† 29.2|4.11⋄|25.0† 34.3|11.0⋄|67.1 Table 7: BiDAF’s and the Generative model’s performance on questions that require different types of reasoning. ⋆, ⋄and † denote the three most difficult reasoning types for BiDAF/Generative/BERT models. First-Word Question Types Models What Who How Where When Why Which Others HUMAN 74.1 83.5 61.1 74.8 72.2 66.0 76.8 76.0 Query-Matching 32.4 29.8 28.4 27.1 22.9 51.9 22.7 21.1 Neural Baselines BiDAF 44.5 54.9 41.0 60.2 46.5 36.1 44.7 41.6 Generative 46.8 63.8 53.4 61.7 45.4 44.3 51.4 43.1 BERT 64.8 72.5 57.7 78.1 64.5 61.0 67.2 59.2 Table 8: BLEU-1 scores on different types of questions. Calculated on the development set.
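The scores reported in the tables above follow the multi-reference evaluation of Section 3.2, where each prediction is scored against both the original and the validation answer. A minimal sketch of such scoring for BLEU-1 and ROUGE-L, assuming NLTK and a simple whitespace tokenization; METEOR is omitted and would come from an external implementation, and how multiple references are combined per metric is an assumption (multi-reference BLEU, maximum over references for ROUGE-L).

```python
# Minimal sketch of multi-reference scoring as described in Section 3.2.
# BLEU-1 uses NLTK's multi-reference sentence_bleu; ROUGE-L is a simple
# LCS-based F-measure, maximized over references. Tokenization is simplified
# to lowercased whitespace splitting.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def lcs(a, b):
    # Longest common subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(hyp, ref, beta=1.2):
    l = lcs(hyp, ref)
    if l == 0:
        return 0.0
    p, r = l / len(hyp), l / len(ref)
    return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)

def score(prediction, references):
    hyp = prediction.lower().split()
    refs = [r.lower().split() for r in references]
    bleu1 = sentence_bleu(refs, hyp, weights=(1, 0, 0, 0),
                          smoothing_function=SmoothingFunction().method1)
    rl = max(rouge_l(hyp, ref) for ref in refs)
    return bleu1, rl

# Example with an answer phrase from the paper and a hypothetical second reference:
# bleu1, rl = score("helicopter pilot skills", ["helicopter pilot skills", "pilot skills"])
```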
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5032–5046 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5032 Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums Zi Chai1,2, Xinyu Xing1,2, Xiaojun Wan1,2, Bo Huang3 1Institute of Computer Science and Technology, Peking University 2The MOE Key Laboratory of Computational Linguistics, Peking University 3Zhihu Institude {chaizi, xingxinyu, wanxiaojun}@pku.edu.cn, [email protected] Abstract Teaching machines to ask questions is an important yet challenging task. Most prior work focused on generating questions with fixed answers. As contents are highly limited by given answers, these questions are often not worth discussing. In this paper, we take the first step on teaching machines to ask open-answered questions from real-world news for open discussion (openQG). To generate high-qualified questions, effective ways for question evaluation are required. We take the perspective that the more answers a question receives, the better it is for open discussion, and analyze how language use affects the number of answers. Compared with other factors, e.g. topic and post time, linguistic factors keep our evaluation from being domain-specific. We carefully perform variable control on 11.5M questions from online forums to get a dataset, OQRanD, and further perform question analysis. Based on these conclusions, several models are built for question evaluation. For openQG task, we construct OQGenD, the first dataset as far as we know, and propose a model based on conditional generative adversarial networks and our question evaluation model. Experiments show that our model can generate questions with higher quality compared with commonlyused text generation methods. 1 Introduction Teaching machines to ask questions from given corpus, i.e. question generation (QG), is an important yet challenging task in natural language processing. In recent years, QG has received increasing attention from both the industrial and academic communities due to its wide applications. Dialog systems can be proactive by asking users questions (Wang et al., 2018), question answering (QA) systems can benefit from the corpus produced by a QG model (Duan et al., 2017), education (Heilman and Smith, 2010) and clinical (Weizenbaum et al., 1966; Colby et al., 1971) systems require QG as well. We can divide all questions into two categories. Fixed-answered questions have standard answers, e.g. “who invented the car? (Karl Benz)”. In contrast, different people may have distinct answers over open-answered questions like “what do you think of the self-driving car?”. Most prior work about QG (QA) aimed to generate (answer) fixedanswered questions. As questions are targeting on answers which are certain spans of given corpus, they are always not worth discussing. Nowadays, with the help of online QA forums (e.g. Quora and Zhihu1), open-answered questions can greatly arouse open discussion that helps people under different backgrounds to share knowledge and ideas (high-qualified questions can help to attract more visitors for QA forums as well). This kind of questions are also useful for many tasks, e.g. making dialog systems more proactive. In this paper, we focus on generating openanswered questions for open discussion, i.e. the openQG task. 
To make our model useful in practice, we generate questions from real-world news which are suitable for arousing open discussion. As far as we know, no research has focused on this task before due to the two difficulties: • To generate high-qualified questions (for open discussion), we need to perform question evaluation, which is rather challenging. • Questions in most existed QG (QA) datasets, e.g. SQuAD (Rajpurkar et al., 2016), are fixed-answered thus not suitable for openQG. It is worth mentioning that a good question evaluation metric is not only a necessity to compare 1Quora and Zhihu are large-scale online English, Chinese QA forums, respectively (https://www.quora.com/, https://www.zhihu.com/). 5033 different models, but can also throw light on the text generation process, e.g. acting as the reward function through reinforcement learning. Based on the perspective that the more answers a question receives, the higher quality it has for open discussion, we analyze how language use affects the number of answers. Compared with other factors, e.g. the topic and post time, focusing on language use can keep our evaluation from being domain-specific. To this end, we carefully perform variable control on 11.5M online questions from Zhihu and build the “open-answered question ranking dataset (OQRanD)”, containing 22K question pairs (questions in each pair only differ in language use). Based on OQRanD, we reach to some interesting conclusions on how linguistic factors affects the number that a question receives, and further build question evaluation models. After building our linguistic-based question evaluation model, we propose a QG model based on conditional generative adversarial network (CGAN). During the adversarial training process, we perform reinforcement learning to introduce information from the evaluation model. This architecture was not used in QG before as far as we know, and experiments show that our model gets better performance compared with commonlyused text generation methods in the quality of generated questions. All the experiments are performed on the “open-answered question generation dataset (OQGenD)” we build, which contains 20K news-question pairs. It is the first dataset for openQG to the best of our knowledge. Above all, the main contributions of this paper are threefold: • We propose the openQG task, and build OQGenD, OQRanD from 11.5M questions for generating and evaluating questions. • We study how language use affects the number of answers a question receives, and draw some interesting conclusions for linguisticbased question evaluation. • We propose a model based on CGAN and our question evaluation model, which outperforms commonly-used text generation models in the quality of generated questions. In this paper, the two datasets OQRanD and OQGend are available at https://github. com/ChaiZ-pku/OQRanD-and-OQGenD. 2 Related Work 2.1 Question Evaluation Question evaluation is a rather challenging task. Automatic evaluation metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and METEOR (Lavie and Agarwal, 2007) were widely used to measure n-gram overlaps between generated questions and ground truth questions, however, they are far from enough since we cannot list all possible ground truth questions in openQG. To this end, we need to develop specific evaluation metrics for questions. 
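As a concrete illustration of this limitation, the sketch below scores one plausible open-ended question against another that asks essentially the same thing with different wording. Both questions are invented for illustration, and NLTK's sentence-level BLEU with smoothing is used only to show how low the n-gram overlap can be even when both questions are acceptable.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

# Two open-ended questions about the same (hypothetical) news story.  A human
# would likely accept either, but they share almost no n-grams.
reference = "what do you think of the self-driving car".split()
candidate = "would you trust an autonomous vehicle on the road".split()

score = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {score:.3f}")  # close to zero despite both questions being reasonable
```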
Some researches (Heilman and Smith, 2010; Figueroa and Neumann, 2013) directly trained question ranking (QR) models via supervised learning, and used it to perform evaluation. However, these models are always domainspecific and not interpretable since we cannot tell what makes a question get a high (low) score. Rao and Daumé III (2018) took a step further, and pointed out that a good question is one whose expected answer will be useful. By using the “expected value of perfect information”, they proposed a useful evaluation model. However, our task significantly differs from it in two aspects: first, there is no correct answer for open-answered questions thus it is hard to tell which answer is “useful”. Second, the goal of openQG is to arouse open discussions instead of “solving a problem”. Intuitively, a good question evaluation metric should be interpretable and keeps away from being domain-specific. To this end, we first analyze how language use affects the number of answers, and then build evaluation models based on these conclusions. There are some researches (Guerini et al., 2011; Danescu-Niculescu-Mizil et al., 2012; Guerini et al., 2012; Tan et al., 2014) about how language use affects the reaction that a piece of text generates, but we are the first to focus on questions as far as we know. 2.2 Question Generation QG was traditionally tackled by rule-based approaches (Heilman and Smith, 2010; Lindberg et al., 2013; Mazidi and Nielsen, 2014; Hussein et al., 2014; Labutov et al., 2015). In recent years, neural network (NN) approaches have taken the mainstream. Du et al. (2017) pioneered NN-based QG by using Seq2seq models (Sutskever et al., 2014). Many researches have tried to make it more suitable for QG tasks since then, including using answer position features (Zhou et al., 5034 2017), pointer mechanism (Kumar et al., 2018a; Zhao et al., 2018), etc. Adding more constraints, e.g. controlling the topic (Hu et al., 2018) and difficulty (Gao et al., 2018) of QG, or combining it with QA (Duan et al., 2017; Wang et al., 2017; Tang et al., 2017) have also been studied. Recently, using adversarial training and reinforcement learning (Yuan et al., 2017; Kumar et al., 2018b; Yao et al., 2018) have become a new trend. As far as we know, the CGAN model we proposed has not used before. Besides, most prior researches aimed to generate fixed-answered questions, and we are the first to propose openQG task to the best of our knowledge. It is worth mentioning that though we only focus on text-based QG, we can also generate questions from images, i.e. visual question generation (Ren et al., 2015; Fan et al., 2018) and knowledge graphs (Serban et al., 2016; Elsahar et al., 2018) as well. 3 Question Analysis and Evaluation In this section, we deal with question analysis and evaluation. We first perform variable control and build OQGenD. After that, we analyze how language use affects the number of answers a question receives. Based on these conclusions, we further build question evaluation models. 3.1 Construction of OQRanD The number of answers a question receives is affected by many factors. As pointed out by a number of prior researches, there are four dominated variables: topic, author, time and language use. In other words, we should control the first three variables to study the effect of language use. We perform our analysis based on an in-house dataset from Zhihu. 
There are 11.5M open-domain questions, and the following information is also provided for each question: the post time, the author (user ID), the author’s followers and followees, the manually-tagged topics, the number of answers, viewers and followers. Although we mainly focus on the number of answers, the counts of viewers and followers of the question are also interesting. Especially, if a question receives more answers, can we expect it to be viewed and followed by more people as well? To figure it out, we perform correlation analysis using the Pearson correlation coefficient (PCC) (Lee Rodgers and Nicewander, 1988). PCC is a measure of the linear correlation between two random variables. It is a real number between [-1, 1], where 1 means there is a total positive linear correlation, 0 means no linear correlation exixts, and -1 means there is a total negative linear correlation. PCC between the number of answers and viewers is 0.93, and that number between the number of answers and followers is 0.86. So a question with more answers can always attract more visitors and followers. As for variable control, we first focus on topic. Since each of the 11.5M questions has one of the 37 manually-tagged topics (all topics are listed in the appendix), we divide them into 37 subsets, and further extract question-pairs in each subset independently. In each pair, we want the topics of two questions as close as possible. Since questions are short texts (often about 10 words), topics are greatly reflected by nouns. We measure topicsimilarity for questions q1, q2 by: TS(q1, q2) = # nouns in both q1 and q2 # nouns in q1 + # nouns in q2 (1) where “#” means “the number of”. The larger TS(q1, q2) for q1, q2, the closer they are in topics. We set a boundary µ, and filter out question pairs whose TS(q1, q2) < µ. A number of values for µ is tried, and we finally choose µ = 0.3 since the topics of (q1, q2) are already close enough without discarding too much data. Finally, we get 24.2M topic-controlled (TC) question pairs. Based on TC pairs, we further control the effect of authors. Since users with more followers are expected to get more responses, we need to eliminate the effect of their social network. To do so, we collect all active users provided by Zhihu and build a “follower network”. In this network, each user is a node, and there is an edge from A to B if user A follows user B. We run PageRank algorithms (Page et al., 1999) on the network, and get a PageRank value for each user (real values are rounded to integers). By excluding TC pairs whose authors do not have the same PageRank value, we get 10.8M topic- and author-controlled (TAC) question pairs. Controlling the effect of time is rather complex, since few questions are posted at exactly the same time. An earlier question may benefit from “first-move advantage” (Borghol et al., 2012), but a later question might be preferred because the earlier can become “stale” (Tan et al., 2014). For a TAC pair (q1, q2), we use (n1, n2) to denote the 5035 Figure 1: The effect of time lag (∆t) on D. number of their answers, and (t1, t2) to show their posted times. The idea is: we first study how time factors affect the number of answers, i.e. how ∆t = |t1 −t2| affects ∆n = |n1 −n2|. After that, we can find if certain ∆t has small effects. By picking TAC pairs with such ∆t, the effect of time can be greatly reduced. To study how ∆t affects ∆n, we should leave ∆t as the only variable, i.e. control the effect of language use in TAC pairs. 
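A minimal sketch of this topic-control step is shown below. The `extract_nouns` argument stands in for whatever POS tagger is used to pull nouns out of a question (the appendix mentions HanLP for this preprocessing), and the function names are illustrative rather than taken from the authors' code.

```python
def topic_similarity(nouns_q1, nouns_q2):
    """Eq. (1): nouns shared by q1 and q2 over the total nouns in both."""
    shared = len(nouns_q1 & nouns_q2)
    total = len(nouns_q1) + len(nouns_q2)
    return shared / total if total else 0.0


def topic_controlled_pairs(questions, extract_nouns, mu=0.3):
    """Keep question pairs from one topic subset whose TS(q1, q2) >= mu.

    `extract_nouns` is a placeholder for a POS-tagging step that returns the
    set of nouns in a question string.
    """
    noun_sets = [set(extract_nouns(q)) for q in questions]
    pairs = []
    for i in range(len(questions)):
        for j in range(i + 1, len(questions)):
            if topic_similarity(noun_sets[i], noun_sets[j]) >= mu:
                pairs.append((questions[i], questions[j]))
    return pairs
```

With topics and authors controlled in this way, the remaining confound is posting time, whose analysis starts by measuring how close two questions are in language use.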
To do so, we measure the distance between q1 and q2 by normalized edit distance: d(q1, q2) = edit(q1, q2) max(len(q1), len(q2) (2) where edit(q1, q2) is the edit distance, and len(·) is the length of a question. The smaller d(q1, q2) between q1 and q2, the more similar they are in language use. We further rank all TAC pairs by d values from small to large, and pick up the first 2% pairs to get 217K topic-, author- and languagecontrolled (TALC) question pairs. Now that ∆t is the only difference, the smaller effect it has, the smaller ∆n is expected. The number of TALC pairs decreases exponentially with the growth of ∆t. As pointed out by Tan et al. (2014), directly computing E(∆n|∆t) is not reliable since the estimate will be dominated by TALC pairs with small ∆n. Instead, we should use the deviation estimate: D = X 0≤n1≤9 | ˆE(n2|n1) −n1| (3) Figure 2: D under different n1 (the smaller, the better). where ˆE(n2|n1) is the average n2 over question pairs whose q1 has n1 answers, and TALC pairs whose n1 > 9 are not considered since the number is too few, making the results less reliable. In Figure 1(a), we show how D varies with ∆t (a smaller effect of ∆t makes D closer to 0). As we can see, D is rather small when ∆t is close to 0, which is in accordance with common sense. As ∆t grows, D increases sharply, which is largely caused by the “first move advantage” described in (Borghol et al., 2012). Although D decreases when ∆t is about 100 hours (we think the main reason is: earlier questions starts to become “stale”), it is not so small as before. When ∆t is about 200 hours (the later questions also starts to become “stale”), D increases again and maintains at a high level. Figure 1(b) shows the case when ∆t is close to 0. As mentioned above, if we control ∆t to make D rather small, the effect of time will be greatly reduced. However, we may filter out too many data if making ∆t too close to 0. Intuitively, 90 seems like a good upper-bound, and we use ∆tD<90 to denote the time interval composed by all ∆t that make D < 90. To further test this upper-bound, we pick out TALC pairs whose ∆t ∈∆tD<90, and compute the deviation |E(n2|n1) −n1| under different n1 to get Figure 2 (in contrast, we also show the case when ∆t is not controlled). As we can see, by choosing pairs whose ∆t ∈ ∆tD<90, we can greatly reduce deviations. Since |E(n2|n1)−n1| < 5 under each n1, we can further eliminate the remaining time-effect by enlarging ∆n. Based on thse conclusions, we perform timecontrol on all TAC pairs by choosing pairs whose ∆t ∈∆tD<90 and ∆n > 20 (20 is much larger than 5). To study the effect of language use, we want q1, q2 not so close. So we further discard the remaining pairs whose d(q1, q2) < 0.6, and get 22K question pairs to build OQRanD. 5036 notation t-test efficacy ↑↑↑↑, ↓↓↓↓ p ≤0.0001 ↑↑↑, ↓↓↓ p ≤0.001 ↑↑, ↓↓ p ≤0.01 ↑, ↓ p ≤0.05 Table 1: The number of arrows and t-test efficacy. length ↓↓↓↓ puctuation ↓↓↓↓ noun ↓↓↓↓ 1st ppron ↓↓↓↓ verb ↑↑↑↑ 2nd ppron ↑↑↑↑ adjective ↓↓↓ 3rd ppron ↓↓↓↓ adverb ↑↑↑↑ please-word ↓↓↓↓ preposition ↓↓↓ positive-word ↑↑↑↑ pronoun negative-word quantifier ↑↑↑ sentiment-word ↑ numeral common-word ↑ Table 2: Significance tests on text features. The “ppon” denotes for “personal pronoun”. 3.2 The Effects of Language Use To show how language use affects the number of answers that a question receives, we perform significant tests on different linguistic features. The one-sided paired t-test with Bonferroni correction (for multiple comparisons) is adopted. 
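The significance-testing machinery used below can be sketched as follows. The example assumes each linguistic feature has already been computed for both questions of every OQRanD pair, and it approximates the one-sided test by halving the two-sided p-value in the direction of the observed statistic (a common shortcut, not necessarily the authors' exact implementation).

```python
from scipy import stats

ALPHA_LEVELS = (1e-4, 1e-3, 1e-2, 5e-2)  # four arrows down to one arrow


def feature_significance(feature_values, n_comparisons):
    """One-sided paired t-tests with a Bonferroni correction.

    `feature_values` maps a feature name to two equal-length lists: the feature
    value in the more-answered and in the less-answered question of each pair.
    `n_comparisons` is the number of features tested, used for the correction.
    """
    results = {}
    for name, (more_answered, less_answered) in feature_values.items():
        stat, p_two_sided = stats.ttest_rel(more_answered, less_answered)
        p_corrected = min(1.0, (p_two_sided / 2.0) * n_comparisons)  # Bonferroni
        arrows = sum(p_corrected <= alpha for alpha in ALPHA_LEVELS)
        direction = "up" if stat > 0 else "down"   # larger feature values help vs. hurt
        results[name] = {"t": stat, "p": p_corrected,
                         "direction": direction, "arrows": arrows}
    return results
```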
For significant levels, we set α = .05, .01, .001, .0001, which correspond with the number of arrows (Table 1). The direction of arrows show how the feature affects the number of answers: up arrows (↑) indicate that a large feature-value (e.g. a longer length, a higher perplexity) can lead to more answers, and down arrows (↓) means small feature values are preferred. Here are some interesting conclusions 2: Ask concise questions. The basic sanity check we perform is the length of questions. Table 2 indicates that questions with less words tend to get more answers. This is in accordance with Simmons et al. (2011) which shows that short version of memes are more likely to become popular. In contrast, Tan et al. (2014) found that longer versions of tweets are more likely to be popular. This indicates that attracting more answers is different from making a blog retweeting by more people. Ask one thing a time and make it vivid. What kinds of words can help to get more answers? We test the proportion of different parts 2More details, (e.g. how we trained the language models) are listed in the appendix). data for training language models ppl (word n-grams) ppl (POS n-grams) random sampled questions 3-gram ↓↓↓↓ ↓↓↓↓ 2-gram ↓↓↓↓ ↓↓↓↓ 1-gram ↓↓↓↓ ↓↓↓↓ most answered questions 3-gram ↓↓↓↓ ↓↓↓↓ 2-gram ↓↓↓↓ ↓↓↓↓ 1-gram ↓↓↓↓ ↓↓↓↓ news headlines 3-gram ↓↓↓↓ ↓↓↓↓ 2-gram ↓↓↓↓ ↑↑↑↑ 1-gram ↓↓↓↓ ↑↑↑↑ Table 3: Significance tests on LM-based features. ppl stands for perplexity. of speech (POS) that occurs (proportions are better than word counts since they can eliminate the effect of length). As Table 2 suggests, using less nouns, adjectives and prepositions is helpful. As nouns are often topic words (occurred with adjectives and prepositions), it is better to contain less topics and ask one thing a time. On the other hand, it is better to use more verbs and adverbs to make the question vivid. Besides, using less punctuation helps (this often leads to more concise questions). Interact with readers naturally. We check the proportions of personal pronouns (ppron), and find it helps to be interactive by using more second ppron, e.g. 你认为(what do you think of). We also check the proportion of please-words, e.g. 请 教(could you please answering...). As Table 2 indicates, we should not use too many honorifics. Just interact with others naturally as if we are talking to our close friends. Positive words help. Can we get more answers by picking words with sentiments? We check the occurrence of positive and negative words based on a word emotional polarity dictionary, NTUSD 3. As shown in Table 2, more sentiment words can help, especially positive words. Use familiar expressions. Distinctive expressions may attract attention, but using “common language” can make a question better understood. Intuitively, if more commonly-used words occurs, a question is easier to read. To this end, we collect 4K words with the highest frequency from OQRanD and measure their occurrence. Table 2 shows that it is better to use common words and make the question familiar. 3https://github.com/data-science-lab/ sentimentCN/tree/master/dict. 5037 Model Accuracy traditional traditional+ours LR 78.61% 82.33% RF 81.70% 87.74% SVM 79.02% 87.96% RNN 74.68% CNN 83.18% Table 4: Results for QR task. For LR (logistic regression), RF (random forest) and SVM (support vector machine), “traditional” means n-gram word and POS features. “+ours” means adding the 33 features that pass the significant test in Table 2 and Table 3. 
For LSTM (long-short term memory network) and CNN (convolution neural network), “traditional” means word and POS embeddings. In addition, we randomly sample 134K questions that are not appeared in OQRanD to build six language models (LMs) based on 1, 2, 3 gram word and POS features, respectively. Table 3 indicates that questions with smaller perplexity (i.e. more familiar) are always better. Imitate good questions. Since a number of questions have already aroused a large range of open discussion, can we get more answers by imitating them? We pick 80K questions that are not appeared in OQRanD with the highest answer number as “good questions” and train six LMs (similar to above). Table 3 shows that the less perplexity a question gets, the more answers it arouses. In conclusion, imitating good questions helps. We also explore if news headlines are worth imitating. On one hand, they are carefully-written concise texts. On the other hand, as pointed out by Wei and Wan (2017), a lot of Chinese news headlines are intentionally written to be attentiongetting. From Table 3, it turns out that imitating their word use is useful. 3.3 Question Evaluation Model Based on OQRanD and our conclusions about how language use affects the answer that a question receives, we can train models to predict which question can receive more answers in each pair. Since questions in the same pair only differ in language use, models based on OQRanD can concentrate on linguistic facts to avoid being domain-specific. Given pair (q1, q2), we label it as “1” if n1 > n2, otherwise we use label “0”. In this way, our task turns into a binary classification task. We further train a model Fs which inputs a question and outputs a score. The larger Fs(·), the more answer is expected. By comparing Fs(q1), Fs(q2), we can make the final prediction. Although we can also use both q1, q2 as inputs and train a model that directly outputs label 0 or 1, using Fs on q1, q2 respectively is more flexible when we need to rank more than two question. Besides, Fs can be directly used for getting rewards during the reinforcement QG process. We use several models as Fs, and perform training based on the hinge loss. Table 4 shows the accuracy of different models (hyper-parameters and training details are provided in the appendix). When features in Section 3.2 are not used, the CNN model gets the best performance, which is not surprised. However, adding these features greatly improves the performance of all statistical models, making SVM and RF significantly surpass CNN. This illustrates the importance of linguistic factors. 4 Question Generation In this section, we perform openQG. We construct OQGenD, the first dataset for openQG as far as we know, and propose a model based on CGAN. Especially, we use the question evaluation model based on OQRanD to introduce prior knowledge. Finally, we perform experiments and use multiple evaluation metrics (including our linguistic-based model) and reach to the conclusions. 4.1 Construction of OQGenD Since real-world news are suitable for arousing open discussion, we built OQGenD from news and open-answered questions. We crawled news (published in the last three years) from Tencent News4, and performed data cleaning (removing non-textual components and filtering out redundant data) to get 59K news at last. To make questions in OQGenD suitable for open discussion, we ranked the 11.5M questions mentioned in Section 3.1 by their number of answers from large to small and picked the first half (576K). 
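Returning briefly to the evaluation model of Section 3.3, the pairwise hinge-loss training it describes can be sketched as follows. The paper's strongest scorers are the SVM and random-forest rankers in Table 4; the small neural scorer below is only meant to illustrate the margin-based pairwise objective, and all names are illustrative.

```python
import torch
from torch import nn


class QuestionScorer(nn.Module):
    """A stand-in for F_s: maps a question feature vector to a scalar score."""

    def __init__(self, feature_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feature_dim, hidden_dim),
                                 nn.ReLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, features):
        return self.net(features).squeeze(-1)


def hinge_step(scorer, optimizer, feats_more, feats_less, margin=1.0):
    """One hinge-loss update on a batch of OQRanD pairs.

    `feats_more` / `feats_less` hold the feature vectors of the question with
    more answers and with fewer answers in each pair (shape: batch x feature_dim).
    """
    target = torch.ones(feats_more.size(0))          # "the first input ranks higher"
    loss = nn.MarginRankingLoss(margin=margin)(scorer(feats_more),
                                               scorer(feats_less), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Any scorer that produces a single scalar per question can be trained this way, which is what makes F_s reusable as a reward signal during the later adversarial training.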
To match news and questions, we first used automatic ways to find a “candidate dataset” and then performed human labeling to build our final OQGenD dataset. To get the candidate dataset, three heuristic unsupervised methods were used to compute the distance between a piece of news 4https://news.qq.com/. It is one of the largest social media company in China 5038 Figure 3: Architecture of our model. and a question: (1) term frequency-inverse document frequency (tf-idf), which first extracted 5 (10) key words from each question (news) by tfidf values, and then measured distances by the number of intersected key words; (2) cosine distance, which is based on the bag-of-words model; (3) weighted averaged word embeddings, which was proposed by Arora et al. (2016). It first computed a weighted average of the word vectors in the sentence and then performed a “common component removal”. For each piece of news, we picked out questions with the smallest two distances under each method. We further hired five native speakers to label the candidate dataset. An NQ-pair was preserved only if it was appropriate for a human to raise the question given the piece of news. In other words, the question should be related to the given news while not mentioning extra information. In case that too many NQ-pairs were discarded, we allowed human labelers to perform two kinds of modifications on each question to preserve more data. First, we allowed them to modify the question in an NQpair by at most two entities, e.g. change it from “马 克龙是怎样一个人?(What is Macron like?)” to “特朗普是怎样的一个人(What is Trump like?)”. Second, we allowed them to use a meaningful substring to replace the original question. We ensured that each NQ-pair was labeled by three people, and it was preserved in OQGenD only if all of them agreed. In this way, we got 20K NQ-pairs. Among these pairs, there were 9K news, each corresponding with more than one questions. The average word numbers in each piece of news, question were 508, 12, respectively. 4.2 Model As shown in Figure 3, our model is composed by a generator Gθ and a discriminator Dφ. Gθ outputs a question ˆY = {ˆy1, ˆy2, ..., ˆyn} from given news X = {x1, x2, ..., xm}. It is a Seq2seq network with the attention mechanism (Luong et al., 2015). Both encoder and decoder are GRU (Chung et al., 2014) networks. Dφ takes an NQ-pair (X, YD) as input, and predicts how likely it comes from real-world dataset. First, it embeds the X, YD into vnews, vques respectively by two CNNs similar to Zhang and Wallace (2015). Based on the two representations, it computes vmatch = Wm [vnews; vques] + bm vfluent = Wf vques + bf (4) where [vnews; vques] is the concatenation of the two vectors vnews, vques, and Wm, Wf, bm, bf are parameters of our model. We expect vmatch to measure if the question matches the news, and vfluent to measure if the question is fluent enough (like human-written questions). The final prediction Dφ(X, YD) is computed by Dφ(X, YD) = σ(Wproj [vmatch; vfluent]+bproj) (5) where σ is the sigmoid function and Wproj, bproj are parameters. As we can see, both Gθ(X) and Dφ(X, YD) are conditioned on X, thus our model can be viewed as a special type of CGAN (Mirza and Osindero, 2014), which provides more control to make generated questions closely related to input news. 5039 Algorithm 1 Training process. Input: NQ-pairs (X, Y ) from OQGenD; Generator Gθ; Discriminator Dφ; Evaluator Q; Output: Well-trained generator. 
1: Initialize Gθ, Dφ (Q is frozen); 2: Pre-train Gθ on (X, Y ) by MLE; 3: repeat 4: for d-steps do 5: Sample ˆY ∼Gθ( ˆY |X); 6: Use X, Y, ˆY to generate fake NQ-pairs (Xf, Yf); 7: Train Dφ on real NQ-pairs (X, Y ) and fake NQ-pairs (Xf, Yf) by Eq. 6; 8: end for 9: for g-steps do 10: Sample ˆY ∼Gθ( ˆY |X); 11: Compute rewards for ˆY by Eq. 10; 12: Update Gθ on (X, ˆY ) by Eq. 9; 13: end for 14: until G, D converge 4.3 Adversarial Training The training process of GAN is formalized as a game in which the generative model is trained to generate outputs to fool the discriminator (Goodfellow et al., 2014). For our model, the training process is described in algorithm 1. Before adversarial training, we pre-train Gθ by maximizing the log probability of a question Y given X (X, Y come from OQGenD), i.e. Maximum Likelihood Estimate (MLE), as described in Sutskever et al., 2014. This is helpful for making the adversarial training process more stable. Besides, the parameters of our question evaluation model Q is frozen during the whole process. We iteratively perform d-steps and g-steps to train Dφ, Gθ respectively during the adversarial traing process. In d-steps, we fix the parameters of Gθ, and the inputs for Dφ are three-folds: (1) NQpairs (X, Y ) from OQGenD. (2) News and questions generated by Gθ, i.e. (X, ˆY ). (3) Unmatched NQ-pairs created from OQGenD. We label “real data” (1) as “1”; and regard both (2), (3) as “fake data” with label “0”. It is worth mentioning that the unmatched NQ-pairs are used to keep Dφ from only focusing on the questions. To train Dφ, we minimize the objective function: JD(φφφ) = −E(X,Y )∼Preal data log Dφφφ(X, Y ) −E(X,Y )∼Pfake data log(1 −Dφφφ(X, Y )) (6) Since text-generation is a discrete process, we cannot directly use Dφ(X, ˆY ) to update θ in Gθ. A commonly-used idea (Yu et al., 2017; Li et al., 2017) is to train Gθ based on policy gradient (Sutton et al., 2000). In this case, Gθ is regarded as a policy network. At time-step t, state st is the generated text ˆY[1:t], and action at is generating the next word ˆyt+1 with a probability πG(at|st) = pG(ˆyt+1| ˆY[1:t], X). To get reward rt, we perform Monte-Carlo search, i.e. sample ˆY[1:t] into a complete sentence ˆYMC for k times, and perform: rt = 1 k k X i=1 Dφ( ˆY (i) MC, X) (7) After getting rt, θ is updated by minimizing JG(θθθ) = −E[ X t rt · log π(at|st)] (8) We can also change Eq 8 into a penalty-based version: J′ G(θθθ) = E[ X t (1 −rt) · π(at|st)] = JG(θθθ) + E[ X t π(at|st)] (9) where E[P t π(at|st)] can be viewed as a regularization term. It forces the generator to prefer a smaller π(at|st). In this way, it can generate more diversified results. Since we have already trained a question evaluation model Fs(·) in Section 3.3, we can use: rt = 1 k k X i=1 (γDφ( ˆY (i) MC, X) + (1 −γ)Fs( ˆY (i) MC)) (10) to replace Eq. 7. In Eq. 10, we add prior knowledge about “how language use affects the number of answers” into the adversarial training process through reinforcement learning, and expect the linguistic affects that we have discovered can throw light on the text generation process. 4.4 Experiments We choose several typical text-generation models as baselines. We apply a Seq2seq model similar to Du et al. (2017), and use a CopyNet similar to Kumar et al. (2018b). As adversarial training has become a new trend in QG, we also adopt the SeqGAN proposed by Yu et al. (2017) and SentiGAN by Wang et al. (2018). 
For our model, the “vanilla” 5040 Models BLEU ROUGEL METEOR Fs (SVM) 1 2 3 4 Seq2seq 36.35∗ ⋄ 20.25∗ ⋄ 14.90∗ ⋄ 13.22∗ ⋄ 36.72∗ ⋄ 21.57∗ ⋄ -2.28⋄ CopyNet 37.89∗ ⋄ 21.09∗ ⋄ 15.77∗ ⋄ 14.07∗ ⋄ 38.05∗ ⋄ 22.63∗ ⋄ -1.80∗ ⋄ SeqGAN 38.51⋄ 22.29⋄ 16.97∗ ⋄ 14.92∗ ⋄ 38.40⋄ 23.13∗ ⋄ -1.67∗ ⋄ SentiGAN 37.25∗ ⋄ 21.52∗ ⋄ 17.24∗ 15.60 36.85∗ ⋄ 23.57 -2.42∗ ⋄ Ours (vanilla) 39.67 23.62 18.01⋄ 16.00⋄ 39.87⋄ 24.52⋄ -1.89⋄ Ours (full) 39.35 23.25 18.62 16.44 39.10 24.96 -1.54 Table 5: Results for openQG. ∗(⋄) denotes that our vanilla (full) model differs from the baseline significantly based on one-side paired t-test with p < 0.05. version uses Eq. 7 to compute rewards, and the “full” version uses Eq. 10 (the SVM model which gets the best performance in Table 4 are adopted as Fs). More details about hyper-parameters and training process are provided in the appendix. We adopt the commonly-used BLEU, ROUGEL and METEOR for question evaluation. Besides, our score function Fs based on OQRanD is also used. Similarly, we choose the the SVM model which gets the best performance in Table 4. We compute Fs( ˆY ) for each generated question ˆY , and report the average value in “Fs-SVM” column of Table 5. As mentioned above, Fs shows if the generated questions are expected to receive more answers thus are more suitable for open discussion. The higher Fs a model gets, the better performance it has. The results of our experiments are listed in Table 5. When it comes to BLEU, ROUGE-L and METEOR, our models get the best performance. This shows the advantage of making both of the generator and discriminator conditioned on input news. Besides, the full version of our model gets the best BLEU-3, BLEU-4 and METEOR values by introducing the linguistic-based question evaluation model during adversarial training. Of all the baselines, SentiGAN gets the best performances on BLEU-3 and BLEU-4, which is largely contributed by its penalty based objective function. Since the same piece of news always corresponds with multiple questions (and these questions may differ a lot) in OQGenD, models based on adversarial training (SeqGAN, SentiGAN and ours) always get better results than others (Seq2seq and CopyNet). When it comes to Fs, the full version of our model gets the best performance, which illustrates that information from the SVM model is useful to generate questions with better quality. Besides, we can also use the conclusions in Section 3.2 to compare different models, e.g. questions generated by our full version model are the most concise (9.68 words per question). On the other hand, SentiGAN generates the longest questions (11.54 words per question). 5 Conclusion and Future Work In this paper, we take the first step on teaching machines to ask open-answered questions from news for open discussion. To generate high-qualified questions, we analysis how language use affects the number of answers that a question receives based on OQRanD, a dataset created by variable control. These conclusions help us to build question evaluation models, and can also used to compare results of different question generation models. For question generation, we propose a model based on CGAN using reinforcement learning to introduce information from our evaluation model. Experiments show that our model outperforms commonly-used text generation methods. There are many future works to be done. First, we will explore more powerful QG structure to deal with the huge difference between the length of input and output texts. 
Besides, how to better leverage prior knowledge during openQG (like human often do) is also interesting. Finally, combining openQG with its reverse task, openQA, is also worth exploration. Acknowledgments This work was supported by National Natural Science Foundation of China (61772036) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. 5041 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence embeddings. Youmna Borghol, Sebastien Ardon, Niklas Carlsson, Derek Eager, and Anirban Mahanti. 2012. The untold story of the clones: content-agnostic factors that impact youtube video popularity. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1186– 1194. ACM. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Kenneth Mark Colby, Sylvia Weber, and Franklin Dennis Hilf. 1971. Artificial paranoia. Artificial Intelligence, 2(1):1–25. Cristian Danescu-Niculescu-Mizil, Justin Cheng, Jon Kleinberg, and Lillian Lee. 2012. You had me at hello: How phrasing affects memorability. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1, pages 892–901. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866–874. Hady Elsahar, Christophe Gravier, and Frederique Laforest. 2018. Zero-shot question generation from knowledge graphs for unseen predicates and entity types. arXiv preprint arXiv:1802.06842. Zhihao Fan, Zhongyu Wei, Siyuan Wang, Yang Liu, and Xuanjing Huang. 2018. A reinforcement learning framework for natural question generation using bi-discriminators. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1763–1774. Alejandro Figueroa and Günter Neumann. 2013. Learning to rank effective paraphrases from query logs for community question answering. In TwentySeventh AAAI Conference on Artificial Intelligence. Yifan Gao, Jianan Wang, Lidong Bing, Irwin King, and Michael R Lyu. 2018. Difficulty controllable question generation for reading comprehension. arXiv preprint arXiv:1807.03586. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Marco Guerini, Alberto Pepe, and Bruno Lepri. 2012. Do linguistic style and readability of scientific abstracts affect their virality? In Sixth International AAAI Conference on Weblogs and Social Media. Marco Guerini, Carlo Strapparava, and Gozde Ozbal. 2011. Exploring text virality in social networks. In Fifth International AAAI Conference on Weblogs and Social Media. Michael Heilman and Noah A Smith. 2010. Good question! statistical ranking for question generation. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617. Association for Computational Linguistics. Wenpeng Hu, Bing Liu, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2018. Aspect-based question generation. Hafedh Hussein, Mohammed Elmogy, and Shawkat Guirguis. 2014. Automatic english question generation system based on template driven scheme. International Journal of Computer Science Issues (IJCSI), 11(6):45. Vishwajeet Kumar, Kireeti Boorla, Yogesh Meena, Ganesh Ramakrishnan, and Yuan-Fang Li. 2018a. Automating reading comprehension by generating question and answer pairs. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 335–348. Springer. Vishwajeet Kumar, Ganesh Ramakrishnan, and YuanFang Li. 2018b. A framework for automatic question generation from text using deep reinforcement learning. arXiv preprint arXiv:1808.04961. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 889–898. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Association for Computational Linguistics. Joseph Lee Rodgers and W Alan Nicewander. 1988. Thirteen ways to look at the correlation coefficient. The American Statistician, 42(1):59–66. Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547. 5042 Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 105–114. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Karen Mazidi and Rodney D Nielsen. 2014. Linguistic considerations in automatic question generation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 321–326. Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Sudha Rao and Hal Daumé III. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. arXiv preprint arXiv:1805.04655. Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. 
Exploring models and data for image question answering. In Advances in neural information processing systems, pages 2953–2961. Iulian Vlad Serban, Alberto García-Durán, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. arXiv preprint arXiv:1603.06807. Matthew P Simmons, Lada A Adamic, and Eytan Adar. 2011. Memes online: Extracted, subtracted, injected, and recollected. In Fifth international AAAI conference on weblogs and social media. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063. Chenhao Tan, Lillian Lee, and Bo Pang. 2014. The effect of wording on message propagation: Topicand author-controlled natural experiments on twitter. arXiv preprint arXiv:1405.1438. Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027. Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In IJCAI, pages 4446–4452. Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. arXiv preprint arXiv:1706.01450. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in opendomain conversational systems with typed decoders. Wei Wei and Xiaojun Wan. 2017. Learning to identify ambiguous and misleading news headlines. arXiv preprint arXiv:1705.06031. Joseph Weizenbaum et al. 1966. Eliza—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45. Kaichun Yao, Libo Zhang, Tiejian Luo, Lili Tao, and Yanjun Wu. 2018. Teaching machines to ask questions. In IJCAI, pages 4546–4552. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence. Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. arXiv preprint arXiv:1705.02012. Ye Zhang and Byron Wallace. 2015. A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. arXiv preprint arXiv:1510.03820. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910. 5043 Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer. 5044 A Details of Language Model In this section, we introduce the details of our language models described in section 3.2. We used the HanLP toolkit 5 perform word segmentation. The toolkit was also used to get the POS of each word. 
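A small sketch of this preprocessing step is shown below. Since HanLP is the toolkit actually used in the paper, jieba is used here purely as a readily available stand-in that exposes a similar word/POS interface; the noun-extraction helper mirrors the kind of feature extraction needed for the topic-similarity measure in Section 3.1.

```python
import jieba.posseg as pseg


def segment_with_pos(sentence):
    """Segment a Chinese question and attach a POS flag to every token."""
    return [(word, flag) for word, flag in pseg.cut(sentence)]


def extract_nouns(sentence):
    """Return the tokens tagged as nouns (jieba's noun flags start with 'n')."""
    return {word for word, flag in segment_with_pos(sentence) if flag.startswith("n")}


if __name__ == "__main__":
    print(segment_with_pos("什么是自动驾驶汽车"))   # prints (token, POS-flag) pairs
```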
To train language models, we adopted the SRILM toolkit 6. During this process, we used modified kneser-ney smoothing for all the language models based on word n-grams and witten-bell smoothing for language models based on POS n-grams. B Details of Question Evaluation Models In this section, we introduce the details of our question evaluation models described in section 3.3. We adopted the Ranklib toolkit 7 to train the random forest model. For the SVM model, we used the SVM-rank toolkit 8. More specifically, we set the trade-off between training error and margin of SVM to 3 and chose the linear kernel function. For CNN and RNN models, the word embedding size is 128, and the size of POS embedding is 32. The RNN model is a single-layer bidirectional LSTM network with 128 hidden units. As for the CNN model, the convolution layer contains filters whose sizes are 160 × 1, 160 × 2, 160 × 3, 160 × 4. The counts for each kind of filters are 64, 64, 64, 64, and the stride for each of them is 1. After the convolution layer, there is a max-pooling layer and a fully connected layer with the sigmoid activation to get the final result. C Details of Question Generation models In this section, we introduce the details of our question generation model described in section 4.2. Our model is composed by a generator and a discriminator. The generator is a typical seq2seq model. It has three components: an encoder network, a decoder network and an attention network. The encoder is a single-layer bidirectional GRU with 64 hidden units while the decoder is a singlelayer unidirectional GRU with 128 hidden units. The CNN of discriminator for news contains filters whose sizes are 128 × 1, 128 × 2, 128 × 3, 5http://hanlp.linrunsoft.com 6http://www.speech.sri.com/projects/srilm/ 7http://www.lemurproject.org/ranklib.php 8http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html 128 × 4, 128 × 5. The counts for each kind of filters are 32, 64, 64, 32, 16, and the stride for each of them is is set to 1. The CNN of discriminator for questions contains filters whose sizes are 128 × 1, 128 × 2, 128 × 3, 128 × 4. The counts for each kind of filters are 32, 64, 64, 32, and the stride for each of them is set to 1. D Examples of Our Datasets As mentioned above, we controlled the effect of topic, time and author to get OQRanD. During this process, we divided all the questions into 37 subsets according to manually-tagged topics. These topics are listed in Table 6. The examples of OQRanD are shown in Table 7. The examples of OQGenD are shown in Table 8 (in case that the original news are too long, we omit the sentences that is not related to the qestions). 5045 Topics 宗教(Religion) 自然科学(Science) 职场(Workplace) 政治(Politics) 运动健身(Physical Exercise) 娱乐(Entertainment) 游戏(Game) 影视(Film and Television) 音乐(Music) 艺术(Art) 心理学(Psychology) 体育(Sports) 时尚(Fashion) 社会科学(Social Sciences) 设计(Design) 商业(Business) 人文(Humanity) 情感(Emotion) 汽车(Car) 美食(Food) 旅行(Travel) 科技(Science and Technology) 军事(Military) 经济(Economics) 金融(Finance) 教育(Education) 健康(Health) 家居(Home Furnishing) 工程学(Engineering) 法律(Law) 宠物(Pets) 财务(Finance) 动漫(Comic) 母婴(Mother and Child) 其他(Other) 两性(Bisexual) ACG Table 6: Topics of our questions. Questions #Ans 1 有什么有趣且有知识的书推荐? (What interesting and knowledgeable books can you recommend?) 10 2015 年你读过最好的书有哪些?为什么? (What are the best books you have read in 2015? Why?) 45 2 你的家乡有什么初次尝试不太容易接受的美食吗? ( Is there any food that is hard to accept for the first time in your hometown?) 1 有哪些在自己家乡很正常但在外地人眼里是黑暗料理的美食? 
(Which foods are normal in your hometown but are dark cuisine in the eyes of foreigners?) 89 3 请推荐值得一看的电影(列表)? (Please recommend some movies that are worthy of watching (make a list)?) 4 你会推荐哪些值得一看的电影? What movies do you think are worthy of watching?) 24 4 如何判断自己得了抑郁症? (How to judge that if I am suffering from depression?) 5 抑郁症有哪些症状表现? (What are the symptoms of depression?) 38 5 能帮我推荐一支送女生的口红吗? (Can you recommend me a lipstick as a gift for a girl?) 3 有什么适合女生的平价口红? (Is there any cheap lipstick for girls?) 1062 Table 7: Examples of OQRanD. “#Ans” denotes for “the number of answers”. 5046 news 最后一次世界杯,C罗和梅西谁会赢。C罗和梅西谁更强?这个问题自两 人出道就争论至今。2018年俄罗斯世界杯,...... (Who will win the last World Cup between Ronaldo and Messi? Who is stronger, Ronaldo or Messi? This issue has been debated since the beginning of their career. The 2018 World Cup in Russia ...) gold questions 最后一次世界杯,C罗和梅西谁会赢? (Who will win the last World Cup between Ronaldo and Messi?) 最后一次世界杯,C罗会战胜梅西吗? (Will Ronaldo defeat Messi in the last World Cup?) 最后一次世界杯,C罗会输给梅西吗? (Will Ronaldo lose to Messi in the last World Cup?) 最后一次世界杯,梅西会输给C罗吗? (Will Messi lose to Ronaldo in the last World Cup?) 最后一次世界杯,梅西会战胜C罗吗? (Will Messi defeat Ronaldo in the last World Cup?) news 欧盟支持科威特出面"斡旋"卡塔尔断交风波。中新社布鲁塞尔6月19日 电(记者沈晨) 欧盟外交与安全政策高级代表莫盖里尼19日在欧盟外长例行 会议上表态,支持科威特出面“斡旋”卡塔尔断交风波,...... (EU supports Kuwait to “mediate” Qatar’s tumult of break-up of diplomatic relations. China News Service report in Brussels(reporter shen chen). Federica Mogherin, the European Union’s foreign-policy chief, spoke at the routine meeting of EU foreign ministers on the 19th to support Kuwait to “mediate” Qatar’s tumult of break-up of diplomatic relations ...) gold questions 如何看待埃及、沙特、巴林几乎同时宣布与卡塔尔断交? (How do you think that Egypt, Saudi Arabia and Bahrain almost simultaneously announced the break-up of diplomatic relations with Qatar?) 国家之间断交意着什么? (What does it mean when countries break off?) Table 8: Examples of OQGenD.
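For completeness, the discriminator's question-side CNN encoder specified in Appendix C (filter widths 1-4 over 128-dimensional embeddings, 32/64/64/32 filters each, stride 1, followed by max-pooling over time) might be sketched as follows; this is an illustrative reconstruction from the stated hyper-parameters, not the authors' implementation, and the news-side encoder differs only in its filter widths and counts.

```python
import torch
from torch import nn


class QuestionCNN(nn.Module):
    """Question-side CNN encoder sketched from the Appendix C hyper-parameters."""

    def __init__(self, vocab_size, emb_dim=128,
                 widths=(1, 2, 3, 4), counts=(32, 64, 64, 32)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=w, stride=1)
            for w, n_filters in zip(widths, counts))

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); seq_len must be >= the widest filter.
        x = self.embed(token_ids).transpose(1, 2)             # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)                        # (batch, 32+64+64+32)
```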
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5047–5058 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations

Sumeet Kumar, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA, [email protected]
Kathleen M. Carley, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA, [email protected]

Abstract

Learning from social-media conversations has gained significant attention recently because of its applications in areas like rumor detection. In this research, we propose a new way to represent social-media conversations as binarized constituency trees that allows comparing features in source posts and their replies effectively. Moreover, we propose to use convolution units in Tree LSTMs that are better at learning patterns in features obtained from the source and reply posts. Our Tree LSTM models employ multi-task (stance + rumor) learning and propagate the useful stance signal up the tree for rumor classification at the root node. The proposed models achieve state-of-the-art performance, outperforming the current best model by 12% and 15% on F1-macro for the rumor-veracity classification and stance classification tasks respectively.

1 Introduction

Online misinformation, commonly called 'fake news', has become a serious problem in society (Ferrara, 2015), to the extent that it is impacting election decisions (Allcott and Gentzkow, 2017). Many machine-learning approaches have been proposed to identify and contain the fake news shared on online social-media platforms (Jin et al., 2016; Rubin et al., 2016; Rubin and Lukoianova, 2015; Schifferes et al., 2014; Tacchini et al., 2017; Volkova et al., 2017; Vosoughi et al., 2018). One approach that combines machine learning and human intelligence by exploiting the stance in reply posts has gained significant attention recently (Zubiaga et al., 2016a, 2015). In this approach, we first identify the stance – categorized as 'supporting', 'denying', 'commenting' and 'querying' – in the replies to the original post and then use the stance signal to infer rumor veracity, i.e., whether a rumor is true or false.

Figure 1: Twitter threads with stance and rumor-veracity labels. The conversation tree shown above has two branches: a) T1–R1–R11 and b) T1–R2. R1 and R2 are 1st-level reply tweets and R11 is a 2nd-level reply tweet. Stance labels for each reply are relative to the tweet it replies to, i.e., the stance for R11 is with respect to R1. There is a rumor-veracity label on the root tweet (T1 in the example above). The goal of this research is to learn the root tweet's veracity based on patterns in the replies.

Prior work has confirmed that replies to a 'false' (misleading) rumor contain specific patterns, e.g., more replies deny the claim made in the source post (Zubiaga et al., 2016b). This approach is promising, as people are reasonably good at pointing out misinformation (Babcock et al., 2019), and if such posts could be automatically found, the post could go through enhanced scrutiny before it gets circulated widely.
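Before describing the models, it is useful to fix a concrete representation of such labeled conversation trees. The sketch below encodes the thread of Figure 1, with stance labels attached to replies (relative to their parent tweet) and the veracity label attached to the root; all field names are illustrative rather than taken from any released code.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TweetNode:
    """One node of a labeled conversation tree (field names are illustrative)."""
    tweet_id: str
    text: str
    stance: Optional[str] = None    # 'favor', 'deny', 'query' or 'comment'; None for the source
    children: List["TweetNode"] = field(default_factory=list)


@dataclass
class RumorThread:
    root: TweetNode
    veracity: str                   # 'true', 'false' or 'unverified'


# The thread of Figure 1: stance labels are relative to the parent tweet and
# the rumor-veracity label sits on the root.
thread = RumorThread(
    root=TweetNode("T1", "<source claim>", children=[
        TweetNode("R1", "<reply to T1>", stance="deny", children=[
            TweetNode("R11", "<reply to R1>", stance="favor"),
        ]),
        TweetNode("R2", "<another reply to T1>", stance="deny"),
    ]),
    veracity="false",
)
```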
In this research, we extend this line of work on rumor-veracity and stance learning by proposing a new way to represent conversation trees and new LSTM cells that can be used to detect rumors more effectively. In the past, researchers have explored various models to learn from tree-structured data (Wang et al., 2007; Gildea, 2004). For rumor-veracity classification, prior research has found that the approach that performs best on social-media conversations is a sequence model (like the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), as discussed in Zubiaga et al. (2018)). Sequential classifiers like LSTMs are good at learning temporal structure and are biased to use prior inputs to predict outputs (Eck and Schmidhuber, 2002). However, when it comes to comparison tasks like stance classification in threaded discussions, each reply is made either against the source post or against a response to it (see Fig. 1). So we ask: is the regular sequential model apt to learn the relationship between a source post and its replies in conversations? Would a model that can learn the contrast between a source and the reply tweets be more appropriate for rumor classification?

Figure 2: Normal tree structure (left) and the modified binarized constituency tree (BCTree) structure (right) for the conversation shown in Fig. 1. On the left, a tree structure representing the original thread, in which a node can have any number of children. On the right, a binary tree structure where the source post and reply posts are all leaf nodes, such that each reply is placed next to the tweet it was made against and connected to a virtual parent node; e.g., R11 was made against R1, so both are connected to VR1R11.

To this end, we propose a new tree structure that is obtained from social-media conversation trees but allows for easy comparison of the source and its replies. Additionally, we use a convolution unit to learn patterns in local features for stance classification, and the tree model propagates the signal up the tree for rumor classification at the root of the tree. To evaluate our models, we use a human-labeled Twitter dataset that contains stance labels and rumor labels for around two thousand rumor threads related to five different events. Our proposed models achieve state-of-the-art performance, outperforming the current best model by 12% and 15% on F1-macro for the rumor classification and stance classification tasks respectively.

2 Models for Tree Structured Social Media Conversations

Tai et al. (2015) proposed tree-structured LSTM networks and showed their utility on two tasks: semantic relatedness and sentiment classification. In their work, the Tree LSTM is composed over sentence sub-phrases using a given syntactic structure. The benefits of using a recursive tree approach were discussed by Li et al. (2015), who concluded that tree models are more suitable for root-level identification. Social-media conversations are naturally structured as trees. Can Tree LSTMs be used for classifying node labels in such conversation trees? In this work, we try to answer this question by modeling conversations as trees where each node in the tree is a sentence representation (Fig. 2). Node labels in tree-structured conversations can be learned using: a) branches of the tree as input to an LSTM (Branch LSTM Model), as used in much prior research, e.g.
(Zubiaga et al., 2016a, 2018) b) using the entire tree as the input (Tree LSTM Model) c) modifying the structure of the tree to better capture the inherent correlations in conversations for a given task (Binarized Constituency Tree LSTM Model). We discuss these formulations next. 5049 2.1 Branch LSTM Model In branch LSTM, the encodings of source-tweet text and the replies text along a tree branch are used as the input and the stance-labels are used as the output (as illustrated in Fig. 3). Using a simple text encoder (like mean of a word vectors), at each step, the LSTM gets a sentence embedding and predicts a label. The process is repeated for all nodes in the thread. For example, if we take the thread (T1-R1-R11) (see an example thread in Fig. 1), the LSTM takes the R11 as the input in the first time step, R1 as the input in the second time step and T1 as the input in the third time step. Embedding Recurrent Fully Connected Softmax FC + Rumor Softmax Stance Softmax Stance Softmax Stance Softmax R1 R11 T1 False Unver-ified True Favor Deny Query Comm. Favor Deny Query Comm. Favor Deny Query Comm. Figure 3: Branch LSTM: Recurrent Neural Network (RNN) architecture for sequence labeling. T1 , R1 and R11 are embeddings. At each time step, the LSTM uses a sentence embedding vector as input to output a stance label. At the root node T1, the RNN outputs a rumor-veracity label. Modelling tree conversations as branches of the tree has two limitations: a) repetition of input as many branches share nodes (e.g. root node is present in all branches) b) no communication between branches during the learning process. The LSTM uses branches independently. Thus, there is no communication between branches during training and inference. We expect that not all branches are useful to predict the veracity of a rumor post and a few branches might have stronger signal. The branch LSTM weighs all branches equally and therefore, is likely to under perform when there are many uninformative branches in a tree. This problem is solved in Tree LSTM. T1 Encoding R1 Encoding R11 Encoding R2 Encoding FC + Rumor Softmax Favor False Unverified Stance Softmax FC + Stance Softmax FC + Stance Softmax FC + Deny Query Comment True Favor Deny Query Comment Favor Deny Query Comment Figure 4: Tree LSTM model: Latent vectors at all nodes (except the root node) are used to predict stance label and the latent vector at the root node is used to predict the rumor-veracity label of the conversation. 2.2 Tree LSTM Model A typical social-media conversations consists of a post (source post), its reply and reply to the replies. This is a tree structure with the source post as the root node and the replies as the child nodes. Models for such tree structures was explored in (Tai et al., 2015) where authors suggested a modification of the LSTM cell to accommodate an unknown number of inputs at a node. For a general tree with any number of child nodes, they suggested ‘Child Sum Unit’ that sums the hidden vectors of child nodes (as in eqn. 8). We generalize this formulation to accommodate other operations as shown in Fig. 4. ˜h = O k∈C(j)hk (1) where C(j) denotes the set of children of node j and Ok is an operator that acts on the hidden vector hk of child k to output ˜h. 
Using this, we define the LSTM transition equations as follows: ij = σ  W (i)xj + U i ˜hj + b(i) (2) fjk = σ  W (f)xj + U (f)hk + b(f) (3) oj = σ  W (o)xj + U o ˜hj + b(o) (4) uj = tanh  W (u)xj + U (u)˜hj + b(u) (5) 5050 cj = ij ⊙uj + X k∈C(j) fjk ⊙ck (6) hj = oj ⊙tanh(cj) (7) Except wherever specified, the notations used are of standard Tree LSTM as described in Tai et al. 2015. 2.2.1 Child Sum Tree Unit The child-sum unit involves using sum of all hk vectors which means O = P. Therefore ˜h = X k∈C(j) hk (8) 2.2.2 Child Max-Pooling Unit The child max-pooling unit involves using the maximum of all hk vectors across a dimension. Therfore ˜h = max P k∈C(j)hk (9) 2.2.3 Child Convolve + MaxPooling Tree Unit Child convolve uses convolution operation of the set of child hidden vectors i.e. O = ⊛where ⊛denotes vector convolution operation. As a normal tree node can have any number of child nodes, convolution operation using all child nodes requires a max-pooling operation to preserve the dimension of ˜h. ˜h = max P ⊛k∈C(j)hk (10) where ⊛denotes vector convolution operation and maxP denotes max pooling operation. A 2d convolution over h matrix results in another matrix and the max pooling operator maps the matrix to vector containing the maximum value of each column in the matrix. A neural-network model (like an LSTM) expects a pre-defined size of input. Using an operation that reduces the children hidden layer matrix ˜h to fixed dimension vector like in equation 8 or in equation 10 attempts to solve the problem. However, these reduction operators have limitations e.g. ‘sum’ weighs all children equally and ’convolve+maxpool’ only picks the convoluted features with maximum value. Ideally this importance factor should be learned from data itself, which is what we intend to achieve using Binarized Constituency Tree (BCTree) LSTM Model. 2.3 Binarized Constituency Tree (BCTree) LSTM Model Social media conversations are in the format of a tree where a node can have many children. Converting this tree structure to another tree structure in which each node always contain two children creates a consistent format which is convenient for matrix operations needed to train neural networks. Additionally, for tasks like stance learning, where its important to compare a reply against its source post, a source reply-pair should be placed such that the contrast features can be effectively learned. To achieve this, we modify the original structure to a binary tree which we call Binarized Constituency Tree (BCTree). T1 R1 R11 R2 T1 HT1R2 HR1R11 HT1R2T1R1R11 R1 HT1R1 FC + Rumor Softmax FC + Stance Softmax FC + Stance Softmax Stance Softmax FC + False Unverified True Favor Deny Query Comment Favor Deny Query Comm Favor Deny Query Comm Figure 5: BCTree LSTM model: Latent vectors at virtual parent node of each leaf node is used to predict stance labels (e.g. HR1R11 to predict stance of R11) and the latent vector at the root node is used to predict the rumor-veracity label of the conversation. In BCTree, all source posts and their replies appear as leaf nodes (Fig. 5). A reply is always paired with its source (this requires source node to be duplicated) and they are connected to a new (virtual) parent node. To construct a BCTree from a tree, we replace all parent node with a new virtual node. The original parent node and a child node are then connected to the new virtual parent node. If a parent node has more than one child, additional virtual nodes are created to keep the tree binary. 
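To make the construction concrete, the sketch below shows one way to binarize a conversation tree into a BCTree. It is illustrative code rather than the released implementation: the class and function names are invented, and the order in which sibling subtrees are folded under virtual parents is an assumption, since the text above does not fix it.

```python
class Post:
    """A node of the original conversation tree (a post and its replies)."""
    def __init__(self, text, replies=None):
        self.text = text
        self.replies = replies or []


class BCNode:
    """A node of the binarized constituency tree: leaves hold post text,
    internal nodes are virtual parents with exactly two children."""
    def __init__(self, text=None, left=None, right=None):
        self.text = text
        self.left, self.right = left, right


def _combine(subtrees):
    """Fold a list of subtrees left-to-right under new virtual parents."""
    node = subtrees[0]
    for other in subtrees[1:]:
        node = BCNode(left=node, right=other)
    return node


def edge_subtree(source, reply):
    """BCTree subtree for the edge source -> reply plus reply's descendants.
    The source post is duplicated as a leaf so that each (source, reply)
    pair sits under its own virtual parent, as in Fig. 5."""
    pair = BCNode(left=BCNode(text=source.text), right=BCNode(text=reply.text))
    return _combine([pair] + [edge_subtree(reply, r) for r in reply.replies])


def to_bctree(root_post):
    if not root_post.replies:                 # a thread with no replies
        return BCNode(text=root_post.text)
    return _combine([edge_subtree(root_post, r) for r in root_post.replies])


# The thread of Fig. 1: T1 answered by R1 (itself answered by R11) and by R2.
t1 = Post("T1", [Post("R1", [Post("R11")]), Post("R2")])
bctree = to_bctree(t1)   # all posts become leaves; T1 and R1 are duplicated
```

Running this on the thread of Fig. 1 yields leaf pairs (T1, R1), (R1, R11) and (T1, R2), each under its own virtual parent, so the source and reply posts appear only as leaves, matching Fig. 5.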
Because each node in a BCTree always has only two children, and therefore is consistent, many operators are trivially supported. E.g. we can use hidden vector concatenation. Similarly, for convolution, a convolution unit with kernel size 2 and 5051 stride size 1 (comparing a source post and a reply) preserves the dimension of hk (as BCTree node always have 2 children). Thus additional operation like ‘Sum’ or ‘MaxPooling’ is not needed. 2.3.1 Child Sum BCTree Unit This uses the same operation as in the normal tree structure (see equation 8). 2.3.2 Child Concat BCTree Unit ˜h = ⊕k∈C(j)hk (11) where ⊕denotes vector concatenation operation. 2.3.3 Child Convolve BCTree Unit ˜h = ⊛k∈C(j)hk (12) where ⊛denotes vector convolution operation. 2.3.4 Combinations of BCTree Units Because a BCTree has a uniform structure, any combination of the previous discussed units can also be combined together. Some possible combinations we try are ’Convolve + Concat’, ’Convolve + Sum ’ and ’Convolve + Concat + Sum ’. 3 Experiments and Results 3.1 Datasets We use Pheme 5 events dataset. This dataset was created as a part of the Pheme project 1 which aims to find and verify rumors shared on socialmedia platforms (Zubiaga et al., 2015, 2016b). The dataset consist of Twitter conversation threads on five different events and contains three types of annotations. Each thread is labeled as either rumor or non-rumor. Rumors are annotated for their veracity as ‘true’, ‘false’ or ‘unverified’ (see Tab. 1). For a subset of the true rumors, we also have stance labels for each reply in the threaded conversations. The stance labels are ‘support’, ‘deny’, ‘comment’ and ‘query’ (see Tab. 2). As we can observe in Tab. 2, this dataset is highly skewed towards ‘comment’. 3.2 Feature Representation We use four different models that have shown good results on various NLP tasks to extract text features. 1https://www.pheme.eu/ Events True False Unverified Charlie Hebdo (CH) 193 116 149 Sydney siege (SS) 382 86 54 Ferguson (FG) 10 8 266 Ottawa shooting (OS) 329 72 69 Germanwingscrash (GC) 94 111 33 Total 1008 393 571 Table 1: Conversation threads in the Pheme dataset Events Support Deny Query Comment CH 239 58 53 721 SS 220 89 98 700 FG 176 91 99 718 OS 161 76 63 477 GC 69 11 28 173 Total 865 325 341 2789 Table 2: Stance labels for Tweets in the conversations. Event codes are described in Tab. 1 3.2.1 Mean of Glove word vectors To get word vectors, we used Glove (Pennington et al., 2014) and the mean of these word vectors are used as the sentence embedding. Before extracting the Glove word vectors, we perform some basic text cleaning which involves removing any @mentions, any URLs and the Twitter artifact (like ‘RT’) which gets added before a retweet. Some tweets, after cleaning did not contain any text (e.g. a tweet that only contains a URL or an @mention). For such tweets, we generate an embedding vector containing uniformly generated numbers between -0.5 and 0.5. The same text cleaning was performed before generating features for all embeddings described in the rest of the paper. 3.2.2 BERT embeddings BERT 2 is not a ready to use model to generate embeddings in its original form. It is rather a model that can be tuned for a task (Devlin et al., 2018). We first tried to tune the model on our rumor classification task. But since the rumor classification dataset is relatively small, while evalu2https://github.com/huggingface/pytorch-pretrainedBERT 5052 ating we found that tuning did not lead to a good performance. 
We then considered other datasets that can be used for tuning. Because natural language entailment task (which predicts entailment, contradiction, or neutral between two sentences) is similar to stance learning, we use the BERT model and tune it on Multi-Genre Natural Language Inference task (Williams et al., 2018). The tuned model is then used to generate BERT embedding which is the vector representation on the last layer of the Bert model. This tuned BERT model generates a 768 dimension vector for each sentence. 3.2.3 Skipthought (SKP) embeddings We use the pre-trained model shared by the authors of Skipthought (Kiros et al., 2015) 3. The model uses a neural-network that takes sentences as input and generate a 4800 dimension embedding for each sentence. Thus, on our dataset, for each post in Twitter conversations, we get a 4800 dimension vector. 3.2.4 DeepMoji (EMT) embeddings We use the DeepMoji (Felbo et al., 2017) pretrained model 4 to generate deepmoji vectors. Like skipthought, DeepMoji is a neural network model that takes sentences as input and outputs a 64 dimension feature vectors. 3.2.5 Skipthought and DeepMoji joint (SKPEMT) embeddings Because DeepMoji and Skipthoughts are different types of encodings, we also tried a concatenated version of them which we call SKPEMT. This encoding is of size 4864 dimension. 3.3 Models Training Following the convention in prior work (Zubiaga et al., 2018), we use event wise cross-validation, which means out of five events, four events are used to train a model and one event is used to validate the performance. We define the overall objective function using cross-entropy loss, as can be seen in equation 13, where i ∈n samples, j are classes, y is the (onehot) true label, and p is the probability output for each label. In multi-task training, the total loss is the sum of loss for stance learning task and rumor learning task. As shown in Fig. 3, Fig. 4 and Fig. 3https://github.com/ryankiros/skip-thoughts 4https://github.com/huggingface/torchMoji 5, we use the output of the softmax layer for classifying stance and rumor labels of nodes in trees. L(y, p) = −1 n X i,j yij log(pij) (13) All operations in our models are fully differentiable, so these models can be trained end-to-end. Because the dataset has unbalanced labels, we can use over sampling of minority classes to create balanced input to train models. For rumor, balancing is easy as each tree has one rumor label, so we over-sample minority labeled trees to balance the training set. For stance labels, balancing is not trivial. The stance classes can be balanced by creating duplicate nodes of minority classes and connecting the new nodes to the original parent nodes. However, this results in changing the structure of trees. Thus we only used balancing on original conversation trees for stance classification and not for rumor classification on BCTrees. Our LSTM models are built using PyTorch 5 and DGL library 6. The Branch LSTM models used feature vectors as input, adds an LSTM layer, a linear dense activation layer followed by a dropout (0.3) (Srivastava et al., 2014) and uses a softmax layer for the output (rumor or stance). The models are trained using stochastic gradient descent (SGD) optimization using a cross-entropy loss function. The size of LSTM hidden layer and learning rate were used as hyper-parameter. The learning rate we tried were in range .0001 to 0.01. The LSTM layer size we tried varied from 16 to 256. 
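For concreteness, the per-node computation of the ‘Child Convolve + MaxPooling’ cell of Section 2.2.3 (Eqs. 2-7 and 10) can be written as a short PyTorch module. The snippet below is an illustrative sketch rather than the exact implementation behind the reported numbers: the parameterization of the ⊛ operation as an nn.Conv1d over the child dimension, the handling of leaf nodes, and all names are assumptions.

```python
import torch
import torch.nn as nn


class ChildConvMaxPoolTreeLSTMCell(nn.Module):
    """Tree LSTM cell with the 'Child Convolve + MaxPooling' unit (illustrative)."""

    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.W = nn.Linear(x_dim, 4 * h_dim)              # W^(i), W^(f), W^(o), W^(u)
        self.U = nn.Linear(h_dim, 3 * h_dim, bias=False)  # U^(i), U^(o), U^(u) on h~
        self.U_f = nn.Linear(h_dim, h_dim, bias=False)    # U^(f) on each child h_k
        # Convolution over the child dimension: kernel size 2, stride 1,
        # so each output position compares two adjacent child hidden vectors.
        self.conv = nn.Conv1d(h_dim, h_dim, kernel_size=2, stride=1)

    def child_unit(self, child_h):                # child_h: (num_children, h_dim)
        if child_h.size(0) < 2:                   # nothing to convolve over
            return child_h[0]
        x = child_h.t().unsqueeze(0)              # (1, h_dim, num_children)
        conv = self.conv(x)                       # (1, h_dim, num_children - 1)
        return conv.max(dim=2).values.squeeze(0)  # max-pool over children -> (h_dim,)

    def forward(self, x_j, child_h, child_c):
        # child_h, child_c: (num_children, h_dim); for leaf nodes, pass zero
        # tensors of shape (1, h_dim) so that h~ and the cell-state sum are zero.
        h_tilde = self.child_unit(child_h)
        w_i, w_f, w_o, w_u = self.W(x_j).chunk(4, dim=-1)
        u_i, u_o, u_u = self.U(h_tilde).chunk(3, dim=-1)
        i = torch.sigmoid(w_i + u_i)                              # Eq. (2)
        f = torch.sigmoid(w_f.unsqueeze(0) + self.U_f(child_h))   # Eq. (3), per child
        o = torch.sigmoid(w_o + u_o)                              # Eq. (4)
        u = torch.tanh(w_u + u_u)                                 # Eq. (5)
        c = i * u + (f * child_c).sum(dim=0)                      # Eq. (6)
        h = o * torch.tanh(c)                                     # Eq. (7)
        return h, c
```

Swapping child_unit for a sum or an element-wise maximum over child_h recovers the ‘Child Sum’ and ‘Child Max-Pooling’ units of Sections 2.2.1 and 2.2.2.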
We found 64 to be the best hidden dimension vector size and 0.08 to be a good learning rate for training the branch LSTMs. Once we find the best value for these hyper parameters by initial experiments, they remain unchanged during training and evaluations of the model for all five events. The training of tree models also followed the same pattern except they use an entire tree conversation. The convolution units use convolution kernels of size 2 (i.e. it used two hidden vectors at time) and stride of 1. We tried learning rate from 0.001 to 0.1, and .008 was found to work the best. We again used stochastic gradient descent (SGD) optimization with a cross-entropy loss function. For multi-task training, we used step wise training that alternates between rumor objective and stance objective. We train the models for 30 epochs. 5https://pytorch.org/ 6https://www.dgl.ai 5053 Model↓Event → CH SS FG OS GC Mean F1 Majority 0.189 0.190 0.197 0.192 0.175 0.188 Branch LSTM Models GLOVE 0.332 0.322 0.298 0.305 0.385 0.329 BERT 0.384 0.393 0.332 0.380 0.425 0.383 SKP 0.424 0.417 0.373 0.454 0.455 0.425 EMT 0.370 0.332 0.365 0.399 0.442 0.381 SKPEMT 0.428 0.424 0.397 0.463 0.468 0.436 Tree LSTM Models - ‘Child Sum’ Cell Type BERT 0.512 0.580 0.528 0.481 0.522 0.524 SKP 0.490 0.565 0.540 0.495 0.568 0.532 EMT 0.443 0.514 0.444 0.453 0.509 0.473 SKPEMT 0.509 0.577 0.524 0.504 0.529 0.529 Tree LSTM Models - ‘Child Convolve + MaxPooling’ Cell Type BERT 0.510 0.564 0.522 0.476 0.530 0.520 SKP 0.514 0.579 0.553 0.469 0.547 0.532 EMT 0.486 0.478 0.530 0.439 0.496 0.486 SKPEMT 0.480 0.574 0.497 0.477 0.598 0.525 Prior Research (Zubiaga et al., 2018) 0.465 0.446 0.373 0.475 0.543 0.460 (Zubiaga et al., 2016a) 0.427 0.495 0.390 0.457 0.523 0.458 (Lukasik et al., 2016) 0.326 0.323 0.260 0.323 NA NA Table 3: Stance learning results: F1-score (macro) and mean of F1-macro (Mean-F1) for different events. To evaluate the trained models, we use F1-score which is defined as the harmonic mean of precision and recall. Rather than using accuracy, we use F1-score as the metric for evaluating the performance of the models for two reasons: a) Pheme dataset (the dataset we use) is skewed towards one class (‘comment’), hence, a classifier that predicts the majority class can get a good accuracy. F1-score (macro) balances the classes and considers precision as well as recall. 2) Prior work on this dataset used F1-score (Zubiaga et al., 2018). Thus, the use of this measure allows to compare with prior research. The performance for a validation event is the F1-macro obtained by evaluating the model trained on all data except the validation event data. This step is performed for all five events, and the mean of F1-macro scores from all five events is used to compare the models. For the stance classification task, the F1-score (macro) is defined in Eqn. 14. For the rumor classification task, the F1-score (macro) is defined in Eqn. 15. F1stance = F1deny + F1favor + F1query + F1com. 4 (14) F1rumor = F1true + F1false + F1unverified 3 (15) 3.4 Stance Classification Results We present the results of evaluating the models for stance classification in Tab. 3. The Tree LSTM model that uses ‘Child Convolve + Maxpooling’ with skipthought features outperforms all other models (0.532 mean f1). The Tree LSTM model using ‘Child sum’ unit performs equally well on mean value but was worse on three events. 
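A short sketch makes the protocol of Eqns. (14)-(15) and the event-wise cross-validation explicit. It is an illustration only, not the evaluation script used to produce the tables; the function names and the folds dictionary are invented.

```python
def f1(y_true, y_pred, label):
    """Per-class F1: harmonic mean of precision and recall for one label."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0


def macro_f1(y_true, y_pred, labels):
    """Eq. (14)/(15): unweighted mean of the per-class F1 scores."""
    return sum(f1(y_true, y_pred, l) for l in labels) / len(labels)


def event_wise_cv(folds, labels=("deny", "favor", "query", "comment")):
    """`folds` maps each event (CH, SS, FG, OS, GC) to the (gold, predicted)
    labels produced by a model trained on the other four events."""
    per_event = {e: macro_f1(gold, pred, labels)
                 for e, (gold, pred) in folds.items()}
    mean_f1 = sum(per_event.values()) / len(per_event)   # the Mean-F1 column
    return per_event, mean_f1
```

The same code applies to the rumor task with labels ('true', 'false', 'unverified'), as in Eq. (15).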
True \ Predicted    Q      S      D      C
Q                  0.50   0.15   0.34   0.01
S                  0.13   0.62   0.16   0.09
D                  0.27   0.31   0.40   0.02
C                  0.01   0.11   0.03   0.84

Figure 6: Normalized stance confusion matrix. Q, S, D and C labels indicate ‘Query’, ‘Support’, ‘Deny’ and ‘Comments’ respectively.

In Fig. 6, we show the confusion matrix for the best performing stance classifier. As we can observe, the model is best at classifying ‘Comment’ and is worst at classifying ‘Denial’. The poor performance on the denial class can be partially attributed to the class imbalance in the dataset (‘Deny’ being the smallest class). If we compare the stance classification results based on feature types, we see that BERT and SKP are often comparable and EMT is slightly worse than both. SKPEMT performs better than EMT and BERT, but is not as good as SKP. Because of space limitations, we do not present results for Glove features for tree-based models: in almost all cases, the mean of Glove vectors as the sentence representation performed worse than the other features. For stance learning, the BCTree based models did not work as well as the Tree LSTM based models. This is likely because we are not able to balance the stance classes in BCTrees. BCTree stance nodes can be balanced before binarizing, but that adds many additional new nodes; these new virtual nodes do not have stance labels, which results in poor performance.

3.5 Rumor Classification Results

We present the rumor classification results in Table 4.

Cell Type ↓ Feature →              SKP     EMT     BERT    SKPEMT
Branch LSTM - Multitask            0.358   0.359   0.332   0.347
Tree LSTM - Multitask
  Sum                              0.364   0.348   0.341   0.364
  MaxPool                          0.369   0.352   0.339   0.375
  Convolve + MaxPool               0.379   0.365   0.359   0.370
BCTree LSTM - Multitask
  Sum                              0.371   0.356   0.338   0.371
  Convolve                         0.367   0.335   0.337   0.362
  Convolve + Sum                   0.353   0.353   0.329   0.364
  Convolve + Concat                0.370   0.354   0.340   0.364
  MaxPool                          0.353   0.354   0.326   0.352
  Convolve + MaxPool               0.363   0.349   0.333   0.357
  Concat + Sum                     0.364   0.341   0.324   0.364
  Convolve + Sum + Concat          0.366   0.343   0.342   0.354
Baselines and Prior Research
  (Kochkina et al., 2018)          0.329
  NileTMRG (Enayet and El-Beltagy, 2017)   0.339
  Majority                         0.223

Table 4: Rumor classification results: Mean F1-score from different cell-type and feature-type combinations. For NileTMRG, we used the results presented in (Kochkina et al., 2018), Tbl. 3.

For rumor classification, the best performing model uses the ‘Convolve + MaxPool’ unit in a Tree LSTM (Mean F1 of 0.379 using SKP features) and is trained in a multi-task fashion. Other comparable models are the ‘Sum’ and ‘Convolve + Concat’ units with the BCTree LSTM. For SKPEMT features, the best performance was obtained using the ‘MaxPool’ cell with a Tree LSTM model. We expected the BCTree LSTM to work better than the Tree LSTM; they are almost comparable, but the BCTree LSTM is slightly worse. This is likely because binarizing a tree creates many new nodes (without labels), and as the height of the tree increases it becomes more difficult for the LSTM to propagate useful information up to the root node for rumor-veracity classification. If we compare the different types of features, SKP features outperformed the others in almost all cases. It should be noted that SKP features are also higher in dimension (4800) than EMT (64) and BERT (768). If we compare multi-task against single-task training, performance improved in almost all cases when training in a multi-task fashion.

True \ Predicted    F      U      T
F                  0.34   0.20   0.46
U                  0.12   0.54   0.35
T                  0.20   0.17   0.62

Figure 7: Normalized rumor confusion matrix.
F, U and T labels indicate ‘False’, ‘Unverified’ and ‘True’ respectively. Overall, for rumor classification, the best model is the LSTM model that uses ’Convolve + MaxPool’ unit and trained on Tree LSTM using multitask. This exceeds the best prior work by 12% in f1-score. For this model, we show the confusion matrix in Fig. 7. As we can observe, ‘True’ (T) and ‘Unknown’ (U) performs equally well and the ‘False’ (F) rumor is the most confusing class. The poor performance of ‘False’ rumors could be linked to the poor performance of ‘Denials’ stance in stance classification. Prior research have shown that a high number of denials is a good indicator of ‘False’ rumors, and therefore a model that is poor at predicting denials also performs poorly at predicting ‘False’ rumors. 5055 4 Related Work Stance learning and rumor detection lie at the intersection of many different fields. We highlight important related topics here. 4.1 Stance Learning Computational approaches of Stance learning – which involves finding people’s attitude about a topic of interest – have primarily appeared in two flavors. 1) Recognizing stance in debates (Somasundaran and Wiebe, 2010; Ozer et al., 2016) 2) Conversations on online social-media platforms. Since our research focuses on conversations on social-media platforms, we discuss some important contributions here. Mohammad et al. built a stance dataset using Tweets and organized a SemEval competition in 2016 (Task 6). Many researchers (Augenstein et al., 2016; Liu et al., 2016; Wei et al., 2016) used the dataset and proposed algorithms to learn stance from this text data. In almost the same time frame, work on stance in conversations appeared in the context of fake-news and misinformation identification, we discuss this in the next section. 4.2 Rumor and Misinformation Identification Finding misinformation on social-media platforms has been an active area of research in recent years (Hassan et al., 2015; Lukasik et al., 2015; Dang et al., 2016; Volkova et al., 2017; Zubiaga et al., 2018; Zhou et al., 2019; Sharma et al., 2019). Rumor detection that uses stance in the reply posts was in initiated by the Pheme project 7 and was popularized as a SemEval 2017 task 8 8. The task involved predicting stance (‘supporting’, ‘denying’, ‘commenting’ and ‘querying’) in replies to rumor posts on Twitter and the dataset is described in (Zubiaga et al., 2015, 2016b). A number of researchers used this dataset and proposed many algorithms. For example, (Derczynski et al., 2017) proposed an LSTM that uses branches in conversation trees to classify stance in reply posts, and (Kochkina et al., 2018) used sequential classifiers for joint stance and rumor classification. More recently (Ma et al., 2018) suggested two tree structured neural-networks to find rumors i.e. if a post is rumor or not. In this work, we focus on rumorveracity and stance learning objectives. Our work 7https://www.pheme.eu/ 8http://www.aclweb.org/anthology/S17-2006 extends this thread of research by showing that convolution operations that compare source and reply tweets are more effective in learning stance and rumor-veracity. 4.3 LSTM and Convolutional Neural Networks Deep neural networks (DNN) have shown great success in many fields (Hinton et al., 2012). Researchers have used DNNs for various NLP tasks like POS tagging, named entity recognition (Collobert and Weston, 2008). 
Convolution neural networks (LeCun et al., 2010) are popular in computer vision tasks for quite some time but lately they have shown potential in NLP tasks as well (Zhang et al., 2015). Yoon Kim (Kim, 2014) used convolution neural networks (CNN) for various NLP tasks. To the best of our knowledge, this is the first work that uses a convolution unit in LSTMs. 5 Conclusion In this work, we explored a few variants of LSTM cells for rumor-veracity and stance learning tasks in social-media conversations. We also proposed a new Binarized Constituency Tree structure to model social-media conversations. Using a human labeled dataset with rumor-veracity labels for source posts and stance labels for replies, we evaluated the proposed models and compared their strengths and weaknesses. We find that using convolution unit in LSTMs is useful for both stance and rumor classification. We also experimented with different types of features and find that skipthoughts and BERT are competitive features while skipthoughts have slight advantage for rumor-veracity prediction task. Acknowledgments We are thankful to anonymous reviewers for their valuable feedback. This work was supported in part by the ONR Award No. N00014182106, ONR Award No. N0001418SB001 and the Center for Computational Analysis of Social and Organization Systems (CASOS). The views and conclusions contained in this document are those of the authors only. Funding to attend this conference was partly provided by the CMU GSA/Provost Conference funding. 5056 References Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2) 211-36. Isabelle Augenstein, Andreas Vlachos, and Kalina Bontcheva. 2016. Usfd at semeval-2016 task 6: Any-target stance detection on twitter with autoencoders. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 389–393. Matthew Babcock, Ramon Alfonso Villa Cox, and Sumeet Kumar. 2019. Diffusion of pro- and antifalse information tweets: the black panther movie case. Computational and Mathematical Organization Theory, 25(1):72–84. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. Anh Dang, Michael Smit, Abidalrahman Moh’d, Rosane Minghim, and Evangelos Milios. 2016. Toward understanding how users respond to rumours in social media. In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 777–784. IEEE. Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69–76, Vancouver, Canada. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Douglas Eck and Juergen Schmidhuber. 2002. Finding temporal structure in music: Blues improvisation with lstm recurrent networks. In Neural Networks for Signal Processing, 2002. Proceedings of the 2002 12th IEEE Workshop on, pages 747–756. IEEE. Omar Enayet and Samhaa R El-Beltagy. 2017. 
Niletmrg at semeval-2017 task 8: Determining rumour and veracity support for rumours on twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 470– 474. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Emilio Ferrara. 2015. Manipulation and abuse on social media. ACM SIGWEB Newsletter, (Spring):4. Daniel Gildea. 2004. Dependencies vs. constituents for tree-based alignment. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Naeemul Hassan, Chengkai Li, and Mark Tremayne. 2015. Detecting check-worthy factual claims in presidential debates. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1835–1838. ACM. Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiwei Jin, Juan Cao, Yongdong Zhang, and Jiebo Luo. 2016. News verification by exploiting conflicting social viewpoints in microblogs. In AAAI, pages 2972–2978. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. 2018. All-in-one: Multi-task learning for rumour verification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3402–3413. Association for Computational Linguistics. Yann LeCun, Koray Kavukcuoglu, and Cl´ement Farabet. 2010. Convolutional networks and applications in vision. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems, pages 253–256. IEEE. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2304–2314, Lisbon, Portugal. Association for Computational Linguistics. 5057 Can Liu, Wen Li, Bradford Demarest, Yue Chen, Sara Couture, Daniel Dakota, Nikita Haduong, Noah Kaufman, Andrew Lamont, Manan Pancholi, et al. 2016. Iucl at semeval-2016 task 6: An ensemble model for stance detection in twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 394–400. Michal Lukasik, Trevor Cohn, and Kalina Bontcheva. 2015. Classifying tweet level judgements of rumours in social media. In EMNLP. Michal Lukasik, PK Srijith, Duy Vu, Kalina Bontcheva, Arkaitz Zubiaga, and Trevor Cohn. 2016. Hawkes processes for continuous time sequence classification: an application to rumour stance classification in twitter. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 393–398. Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Rumor detection on twitter with tree-structured recursive neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1980– 1989, Melbourne, Australia. Association for Computational Linguistics. Saif M Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26. Mert Ozer, Nyunsu Kim, and Hasan Davulcu. 2016. Community detection in political twitter networks using nonnegative matrix factorization methods. In Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on, pages 81–88. IEEE. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake news or truth? using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 7–17. Victoria L Rubin and Tatiana Lukoianova. 2015. Truth and deception at the rhetorical structure level. Journal of the Association for Information Science and Technology, 66(5):905–917. Steve Schifferes, Nic Newman, Neil Thurman, David Corney, Ayse G¨oker, and Carlos Martin. 2014. Identifying and verifying news through social media: Developing a user-centred tool for professional journalists. Digital Journalism, 2(3):406–418. Karishma Sharma, Feng Qian, He Jiang, Natali Ruchansky, Ming Zhang, and Yan Liu. 2019. Combating fake news: A survey on identification and mitigation techniques. ACM Trans. Intell. Syst. Technol., 10(3):21:1–21:42. Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116–124. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Eugenio Tacchini, Gabriele Ballarin, Marco L. Della Vedova, Stefano Moret, and Luca de Alfaro. 2017. Some like it hoax: Automated fake news detection in social networks. CoRR, abs/1704.07506. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566, Beijing, China. Association for Computational Linguistics. Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 647–653. Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. 
Science, 359(6380):1146–1151. Wei Wang, Kevin Knight, and Daniel Marcu. 2007. Binarizing syntax trees to improve syntax-based machine translation accuracy. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Wan Wei, Xiao Zhang, Xuqin Liu, Wei Chen, and Tengjiao Wang. 2016. pkudblab at semeval-2016 task 6: A specific convolutional neural network system for effective stance detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 384–388. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 5058 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657. Kaimin Zhou, Chang Shu, Binyang Li, and Jey Han Lau. 2019. Early rumour detection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1614–1623, Minneapolis, Minnesota. Association for Computational Linguistics. Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, and Michal Lukasik. 2016a. Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2438–2448, Osaka, Japan. The COLING 2016 Organizing Committee. Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018. Discourseaware rumour stance classification in social media using sequential classifiers. Information Processing & Management, 54(2):273–290. Arkaitz Zubiaga, Maria Liakata, Rob Procter, Kalina Bontcheva, and Peter Tolmie. 2015. Crowdsourcing the annotation of rumourous conversations in social media. In Proceedings of the 24th International Conference on World Wide Web, pages 347– 353. ACM. Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016b. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PloS one, 11(3):e0150989.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5059–5069 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5059 HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization Xingxing Zhang, Furu Wei and Ming Zhou Microsoft Research Asia, Beijing, China {xizhang,fuwei,mingzhou}@microsoft.com Abstract Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these inaccurate labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders (Devlin et al., 2018), we propose HIBERT (as shorthand for HIerachical Bidirectional Encoder Representations from Transformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained HIBERT to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets. 1 Introduction Automatic document summarization is the task of rewriting a document into its shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see Nenkova and McKeown (2011) for an overview). The most popular two among them are extractive approaches and abstractive approaches. As the name implies, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document. Extractive summarization is usually modeled as a sentence ranking problem with length constraints (e.g., max number of words or sentences). Top ranked sentences (under constraints) are selected as summaries. Early attempts mostly leverage manually engineered features (Filatova and Hatzivassiloglou, 2004a). Based on these sparse features, sentence are selected using a classifier or a regression model. Later, the feature engineering part in this paradigm is replaced with neural networks. Cheng and Lapata (2016) propose a hierarchical long short-term memory network (LSTM; Hochreiter and Schmidhuber 1997) to encode a document and then use another LSTM to predict binary labels for each sentence in the document. This architecture is widely adopted recently (Nallapati et al., 2017; Narayan et al., 2018; Zhang et al., 2018). Our model also employs a hierarchical document encoder, but we adopt a hierarchical transformer (Vaswani et al., 2017) rather a hierarchical LSTM. Because recent studies (Vaswani et al., 2017; Devlin et al., 2018) show the transformer model performs better than LSTM in many tasks. Abstractive models do not attract much attention until recently. They are mostly based on sequence to sequence (seq2seq) models (Bahdanau et al., 2015), where a document is viewed a sequence and its summary is viewed as another sequence. Although seq2seq based summarizers can be equipped with copy mechanism (Gu et al., 2016; See et al., 2017), coverage model (See et al., 2017) and reinforcement learning (Paulus et al., 2017), there is still no guarantee that the generated summaries are grammatical and convey the same meaning as the original document does. 
It seems that extractive models are more reliable than their abstractive counterparts. However, extractive models require sentence level labels, which are usually not included in most summarization datasets (most datasets only contain document-summary pairs). Sentence labels are usually obtained by rule-based methods (e.g., maximizing the ROUGE score between a set of sentences and reference summaries) and may not be accurate. Extractive models proposed re5060 cently (Cheng and Lapata, 2016; Nallapati et al., 2017) employ hierarchical document encoders and even have neural decoders, which are complex. Training such complex neural models with inaccurate binary labels is challenging. We observed in our initial experiments on one of our dataset that our extractive model (see Section 3.3 for details) overfits to the training set quickly after the second epoch, which indicates the training set may not be fully utilized. Inspired by the recent pre-training work in natural language processing (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018), our solution to this problem is to first pre-train the “complex”’ part (i.e., the hierarchical encoder) of the extractive model on unlabeled data and then we learn to classify sentences with our model initialized from the pre-trained encoder. In this paper, we propose HIBERT, which stands for HIerachical Bidirectional Encoder Representations from Transformers. We design an unsupervised method to pre-train HIBERT for document modeling. We apply the pre-trained HIBERT to the task of document summarization and achieve state-of-the-art performance on both the CNN/Dailymail and New York Times dataset. 2 Related Work In this section, we introduce work on extractive summarization, abstractive summarization and pre-trained natural language processing models. For a more comprehensive review of summarization, we refer the interested readers to Nenkova and McKeown (2011) and Mani (2001). Extractive Summarization Extractive summarization aims to select important sentences (sometimes other textual units such as elementary discourse units (EDUs)) from a document as its summary. It is usually modeled as a sentence ranking problem by using the scores from classifiers (Kupiec et al., 1995), sequential labeling models (Conroy and O’leary, 2001) as well as integer linear programmers (Woodsend and Lapata, 2010). Early work with these models above mostly leverage human engineered features such as sentence position and length (Radev et al., 2004), word frequency (Nenkova et al., 2006) and event features (Filatova and Hatzivassiloglou, 2004b). As the very successful applications of neural networks to a wide range of NLP tasks, the manually engineered features (for document encoding) are replaced with hierarchical LSTMs/CNNs and the sequence labeling (or classification) model is replaced with an LSTM decoder (Cheng and Lapata, 2016; Nallapati et al., 2017). The architecture is widely adopted in recent neural extractive models and is extended with reinforcement learning (Narayan et al., 2018; Dong et al., 2018), latent variable models (Zhang et al., 2018), joint scoring (Zhou et al., 2018) and iterative document representation (Chen et al., 2018). Recently, transformer networks (Vaswani et al., 2017) achieves good performance in machine translation (Vaswani et al., 2017) and a range of NLP tasks (Devlin et al., 2018; Radford et al., 2018). 
Different from the extractive models above, we adopt a hierarchical Transformer for document encoding and also propose a method to pre-train the document encoder. Abstractive Summarization Abstractive summarization aims to generate the summary of a document with rewriting. Most recent abstractive models (Nallapati et al., 2016) are based on neural sequence to sequence learning (Bahdanau et al., 2015; Sutskever et al., 2014). However, the generated summaries of these models can not be controlled (i.e., their meanings can be quite different from the original and contents can be repeated). Therefore, copy mechanism (Gu et al., 2016), coverage model (See et al., 2017) and reinforcement learning model optimizing ROUGE (Paulus et al., 2017) are introduced. These problems are alleviated but not solved. There is also an interesting line of work combining extractive and abstractive summarization with reinforcement learning (Chen and Bansal, 2018), fused attention (Hsu et al., 2018) and bottom-up attention (Gehrmann et al., 2018). Our model, which is a very good extractive model, can be used as the sentence extraction component in these models and potentially improves their performance. Pre-trained NLP Models Most model pretraining methods in NLP leverage the natural ordering of text. For example, word2vec uses the surrounding words within a fixed size window to predict the word in the middle with a log bilinear model. The resulting word embedding table can be used in other downstream tasks. There are other word embedding pre-training methods using similar techniques (Pennington et al., 2014; Bojanowski et al., 2017). Peters et al. (2018) and Radford et al. (2018) find even a sentence encoder 5061 Figure 1: The architecture of HIBERT during training. senti is a sentence in the document above, which has four sentences in total. sent3 is masked during encoding and the decoder predicts the original sent3. (not just word embeddings) can also be pre-trained with language model objectives (i.e., predicting the next or previous word). Language model objective is unidirectional, while many tasks can leverage the context in both directions. Therefore, Devlin et al. (2018) propose the naturally bidirectional masked language model objective (i.e., masking several words with a special token in a sentence and then predicting them). All the methods above aim to pre-train word embeddings or sentence encoders, while our method aims to pre-train the hierarchical document encoders (i.e., hierarchical transformers), which is important in summarization. 3 Model In this section, we present our model HIBERT. We first introduce how documents are represented in HIBERT. We then describe our method to pre-train HIBERT and finally move on to the application of HIBERT to summarization. 3.1 Document Representation Let D = (S1, S2, . . . , S|D|) denote a document, where Si = (wi 1, wi 2, . . . , wi |Si|) is a sentence in D and wi j a word in Si. Note that following common practice in natural language processing literatures, wi |Si| is an artificial EOS (End Of Sentence) token. To obtain the representation of D, we use two encoders: a sentence encoder to transform each sentence in D to a vector and a document encoder to learn sentence representations given their surrounding sentences as context. Both the sentence encoder and document encoder are based on the Transformer encoder described in Vaswani et al. (2017). As shown in Figure 1, they are nested in a hierarchical fashion. 
A transformer encoder usually has multiple layers and each layer is composed of a multi-head self attentive sub-layer followed by a feed-forward sub-layer with residual connections (He et al., 2016) and layer normalizations (Ba et al., 2016). For more details of the Transformer encoder, we refer the interested readers to Vaswani et al. (2017). To learn the representation of Si, Si = (wi 1, wi 2, . . . , wi |Si|) is first mapped into continuous space Ei = (ei 1, ei 2, . . . , ei |Si|) where ei j = e(wi j) + pj (1) where e(wi j) and pj are the word and positional embeddings of wi j, respectively. The word embedding matrix is randomly initialized and we adopt the sine-cosine positional embedding (Vaswani et al., 2017)1. Then the sentence encoder (a Transformer) transforms Ei into a list of hidden representations (hi 1, hi 2, . . . , hi |Si|). We take the last hidden representation hi |Si| (i.e., the representation at the EOS token) as the representation of sentence Si. Similar to the representation of each word in Si, we also take the sentence position into account. The final representation of Si is ˆhi = hi |Si| + pi (2) Note that words and sentences share the same positional embedding matrix. In analogy to the sentence encoder, as shown in Figure 1, the document encoder is yet another Transformer but applies on the sentence level. After running the Transformer on a sequence of sentence representations (ˆh1, ˆh2, . . . , ˆh|D|), we obtain the context sensitive sentence representations (d1, d2, . . . , d|D|). Now we have finished the encoding of a document with a hierarchical bidirectional transformer encoder HIBERT. Note that in previous work, document representation are also 1We use the sine-cosine embedding because it works well and do not introduce additional trainable parameters. 5062 learned with hierarchical models, but each hierarchy is a Recurrent Neural Network (Nallapati et al., 2017; Zhou et al., 2018) or Convolutional Neural Network (Cheng and Lapata, 2016). We choose the Transformer because it outperforms CNN and RNN in machine translation (Vaswani et al., 2017), semantic role labeling (Strubell et al., 2018) and other NLP tasks (Devlin et al., 2018). In the next section we will introduce how we train HIBERT with an unsupervised training objective. 3.2 Pre-training Most recent encoding neural models used in NLP (e.g., RNNs, CNNs or Transformers) can be pretrained by predicting a word in a sentence (or a text span) using other words within the same sentence (or span). For example, ELMo (Peters et al., 2018) and OpenAI-GPT (Radford et al., 2018) predict a word using all words on its left (or right); while word2vec (Mikolov et al., 2013) predicts one word with its surrounding words in a fixed window and BERT (Devlin et al., 2018) predicts (masked) missing words in a sentence given all the other words. All the models above learn the representation of a sentence, where its basic units are words. HIBERT aims to learn the representation of a document, where its basic units are sentences. Therefore, a natural way of pre-training a document level model (e.g., HIBERT) is to predict a sentence (or sentences) instead of a word (or words). We could predict a sentence in a document with all the sentences on its left (or right) as in a (document level) language model. However, in summarization, context on both directions are available. We therefore opt to predict a sentence using all sentences on both its left and right. Document Masking Specifically, suppose D = (S1, S2, . . . 
, S|D|) is a document, where Si = (wi 1, wi 2, . . . , wi |Si|) is a sentence in it. We randomly select 15% of the sentences in D and mask them. Then, we predict these masked sentences. The prediction task here is similar with the Cloze task (Taylor, 1953; Devlin et al., 2018), but the missing part is a sentence. However, during test time the input document is not masked, to make our model can adapt to documents without masks, we do not always mask the selected sentences. Once a sentence is selected (as one of the 15% selected masked sentences), we transform it with one of three methods below. We will use an example to demonstrate the transformation. For instance, we have the following document and the second sentence is selected2: William Shakespeare is a poet . He died in 1616 . He is regarded as the greatest writer . In 80% of the cases, we mask the selected sentence (i.e., we replace each word in the sentence with a mask token [MASK]). The document above becomes William Shakespeare is a poet . [MASK] [MASK] [MASK] [MASK] [MASK] He is regarded as the greatest writer . (where “He died in 1616 . ” is masked). In 10% of the cases, we keep the selected sentence as it is. This strategy is to simulate the input document during test time (with no masked sentences). In the rest 10% cases, we replace the selected sentence with a random sentence. In this case, the document after transformation is William Shakespeare is a poet . Birds can fly . He is regarded as the greatest writer . The second sentence is replaced with “Birds can fly .” This strategy intends to add some noise during training and make the model more robust. Sentence Prediction After the application of the above procedures to a document D = (S1, S2, . . . , S|D|), we obtain the masked document eD = ( ˜S1, ˜S2, . . . , ˜ S|D|). Let K denote the set of indicies of selected sentences in D. Now we are ready to predict the masked sentences M = {Sk|k ∈K} using eD. We first apply the hierarchical encoder HIBERT in Section 3.1 to eD and obtain its context sensitive sentence representations ( ˜d1, ˜d2, . . . , ˜ d|D|). We will demonstrate how we predict the masked sentence Sk = (wk 0, wk 1, wk 2, . . . , wk |Sk|) one word per step (wk 0 is an artificially added BOS token). At the jth step, we predict wk j given wk 0, . . . , wk j−1 and eD. ˜ dk already encodes the information of eD with a focus around its kth sentence ˜Sk. As shown in Figure 1, we employ a Transformer decoder (Vaswani et al., 2017) to predict wk j with ˜ dk as its additional input. The transformer decoder we used here is slightly different from the original one. The original decoder employs two multi-head attention layers to 2There might be multiple sentences selected in a document, but in this example there is only one. 5063 include both the context in encoder and decoder, while we only need one to learn the decoder context, since the context in encoder is a vector (i.e., ˜ dk). Specifically, after applying the word and positional embeddings to (wk 0, . . . , wk j−1), we obtain eEk 1:j−1 = ( ˜ek 0, . . . , ˜ ek j−1) (also see Equation 1). Then we apply multi-head attention sub-layer to eEk 1:j−1: ˜ hj−1 = MultiHead(qj−1, Kj−1, Vj−1) qj−1 = WQ ˜ ek j−1 Kj−1 = WK eEk 1:j−1 Kj−1 = WV eEk 1:j−1 (3) where qj−1, Kj−1, Vj−1 are the input query, key and value matrices of the multi-head attention function (Vaswani et al., 2017) MultiHead(·, ·, ·), respectively. WQ ∈Rd×d, WK ∈Rd×d and WV ∈Rd×d are weight matrices. 
Then we include the information of eD by addition: ˜ xj−1 = ˜ hj−1 + ˜ dk (4) We also follow a feedforward sub-layer (one hidden layer with ReLU (Glorot et al., 2011) activation function) after ˜ xj−1 as in Vaswani et al. (2017): ˜ gj−1 = Wff 2 max(0, Wff 1 ˜ xj−1 + b1) + b2 (5) Note that the transformer decoder can have multiple layers by applying Equation (3) to (5) multiple times and we only show the computation of one layer for simplicity. The probability of wk j given wk 0, . . . , wk j−1 and eD is: p(wk j |wk 0:j−1, eD) = softmax(WO ˜ gj−1) (6) Finally the probability of all masked sentences M given eD is p(M| eD) = Y k∈K |Sk| Y j=1 p(wk j |wk 0:j−1, eD) (7) The model above can be trained by minimizing the negative log-likelihood of all masked sentences given their paired documents. We can in theory have unlimited amount of training data for HIBERT, since they can be generated automatically from (unlabeled) documents. Therefore, we can first train HIBERT on large amount of data and then apply it to downstream tasks. In the next section, we will introduce its application to document summarization. Figure 2: The architecture of our extractive summarization model. The sentence and document level transformers can be pretrained. 3.3 Extractive Summarization Extractive summarization selects the most important sentences in a document as its summary. In this section, summarization is modeled as a sequence labeling problem. Specifically, a document is viewed as a sequence of sentences and a summarization model is expected to assign a True or False label for each sentence, where True means this sentence should be included in the summary. In the following, we will introduce the details of our summarization model based HIBERT. Let D = (S1, S2, . . . , S|D|) denote a document and Y = (y1, y2, . . . , y|D|) its sentence labels (methods for obtaining these labels are in Section 4.1). As shown in Figure 2, we first apply the hierarchical bidirectional transformer encoder HIBERT to D and yields the context dependent representations for all sentences (d1, d2, . . . , d|D|). The probability of the label of Si can be estimated using an additional linear projection and a softmax: p(yi|D) = softmax(WS di) (8) where WS ∈R2×d. The summarization model can be trained by minimizing the negative loglikelihood of all sentence labels given their paired documents. 4 Experiments In this section we assess the performance of our model on the document summarization task. We 5064 first introduce the dataset we used for pre-training and the summarization task and give implementation details of our model. We also compare our model against multiple previous models. 4.1 Datasets We conducted our summarization experiments on the non-anonymous version CNN/Dailymail (CNNDM) dataset (Hermann et al., 2015; See et al., 2017), and the New York Times dataset (Durrett et al., 2016; Xu and Durrett, 2019). For the CNNDM dataset, we preprocessed the dataset using the scripts from the authors of See et al. (2017)3. The resulting dataset contains 287,226 documents with summaries for training, 13,368 for validation and 11,490 for test. Following (Xu and Durrett, 2019; Durrett et al., 2016), we created the NYT50 dataset by removing the documents whose summaries are shorter than 50 words from New York Times dataset. We used the same training/validation/test splits as in Xu and Durrett (2019), which contain 137,778 documents for training, 17,222 for validation and 17,223 for test. 
To create sentence level labels for extractive summarization, we used a strategy similar to Nallapati et al. (2017). We label the subset of sentences in a document that maximizes ROUGE (Lin, 2004) (against the human summary) as True and all other sentences as False. To unsupervisedly pre-train our document model HIBERT (see Section 3.2 for details), we created the GIGA-CM dataset (totally 6,626,842 documents and 2,854 million words), which includes 6,339,616 documents sampled from the English Gigaword4 dataset and the training split of the CNNDM dataset. We used the validation set of CNNDM as the validation set of GIGA-CM as well. As in See et al. (2017), documents and summaries in CNNDM, NYT50 and GIGA-CM are all segmented and tokenized using Stanford CoreNLP toolkit (Manning et al., 2014). To reduce the vocabulary size, we applied byte pair encoding (BPE; Sennrich et al. 2016) to all of our datasets. To limit the memory consumption during training, we limit the length of each sentence to be 50 words (51th word and onwards are removed) and split documents with more than 30 sentences into smaller documents with each containing at most 30 sentences. 3Scripts publicly available at https://github.com/ abisee/cnn-dailymail 4https://catalog.ldc.upenn.edu/LDC2012T21 4.2 Implementation Details Our model is trained in three stages, which includes two pre-training stages and one finetuning stage. The first stage is the open-domain pretraining and in this stage we train HIBERT with the pre-training objective (Section 3.2) on GIGA-CM dataset. In the second stage, we perform the indomain pre-training on the CNNDM (or NYT50) dataset still with the same pre-training objective. In the final stage, we finetune HIBERT in the summarization model (Section 3.3) to predict extractive sentence labels on CNNDM (or NYT50). The sizes of the sentence and document level Transformers as well as the Transformer decoder in HIBERT are the same. Let L denote the number of layers in Transformer, H the hidden size and A the number of attention heads. As in (Vaswani et al., 2017; Devlin et al., 2018), the hidden size of the feedforward sublayer is 4H. We mainly trained two model sizes: HIBERTS (L = 6, H = 512 and A = 8) and HIBERTM (L = 6, H = 768 and A = 12). We trained both HIBERTS and HIBERTM on a single machine with 8 Nvidia Tesla V100 GPUs with a batch size of 256 documents. We optimized our models using Adam with learning rate of 1e-4, β1 = 0.9, β2 = 0.999, L2 norm of 0.01, learning rate warmup 10,000 steps and learning rate decay afterwards using the strategies in Vaswani et al. (2017). The dropout rate in all layers are 0.1. In pre-training stages, we trained our models until validation perplexities do not decrease significantly (around 45 epochs on GIGA-CM dataset and 100 to 200 epochs on CNNDM and NYT50). Training HIBERTM for one epoch on GIGA-CM dataset takes approximately 20 hours. Our models during fine-tuning stage can be trained on a single GPU. The hyper-parameters are almost identical to these in the pre-training stages except that the learning rate is 5e-5, the batch size is 32, the warmup steps are 4,000 and we train our models for 5 epochs. During inference, we rank sentences using p(yi|D) (Equation (8)) and choose the top K sentences as summary, where K is tuned on the validation set. 4.3 Evaluations We evaluated the quality of summaries from different systems automatically using ROUGE (Lin, 2004). 
We reported the full length F1 based ROUGE-1, ROUGE-2 and ROUGE-L on the 5065 Model R-1 R-2 R-L Pointer+Coverage 39.53 17.28 36.38 Abstract-ML+RL 39.87 15.82 36.90 DCA 41.69 19.47 37.92 SentRewrite 40.88 17.80 38.54 InconsisLoss 40.68 17.97 37.13 Bottom-Up 41.22 18.68 38.34 Lead3 40.34 17.70 36.57 SummaRuNNer 39.60 16.20 35.30 NeuSum 40.11 17.52 36.39 Refresh 40.00 18.20 36.60 NeuSum-MMR 41.59 19.01 37.98 BanditSum 41.50 18.70 37.60 JECS 41.70 18.50 37.90 LatentSum 41.05 18.77 37.54 HierTransformer 41.11 18.69 37.53 BERT 41.82 19.48 38.30 HIBERTS (in-domain) 42.10 19.70 38.53 HIBERTS 42.31 19.87 38.78 HIBERTM 42.37 19.95 38.83 Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L). CNNDM and NYT50 datasets. We compute ROUGE scores using the ROUGE-1.5.5.pl script. Additionally, we also evaluated the generated summaries by eliciting human judgments. Following (Cheng and Lapata, 2016; Narayan et al., 2018), we randomly sampled 20 documents from the CNNDM test set. Participants were presented with a document and a list of summaries produced by different systems. We asked subjects to rank these summaries (ties allowed) by taking informativeness (is the summary capture the important information from the document?) and fluency (is the summary grammatical?) into account. Each document is annotated by three different subjects. 4.4 Results Our main results on the CNNDM dataset are shown in Table 1, with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage (See et al., 2017), Abstract-ML+RL (Paulus et al., 2017) and DCA (Celikyilmaz et al., 2018) are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite (Hsu et al., 2018) and InconsisLoss (Chen and Bansal, 2018) all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up (Gehrmann et al., 2018) generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; Nallapati et al. 2017 and NeuSum; Cheng and Lapata 2016). They have been extended with reinforcement learning (Refresh; Narayan et al. 2018 and BanditSum; Dong et al. 2018), Maximal Marginal Relevance (NeuSum-MMR; Zhou et al. 2018), latent variable modeling (LatentSum; Zhang et al. 2018) and syntactic compression (JECS; Xu and Durrett 2019). Lead3 is a baseline which simply selects the first three sentences. Our model HIBERTS (in-domain), which only use one pretraining stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training HIBERTS (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages (HIBERTS) or larger size (HIBERTM) perform even better and HIBERTM outperforms BERT by 0.5 ROUGE5. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in 3.3) without pretraining. Note the setting for HeriTransfomer is (L = 4,H = 300 and A = 4) 6. We can see that the pre-training (details in Section 3.2) leads to a +1.25 ROUGE improvement. 
Another baseline is based on a pre-trained BERT (Devlin et al., 2018)7 and finetuned on the CNNDM dataset. We used the BERTbase model because our 16G RAM V100 GPU cannot fit BERTlarge for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks 5The difference is significant according to the ROUGE script. 6We tried deeper and larger models, but obtained inferior results, which may indicates training large or deep models on this dataset without a good initialization is challenging. 7Our BERT baseline is adapted from this implementation https://github.com/huggingface/ pytorch-pretrained-BERT 5066 Models R-1 R-2 R-L Lead 41.80 22.60 35.00 EXTRACTION 44.30 25.50 37.10 JECS 45.50 25.30 38.20 HeriTransformer 47.44 28.08 39.56 BERT 48.38 29.04 40.53 HIBERTS (in-domain) 48.92 29.58 41.10 HIBERTM (in-domain) 49.06 29.70 41.23 HIBERTS 49.25 29.92 41.43 HIBERTM 49.47 30.11 41.63 Table 2: Results of various models on the NYT50 test set using full-length F1 ROUGE. HIBERTS (indomain) and HIBERTM (in-domain) only uses one pretraining stage on the NYT50 training set. Pretraining Strategies R-1 R-2 R-L Open-Domain 42.97 20.31 39.51 In-Domain 42.93 20.28 39.46 Open+In-Domain 43.19 20.46 39.72 Table 3: Results of summarization model (HIBERTS setting) with different pre-training strategies on the CNNDM validation set using full-length F1 ROUGE. (each block with 10 sentences8). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model HIBERTS outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters (HIBERTS 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by Xu and Durrett (2019). The improvement of HIBERTM over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. HIBERTS (in-domain), HIBERTM (in-domain), HIBERTS and HIBERTM all outperform BERT significantly according to the ROUGE script. We also conducted human experiment with 20 randomly sampled documents from the CNNDM test set. We compared our model HIBERTM against Lead3, DCA, Latent, BERT and the human reference (Human)9. We asked the subjects to rank 8We use 10 sentences per block, because maximum sentence length 50 × 10 < 512 (maximum BERT supported length). The last block of a document may have less than 10 sentences. 9We obtained the outputs of DCA via emails. Models 1st 2nd 3rd 4th 5th 6th MeanR Lead3 0.03 0.18 0.15 0.30 0.30 0.03 3.75 DCA 0.08 0.15 0.18 0.20 0.15 0.23 3.88 Latent 0.05 0.33 0.28 0.20 0.13 0.00 3.03 BERT 0.13 0.37 0.32 0.15 0.03 0.00 2.58 HIBERTM 0.30 0.35 0.25 0.10 0.00 0.00 2.15 Human 0.58 0.15 0.20 0.00 0.03 0.03 1.85 Table 4: Human evaluation: proportions of rankings and mean ranks (MeanR; lower is better) of various models. the outputs of these systems from best to worst. As shown in Table 4, the output of HIBERTM is selected as the best in 30% of cases and we obtained lower mean rank than all systems except for Human. We also converted the rank numbers into ratings (rank i to 7 −i) and applied student t-test on the ratings. 
HIBERTM is significantly different from all systems in comparison (p < 0.05), which indicates our model still lags behind Human, but is better than all other systems. Pre-training Strategies As mentioned earlier, our pre-training includes two stages. The first stage is the open-domain pre-training stage on the GIGA-CM dataset and the following stage is the in-domain pre-training on the CNNDM (or NYT50) dataset. As shown in Table 3, we pretrained HIBERTS using only open-domain stage (Open-Domain), only in-domain stage (InDomain) or both stages (Open+In-Domain) and applied it to the CNNDM summarization task. Results on the validation set of CNNDM indicate the two-stage pre-training process is necessary. 5 Conclusions The core part of a neural extractive summarization model is the hierarchical document encoder. We proposed a method to pre-train document level hierarchical bidirectional transformer encoders on unlabeled data. When we only pre-train hierarchical transformers on the training sets of summarization datasets with our proposed objective, application of the pre-trained hierarchical transformers to extractive summarization models already leads to wide improvement of summarization performance. Adding the large open-domain dataset to pre-training leads to even better performance. In the future, we plan to apply models to other tasks that also require hierarchical document encodings (e.g., document question answering). We are also interested in improving the architectures 5067 of hierarchical document encoders and designing other objectives to train hierarchical transformers. Acknowledgments We would like to thank Nan Yang, Houwen Peng, Li Dong and the ACL reviewers for their valuable feedback. We are grateful to Jiacheng Xu and Greg Durrett for sharing their splits of the New York Times dataset with us. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675, New Orleans, Louisiana. Xiuying Chen, Shen Gao, Chongyang Tao, Yan Song, Dongyan Zhao, and Rui Yan. 2018. Iterative document representation learning towards summarization with polishing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4088–4097. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. John M Conroy and Dianne P O’leary. 2001. Text summarization via hidden markov models. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 406–407. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805. Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. Banditsum: Extractive summarization as a contextual bandit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3739–3748. Association for Computational Linguistics. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998–2008. Association for Computational Linguistics. Elena Filatova and Vasileios Hatzivassiloglou. 2004a. Event-based extractive summarization. In Text Summarization Branches Out: Proceedings of the ACL04 Workshop, pages 104–111, Barcelona, Spain. Elena Filatova and Vasileios Hatzivassiloglou. 2004b. Event-based extractive summarization. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Association for Computational Linguistics. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315– 323. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640. Association for Computational Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693– 1701. Curran Associates, Inc. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. 5068 Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 132–141. Association for Computational Linguistics. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval, pages 68–73. ACM. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. 
In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Inderjeet Mani. 2001. Automatic Summarization. John Benjamins Pub Co. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3075–3091, San Francisco, California. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2–3):103–233. Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. 2006. A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 573–580. ACM. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Dragomir Radev, Timothy Allison, Sasha BlairGoldensohn, John Blitzer, Arda C¸ elebi, Stanko Dimitrov, Elliott Drabek, Ali Hakim, Wai Lam, Danyu Liu, Jahna Otterbacher, Hong Qi, Horacio Saggion, Simone Teufel, Michael Topper, Adam Winkel, and Zhu Zhang. 2004. Mead - a platform for multidocument multilingual text summarization. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04). European Language Resources Association (ELRA). Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. 
Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. 5069 Wilson L Taylor. 1953. cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Kristian Woodsend and Mirella Lapata. 2010. Automatic generation of story highlights. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 565–574, Uppsala, Sweden. Jiacheng Xu and Greg Durrett. 2019. Neural extractive text summarization with syntactic compression. arXiv preprint arXiv:1902.00863. Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 779–784. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–663. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 38–43 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 38 Boosting Dialog Response Generation Wenchao Du Language Technologies Institute Carnegie Mellon University [email protected] Alan W Black Language Technologies Institute Carnegie Mellon University [email protected] Abstract Neural models have become one of the most important approaches to dialog response generation. However, they still tend to generate the most common and generic responses in the corpus all the time. To address this problem, we designed an iterative training process and ensemble method based on boosting. We combined our method with different training and decoding paradigms as the base model, including mutual-information-based decoding and reward-augmented maximum likelihood learning. Empirical results show that our approach can significantly improve the diversity and relevance of the responses generated by all base models, backed by objective measurements and human evaluation. 1 Introduction Sequence-to-sequence models (Sutskever et al., 2014) has become one of the most popular approaches to dialog systems, for it provides a high degree of automation and flexibility. On the other hand, they are known to suffer from the “dullresponse” problem (Li et al., 2015). Various research attempts have been made to improve the diversity of responses generated by sequence-tosequence models. One line of research investigate alternatives to maximum likelihood learning and decoding, which is believed to be the main cause of monotonicity. (Li et al., 2015) employed a decoding objective based on mutual information between contexts and responses; (Li et al., 2017a) used reinforcement learning techniques for training the decoder to generate responses that maximize pre-defined rewards instead of perplexities; (Li et al., 2017b; Xu et al., 2017) adopted adversarial learning, in which a generator is trained to deceive a discriminator that tries to differentiate between generated responses and human responses. Beside changing training and decoding objectives, (Liu et al., 2018; Lison and Bibauw, 2017) considered reweighting data points by penalizing those with overly frequent responses or by emphasizing high-quality responses. (Serban et al., 2017; Zhao et al., 2017) introduced stochastic latent variables into their models to capture discourse information on an inter-utterance level. (Shao et al., 2017) experimented with a novel segment-based training and decoding paradigm to help mitigate the problem of redundancy and contradiction. Yet another type of approach has not been investigated in the literature in the context of response generation – boosting and ensembling, despite having been studied for machine translation (Xiao et al., 2010; Zhang et al., 2017). Being a long established machine learning method (Freund and Schapire, 1997), the process typically involves iteratively training multiple models on reweighted instances according to the error of the previous models and combining these models. The idea has been recently revived and extended to generative models and image generation, which also suffers from diversity problem (Tolstikhin et al., 2017; Grover and Ermon, 2018). In computer vision, the state-of-the-art models tend to generate a few categories of objects all the time and ignore the rest, known as the problem of “missing modes”. 
Boosting has been shown to significantly improve the coverage of image generation models. For language generation, given the prior success with data re-weighting and bootstrap approach (Zhang et al., 2017; Liu et al., 2018), we believe dialog response generation may benefit from boosting as well. In this work, we designed a principled framework of boosting response generation, based on the recently developed theory of boosting generative models. Moreover, we combined boosting with different training and/or decoding paradigms, and empirically show that boosting can invariably improve them, in both quantitative and 39 qualitative evaluation. 2 Preliminaries For standard sequence-to-sequence approaches, training of models and decoding for generations are done through maximum likelihood estimation: log p(y | x) = n X i=1 log p(yi | y1 . . . yi−1, x) (1) where x is the source (or context) and y is the target (or response). (Li et al., 2015) proposed a decoding objective based on mutual information of x and y to improve diversity: MMI(x, y) = log p(y | x) −λp(y) (2) The conditional probability of y given x is estimated from sequence-to-sequence models, and the marginal probability of y from a separately trained language model. Reward-augmented maximum likelihood learning (RAML) (Norouzi et al., 2016) incorporates task rewards into maximum likelihood training. An exponential payoff distribution is defined: s(y | y∗; τ) = 1 Z(y∗, τ) exp{r(y, y∗)/τ} (3) where y∗is the true target, r is a pre-defined reward function, and τ is temperature parameter. The model is trained to minimize the KLdivergence of the conditional distribution of y and the payoff distribution: X x,y∗ DKL(s(y | y∗) || p(y | x)) = − X x,y∗ X y s(y | y∗) log p(y | x) + const (4) In multiplicative boosting, the density estimate of at each iteration T is given by: qT = hαT T qT−1 = QT t=1 hαt t ZT (5) where ht is tth model’s estimate, and αt is models’ weights. The goal of boosting is to approximate better the true distribution, P. It is shown in (Grover and Ermon, 2018) that if the model at each iteration can optimize for a re-weighted distribution of the following form perfectly: dt ∝( p qt )βt (6) the distance of models’ density estimate and the true distribution is decreasing, that is, DKL(P || Qt) ≤DKL(P || Qt−1) (7) In equation (5) - (7), the density estimates are for the joint distribution of x and y. We make an additional assumption that the sources are uniformly distributed so that p(x, y) = 1 np(y | x), for the ease of applying the boosting algorithm to sequence-to-sequence training. The true distribution P is usually set to be uniform to boost the coverage of generative models. One of our innovations in this work is extending it to the exponential payoff distribution in RAML setting. The decreasing property of KLdivergence still holds, as the theoretical analysis is very much similar to that in (Grover and Ermon, 2018). 3 Design We discuss some practical considerations when applying boosting framework to response generation problem. 3.1 Data Reweighting In the generative boosting method of (6), the weights of data are inversely proportional to the perplexities of the responses. However, it is observed in experiments that the generic responses do not always have low perplexities. If not handled properly, such responses end up being boosted, and become the frequently generated responses at the next iteration. 
In search for a consistent way to penalize generic responses with high perplexities, we first considered the discriminative boosting approach introduced in (Grover and Ermon, 2018). A discriminator is trained to differentiate between generated responses and human responses. The weights of data after discriminative boosting is the density ratio from the discriminator. The idea is closely related to generative adversarial learning (Goodfellow et al., 2014). However, in our case it is difficult to apply such approach. Because the generated responses are very limited, most classifiers can easily memorize all of them. The discriminators end up assigning extremely high probabilities to most of the human responses, and close-to-zero densities to generated responses. In other words, the amount of negative examples is 40 Model Win Loss Tie MLE 37.6 ± 6.4% 17.6 ± 4.0% 44.8 ± 6.4% MMI 36.0 ± 9.2% 16.8 ± 6.8% 47.2 ± 8.8% RAML 44.8% ± 10.8% 16.8 ± 4.8% 38.4 ± 12.4% Table 1: Human evaluation results. “Win” stands for the boosted model winning. too small to train a discriminator to obtain good decision boundaries and generalization. Instead, we resort to a simple rule-based discriminator. At each iteration, we maintain a list of most frequently generated responses, Ct. We choose a binary function to decide whether two responses, y, z, are similar, denoted by sim(y, z). The discriminator is defined as Dt(y) = ( c if ∃y0 ∈S t Ct, sim(y, y0) = 1 0.5 otherwise (8) And the weights of data at round t is given by dt(x, y) ∝( p(x, y) qt(x, y))βt Dt(y) 1 −Dt(y) (9) In our experiments, the similarity function is chosen to be a predicate of whether there is an n-gram overlap with n ≥4. We chose to be aggressive and set c = 0, so responses that are similar to those generated by previous models are excluded. The sizes of Ct is chosen to be around 20 so that the amount of training data reduces by about 10 percent at each iteration. In our experiments, we include bootstrapping as an additional baseline. At each iteration, 80% of the data are randomly sampled for training and validation. 3.2 Model Combination At decoding time, due to the discrete nature of text data, the optimization for the response that has highest probability (or mutual information) is intractable, so we use the following heuristics. Candidate responses are generated from the single best model using beam search. The candidates are then scored by all models, and the one with the highest average score is chosen. The model weights αt are set to be uniform. Since each model are trained on data with different weights, their un-normalized probability density estimates may have different scales. Hence, at decoding time, scores of each model are z-normalized with mean and standard deviation calculated from the training data. 3.3 Other Details For RAML, the reward function is based on tf-idf matching – that is, the sum of products of term frequency and inverse document frequency of each word, divided by lengths. The rationale is to encourage models to include key content words in their generations. Empirically, we observed that RAML with aforementioned reward can generate better responses than MLE baseline even without boosting. The temperature parameter τ is set to be 0.1. To approximate the expectation term in the objective of RAML, three additional responses with highest rewards are selected from training data for each message-response pair in the beginning. We do not sample new responses at the following iterations for the sake of fair comparison. 
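To make the reweighting concrete, below is a small sketch of the rule-based discriminator of Equation (8) and the resulting data weights of Equation (9), using the n-gram-overlap predicate (n >= 4) described in Section 3.1. The helper names are illustrative, and the densities p and q_t are assumed to be supplied as per-example log-probabilities rather than computed here.

```python
import numpy as np

def ngrams(tokens, n=4):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similar(y, z, n=4):
    """Predicate from Section 3.1: True if the two responses share any n-gram (n >= 4)."""
    return len(ngrams(y, n) & ngrams(z, n)) > 0

def discriminator(y, frequent_responses, c=0.0):
    """D_t(y) from Equation (8): c if y matches a frequently generated response, else 0.5."""
    return c if any(similar(y, y0) for y0 in frequent_responses) else 0.5

def data_weights(responses, log_p, log_q, beta, frequent_responses, c=0.0):
    """Unnormalized weights from Equation (9): (p/q_t)^beta * D_t / (1 - D_t).

    responses: list of tokenized responses.
    log_p, log_q: per-example log densities of the target and the current ensemble.
    With c = 0 (as in the paper), responses similar to frequent ones get weight 0,
    i.e., they are excluded from the next round of training.
    """
    w = np.exp(beta * (np.asarray(log_p) - np.asarray(log_q)))
    d = np.array([discriminator(y, frequent_responses, c) for y in responses])
    return w * d / (1.0 - d + 1e-12)
```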
We set βt in equation (6) to be 1 bt where b is between 10 and 20, and is tuned on validation set. 4 Experiments We evaluate our algorithm on single-turn conversations from Persona Dataset (Zhang et al., 2018). Participants are instructed to converse according to their given personalized background. In the preparation of training data, persona descriptions are prepended to the sources, and all trailing punctuations are truncated from the responses. We use a standard sequence-to-sequence architecture with attention mechanism. Both encoder and decoder are LSTMs with hidden size of 512 and input size of 300. Attentional contexts are weighted sums of hidden states of words in personas. We use Adam optimizer to train the model with learning rate of 0.001. All model parameters including word embeddings are randomly initialized between −0.1 and 0.1. In addition to the base models mentioned before, we investigate the combination of RAML and MMI, in which models are trained with RAML and decoded with MMI. 41 (a) BLEU (b) ROUGE-L (c) Cosine Similarity (d) Inertia (e) Number of unigrams (f) Number of trigrams Figure 1: Quantitative results. X-axis is for iteration and y-axis for metrics. The numbers at iteration 1 represent the base models. 4.1 Quantitative Evaluation We employ two standard word-overlap-based metrics, BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). We also performed embedding-based evaluation. We embed the responses using the word averaging approach by (Arora et al., 2016), and measure the cosine similarity of the embeddings of generated responses and true responses. To measure the diversity of the responses, we perform k-means clustering on their embeddings with 10 clusters, and measure the inertia. The larger inertia indicates more diversity. We also show statistics on number of distinct n-grams. As can be seen in Figure 1, the general trend of boosting is that performance drastically improves up to the third model, then it slowly gets better or stays the same. Boosting is far better than bootstrapping. Boosting can improve lexical-level semantic similarity between generate responses and true responses, measured by cosine similarity. While BLEU scores only fluctuate in a tight range, ROUGE-L suffered from boosting a little, when used on base models that can generate more diversified responses. But we do not consider BLEU and ROUGE the most important metrics. Diversity measures, including count of distinct n-grams and inertia of clusters, are significantly improved by boosting. Combining RAML and MMI seems to give an advantage in BLEU (mainly because generated responses are longer), inertia, and number of unigrams. 4.2 Qualitative Evaluation To ensure the diversified responses are as relevant as before boosting, we ask 5 annotators to evaluate a randomly sampled subset of 100 examples from each base model against its boosted counterpart. Each context are paired with two responses – one from the base model and one from the boosted model. The annotators are asked to choose the most appropriate response, or tie if they are equal. The results are shown in Table 1. On average, about 38 to 47 percent of the time the annotators showed no preferences, and boosted models beat base models for 36 to 45 percent of the trials. Note that all individual tests show annotators preferred the boosted model over the base model, except for one case, where the annotator chose MMI base model over the boosted model slightly more often. We also provide an example of generated responses in Table 2. 
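For reference, a minimal sketch of the diversity measurement used in Section 4.1: embed each response by averaging its word vectors, run k-means with 10 clusters, and report the inertia. The plain averaging and the dictionary of word vectors are simplifying assumptions; the paper uses the weighted word-averaging approach of Arora et al. (2016).

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_responses(responses, word_vectors, dim=300):
    """Embed each tokenized response as the average of its word vectors (zeros if all OOV)."""
    embs = []
    for tokens in responses:
        vecs = [word_vectors[w] for w in tokens if w in word_vectors]
        embs.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.stack(embs)

def diversity_inertia(responses, word_vectors, n_clusters=10, seed=0):
    """Cluster response embeddings with k-means; larger inertia indicates more diversity."""
    X = embed_responses(responses, word_vectors)
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    return km.inertia_
```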
5 Conclusion We investigated the use of boosting to improve the diversity and relevance of dialog response generation, with various training and decoding objectives including mutual-information-based decoding and reward-augmented maximum likelihood learning. Our combination of boosting and RAML for response generation is novel, and its combination 42 Context my family lives in alaska . it is freezing down there . Human i bet it is oh i could not Baseline what do you do for a living Boosted do you live near the beach ? i live in canada Table 2: Examples of generated responses from baseline sequence-to-sequence model and its boosted counterpart. with MMI gives some of the most diversified results. Quantitative evaluation shows our method can substantially improve the diversity without harming the quality of generated responses. Our human evaluation provides evidence that diversified responses by boosting are even more appropriate than those generated from baseline models. Acknowledgments This material is based upon work supported by the National Science Foundation (Award No. 1722822). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of National Science Foundation, and no official endorsement should be inferred. References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat baseline for sentence embeddings. Yoav Freund and Robert E Schapire. 1997. A decisiontheoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Aditya Grover and Stefano Ermon. 2018. Boosted generative models. In Thirty-Second AAAI Conference on Artificial Intelligence. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055. Jiwei Li, Will Monroe, and Dan Jurafsky. 2017a. Learning to decode for future success. arXiv preprint arXiv:1701.06549. Jiwei Li, Will Monroe, Tianlin Shi, S˙ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017b. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Pierre Lison and Serge Bibauw. 2017. Not all dialogues are created equal: Instance weighting for neural conversational models. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 384–394. Association for Computational Linguistics; Stroudsburg, PA. Yahui Liu, Wei Bi, Jun Gao, Xiaojiang Liu, Jian Yao, and Shuming Shi. 2018. Towards less generic responses in neural conversation models: A statistical re-weighting method. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2769–2774. Mohammad Norouzi, Samy Bengio, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans, et al. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances In Neural Information Processing Systems, pages 1723–1731. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating high-quality and informative conversation responses with sequence-to-sequence models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2210–2219. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. 43 Ilya O Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Sch¨olkopf. 2017. Adagan: Boosting generative models. In Advances in Neural Information Processing Systems, pages 5424–5433. Tong Xiao, Jingbo Zhu, Muhua Zhu, and Huizhen Wang. 2010. Boosting-based system combination for machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 739–748. Association for Computational Linguistics. Zhen Xu, Bingquan Liu, Baoxun Wang, SUN Chengjie, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural response generation via gan with an approximate embedding layer. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 617–626. Dakun Zhang, Jungi Kim, Josep Crego, and Jean Senellart. 2017. Boosting neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 271–276. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2204–2213. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527–536 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 527 MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations Soujanya Poria†, Devamanyu HazarikaΦ, Navonil Majumder‡, Gautam Naik¶, Erik Cambria¶, Rada Mihalceaι †Information Systems Technology and Design, SUTD, Singapore ΦSchool of Computing, National University of Singapore, Singapore ‡Centro de Investigaci´on en Computaci´on, Instituto Polit´ecnico Nacional, Mexico ¶Computer Science & Engineering, Nanyang Technological University, Singapore ιComputer Science & Engineering, University of Michigan, USA [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Emotion recognition in conversations (ERC) is a challenging task that has recently gained popularity due to its potential applications. Until now, however, there has been no largescale multimodal multi-party emotional conversational database containing more than two speakers per dialogue. To address this gap, we propose the Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from the TV-series Friends. Each utterance is annotated with emotion and sentiment labels, and encompasses audio, visual, and textual modalities. We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations. The full dataset is available for use at http:// affective-meld.github.io. 1 Introduction With the rapid growth of Artificial Intelligence (AI), multimodal emotion recognition has become a major research topic, primarily due to its potential applications in many challenging tasks, such as dialogue generation, user behavior understanding, multimodal interaction, and others. A conversational emotion recognition system can be used to generate appropriate responses by analyzing user emotions (Zhou et al., 2017; Rashkin et al., 2018). Although significant research work has been carried out on multimodal emotion recognition using audio, visual, and text modalities (Zadeh et al., 2016a; Wollmer et al., 2013), significantly less work has been devoted to emotion recognition in conversations (ERC). One main reason for this is the lack of a large multimodal conversational dataset. According to Poria et al. (2019), ERC presents several challenges such as conversational context modeling, emotion shift of the interlocutors, and others, which make the task more difficult to address. Recent work proposes solutions based on multimodal memory networks (Hazarika et al., 2018). However, they are mostly limited to dyadic conversations, and thus not scalable to ERC with multiple interlocutors. This calls for a multi-party conversational data resource that can encourage research in this direction. In a conversation, the participants’ utterances generally depend on their conversational context. This is also true for their associated emotions. In other words, the context acts as a set of parameters that may influence a person to speak an utterance while expressing a certain emotion. Modeling this context can be done in different ways, e.g., by using recurrent neural networks (RNNs) and memory networks (Hazarika et al., 2018; Poria et al., 2017; Serban et al., 2017). 
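As a toy illustration of the RNN-based context modeling mentioned above (and not of the baseline of Majumder et al. (2019) used later in the paper), a recurrent encoder can be run over per-utterance feature vectors so that each emotion prediction is conditioned on the surrounding dialogue. All layer sizes and names below are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class ContextualEmotionClassifier(nn.Module):
    """Toy context model: a bidirectional GRU over utterance features, one emotion label per utterance."""
    def __init__(self, feat_dim=100, hidden_dim=128, n_emotions=7):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_emotions)

    def forward(self, utt_feats):               # (batch, n_utterances, feat_dim)
        context, _ = self.gru(utt_feats)        # context-sensitive utterance states
        return self.out(context)                # (batch, n_utterances, n_emotions)

# One dialogue of 8 utterances, each represented by a 100-dim feature vector.
logits = ContextualEmotionClassifier()(torch.randn(1, 8, 100))
print(logits.shape)  # torch.Size([1, 8, 7])
```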
Figure 1 shows an example where the speakers change their emotions (emotion shifts) as the dialogue develops. The emotional dynamics here depend on both the previous utterances and their associated emotions. For example, the emotion shift in utterance eight (in the figure) is hard to determine unless cues are taken from the facial expressions and the conversational history of both speakers. Modeling such complex inter-speaker dependencies is one of the major challenges in conversational modeling. Conversation in its natural form is multimodal. In dialogues, we rely on others’ facial expressions, vocal tonality, language, and gestures to anticipate their stance. For emotion recognition, multimodal528 1) You liked it? You really liked it? 2) Oh, yeah! 3) Which part exactly? 4) The whole thing! Can we go? 5) What about the scene with the kangaroo? 6) I was surprised to see a kangaroo in a world war epic. 7) You fell asleep! 8) Don’t go, I’m sorry. Surprise (Positive) Neutral (Neutral) Neutral (Neutral) Anger (Negative) Dialogue Joey Chandler Joy (Positive) Neutral (Neutral) Surprise (Negative) Sadness (Negative) Emotion (Sentiment) : Figure 1: Emotion shift of speakers in a dialogue in comparison with their previous emotions. Figure 2: Importance of multimodal cues. Green shows primary modalities responsible for sentiment and emotion. ity is particularly important. For the utterances with language that is difficult to understand, we often resort to other modalities, such as prosodic and visual cues, to identify their emotions. Figure 2 presents examples from the dataset where the presence of multimodal signals in addition to the text itself is necessary in order to make correct predictions of their emotions and sentiments. Multimodal emotion recognition of sequential turns encounters several other challenges. One such example is the classification of short utterances. Utterances like “yeah”, “okay”, “no” can express varied emotions depending on the context and discourse of the dialogue. However, due to the difficulty of perceiving emotions from text alone, most models resort to assigning the majority class (e.g., non-neutral in EmotionLines). Approximately 42% of the utterances in MELD are shorter than five words. We thus provide access to the multimodal data sources for each dialogue and posit that this additional information would benefit the emotion recognition task by improving the context representation and supplementing the missing or misleading signals from other modalities. Surplus information from attributes such as the speaker’s facial expressions or intonation in speech could guide models for better classification. We also provide evidence for these claims through our experiments. The development of conversational AI thus depends on the use of both contextual and multimodal information. The publicly available datasets for multimodal emotion recognition in conversations – IEMOCAP and SEMAINE – have facilitated a significant number of research projects, but also have limitations due to their relatively small number of total utterances and the lack of multi-party conversations. There are also other multimodal emotion and sentiment analysis datasets, such as MOSEI (Zadeh et al., 2018), MOSI (Zadeh et al., 2016b), and MOUD (P´erez-Rosas et al., 2013), but they contain individual narratives instead of dialogues. On the other hand, EmotionLines (Chen et al., 2018) is a dataset that contains dialogues from the popular TV-series Friends with more than two speakers. 
However, EmotionLines can only be used for textual analysis as it does not provide data from other modalities. In this work, we extend, improve, and further develop the EmotionLines dataset for the multimodal scenario. We propose the Multimodal EmotionLines Dataset (MELD), which includes not only textual dialogues, but also their corresponding visual and audio counterparts. This paper makes several contributions: • MELD contains multi-party conversations that are more challenging to classify than dyadic variants available in previous datasets. • There are more than 13,000 utterances in MELD, 529 which makes our dataset nearly double the size of existing multimodal conversational datasets. • MELD provides multimodal sources and can be used in a multimodal affective dialogue system for enhanced grounded learning. • We establish a strong baseline, proposed by Majumder et al. (2019), which is capable of emotion recognition in multi-party dialogues by interparty dependency modeling. The remainder of the paper is organized as follows: Section 2 illustrates the EmotionLines dataset; we then present MELD in Section 3; strong baselines and experiments are elaborated in Section 4; future directions and applications of MELD are covered in Section 5 and 6, respectively; finally, Section 7 concludes the paper. 2 EmotionLines Dataset The MELD dataset has evolved from the EmotionLines dataset developed by Chen et al. (2018). EmotionLines contains dialogues from the popular sitcom Friends, where each dialogue contains utterances from multiple speakers. EmotionLines was created by crawling the dialogues from each episode and then grouping them based on the number of utterances in a dialogue into four groups of [5, 9], [10, 14], [15, 19], and [20, 24] utterances respectively. Finally, 250 dialogues were sampled randomly from each of these groups, resulting in the final dataset of 1,000 dialogues. 2.1 Annotation The utterances in each dialogue were annotated with the most appropriate emotion category. For this purpose, Ekman’s six universal emotions (Joy, Sadness, Fear, Anger, Surprise, and Disgust) were considered as annotation labels. This annotation list was extended with two additional emotion labels: Neutral and Non-Neutral. Each utterance was annotated by five workers from the Amazon Mechanical Turk (AMT) platform. A majority voting scheme was applied to select a final emotion label for each utterance. The overall Fleiss’ kappa score of this annotation process was 0.34. 3 Multimodal EmotionLines Dataset (MELD) We start the construction of the MELD corpus by extracting the starting and ending timestamps of Dataset # Dialogues # Utterances train dev test train dev test EmotionLines 720 80 200 10561 1178 2764 MELD 1039 114 280 9989 1109 2610 Table 1: Comparison between the original EmotionLines dataset and MELD. all utterances from every dialogue in the EmotionLines dataset. To accomplish this, we crawl through the subtitles of all the episodes and heuristically extract the respective timestamps. In particular, we enforce the following constraints: 1. Timestamps of the utterances in a dialogue must be in an increasing order. 2. All the utterances in a dialogue have to belong to the same episode and scene. These constraints revealed a few outliers in EmotionLines where some dialogues span across scenes or episodes. For example, the dialogue in Table 2 contains two natural dialogues from episode 4 and 20 of season 6 and 5, respectively. 
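For illustration, a small sketch of the consistency check implied by the two constraints above; the utterance fields (start time, episode, scene) and their names are assumptions about the crawled subtitle metadata, not the authors' code.

```python
def is_valid_dialogue(utterances):
    """Check a dialogue against the two constraints used for MELD construction.

    utterances: list of dicts with 'start' (seconds), 'episode', and 'scene' keys.
    Returns False for outliers such as dialogues spanning scenes or episodes.
    """
    times = [u["start"] for u in utterances]
    same_source = len({(u["episode"], u["scene"]) for u in utterances}) == 1
    # Non-decreasing start times (subtitle timestamps may repeat for grouped lines).
    ordered = all(t1 <= t2 for t1, t2 in zip(times, times[1:]))
    return same_source and ordered

dialogue = [
    {"start": 357.0, "episode": "S03E06", "scene": 4},
    {"start": 358.7, "episode": "S03E06", "scene": 4},
]
print(is_valid_dialogue(dialogue))  # True
```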
We decided to filter out these anomalies, thus resulting in a different number of total dialogues in MELD as compared to EmotionLines (see Table 1). Next, we employ three annotators to label each utterance, followed by a majority voting to decide the final label of the utterances. We drop a few utterances where all three annotations were different, and also remove their corresponding dialogues to maintain coherence. A total of 89 utterances spanning 11 dialogues fell under this category. Finally, after obtaining the timestamp of each utterance, we extract their corresponding audiovisual clips from the source episode followed by the extraction of audio content from these clips. We format the audio files as 16-bit PCM WAV files for further processing. The final dataset includes visual, audio, and textual modalities for each utterance.1 3.1 Dataset Re-annotation The utterances in the original EmotionLines dataset were annotated by looking only at the transcripts. However, due to our focus on multimodality, we re-annotate all the utterances by asking the three annotators to also look at the available video clip of the utterances. We then use majority-voting to obtain the final label for each utterance. 1We consulted a legal office to verify that the usage and distribution of very short length videos fall under the fair use category. 530 Episode Utterance Speaker Emotion Sentiment S6.E4 What are you talkin about? I never left you! Youve always been my agent! Joey surprise negative Really?! Estelle surprise positive Yeah! Joey joy positive Oh well, no harm, no foul. Estelle neutral neutral S5.E20 Okay, you guys free tonight? Gary neutral neutral Yeah!! Ross joy positive Tonight? You-you didn’t say it was going to be at nighttime. Chandler surprise negative Table 2: A dialogue in EmotionLines where utterances from two different episodes are present. The first four utterances in this dialogue have been taken from episode 4 of season 6. The last three utterances in red font are from episode 20 of season 5. The annotators were graduate students with high proficiency in English speaking and writing. Before starting the annotation, they were briefed about the annotation process with a few examples. We achieve an overall Fleiss’ kappa score of 0.43 which is higher than the original EmotionLines annotation whose kappa score was 0.34 (kappa of IEMOCAP annotation process was 0.4), thus suggesting the usefulness of the additional modalities during the annotation process. 2,772 utterances in the EmotionLines dataset were labeled as non-neutral where the annotators agreed that the emotion is not neutral but they could not reach agreement regarding the correct emotion label. This hampers classification, as the non-neutral utterance space and the other emotionlabel spaces get conflated. In our case, we remove the utterances where the annotators fail to reach an agreement on the definite emotion label. The number of disagreements in our annotation process is 89, which is much lower than the 2,772 disagreements in EmotionLines, reflecting again the annotation improvement obtained through a multimodal dataset. Table 3 shows examples of utterances where the annotators failed to reach consensus. Table 4 shows the label-wise comparison between EmotionLines and MELD dataset. For most of the utterances in MELD, the annotations match the original annotations in EmotionLines. Yet, there exists a significant amount of samples whose utterances have been changed in the re-annotation process. 
For example, the utterance This guy fell asleep! (see Table 5), was labeled as non-neutral Utterance Annotator 1 Annotator 2 Annotator 3 You know? Forget it! sadness disgust anger Oh no-no, give me anger sadness neutral some specifics. I was surprised to see a surprise anger joy kangaroo in a World War epic. Or, call an ambulance. anger surprise neutral Table 3: Some examples of the utterances for which annotators could not reach consensus. EmotionLines MELD Categories Train Dev Test Train Dev Test Emotion anger 524 85 163 1109 153 345 disgust 244 26 68 271 22 68 fear 190 29 36 268 40 50 joy 1283 123 304 1743 163 402 neutral 4752 491 1287 4710 470 1256 sadness 351 62 85 683 111 208 surprise 1221 151 286 1205 150 281 Sentiment negative 2945 406 833 neutral 4710 470 1256 positive 2334 233 521 Table 4: Emotion and Sentiment distribution in MELD vs. EmotionLines. in EmotionLines but after viewing the associated video clip, it is correctly re-labeled as anger in MELD. The video of this utterance reveals an angry and frustrated facial expression along with a high vocal pitch, thus helping to recognize its correct emotion. The annotators of EmotionLines had access to the context, but this was not sufficient, as the availability of additional modalities can sometime bring more information for the classification of such instances. These scenarios justify both context and multimodality to be important aspects for emotion recognition in conversation. Timestamp alignment. There are many utterances in the subtitles that are grouped within identical timestamps in the subtitle files. In order to find the accurate timestamp for each utterance, we use a transcription alignment tool Gentle,2 which automatically aligns a transcript with the audio by extracting word-level timestamps from the audio (see Table 6). In Table 7, we show the final format of the MELD dataset. Dyadic MELD. We also provide another version of MELD where all the non-extendable contiguous dyadic sub-dialogues of MELD are extracted. For example, let a three-party dialogue in MELD with speaker ids 1,2,3 have their turns in the following 2http://github.com/lowerquality/gentle 531 order: [1,2,1,2,3,2,1,2]. From this dialogue sequence, dyadic MELD will have the following sub-dialogues as samples: [1,2,1,2],[2,3,2] and [2,1,2]. However, the reported results in this paper are obtained using only the multiparty variant of MELD. Utterance Speaker MELD EmotionLines I’m so sorry! Chandler sadness sadness Look! Chandler surprise surprise This guy fell asleep! Chandler anger non-neutral Table 5: Difference in annotation between EmotionLines and MELD. 3.2 Dataset Exploration As mentioned before, we use seven emotions for the annotation, i.e., anger, disgust, fear, joy, neutral, sadness, and surprise, across the training, development, and testing splits (see Table 4). It can be seen that the emotion distribution in the dataset is expectedly non-uniform with the majority emotion being neutral. We have also converted these fine-grained emotion labels into more coarse-grained sentiment classes by considering anger, disgust, fear, sadness as negative, joy as positive, and neutral as neutral sentiment-bearing class. Surprise is an example of a complex emotion which can be expressed with both positive and negative sentiment. The three annotators who performed the utterance annotation further annotated the surprise utterances into either positive or negative sentiment classes. The entire sentiment annotation task reaches a Fleiss’ kappa score of 0.91. 
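The extraction of non-extendable contiguous dyadic sub-dialogues described above can be performed with a single left-to-right pass over a dialogue's speaker sequence. The sketch below reproduces the worked example; it is an illustration, not the released preprocessing script.

```python
def dyadic_subdialogues(speakers):
    """Return all non-extendable contiguous sub-dialogues involving exactly two speakers."""
    subs, start, current = [], 0, set()
    for i, spk in enumerate(speakers):
        if spk in current or len(current) < 2:
            current.add(spk)
            continue
        # A third speaker appears: close the current dyadic window ...
        subs.append(speakers[start:i])
        # ... and restart from the longest suffix spoken by a single speaker.
        j = i - 1
        while j > start and speakers[j - 1] == speakers[i - 1]:
            j -= 1
        start, current = j, {speakers[i - 1], spk}
    if len(current) == 2:
        subs.append(speakers[start:])
    return subs

print(dyadic_subdialogues([1, 2, 1, 2, 3, 2, 1, 2]))
# [[1, 2, 1, 2], [2, 3, 2], [2, 1, 2]]
```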
The distribution of positive, negative, neutral sentiment classes is given in Table 4. Table 8 presents several key statistics of the dataset. The average utterance length – i.e. number of words in an utterance – is nearly the same across training, development, and testing splits. On average, three emotions are present in each dialogue of the dataset. The average duration of an utterance is 3.59 seconds. The emotion shift of a speaker in a dialogue makes emotion recognition task very challenging. We observe that the number of such emotion shifts in successive utterances of a speaker in a dialogue is very frequent: 4003, 427, and 1003 in train/dev/test splits, respectively. Figure 1 shows an example where speaker’s emotion changes with time in the dialogue. Character Distribution. In Figure 3, we present the distributional details of the primary characters in MELD. Figure a and b illustrate the distribution across the emotion and sentiment labels, respectively. Figure c shows the overall coverage of the speakers across the dataset. Multiple infrequent speakers (< 1% utterances) are grouped as Others. 3.3 Related Datasets Most of the available datasets in multimodal sentiment analysis and emotion recognition are nonconversational. MOSI (Zadeh et al., 2016b), MOSEI (Zadeh et al., 2018), and MOUD (P´erez-Rosas et al., 2013) are such examples that have drawn significant interest from the research community. On the other hand, IEMOCAP and SEMAINE are two popular dyadic conversational datasets where each utterance in a dialogue is labeled by emotion. The SEMAINE Database is an audiovisual database created for building agents that can engage a person in a sustained and emotional conversation (McKeown et al., 2012). It consists of interactions involving a human and an operator (either a machine or a person simulating a machine). The dataset contains 150 participants, 959 conversations, each lasting around 5 minutes. A subset of this dataset was used in AVEC 2012’s fully continuous sub-challenge (Schuller et al., 2012) that requires predictions of four continuous affective dimensions: arousal, expectancy, power, and valence. The gold annotations are available for every 0.2 second in each video for a total of 95 videos comprising 5,816 utterances. The Interactive Emotional Dyadic Motion Capture Database (IEMOCAP) consists of videos of dyadic conversations among pairs of 10 speakers spanning 10 hours of various dialogue scenarios (Busso et al., 2008). Videos are segmented into utterances with annotations of fine-grained emotion categories: anger, happiness, sadness, neutral, excitement, and frustration. IEMOCAP also provides continuous attributes: activation, valence, and dominance. These two types of discrete and continuous emotional descriptors facilitate the complementary insights about the emotional expressions of humans and emotional communications between people. The labels in IEMOCAP were annotated by at least three annotators per utterance and self-assessment manikins (SAMs) were also employed to evaluate the corpus (Bradley and Lang, 1994). 3.4 Comparison with MELD Both resources mentioned above are extensively used in this field of research and contain settings 532 Incorrect Splits Corrected Splits Utterance Season Episode Start Time End Time Start Time End Time Chris says they’re closing 3 6 00:05:57,023 00:05:59,691 00:05:57,023 00:05:58,734 down the bar. No way! 3 6 00:05:57,023 00:05:59,691 00:05:58,734 00:05:59,691 Table 6: Example of timestamp alignment using the Gentle alignment tool. 
Utterance Speaker Emotion D ID U ID Season Episode StartTime EndTime But then who? The waitress I went out Joey surprise 1 0 9 23 00:36:40,364 00:36:42,824 with last month? You know? Forget it! Rachel sadness 1 1 9 23 00:36:44,368 00:36:46,578 Table 7: MELD dataset format for a dialogue. Notations: D ID = dialogue ID, U ID = utterance ID. StartTime and EndTime are in hh:mm:ss,ms format. that are aligned to the components of MELD. However, MELD is different in terms of both complexity and quantity. Both IEMOCAP and SEMAINE contain dyadic conversations, wherein the dialogues in MELD are multi-party. Multi-party conversations are more challenging compared to dyadic. They provide a flexible setting where multiple speakers can engage. From a research perspective, such availability also demands proposed dialogue models to be scalable towards multiple speakers. MELD also includes more than 13000 emotion labeled utterances, which is nearly double the annotated utterances in IEMOCAP and SEMAINE. Table 9 provides information on the number of available dialogues and their constituent utterances for all three datasets, i.e., IEMOCAP, SEMAINE, and MELD. Table 10 shows the distribution for common emotions as well as highlights a few key statistics of IEMOCAP and MELD. 4 Experiments 4.1 Feature Extraction We follow Poria et al. (2017) to extract features for each utterance in MELD. For textual features, we initialize each token with pre-trained 300-dimensional GloVe vectors (Pennington et al., 2014) and feed them to a 1D-CNN to extract 100 MELD Statistics Train Dev Test # of modalities {a,v,t} {a,v,t} {a,v,t} # of unique words 10,643 2,384 4,361 Avg./Max utterance length 8.0/69 7.9/37 8.2/45 # of dialogues 1039 114 280 # of dialogues dyadic MELD 2560 270 577 # of utterances 9989 1109 2610 # of speakers 260 47 100 Avg. # of utterances per dialogue 9.6 9.7 9.3 Avg. # of emotions per dialogue 3.3 3.3 3.2 Avg./Max # of speakers per dialogue 2.7/9 3.0/8 2.6/8 # of emotion shift 4003 427 1003 Avg. duration of an utterance 3.59s 3.59s 3.58s Table 8: Dataset Statistics. {a,v,t} = {audio, visual, text} dimensional textual features. For audio, we use the popular toolkit openSMILE (Eyben et al., 2010), which extracts 6373 dimensional features constituting several low-level descriptors and various statistical functionals of varied vocal and prosodic features. As the audio representation is high dimensional, we employ L2-based feature selection with sparse estimators, such as SVMs, to get a dense representation of the overall audio segment. For the baselines, we do not use visual features, as videobased speaker identification and localization is an open problem. Bimodal features are obtained by concatenating audio and textual features. 4.2 Baseline Models To provide strong benchmarks for MELD, we perform experiments with multiple baselines. Hyperparameter details for each baseline can be found at http://github.com/senticnet/meld. text-CNN applies CNN to the input utterances without considering the context of the conversation (Kim, 2014). This model represents the simplest baseline which does not leverage context or multimodality in its approach. bcLSTM is a strong baseline proposed by Poria et al. (2017), which represents context using a bi-directional RNN. It follows a two-step hierarchical process that models uni-modal context first and then bi-modal context features. 
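The convolutional utterance encoder that underlies both the Section 4.1 textual features and the text-CNN baseline can be sketched as follows; the filter sizes and counts are illustrative assumptions rather than the exact hyper-parameters used for the baselines.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceCNN(nn.Module):
    """Kim (2014)-style 1D-CNN over pre-trained word vectors, producing a fixed-size
    utterance feature (100-dimensional, as in Section 4.1)."""
    def __init__(self, d_emb=300, n_filters=50, kernel_sizes=(3, 4, 5), d_out=100):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_emb, n_filters, k) for k in kernel_sizes])
        self.proj = nn.Linear(n_filters * len(kernel_sizes), d_out)

    def forward(self, emb):
        # emb: (batch, seq_len, d_emb) GloVe vectors, one utterance per row
        x = emb.transpose(1, 2)                                   # (batch, d_emb, seq_len)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.proj(torch.cat(pooled, dim=1))                # (batch, d_out)

features = UtteranceCNN()(torch.randn(8, 20, 300))                # 8 utterances, 20 tokens each
print(features.shape)                                             # torch.Size([8, 100])
```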
For unimodal text, a CNN-LSTM model extracts contextual representations for each utterance taking the GloVe emDataset Type # dialogues # utterances train dev test train dev test IEMOCAP acted 120 31 5810 1623 SEMAINE acted 58 22 4386 1430 MELD acted 1039 114 280 9989 1109 2610 Table 9: Comparison among IEMOCAP, SEMAINE, and proposed MELD datasets 533 0 750 1500 2250 3000 Chandler Ross Phoebe Monica Joey Rachel Others Neutral Surprise Joy Sadness Fear Anger Disgust 0 750 1500 2250 3000 Chandler Ross Phoebe Monica Joey Rachel Others Positive Negative Neutral a) Character-Emotion b) Character-Sentiment Others 16% Rachel 16% Joey 16% Monica 14% Phoebe 13% Ross 14% Chandler 11% c) Character-distribution Figure 3: Character distribution across MELD. Dataset Emotions Other Statistics Happy/Joy Anger Disgust Sadness Surprise Neutral Avg. utterence length #Unique words Avg. conversation length IEMOCAP 648 1103 2 1084 107 1708 15.8 3,598 49.2 MELD 2308 1607 361 1002 1636 6436 8.0 10,643 9.6 Table 10: Comparison among IEMOCAP and proposed MELD datasets. beddings as input. For unimodal audio, an LSTM model gets audio representations for each audio utterance feature vector. Finally, the contextual representations from the unimodal variants are supplied to the bimodal model for classification. bcLSTM does not distinguish among different speakers and models a conversation as a single sequence. DialogueRNN represents the current state of the art for conversational emotion detection (Majumder et al., 2019). It is a strong baseline with effective mechanisms to model context by tracking individual speaker states throughout the conversation for emotion classification. DialogueRNN is capable of handling multi-party conversation so it can be directly applied on MELD. It employs three stages of gated recurrent units (GRU) (Chung et al., 2014) to model emotional context in conversations. The spoken utterances are fed into two GRUs: global and party GRU to update the context and speaker state, respectively. In each turn, the party GRU updates its state based on 1) the utterance spoken, 2) the speaker’s previous state, and 3) the conversational context summarized by the global GRU through an attention mechanism. Finally, the updated speaker state is fed into the emotion GRU which models the emotional information for classification. Attention mechanism is used on top of the emotion GRU to leverage contextual utterances by different speakers at various distances. To analyze the role of multimodal signals, we analyze DialogueRNN and bcLSTM on MELD for both uni and multimodal settings. Training involved usage of class weights to alleviate imbalance issues. 4.3 Results We provide results for the two tasks of sentiment and emotion classification on MELD. Table 13 shows the performance of sentiment classification by using DialogueRNN, whose multimodal variant achieves the best performance (67.56% F-score) surpassing multimodal bcLSTM (66.68% F-score). Multimodal DialogueRNN also outperforms its unimodal counterparts. However, the improvement due to fusion is about 1.4% higher than the textual modality which suggests the possibility of further improvement through better fusion mechanisms. The textual modality outperforms the audio modality by about 17%, which indicates the importance of spoken language in sentiment analysis. For positive sentiment, audio modality performs poorly. It would be interesting to analyze the clues specific to positive sentiment bearing utterances in MELD that the audio modality could not capture. 
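For concreteness, the DialogueRNN recurrence described above is sketched below: a global GRU tracks conversational context, a party GRU keeps one state per speaker, and an emotion GRU produces the representation that is classified. Hidden sizes, the attention form, and the plain linear classifier (the original model adds attention over emotion states) are simplifications, so this should be read as an illustration of the idea rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDialogueRNN(nn.Module):
    def __init__(self, d=100, n_classes=7):
        super().__init__()
        self.global_gru = nn.GRUCell(2 * d, d)   # input: [utterance; speaker state]
        self.party_gru = nn.GRUCell(2 * d, d)    # input: [utterance; attended context]
        self.emotion_gru = nn.GRUCell(d, d)      # input: updated speaker state
        self.classifier = nn.Linear(d, n_classes)

    def forward(self, utts, speakers, n_speakers):
        # utts: (T, d) utterance features; speakers: speaker id of each turn.
        d = self.emotion_gru.hidden_size
        g, e = torch.zeros(d), torch.zeros(d)
        party = torch.zeros(n_speakers, d)
        g_hist, logits = [], []
        for t, u in enumerate(utts):
            s = speakers[t]
            # Context: attention of the current utterance over past global states.
            if g_hist:
                hist = torch.stack(g_hist)                  # (t, d)
                c = F.softmax(hist @ u, dim=0) @ hist       # (d,)
            else:
                c = torch.zeros(d)
            g = self.global_gru(torch.cat([u, party[s]])[None], g[None])[0]
            g_hist.append(g)
            party = party.clone()
            party[s] = self.party_gru(torch.cat([u, c])[None], party[s][None])[0]
            e = self.emotion_gru(party[s][None], e[None])[0]
            logits.append(self.classifier(e))
        return torch.stack(logits)                          # (T, n_classes)

model = SimpleDialogueRNN()
out = model(torch.randn(5, 100), speakers=[0, 1, 0, 2, 1], n_speakers=3)
print(out.shape)  # torch.Size([5, 7])
# Training then uses a class-weighted cross-entropy (weights treated as
# hyper-parameters) to counter the label imbalance discussed below.
```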
Future work should aim for enhanced audio feature extraction schemes to improve the classification performance. Table 11 presents the results of the baseline models on MELD emotion classification. The performance on the emotion classes disgust, fear, and sadness are particularly poor. The primary reason for this is the inherent imbalance in the dataset which has fewer training instances for these mentioned emotion classes (see Table 4). We partially tackle this by using class-weights as hyper-parameters. Yet, the imbalance calls for further improvement for future work to address. We also observe high 534 Models Emotions anger disgust fear joy neutral sadness surprise w-avg. text-CNN 34.49 8.22 3.74 49.39 74.88 21.05 45.45 55.02 cMKL text+audio 39.50 16.10 3.75 51.39 72.73 23.95 46.25 55.51 bcLSTM text 42.06 21.69 7.75 54.31 71.63 26.92 48.15 56.44 audio 25.85 6.06 2.90 15.74 61.86 14.71 19.34 39.08 text+audio 43.39 23.66 9.38 54.48 76.67 24.34 51.04 59.25 DialogueRNN text 40.59 2.04 8.93 50.27 75.75 24.19 49.38 57.03 audio 35.18 5.13 5.56 13.17 65.57 14.01 20.47 41.79 text+audio 43.65 7.89 11.68 54.40 77.44 34.59 52.51 60.25 Table 11: Test-set weighted F-score results of DialogueRNN for emotion classification in MELD. Note: w-avg denotes weighted-average. text-CNN and cMKL: contextual information were not used. mis-classification rate between the anger, disgust, and fear emotion categories as these emotions have subtle differences among them causing harder disambiguation. Similar to sentiment classification trends, the textual classifier outperforms (57.03% F-score) the audio classifier (41.79% F-score). Multimodal fusion helps in improving the emotion recognition performance by 3%. However, multimodal classifier performs worse than the textual classifier in classifying sadness. To analyze further, we also run experiments on 5-class emotions by dropping the infrequent fear and disgust emotions (see Table 12). Not surprisingly, the results improve over the 7-class setting with significantly better performance by the multimodal variant. Overall, emotion classification performs poorer than sentiment classification. This observation is expected as emotion classification deals with classification with more fine-grained classes. 4.4 Additional Analysis Role of Context. One of the main purposes of MELD is to train contextual modeling in a conversation for emotion recognition. Table 11 and 13 show that the improvement over the non-contextual model such as text-CNN – which only uses a CNN (see Section 4.1) – is 1.4% to 2.5%. Inter-speaker influence. One of the important considerations while modeling conversational emoMode Emotions ang joy neu sad surp w-avg. bcLSTM T+A 45.9 52.2 77.9 11.2 49.9 60.6 dRNN∗ T 41.7 53.7 77.8 21.2 47.7 60.8 A 34.1 18.8 66.2 16.0 16.6 44.3 T+A 48.2 53.2 77.7 20.3 48.5 61.6 ∗dRNN: DialogueRNN, T: text, A: audio Table 12: Test-set weighted F-score results of DialogueRNN for 5-class emotion classification in MELD. Note: w-avg denotes weighted-average. surp: surprise emotion. tion dynamics is the influence of fellow speakers in the multi-party setting. We analyze this factor by looking at the activation of the attention module on the global GRU in DialogueRNN. We observe that in 63% (882/1381) of the correct test predictions, the highest historical attention is given to utterances from different speakers. This significant proportion suggests inter-speaker influence to be an important parameter. Unlike DialogueRNN, Mode Sentiments pos. neg. neu. w-avg. 
text-CNN 53.23 55.42 74.69 64.25 bcLSTM T+A 74.68 57.87 60.04 66.68 dRNN∗ T 54.35 60.10 74.94 66.10 A 25.47 45.53 62.33 49.61 T+A 54.29 58.18 78.40 67.56 Table 13: Test set weighted F-score results of DialogueRNN for sentiment classification in MELD. bcLSTM does not utilize speaker information while detecting emotion. Table 11 shows that in all the experiments, DialogueRNN outperforms bcLSTM by 1-2% margin. This result supports the claim by Majumder et al. (2019) that speaker-specific modeling of emotion recognition is beneficial as it helps in improving context representation and incorporates important clues such as inter-speaker relations. Emotion shifts. The ability to anticipate the emotion shifts within speakers throughout the course of a dialogue has synergy with better emotion classification. In our results, DialogueRNN achieves a recall of 66% for detecting emotion shifts. However, in the ideal scenario, we would want to detect shift along with the correct emotion class. For this setting, DialogueRNN gets a recall of 36.7%. The deterioration observed is expected as solving both tasks together has a higher complexity. Future methods would need to improve upon their capabilities of detecting shifts to improve the emotion 535 classification. Contextual distance. Figure 4 presents the distribution of distances between the target utterance and its second highest attended utterance within the conversation by DialogueRNN in its emotion GRU. For the highest attention, the model largely focuses on utterances nearby to the target utterance. However, the dependency on distant utterances increases with the second highest attention. Moreover, it is interesting to see that the dependency exists both towards the historical and the future utterances, thus incentivizing utilization of bi-directional models. 5 Future Directions Future research using this dataset should focus on improving contextual modeling. Helping models reason about their decisions, exploring emotional influences, and identifying emotion shifts are promising aspects. Another direction is to use visual information available in the raw videos. Identifying face of the speaker in a video where multiple other persons are present is very challenging. This is the case for MELD too as it is a multi-party 15 10 5 0 5 10 15 0 200 400 600 800 1000 1200 Frequency of correct predictions Δt Distance between test utterance and Highest attention 2nd highest attention Figure 4: Histogram of ∆t = distance between the target and its context utterance based on emotion GRU attention scores. dataset. Enhancements can be made by extracting relevant visual features through processes utilizing audio-visual speaker diarization. Such procedures would enable utilizing a visual modality in the baselines. In our results, audio features do not help significantly. Thus, we believe that it is necessary to improve the feature extraction for these auxiliary modalities in order to improve the performance further. So far, we have only used concatenation as a feature fusion approach, and showed that it outperforms the unimodal baselines by about 1-3%. We believe there is room for further improvement using other more advanced fusion methods such as MARN (Zadeh et al., 2018). 6 Applications of MELD MELD has multiple use-cases. It can be used to train emotion classifiers to be further used as emotional receptors in generative dialogue systems. These systems can be used to generate empathetic responses (Zhou et al., 2017). 
It can also be used for emotion and personality modeling of users in conversations (Li et al., 2016). By being multimodal, MELD can also be used to train multimodal dialogue systems. Although by itself it is not large enough to train an end-to-end dialogue system (Table 1), the procedures used to create MELD can be adopted to generate a largescale corpus from any multimodal source such as popular sitcoms. We define multimodal dialogue system as a platform where the system has access to the speaker’s voice and facial expressions which it exploits to generate responses. Multimodal dialogue systems can be very useful for real time personal assistants such as Siri, Google Assistant where the users can use both voice and text and facial expressions to communicate. 7 Conclusion In this work, we introduced MELD, a multimodal multi-party conversational emotion recognition dataset. We described the process of building this dataset, and provided results obtained with strong baseline methods applied on this dataset. MELD contains raw videos, audio segments, and transcripts for multimodal processing. Additionally, we also provide the features used in our baseline experiments. We believe this dataset will also be useful as a training corpus for both conversational emotion recognition and multimodal empathetic response generation. Building upon this dataset, future research can explore the design of efficient multimodal fusion algorithms, novel ERC frameworks, as well as the extraction of new features from the audio, visual, and textual modalities. Acknowledgments This material is based in part upon work supported by the National Science Foundation (grant #1815291), by the John Templeton Foundation (grant #61156), and by DARPA (grant #HR001117S0026-AIDA-FP-045). 536 References Margaret M Bradley and Peter J Lang. 1994. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of behavior therapy and experimental psychiatry, 25(1):49–59. Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation, 42(4):335–359. Sheng-Yeh Chen, Chao-Chun Hsu, Chuan-Chun Kuo, Lun-Wei Ku, et al. 2018. Emotionlines: An emotion corpus of multi-party conversations. arXiv preprint arXiv:1802.08379. Junyoung Chung, C¸ aglar G¨ulc¸ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. CoRR, abs/1412.3555. Florian Eyben, Martin W¨ollmer, and Bj¨orn Schuller. 2010. Opensmile: the munich versatile and fast open-source audio feature extractor. In Proceedings of the 18th ACM international conference on Multimedia, pages 1459–1462. ACM. Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018. Conversational memory network for emotion recognition in dyadic dialogue videos. In NAACL, volume 1, pages 2122–2132. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In ACL, volume 1, pages 994–1003. Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. DialogueRNN: An attentive RNN for emotion detection in conversations. 
Thirty-Third AAAI Conference on Artificial Intelligence. Gary McKeown, Michel Valstar, Roddy Cowie, Maja Pantic, and Marc Schroder. 2012. The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. Affective Computing, IEEE Transactions on, 3(1):5–17. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Ver´onica P´erez-Rosas, Rada Mihalcea, and LouisPhilippe Morency. 2013. Utterance-level multimodal sentiment analysis. In ACL (1), pages 973– 982. Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Mazumder, Amir Zadeh, and LouisPhilippe Morency. 2017. Context-dependent sentiment analysis in user-generated videos. In ACL, pages 873–883. Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard Hovy. 2019. Emotion recognition in conversation: Research challenges, datasets, and recent advances. arXiv preprint arXiv:1905.02947. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. I know the feeling: Learning to converse with empathy. arXiv preprint arXiv:1811.00207. Bj¨orn Schuller, Michel Valster, Florian Eyben, Roddy Cowie, and Maja Pantic. 2012. Avec 2012: the continuous audio/visual emotion challenge. In Proceedings of the 14th ACM international conference on Multimodal interaction, pages 449–456. ACM. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Martin Wollmer, Felix Weninger, Timo Knaup, Bjorn Schuller, Congkai Sun, Kenji Sagae, and LouisPhilippe Morency. 2013. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems, 28(3):46–53. Amir Zadeh, Tadas Baltruˇsaitis, and Louis-Philippe Morency. 2016a. Deep constrained local models for facial landmark detection. arXiv preprint arXiv:1611.08657. Amir Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph. In ACL, volume 1, pages 2236–2246. Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016b. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82–88. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2017. Emotional chatting machine: Emotional conversation generation with internal and external memory. arXiv preprint arXiv:1704.01074.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070–5081 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5070 Hierarchical Transformers for Multi-Document Summarization Yang Liu and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh [email protected], [email protected] Abstract In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill abstractive summaries. Our model augments a previously proposed Transformer architecture (Liu et al., 2018) with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows to share information as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations focusing on similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines.1 1 Introduction Automatic summarization has enjoyed renewed interest in recent years, thanks to the popularity of neural network models and their ability to learn continuous representations without recourse to preprocessing tools or linguistic annotations. The availability of large-scale datasets (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) containing hundreds of thousands of documentsummary pairs has driven the development of neural architectures for summarizing single documents. Several approaches have shown promising results with sequence-to-sequence models that encode a source document and then decode it into an abstractive summary (See et al., 2017; Celikyilmaz et al., 2018; Paulus et al., 2018; Gehrmann et al., 2018). Multi-document summarization — the task of producing summaries from clusters of themati1Our code and data is available at https://github. com/nlpyang/hiersumm. cally related documents — has received significantly less attention, partly due to the paucity of suitable data for the application of learning methods. High-quality multi-document summarization datasets (i.e., document clusters paired with multiple reference summaries written by humans) have been produced for the Document Understanding and Text Analysis Conferences (DUC and TAC), but are relatively small (in the range of a few hundred examples) for training neural models. In an attempt to drive research further, Liu et al. (2018) tap into the potential of Wikipedia and propose a methodology for creating a large-scale dataset (WikiSum) for multidocument summarization with hundreds of thousands of instances. Wikipedia articles, specifically lead sections, are viewed as summaries of various topics indicated by their title, e.g.,“Florence” or “Natural Language Processing”. Documents cited in the Wikipedia articles or web pages returned by Google (using the section titles as queries) are seen as the source cluster which the lead section purports to summarize. Aside from the difficulties in obtaining training data, a major obstacle to the application of end-to-end models to multi-document summarization is the sheer size and number of source documents which can be very large. 
As a result, it is practically infeasible (given memory limitations of current hardware) to train a model which encodes all of them into vectors and subsequently generates a summary from them. Liu et al. (2018) propose a two-stage architecture, where an extractive model first selects a subset of salient passages, and subsequently an abstractive model generates the summary while conditioning on the extracted subset. The selected passages are concatenated into a flat sequence and the Transformer (Vaswani et al., 2017), an architecture well-suited to language modeling over long sequences, is used to 5071 decode the summary. Although the model of Liu et al. (2018) takes an important first step towards abstractive multidocument summarization, it still considers the multiple input documents as a concatenated flat sequence, being agnostic of the hierarchical structures and the relations that might exist among documents. For example, different web pages might repeat the same content, include additional content, present contradictory information, or discuss the same fact in a different light (Radev, 2000). The realization that cross-document links are important in isolating salient information, eliminating redundancy, and creating overall coherent summaries, has led to the widespread adoption of graph-based models for multi-document summarization (Erkan and Radev, 2004; Christensen et al., 2013; Wan, 2008; Parveen and Strube, 2014). Graphs conveniently capture the relationships between textual units within a document collection and can be easily constructed under the assumption that text spans represent graph nodes and edges are semantic links between them. In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill abstractive summaries. Our model augments the previously proposed Transformer architecture with the ability to encode multiple documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows to share information across multiple documents as opposed to simply concatenating text spans and feeding them as a flat sequence to the model. In this way, the model automatically learns richer structural dependencies among textual units, thus incorporating well-established insights from earlier work. Advantageously, the proposed architecture can easily benefit from information external to the model, i.e., by replacing inter-document attention with a graph-matrix computed based on the basis of lexical similarity (Erkan and Radev, 2004) or discourse relations (Christensen et al., 2013). We evaluate our model on the WikiSum dataset and show experimentally that the proposed architecture brings substantial improvements over several strong baselines. We also find that the addition of a simple ranking module which scores documents based on their usefulness for the target summary can greatly boost the performance of a multi-document summarization system. 2 Related Work Most previous multi-document summarization methods are extractive operating over graph-based representations of sentences or passages. Approaches vary depending on how edge weights are computed e.g., based on cosine similarity with tf-idf weights for words (Erkan and Radev, 2004) or on discourse relations (Christensen et al., 2013), and the specific algorithm adopted for ranking text units for inclusion in the final summary. 
Several variants of the PageRank algorithm have been adopted in the literature (Erkan and Radev, 2004) in order to compute the importance or salience of a passage recursively based on the entire graph. More recently, Yasunaga et al. (2017) propose a neural version of this framework, where salience is estimated using features extracted from sentence embeddings and graph convolutional networks (Kipf and Welling, 2017) applied over the relation graph representing cross-document links. Abstractive approaches have met with limited success. A few systems generate summaries based on sentence fusion, a technique which identifies fragments conveying common information across documents and combines these into sentences (Barzilay and McKeown, 2005; Filippova and Strube, 2008; Bing et al., 2015). Although neural abstractive models have achieved promising results on single-document summarization (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018), the extension of sequence-to-sequence architectures to multi-document summarization is less straightforward. Apart from the lack of sufficient training data, neural models also face the computational challenge of processing multiple source documents. Previous solutions include model transfer (Zhang et al., 2018; Lebanoff and Liu, 2018), where a sequence-to-sequence model is pretrained on single-document summarization data and finetuned on DUC (multi-document) benchmarks, or unsupervised models relying on reconstruction objectives (Ma et al., 2016; Chu and Liu, 2018). Liu et al. (2018) propose a methodology for constructing large-scale summarization datasets and a two-stage model which first extracts salient information from source documents and then uses a decoder-only architecture (that can attend to very long sequences) to generate the summary. We follow their setup in viewing multi-document summarization as a supervised machine learning prob5072 ranked paragraphs source paragraphs paragraph ranker encoder para 1 para L para L decoder abstractive summarizer target summary Figure 1: Pipeline of our multi-document summarization system. L source paragraphs are first ranked and the L′-best ones serve as input to an encoder-decoder model which generates the target summary. lem and for this purpose assume access to large, labeled datasets (i.e., source documents-summary pairs). In contrast to their approach, we use a learning-based ranker and our abstractive model can hierarchically encode the input documents, with the ability to learn latent relations across documents and additionally incorporate information encoded in well-known graph representations. 3 Model Description We follow Liu et al. (2018) in treating the generation of lead Wikipedia sections as a multidocument summarization task. The input to a hypothetical system is the title of a Wikipedia article and a collection of source documents, while the output is the Wikipedia article’s first section. Source documents are webpages cited in the References section of the Wikipedia article and the top 10 search results returned by Google (with the title of the article as the query). Since source documents could be relatively long, they are split into multiple paragraphs by line-breaks. More formally, given title T, and L input paragraphs {P1, · · · , PL} (retrieved from Wikipedia citations and a search engine), the task is to generate the lead section D of the Wikipedia article. Our summarization system is illustrated in Figure 1. 
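A skeletal view of the pipeline in Figure 1, with the ranker and the abstractive summarizer left as opaque callables (both are hypothetical stand-ins introduced only for illustration); the cut-off of 24 paragraphs corresponds to the roughly 1,600 input tokens used in the experiments.

```python
def summarize(title, paragraphs, ranker, summarizer, k=24):
    """Two-stage pipeline of Figure 1: score every source paragraph against the
    title, keep the k best, and let the abstractive model generate the summary."""
    ranked = sorted(paragraphs, key=lambda p: ranker(title, p), reverse=True)
    return summarizer(title, ranked[:k])
```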
Since the input paragraphs are numerous and possibly lengthy, instead of directly applying an abstractive system, we first rank them and summarize the L′-best ones. Our summarizer follows the very successful encoder-decoder architecture (Bahdanau et al., 2015), where the encoder encodes the input text into hidden representations and the decoder generates target summaries based on these representations. In this paper, we focus exclusively on the encoder part of the model, our decoder follows the Transformer architecture introduced in Vaswani et al. (2017); it generates a summary token by token while attending to the source input. We also use beam search and a length penalty (Wu et al., 2016) in the decoding process to generate more fluent and longer summaries. 3.1 Paragraph Ranking Unlike Liu et al. (2018) who rank paragraphs based on their similarity with the title (using tf-idfbased cosine similarity), we adopt a learningbased approach. A logistic regression model is applied to each paragraph to calculate a score indicating whether it should be selected for summarization. We use two recurrent neural networks with Long-Short Term Memory units (LSTM; Hochreiter and Schmidhuber 1997) to represent title T and source paragraph P: {ut1, · · · , utm} = lstmt({wt1, · · · , wtm}) (1) {up1, · · · , upn} = lstmp({wp1, · · · , wpn}) (2) where wti, wpj are word embeddings for tokens in T and P, and uti, upj are the updated vectors for each token after applying the LSTMs. A max-pooling operation is then used over title vectors to obtain a fixed-length representation ˆut: ˆut = maxpool({ut1, · · · , utm}) (3) We concatenate ˆut with the vector upi of each token in the paragraph and apply a non-linear transformation to extract features for matching the title and the paragraph. A second max-pooling operation yields the final paragraph vector ˆp: pi = tanh(W1([upi; ˆut])) (4) ˆp = maxpool({p1, · · · , pn}) (5) Finally, to estimate whether a paragraph should be selected, we use a linear transformation and a sigmoid function: s = sigmoid(W2 ˆ (p)) (6) where s is the score indicating whether paragraph P should be used for summarization. All input paragraphs {P1, · · · , PL} receive scores {s1, · · · , sL}. The model is trained by minimizing the cross entropy loss between si and ground-truth scores yi denoting the relatedness of a paragraph to the gold standard summary. We adopt ROUGE-2 recall (of paragraph Pi against 5073 gold target text D) as yi. In testing, input paragraphs are ranked based on the model predicted scores and an ordering {R1, · · · , RL} is generated. The first L′ paragraphs {R1, · · · , RL′} are selected as input to the second abstractive stage. 3.2 Paragraph Encoding Instead of treating the selected paragraphs as a very long sequence, we develop a hierarchical model based on the Transformer architecture (Vaswani et al., 2017) to capture inter-paragraph relations. The model is composed of several local and global transformer layers which can be stacked freely. Let tij denote the j-th token in the i-th ranked paragraph Ri; the model takes vectors x0 ij (for all tokens) as input. For the l-th transformer layer, the input will be xl−1 ij , and the output is written as xl ij. 3.2.1 Embeddings Input tokens are first represented by word embeddings. Let wij ∈Rd denote the embedding assigned to tij. Since the Transformer is a nonrecurrent model, we also assign a special positional embedding peij to tij, to indicate the position of the token within the input. 
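A compact PyTorch sketch of the Section 3.1 ranker (Eqs. 1-6) is given below; the embedding size, vocabulary handling, and batching details are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ParagraphRanker(nn.Module):
    def __init__(self, vocab_size=32000, d_emb=128, d_hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.lstm_title = nn.LSTM(d_emb, d_hid, batch_first=True)   # Eq. 1
        self.lstm_para = nn.LSTM(d_emb, d_hid, batch_first=True)    # Eq. 2
        self.w1 = nn.Linear(2 * d_hid, d_hid)                       # Eq. 4
        self.w2 = nn.Linear(d_hid, 1)                               # Eq. 6

    def forward(self, title_ids, para_ids):
        u_t, _ = self.lstm_title(self.emb(title_ids))               # (B, m, d_hid)
        u_p, _ = self.lstm_para(self.emb(para_ids))                 # (B, n, d_hid)
        u_hat = u_t.max(dim=1).values                               # title max-pool, Eq. 3
        u_hat = u_hat.unsqueeze(1).expand(-1, u_p.size(1), -1)
        p = torch.tanh(self.w1(torch.cat([u_p, u_hat], dim=-1)))    # Eq. 4
        p_hat = p.max(dim=1).values                                 # paragraph max-pool, Eq. 5
        return torch.sigmoid(self.w2(p_hat)).squeeze(-1)            # score s, Eq. 6

ranker = ParagraphRanker()
scores = ranker(torch.randint(0, 32000, (4, 12)), torch.randint(0, 32000, (4, 80)))
print(scores.shape)  # torch.Size([4]); trained with cross-entropy against ROUGE-2 recall
```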
To calculate positional embeddings, we follow Vaswani et al. (2017) and use sine and cosine functions of different frequencies. The embedding ep for the p-th element in a sequence is: ep[i] = sin(p/100002i/d) (7) ep[2i + 1] = cos(p/100002i/d) (8) where ep[i] indicates the i-th dimension of the embedding vector. Because each dimension of the positional encoding corresponds to a sinusoid, for any fixed offset o, ep+o can be represented as a linear function of ep, which enables the model to distinguish relative positions of input elements. In multi-document summarization, token tij has two positions that need to be considered, namely i (the rank of the paragraph) and j (the position of the token within the paragraph). Positional embedding peij ∈Rd represents both positions (via concatenation) and is added to word embedding wij to obtain the final input vector x0 ij : peij = [ei; ej] (9) x0 ij = wij + peij (10) 3.2.2 Local Transformer Layer A local transformer layer is used to encode contextual information for tokens within each paragraph. The local transformer layer is the same as the vanilla transformer layer (Vaswani et al., 2017), and composed of two sub-layers: h = LayerNorm(xl−1 + MHAtt(xl−1)) (11) xl = LayerNorm(h + FFN(h)) (12) where LayerNorm is layer normalization proposed in Ba et al. (2016); MHAtt is the multihead attention mechanism introduced in Vaswani et al. (2017) which allows each token to attend to other tokens with different attention distributions; and FFN is a two-layer feed-forward network with ReLU as hidden activation function. 3.2.3 Global Transformer Layer A global transformer layer is used to exchange information across multiple paragraphs. As shown in Figure 2, we first apply a multi-head pooling operation to each paragraph. Different heads will encode paragraphs with different attention weights. Then, for each head, an inter-paragraph attention mechanism is applied, where each paragraph can collect information from other paragraphs by selfattention, generating a context vector to capture contextual information from the whole input. Finally, context vectors are concatenated, linearly transformed, added to the vector of each token, and fed to a feed-forward layer, updating the representation of each token with global information. Multi-head Pooling To obtain fixed-length paragraph representations, we apply a weightedpooling operation; instead of using only one representation for each paragraph, we introduce a multi-head pooling mechanism, where for each paragraph, weight distributions over tokens are calculated, allowing the model to flexibly encode paragraphs in different representation subspaces by attending to different words. Let xl−1 ij ∈Rd denote the output vector of the last transformer layer for token tij, which is used as input for the current layer. For each paragraph Ri, for head z ∈{1, · · · , nhead}, we first transform the input vectors into attention scores az ij and value vectors bz ij. Then, for each head, we calculate a probability distribution ˆaz ij over tokens 5074 within the paragraph based on attention scores: az ij = W z a xl−1 ij (13) bz ij = W z b xl−1 ij (14) ˆaz ij = exp(az ij)/ n X j=1 exp(az ij) (15) where W z a ∈R1∗d and W z b ∈Rdhead∗d are weights. dhead = d/nhead is the dimension of each head. n is the number of tokens in Ri. 
We next apply a weighted summation with another linear transformation and layer normalization to obtain vector headz i for the paragraph: headz i = LayerNorm(W z c n X j=1 az ijbz ij) (16) where W z c ∈Rdhead∗dhead is the weight. The model can flexibly incorporate multiple heads, with each paragraph having multiple attention distributions, thereby focusing on different views of the input. Inter-paragraph Attention We model the dependencies across multiple paragraphs with an inter-paragraph attention mechanism. Similar to self-attention, inter-paragraph attention allows for each paragraph to attend to other paragraphs by calculating an attention distribution: qz i = W z q headz i (17) kz i = W z k headz i (18) vz i = W z v headz i (19) contextz i = m X i=1 exp(qz i T kz i′) Pm o=1 exp(qz i T kzo)vz i′ (20) where qz i , kz i , vz i ∈ Rdhead∗dhead are query, key, and value vectors that are linearly transformed from headz i as in Vaswani et al. (2017); contextz i ∈Rdhead represents the context vector generated by a self-attention operation over all paragraphs. m is the number of input paragraphs. Figure 2 provides a schematic view of inter-paragraph attention. Feed-forward Networks We next update token representations with contextual information. We first fuse information from all heads by concatenating all context vectors and applying a linear transformation with weight Wc ∈Rd∗d: ci = Wc[context1 i ; · · · ; contextnhead i ] (21) Multi-head Pooling Multi-head Pooling head 1 head 2 head 3 head 1 head 2 head 3 context 1 context 2 context 3 context 1 context 2 context 3 Inter-paragraph Attention Inter-paragraph Attention Inter-paragraph Attention context this is para one Feed Forward Feed Forward Feed Forward Feed Forward context this is para two Feed Forward Feed Forward Feed Forward Feed Forward this is para one this is para two Figure 2: A global transformer layer. Different colors indicate different heads in multi-head pooling and inter-paragraph attention. We then add ci to each input token vector xl−1 ij , and feed it to a two-layer feed-forward network with ReLU as the activation function and a highway layer normalization on top: gij = Wo2ReLU(Wo1(xl−1 ij + ci)) (22) xl ij = LayerNorm(gij + xl−1 ij ) (23) where Wo1 ∈Rdff∗d and Wo2 ∈Rd∗dff are the weights, dff is the hidden size of the feed-forward later. This way, each token within paragraph Ri can collect information from other paragraphs in a hierarchical and efficient manner. 3.2.4 Graph-informed Attention The inter-paragraph attention mechanism can be viewed as learning a latent graph representation (self-attention weights) of the input paragraphs. Although previous work has shown that similar latent representations are beneficial for downstream NLP tasks (Liu and Lapata, 2018; Kim et al., 2017; Williams et al., 2018; Niculae et al., 2018; Fernandes et al., 2019), much work in multi-document summarization has taken advantage of explicit graph representations, each focusing on different facets of the summarization task 5075 (e.g., capturing redundant information or representing passages referring to the same event or entity). One advantage of the hierarchical transformer is that we can easily incorporate graphs external to the model, to generate better summaries. We experimented with two well-established graph representations which we discuss briefly below. However, there is nothing inherent in our model that restricts us to these, any graph modeling relationships across paragraphs could have been used instead. 
Our first graph aims to capture lexical relations; graph nodes correspond to paragraphs and edge weights are cosine similarities based on tf-idf representations of the paragraphs. Our second graph aims to capture discourse relations (Christensen et al., 2013); it builds an Approximate Discourse Graph (ADG) (Yasunaga et al., 2017) over paragraphs; edges between paragraphs are drawn by counting (a) co-occurring entities and (b) discourse markers (e.g., however, nevertheless) connecting two adjacent paragraphs (see the Appendix for details on how ADGs are constructed). We represent such graphs with a matrix G, where Gii′ is the weight of the edge connecting paragraphs i and i′. We can then inject this graph into our hierarchical transformer by simply substituting one of its (learned) heads z′ with G. Equation (20) for calculating the context vector for this head is modified as: contextz′ i = m X i′=1 Gii′ Pm o=1 Gio vz′ i′ (24) 4 Experimental Setup WikiSum Dataset We used the scripts and urls provided in Liu et al. (2018) to crawl Wikipedia articles and source reference documents. We successfully crawled 78.9% of the original documents (some urls have become invalid and corresponding documents could not be retrieved). We further removed clone paragraphs (which are exact copies of some parts of the Wikipedia articles); these were paragraphs in the source documents whose bigram recall against the target summary was higher than 0.8. On average, each input has 525 paragraphs, and each paragraph has 70.1 tokens. The average length of the target summary is 139.4 tokens. We split the dataset with 1, 579, 360 instances for training, 38, 144 for validation and 38, 205 for test. Methods ROUGE-L Recall L′ = 5 L′ = 10 L′ = 20 L′ = 40 Similarity 24.86 32.43 40.87 49.49 Ranking 39.38 46.74 53.84 60.42 Table 1: ROUGE-L recall against target summary for L′-best paragraphs obtained with tf-idf cosine similarity and our ranking model. For both ranking and summarization stages, we encode source paragraphs and target summaries using subword tokenization with SentencePiece (Kudo and Richardson, 2018). Our vocabulary consists of 32, 000 subwords and is shared for both source and target. Paragraph Ranking To train the regression model, we calculated the ROUGE-2 recall (Lin, 2004) of each paragraph against the target summary and used this as the ground-truth score. The hidden size of the two LSTMs was set to 256, and dropout (with dropout probability of 0.2) was used before all linear layers. Adagrad (Duchi et al., 2011) with learning rate 0.15 is used for optimization. We compare our ranking model against the method proposed in Liu et al. (2018) who use the tf-idf cosine similarity between each paragraph and the article title to rank the input paragraphs. We take the first L′ paragraphs from the ordered paragraph set produced by our ranker and the similarity-based method, respectively. We concatenate these paragraphs and calculate their ROUGE-L recall against the gold target text. The results are shown in Table 1. We can see that our ranker effectively extracts related paragraphs and produces more informative input for the downstream summarization task. Training Configuration In all abstractive models, we apply dropout (with probability of 0.1) before all linear layers; label smoothing (Szegedy et al., 2016) with smoothing factor 0.1 is also used. Training is in traditional sequence-to-sequence manner with maximum likelihood estimation. 
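To make the global layer concrete, the sketch below implements multi-head pooling (Eqs. 13-16) and the graph-informed context of Eq. (24), with the similarity graph built from tf-idf cosine similarities; learned heads would instead compute their context through standard self-attention over the pooled head vectors (Eqs. 17-20). The sizes and the scikit-learn graph construction are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class MultiHeadPooling(nn.Module):
    """One attention-weighted paragraph vector per head (Eqs. 13-16)."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.h, self.dh = n_heads, d_model // n_heads
        self.score = nn.Linear(d_model, n_heads)      # a^z_ij, Eq. 13
        self.value = nn.Linear(d_model, d_model)      # b^z_ij, Eq. 14
        self.out = nn.Linear(self.dh, self.dh)
        self.norm = nn.LayerNorm(self.dh)

    def forward(self, x):                             # x: (paragraphs, tokens, d_model)
        P, T, _ = x.shape
        a = F.softmax(self.score(x), dim=1)           # Eq. 15, softmax over tokens
        b = self.value(x).view(P, T, self.h, self.dh)
        head = torch.einsum("pth,pthd->phd", a, b)    # weighted sum over tokens
        return self.norm(self.out(head))              # (paragraphs, heads, d_head), Eq. 16

def graph_context(G, v):
    """Eq. 24: the substituted head's context is a row-normalised, G-weighted
    average of that head's paragraph vectors v."""
    return (G / G.sum(dim=1, keepdim=True).clamp(min=1e-8)) @ v

paragraphs = ["the city of florence lies on the arno", "florence is in tuscany, italy"]
G = torch.tensor(cosine_similarity(TfidfVectorizer().fit_transform(paragraphs)),
                 dtype=torch.float)                   # similarity graph over paragraphs
heads = MultiHeadPooling()(torch.randn(2, 30, 256))   # (2 paragraphs, 8 heads, 32 dims)
print(graph_context(G, heads[:, 0, :]).shape)         # context for the graph head: (2, 32)
```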
The optimizer was Adam (Kingma and Ba, 2014) with learning rate of 2, β1 = 0.9, and β2 = 0.998; we also applied learning rate warmup over the first 8, 000 steps, and decay as in (Vaswani et al., 2017). All transformer-based models had 256 hidden units; the feed-forward hidden size was 1, 024 for all layers. All models were trained on 4 GPUs (NVIDIA TITAN Xp) for 500, 000 steps. We used 5076 Model ROUGE-1 ROUGE-2 ROUGE-L Lead 38.22 16.85 26.89 LexRank 36.12 11.67 22.52 FT (600 tokens, no ranking) 35.46 20.26 30.65 FT (600 tokens) 40.46 25.26 34.65 FT (800 tokens) 40.56 25.35 34.73 FT (1,200 tokens) 39.55 24.63 33.99 T-DMCA (3000 tokens) 40.77 25.60 34.90 HT (1,600 tokens) 40.82 25.99 35.08 HT (1,600 tokens) + Similarity Graph 40.80 25.95 35.08 HT (1,600 tokens) + Discourse Graph 40.81 25.95 35.24 HT (train on 1,600 tokens/test on 3000 tokens) 41.53 26.52 35.76 Table 2: Test set results on the WikiSum dataset using ROUGE F1. gradient accumulation to keep training time for all models approximately consistent. We selected the 5 best checkpoints based on performance on the validation set and report averaged results on the test set. During decoding we use beam search with beam size 5 and length penalty with α = 0.4 (Wu et al., 2016); we decode until an end-of-sequence token is reached. Comparison Systems We compared the proposed hierarchical transformer against several strong baselines: Lead is a simple baseline that concatenates the title and ranked paragraphs, and extracts the first k tokens; we set k to the length of the ground-truth target. LexRank (Erkan and Radev, 2004) is a widelyused graph-based extractive summarizer; we build a graph with paragraphs as nodes and edges weighted by tf-idf cosine similarity; we run a PageRank-like algorithm on this graph to rank and select paragraphs until the length of the ground-truth summary is reached. Flat Transformer (FT) is a baseline that applies a Transformer-based encoder-decoder model to a flat token sequence. We used a 6-layer transformer. The title and ranked paragraphs were concatenated and truncated to 600, 800, and 1, 200 tokens. T-DMCA is the best performing model of Liu et al. (2018) and a shorthand for Transformer Decoder with Memory Compressed Attention; they only used a Transformer decoder and compressed the key and value in selfattention with a convolutional layer. The model has 5 layers as in Liu et al. (2018). Its hidden size is 512 and its feed-forward hidden size is 2, 048. The title and ranked paragraphs were concatenated and truncated to 3,000 tokens. Hierarchical Transformer (HT) is the model proposed in this paper. The model architecture is a 7-layer network (with 5 localattention layers at the bottom and 2 global attention layers at the top). The model takes the title and L′ = 24 paragraphs as input to produce a target summary, which leads to approximately 1, 600 input tokens per instance. 5 Results Automatic Evaluation We evaluated summarization quality using ROUGE F1 (Lin, 2004). We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. Table 2 summarizes our results. The first block in the table includes extractive systems (Lead, LexRank), the second block includes several variants of Flat Transformer-based models (FT, T-DMCA), while the rest of the table presents the results of our Hierarchical Transformer (HT). As can be seen, abstractive models generally outperform extractive ones. 
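The warmup-and-decay schedule from the training configuration above can be written compactly. The sketch assumes that the reported learning rate of 2 acts as the multiplicative factor of the Vaswani et al. (2017) schedule, a common convention in Transformer implementations; that reading, and the placeholder model, are assumptions.

```python
import torch

def noam_lr(step, factor=2.0, d_model=256, warmup=8000):
    """lr(step) = factor * d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)."""
    step = max(step, 1)
    return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

model = torch.nn.Linear(8, 8)                       # placeholder for the summarizer
opt = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.998))
sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda s: noam_lr(s + 1))
# the rate rises linearly over the first 8,000 steps and then decays as step^-0.5
```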
The Flat Transformer, achieves best results when the input length is set to 800 tokens, while longer input (i.e., 1, 200 tokens) actually hurts performance. The Hierarchical Transformer with 1, 600 input tokens, outper5077 Model R1 R2 RL HT 40.82 25.99 35.08 HT w/o PP 40.21 24.54 34.71 HT w/o MP 39.90 24.34 34.61 HT w/o GT 39.01 22.97 33.76 Table 3: Hierarchical Transformer and versions thereof without (w/o) paragraph position (PP), multi-head pooling (MP), and global transformer layer (GT). forms FT, and even T-DMCA when the latter is presented with 3, 000 tokens. Adding an external graph also seems to help the summarization process. The similarity graph does not have an obvious influence on the results, while the discourse graph boosts ROUGE-L by 0.16. We also found that the performance of the Hierarchical Transformer further improves when the model is presented with longer input at test time.2 As shown in the last row of Table 2, when testing on 3, 000 input tokens, summarization quality improves across the board. This suggests that the model can potentially generate better summaries without increasing training time. Table 3 summarizes ablation studies aiming to assess the contribution of individual components. Our experiments confirmed that encoding paragraph position in addition to token position within each paragraph is beneficial (see row w/o PP), as well as multi-head pooling (w/o MP is a model where the number of heads is set to 1), and the global transformer layer (w/o GT is a model with only 5 local transformer layers in the encoder). Human Evaluation In addition to automatic evaluation, we also assessed system performance by eliciting human judgments on 20 randomly selected test instances. Our first evaluation study quantified the degree to which summarization models retain key information from the documents following a question-answering (QA) paradigm (Clarke and Lapata, 2010; Narayan et al., 2018). We created a set of questions based on the gold summary under the assumption that it contains the most important information from the input paragraphs. We then examined whether participants were able to answer these questions by reading system summaries alone without access to the gold summary. The more questions a system can answer, the better it is at summarization. We created 57 questions in total varying from two to 2This was not the case with the other Transformer models. Model QA Rating Lead 31.59 -0.383 FT 35.69 0.000 T-DMCA 43.14 0.147 HT 54.11 0.237 Table 4: System scores based on questions answered by AMT participants and summary quality rating. four questions per gold summary. Examples of questions and their answers are given in Table 5. We adopted the same scoring mechanism used in Clarke and Lapata (2010), i.e., correct answers are marked with 1, partially correct ones with 0.5, and 0 otherwise. A system’s score is the average of all question scores. Our second evaluation study assessed the overall quality of the summaries by asking participants to rank them taking into account the following criteria: Informativeness (does the summary convey important facts about the topic in question?), Fluency (is the summary fluent and grammatical?), and Succinctness (does the summary avoid repetition?). We used Best-Worst Scaling (Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2017). 
Participants were presented with the gold summary and summaries generated from 3 out of 4 systems and were asked to decide which summary was the best and which one was the worst in relation to the gold standard, taking into account the criteria mentioned above. The rating of each system was computed as the percentage of times it was chosen as best minus the times it was selected as worst. Ratings range from −1 (worst) to 1 (best). Both evaluations were conducted on the Amazon Mechanical Turk platform with 5 responses per hit. Participants evaluated summaries produced by the Lead baseline, the Flat Transformer, T-DMCA, and our Hierarchical Transformer. All evaluated systems were variants that achieved the best performance in automatic evaluations. As shown in Table 4, on both evaluations, participants overwhelmingly prefer our model (HT). All pairwise comparisons among systems are statistically significant (using a one-way ANOVA with posthoc Tukey HSD tests; p < 0.01). Examples of system output are provided in Table 5. 5078 Pentagoet Archeological District GOLD The Pentagoet Archeological District is a National Historic Landmark District located at the southern edge of the Bagaduce Peninsula in Castine, Maine. It is the site of Fort Pentagoet, a 17th-century fortified trading post established by fur traders of French Acadia. From 1635 to 1654 this site was a center of trade with the local Abenaki, and marked the effective western border of Acadia with New England. From 1654 to 1670 the site was under English control, after which it was returned to France by the Treaty of Breda. The fort was destroyed in 1674 by Dutch raiders. The site was designated a National Historic Landmark in 1993. It is now a public park. QA What is the Pentagoet Archeological District? [a National Historic Landmark District] Where is it located? [Castine , Maine] What did the Abenaki Indians use the site for? [trading center] LEAD The Pentagoet Archeological District is a National Historic Landmark District located in Castine, Maine. This district forms part of the traditional homeland of the Abenaki Indians, in particular the Penobscot tribe. In the colonial period, Abenakis frequented the fortified trading post at this site, bartering moosehides, sealskins, beaver and other furs in exchange for European commodities. ”Pentagoet Archeological district” is a National Historic Landmark District located at the southern edge of the Bagaduce Peninsula in Treaty Of Breda. FT the Pentagoet Archeological district is a National Historic Landmark District located at the southern edge of the Bagaduce Peninsula in Treaty Of Breda. It was listed on the national register of historic places in 1983. T-DMCA The Pentagoet Archeological District is a national historic landmark district located in castine , maine . this district forms part of the traditional homeland of the abenaki indians , in particular the Penobscot tribe. The district was listed on the national register of historic places in 1982. HT The Pentagoet Archeological district is a National Historic Landmark District located in Castine, Maine. This district forms part of the traditional homeland of the Abenaki Indians, in particular the Penobscot tribe. In the colonial period, Abenaki frequented the fortified trading post at this site, bartering moosehides, sealskins, beaver and other furs in exchange for European commodities. 
Melanesian Whistler GOLD The Melanesian whistler or Vanuatu whistler (Pachycephala chlorura) is a species of passerine bird in the whistler family Pachycephalidae. It is found on the Loyalty Islands, Vanuatu, and Vanikoro in the far southeastern Solomons. QA What is the Melanesian Whistler? [a species of passerine bird in the whistler family Pachycephalidae] Where is it found? [Loyalty Islands , Vanuatu , and Vanikoro in the far south-eastern Solomons] LEAD The Australian golden whistler (Pachycephala pectoralis) is a species of bird found in forest, woodland, mallee, mangrove and scrub in Australia (except the interior and most of the north) Most populations are resident, but some in south-eastern Australia migrate north during the winter. FT The Melanesian whistler (P. Caledonica) is a species of bird in the family Muscicapidae. It is endemic to Melanesia. T-DMCA The Australian golden whistler (Pachycephala chlorura) is a species of bird in the family Pachycephalidae, which is endemic to Fiji. HT The Melanesian whistler (Pachycephala chlorura) is a species of bird in the family Pachycephalidae, which is endemic to Fiji. Table 5: GOLD human authored summaries, questions based on them (answers shown in square brackets) and automatic summaries produced by the LEAD-3 baseline, the Flat Transformer (FT), T-DMCA (Liu et al., 2018) and our Hierachical Transformer (HT). 6 Conclusions In this paper we conceptualized abstractive multidocument summarization as a machine learning problem. We proposed a new model which is able to encode multiple input documents hierarchically, learn latent relations across them, and additionally incorporate structural information from well-known graph representations. We have also demonstrated the importance of a learning-based approach for selecting which documents to summarize. Experimental results show that our model produces summaries which are both fluent and informative outperforming competitive systems by a wide margin. In the future we would like to apply our hierarchical transformer to question answering and related textual inference tasks. Acknowledgments We would like to thank Laura Perez-Beltrachini for her help with preprocessing the dataset. This research is supported by a Google PhD Fellowship to the first author. The authors gratefully acknowledge the financial support of the European Research Council (award number 681760). 5079 References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California. Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297– 327. Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. Abstractive multidocument summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1587–1597, Beijing, China. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675, New Orleans, Louisiana. Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards coherent multidocument summarization. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163–1173, Atlanta, Georgia. Association for Computational Linguistics. Eric Chu and Peter J Liu. 2018. Unsupervised neural multi-document abstractive summarization. arXiv preprint arXiv:1810.05739. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457–479. Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Structured neural summarization. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, Louisiana. Katja Filippova and Michael Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 177– 185, Honolulu, Hawaii. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693– 1701. Curran Associates, Inc. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico. Svetlana Kiritchenko and Saif Mohammad. 2017. Best-worst scaling more reliable than rating scales: A case study on sentiment intensity annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 465– 470, Vancouver, Canada. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226. Logan Lebanoff and Fei Liu. 2018. 
Automatic detection of vague words and sentences in privacy policies. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3508–3517, Brussels, Belgium. 5080 Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics, 6:63–75. Jordan J Louviere, Terry N Flynn, and Anthony Alfred John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press. Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An unsupervised multi-document summarization framework based on neural document model. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1514–1523, Osaka, Japan. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Vlad Niculae, Andr´e F. T. Martins, and Claire Cardie. 2018. Towards dynamic computation graphs via sparse latent structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 905–911, Brussels, Belgium. Daraksha Parveen and Michael Strube. 2014. Multidocument summarization using bipartite graphs. In Proceedings of TextGraphs-9: the workshop on Graph-based Methods for Natural Language Processing, pages 15–24, Doha, Qatar. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. Dragomir Radev. 2000. A common theory of information fusion from multiple text sources step one: Cross-document structure. In 1st SIGdial Workshop on Discourse and Dialogue, pages 74–83, Hong Kong, China. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12). Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Xiaojun Wan. 2008. An exploration of document impact on graph-based multi-document summarization. 
In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 755–762, Honolulu, Hawaii. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253–267. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. In arXiv preprint arXiv:1609.08144. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452–462, Vancouver, Canada. Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Adapting neural single-document summarization model for abstractive multi-document summarization: A pilot study. In Proceedings of the International Conference on Natural Language Generation. A Appendix We describe here how the similarity and discourse graphs discussed in Section 3.2.4 were created. These graphs were added to the hierarchical transformer model as a means to enhance summary quality (see Section 5 for details). 5081 A.1 Similarity Graph The similarity graph S is based on tf-idf cosine similarity. The nodes of the graph are paragraphs. We first represent each paragraph pi as a bag of words. Then, we calculate the tf-idf value vik for each token tik in a paragraph: vik = Nw(tik)log( Nd Ndw(tik)) (25) where Nw(t) is the count of word t in the paragraph, Nd is the total number of paragraphs, and Ndw(t) is the total number of paragraphs containing the word. We thus obtain a tf-idf vector for each paragraph. Then, for all paragraph pairs < pi, pi′ >, we calculate the cosine similarity of their tf-idf vectors and use this as the weight Sii′ for the edge connecting the pair in the graph. We remove edges with weights lower than 0.2. A.2 Discourse Graphs To build the Approximate Discourse Graph (ADG) D, we follow Christensen et al. (2013) and Yasunaga et al. (2017). The original ADG makes use of several complex features. Here, we create a simplified version with only two features (nodes in this graph are again paragraphs). Co-occurring Entities For each paragraph pi, we extract a set of entities Ei in the paragraph using the Spacy3 NER recognizer. We only use entities with type {PERSON, NORP, FAC, ORG, GPE, LOC, EVENT, WORK OF ART, LAW}. For each paragraph pair < pi, pj >, we count eij, the number of entities with exact match. Discourse Markers We use the following 36 explicit discourse markers to identify edges between two adjacent paragraphs in a source webpage: again, also, another, comparatively, furthermore, at the same time,however, immediately, indeed, instead, to be sure, likewise, meanwhile, moreover, nevertheless, nonetheless, notably, otherwise, regardless, similarly, unlike, in addition, even, in turn, in exchange, in this case, in any event, finally, later, as well, especially, as a result, example, in fact, then, the day before 3https://spacy.io/api/entityrecognizer If two paragraphs < pi, pi′ > are adjacent in one source webpage and they are connected with one of the above 36 discourse markers, mii′ will be 1, otherwise it will be 0. The final edge weight Dii′ is the weighted sum of eii′ and mii′ Dii′ = 0.2 ∗eii′ + mii′ (26)
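For concreteness, the similarity graph of Appendix A.1 can be built as in the sketch below; whitespace tokenisation and the absence of stopword removal are simplifying assumptions not specified in the paper.

```python
import math
from collections import Counter

def similarity_graph(paragraphs, threshold=0.2):
    """Similarity graph of Appendix A.1: nodes are paragraphs, edge
    weights are cosine similarities of tf-idf vectors (Equation 25),
    and edges with weight below `threshold` are removed."""
    docs = [Counter(p.split()) for p in paragraphs]
    n_docs = len(docs)
    df = Counter(w for d in docs for w in d)   # number of paragraphs containing w
    vecs = [{w: tf * math.log(n_docs / df[w]) for w, tf in d.items()} for d in docs]

    def cosine(u, v):
        dot = sum(u[w] * v[w] for w in u if w in v)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    edges = {}
    for i in range(n_docs):
        for j in range(i + 1, n_docs):
            weight = cosine(vecs[i], vecs[j])
            if weight >= threshold:
                edges[(i, j)] = weight
    return edges
```

The discourse graph of Appendix A.2 is assembled analogously, with each edge weight set to 0.2 * e + m for the entity-overlap count e and the binary discourse-marker indicator m of the paragraph pair (Equation 26).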
2019
500
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5082–5092 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5082 Abstractive text summarization based on deep learning and semantic content generalization Panagiotis Kouris School of Electrical & Computer Engineering National Technical University of Athens, Greece [email protected] Georgios Alexandridis School of Electrical & Computer Engineering National Technical University of Athens, Greece [email protected] Andreas Stafylopatis School of Electrical & Computer Engineering National Technical University of Athens, Greece [email protected] Abstract This work proposes a novel framework for enhancing abstractive text summarization based on the combination of deep learning techniques along with semantic data transformations. Initially, a theoretical model for semantic-based text generalization is introduced and used in conjunction with a deep encoder-decoder architecture in order to produce a summary in generalized form. Subsequently, a methodology is proposed which transforms the aforementioned generalized summary into human-readable form, retaining at the same time important informational aspects of the original text and addressing the problem of out-of-vocabulary or rare words. The overall approach is evaluated on two popular datasets with encouraging results. 1 Introduction Text Summarization (TS) aims at composing a concise version of an original text, retaining its salient information. Since manual TS is a demanding, time expensive and generally laborious task, automatic TS is gaining increasing popularity and therefore constitutes a strong motivation for further research. Current efforts in automatic TS mainly focus on summarizing single documents (e.g. news, articles, scientific papers, weather forecasts, etc.) and multi-documents (e.g. news from different sources, user reviews, e-mails etc.), reducing the size of the initial text while at the same time preserving key informational elements and the meaning of content. Two main approaches to automatic TS have been reported in the relevant literature; extractive and abstractive (Gambhir and Gupta, 2017; Allahyari et al., 2017). In the former case, those sentences of original text that convey its content are firstly identified and then extracted in order to construct the summary. In the latter case, new sentences are generated which concatenate the overall meaning of the initial text, rephrasing its content. Abstractive TS is a more challenging task; it resembles human-written summaries, as it may contain rephrased sentences or phrases with new words (i.e. sentences, phrases and words that do not appear in the original text), thereby improving the generated summary in terms of cohesion, readability or redundancy. The main contribution of this work is a novel abstractive TS technique that combines deep learning models of encoder-decoder architecture and semantic-based data transformations. Since the majority of literature in abstractive TS focuses in either of the aforementioned parts, the proposed approach tries to bridge this gap by introducing a framework that combines the potential of machine learning with the importance of semantics. 
The said framework is comprised of three components; (i) a theoretical model for text generalization (Section 3) (ii) a deep learning network whose input is the text and its output a summary in generalized form (Section 4) and (iii) a methodology of transforming the “generalized” summary into a human-readable form, containing salient information of the original document (Section 5). Additionally, the proposed framework is capable of coping with the problem of out-of-vocabulary (OOV) words (or words of limited occurrences), thereby achieving semantic content generalization. The overall architecture is evaluated on Gigaword (Napoles et al., 2012; Rush et al., 2015) and Duc 2004 (Over et al., 2007), two popular datasets used in TS tasks, with the obtained results being promising, outperforming the current state-of-the-art. The rest of this paper is organized as follows; Section 2 overviews the related work and Sections 3-5 outline the components of the proposed framework. Section 6 describes the experimental procedure in detail and discusses the obtained results. Finally, the paper concludes in Section 7, where 5083 possible future extensions are examined. 2 Related work Abstractive TS methods can be broadly classified into structure and semantic based approaches (Moratanch and Chitrakala, 2016). The former make use of pre-defined structures (e.g. ontologies, trees, templates, graphs and rules), whereas the latter utilize the semantic representation of text along with natural language generation systems (based on information items, predicate arguments and semantic graphs). Recently, deep learning architectures have been widely adopted in abstractive TS and they have since become the stateof-the-art (Gupta and Gupta, 2019), especially in short text summarization (Paulus et al., 2017) that is the focus of the current work. The proposed approach further extends the said architectures with semantic-based concept generalization, in an effort to improve the overall system performance. In particular, semantic-based approaches utilizing (semantic) graphs produce the desired summaries through the extraction of ontological and syntactical relations in text, mainly by reducing the graph or by locating its key concepts (Khan et al., 2018; Joshi et al., 2018; Moawad and Aref, 2012). Item-based solutions, on the other hand, employ the notion of information item (the smallest unit of coherent textual information such as subject, verb and object triplets) in order to generate the summary out of the top-rated sentences. For example, the information items, along with temporal and spatial characteristics, are used in (Genest and Lapalme, 2011) in order to produce the abstractive summary. Predicate argument-based approaches merge the respective structures of text (i.e. verbs, subjects and objects) and the summary is being formed from the top-ranked such structures (Alshaina et al., 2017; Zhang et al., 2016). Nevertheless, semantic-based methods are not able to achieve comparable performance to deep learning approaches (Gupta and Gupta, 2019) and for this reason, a framework utilizing semantic-based data generalization for the enhancement of sequenceto-sequence (seq2seq) deep learning abstractive summarization is presented in this work. Seq2seq architectures require a sequence of words at their input and also emit a different, in the general case, sequence of words at their output. 
An early approach to using semantic resources for the generalization of concepts connected with a conjunctive or disjunctive relation is due to (Belkebir and Guessoum, 2016), which replaces two or more consecutive concepts by one more general word, entailing the meaning of the initial ones (e.g. the phrase “apples and oranges” may be replaced by the word “fruits”). Our proposed methodology, however, is not limited to conjunctive and disjunctive relations and can, therefore, generalize every concept of a text. The state-of-the-art in abstractive TS deep learning systems employ seq2seq models of encoder-decoder architectures along with attention mechanisms, primarily based on recurrent neural networks (RNNs) and especially on long short-term memory networks (LSTMs) and gated recurrent units (GRUs) (Chopra et al., 2016; Nallapati et al., 2016; See et al., 2017; Song et al., 2018; Chen et al., 2016; Gupta and Gupta, 2019). In these cases, the encoder input is a sequence of words which are subsequently converted into a vector representation and the decoder, assisted by the attention mechanism which focuses on specific words at each step of the input sequence (Bahdanau et al., 2014), determines the output, emitting the next word of the summary based on the previous ones. The methodology described above is further extended in (Rush et al., 2015), where a neural attention-based model is trained end-to-end on a large amount of data (article-summary pairs) that learns to produce abstractive summaries. Similarly, Nallapati et al. (2016) and See et al. (2017) train encoder-decoder models with attention mechanisms in order to face the problem of unseen (out-of-vocabulary) words, incorporating a pointer generator network in their system. Furthermore, See et al. (2017) avoid repetition of the same words in the summary through the inclusion of a coverage mechanism, while Lin et al. (2018) address the same problem by proposing a model of a convolutional gated unit that performs global encoding for the improvement of the representation of the input data. Finally, Song et al. (2018) propose a deep LSTM-CNN (convolutional neural network) framework, which generates summaries via the extraction of phrases from source sentences. The presented approach in this work is also based on a seq2seq deep learning model (See et al., 2017). In contrast to the systems outlined above, 5084 the novelty of our technique lies in the device of a semantic-based methodology for text generalization, which is going to be presented in detail in the forthcoming sections. 3 Text generalization The basic assumption of text generalization is the existence of a taxonomy of concepts that can be extracted from text (Definition 3.1). More specifically, the said taxonomy contains concepts and their hypernyms (Definition 3.2) in a hierarchical structure. Once the concepts have been extracted, the taxonomy path (Definition 3.3), containing the ordered sequence of concepts according to their taxonomy depth (Definition 3.4), is used for generalizing text. Figure 1 illustrates an example taxonomy of five concepts, where concept c4 has a taxonomy depth equal to 3 and a taxonomy path Pc4 = {c4, c2, c1, c0}. c0: entity c1: food c2: fruit c4: banana c3: cheese Figure 1: A taxonomy of concepts. Definition 3.1 (Taxonomy of concepts) A taxonomy of concepts consists of a hierarchical structure of concepts which are related with an is-a type of a relationship. 
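The taxonomy of Figure 1 can be represented with simple child-to-hypernym links; the sketch below uses concept names instead of identifiers purely for illustration.

```python
# Toy taxonomy of Figure 1, stored as concept -> hypernym links.
TAXONOMY = {"banana": "fruit", "cheese": "food", "fruit": "food", "food": "entity"}

def taxonomy_path(concept, taxonomy=TAXONOMY):
    """Ordered sequence of concepts from `concept` up to the root
    (the taxonomy path of Definition 3.3 below)."""
    path = [concept]
    while path[-1] in taxonomy:
        path.append(taxonomy[path[-1]])
    return path

def taxonomy_depth(concept, taxonomy=TAXONOMY):
    """Number of concepts from `concept` to the root (Definition 3.4);
    the root itself has depth 0."""
    return len(taxonomy_path(concept, taxonomy)) - 1

# taxonomy_path("banana") == ["banana", "fruit", "food", "entity"]   (P_c4)
# taxonomy_depth("banana") == 3
```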
Definition 3.2 (Hypernym) Given a taxonomy of concepts, concept cj is a hypernym of ci if and only if ci semantically entails cj (ci |= cj). Definition 3.3 (Taxonomy path of concept) Given a taxonomy of concepts, a taxonomy path Pca of ca is an ordered sequence of concepts Pca = {ca, ca+1, . . . , cn} where ci |= cj, ∀i < j and cn is the root concept of the taxonomy. Definition 3.4 (Taxonomy depth of concept) Given a taxonomy path of concepts Pca = {ca, ca+1, . . . , ci, . . . , cn}, the taxonomy depth of concept ci is the number of concepts from ci to the root concept cn in the path of concepts (dci = n −i). By definition, the depth of the root concept is equal to zero. A piece of text can be generalized only when it contains generalizable concepts (Definition 3.5). A concept ci with a taxonomy path Pci is said to have been generalized when it has been replaced by a concept cj ∈Pci such that dcj < dci. Accordingly, a text excerpt is said to have been generalized when it contains at least one generalized concept (Definition 3.6). The minimum taxonomy depth of a generalized concept constitutes the level of generalization of the given text (Definition 3.7). Definition 3.5 (Generalizable concept) A concept ci of taxonomy depth dci is said to be generalizable when at least one concept of its taxonomy path has a taxonomy depth less than dci. Definition 3.6 (Generalizable text) A text excerpt is said to be generalizable when it contains at least one generalizable concept. Definition 3.7 (Level of generalization) The level of generalization of a text excerpt is equal to the minimum depth of its generalized concepts. 3.1 Text generalization strategies Given the above definitions, two novel strategies for text generalization are presented, which take into account the frequency of a concept in the source text. The intuition behind this transformation is the fact that machine learning systems tend to require a sufficient number of training samples prior to producing accurate predictions. Therefore, low-frequency terms should ideally be replaced by respective high-frequency hypernyms that semantically convey the original meaning. Text generalization strategies are used to generalize both the training set (i.e. the articles and their respective summaries) as well as the test set (i.e. the unseen text). As it shall be described next, the machine learning model of Section 4 generates a generalized summary that is transformed to a readable text through the post-processing methodology of Section 5. 3.1.1 Named Entities-driven Generalization (NEG) NEG only generalizes those concepts whose taxonomy path contains particular named entities (NEs) such as location, person and organization (Algorithm 1). For example, given the set of named entities E = {location, person}, the sentence “John has been in Paris” can be generalized to “ person has been in location ”, where NEs 5085 are enclosed in underscores in order to be distinguished from the corresponding words that may appear in the dataset. Algorithm 1 requires: (i) the input text, (ii) the taxonomy of concepts T, (iii) the set C of tuples of extracted concepts ci along with their respective taxonomy paths Pi and frequency fi (C = {(c1, P1, f1), (c2, P2, f2), . . . , (cn, Pn, fn)}), (iv) the set E of named entities (E = {e1, e2, . . .}) and (v) the threshold θf of the minimum number of occurrences of a concept. 
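Given these inputs, the replacement performed by NEG can be rendered roughly as in the sketch below; the pseudocode itself is walked through next. The dictionary-based representation of the extracted concepts and the underscore wrapping of inserted named entities are illustrative assumptions.

```python
def neg_generalize(tokens, concepts, named_entities, theta_f):
    """Rough rendering of the NEG replacement step (Algorithm 1).
    `concepts` maps each extracted concept to its (taxonomy_path, frequency);
    a concept is replaced by the first named entity found on its taxonomy
    path, provided its frequency does not exceed `theta_f`."""
    replacements = {}
    for concept, (path, freq) in concepts.items():
        if freq <= theta_f:
            ne = next((c for c in path if c in named_entities), None)
            if ne is not None:
                replacements[concept] = "_{}_".format(ne)
    return [replacements.get(tok, tok) for tok in tokens]

# neg_generalize("John has been in Paris".split(),
#                {"John": (["John", "person", "entity"], 3),
#                 "Paris": (["Paris", "city", "location", "entity"], 2)},
#                named_entities={"location", "person"}, theta_f=100)
# -> ["_person_", "has", "been", "in", "_location_"]
```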
In lines 2 −4 of Algorithm 1, a term can be generalized when both its frequency in the input text is less than the specified threshold θf and its taxonomy path Pi contains a named entity c ∈E. In this case, ci is replaced by its hypernym c (line 4). The output of the algorithm is a generalized version of the input text (genText). It should be noted that when θf = ∞, the operation of the NEG algorithm resembles that of named entity anonymization (Hassan et al., 2018). Algorithm 1 Named entities-driven text generalization (NEG) Require: text, T, C, E, θf 1: genText ←text 2: for all (ci, Pi, fi) ∈C do 3: if fi ≤θf and ∃c ∈Pi s.t c ∈E then 4: genText ←replace ci with c 5: end if 6: end for 7: return genText 3.1.2 Level-driven Generalization (LG) LG generalizes the concepts according to the given level of generalization d (Definition 3.7), as illustrated in Algorithm 2. For instance, given the taxonomy of Figure 1 and d = 1, the sentence “banana is nutritious” may be generalized to “food is nutritious”. Similarly to Algorithm 1, Algorithm 2 requires (i) the input text, (ii) the taxonomy T, (iii) the set of tuples C, (iv) the threshold θf and (v) the level of generalization d. In lines 6 −25, a term ci is candidate for generalization when its frequency fi is below the specified threshold θf (line 7). More specifically, ci is replaced by its hypernym ch (line 11) only when the depth dch of the latter is at least equal to d (line 9). When a term is generalized, the set of concepts C is either updated by merging ci with its hypernym ch (lines 14 −18) or a new entry is added in C, if ch is not already a member of the set (lines 20 −21). Both the outer while-loop and the inner for-loop are terminated when no more generalization can be applied to the text because either the frequency of all concepts is greater than θf or all concepts have a taxonomy depth less or equal to d. In this case, the algorithm returns the generalized version of the input text (line 27) and terminates. Algorithm 2 Level-driven text generalization (LG) Require: text, T, C, d, θf 1: genText ←text 2: inLoop ←true 3: while inLoop do 4: Cnew ←C 5: inLoop ←false 6: for all (ci, Pi, fi) ∈Cnew do 7: if fi ≤θf then 8: ch ←hypernym of ci from Pi 9: if dch ≥d then 10: inLoop ←true 11: genText ←replace ci with ch 12: C ←C \ {(ch, Ph, fh)} 13: if ∃ch ∈C then 14: Ph ←get Ph from C 15: fh ←get fh from C 16: fhnew ←fh + fi 17: C ←C \ {(ch, Ph, fh)} 18: C ←C∪{(ch, Ph, fhnew)} 19: else 20: Ph ←get Ph from T 21: C ←C ∪{(ch, Ph, fi)} 22: end if 23: end if 24: end if 25: end for 26: end while 27: return genText The strategies described above are not limited to a single text; they may also be applied to datasets of concatenated documents. 4 Deep learning model After the text generalization phase outlined in the previous section completes, the summaries are produced by an encoder-decoder deep learning model, inspired from the “Sequence-to-sequence attentional model” (See et al., 2017). The en5086 coder consists of a bi-directional LSTM (Graves et al., 2013), the decoder of a unidirectional LSTM and the attention mechanism employed is similar to that of Bahdanau et al. (2014). Words are represented using a neural language model like word2vec (Mikolov et al., 2013) and the overall model is trained on article-summary pairs. Once the training phase is over, the model is expected to predict an output vector of tokens Y ′ = (y′ 1, y′ 2, ...) (summary) given an input vector of tokens X = (x1, x2, ...) (text). 
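A bare-bones PyTorch-style skeleton of this encoder-decoder is sketched below; the layer sizes follow the tuning reported in Section 6.3, while the remaining details (a single attention projection, single-step teacher-forced decoding) are simplifying assumptions rather than the authors' implementation. The attention computation corresponds to Equations 1-3 given next.

```python
import torch
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    """Sketch of the described model: a bi-directional LSTM encoder, a
    unidirectional LSTM decoder and Bahdanau-style attention over the
    encoder states. Sizes follow Section 6.3; everything else is a
    simplifying assumption."""

    def __init__(self, vocab_size, emb_dim=300, hid_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.decoder = nn.LSTM(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(3 * hid_dim, 1)   # scores e_ti from [h_i ; s_{t-1}]
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode(self, src):
        enc_out, _ = self.encoder(self.embed(src))          # (batch, |X|, 2*hid)
        zeros = enc_out.new_zeros(1, src.size(0), self.out.in_features)
        return enc_out, (zeros, zeros)                      # initial decoder state

    def decode_step(self, enc_out, prev_token, dec_state):
        s = dec_state[0][-1].unsqueeze(1).expand(-1, enc_out.size(1), -1)
        e = torch.tanh(self.attn(torch.cat([enc_out, s], dim=-1))).squeeze(-1)
        a = torch.softmax(e, dim=-1)                        # attention weights a_ti
        ctx = torch.bmm(a.unsqueeze(1), enc_out)            # context vector c_t
        dec_in = torch.cat([self.embed(prev_token), ctx], dim=-1)
        dec_out, dec_state = self.decoder(dec_in, dec_state)
        return self.out(dec_out), dec_state                 # next-word logits
```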
During training, the sequence of tokens (word embeddings) of the source text X = (x1, x2, . . . , xn) is given to the encoder one-byone in forward and reverse order, producing a hidden state hi = bi lstm(xi, hi−1) for each embedding xi. Then, the target sequence of tokens Y = (y1, y2, . . . , ym) is given to the decoder, which learns to predict the next word yt given the previous one yt−1, the state of the decoder st = lstm(st−1, yt−1, ct) and the context vector ct, as computed by the attention mechanism. More specifically, the context vector ct is computed as a weighted sum of the encoder hidden states hi, according to the Equations 1-3 below ct = |X| X i=1 atihi (1) ati = softmax(eti) (2) eti = tanh(Whhi + Wsst−1 + b) (3) where ati is the weight, at each time step t, of the hidden state of the encoder hi (i.e. ati indicates the importance of hi), eti indicates how well the output of step t matches with the input around word xi, st−1 is the previous state of decoder, Wh, Ws and b are the weights and bias, respectively. Summary prediction is achieved using beam search (Graves, 2012; Boulanger-Lewandowski et al., 2013); for each time step of the beam searchbased decoder, the w candidate tokens with the highest log-probability are kept in order to determine the best output summary, where w is the beam width. 5 Post-processing of the predicted summary Since the output of the deep learning model described in Section 4 is in generalized form, a post-processing technique for determining the specific meaning of each general concept is necessary. More specifically, a method should be devised that would match the generalized concepts of the predicted summary with the appropriate tokens of the original text. Essentially, this is a problem of optimal bipartite matching, between the general concepts of the (generalized) summary and candidate concepts of the original text. To address this issue, Algorithm 3 is proposed, which performs the best matching based on the similarity of the context around the generalized concepts of the summary and the candidate concepts of the text. Algorithm 3 Matching Algorithm Require: genSum, text, T 1: cr ←{} ▷candidate replacements of generalized concepts 2: gc ←{} ▷generalized concepts 3: summary ←genSum 4: for all tokens ∈genSum do 5: if tokens is generalized then 6: gc ←gc ∪{tokens} 7: for all tokena ∈text do 8: if ∃c ∈Ptokena s.t. tokens = c then 9: s ←similarity(tokens, tokena) 10: cr ←cr ∪{(tokens, tokena, s)} 11: end if 12: end for 13: end if 14: end for 15: sort cr in descending order of s 16: for all (tokens, tokena, s) ∈cr do 17: if tokens ∈gc then 18: summary ←replace tokens with tokena 19: gc ←gc \ tokens 20: end if 21: end for 22: return summary Algorithm’s 3 input is the generalized summary genSum, the original text text and the taxonomy of concepts T. In the first loop (lines 4 −14), the similarity s between the context of each generalized token tokens and each token tokena of the source text that has a hypernym c similar to tokens is computed (line 9) and the tuple {(tokens, tokena, s)} is added to the set cr of candidate replacements of the generalized concepts (line 10). When all the generalized concepts of the (generalized) summary have been examined, cr is sorted in descending order according to s (line 15). In the second loop (lines 16 −21), tokens is replaced by tokena of maximum s (line 18) and is subsequently removed from gc (line 19). 
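A compact, greedy version of this matching step might look as follows; detecting generalized tokens via the underscore convention of the NEG example and delegating taxonomy lookups and context comparison to the `hypernyms` and `similarity` callables are simplifications of our own.

```python
def match_generalized(gen_summary, article_tokens, hypernyms, similarity):
    """Greedy rendering of the matching step (Algorithm 3): each generalized
    summary token is replaced by the article token whose taxonomy path
    contains it and whose surrounding context is most similar.
    `hypernyms(tok)` returns the taxonomy path of an article token and
    `similarity(i, j)` compares the contexts at summary position i and
    article position j (both assumed to be provided)."""
    candidates = []
    for i, tok_s in enumerate(gen_summary):
        if tok_s.startswith("_") and tok_s.endswith("_"):   # generalized token
            concept = tok_s.strip("_")
            for j, tok_a in enumerate(article_tokens):
                if concept in hypernyms(tok_a):
                    candidates.append((similarity(i, j), i, tok_a))
    summary = list(gen_summary)
    unresolved = {i for _, i, _ in candidates}
    for score, i, tok_a in sorted(candidates, reverse=True):
        if i in unresolved:                                  # best candidate wins
            summary[i] = tok_a
            unresolved.discard(i)
    return summary
```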
Eventually, Algorithm 3 returns the final summary summary (line 22) in human-readable 5087 form, which also contains specific information according to the source text. Algorithm 3 works for both strategies of Section 3.1. In the LG strategy (Section 3.1.2), it is trivial to check whether tokens exists in the taxonomy path of tokena and therefore become candidate for replacement. In the case of NEG (Section 3.1.1), tokens (e.g. a general concept of the summary such as location or person) may be replaced by a concept of the article, when the taxonomy path of the latter contains the former. Finally, an important aspect affecting the performance of Algorithm 3 is the choice of the similarity function (line 9), which is a hyperparameter of the approach. Candidate similarity functions range from some well established indices like the cosine distance or the Jaccard coefficient to more complex measures like the word mover distance (Kusner et al., 2015) and the Levenshtein edit distance (Yujian and Bo, 2007). Of course, the optimal choice is highly dependant on the available data and we further reason on this subject on the experimental part of this submission. 6 Experiments & Results The experimental methodology followed in this work is in accordance with some widely-adopted practices in the relevant literature (Rush et al., 2015; Nallapati et al., 2016; Chopra et al., 2016; See et al., 2017; Gao et al., 2019). 6.1 Datasets Two popular datasets used in automatic TS tasks have been selected; Gigaword (Napoles et al., 2012) and DUC 2004 (Over et al., 2007). The first dataset, Gigaword, is obtained as it is described by Rush et al. (2015) and further preprocessed in order to remove duplicate entries, punctuation and summaries whose length is either greater than or equal to the length of the articles they summarize. Moreover, the dataset has been normalized by expanding the contractions in the text (e.g. “I’ve” to “I have”)1. After the completion of this step, the training set contains about 3 million article-summary pairs which consist of 99, 224 unique words (out of a total of 110 million words). The average article and summary length is 28.9 and 8.3 words, respectively. Finally, 4, 000 pairs have been selected randomly from the test set 1Expanding of contractions is performed by pycontractions package: https://pypi.org/project/pycontractions/ to form the validation set and another 4.000 pairs were also randomly selected to form the final test vectors as it is commonly done in the relevant literature (Rush et al., 2015; Nallapati et al., 2016; Chopra et al., 2016; Gao et al., 2019). The DUC 2004 dataset, on the other hand, contains 500 news articles and 4 human-generated summaries for each one of them. The same preprocessing methodology is applied to this dataset as well, but since it contains very few instances it is solely used for evaluation purposes (and not during model training). As it is a common practice in relevant experimental procedures, only the first sentence of the articles is used and the summaries are set to have a maximum length of 75 bytes (Rush et al., 2015; Nallapati et al., 2016; Gao et al., 2019). 6.2 Baseline and competitive approaches The deep learning model outlined in Section 4 serves as the baseline approach. Its optimal hyperparameters are reported in the subsequent section; however, no generalization scheme is used. The baseline approach is tested on both datasets (Gigaword and DUC 2004). 
Additionally, the results of some other approaches (ABS+ (Rush et al., 2015), RAS-Elman (Chopra et al., 2016), words-lvt5k-1sent (Nallapati et al., 2016) and GLEAM (Gao et al., 2019)) are also reported on the DUC 2004 dataset. A direct comparison is possible, since the same evaluation methodology is adopted. Such a direct comparison is not possible for the Gigaword dataset, due to the extra preprocessing steps of our approach and the random sampling of the testing data. 6.3 Parameter tuning The methodology outlined in this work is dependant on a number of parameters and hyperparameters. Initially, the neural language model for the vector representation of words must be decided upon; after a brief experimentation with various representations and vector-spaces, pre-trained word2vec embeddings of size 300 were selected (Mikolov et al., 2013). Following, a suitable similarity function for Algorithm 3 (line 8) must be specified. Several notions of word similarity have been considered, ranging from simple indices in-between single words (e.g. cosine similarity, Jaccard coefficient) to more advanced measurements like the word 5088 mover distance (Kusner et al., 2015) and the Levenshtein Edit distance (Yujian and Bo, 2007). The approach that achieved the best result was that of the combination of cosine similarity of averaged word2vec vectors and cosine similarity based on bag of words. In particular, the best performance was achieved when the windows around the candidate and the generalized concepts were set to 10 and 6, respectively. The optimal hyper-parameters of the deep learning model (Section 4) have been determined to be as follows; The encoder (bi-directional LSTM) consists of two layers (of size 200 each), while the decoder (unidirectional LSTM) is single-layered, again of size 200. The batch size has been set to 64, the learning rate to 0.001 and the training data were randomly shuffled at each epoch. The employed optimization method has been the Adam algorithm (Kingma and Ba, 2014), with gradient norm clipping (Pascanu et al., 2013) and crossentropy as the loss function (Golik et al., 2013). Finally, all words of the vocabulary have been considered in the training phase and a beam search of width equal to 4 has been used in the evaluation phase. In order to assess the effect of the two generalization strategies discussed in Section 3.1, three distinct system configurations have been evaluated. The first one is the baseline approach of Section 6.2. The second system is an extension of the baseline, using NEG as the generalization methodology and the third one is also an extension of the baseline, employing the LG strategy. 
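A sketch of the context similarity used in Algorithm 3 (line 9) with the settings above is given below; averaging the embedding-based and bag-of-words cosines is our assumption, since the text only states that the two similarities are combined.

```python
import numpy as np

def context_similarity(sum_tokens, i, art_tokens, j, w2v,
                       win_gen=6, win_cand=10):
    """Context similarity between a generalized summary token (position i)
    and a candidate article token (position j): cosine of the averaged
    word2vec vectors of the two context windows, combined with a
    bag-of-words cosine.  Window sizes follow the tuning above."""
    def window(tokens, k, w):
        return tokens[max(0, k - w):k] + tokens[k + 1:k + 1 + w]

    def cosine(u, v):
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        return float(u @ v / (nu * nv)) if nu and nv else 0.0

    def avg_vec(ctx):
        vecs = [w2v[t] for t in ctx if t in w2v]
        return np.mean(vecs, axis=0) if vecs else np.zeros(300)  # 300-dim word2vec

    ctx_gen = window(sum_tokens, i, win_gen)
    ctx_cand = window(art_tokens, j, win_cand)

    emb_sim = cosine(avg_vec(ctx_gen), avg_vec(ctx_cand))
    vocab = sorted(set(ctx_gen) | set(ctx_cand))
    bow_gen = np.array([ctx_gen.count(t) for t in vocab], dtype=float)
    bow_cand = np.array([ctx_cand.count(t) for t in vocab], dtype=float)
    bow_sim = cosine(bow_gen, bow_cand)
    return (emb_sim + bow_sim) / 2.0
```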
police raided several locations near Input nobe after receiving word of a threat text but no evidence of a planned attack was found police raided several locations near Generalized location after receiving word of a text threat but no evidence of a planned attack was found Generalized police raided several locations near summary location Output police raided several locations near summary nobe Table 1: An example of NEG strategy from the input text to the output summary 6.4 Procedure As it has been discussed above, the experimental procedure includes three sets of experiments in for the second day in a row astronauts Input boarded space shuttle endeavour on text friday for liftoff on nasa first space station construction flight for the second day in a row astronauts Generalized boarded space equipment endeavour text on friday for rise on nasa first space station construction flight Generalized astronauts boarded spacecraft for summary rise Output astronauts boarded spacecraft for summary liftoff Table 2: An example of LG strategy from the input text to the output summary total, with two of them based on the generalization strategies of Section 3.1. The WordNet taxonomy of concepts has been used (Miller, 1995; Fellbaum, 1998), out of which the hypernyms and the taxonomy paths have been extracted. To select the appropriate synset for extracting its taxonomy path, we use the WordNet first sense, as it has proved to be a very hard baseline in knowledgebased word sense disambiguation approaches (Raganato et al., 2017). Both generalization strategies are only applied to nouns in text, which are identified by the application of part-of-speech tagging and more specifically, the Stanford log-linear partof-speech tagger (Toutanova et al., 2003). The set of named entities used in NEG strategy (Section 3.1.1) is E = {Location, Person, Organization}, as the datasets contain news articles which are dominated by relevant entities. The named entities are extracted from the text using a named entity recognizer (NER) (specifically, the Stanford NER, Finkel et al., 2005) in conjunction with the WordNet taxonomy. Firstly, the pre-trained NER is executed and then the remaining named entities are extracted from WordNet; when a term in the text has a hypernym in the predefined set of named entities E, this word is annotated as a named entity. The performance of this generalization strategy is assessed for various thresholds of word frequency θf (as stated in the respective Section, a word is generalized only if its frequency in the dataset is less than θf). The level of generalization (i.e. the taxonomy depth of a generalized concept) used in LG (Section 3.1.2) has been determined to be d = 5. This level has been chosen as the concepts become very general when d < 5, rendering the production of 5089 Model θf ROUGE-1 ROUGE-2 ROUGE-L NEG-100 100 45.95 23.52 43.30 NEG-200 200 46.20 23.86 43.45 NEG-500 500 46.30 23.88 43.94 NEG-1k 1000 46.14 23.31 43.35 NEG-infinity ∞ 44.45 21.91 41.34 LG-100 100 46.34 24.02 43.65 LG-200 200 46.09 23.91 43.34 LG-500 500 46.04 23.64 43.25 LG-1k 1000 45.57 23.09 42.77 LG-infinity ∞ 42.49 19.52 39.53 Baseline 44.35 22.43 41.87 Table 3: ROUGE scores on the Gigaword dataset. the final summary a difficult task (Section 5). In a similar fashion to the NEG strategy, the performance of the LG approach is assessed for various thresholds of word frequency θf. The overall architecture and all model configurations were trained on single Titan XP GPU2. 
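Returning briefly to the taxonomy extraction described above, a concept's path can be obtained directly from NLTK's WordNet interface under the first-sense heuristic, assuming the WordNet corpus has been downloaded; keeping only the first hypernym path when a synset has several is our own tie-break.

```python
from nltk.corpus import wordnet as wn

def wordnet_taxonomy_path(word):
    """Taxonomy path of a noun under the WordNet first-sense heuristic:
    take the first (most frequent) noun synset and walk its hypernym
    chain up to the root."""
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return []
    first_sense = synsets[0]
    # hypernym_paths() runs root -> synset; reverse to match Definition 3.3
    path = first_sense.hypernym_paths()[0][::-1]
    return [s.lemmas()[0].name() for s in path]

# wordnet_taxonomy_path("banana") starts with 'banana' and ends with 'entity';
# its length minus one is the taxonomy depth of Definition 3.4.
```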
Each training epoch took approximately 3.5 hours and all models converged around epoch 15. Finally, the performance of all systems is measured on the official ROUGE package (Lin, 2004) of ROUGE-1 (word overlap), ROUGE-2 (bigram overlap) and ROUGE-L (longest common sequence). More specifically, for Gigaword testing data the F-measure of ROUGE score is reported while for the DUC dataset the evaluation metric is the standard ROUGE recall (Nallapati et al., 2016; Chopra et al., 2016; Gao et al., 2019). Table 1 illustrates an example NEG approach which includes the input text, the generalized text (after the application of the NEG algorithm), the predicted generalized summary and the output summary (after post-processing the predicted summary). The underlined words are those that have been generalized and vice versa. Similarly, Table 2 outlines an example LG approach. 6.5 Results Table 3 illustrates the ROUGE scores on the Gigaword dataset for both generalization strategies (NEG, LG) and various thresholds of word frequency θf. Similarly, Table 4 contains the ROUGE scores on the DUC 2004 dataset. Apart from the NEG-infinity and LG-infinity configurations (which over-generalize), the other configurations of our model outperform the baseline approach on both datasets. 2Source code: https://github.com/pkouris/abtextsum Intuitively, improved results were expected especially in the generalization of low-frequency words, as machine learning approaches typically require a sufficient number of samples in order to be trained properly. This is exactly the case for the LG strategy, as the best results are obtained when generalizing words that have at most 100 occurrences (θf = 100) in the Gigaword dataset. Similarly, the best ROUGE1 and ROUGE-2 scores for the LG strategy in the DUC 2004 dataset are also obtained when θf = 100. However, the NEG strategy exhibits its best performance at θf = 500 on the Gigaword dataset and at θf = 1000 on the DUC 2004 dataset, with the exception of the ROUGE-2 metric which is maximized at θf = 500. Therefore, the LG strategy seems to be more fit in improving the performance of the deep learning system when generalizing low-frequency words. On the other hand, the NEG strategy has a positive effect on system performance, even though frequent words (θf ≥500) are generalized to the predefined named entities. This may be happening because most words describing named entities (especially those in E) have a specific function within the text and the reduction of their number (through the generalization to named entities) may lead to a more accurate prediction. In both strategies, the configurations that generalize all concepts regardless of their frequency (θf = ∞), exhibit the worst performance. In these cases of over-generalization, the deep learning model fails to learn the particular function of each word, as the generalized terms have a wide range of uses in the text. 
Another possible explanation of this failure is that the post-processing task of producing the final summary is not able 5090 Model θf ROUGE-1 ROUGE-2 ROUGE-L NEG-100 100 27.85 9.74 25.79 NEG-200 200 27.80 9.57 25.23 NEG-500 500 28.50 10.07 26.11 NEG-1k 1000 28.73 9.87 26.12 NEG-infinity ∞ 27.33 9.01 24.41 LG-100 100 28.89 10.10 24.46 LG-200 200 28.68 9.84 25.76 LG-500 500 28.66 9.32 25.77 LG-1k 1000 28.40 9.21 25.43 LG-infinity ∞ 26.49 7.89 23.72 Baseline 27.56 8.90 25.20 ABS+ 28.18 8.49 23.81 RAS-Elman 28.97 8.26 24.06 words-lvt5k-1sent 28.61 9.42 25.24 GLEAM 29.51 9.78 25.60 Table 4: ROUGE scores on the DUC 2004 dataset. to accurately match the generalized concepts with specific words, due to a large amount of the former. Obviously, a trade-off exists between θf and the obtained performance. The last lines of Table 4 also exhibit that the best NEG and LG configurations outperform the other systems in terms of the ROUGE-2 and ROUGEL scores and demonstrate a near-optimal performance when the ROUGE-1 score is considered, thereby indicating the robustness of the proposed methodology on the DUC 2004 dataset. In case of the Gigaword dataset, the further preprocessing of data has led to a significant performance improvements, especially in comparison to previous work (Chopra et al., 2016; Gao et al., 2019). Even though the aforementioned steps have resulted in more informative and accurate summaries, they do not permit a direct comparison with previously reported results. 7 Conclusion and Future Work Even though deep learning approaches have been widely used in abstractive TS, it is evident that their combination with semantic-based or structure-based methodologies needs to be more thoroughly studied. In this direction, the proposed novel framework combines deep learning techniques with semantic-based content methodologies so as to produce abstractive summaries in generalized form, which, in turn, are transformed into the final summaries. The experimental results have demonstrated that the followed approach enhances the performance of deep learning models. The positive results may be attributed to the optimization of the parameters of the deep leaning model and the ability of the method to handle OOV and very low frequency words. The obtained results show that the proposed approach is an effective methodology of handling OOV or rare words and it improves the performance of text summarization. Of course, certain aspects of the proposed methodology could be extended. Since currently only nouns are considered for generalization, an expansion to verbs could result in additional improvement. Moreover, as the ambiguity is a challenging problem in natural language processing, it would be interesting to capture the particular meaning of each word in the text so that our methodology manages to uncover the specific semantic meaning of words. Finally, the distinct semantic representation of each word could further enhance the performance of the deep learning model. Acknowledgments We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. References Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D Trippe, Juan B Gutierrez, and Krys Kochut. 2017. Text summarization techniques: a brief survey. arXiv preprint arXiv:1707.02268. S Alshaina, Ansamma John, and Aneesh G Nath. 2017. Multi-document abstractive summarization based on 5091 predicate argument structure. 
In Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2017 IEEE International Conference on, pages 1–6. IEEE. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Riadh Belkebir and Ahmed Guessoum. 2016. Concept generalization and fusion for abstractive sentence generation. Expert Systems with Applications, 53:43–56. Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. 2013. Audio chord recognition with recurrent neural networks. In ISMIR, pages 335– 340. Citeseer. Qian Chen, Xiao-Dan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling document. In IJCAI, pages 2754–2760. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Christiane Fellbaum, editor. 1998. WordNet: An electronic lexical database. MIT Press. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 363–370, Stroudsburg, PA, USA. Association for Computational Linguistics. Mahak Gambhir and Vishal Gupta. 2017. Recent automatic text summarization techniques: a survey. Artificial Intelligence Review, 47(1):1–66. Yang Gao, Yang Wang, Luyang Liu, Yidi Guo, and Heyan Huang. 2019. Neural abstractive summarization fusing by global generative topics. Neural Computing and Applications. Pierre-Etienne Genest and Guy Lapalme. 2011. Framework for abstractive summarization using text-to-text generation. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 64–73. Association for Computational Linguistics. Pavel Golik, Patrick Doetsch, and Hermann Ney. 2013. Cross-entropy vs. squared error training: a theoretical and experimental comparison. In Interspeech, volume 13, pages 1756–1760. Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 273–278. IEEE. Som Gupta and S. K Gupta. 2019. Abstractive summarization: An overview of the state of the art. Expert Systems with Applications, 121:49 – 65. Fadi Hassan, Josep Domingo-Ferrer, and Jordi SoriaComas. 2018. Anonymization of unstructured data via named-entity recognition. In International Conference on Modeling Decisions for Artificial Intelligence, pages 296–305. Springer. Monika Joshi, Hui Wang, and Sally McClean. 2018. Dense semantic graph and its application in single document summarisation. In Emerging Ideas on Information Filtering and Retrieval, pages 55–67. Springer. Atif Khan, Naomie Salim, Haleem Farman, Murad Khan, Bilal Jan, Awais Ahmad, Imran Ahmed, and Anand Paul. 2018. Abstractive text summarization based on improved semantic graph approach. International Journal of Parallel Programming, pages 1–25. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. 
From word embeddings to document distances. In International Conference on Machine Learning, pages 957–966. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Junyang Lin, Xu Sun, Shuming Ma, and Qi Su. 2018. Global encoding for abstractive summarization. arXiv preprint arXiv:1805.03989. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Ibrahim F Moawad and Mostafa Aref. 2012. Semantic graph reduction approach for abstractive text summarization. In Computer Engineering & Systems (ICCES), 2012 Seventh International Conference on, pages 132–138. IEEE. N Moratanch and S Chitrakala. 2016. A survey on abstractive text summarization. In Circuit, Power and Computing Technologies (ICCPCT), 2016 International Conference on, pages 1–7. IEEE. 5092 Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95–100. Association for Computational Linguistics. Paul Over, Hoa Dang, and Donna Harman. 2007. Duc in context. Information Processing & Management, 43(6):1506–1520. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Shengli Song, Haitao Huang, and Tongxiao Ruan. 2018. Abstractive text summarization using lstmcnn based deep learning. Multimedia Tools and Applications, pages 1–19. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 173–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Li Yujian and Liu Bo. 2007. A normalized levenshtein distance metric. IEEE transactions on pattern analysis and machine intelligence, 29(6):1091–1095. Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2016. Abstractive cross-language summarization via translation model enhanced predicate argument structure fusing. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(10):1842–1853.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5093–5100 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5093 Studying Summarization Evaluation Metrics in the Appropriate Scoring Range Maxime Peyrard∗ EPFL [email protected] Abstract In summarization, automatic evaluation metrics are usually compared based on their ability to correlate with human judgments. Unfortunately, the few existing human judgment datasets have been created as by-products of the manual evaluations performed during the DUC/TAC shared tasks. However, modern systems are typically better than the best systems submitted at the time of these shared tasks. We show that, surprisingly, evaluation metrics which behave similarly on these datasets (average-scoring range) strongly disagree in the higher-scoring range in which current systems now operate. It is problematic because metrics disagree yet we can’t decide which one to trust. This is a call for collecting human judgments for high-scoring summaries as this would resolve the debate over which metrics to trust. This would also be greatly beneficial to further improve summarization systems and metrics alike. 1 Introduction The progress in summarization is tightly intertwined with the capability to quickly measure improvements. Thus, a significant body of research was dedicated to the development of automatic metrics (Lloret et al., 2018). Yet, this remains an open problem (Rankel et al., 2013). Typically, evaluation metrics are compared based on their ability to correlate with humans (Lin and Hovy, 2003). Then, the selected metrics heavily influence summarization research by guiding progress (Lloret et al., 2018) and by providing supervision for training summarization systems (Yogan et al., 2016). Despite their central role, few human judgment datasets have been created. The existing ones are the result of the manual evaluations performed ∗Research partly done at UKP Lab from TU Darmstadt Figure 1: The blue distribution represents the score distribution of summaries available in the human judgment datasets of TAC-2008 and TAC-2009. The red distribution is the score distribution of summaries generated by mordern systems. The green distribution corresponds to the score distribution of summaries we generated in this work as described in section 3. during shared tasks (Dang and Owczarzak, 2008, 2009). Thus, the annotated summaries are mostly average compared to nowadays standards. Indeed, the best systems submitted at the time of these sharedtasks have typically served as baselines for subsequent works. This is illustrated by figure 1, which compares the score distribution of summaries in human judgment datasets with the score distribution of modern summarization systems.1 The score distribution on which evaluation metrics are tested (blue zone) differs from the one in which they now operate (red zone). Thus, there is no guarantee that evaluation metrics behave according to human judgments in the high-scoring range. Yet, summarization systems explicitly target highscoring summaries (Radev et al., 2003). In this work, we study several evaluation metrics in this high-scoring range based on an automatically generated dataset. We show that, even though current evaluation metrics correlate well 1for modern systems, we used the scores of summaries from Hong et al. (2014) and other recent approaches (Cao et al., 2015; Nallapati et al., 2017). 
5094 with each other in the average range, they strongly disagree for high-scoring summaries. This is related to the Simpson paradox, where different conclusions are drawn depending on which slice of the population is considered (Wagner, 1982). This is problematic because current metrics cannot be distinguished based solely on an analysis of available human judgments. Nevertheless, they will promote very different summaries and systems. These results call for the gathering of human judgments in the high-scoring range. We provide data and code to reproduce our experiments.2 Contributions: (i) We present a simple methodology to study the behavior of metrics in the high-scoring range. (ii) We observe low and even some negative correlations in this range. (iii) This work serves as a motivation to gather human annotations in the relevant scoring range. 2 Background Usually, evaluation metrics are compared based on their ability to correlate with human judgments (Lin and Hovy, 2003). Several works followed this principle and provided different recommendations about which metric to use. For instance, Owczarzak et al. (2012) used a signed Wilcoxon test to find significant differences between metrics and recommended to use ROUGE-2 recall with stemming and stopwords not removed. In a wider study, Graham (2015) found ROUGE-2 precision with stemming and stopwords removed to be the best. Rankel et al. (2013) used accuracy and found ROUGE-4 to perform well. They also observe that the correlation between ROUGE and human judgments decreases when looking at the best systems only. This is in agreement with our work, except that we look at summaries better than the current state-of-the-art. Radev et al. (2003) also observed that the high-scoring range is the most relevant for comparing evaluation metrics because summarizers aim to extract high-scoring summaries. However, they performed analysis on the best scoring summaries from 6 systems which remain average compared to nowadays standard. Our analysis differs from such meta-evaluation (evaluation of evaluation metrics) because we do not provide another metric recommendation. In2https://github.com/PeyrardM/ acl-2019-Compare_Evaluation_Metrics stead, we start from the observation that human judgments are limited in their coverage and analyze the behavior of existing candidate metrics in the high-scoring range not available in these datasets. These previous works computed correlations between metrics and humans, we compute correlations between pairs of metrics in scoring ranges for which there are no human judgments available. 3 Data Generation In this work, we study the following metrics: ROUGE-2 (R-2): measures the bigram overlap between the candidate summary and the pool of reference summaries (Lin, 2004). ROUGE-L (R-L): a variant of ROUGE which measures the size of the longest common subsequence between candidate and reference summaries. ROUGE-WE (R-WE): instead of hard lexical matching of bigrams, R-WE uses soft matching based on the cosine similarity of word embeddings (Ng and Abrecht, 2015). JS divergence (JS-2): uses Jensen-Shannon divergence between bigram distributions of references and candidate summaries (Lin et al., 2006). S3: a metric trained explicitly to maximize its correlation with manual Pyramid annotations (Peyrard et al., 2017). We chose these metrics because they correlate well with available human judgments (about .4 Kendall’s τ; the exact numbers are provided in appendix A) and are easily available. 
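To make the JS-2 metric concrete, the following is a minimal Python sketch of Jensen-Shannon divergence between bigram distributions. It is an illustration of the idea rather than the implementation used in this paper; tokenization, handling of multiple references, and the sign convention used when correlating JS-2 with the other metrics (divergence versus negated divergence) are assumptions.

from collections import Counter
from math import log2

def bigram_dist(tokens):
    # Normalized bigram distribution of a token sequence.
    counts = Counter(zip(tokens, tokens[1:]))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def js2(reference_tokens, candidate_tokens):
    # Jensen-Shannon divergence between reference and candidate bigram distributions.
    p, q = bigram_dist(reference_tokens), bigram_dist(candidate_tokens)
    m = {bg: 0.5 * (p.get(bg, 0.0) + q.get(bg, 0.0)) for bg in set(p) | set(q)}
    def kl(a, b):
        return sum(a[bg] * log2(a[bg] / b[bg]) for bg in a if a[bg] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

Because a lower divergence indicates a candidate closer to the references, the divergence would typically be negated before computing rank correlations with ROUGE-style metrics.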
For a recent overview of evaluation metrics, we recommend Lloret et al. (2018). Once an evaluation metric becomes standard, it is optimized, either directly by supervised methods or indirectly via repeated comparisons of unsupervised systems. To mimic this procedure, we optimized each metric using a recently introduced genetic algorithm for summarization (Peyrard and Eckle-Kohler, 2016).3 The metric m is used as the fitness function. The resulting population is a set of summaries ranging from random to upperbound according to m. For both TAC-2008 and TAC-2009, we used a population of 400 summaries per topic (per metric). The final dataset contains 160, 523 summaries for an average of 3https://github.com/UKPLab/ coling2016-genetic-swarm-MDS 5095 R-WE R-L JS-2 S3 R-2 (W) (A) (T) .774 .644 .016 .708 .532 -.187 .871 .887 .284 .799 .744 .096 R-WE (W) (A) (T) .692 .462 -.254 .703 .530 -.145 .824 .752 .131 R-L (W) (A) (T) .647 .492 -.274 .709 .571 -.200 JS-2 (W) (A) (T) .738 .659 -.046 Table 1: Pairwise correlation (Kendall’s τ) between evaluation metrics on various scoring range. (T) is the high-scoring range, (A) is the average-scoring range (human judgment datasets) and (W) is the whole scoring range 1, 763 summaries per topic (less than 5 ∗400 due to removed duplicates). We refer to this dataset as (W) as it covers the whole scoring range. In order to focus on the top-scoring summaries, we preserve the summaries scoring higher than the LexRank baseline (Erkan and Radev, 2004) for at least one metric. LexRank is a graph-based extractive summarizer often used as a baseline. Thus, most current and future summarization systems should perform better and should be covered by the selected scoring range. Besides, LexRank is strong enough to discard a large number of average scoring summaries. The resulting dataset contains an average of 102 summaries kept per topic. This dataset of top-scoring summaries is noted (T). The ROUGE-2 score distribution of (T) is depicted by the green area in figure 1. We provide the pseudo-code and other details concerning the data generation procedure in appendix B. Additionally, we refer to the summaries available as part of the human judgment datasets as (A) because they cover the average-scoring range. 4 Correlation Analysis We compute the pairwise correlations between evaluation metrics averaged over all topics for different scoring ranges and report the results in table 1. For (A) and (W), we observe high correlations between pairs of metrics (> .6 Kendall’s τ). JS-2 and R-2 have the strongest correlation, while R-L is less correlated with the others. It is worth remembering that JS-2 and R-2 both operate on Figure 2: Percentage of disagreement between metrics for increasing scores of summary pairs (Scores have been normalized). bigrams which also explain their stronger connection. However, in the high-scoring range (T), correlations are low and often negative. Even, R-2 and JS-2 only retain little correlation (< 0.3 τ). For most pairs, the correlations are close to what would be expected from random behavior. Additionally, R-L has negative correlations with other metrics. It indicates that there is no global agreement on what constitutes improvements when the summaries are already better than the baseline. This is akin to the Simpson paradox because considering different sub-populations yields different conclusions (Wagner, 1982). 
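The pairwise correlations reported in Table 1 can be reproduced, in outline, by scoring every summary of a topic with every metric, computing Kendall's tau for each metric pair within each topic, and averaging over topics. The sketch below assumes SciPy's kendalltau and an illustrative data layout; it is not the authors' released code.

from itertools import combinations
from statistics import mean
from scipy.stats import kendalltau

def pairwise_metric_correlations(topics, metrics):
    # topics: list of dicts mapping metric name -> list of scores, one score per
    # summary of that topic (all metrics are computed on the same summaries).
    # Returns the topic-averaged Kendall's tau for every metric pair.
    results = {}
    for m1, m2 in combinations(metrics, 2):
        taus = []
        for topic in topics:
            tau, _ = kendalltau(topic[m1], topic[m2])
            taus.append(tau)
        results[(m1, m2)] = mean(taus)
    return results

Restricting the topics to the summaries above the LexRank baseline would correspond to the high-scoring range (T), while using all generated summaries corresponds to (W).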
In fact, it is simple to distinguish obviously bad from obviously good summaries, which results in superficially high correlations when the whole scoring range is considered (Radev et al., 2003). However, summarization systems target the high-scoring range and evaluation metrics should accurately distinguish between high-scoring summaries. Unfortunately, existing metrics disagree wildly in this range. Disagreement increases with higher-scoring summaries: We also visualize the gradual change in metrics agreement when moving from the average to the high-scoring range in figure 2. For each pair of metrics, the disagreement clearly increases for higher scoring summary pairs. This confirms that metrics disagree for high-scoring summaries. It is more pronounced for some pairs like the ones involving R-L as already observed in table 1. 5096 The problem with reporting several disagreeing metrics: It is a common good practice to report the results of several evaluation metrics. In the average scoring range, where metrics generally agree, this creates a robust measure of progress. The specificities of each metric are averaged-out putting the focus on the general trend. However, when the metrics do not correlate, improvements according to one metric are rarely improvements for the other ones. Let M = {m1, . . . , mn} be the set of evaluation metrics. Then, for a topic T from the dataset (W), we select a summary s and ask: among the summaries which are better than s for one metric (N), how many are better for all metrics (F)? This is given by: F N = |{x ∈T | ∀m ∈M, m(x) > m(s)}| |{x ∈T | ∃m ∈M, m(x) > m(s)}| (1) Here, m(x) is the score of the summary x according to the metric m. Thus, F N measures the difficulty of finding consistent improvements across metrics. The process is repeated for 5, 000 randomly sampled summaries in the sources. In figure 3, the resulting F N ratios are reported against the normalized average score of the selected summaries s. We observe a quick decrease in the ratio F N . The proportion of consistent improvements (agreed by all metrics) is dropping when the average score of summaries increases. When the baseline scores go up, the disagreement between metrics is strong enough that we cannot identify summaries which are considered better than the baseline for each metric. Thus, there is no common trend between metrics that can be exploited by reporting them together. Discussion: Intuitively, smaller populations and narrow scoring ranges can also lead to lower correlations. However, (T) displays low correlations with 102 summaries per topic whereas (A) has strong correlations with 50 summaries per topic. Also, the high-scoring range covers 38% of the full scoring range (from LexRank to upper-bound), while human judgments cover 35% of the full scoring range. Thus, the width of the scoring range and the population size do not explain the Figure 3: The x-axis is the score of the normalized average score of s given by 1 n P i mi(s) after the metrics have been normalized between 0 and 1. On the y-axis: F N associated to the sampled summary s. We also report the average performance of current systems. observed differences. As a limitation of this study, we can note that the data generation procedure simulates further progress in summarization by stochastically optimizing each evaluation metric. While this constitutes a good approximation, there is no guarantee that high-scoring summaries are sampled with the same distribution as future summarization systems. 
However, the sampling still covers a large diversity of high-scoring summary and reveal general properties of evaluation metrics. Other tasks: Our analysis is performed on TAC-2008 and TAC-2009 because they are benchmark datasets typically used for comparing evaluation metrics. However, our approach can be applied to any dataset. In particular, for future work, this study could be replicated for related fields like Machine Translation or Natural Language Generation. 5 Conclusion Evaluation metrics behave similarly on the average scoring range covered by existing human judgment datasets. Thus, we cannot clearly decide which one is the best. Yet, we showed that they will promote very different summaries in the highscoring range. This disagreement is strong enough that there is no common trend which could be captured by reporting improvements across several metrics. This casts some doubts on the evaluation methodologies in summarization and calls for the collection of human annotations for high-scoring 5097 summaries. Indeed, since metrics strongly disagree in the high-scoring regime, at least some of them are deviating largely from humans. By collecting human judgments in this specific range, we could identify the best ones using standard meta-evaluation techniques. Such annotations would also be greatly beneficial to improve summarization systems and evaluation metrics alike. Acknowledgements This work was partly supported by the German Research Foundation (DFG) as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1, and via the German-Israeli Project Cooperation (DIP, grant No. GU 798/17-1). We also thank the anonymous reviewers for their comments. References Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and WANG Houfeng. 2015. Learning Summary Prior Representation for Extractive Summarization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 829–833. Hoa Trang Dang and Karolina Owczarzak. 2008. Overview of the TAC 2008 Update Summarization Task. In Proceedings of the First Text Analysis Conference (TAC 2008), pages 1–16. Hoa Trang Dang and Karolina Owczarzak. 2009. Overview of the TAC 2009 Summarization Track. In Proceedings of the First Text Analysis Conference (TAC 2009), pages 1–12. G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based Lexical Centrality As Salience in Text Summarization. Journal of Artificial Intelligence Research, pages 457–479. Yvette Graham. 2015. Re-evaluating automatic summarization with bleu and 192 shades of rouge. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 128–137. Association for Computational Linguistics. Kai Hong, John M. Conroy, benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A Repository of State of the Art and Competitive Baseline Summaries for Generic News Summarization. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 1608–1616. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Chin-Yew Lin, Guihong Cao, Jianfeng Gao, and JianYun Nie. 2006. An Information-Theoretic Approach to Automatic Evaluation of Summaries. 
In Proceedings of the Human Language Technology Conference at NAACL, pages 463–470, New York City, USA. Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Cooccurrence Statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, volume 1, pages 71–78. Elena Lloret, Laura Plaza, and Ahmet Aker. 2018. The Challenging Task of Summary Evaluation: An Overview. Language Resources and Evaluation, 52(1):101–148. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents. In AAAI, pages 3075–3081. Jun-Ping Ng and Viktoria Abrecht. 2015. Better summarization evaluation with word embeddings for rouge. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1925–1930, Lisbon, Portugal. Association for Computational Linguistics. Karolina Owczarzak, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2012. An Assessment of the Accuracy of Automatic Evaluation in Summarization. In Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization, pages 1–9, Montreal, Canada. Association for Computational Linguistics. Maxime Peyrard, Teresa Botschen, and Iryna Gurevych. 2017. Learning to score system summaries for better content selection evaluation. In Proceedings of the EMNLP workshop ”New Frontiers in Summarization”, pages 74–84. Association for Computational Linguistics. Maxime Peyrard and Judith Eckle-Kohler. 2016. A General Optimization Framework for MultiDocument Summarization Using Genetic Algorithms and Swarm Intelligence. In Proceedings of the 26th International Conference on Computational Linguistics (COLING), pages 247 – 257. Maxime Peyrard and Judith Eckle-Kohler. 2017. A principled framework for evaluating summarizers: Comparing models of summary quality against human judgments. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), volume Volume 2: Short Papers, pages 26–31. Association for Computational Linguistics. 5098 Maxime Peyrard and Iryna Gurevych. 2018. Objective function learning to match human judgements for optimization-based summarization. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 654–660. Association for Computational Linguistics. Dragomir R. Radev, Simone Teufel, Horacio Saggion, Wai Lam, John Blitzer, Hong Qi, Arda C¸ elebi, Danyu Liu, and Elliott Drabek. 2003. Evaluation Challenges in Large-scale Document Summarization. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 375–382. Peter A. Rankel, John M. Conroy, Hoa Trang Dang, and Ani Nenkova. 2013. A Decade of Automatic Content Evaluation of News Summaries: Reassessing the State of the Art. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 131–136, Sofia, Bulgaria. Association for Computational Linguistics. Clifford H Wagner. 1982. Simpson’s Paradox in Real Life. The American Statistician, 36(1):46–48. Jaya Kumar Yogan, Ong Sing Goh, Basiron Halizah, Hea Choon Ngo, and C Puspalata. 2016. A Review on Automatic Text Summarization Approaches. Journal of Computer Science, 12(4):178– 190. 
5099 A Correlation with Human Judgments For each metric that we consider in the paper, we computed its correlation with human judgments in both TAC-2008 and TAC-2009 datasets (Peyrard and Eckle-Kohler, 2017). We used two kinds of human annotations available in these datasets: Responsiveness which is score given by human on 5-point LIKERT scale, and Pyramid where annotators follow the Pyramid annotation guideline to annotate content selection. The correlations are computed with Kendall’s τ for each topic and averaged over all topics in both datasets. The results are reported in table 2. responsiveness Pyramid R-2 .391 .451 R-WE .378 .431 R-L .353 .392 JS-2 .379 .444 S3 .403 .477 Table 2: Correlation of automatic metrics with human judgments for TAC-2008 and TAC-2009. The correlation is measured with Kendall’s τ. B Data Generation Algorithm The general data generation procedure is described by algorithm 1. The function Score(S, M) takes a list S of summaries and a list M of evaluation metrics and outputs a list where each summary has been scored by each evaluation metric in M. The SampleSummaries function is the genetic algorithm introduced genetic algorithm for summarization (Peyrard and Eckle-Kohler, 2016; Peyrard and Gurevych, 2018). The evaluation metric is optimized by the genetic algorithm and the resulting population is a set of summaries ranging from random to upper-bound. We used a population of k = 400. Then, the final dataset contains 160, 523 summaries for an average of 1, 763 summaries per topic (less than 5 ∗400 due to removed duplicates). This algorithm results in a dataset covering the whole scoring range. In order to filter out low and average scoring summaries, we employ the procedure described by algorithm 2. In this algorithm, the function Score(T , m) returns a list of all the summaries in the topic T scored by the metric m. The baseline B is an existing algorithm used as Algorithm 1: Generate a Dataset of Scored Summaries Input : D = {s1, . . . , sn}: document as a set of sentences L: length constraint k: number of summaries to generate M = {m1, . . . , me}: evaluation metrics considered Output: C = [S1, . . . , Sk]: a set of scored summaries 1 Function GenerateData(D, L, k, M): 2 C := [] 3 for m ∈M do 4 S := SampleSummaries(D, L, k, m) 5 S := RemoveDuplicate(S) 6 C ←Score(S, M) 7 end a threshold: for each metric, we keep every summary scoring higher than B. The final set of topscoring summaries is the union of the top-scoring summaries of each metric. For the thresholding, we chose LexRank (Erkan and Radev, 2004), because it is a heavily used baseline. Therefore, most current and future summarization systems should perform better and should be covered by the selected scoring range. Besides, LexRank is strong enough to discard a large number of average scoring summaries. After the selection, we ended up with an average of 102 summaries kept per topic. C Scatter Matrix Plots: TAC-2008 and TAC-2009 We compute the pairwise correlation between metrics using the existing human judgments (TAC2008 and TAC-2009). Figure 4 is the scatter matrix plot describing the correlations between pairs of candidate metrics. The number and the cell background color indicate the Kendall’s τ between the two metrics. This measures the proportion of pairs of summaries ranked in the same order by both metrics. Thus, the kendall’s τ are the ones depicted in the paper in table 1. Diagonal cells represent the score distribution of summaries for the given metric. 5100 (a) Whole scoring range (W). 
(b) Average-scoring range (A). (c) High-scoring range (T). Figure 4: Pairwise correlation between evaluation metrics on various scoring range. The generated dataset uses the topics from TAC-2008 and TAC-2009. The human judgments are the ones available as part of TAC-2008 and TAC-2009. Algorithm 2: Select Top-Scoring Summaries Input : D = {T1, . . . , Tn}: dataset as a list of topics (each topic contains a list of summaries) B: baseline algorithm used to decide the high-scoring summaries M = {m1, . . . , me}: evaluation metrics considered Output: D(top): dataset which contain only top-scoring summaries 1 Function SelectTopSummaries(D, B, M): 2 D(top) := [] 3 for T ∈D do 4 T (top) := [] 5 for m ∈M do 6 S := [] 7 for s ∈Score(T , m) do 8 if m(s) > m(B(T .source)) then 9 S ←s 10 end 11 end 12 T (top) := T (top) ∪S 13 end 14 D(top) ←T (top) 15 end
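Read as code, Algorithm 2 amounts to keeping, for each topic, every summary that beats the baseline on at least one metric and taking the union over metrics. The Python sketch below assumes illustrative data structures (per-topic lists of summaries scored as metric-to-score dictionaries) and is not the released implementation.

def select_top_summaries(dataset, baseline_scores, metrics):
    # dataset: list of topics, each a list of summaries scored as {metric: score} dicts.
    # baseline_scores: per-topic dict {metric: score of the baseline summary, e.g. LexRank}.
    top_dataset = []
    for topic, baseline in zip(dataset, baseline_scores):
        kept = []
        for summary in topic:
            # keep the summary if it beats the baseline for at least one metric
            if any(summary[m] > baseline[m] for m in metrics):
                kept.append(summary)
        top_dataset.append(kept)
    return top_dataset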
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5101–5106 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5101 Simple Unsupervised Summarization by Contextual Matching Jiawei Zhou Harvard University [email protected] Alexander M. Rush Harvard University [email protected] Abstract We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models, one that is generic (i.e. pretrained), and the other that is specific to the target domain. We show that by using a productof-experts criteria these are enough for maintaining continuous contextual matching while maintaining output fluency. Experiments on both abstractive and extractive sentence summarization data sets show promising results of our method without being exposed to any paired data. 1 Introduction Automatic text summarization is the process of formulating a shorter output text than the original while capturing its core meaning. We study the problem of unsupervised sentence summarization with no paired examples. While datadriven approaches have achieved great success based on various powerful learning frameworks such as sequence-to-sequence models with attention (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016), variational auto-encoders (Miao and Blunsom, 2016), and reinforcement learning (Paulus et al., 2017), they usually require a large amount of parallel data for supervision to do well. In comparison, the unsupervised approach reduces the human effort for collecting and annotating large amount of paired training data. Recently researchers have begun to study the unsupervised sentence summarization tasks. These methods all use parameterized unsupervised learning methods to induce a latent variable model: for example Schumann (2018) uses a length controlled variational autoencoder, Fevry and Phang (2018) use a denoising autoencoder but only for extractive summarization, and Wang and Lee (2018) apply a reinforcement learning procedure combined with GANs, which takes a further step to the goal of Miao and Blunsom (2016) using language as latent representations for semisupervised learning. This work instead proposes a simple approach to this task that does not require any joint training. We utilize a generic pretrained language model to enforce contextual matching between sentence prefixes. We then use a smoothed problem specific target language model to guide the fluency of the generation process. We combine these two models in a product-of-experts objective. This approach does not require any task-specific training, yet experiments show results on par with or better than the best unsupervised systems while producing qualitatively fluent outputs. The key aspect of this technique is the use of a pretrained language model for unsupervised contextual matching, i.e. unsupervised paraphrasing. 2 Model Description Intuitively, a sentence summary is a shorter sentence that covers the main point succinctly. It should satisfy the following two properties (similar to Pitler (2010)): (a) Faithfulness: the sequence is close to the original sentence in terms of meaning; (b) Fluency: the sequence is grammatical and sensible to the domain. 
We propose to enforce the criteria using a product-of-experts model (Hinton, 2002), P(y|x) ∝pcm(y|x)pfm(y|x)λ, |y| ≤|x| (1) where the left-hand side is the probability that a target sequence y is the summary of a source sequence x, pcm(y|x) measures the faithfulness in terms of contextual similarity from y to x, and pfm(y|x) measures the fluency of the token sequence y with respect to the target domain. We 5102 use λ as a hyper-parameter to balance the two expert models. We consider this distribution (1) being defined over all possible y whose tokens are restricted to a candidate list C determined by x. For extractive summarization, C is the set of word types in x. For abstractive summarization, C consists of relevant word types to x by taking K closest word types from a full vocabulary V for each source token measured by pretrained embeddings. 2.1 Contextual Matching Model The first expert, pcm(y|x), tracks how close y is to the original input x in terms of a contextual ”trajectory”. We use a pretrained language model to define the left-contextual representations for both the source and target sequences. Define S(x1:m, y1:n) to be the contextual similarity between a source and target sequence of length m and n respectively under this model. We implement this as the cosine-similarity of a neural language model’s final states with inputs x1:m and y1:n. This approach relies heavily on the observed property that similar contextual sequences often correspond to paraphrases. If we can ensure close contextual matching, it will keep the output faithful to the original. We use this similarity function to specify a generative process over the token sequence y, pcm(y|x) = N Y n=1 qcm(yn|y<n, x). The generative process aligns each target word to a source prefix. At the first step, n = 1, we compute a greedy alignment score for each possible word w ∈C, sw = maxj≥1 S(x1:j, w) for all source prefixes up to length j. The probability qcm(y1 = w|x) is computed as softmax(s) over all target words. We also store the aligned context z1 = arg maxj≥1 S(x1:j, y1). For future words, we ensure that the alignment is strictly monotonic increasing, such that zn < zn+1 for all n. Monotonicity is a common assumption in summarization (Yu et al., 2016a,b; Raffel et al., 2017). For n > 1 we compute the alignment score sw = maxj>zn−1 S(x1:j, [y1:n−1, w]) to only look at prefixes longer than zn−1, the last greedy alignment. Since the distribution conditions on y the past alignments are deterministic to compute (and can be stored). The main computational cost is in extending the target language ? zn x y Encode candidate words using language model with the current prefix Calculate the similarity scores with best match Figure 1: Generative process of the contextual matching model. model context to compute S. This process is terminated when a sampled token in y is aligned to the end of the source sequence x, and the strict monotonic increasing alignment constraint guarantees that the target sequence will not be longer than the source sequence. The generative process of the above model is illustrated in Fig. 1. 2.2 Domain Fluency Model The second expert, pfm(y|x), accounts for the fluency of y with respect to the target domain. It directly is based on a domain specific language model. Its role is to adapt the output to read closer shorter sentences common to the summarization domain. 
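To make the generative process of Section 2.1 concrete, the sketch below shows one decoding step of the contextual matching expert: each candidate word is scored by the best cosine similarity between the language-model state of the extended target prefix and the states of source prefixes beyond the last alignment, and the scores are normalized with a softmax. The encode helper, the state layout, and the candidate handling are assumed placeholders standing in for the ELMo forward language model used in the paper.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contextual_matching_step(src_states, prefix_tokens, candidates, encode, last_align):
    # One step of q_cm(. | y_<n, x). src_states[j] is the LM state of the source
    # prefix x_{1:j+1}; encode (assumed) returns the LM state of a target prefix;
    # last_align is the index z_{n-1} of the previously aligned source prefix (-1 at n=1).
    scores, aligns = [], []
    for w in candidates:
        cand_state = encode(prefix_tokens + [w])
        # only consider source prefixes strictly beyond the last alignment (monotonicity)
        sims = [cosine(cand_state, s) for s in src_states[last_align + 1:]]
        best = int(np.argmax(sims))
        scores.append(sims[best])
        aligns.append(last_align + 1 + best)
    scores = np.array(scores)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs, aligns  # distribution over candidates and their greedy alignments z_n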
Note that unlike the contextual matching model where y explicitly depends on x in its generative process, in the domain fluency language model, the dependency of y on x is implicit through the candidate set C that is determined by the specific source sequence x. The main technical challenge is that the probabilities of a pretrained language model are not well-calibrated with the contextual matching model within the candidate set C, and so the language model tends to dominate the objective because it has much higher variance (more peaky) in the output distribution than the contextual matching model. To manage this issue we apply kernel smoothing over the language model to adapt it from the full vocab V down to the candidate word list C. Our smoothing process focuses on the output embeddings from the pretrained language model. First we form the Voronoi partition (Aurenham5103 mer, 1991) over all the embeddings using the candidate set C. That is, each word type w′ in the full vocabulary V is exactly assigned to one region represented by a word type w in the candidate set C, such that the distance from w′ to w is not greater than its distance to any other word types in C. As above, we use cosine similarity between corresponding word embeddings to define the regions. This results in a partition of the full vocabulary space into |C| distinct regions, called Voronoi cells. For each word type w ∈C, we define N(w) to be the Voronoi cell formed around it. We then use cluster smoothing to define a new probability distribution: pfm(y|x) = N Y n=1 X w′∈N(yn) lm(w′|y<n) where lm is the conditional probability distribution of the pretrained domain fluency language model. By our construction, pfm is a valid distribution over the candidate list C. The main benefit is that it redistributes probability mass lost to terms in V to the active words in C. We find this approach smoothing balances integration with pcm. 2.3 Summary Generation To generate summaries we maximize the log probability (1) to approximate y∗using beam search. We begin with a special start token. A sequence is moved out of beam if it has aligned to the end token appended to the source sequence. To discourage extremely short sequences, we apply length normalization to re-rank the finished hypotheses. We choose a simple length penalty as lp(y) = |y| + α with α a tuning parameter. 3 Experimental Setup For the contextual matching model’s similarity function S, we adopt the forward language model of ELMo (Peters et al., 2018) to encode tokens to corresponding hidden states in the sequence, resulting in a three-layer representation each of dimension 512. The bottom layer is a fixed character embedding layer, and the above two layers are LSTMs associated with the generic unsupervised language model trained on a large amount of text data. We explicitly manage the ELMo hidden states to allow our model to generate contextual embeddings sequentially for efficient beam search.1 The fluency language model component lm is task specific, and pretrained on a corpus of summarizations. We use an LSTM model with 2 layers, both embedding size and hidden size set to 1024. It is trained using dropout rate 0.5 and SGD combined with gradient clipping. We test our method on both abstractive and extractive sentence summarization tasks. For abstractive summarization, we use the English Gigaword data set pre-processed by Rush et al. (2015). We train pfm using its 3.8 million headlines in the training set, and generate summaries for the input in test set. 
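The cluster-smoothing step of Section 2.2 can be sketched as a Voronoi assignment over output embeddings followed by probability pooling inside each cell. In the snippet below, vocab_emb (the language model's output embedding matrix) and lm_probs (its next-word distribution at the current step) are assumed inputs; this illustrates the idea and is not the released code.

import numpy as np

def build_voronoi_cells(vocab_emb, candidate_ids):
    # Assign every word in the full vocabulary to its nearest candidate word
    # (cosine similarity over output embeddings), i.e. the Voronoi partition.
    emb = vocab_emb / np.linalg.norm(vocab_emb, axis=1, keepdims=True)
    cand = emb[candidate_ids]                      # |C| x d
    nearest = np.argmax(emb @ cand.T, axis=1)      # nearest candidate index per vocab word
    cells = {c: [] for c in candidate_ids}
    for word_id, c_idx in enumerate(nearest):
        cells[candidate_ids[c_idx]].append(word_id)
    return cells

def cluster_smoothed_step(lm_probs, cells, candidate_ids):
    # Smoothed fluency distribution over the candidate list:
    # p_fm(w | y_<n) = sum of LM probabilities of all vocabulary words in w's cell.
    return {c: float(lm_probs[cells[c]].sum()) for c in candidate_ids}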
For extractive summarization, we use the Google data set from Filippova and Altun (2013). We train pfm on 200K compressed sentences in the training set and test on the first 1000 pairs of evaluation set consistent with previous works. For generation, we set λ = 0.11 in (1) and beam size to 10. Each source sentence is tokenized and lowercased, with periods deleted and a special end of sentence token appended. In abstractive summarization, we use K = 6 in the candidate list and use the fixed embeddings at the bottom layer of ELMo language model for similarity. Larger K has only small impact on performance but makes the generation more expensive. The hyper-parameter α for length penalty ranges from -0.1 to 0.1 for different tasks, mainly for desired output length as we find ROUGE scores are not sensitive to it. We use concatenation of all ELMo layers as default in pcm. 4 Results and Analysis Quantitative Results. The automatic evaluation scores are presented in Table 1 and Table 2. For abstractive sentence summarization, we report the ROUGE F1 scores compared with baselines and previous unsupervised methods. Our method outperforms commonly used prefix baselines for this task which take the first 75 characters or 8 words of the source as a summary. Our system achieves comparable results to Wang and Lee (2018) a system based on both GANs and reinforcement training. Note that the GAN-based system needs both source and target sentences for training (they are unpaired), whereas our method only needs the target domain sentences for a simple language model. In Table 1, we also list scores of the stateof-the-art supervised model, an attention based 1Code available at https://github.com/jzhou316/Unsuper vised-Sentence-Summarization. 5104 Model R1 R2 RL Lead-75C 23.69 7.93 21.5 Lead-8 21.30 7.34 19.94 Schumann (2018) 22.19 4.56 19.88 Wang and Lee (2018) 27.09 9.86 24.97 Contextual Match 26.48 10.05 24.41 Cao et al. (2018) 37.04 19.03 34.46 seq2seq 33.50 15.85 31.44 Contextual Oracle 37.03 15.46 33.23 Table 1: Experimental results of abstractive summarization on Gigaword test set with ROUGE metric. The top section is prefix baselines, the second section is recent unsupervised methods and ours, the third section is state-of-the-art supervised method along with our implementation of a seq-to-seq model with attention, and the bottom section is our model’s oracle performance. Wang and Lee (2018) is by author correspondence (scores differ because of evaluation setup). For another unsupervised work Fevry and Phang (2018), we attempted to replicate on our test set, but were unable to obtain results better than the baselines. Model F1 CR F&A Unsupervised 52.3 Contextual Match 60.90 0.38 Filippova et al. (2015) 82.0 0.38 Zhao et al. (2018) 85.1 0.39 Table 2: Experimental results of extractive summarization on Google data set. F1 is the token overlapping score, and CR is the compression rate. F&A is an unsupervised baseline used in Filippova and Altun (2013), and the bottom section is supervised results. seq-to-seq model of our own implementation, as well as the oracle scores of our method obtained by choosing the best summary among all finished hypothesis from beam search. The oracle scores are much higher, indicating that our unsupervised method does allow summaries of better quality, but with no supervision it is hard to pick them out with any unsupervised metric. 
For extractive sentence summarization, our method achieves good compression rate and significantly raises a previous unsupervised baseline on token level F1 score. Analysis. Table 3 considers analysis of different aspects of the model. First, we look at the fluency model and compare the cluster smoothing abstractive extractive Models R1 R2 RL F1 CR CS + cat 26.48 10.05 24.41 60.90 0.38 CS + avg 26.34 9.79 24.23 60.09 0.38 CS + top 26.21 9.69 24.14 62.18 0.34 CS + mid 25.46 9.39 23.34 59.32 0.40 CS + bot 15.29 3.95 14.06 21.14 0.23 TEMP5 + cat 26.31 9.38 23.60 52.10 0.43 TEMP10 + cat 25.63 8.82 22.86 42.33 0.47 NA + cat 24.81 8.89 22.87 49.80 0.32 Table 3: Comparison of different model choices. The top section evaluates the effects of contextual representation in the matching model, and the bottom section evaluates the effects of different smoothing methods in the fluency model. (CS) approach with softmax temperature (TEMPx with x being the temperature) commonly used for generation in LM-integrated models (Chorowski and Jaitly, 2016) as well as no adjustment (NA). Second, we vary the 3-layer representation out of ELMo forward language model to do contextual matching (bot/mid/top: bottom/middle/top layer only, avg: average of 3 layers, cat: concatenation of all layers). Results show the effectiveness of our cluster smoothing method for the vocabulary adaptive language model pfm, although temperature smoothing is an option for abstractive datasets. Additionally Contextual embeddings have a huge impact on performance. When using word embeddings (bottom layer only from ELMo language model) in our contextual matching model pcm, the summarization performance drops significantly to below simple baselines as demonstrated by score decrease. This is strong evidence that encoding independent tokens in a sequence with generic language model hidden states helps maintain the contextual flow. Experiments also show that even when only using pcm (by setting λ = 0), utilizing the ELMo language model states allows the generated sequence to follow the source x closely, whereas normal context-free word embeddings would fail to do so. Table 4 shows some examples of our unsupervised generation of summaries, compared with the human reference, an attention based seq-to-seq model we trained using all the Gigaword parallel data, and the GAN-based unsupervised system from Wang and Lee (2018). Besides our default of using all ELMo layers, we also show generations by using the top and bottom (context-independent) 5105 I: japan ’s nec corp. and UNK computer corp. of the united states said wednesday they had agreed to join forces in supercomputer sales G: nec UNK in computer sales tie-up s2s: nec UNK to join forces in supercomputer sales GAN: nec corp. 
to join forces in sales CM (cat): nec agrees to join forces in supercomputer sales CM (top): nec agrees to join forces in computer sales CM (bot): nec to join forces in supercomputer sales I: turnout was heavy for parliamentary elections monday in trinidad and tobago after a month of intensive campaigning throughout the country , one of the most prosperous in the caribbean G: trinidad and tobago poll draws heavy turnout by john babb s2s: turnout heavy for parliamentary elections in trinidad and tobago GAN: heavy turnout for parliamentary elections in trinidad CM (cat): parliamentary elections monday in trinidad and tobago CM (top): turnout is hefty for parliamentary elections in trinidad and tobago CM (bot): trinidad and tobago most prosperous in the caribbean I: a consortium led by us investment bank goldman sachs thursday increased its takeover offer of associated british ports holdings , the biggest port operator in britain , after being threatened with a possible rival bid G: goldman sachs increases bid for ab ports s2s: goldman sachs ups takeover offer of british ports GAN: us investment bank increased takeover offer of british ports CM (cat): us investment bank goldman sachs increases shareholdings CM (top): investment bank goldman sachs increases investment in britain CM (bot): britain being threatened with a possible bid Table 4: Abstractive sentence summary examples on Gigaword test set. I is the input, G is the reference, s2s is a supervised attention based seq-to-seq model, GAN is the unsupervised system from Wang and Lee (2018), and CM is our unsupervised model. The third example is a failure case we picked where the sentence is fluent and makes sense but misses the point as a summary. layer only. Our generation has fairly good qualities, and it can correct verb tenses and paraphrase automatically. Note that top representation actually finds more abstractive summaries (such as in example 2), and the bottom representation fails to focus on the proper context. The failed examples are mostly due to missing the main point, as in example 3, or the summary needs to reorder tokens in the source sequence. Moreover, as a byproduct, our unsupervised method naturally generates hard alignments between summary and source sentences in the contextual matching process. We show some examples in Figure 2 correjapan 's nec corp. and UNK computer corp. of the united states said wednesday they had agreed to join forces in supercomputer sales nec agrees to join forces in computer sales turnout was heavy for parliamentary elections monday in trinidad and tobago after a month of intensive campaigning throughout the country , one of the most prosperous in the caribbean turnout is hefty for parliamentary elections in trinidad and tobago a consortium led by us investment bank goldman sachs thursday increased its takeover offer of associated british ports holdings , the biggest port operator in britain , after being threatened with a possible rival bid investment bank goldman sachs increases investment in britain Figure 2: Examples of alignment results generated by our unsupervised method between the abstractive summaries and corresponding source sentences in the Gigaword test set. sponding to the sentences in Table 4. 5 Conclusion We propose a novel methodology for unsupervised sentence summarization using contextual matching. Previous neural unsupervised works mostly adopt complex encoder-decoder frameworks. We achieve good generation qualities and competitive evaluation scores. 
We also demonstrate a new way of utilizing pre-trained generic language models for contextual matching in untrained generation. Future work could be comparing language models of different types and scales in this direction. Acknowledgements We would like to thank Yuntian Deng and Yoon Kim for useful discussions. This work was supported by NSF 1845664 and research awards from Google, Oracle, and Facebook. 5106 References Franz Aurenhammer. 1991. Voronoi diagramsa survey of a fundamental geometric data structure. ACM Computing Surveys (CSUR), 23(3):345–405. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 152–161. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Jan Chorowski and Navdeep Jaitly. 2016. Towards better decoding and language model integration in sequence to sequence models. arXiv preprint arXiv:1612.02695. Thibault Fevry and Jason Phang. 2018. Unsupervised sentence compression using denoising autoencoders. arXiv preprint arXiv:1809.02669. Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368. Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. arXiv preprint arXiv:1609.07317. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Emily Pitler. 2010. Methods for sentence compression. Colin Raffel, Minh-Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. 2017. Online and lineartime attention by enforcing monotonic alignments. arXiv preprint arXiv:1704.00784. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Raphael Schumann. 2018. Unsupervised abstractive sentence summarization using length controlled variational autoencoder. arXiv preprint arXiv:1809.05233. Yau-Shian Wang and Hung-Yi Lee. 2018. Learning to encode text as human-readable summaries using generative adversarial networks. arXiv preprint arXiv:1810.02851. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. 2016a. The neural noisy channel. arXiv preprint arXiv:1611.02554. Lei Yu, Jan Buys, and Phil Blunsom. 2016b. Online segment to segment neural transduction. arXiv preprint arXiv:1609.08194. 
Yang Zhao, Zhiyuan Luo, and Akiko Aizawa. 2018. A language model based evaluator for sentence compression. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 170–175.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5107–5116 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5107 Generating Summaries with Topic Templates and Structured Convolutional Decoders Laura Perez-Beltrachini Yang Liu Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB {lperez,mlap}@inf.ed.ac.uk [email protected] Abstract Existing neural generation approaches create multi-sentence text as a single sequence. In this paper we propose a structured convolutional decoder that is guided by the content structure of target summaries. We compare our model with existing sequential decoders on three data sets representing different domains. Automatic and human evaluation demonstrate that our summaries have better content coverage. 1 Introduction Abstractive multi-document summarization aims at generating a coherent summary from a cluster of thematically related documents. Recently, Liu et al. (2018) proposed generating the lead section of a Wikipedia article as a variant of multidocument summarization and released WikiSum, a large-scale summarization dataset which enables the training of neural models. Like most previous work on neural text generation (Gardent et al., 2017; See et al., 2017; Wiseman et al., 2017; Puduppully et al., 2019; Celikyilmaz et al., 2018; Liu et al., 2018; PerezBeltrachini and Lapata, 2018; Marcheggiani and Perez-Beltrachini, 2018), Liu et al. (2018) represent the target summaries as a single long sequence, despite the fact that documents are organized into topically coherent text segments, exhibiting a specific structure in terms of the content they discuss (Barzilay and Lee, 2004). This is especially the case when generating text within a specific domain where certain topics might be discussed in a specific order (Wray, 2002). For instance, the summary in Table 1 is about a species of damselfly; the second sentence describes the region where the species is found and the fourth the type of habitat the species lives in. We would expect other Animal Wikipedia summaries to exhibit similar content organization. In this work we propose a neural model which is guided by the topic structure of target summaries, i.e., the way content is organized into sentences and the type of content these sentences discuss. Our model consists of a structured decoder which is trained to predict a sequence of sentence topics that should be discussed in the summary and to generate sentences based on these. We extend the convolutional decoder of Gehring et al. (2017) so as to be aware of which topics to mention in each sentence as well as their position in the target summary. We argue that a decoder which explicitly takes content structure into account could lead to better summaries and alleviate well-known issues with neural generation models being too general, too brief, or simply incorrect. Although content structure has been largely unexplored within neural text generation, it has been been recognized as useful for summarization. Barzilay and Lee (2004) build a model of the content structure of source documents and target summaries and use it to extract salient facts from the source. Sauper and Barzilay (2009) cluster texts by target topic and use a global optimisation algorithm to select the best combination of facts from each cluster. 
Although these models have shown good results in terms of content selection, they cannot generate target summaries. Our model is also related to the hierarchical decoding approaches of Li et al. (2015) and Tan et al. (2017). However, the former approach is auto-encoding the same inputs (our model carries out content selection for the summarization task), while the latter generates independent sentences. They also both rely on recurrent neural models, while we use convolutional neural networks. To our knowledge this is the first hierarchical decoder proposed for a non-recurrent architecture. To evaluate our model, we introduce WIKICATSUM, a dataset1 derived from Liu et al. (2018) 1Our dataset and code are available at https:// 5108 agriocnemis zerafica is a species of damselfly in the family coenagrionidae. it is native to africa, where it is widespread across the central and western nations of the continent. it is known by the common name sahel wisp. this species occurs in swamps and pools in dry regions. there are no major threats but it may be affected by pollution and habitat loss to agriculture and development. agriocnemis zerafica EOT global distribution: the species is known from north-west uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african agriocnemis. record from angola unlikely. northeastern africa distribution: the species was listed by tsuda for sudan. [· · · ]. EOP very small, about 20mm. orange tail. advised agriocnemis sp. id by kd dijkstra: [· · · ] EOP same creature as previously posted as unknown, very small, about 20mm, over water, top view. advised probably agriocnemis, ”whisp” damselfly. EOP [· · · ] EOP justification: this is a widespread species with no known major widespread threats that is unlikely to be declining fast enough to qualify for listing in a threatened category. it is therefore assessed as least concern. EOP the species has been recorded from northwest uganda and sudan, through niger to mauritania and [· · · ] EOP the main threats to the species are habitat loss due to agriculture, urban development and drainage, as well as water pollution. Table 1: Summary (top) and input paragraphs (bottom) from the Animal development dataset (EOP/T is a special token indicating the end of paragraph/title). which consists of Wikipedia abstracts and source documents and is representative of three domains, namely Companies, Films, and Animals. In addition to differences in vocabulary and range of topics, these domains differ in terms of the linguistic characteristics of the target summaries. We compare single sequence decoders and structured decoders using ROUGE and a suite of new metrics we propose in order to quantify the content adequacy of the generated summaries. We also show that structured decoding improves content coverage based on human judgments. 2 The Summarization Task The Wikipedia lead section introduces the entity (e.g., Country or Brazil) the article is about, highlighting important facts associated with it. Liu et al. (2018) further assume that this lead section is a summary of multiple documents related to the entity. Based on this premise, they propose the multi-document summarization task of generating the lead section from the set of documents cited in Wikipedia articles or returned by Google (using article titles as queries). And create WikiSum, a large-scale multi-document summarization dataset with hundreds of thousands of instances. Liu et al. 
(2018) focus on summarization from very long sequences. Their model first selects a subset of salient passages by ranking all paragraphs from the set of input documents (based on their TF-IDF similarity with the title of the article). The L best ranked paragraphs (up to 7.5k tokens) are concatenated into a flat sequence and a decoder-only architecture (Vaswani et al., 2017) is used to generate the summary. We explicitly model the topic structure of summaries, under the assumption that documents cover different topics about a given entity, while the summary covers the most salient ones and organizes them into a coherent multi-sentence text. We further assume that different lead summaries are appropriate for different entities (e.g. Animals github.com/lauhaide/WikiCatSum. vs. Films) and thus concentrate on specific domains. We associate Wikipedia articles with “domains” by querying the DBPedia knowledge-base. A training instance in our setting is a (domainspecific) paragraph cluster (multi-document input) and the Wikipedia lead section (target summary). We derive sentence topic templates from summaries for Animals, Films, and Companies and exploit these to guide the summariser. However, there is nothing inherent in our model that restricts its application to different domains. 3 Generation with Content Guidance Our model takes as input a set of ranked paragraphs P = {p1 · · · p|P|} which we concatenate to form a flat input sequence X = (x1 · · · x|X|) where xi is the i-th token. The output of the model is a multi-sentence summary S = (s1, · · · , s|S|) where st denotes the t-th sentence. We adopt an encoder-decoder architecture which makes use of convolutional neural networks (CNNs; Gehring et al. 2017). CNNs permit parallel training (Gehring et al., 2017) and have shown good performance in abstractive summarization tasks (e.g., Narayan et al. 2018). Figure 1 illustrates the architecture of our model. We use the convolutional encoder of Gehring et al. (2017) to obtain a sequence of states (z1, · · · , z|X|) given an input sequence of tokens (x1, · · · , x|X|). A hierarchical convolutional decoder generates the target sentences (based on the encoder outputs). Specifically, a document-level decoder first generates sentence vectors (LSTM Document Decoder in Figure 1), representing the content specification for each sentence that the model plans to decode. A sentence-level decoder (CNN Sentence Decoder in Figure 1) is then applied to generate an actual sentence token-by-token. In the following we describe the two decoders in more detail and how they are combined to generate summaries. 3.1 Document-level Decoder The document-level decoder builds a sequence of sentence representations (s1, · · · , s|S|). For exam5109 <sod> CNN Encoder CNN Sentence Decoder LSTM Document Decoder firm Aero is a <pad> <s> Aero <EOT> Aero was created … Its headquarters… The offices … </s> ... ... ; <pad> Aero, 1, 2 is, 1, 3 a, 1, 4 <s>, 1, 1 Figure 1: Sequence encoder and structured decoder. ple, s1 in Figure 1 is the vector representation for the sentence Aero is a firm. This layer uses an LSTM with attention. At each time step t, the LSTM will construct an output state st, representing the content of the t-th sentence that the model plans to generate: ht = LSTM(ht−1, st−1) (1) st = tanh(Ws[ht; cs t]) (2) where ht is the LSTM hidden state of step t and cs t is the context vector computed by attending to the input. The initial hidden state h0 is initialized with the averaged sum of the encoder output states. 
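For concreteness, the document-level decoder step of Equations (1)–(2) can be sketched in PyTorch as follows. This is an illustration under our own naming assumptions (DocumentDecoder, hidden_dim), not the released implementation; the dot-product attention over encoder states anticipates the soft attention defined in the next equations.

```python
# Illustrative sketch of the document-level decoder (Eqs. 1-2), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DocumentDecoder(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(hidden_dim, hidden_dim)   # h_t = LSTM(h_{t-1}, s_{t-1})
        self.W_s = nn.Linear(2 * hidden_dim, hidden_dim)  # s_t = tanh(W_s [h_t ; c_t])

    def forward(self, encoder_states, num_sentences):
        # encoder_states: (batch, src_len, hidden_dim) from the convolutional encoder
        h = encoder_states.mean(dim=1)        # h_0: average of the encoder output states
        c = torch.zeros_like(h)               # LSTM memory cell
        s = torch.zeros_like(h)               # s_0: empty previous sentence vector
        sentence_vectors = []
        for _ in range(num_sentences):
            h, c = self.cell(s, (h, c))
            # soft dot-product attention over encoder states (Eqs. 3-4, given below)
            scores = torch.bmm(encoder_states, h.unsqueeze(2)).squeeze(2)
            alpha = F.softmax(scores, dim=1)
            context = torch.bmm(alpha.unsqueeze(1), encoder_states).squeeze(1)
            s = torch.tanh(self.W_s(torch.cat([h, context], dim=1)))
            sentence_vectors.append(s)
        return torch.stack(sentence_vectors, dim=1)   # (batch, |S|, hidden_dim)
```

Each returned vector s_t is then passed to the sentence-level decoder as the content specification of the t-th sentence.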
We use a soft attention mechanism (Luong et al., 2015) to compute the context vector cs t: αs tj = exp(ht • zj) P j ′ exp(ht • zj ′) (3) cs t = |X| X j=1 αs tj zj (4) where αs jt is the attention weight for the document-level decoder attending to input token xj at time step t. 3.2 Sentence-level Decoder Each sentence st = (yt1, . . . , yt|st|) in target summary S is generated by a sentence-level decoder. The convolutional architecture proposed in Gehring et al. (2017) combines word embeddings with positional embeddings. That is, the word representation wti of each target word yti is combined with vector ei indicating where this word is in the sentence, wti = emb(yti) + ei. We extend this representation by adding a sentence positional embedding. For each st the decoder incorporates the representation of its position t. This explicitly informs the decoder which sentence in the target document to decode for. Thus, we redefine word representations as wti = emb(yti) + ei + et. 3.3 Hierarchical Convolutional Decoder In contrast to recurrent networks where initial conditioning information is used to initialize the hidden state, in the convolutional decoder this information is introduced via an attention mechanism. In this paper we extend the multi-step attention (Gehring et al., 2017) with sentence vectors st generated by the document-level decoder. The output vectors for each layer l in the convolutional decoder, when generating tokens for the t-th sentence are2: {ol t1, · · · , ol tn} = conv({o′l−1 t1 , · · · , o′l−1 tn ) (5) o′l ti = ol ti + st + cl ti (6) where o′l ti is obtained by adding the corresponding sentence state st produced by the document-level decoder (Equation (2)) and sentence-level context vector cl ti. cl ti is calculated by combining ol ti and st with the previous target embedding gti: dl ti = W l d(ol ti + st) + gti (7) al tij = exp(dl ti • zj) P j ′ exp(dl ti • zj ′) (8) cl ti = |X| X j=1 al tij(zj + ej) (9) The prediction of word yti is conditioned on the output vectors of the top convolutional layer, as P(yti|yt{1:i−1}) = softmax(Wy(oL ti + cL ti)). The model is trained to optimize negative log likelihood LNLL. 3.4 Topic Guidance To further render the document-level decoder topic-aware, we annotate the sentences of groundtruth summaries with topic templates and force the model to predict these. To discover topic templates from summaries, we train a Latent Dirichlet Allocation model (LDA; Blei et al. (2003)), treating sentences as documents, to obtain sentencelevel topic distributions. Since the number of topics discussed in the summary is larger than the 2Padding and masking are used to keep the autoregressive property in decoding. 5110 Company #12: operation, start, begin, facility, company, expand #29: service, provide, airline, member, operate, flight #31: product, brand, sell, launch, company, include #38: base, company, office, locate, development, headquarters Epos Now’s UK headquarters are located in Norwich, England and their US headquarters are in Orlando, Florida. [#38] Film #10: base, film, name, novel, story, screenplay #14: win, film, music, award, nominate, compose #18: film, receive, review, office, box, critic #19: star, film, role, play, lead, support The film is based on the novel Intruder in the dust by William Faulkner. 
[#10] Animal #0: length, cm, reach, grow, centimetre, size, species #1: forewing, hindwing, spot, line, grey, costa #17: population, species, threaten, list, number, loss, endanger #24: forest, habitat, consist, area, lowland, moist, montane It might be in population decline due to habitat loss. [#17] Table 2: Topics discovered for different domains and examples of sentence annotations. Category InstNb R1 R2 RL TopicNb Company 62,545 .551 .217 .438 40 Film 59,973 .559 .243 .456 20 Animal 60,816 .541 .208 .455 30 Table 3: Number of instances (InstNb), ROUGE 1-2 recall (R1 and R2) of source texts against target summaries and number of topics (TopicNb). number of topics discussed in a single sentence, we use a symmetric Dirichlet prior (i.e., we have no a-priori knowledge of the topics) with the concentration parameter set to favour sparsity in order to encourage the assignment of few topics to sentences. We use the learnt topic model consisting of K = {k1, · · · , k|K|} topics to annotate summary sentences with a topic vector. For each sentence, we assign a topic label from K corresponding to its most likely topic. Table 2 shows topics discovered by LDA and the annotated target sentences for the three domains we consider. We train the document-level decoder to predict the topic kt of sentence st as an auxiliary task, P(kt|s1:t−1) = softmax(Wk(st)), and optimize the summation of the LNLL loss and the negative log likelihood of P(kt|s1:t−1). 4 Experimental setup Data Our WIKICATSUM data set includes the first 800 tokens from the input sequence of paragraphs (Liu et al., 2018) and the Wikipedia lead sections. We included pairs with more than 5 source documents and with more than 23 tokens in the lead section (see Appendix A for details). Each dataset was split into train (90%), validation (5%) and test set (5%). Table 3 shows dataset statistics. We compute recall ROUGE scores of the input documents against the summaries to asses the amount of overlap and as a reference for the interpretation of the scores achieved by the models. Across domains content overlap (R1) is ˜50 points. However, R2 is much lower indicating that there is abstraction, paraphrasing, and content selection in the summaries with respect to the input. We rank input paragraphs with a weighted TF-IDF similarity metric which takes paragraph length into account (Singhal et al., 2017). The column TopicNb in Table 3 shows the number of topics in the topic models selected for each domain and Table 2 shows some of the topics (see Appendix A for training and selection details). The optimal number of topics differs for each domain. In addition to general topics which are discussed across domain instances (e.g., topic #0 in Animal), there are also more specialized ones, e.g., relating to a type of company (see topic #29 in Company) or species (see topic #1 in Animal). Model Comparison We compared against two baselines: the Transformer sequence-to-sequence model (TF-S2S) of Liu et al. (2018) and the Convolutional sequence-to-sequence model (CVS2S) of Gehring et al. (2017). CV-S2D is our variant with a single sequence encoder and a structured decoder; and +T is the variant with topic label prediction. TF-S2S has 6 layers, the hidden size is set to 256 and the feed-forward hidden size was 1,024 for all layers. All convolutional models use the same encoder and decoder convolutional blocks. The encoder block uses 4 layers, 256 hidden dimensions and stride 3; the decoder uses the same configuration but 3 layers. 
All embedding sizes are set to 256. CV-S2D models are trained by first computing all sentence hidden states st and then decoding all sentences of the summary in parallel. See Appendix A for models training details. At test time, we use beam size of 5 for all models. The structured decoder explores at each sentence step 5 different hypotheses. Generation stops when the sentence decoder emits the EndOf-Document (EOD) token. The model trained to predict topic labels, will predict the End-Of-Topic label. This prediction is used as a hard constraint by the document-level decoder, setting the probability of the EOD token to 1. We also use trigram blocking (Paulus et al., 2018) to control for sentence repetition and discard consecutive sentence steps when these overlap on more than 80% of the tokens. 5111 Model Company Film Animal R1 R2 RL R1 R2 RL R1 R2 RL TF-S2S .260 .095 .204 .365 .188 .310 .440 .288 .400 CV-S2S .245 .094 .199 .346 .198 .307 .422 .284 .385 CV-S2D .276 .105 .213 .377 .208 .320 .423 .273 .371 CV-S2D+T .275 .106 .214 .380 .212 .323 .427 .279 .379 A C A C A C CV-S2S .046 .307 .097 .430 .229 .515 CV-S2D .051 .314 .098 .429 .219 .499 CV-S2D+T .051 .316 .101 .433 .223 .506 Table 4: ROUGE F-scores (upper part) and additional content metrics (bottom part). 5 Results Automatic Evaluation Our first evaluation is based on the standard ROUGE metric (Lin, 2004). We also make use of two additional automatic metrics. They are based on unigram counts of content words and aim at quantifying how much the generated text and the reference overlap with respect to the input (Xu et al., 2016). We expect multi-document summaries to cover details (e.g., names and dates) from the input but also abstract and rephrase its content. Abstract (A) computes unigram f-measure between the reference and generated text excluding tokens from the input. Higher values indicate the model’s abstraction capabilities. Copy (C) computes unigram fmeasure between the reference and generated text only on their intersection with the input. Higher values indicate better coverage of input details. Table 4 summarizes our results on the test set. In all datasets the structured decoder brings a large improvement in ROUGE-1 (R1), with the variant using topic labels (+T) bringing gains of +2 points on average. With respect to ROUGE-2 and -L (R2 and RL), the CV-S2D+T variant obtains highest scores on Company and Film, while on Animal it is close below to the baselines. Table 4 also presents results with our additional metrics which show that CV-S2D models have a higher overlap with the gold summaries on content words which do not appear in the input (A). All models have similar scores with respect to content words in the input and reference (C). Human Evaluation We complemented the automatic evaluation with two human-based studies carried out on Amazon Mechanical Turk (AMT) over 45 randomly selected examples from the test set (15 from each domain). We compared the TSS2S, CV-S2S and CV-S2D+T models. The first study focused on assessing the extent to which generated summaries retain salient information from the input set of paragraphs. We folModel Company Film Animal QA Rank QA Rank QA Rank TF-S2S 5 1.87 6 2.27 9 1.87 CV-S2S 5 2.27 6.67 1.76 8.33 2.04 CV-S2D+T 7 1.87 7 1.98 9.33 2.09 Table 5: QA-based evaluation and system ranking. lowed a question-answering (QA) scheme as proposed in Clarke and Lapata (2010). 
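Before turning to the details of the QA scheme, the Abstract (A) and Copy (C) metrics used above can be sketched as follows. Tokenisation and the content-word filtering are assumed to have been applied to the token lists already; the paper does not fully specify these steps, so this is an approximation rather than the exact evaluation script.

```python
# Sketch of the Abstract (A) and Copy (C) metrics: unigram F1 between reference
# and generated text, restricted to tokens absent from (A) or present in (C) the input.
from collections import Counter

def unigram_f1(reference, candidate):
    ref, cand = Counter(reference), Counter(candidate)
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def abstract_copy_scores(input_tokens, reference_tokens, generated_tokens):
    src = set(input_tokens)
    # Abstract: tokens *not* seen in the input (rewards abstraction/paraphrasing)
    a = unigram_f1([t for t in reference_tokens if t not in src],
                   [t for t in generated_tokens if t not in src])
    # Copy: tokens that *are* in the input (rewards coverage of input details)
    c = unigram_f1([t for t in reference_tokens if t in src],
                   [t for t in generated_tokens if t in src])
    return a, c
```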
Under this scheme, a set of questions are created based on the gold summary; participants are then asked to answer these questions by reading system summaries alone without access to the input. The more questions a system can answer, the better it is at summarizing the input paragraphs as a whole (see Appendix A for example questions). Correct answers are given a score of 1, partially correct answers score 0.5, and zero otherwise. The final score is the average of all question scores. We created between two and four factoid questions for each summary; a total of 40 questions for each domain. We collected 3 judgements per system-question pair. Table 5 shows the QA scores. Summaries by the CV-S2D+T model are able to answer more questions, even for the Animals domain where the TS-S2S model obtained higher ROUGE scores. The second study assessed the overall content and linguistic quality of the summaries. We asked judges to rank (lower rank is better) system outputs according to Content (does the summary appropriately captures the content of the reference?), Fluency (is the summary fluent and grammatical?), Succinctness (does the summary avoid repetition?). We collected 3 judgments for each of the 45 examples. Participants were presented with the gold summary and the output of the three systems in random order. Over all domains, the ranking of the CV-S2D+T model is better than the two single-sequence models TS-S2S and CONVS2S. 6 Conclusions We introduced a novel structured decoder module for multi-document summarization. Our decoder is aware of which topics to mention in a sentence as well as of its position in the summary. Comparison of our model against competitive singlesequence decoders shows that structured decoding yields summaries with better content coverage. Acknowledgments We thank the ACL reviewers for their constructive feedback. We gratefully acknowledge the financial support of the European Research Council (award number 681760). 5112 References Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. arXiv preprint cs/0405039. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675. Association for Computational Linguistics. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124–133, Santiago de Compostela, Spain. (INLG 2017). Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In Proceedings of the 34th International Conference on Machine Learning, pages 1243–1252, Sydney, Australia. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, ICLR. Jiwei Li, Thang Luong, and Dan Jurafsky. 2015. 
A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1106–1115. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Wo rkshop, pages 74–81, Barcelona, Spain. Peter Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep Graph Convolutional Encoders for Structured Data to Text Generation. In Proceedings of the 11th International Conference on Natural Language Generation, pages 1–9, Tilburg University, The Netherlands. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping Generators from Noisy Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, pages 1516–1527, New Orleans, Louisiana. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-Text Generation with Content Selection and Planning. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Honolulu, Hawaii. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Michael R¨oder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM ’15, pages 399–408, New York, NY, USA. ACM. Christina Sauper and Regina Barzilay. 2009. Automatically generating Wikipedia articles: A structureaware approach. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 5113 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 208–216, Suntec, Singapore. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Amit Singhal, Chris Buckley, and Manclar Mitra. 2017. Pivoted document length normalization. ACM SIGIR Forum, 51(2):176–184. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2818–2826. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181, Vancouver, Canada. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Copenhagen, Denmark. Alison Wray. 2002. Formulaic Language and the Lexicon. Cambridge University Press, Cambridge. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Category SentNb SentLen Company 5.09±3.73 24.40±13.47 Film 4.17±2.71 23.54±11.91 Animal 4.71±3.53 19.68±18.69 Table 6: Average number of sentences in target summaries (SentNb) and sentence length (SentLen) in terms of word counts. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. A Appendix A.1 Data WikiSum consist of Wikipedia articles each of which are associated with a set of reference documents.3 We associate Wikipedia articles (i.e., entities) with a set of categories by querying the DBPedia knowledge-base.4 The WikiSum dataset originally provides a set of URLs corresponding to the source reference documents; we crawled online for these references using the tools provided in Liu et al. (2018).5 We used the Stanford CoreNLP (Manning et al., 2014) to tokenize the lead section into sentences. We observed that the Animal data set contains overall shorter sentences but also sentences consisting of long enumerations which is reflected in the higher variance in sentence length (see SentLen in Table 6). An example (lead) summary and related paragraphs in shown in Table 7. The upper part shows the target summary and the bottom the input set of paragraphs. 
EOP tokens separate the different paragraphs, EOT indicates the title of the Wikipedia article. To discover sentence topic templates in summaries, we used the Gensim framework ( ˇReh˚uˇrek and Sojka, 2010) and learned LDA models on summaries of the train splits. We performed grid search on the number of topics [10, · · · , 90] every ten steps, and used the context-vector-based topic coherence metric (cf. (R¨oder et al., 2015)) as guidance to manually inspect the output topic sets and 3We take the processed Wikipedia articles from https://github.com/tensorflow/ tensor2tensor/tree/master/tensor2tensor/ data_generators/wikisum released on April 25th 2018. 4Entities of Wikipedia articles are associated with categories using the latest DBPedia release http:// wiki.dbpedia.org/downloads-2016-10 to obtain the instance types (http://mappings.dbpedia.org/ server/ontology/classes/). 5The crawl took place in July 2018 and was supported by Google Cloud. 5114 select the most appropriate ones. For competing topic sets, we trained the models and selected the topic set which led to higher ROUGE scores on the development set. We used the following hyperparameters to train topic models with Gensim ( ˇReh˚uˇrek and Sojka, 2010). We set the α = 0.001 and η = ’auto’; and used the following training configuration: random state=100, eval every=5, chunksize=10000, iterations=500, passes=50. We train on the preprocessed version of the summaries with lemmas of content words (stop words were removed). A.2 Model Training Details In all convolutional models we used dropout (Srivastava et al., 2014) in both encoder and sentencelevel decoder with a rate of 0.2. For the normalisation and initialisation of the convolutional architectures, we follow (Gehring et al., 2017). Similarly, to train the convolutional models we follow the optimisation setup in (Gehring et al., 2017). For the transformer-based baseline we applied dropout (with probability of 0.1) before all linear layers and label smoothing (Szegedy et al., 2016) with smoothing factor 0.1. The optimizer was Adam (Kingma and Ba, 2015) with learning rate of 2, β1 = 0.9, and β2 = 0.998; we also applied learning rate warm-up over the first 8,000 steps, and decay as in (Vaswani et al., 2017). We select the best models based on ROUGE scores on the development set. As for the data, we discarded examples where the lead contained sentences longer than 200 tokens (often been long enumerations of items). For the training of all models we only retained those data examples fitting the maximum target length of the structured decoder, 15 sentences with maximum length of 40 tokens (sentences longer than this where split). We used a source and target vocabulary of 50K words for all datasets. On decoding we normalise log-likelihood of the candidate hypotheses y by their length, |y|α with α = 1 (Wu et al., 2016), except for the structured decoder on the Animals dataset where we use α = 0.9. For the transformer model we use α = 0.6. A.3 Evaluation and System Outputs In the automatic evaluation we used pyrouge6 and ROUGE-1.5.5.pl with stemming (parameters= “-c 95 -r 1000 -n 2 -m”). 6pypi.python.org/pypi/pyrouge Table 8 shows an example of gold summary and corresponding question set from the questionanswering study in Section 5. Table 9 shows examples of system output on the development set. Specifically, we show summaries generated by CONVS2S and CONVS2D+Topic, and also include the reference Gold standard. 
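To make the topic-template discovery step concrete, the following Gensim sketch mirrors the hyperparameters reported above. The preprocessing (lemmatised content words, one pseudo-document per summary sentence) is assumed rather than reproduced from the released code, and the symmetric α prior is passed as a per-topic list.

```python
# Sketch of sentence-level LDA training with the reported Gensim settings; an
# approximation of the described setup, not the released code.
from gensim import corpora
from gensim.models import LdaModel

def train_sentence_topic_model(sentences, num_topics):
    # sentences: list of token lists, one per (lemmatised) summary sentence
    dictionary = corpora.Dictionary(sentences)
    corpus = [dictionary.doc2bow(tokens) for tokens in sentences]
    lda = LdaModel(
        corpus=corpus,
        id2word=dictionary,
        num_topics=num_topics,          # selected per domain (see Table 3)
        alpha=[0.001] * num_topics,     # sparse symmetric Dirichlet prior
        eta="auto",
        random_state=100,
        eval_every=5,
        chunksize=10000,
        iterations=500,
        passes=50,
    )
    return lda, dictionary

def most_likely_topic(lda, dictionary, tokens):
    # Annotate a sentence with its single most probable topic label.
    bow = dictionary.doc2bow(tokens)
    return max(lda.get_document_topics(bow), key=lambda pair: pair[1])[0]
```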
5115 agriocnemis zerafica is a species of damselfly in the family coenagrionidae. it is native to africa, where it is widespread across the central and western nations of the continent. it is known by the common name sahel wisp. this species occurs in swamps and pools in dry regions. there are no major threats but it may be affected by pollution and habitat loss to agriculture and development. agriocnemis zerafica EOT specimen count 1 record last modified 21 apr 2016 nmnh -entomology dept. taxonomy animalia arthropoda insecta odonata coenagrionidae collector eldon h. newcomb preparation envelope prep count 1 sex male stage adult see more items in specimen inventory entomology place area 5.12km. ne. dakar, near kamberene; 1:30-4:30 p.m., senegal collection date 21 may 1944 barcode 00342577 usnm number usnment342577 published name agriocnemis zerafica le roi EOP global distribution: the species is known from north-west uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african agriocnemis. record from angola unlikely. northeastern africa distribution: the species was listed by tsuda for sudan. this record needs confirmation. may also occur in kenya as well. EOP very small, about 20mm. orange tail. advised agriocnemis sp. id by kd dijkstra: hard to see details, but i believe this is not a. exilis EOP same creature as previously posted as unknown, very small, about 20mm, over water, top view. advised probably agriocnemis, ”whisp” damselfly. EOP thank you for taking the time to provide feedback on the iucn red list of threatened species website, we are grateful for your input. EOP justification: this is a widespread species with no known major widespread threats that is unlikely to be declining fast enough to qualify for listing in a threatened category. it is therefore assessed as least concern. EOP the species has been recorded from northwest uganda and sudan, through niger to mauritania and liberia: a larger sahelian range, i.e., in more arid zone than other african EOP the main threats to the species are habitat loss due to agriculture, urban development and drainage, as well as water pollution. EOP no conservation measures known but information on taxonomy, population ecology, habitat status and population trends would be valuable. Table 7: Summary (top) and input paragraphs (bottom) from the Animal development dataset. Film Gold Mary Queen of Scots is a 2013 Swiss period drama directed by Thomas Imbach. It is his first film in English and French language starring the bilingual french actress Camille Rutherford. The film portrays the inner life of Mary, the Queen of Scotland. The film is based on austrian novelist Stefan Zweig’s 1935 biography, Mary Stuart, a long-term bestseller in Germany and France but out of print in the UK and the us for decades until 2010. The film was first screened at the 2013 International Film Festival Locarno and was later shown at the 2013 Toronto International Film Festival. QA What does the film portrays? [the inner life of Mary , the Queen of Scotland] At which festival was the film first screened? [2013 International Film Festival Locarno] Who is the author of the novel the film is based on? [Stefan Zweig] TF-S2S Mary Queen of Scots is a 2013 British biographical film based on the life of Mary Queen Mary Mary Queen of Scots. It was directed by Ian Hart and stars Vanessa Redgrave as the title role. It was released in the United Kingdom on 18 april 2013. 
CV-S2S Mary Queen of Scots is a 2013 German drama film directed by Thomas UNK. It was screened in the contemporary world cinema section at the 2013 Toronto International Film Festival. CV-S2D+T Mary Queen of Scots ( german : das UNK der UNK ) is a 2013 German drama film directed by Thomas UNK. The film is based on the life of Mary Ellen of Scots. The film was released in the united states on January 17 , 2013. Table 8: Example of Gold summary, question set and system outputs for the QA evaluation study. 5116 Company Gold Seagull Book, formerly called Seagull Book & Tape, is an American retail chain bookstore focusing on products for members of the Church of Jesus Christ of latter-day Saints (lds church), with over two dozen stores in Utah, Idaho, Arizona, and nevada. It was the second largest lds bookstore until being acquired in 2006 by market-leader deseret book, and since then Seagull has continued to operate as a discount chain, distinct from deseret book branded retail stores. CV-S2S Seagull Book & Tape, Inc. is a book publishing company based in american fork, Utah, United States. It was founded in 1987 by jonathan UNK. CV-S2D+T Seagull Book & Tape, Inc. is an American book retailer with 26 stores throughout Utah, Idaho and California. The company is based in Boise, Idaho. The company is based in Boise, idaho, with its sister company Seagull Book & Tape. Film Gold To Write Love on Her Arms (also known as Day One; formerly Renee) is a 2012 american biographical drama film written and directed by Nathan Frankowski, starring Kat Dennings, Chad Michael Murray, Rupert Friend, Juliana Harkavy, Corbin Bleu and Mark Saul. The film is based on the life of troubled teenager Renee Yohe and the founding of To Write Love on Her Arms by Jamie Tworkowski, after he and others helped Yohe to overcome her challenges enough to be able to enter rehab. The film premiered on march 11, 2012 at the Omaha Film Festival, and was eventually released direct-to-dvd on March 3, 2015. CV-S2S To UNK Love on Her Arms is a 2015 American biographical drama film directed by Renee UNK and written by Renee UNK. The film is based on the true story of a girl whose journey is threatened by her arms. CV-S2D+T To Write Love on Her Arms is a 2015 American biographical drama film directed by Renee UNK. The film is based on the true story of Renee UNK. The film was released in the United States on March 3, 2015. The film is based on the book of the same name by Renee UNK. Animal Gold Compacta Capitalis is a moth in the Crambidae family. It was described by Grote in 1881. It is found in North America, where it has been recorded from Maryland to Florida, West to Texas and possibly Colorado, North to Illinois. The wingspan is about 35 mm. The forewings are forewing are white with a reddish-brown shading at the base and along the inner margin and two black discal spots, as well as an irregular subterminal line. There is a dark apical blotch on both wings. Adults are on wing from May to August. CV-S2S Compacta UNK is a moth in the Crambidae family. It was described by Barnes and McDunnough in 1918. It is found in North America, where it has been recorded from Alabama, Florida, Georgia, Illinois, Indiana, Kentucky, Maine, Maryland, Massachusetts, Minnesota, New Brunswick, New Hampshire, New Jersey, New york, North Carolina, Ohio, Oklahoma, Ontario, Pennsylvania, Quebec, South Carolina, Tennessee, Texas and Virginia. CV-S2D+T Compacta UNK is a moth in the Crambidae family. It was described by Grote in 1878. 
It is found in North America, where it has been recorded from Florida. It is also found in Mexico. The wingspan is about 20 mm. Adults have been recorded on wing from April to September. Table 9: Examples of system output on the development set.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5117–5126 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5117 Morphological Irregularity Correlates with Frequency Shijie Wu Department of Computer Science Johns Hopkins University Baltimore, US [email protected] Ryan Cotterell The Computer Laboratory University of Cambridge Cambridge, UK [email protected] Timothy J. O’Donnell Department of Linguistics McGill University Montr´eal, Canada [email protected] Abstract We present a study of morphological irregularity. Following recent work, we define an information-theoretic measure of irregularity based on the predictability of forms in a language. Using a neural transduction model, we estimate this quantity for the forms in 28 languages. We first present several validatory and exploratory analyses of irregularity. We then show that our analyses provide evidence for a correlation between irregularity and frequency: higher frequency items are more likely to be irregular and irregular items are more likely be highly frequent. To our knowledge, this result is the first of its breadth and confirms longstanding proposals from the linguistics literature. The correlation is more robust when aggregated at the level of whole paradigms—providing support for models of linguistic structure in which inflected forms are unified by abstract underlying stems or lexemes. Code is available at https://github.com/shijie-wu/ neural-transducer. 1 Introduction Irregularity is a pervasive phenomenon in the inflectional morphology of the world’s languages and raises a number of questions about language design, learnability, and change. Nevertheless, irregularity remains an understudied phenomenon and many basic questions remain unanswered (Kiefer, 2000; Stolz et al., 2012). Do all languages exhibit irregularity? What is the relationship between irregularity and frequency? Is irregularity best thought of as a property of individual forms, or a property of more abstract objects like morphological paradigms? In this paper, we examine these questions, focusing in particular on the relationship between irregularity and frequency. One of the fundamental challenges in studying irregularity is defining the phenomenon in a way that is applicable across languages. We begin the paper by addressing this question. First, we formalize the problem of inflectional morphology and present a novel, information-theoretic measure of the degree of irregularity of an inflected form. This definition builds on recent work that defines (ir)regularity in terms of the probabilistic predictability of a form given the rest of the language (Cotterell et al., 2018a; Ackerman and Malouf, 2013). Making use of a state-of-the-art model of morphological inflection, we estimate our measure of irregularity across a large number of word forms from 28 languages drawn from the UniMorph database (Kirov et al., 2018). Based on these estimates we perform three studies. First, we validate our estimates by examining the predictions on English past tense forms— showing that the model’s predicts accord with human judgements of irregularity. We also examine the overall rate of accuracy of our model. Second, we examine the degree of irregularity across languages, showing that the model predicts wide variance in the average amount of irregularity between the languages in our sample. 
Finally, we provide empirical evidence for a correlation between irregularity and frequency across languages. While this relationship has been observed for individual languages (e.g., English: Marcus et al., 1992; Bybee, 1985), this is the first confirmation of the effect across this many languages. This result is especially relevant given recent discussions calling the relationship into question (e.g., Fratini et al., 2014; Yang, 2016). We find, furthermore, that the correlation between irregularity and frequency is much more robust when irregularity is considered as a property of whole lexemes (or stems/paradigms) rather than as a property of individual word forms. We discuss the implications of these findings. 5118 2 Formalizing Inflectional Morphology In this work, each word type is represented as a triple consisting of the following components: • A lexeme1 ℓ: An arbitrary integer or string that indexes an abstract word (e.g., GO, which provides an index to forms of the verb go such as goes and went). • A slot σ: An arbitrary integer, string, or more structured object that indicates how the word is inflected (e.g., [pos=v, tns=past, person=3rd, num=sg] for the form went). • A surface form w: A string over a fixed phonological or orthographic alphabet Σ (e.g., went). A paradigm ℓ(boldface ℓ) is a lexeme-specific map from slots to surface forms for lexeme ℓ.2 Typically, slots are indexed by structured entities— known as morpho-syntactic feature vectors or morpho-syntactic tags—represented by a set of key-value pairs: σ = [k1=v1, . . . , kn=vn]. For example, the English verb form runs, which has the feature vector [tns=pres, per=3rd, num=sing]. In what follows, the keys ki and the corresponding values vi are taken from the universal inventory, defined by the UniMorph annotation scheme and denoted M (Kirov et al., 2018). We use dot notation to refer to specific forms or sets of forms in a paradigm indexed by some slot GO.past = went. Given the pieces just sketched, a complete model of inflectional morphology will specify a joint distribution over surface forms, lexemes, and slots, that is P(w, ℓ, σ), or one of its associated conditional distributions, such as P(ℓ, σ | w)—the distribution over lexemes and features, given a surface form; or P(w | ℓ, σ)—the conditional probability of a surface form given a lexeme and inflectional features. In this paper, we will focus on the latter, defining a probabilistic model to approximate this distribution and using that to estimate degrees of irregularity. 1This terminology is characteristic of word-and-paradigm approaches to morphology. In item-and-arrangement approaches, this might be called the stem (Hockett, 1954). 2See (Baerman et al., 2015, Part II) for a tour of alternative views of inflectional paradigms. 3 Operationalizing Irregularity The informal distinction between regular and irregular forms is an important one for many theories of grammar (e.g., Siegel, 1974), language processing (e.g., Hay, 2003), and language acquisition (e.g., Pinker, 1999; Marcus et al., 1992; McClelland and Patterson, 2002a,b; Pinker and Ullman, 2002b,a; Rumelhart and McClelland, 1986; Prasada and Pinker, 1993; Pinker and Prince, 1988). However, there have been few proposals for how the notion can be characterized precisely or measured quantitatively. 
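To keep the formal setup of Section 2 concrete before turning to the measure itself, a minimal sketch of the representation is given below: a word type is a (lexeme, slot, surface form) triple and a paradigm is a lexeme-specific map from slots to surface forms. The tag strings follow UniMorph conventions only loosely and are illustrative, not drawn from the released code.

```python
# Minimal sketch of word types and paradigms as defined in Section 2.
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class WordType:
    lexeme: str   # e.g. "GO"
    slot: str     # morpho-syntactic tag, e.g. "V;PST"
    form: str     # surface string, e.g. "went"

Paradigm = Dict[str, str]   # slot -> surface form, for a single lexeme

GO: Paradigm = {
    "V;NFIN": "go",
    "V;PRS;3;SG": "goes",
    "V;PST": "went",
    "V.PTCP;PST": "gone",
}

assert GO["V;PST"] == "went"   # the paper's dot notation: GO.past = went
```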
Clearly, the regularity of a form (or rule) can only be defined with respect to the language as a whole—what makes something irregular is that it does not behave in the way that would be expected given other forms in the language. But what is meant by expected? Here, we follow recent work by defining the notion of expectedness in terms of a probabilistic model of inflection which approximates P(w | ℓ, σ) (Cotterell et al., 2018a; Ackerman and Malouf, 2013). However, there remains a wrinkle. A form like went is highly expected as the past tense of GO for an adult speaker of English, but is also irregular. How do we capture this? We take the correct notion of expectedness to be the expectedness of the word form treated as if it were the first instance of that lexeme which had been observed. Thus, we base our measures of regularity on the conditional probability of a word type w given the rest of the forms in the language with the target lexeme removed. P(w | ℓ, σ, L−ℓ) (1) Of course, since the target language L is generally infinite, we will need to make use of some model-based estimate of this probability pθ(w | ℓ, σ, L−ℓ). In essence, our definition of irregularity is based on wug-testing (Berko, 1958) such a probabilistic model to see how robustly it generalizes to the target form w. In practice, we will estimate this quantity by performing a holdout evaluation of the target form under our model. More irregular forms will tend to have a lower wug-test probability P(w | ℓ, σ, L−ℓ) than most regular forms. However, the absolute value of such a probability is not directly interpretable. To turn these probabilities into interpretable values which directly measure irregularity, we take the negative log odds of the probability of the correct word 5119 form. ι(w) = −log  P(w | ℓ, σ, L−ℓ) 1 −P(w | ℓ, σ, L−ℓ)  (2) We refer to this quantity as the degree of irregularity of a form. If probability of the correct form w is exactly 0.5, then eq. (2) will be 0. However, if P(w | ℓ, σ, L−ℓ) > P w′̸=w P(w′ | ℓ, σ, L−ℓ), then eq. (2) will be negative. Otherwise, the quantity is positive. In other words, the metric is more strongly positive when a form is less predictable given other forms in the language and more strongly negative when a form is more strongly predictable. The midpoint at 0 occurs when there is an equal amount of probability mass on the correct form and all other forms. Note that this definition of ι neatly addresses several challenges in studying the notion of (ir)regularity. First, it doesn’t require us to define a binary notion of regular versus irregular or even to explicitly define any such notion at all—a model may treat regularity as an implicit rather than explicit feature of a form or paradigm. Second, and relatedly, we do not require data annotated with the regularity of forms to train or test our model. Third, this definition inherently captures the idea of degree of regularity, for instance, capturing the distinction between wholly suppletive forms such as went and semi-productive inflectional classes such as ring/rang, sing/sang, etc. Fourth and finally, regularity is known to be correlated with other features of morphological structure, such as productivity. Our definition sidesteps the tricky issue of disentangling these different properties of inflection. Note that our definition of ι conditions on L−ℓ— the language without the target lexeme—rather than on L−w—the language without the target word. 
Thus, we are measuring the probability that the model will generalize to the correct form without any evidence of a lexeme at all. Thus, we rule out predictability that comes from similar forms within a paradigm ℓ. For example, in our approach a model cannot make use of the irregularity of the past tense form ring to guess that the past participle form was more likely to be rung. We discuss the implications of this assumption in more detail below §5.4. poner pongo pongas ponga pongan pondr´ıas pondr´ıais pondr´ıan pondr´ıas Figure 1: Lemma paradigm tree 4 Modeling Morphological Inflection Our goal is to estimate P(w | ℓ, σ, L−ℓ) from data. We do this by using a structured probabilistic model of string transduction which we call pθ. In the following sections, we describe this model, how we handle syncretism in the model, our training (holdout and test) scheme, and our estimates of the degree of irregularity ι. 4.1 A Lemma-Based Model In linguistic morphology, a major division is between item-and-arrangement or morpheme-based models and word-and-paradigm or word-based models (Hockett, 1954). Following (Cotterell et al., 2017b), we adopt a word-based approach. To do this, we designate a unique surface form for each paradigm ℓknown as the lemma. The lemma is associated with a slot which we notate ˇσ: ℓ.ˇσ ∈Σ∗. The lemma can be thought of as a dictionary or citation form of a word and is traditionally chosen by lexicographers of a language. For example, in many Western European languages the lemma of verb forms is the infinitive. Figure 1 shows several of the forms of the Spanish verb poner (“to put”) organized around the lemma form. In what follows, we use the lemma to identify lexemes, and wherever a probability distribution would condition on the abstract lexeme ℓwe instead condition on the lemma ℓ.ˇσ. Our probabilistic model of string transduction pθ is a monotonic model with hard attention described in Wu and Cotterell (2019) and can be viewed as a graphical model over strings like the one shown in 5120 SG PL SG PL NOM Wort W¨orter Herr Herren GEN Wortes W¨orter Herrn Herren ACC Wort W¨orter Herrn Herren DAT Worte W¨ortern Herrn Herren Table 1: Full paradigms for the German nouns Wort (“word”) and Herr (“mister”) with abbreviated and tabularized UniMorph annotation. The syncretic forms are bolded and colored by ambiguity class. Note that, while in the plural, the nominative and accusative are always syncretic across all paradigms, the same is not true in the singular. Figure 1. It is expressed as follows. pθ(w | ℓ.ˇσ, σ, L−ℓ) = X a∈A(w,ℓ.ˇσ) pθ(w, a | ℓ.ˇσ, σ, L−ℓ). (3) The definition of the model includes a sum over all monotonic (non-crossing) alignments A(w, ℓ.ˇσ) between the lemma ℓ.ˇσ and the output surface form w. The inner term of this sum is estimated using a sequence to sequence model. The sum itself is computable in polynomial time using a variant of the forward algorithm (Rabiner, 1989). The model achieves state-of-the-art performance on the SIGMORPHON 2017 shared task on morphological reinflection (Cotterell et al., 2017a). We follow the hyperparameter used by Wu and Cotterell (2019). 4.2 Handling Syncretism Many inflectional systems display syncretism— the morphological phenomenon whereby two slots with distinct morpho-syntactic tags may have an identical surface form. In contrast to many models of inflectional morphology, we collapse syncretic forms of a word into a single paradigm slot, thereby assuming that every every surface form w in a paradigm is distinct. 
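A minimal sketch of this collapsing step is given below, using the German noun Herr from Table 1; the formal treatment via the maps Cℓ follows. The merged-slot naming convention and tag strings are our own illustrative choices, not the released preprocessing code.

```python
# Sketch of syncretism collapse: paradigm cells that share a surface form are
# merged into a single slot, so every remaining form in the paradigm is unique.
from collections import defaultdict
from typing import Dict

def collapse_syncretism(paradigm: Dict[str, str]) -> Dict[str, str]:
    # paradigm: slot -> surface form (possibly with repeated forms)
    by_form = defaultdict(list)
    for slot, form in paradigm.items():
        by_form[form].append(slot)
    # one merged slot per distinct surface form
    return {"|".join(sorted(slots)): form for form, slots in by_form.items()}

herr = {
    "N;NOM;SG": "Herr", "N;GEN;SG": "Herrn", "N;ACC;SG": "Herrn",
    "N;DAT;SG": "Herrn", "N;NOM;PL": "Herren", "N;GEN;PL": "Herren",
    "N;ACC;PL": "Herren", "N;DAT;PL": "Herren",
}
assert len(collapse_syncretism(herr)) == 3   # Herr, Herrn, Herren
```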
An example of such a collapsed paradigm in German is given in Table 1. Our formalization includes a slot that merges the genitive, accusative and dative singular into a single slot due to the word Herr. To accomplish this we assume that each lexeme is associated with a set of syncretism classes denoted by Cℓ. Cℓ : M → M is a map from a slot σ to a citation-form slot σ′ which indexes the canonical surface citation form for that combination of features. Cℓ is used to collapse paradigm cells with identical surface forms. For instance, all forms of the lexeme GO are realized as went in the English past tense, regardless of person and number features; thus, for example, CGO([tns=past, per=3rd, num=sing]) = CGO([tns=past, per=2nd, num=plural]). We say that two lexemes ℓ and ℓ′ are syncretically equivalent if Cℓ(σ) = Cℓ′(σ) for all σ. We assume the mappings Cℓ are known and given in advance in what follows. We will use this syncretism-collapsed representation for all simulations below. In particular, this assumption will allow us to simply count the surface forms of each word in Wikipedia without dealing with the tricky issue of assigning individual words to the correct combination of morphosyntactic features (see Cotterell et al., 2018b, for detailed discussion).

4.3 Handling Derived Forms
As discussed above, we hold out whole lexemes, including all of their inflected forms, during training. However, derivational morphology presents a potential challenge for this approach. Consider the irregular verb do/did/done. This verb appears in a number of derived prefixed forms such as redo and undo. These forms all inflect identically to the base form do—for example, redo/redid/redone.3 If we train our probability model on such derived forms, it is likely to estimate too high a wug-test probability for all forms which are built from the shared stem. To obviate this problem, we remove all derived forms from the data we consider. To do so, we develop a heuristic approach to isolate all words that may have been derived from another. Note that a key desideratum of the heuristic is that it should be high precision with respect to finding derivational transformations—we would rather over-exclude forms as potentially derived from another than leave a derived form in the data. We consider a lexeme ℓ′ to be derived from a lexeme ℓ if and only if there is a string s ∈ Σ+ such that (∀σ)[ℓ′.σ = ℓ.σ · s] or (∀σ)[ℓ′.σ = s · ℓ.σ], where s · t denotes string concatenation of strings s and t. For example, DO and REDO satisfy this condition, while SING and RING do not. We perform a search for candidate s for all pairs of lexemes in each language and remove all ℓ′ that meet this criterion.
3An anonymous reviewer points out that in some languages, such as Dutch, forms derived from irregular verbs become regular (e.g., zeggen/zei but toezeggen/toezegde). In those languages, it should be unnecessary to apply our heuristic approach.

4.4 Measuring Irregularity
With the above definitions in place, we can define an approximation to our degree of irregularity ι:

\iota(w) = -\log \frac{p_\theta(w \mid \ell.\check{\sigma}, \sigma, L_{-\ell})}{1 - p_\theta(w \mid \ell.\check{\sigma}, \sigma, L_{-\ell})} \quad (4)

In our analyses below, we will also wish to measure the irregularity of lexemes as a whole. To do this, we take the average irregularity score over the entire paradigm ℓ, excluding the lemma:

\iota(\ell) = \frac{1}{|\ell| - 1} \sum_{\{(w, \sigma, \ell) \in \ell \,:\, w \neq \ell.\check{\sigma}\}} -\log \frac{p_\theta(w \mid \ell.\check{\sigma}, \sigma, L_{-\ell})}{1 - p_\theta(w \mid \ell.\check{\sigma}, \sigma, L_{-\ell})} \quad (5)

5 Studies of Irregularity
The empirical portion of our work consists of three studies. We first validate and examine the accuracy of the model (§5.2.1).
Second, we examine the distribution of irregularity across the languages in our sample (§5.3). Finally, we examine the correlation between irregularity and frequency (§5.4). Before presenting these studies we first give an overview of the data and simulations common to all of them. 5.1 Simulations Data Provenance. All word forms, paradigms, and morphosyntactic features are taken from the UniMorph project (Kirov et al., 2018). Specifically, we examine the following 28 languages: Albanian, Arabic, Armenian, Basque, Bulgarian, Czech, Danish, Dutch, English, Estonian, French, German, Hebrew, Hindi, Irish, Italian, Latvian, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish, Ukrainian, Urdu, and Welsh. The languages come from 4 stocks (Indo-European, AfroAsiastic, Finno-Urgic and Turkic) with Basque, a language isolate, included as well. Although this sample represents a reasonable degree of typological diversity, the Indo-European family is overrepresented in the UniMorph dataset, as is the case for most current multilingual corpora. However, within the Indo-European family, we consider a diverse set of subfamilies: Albanian, Armenian, Slavic, Germanic, Romance, Indo-Aryan, Baltic, and Celtic. For each subfamily, we sample subset of languages randomly. All of our form-level frequencies were computed from Wikipedia.4 Lexeme counts are achieved by summing over all entries in the paradigm associated with a lexeme. In all simulations, we predict the orthographic form of the target word w from the orthographic form of the lemma ℓ.ˇσ as a proxy for phonological transcriptions which do not exist for our all languages in UniMorph. Lexeme-Based Cross Validation. In the studies that follow, we train a separate instance of our model on the forms in each language using the following procedure. We first remove morphologically-complex forms that are derived from other lemmas in the corpus using the heuristic technique described in §4.3. We then randomly assign the remaining lexemes of each language to one of ten splits. Note that each split will contain all of the forms associated with each lexeme and a lexeme will never be divided across splits. We then perform 10-fold cross-validation, training the model pθ on 8 splits, tuning on one of the remaining two splits, and testing on the final remaining split. Note that this approach to cross-validation allows us to approximate L−ℓwithout the costly procedure of retraining for every held-out lexeme. However, also note that this approach has a potential confound. Lexemes can often be grouped into inflectional classes in which all lexemes mark different slots in the same way. For example, verbs such as sing/sang/sung and ring/rang/rung form an inflectional class in English. Inflectional classes vary in their size and regularity (Stump, 2001). If all or most lexemes in the same irregular inflectional class end up together in the test split under our approach, we may systematically overestimate their irregularity. 5.2 Validation and Accuracy 5.2.1 Validation on English Verbs The first question we wish to ask is whether the irregularity predictions made by our model are consistent with human intuitions. To answer this question, we examine the predictions of our model on the English past tense—a morphological system which has been intensely studied for decades (see Pinker, 1999, for overview) and for which there is general agreement about which forms are regular 4Wikipedia data retrieved on Feb 1st, 2019. 
5122 Albright and Hayes (2003) O’Donnell (2015) 0.670 0.559 Table 2: Validation of our irregularity metric. Spearman’s ρ between gold-standard irregularity annotations from Albright and Hayes (2003) and O’Donnell (2015) and our irregularity metric. or irregular. We make use of the databases of Albright and Hayes (2003) which consists of 4039 English verb forms and the dataset of O’Donnell (2015) which consists of 15202 verb forms, both hand-annotated for irregularity by experts. We present our results in Table 2. We find that our measure of irregularity strongly correlates with human intuitions on English verbs. We take this as tentative validation of our metric. Future work will investigate the linguistic plausibility of our metric on a greater diversity of languages. 5.2.2 Wug-Test Accuracy Language Family Avg. Accuracy Lexemes Forms Avg. Forms/Lexeme Albanian Indo-European 0.83 537 26993 50.4 Arabic Semitic 0.63 3559 89879 25.5 Armenian Indo-European 0.95 4614 144841 31.4 Basque Isolate 0.01 26 10382 441.9 Bulgarian Slavic 0.94 2042 36007 17.7 Czech Slavic 0.92 4470 61251 13.8 Danish Germanic 0.65 2580 19968 7.8 Dutch Germanic 0.94 3932 20680 5.3 English Germanic 0.95 9915 40210 4.1 Estonian Uralic 0.79 817 31711 38.9 French Romance 0.86 5378 195638 37.4 German Germanic 0.92 14739 69190 4.7 Hebrew Semitic 0.78 492 11240 23.3 Hindi Indo-Aryan 0.74 254 26404 104.0 Irish Celtic 0.85 6527 69551 10.7 Italian Romance 0.99 6495 269908 41.9 Latvian Baltic 0.97 5347 60146 11.9 Persian Iranian 0.70 271 26336 98.3 Polish Slavic 0.93 8317 106914 13.0 Portuguese Romance 0.98 2621 138372 52.9 Romanian Romance 0.78 3409 51670 15.3 Russian Slavic 0.95 19991 243748 12.2 Spanish Romance 0.97 3904 232676 59.9 Swedish Germanic 0.89 6451 43118 6.7 Turkish Turkic 0.85 2697 150477 55.9 Ukrainian Slavic 0.86 1426 13844 9.8 Urdu Indo-Aryan 0.38 180 5581 31.0 Welsh Celtic 0.41 179 9083 50.8 Table 3: Accuracy per language. Our lexeme-based cross-validation setup differs substantially from the form-based setup typically used to evaluate models of inflectional morphology (see, e.g., Cotterell et al., 2017a). In the typical evaluation setup, individual surface word forms are heldout, rather than all of the forms associated with entire lexemes. This means, amongst other things, that words from irregular lexemes will often be split between test and train, giving models an opportunity to learn partially productive and Figure 2: Average degree of irregularity ι across languages. semi-regular patterns of inflection. Our approach however makes this impossible by strictly assigning all forms from each lexeme to either train or test. It is important to ask, therefore, how well does our model predict the forms of heldout lexemes given this stricture? The results are displayed in Table 3. This table displays the average accuracy for each language in our sample as well as the number of lexemes for that language, the total number of forms, and the average number of forms per lexeme. The majority of languages show very high generalization accuracy to our lexeme-based wug-tests: 21 out of 28 have an average accuracy of 75% or higher. Three languages stand out in terms of their low accuracy and are highlighted in Table 3: Basque, Urdu, and Welsh. These languages, Basque especially, are characterized by smaller numbers of lexemes and larger numbers of forms per lexeme. In the §5.4, we discuss the correlation between irregularity and frequency. 
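As a concrete illustration of the computations behind Tables 2 and 3, the following sketch shows Spearman's ρ against expert irregularity annotations (§5.2.1) and exact-match wug-test accuracy per language (§5.2.2). The input record formats and variable names are assumptions, not the released evaluation code.

```python
# Sketch of the validation correlation and per-language accuracy computations.
from collections import defaultdict
from scipy.stats import spearmanr

def validation_correlation(gold_irregularity, iota_scores):
    # gold_irregularity: expert labels (e.g., 1 = irregular, 0 = regular),
    # iota_scores: model-based degrees of irregularity, aligned by verb form
    rho, p_value = spearmanr(gold_irregularity, iota_scores)
    return rho, p_value

def accuracy_per_language(predictions):
    # predictions: iterable of (language, predicted_form, gold_form) for held-out lexemes
    correct, total = defaultdict(int), defaultdict(int)
    for language, predicted, gold in predictions:
        total[language] += 1
        correct[language] += int(predicted == gold)
    return {lang: correct[lang] / total[lang] for lang in total}
```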
The interpretation of these results relies on the ability of our model to accurately capture regular structure in the inflectional systems of the languages that we study. For this reason, we make the conservative choice to exclude all languages whose average accuracy was below 75% from all further analyses below. 5.3 Irregularity across Languages It is often observed that there are differences in the prevalence of irregularity across languages (Stolz et al., 2012). On one end of the spectrum, some languages have widespread (often suppletive) allomorphy in their marking of inflectional features. For example, Arabic marks plurality on nouns in 5123 one of more than a dozen different ways and these are idiosyncratic to the noun stem. Similarly, Georgian verbs often have different roots depending on their tense, aspect, or mood marking. On the other end of the spectrum, it is sometimes claimed that agglutinative languages like Turkish exhibit no irregularity whatsoever. Figure 2 displays the average irregularity score per language for the 21 languages remaining after our 75% accuracy criterion. Recall from eq. (2) that the degree of irregularity ι is positive when the majority of predicted probability mass falls on forms that are not the correct target form (i.e., the form is irregular), and negative when the majority of probability mass falls on the predicted form (i.e., the form is regular). As can be seen from the figure, average irregularity is negative across languages. This is expected—most forms in these languages are predicted accurately by the model. However, there is wide variability in the average irregularity score between languages. In particular, in the most regular language, Portuguese, correct forms are about 25,000 times more likely on average than alternative forms. In the most irregular language, Hebrew, correct forms are only about 16 times more likely on average than alternative forms. We leave it to future work to validate and further study these cross-linguistic differences in irregularity predictions. 5.4 Irregularity and Frequency In some morphological systems, such as the English past tense, there is a strong and well-known correlation between irregularity and frequency is well-known (Marcus et al., 1992; Pinker, 1999). In such systems, the most frequent past forms tend to be irregular and irregular forms tend to come from the most frequent verbs. Based on cases like this, it is widely believed in linguistics and psycholinguistics that there is an association between frequency and irregularity (Bybee, 1991; Haspelmath and Sims, 2010; Kiefer, 2000). However, to our knowledge, this relationship has never been explicitly tested quantitatively across many languages at once. Recently, several authors have questioned the received wisdom that irregularity and frequency are related (Yang, 2016; Fratini et al., 2014).5 Thus, it has become important to test this relationship empirically. An example of such a challenge to 5But see Herce (2016). Figure 3: Correlations between irregularity and frequency at the form level. the standard assumption comes from Yang (2016) who proposed an influential theory of morphological productivity known as the tolerance principle. The mathematical derivation of the tolerance principle relies on the assumption that irregular forms are uniformly distributed throughout the frequency range (Yang, 2016).6 Here we present the first study to probe the relationship between irregularity and frequency at scale. 
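As a concrete preview of the analysis that follows, the sketch below shows how a per-form irregularity score and its correlation with log frequency could be computed. Since eq. (2) is not reproduced here, the score is simply the log-ratio of the probability mass the wug-trained model assigns to competitor forms versus the attested form, which matches the sign convention described above; the paper's exact normalization may differ, and all names are assumptions for illustration.

```python
import math
from scipy.stats import pearsonr

def irregularity(p_target):
    """Positive when most predicted probability mass falls on forms other
    than the attested target, negative when the target dominates
    (an assumed stand-in for the paper's eq. (2))."""
    return math.log((1.0 - p_target) / p_target)

def form_level_correlation(forms):
    """`forms`: iterable of (model probability of the attested form,
    corpus count) pairs for one language; zero-count forms are excluded,
    as in the analysis below."""
    pairs = [(irregularity(p), math.log(c)) for p, c in forms if c > 0]
    iota, log_freq = zip(*pairs)
    return pearsonr(iota, log_freq)   # (r, p-value) for this language
```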
We first examine the relationship between the degree of irregularity ι and the frequency of individual word forms. To study this question, we examined the Pearson correlation between the logtransformed frequency of word forms in each language and their predicted irregularity scores ι(w). Because word occurrences fall into the class of large number of rare event distributions, finite samples will tend to underestimate the probability of infrequent words—word forms that appear 0 times in some sample often differ by orders of magnitude in their true probability (Chitashvili and Baayen, 1993; Baayen, 2001). For this reason, we chose to exclude all frequency 0 forms from our analyses. The correlations for the 21 languages considered in this study are shown in Figure 3 with significant correlations (p < 0.05) marked in blue. Overall, a slight trend towards a positive correlation between irregularity and frequency is discernible in this set of word forms. Following Mahowald et al. (2018), we tested this by fitting a mixed-effect model with irregularity as the dependent variable, language as a random effect (slopes and intercepts) and log count as a fixed effect (Gelman and Hill, 2007). The 6Yang tentatively proposes that the correlation between frequency and irregularity might be accidental in languages such as English. He argues, however, that his theory is not contingent on this being the case (Yang, 2016, pp. 65). 5124 Figure 4: Correlations between irregularity and frequency at the lexeme level. results give a positive coefficient of 0.064 for the log count factor. The AIC-corrected log-odds ratio in favor of the model with a fixed effect of count (compared to a model with just random effects) is 3.44. A nested-model likelihood-ratio χ-squared test shows that the log factor is significant with p < 0.04. An important question about irregularity is whether it is a property of individual forms, or rather whether it inheres to whole paradigms (Baerman et al., 2010; Stolz et al., 2012; Herce, 2016). To examine this question more closely, we ran an alternative correlational analysis examining the correlation between the sum of the counts of all forms associated with a lexeme and the average irregularity score for all forms associated with the lexeme (as in eq. (5)). Figure 4 shows the results. Overall, a stronger trend towards a positive correlation between irregularity and frequency is discernible at the lexeme level than at the word-form level. We tested this by fitting a mixed-effect model with irregularity as the dependent variable, language as a random effect (slopes and intercepts) and log count as a fixed effect. The models gives a positive coefficient of 0.14 for the log count factor. The AIC-corrected log-odds ratio in favor of the model with a fixed effect of count (compared to a model with just random effects) is 11.8. A nested-model likelihood-ratio χ-squared test shows that the log count factor is significant with p < 0.001. Thus, the correlation between irregularity and frequency is considerably more robust when considered at the lexeme level. 6 Conclusion In this paper, we have introduced a measure of irregularity based on wug-testing a model of morphological inflection. In §5.2.1, we showed that this measure produces results that are consistent with human judgements. Focusing on a subset of the languages for which the model was able to recover the correct inflected forms at a high rate (§5.2.2), we showed that average irregularity varies a good deal between languages. 
This result is consistent with the findings of Cotterell et al. (2018a) which gave large scale empirical evidence of a tradeoff between the size of morphological paradigms and the predictability of individual forms within each paradigm. The main novel empirical result of our paper was presented in §5.4 which showed that irregularity is correlated with frequency both at the level of individual forms as well as at the level of lexemes. To our knowledge, this is the first large-scale empirical demonstration of this piece of linguistic folk wisdom and provides evidence relevant to recent proposals questioning this generalization (Fratini et al., 2014; Yang, 2016). Perhaps of greater interest than this positive result is the difference in the strength of the correlation between the level of individual forms and the level of lexemes. This difference appears to be driven by the fact that, in many cases, lexemes that contain high-frequency forms will also contain a few low frequency forms as well. Adopting the terminology of Yang (2002), we can say that low frequency forms free-ride on the higher frequency members of the lexeme. This finding lends credence to models of linguistic structure which group words together by their lexeme or stem. Such models seem necessary to account for paradigmatic structure cross linguistically and to deal with phenomena such as the existence of defective paradigms—the phenomenon whereby certain inflected forms of a word seem to be impossible for speakers (Baerman et al., 2010). A canonical example is the past participle of stride (e.g., ∗strode/∗stridden/∗strided). In these cases, the problem seems to be that the irregularity of the overall lexeme is known, but the particular word form has never been observed. Our results provide further support for the view that inflected forms represent surface exponence of common underlying morphological objects. More generally, we observe that our wug-test 5125 techniques provides a general way of studying regularity and predictability within languages and may prove useful for attacking other difficult problems in the literature, such as detecting inflectional classes. By measuring which words or lexemes are most predictable from one another, a general picture of morphological relatedness within a language can be built in a bottom-up way. Acknowledgments The third author gratefully acknowledges support from the Fonds de Recherche du Qu´ebec—Soci´et´e et Culture and the Natural Sciences and Engineering Research Council of Canada. References Farrell Ackerman and Robert Malouf. 2013. Morphological organization: The low conditional entropy conjecture. Language, 89(3):429–464. Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119– 161. R. Harald Baayen. 2001. Word Frequency Distributions. Springer, Berlin, Germany. Matthew Baerman, Dunstan Brown, and Greville G. Corbett. 2015. Understanding and measuring morphological complexity: An introduction. Oxford University Press. Matthew Baerman, Greville G. Corbett, and D. P. Brown. 2010. Defective Paradigms: Missing forms and what they tell us. Oxford University Press, Oxford, England. Jean Berko. 1958. The child’s learning of English morphology. Word, 14:150–177. Joan L. Bybee. 1985. Morphology: A Study of the Relation between Meaning and Form. John Benjamins, Amsterdam. Joan L. Bybee. 1991. Natural morphology: The organization of paradigms and language acquisition. In Thom Huebner and Charles A. 
Ferguson, editors, Cross Currents in Second Language Acquisition and Linguistic Theory. John Benjamins Publishing Company. Revas J. Chitashvili and R. Harald Baayen. 1993. Word frequency distributions. Quantitative Text Analysis, pages 54–135. Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2018a. On the complexity and typology of inflectional morphological systems. Transaction of the Association for Computational Linguistics (TACL). Ryan Cotterell, Christo Kirov, Sebastian J. Mielke, and Jason Eisner. 2018b. Unsupervised disambiguation of syncretism in inflected lexicons. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 548–553. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G˙eraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K¨ubler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017a. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 1–30. Association for Computational Linguistics. Ryan Cotterell, John Sylak-Glassman, and Christo Kirov. 2017b. Neural graphical models over strings for principal parts morphological paradigm completion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL2017). Viviana Fratini, Joana Acha, and Itziar Laka. 2014. Frequency and morphological irregularity are independent variables. Evidence from a corpus study of Spanish verbs. Corpus Linguistics and Linguistic Theory, 10(2):289 –314. Andrew Gelman and Jennifer Hill. 2007. Data Analysis using Regression and Multilevel/Hierarchical Models. Cambridge University Press, Cambridge. Martin Haspelmath and Andrea D. Sims. 2010. Understanding Morphology. Hodder Education. Jennifer Hay. 2003. Causes and Consequences of Word Structure. Routledge, New York, NY. Borja Herce. 2016. Why frequency and morphological irregularity are not independent variables in Spanish: A response to Fratini et al. (2014). Corpus Linguistics and Linguistic Theory, 12(2). Charles F. Hockett. 1954. Two models of grammatical description. Word, 10:210–231. Ferenc Kiefer. 2000. Regularity. In Morphologie: Ein internationales Handbuch zur Flexion und Wortbildung/Morphology: An international Handbook on Inflection and Word-Formation. Walter de Gruyter, Berlin. Christo Kirov, Ryan Cotterell, John Sylak-Glassman, G´eraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian Mielke, Arya D McCarthy, Sandra K¨ubler, et al. 2018. Unimorph 2.0: Universal morphology. arXiv preprint arXiv:1810.11101. 5126 Kyle Mahowald, Isabelle Dautriche, Edward Gibson, and Steven Thomas Piantadosi. 2018. Word forms are structured for efficient use. Cognitive Science, 42(8):3116–3134. Gary F. Marcus, Steven Pinker, Michael T. Ullman, Michelle Hollander, T. John Rosen, and Fei Xu. 1992. Overregularization in Language Acquisition. Monographs of the society for research in child development. University of Chicago Press, Chicago, IL. James L. McClelland and Karalyn Patterson. 2002a. Rules or connections in past-tense inflections: What does the evidence rule out? Trends in Cognitive Sciences, 6(11):465–472. James L. McClelland and Karalyn Patterson. 2002b. ‘Words or Rules’ cannot exploit the regularity in exceptions. 
Trends in Cognitive Sciences, 6(11):464– 465. Timothy J. O’Donnell. 2015. Productivity and Reuse in Language: A Theory of Linguistic Computation and Storage. The MIT Press, Cambridge, Massachusetts. Steven Pinker. 1999. Words and Rules. HarperCollins, New York, NY. Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28:73–193. Steven Pinker and Michael T. Ullman. 2002a. Combination and structure, not gradedness, is the issue. Trends in Cognitive Sciences, 6(11):472–474. Steven Pinker and Michael T. Ullman. 2002b. The past and future of the past tense debate. Trends in Cognitive Sciences, 6(11):456–463. Sandeep Prasada and Steven Pinker. 1993. Generalisation of regular and irregular morphological patterns. Language and Cognitive Processes, 8(1):1–56. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 286. David E. Rumelhart and James L. McClelland. 1986. On learning the past tenses of English verbs. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition., volume 2, pages 216– 271. Bradford Books/MIT Press, Cambridge, MA. Dorothy Siegel. 1974. Topics in English Morphology. Ph.D. thesis, Massachusetts Institute of Technology. Thomas Stolz, Hitomi Otsuka, Aina Urdze, and Johan van der Auwera. 2012. Introduction: Irregularity — glimpses of a ubiquitous phenomenon. In Thomas Stolz, Hitomi Otsuka, Aina Urdze, and Johan van der Auwera, editors, Irregularity in Morphology (and Beyond), pages 7–38. Akademie Verlag, Berlin, Germany. Gregory T. Stump. 2001. Inflection. In Handbook of Morphology. Blackwell, Oxford, England. Shijie Wu and Ryan Cotterell. 2019. Exact hard monotonic attention for character-level transduction. arXiv preprint arXiv:1905.06319. Charles D. Yang. 2002. Knowledge and Learning in Natural Language. Oxford linguistics. Oxford University Press, New York. Charles D. Yang. 2016. The Price of Productivity: How Children Learn to Break the Rules of Language. The MIT Press, Cambridge, Massachusetts.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5127–5136 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5127 Like a Baby: Visually Situated Neural Language Acquisition Alexander G. Ororbia*1,2, Ankur Mali *1,2, Matthew A. Kelly1, and David Reitter1,3 (1) The Pennsylvania State University, University Park, PA, USA (2) Rochester Institute of Technology, Rochester, NY, USA (3) Google Research, New York City, NY, USA [email protected], [email protected], [email protected], [email protected] Abstract We examine the benefits of visual context in training neural language models to perform next-word prediction. A multi-modal neural architecture is introduced that outperform its equivalent trained on language alone with a 2% decrease in perplexity, even when no visual context is available at test. Fine-tuning the embeddings of a pre-trained state-of-theart bidirectional language model (BERT) in the language modeling framework yields a 3.5% improvement. The advantage for training with visual context when testing without is robust across different languages (English, German and Spanish) and different models (GRU, LSTM, ∆-RNN, as well as those that use BERT embeddings). Thus, language models perform better when they learn like a baby, i.e, in a multi-modal environment. This finding is compatible with the theory of situated cognition: language is inseparable from its physical context. 1 Introduction The theory of situated cognition postulates that a person’s knowledge is inseparable from the physical or social context in which it is learned and used (Greeno and Moore, 1993). Similarly, Perceptual Symbol Systems theory holds that all of cognition, thought, language, reasoning, and memory, is grounded in perceptual features (Barsalou, 1999). Knowledge of language cannot be separated from its physical context, which allows words and sentences to be learned by grounding them in reference to objects or natural concepts on hand (see Roy and Reiter, 2005, for a review). Nor can knowledge of language be separated from its social context, where language is learned interactively through communicating with others to facilitate problem-solving. Simply put, language does not occur in a vacuum. Yet, statistical language models, typically connectionist systems, are often trained in such a vacuum. Sequences of symbols, such as sentences or phrases composed of words in any language, such as English or German, are often fed into the model independently of any real-world context they might describe. In the classical language modeling framework, a model learns to predict a word based on a history of words it has seen so far. While these models learn a great deal of linguistic structure from these symbol sequences alone, acquiring the essence of basic syntax, it is highly unlikely that this approach can create models that acquire much in terms of semantics or pragmatics, which are integral to the human experience of language. How might one build neural language models that “understand” the semantic content held within the symbol sequences, of any language, presented to it? In this paper, we take a small step towards a model that understands language as a human does by training a neural model jointly on corresponding linguistic and visual data. From an imagecaptioning dataset, we create a multi-lingual corpus where sentences are mapped to the real-world images they describe. 
We ask how adding such real-world context at training can improve language model performance. We create a unified multi-modal connectionist architecture that incorporates visual context and uses either ∆-RNN (Ororbia II et al., 2017), Long Short Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) or Gated Recurrent Unit (GRU; Cho et al., 2014) units. We find that the models acquire more knowledge of language than if they were trained without corresponding, real-world visual context. 5128 2 Related Work Both behavioral and neuroimaging studies have found considerable evidence for the contribution of perceptual information to linguistic tasks (Barsalou, 2008). It has long been held that language is acquired jointly with perception through interaction with the environment (e.g. Frank et al., 2008). Eye-tracking studies show that visual context influences word recognition and syntactic parsing from even the earliest moments of comprehension (Tanenhaus et al., 1995). Computational cognitive models can account for bootstrapped learning of word meaning and syntax when language is paired with perceptual experience (Abend et al., 2017) and for the ability of children to rapidly acquire new words by inferring the referent from their physical environment (Alishahi et al., 2008). Some distributional semantics models integrate word co-occurrence data with perceptual data, either to achieve a better model of language as it exists in the minds of humans (Baroni, 2016; Johns and Jones, 2012; Kievit-Kylar and Jones, 2011; Lazaridou et al., 2014) or to improve performance on machine learning tasks such as object recognition (Frome et al., 2013; Lazaridou et al., 2015a), image captioning (Kiros et al., 2014; Lazaridou et al., 2015b), or image search (Socher et al., 2014). Integrating language and perception can facilitate language acquisition by allowing models to infer how a new word is used from the perceptual features of its referent (Johns and Jones, 2012) or to allow for fast mapping between a new word and a new object in the environment (Lazaridou et al., 2014). Likewise, this integration allows models to infer the perceptual features of an unobserved referent from how a word is used in language (Johns and Jones, 2012; Lazaridou et al., 2015b). As a result, language data can be used to improve object recognition by providing information about unobserved or infrequently observed objects (Frome et al., 2013) or for differentiating objects that often co-occur in photos (e.g., cats and sofas; Lazaridou et al., 2015a). By representing the referents of concrete nouns as arrangements of elementary visual features (Biederman, 1987), Kievit-Kylar and Jones (2011) found that the visual features of nouns capture semantic typicality effects, and that a combined representation, consisting of both visual features and word co-occurrence data, more strongly correlates with human judgments of semantic similarity than representations extracted from a corpus alone. While modeling similarity judgments is distinct from the problem of predictive language modeling, we take this finding as evidence that visual perception informs semantics, which suggests there are gains to be had integrating perception with predictive language models. 
In contrast to prior work in machine learning, where mappings between vision and language have been examined (Kiros et al., 2014; Vinyals et al., 2015; Xu et al., 2015), our goal in integrating visual and linguistic data is not to accomplish a task such as image search/captioning that inherently requires a mapping between these modalities. Rather, our goal is to show that, since perceptual information is intrinsic to how humans process language, a language model that is trained on both visual and linguistic data will be a better model, consistently across languages, than one trained on linguistic data alone. Due to the ability of language models to constrain predictions on the basis of preceding context, language models play a central role in natural-language and speech processing applications. However, the psycholinguistic questions surrounding how people acquire and use linguistic knowledge are fundamentally different from the aims of machine learning. Using NLP language models to address psycholinguistic questions is a new approach that integrates well with the theory of predictive coding in cognitive psychology (Clark, 2013; Rao and Ballard, 1999). For language processing this means that when reading text or comprehending speech, humans constantly anticipate what will be said next. Predictive coding in humans is a fast, implicit cognitive process similar to the kind of sequence learning that recurrent neural models excel at. We do not propose recurrent neural models as direct accounts of human language processing. Instead, our intent is to use a general purpose machine learning algorithm as a tool to investigate the informational characteristics of the language learning task. More specifically, we use machine learning to explore the question as to whether natural languages are most easily learned when situated in an environmental context and grounded in perception. 5129 3 The Multi-modal Neural Architecture We will evaluate the multi-modal training approach on several well-known complex architectures, including the LSTM, and further examine the effect of using pre-trained BERT embeddings. However, to simply describe the the neural model, we start from the Differential State Framework (DSF; Ororbia II et al., 2017), which unifies gated recurrent architectures under the general view that state memory is a simple parametrized mixture of “fast” and “slow” states. Our aim is to model sequences of symbols, such as the words that compose sentences, where at each time we process xt, or the one-hot encoding of a token1 One of the simplest models that can be derived from the DSF is the ∆-RNN (Ororbia II et al., 2017). A ∆-RNN is a simple gated RNN that captures longer-term dependencies in sequences through the use of a parametrized, flexible state “mixing” function. The model computes a new state at a given time step by comparing a fast state (which is proposed after accounting for the current token) and a slow state (a form of longterm memory). The model is defined by parameters Θ = {W, V, br, β1, β2, α} (input-to-hidden weights W, recurrent weights V , gating-control coefficients β1, β2, α, and the rate-gate bias br). Inference is defined as: drec t = V ht−1, ddat t = Wew,t (1) d1 t = α ⊗drec t ⊗ddat t (2) d2 t = β1 ⊗drec t + β2 ⊗ddat t (3) zt = φhid(d1 t + d2 t ) (4) ht = Φ((1 −r) ⊗zt + r ⊗ht−1) (5) r = 1/(1 + exp(−[ddat t + br])) (6) where ew,t is the 1-of-k encoding of the word w at time t. Note that {α, β1, β2} are learnable bias vectors that modulate internal multiplicative interactions. 
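Eqs. (1)–(6) translate almost line-for-line into code. The sketch below is an illustrative PyTorch rendering (class name, dimensions, and initialization are our assumptions, not the authors' implementation); the outer activation Φ and the visual-context extension of Eqs. (7)–(8) are discussed next.

```python
import torch
import torch.nn as nn

class DeltaRNNCell(nn.Module):
    """Sketch of one Delta-RNN step, following Eqs. (1)-(6)."""
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.W = nn.Linear(emb_dim, hid_dim, bias=False)  # input-to-hidden
        self.V = nn.Linear(hid_dim, hid_dim, bias=False)  # recurrent
        self.b_r = nn.Parameter(torch.zeros(hid_dim))     # rate-gate bias
        self.alpha = nn.Parameter(torch.ones(hid_dim))
        self.beta1 = nn.Parameter(torch.ones(hid_dim))
        self.beta2 = nn.Parameter(torch.ones(hid_dim))

    def forward(self, e_w, h_prev):
        # e_w: the 1-of-k (or embedded) encoding of the current word
        d_rec = self.V(h_prev)                            # Eq. (1)
        d_dat = self.W(e_w)                               # Eq. (1)
        d1 = self.alpha * d_rec * d_dat                   # Eq. (2)
        d2 = self.beta1 * d_rec + self.beta2 * d_dat      # Eq. (3)
        z = torch.tanh(d1 + d2)                           # Eq. (4), inner activation
        r = torch.sigmoid(d_dat + self.b_r)               # Eq. (6), rate gate
        # Eq. (5); Phi is the linear rectifier, as specified in the text below
        return torch.relu((1 - r) * z + r * h_prev)
```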
The rate gate r controls how slow and fast-moving memory states are mixed inside the model. In contrast to the model originally trained in Ororbia II et al. (2017), the outer activation is the linear rectifier, Φ(v) = max(0, v), instead of the identity or hyperbolic tangent, because we found that it worked much better. The inner activation function φhid(v) is tanh(v) = (e(2v)−1) (e(2v)+1). 1One-hot encoding represents tokens as binary-valued vectors with one dimension for each type of token. Only one dimension has a non-zero value, indicating the presence of a token of that type. To integrate visual context information into the ∆-RNN, we fuse the model with a neural vision system, motivated by work done in automated image captioning (Xu et al., 2015). We adopt a transfer learning approach and incorporate a stateof-the-art convolutional neural network into the ∆-RNN model, namely the Inception-v3 network (Szegedy et al., 2016)2, in order to create a multimodal ∆-RNN model (MM-∆-RNN; see Figure 1). Since our focus is on language modeling, the parameters of the vision network are fixed. To obtain a distributed representation of an image from the Inception-v3 network, we extract the vector produced from the final max-pooling layer, c, after running an image through the model (note that this operation occurs right before the final, fully-connected processing layers which are usually task-specific parameters, such as in object classification). The ∆-RNN can make use of the information in this visual context vector if we modify its state computation in one of two ways. The first way would be to modify the inner state to be a linear combination of the data-dependent pre-activation, the filtration, and a learned linear mapping of c as follows: zt = φhid(d1 t + d2 t + Mc + b) (7) where M is a learnable synaptic connections matrix that connects the visual context representation with the inner state. The second way to modify the ∆-RNN would be change its outer mixing function instead: ht = Φ([(1 −r) ⊗zt + r ⊗ht−1] ⊗(Mc)) (8) Here in Equation 8 we see the linearly-mapped visual context embedding interacts with the currently computation state through a multiplicative operation, allowing the visual-context to persist and work in a longer-term capacity. In either situation, using a parameter matrix M frees us from having to set the dimensionality of the hidden state to be the same as the context vector produced by the Inception-v3 network. We do not use regularization techniques with this model. The application of regularization techniques is, in principle, possible (and typically im2In preliminary experiments, we also examined VGGNet and a few others, but found that the Inception worked the best when it came to acquiring more general distributed representations of natural images. 5130 Figure 1: Integration of visual information in an unrolled network (here, the MM-∆-RNN. Grey-dashed: identity connections; black-dash-dotted: next-step predictions; solid-back lines: weight matrices. proves performance of the ∆-RNN), but it is damaging to performance in this particular case, where an already compressed and regularized representation of the images from Inception-v3 serves as input to the multi-modal language modeling network. Let w1, . . . , wN be a variable-length sequence of N words corresponding to an image I. In general, the distribution over the variables follows the graphical model: Pθ(w1, . . . 
, wT |I) = T Y t=1 PΘ(wt|w<t, I) For all model variants the state ht calculated at any time step is fed into a maximum-entropy classifier3 defined as: P(w, ht) = PΘ(w|ht) = exp (wTUht) P w′ exp ((w′)TUht) The model parameters Θ optimized with respect to the sequence negative log likelihood: L = − N X i=1 T X t=1 log PΘ(wt|h) We differentiate with respect to this cost function to calculate gradients. 3Bias term omitted for clarity. 3.1 GRU, LSTM and BERT variants Does visually situated language learning benefit from the specific architecture of the ∆-RNN, or does the proposal work with state-of-the-art language models? We applied the same architecture to Gated Recurrent Units (GRU, Cho et al., 2014), Long Short Term Memory (LSTM, Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2018). We train these models on text alone and compare to the two variations of the multi-modal ∆-RNN, as described in the previous section. The multi-modal GRU, with context information directly integrated, is defined as follows: dc = Mc zt = σ(Wzxt + Vzht−1) rt = σ(Wrxt + Vrht−1) bht = tanh(Wbhxt + Vbh(rt ⊗ht−1)) ht = [zt ⊗ht−1 + (1 −zt) ⊗bht] ⊗dc where we note the parameter matrix M that maps the visual context c into the GRU state effectively gates the outer function.4 The multi-modal variant of the LSTM (with peephole connections) is 4We tried both methods of integration, Equations 7 and 8. The second formulation gave better performance. 5131 defined as follows: dc = Mc ht = [rt ⊗Φ(ct)] ⊗dc, where, rt = σ(Wrxt + Vrht−1 + Urct) ct = ft ⊗ct−1 + it ⊗zt, where, zt = Φ(Wzxt + Vzht−1), it = σ(Wixt + Viht−1 + Uict−1), ft = σ(Wfxt + Vfht−1 + Ufct−1). We furthermore created one more variant of each multi-modal RNN by initializing a portion of their input-to-hidden weights with embeddings extracted from the Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2018). This would correspond to initializing W in the ∆-RNN, Wi in the LSTM, and Wˆh in the GRU. Note that in our results, we only report the best-performing model, which turned out to be the LSTM variant. Since the models in this work are at the word level and BERT operates at the subword level, we create initial word embeddings by first decomposing each word into its appropriate subword components, according to the WordPieces model (Wu et al., 2016), and then extract the relevant BERT representation for each. For each subword token, a representation is created by summing together a specific learned token embedding, a segmentation embedding, and a position embedding. For a target word, we linearly combine subword input representations and initialize the relevant weight with this final embedding. 4 Experiments The experiments in this paper were conducted using the MS-COCO image-captioning dataset.5 Each image in the dataset has five captions provided by human annotators. We use the captions to create five different ground truth splits. We translated each ground truth split into German and Spanish using the Google Translation API, which was chosen as a state-of-the-art, independently evaluated MT tool that produces, according to our inspection of the results, idiomatic, and syntactically and semantically faithful translations. To our knowledge, this represents the first Multi-lingual MSCOCO dataset on situated learning. We tokenize the corpus and obtain a 16.6K vocabulary for English, 33.2K for German and 18.2k for Spanish. 
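As a concrete companion to the multi-modal GRU equations in §3.1 before turning to the data, a hedged PyTorch sketch of one step is given below: a standard GRU update whose output state is gated multiplicatively by the projected image vector dc = Mc. Names and dimensions are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class MultiModalGRUCell(nn.Module):
    """Sketch of the multi-modal GRU step from Section 3.1."""
    def __init__(self, in_dim, hid_dim, ctx_dim):
        super().__init__()
        self.Wz = nn.Linear(in_dim, hid_dim, bias=False)
        self.Vz = nn.Linear(hid_dim, hid_dim, bias=False)
        self.Wr = nn.Linear(in_dim, hid_dim, bias=False)
        self.Vr = nn.Linear(hid_dim, hid_dim, bias=False)
        self.Wh = nn.Linear(in_dim, hid_dim, bias=False)
        self.Vh = nn.Linear(hid_dim, hid_dim, bias=False)
        self.M = nn.Linear(ctx_dim, hid_dim, bias=False)  # visual context map

    def forward(self, x_t, h_prev, c_img):
        d_c = self.M(c_img)                               # dc = Mc
        z = torch.sigmoid(self.Wz(x_t) + self.Vz(h_prev)) # update gate
        r = torch.sigmoid(self.Wr(x_t) + self.Vr(h_prev)) # reset gate
        h_hat = torch.tanh(self.Wh(x_t) + self.Vh(r * h_prev))
        return (z * h_prev + (1 - z) * h_hat) * d_c       # outer visual gating
```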
5https://competitions.codalab.org/competitions/3221 As our primary concern is the next-step prediction of words/tokens, we use negative log likelihood and perplexity to evaluate the models. This is different from the goals of machine translation or image captioning, which, in most cases, is concerned with a ranking of possible captions where one measures how similar the model’s generated sequences are to ground-truth target phrases. Baseline results were obtained with neural language models trained on text alone. For the ∆-RNN, this meant implementing a model using only Equations 1-7. The best results were achieved using the BERT Large model (bidirectional Transformer, 24 layers, 1024dims, 16 attention heads: Devlin et al. 2018). We used the large pretrained model and then trained with visual context. All models were trained to minimize the sequence loss of the sentences in the training split. The weight matrices of all models were initialized from uniform distribution, U(−0.1, 0.1), biases were initialized from zero, and the ∆-RNNspecific biases {α, β1, β2} were all initialized to one. Parameter updates calculated through backpropagation through time required unrolling the model over 49 steps in time (this length was determined based on validation set likelihood). All symbol sequences were zero-padded and appropriately masked to ensure efficient mini-batching. Gradients were hard-clipped at a magnitude bound of l = 2.0. Over mini-batches of 32 samples, model parameters were optimized using simple stochastic gradient descent (learning rate λ = 1.0 which was halved if the perplexity, measured at the end of each epoch, goes up three or more times). To determine if our multi-modal language models capture knowledge that is different from a textonly language model, we evaluate each model twice. First, we compute the model perplexity on the test set using the sentences’ visual context vectors. Next, we compute model perplexity on test sentences by feeding in a null-vector to the multimodal model as the visual context. If the model did truly pick up some semantic knowledge that is not exclusively dependent on the context vector, its perplexity in the second setting, while naturally worse than the first setting, should still outperform text-only baselines. In Table 1, we report each model’s negative log likelihood (NLL) and per-word perplexity (PPL). 5132 20 25 30 35 Epoch Validation Perplexity Δ−RNN Δ−RNN (full) Δ−RNN (blind) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 LV-LV LV-L L-L (a) English ∆-RNNs. 20 25 30 35 40 Epoch Validation Perplexity Δ−RNN Δ−RNN (full) Δ−RNN (blind) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 LV-LV LV-L L-L (b) German ∆-RNNs. 15 20 25 30 35 40 Epoch Validation Perplexity Δ−RNN Δ−RNN (full) Δ−RNN (blind) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 LV-LV LV-L L-L (c) Spanish ∆-RNNs. Figure 2: Training ∆-RNNs in each language (English, German, Spanish). Baseline model is trained and evaluated on language (L-L), the full model uses the multi-modal signal (LV-LV), and the target model is trained on LV, but evaluated on L only (LV-L). PPL is calculated as: PPL = exp  −(1/N) N X i=1 T X t=1 log PΘ(wt|h)  We observe that in all cases the multi-modal models outperform their respective text-only baselines. More importantly, the multi-modal models, when evaluated without the Inception-v3 representations on holdout samples, still perform better than the text-only baselines. 
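A minimal sketch of the two-pass evaluation just described (full visual context versus a null context vector), with per-word perplexity computed from the summed negative log likelihood; the model interface and batch format are assumptions for illustration.

```python
import math
import torch

@torch.no_grad()
def evaluate(model, batches, use_visual=True):
    """Returns average per-word NLL and perplexity. Setting use_visual=False
    reproduces the 'blind' condition by feeding a null visual context."""
    total_nll, n_tokens = 0.0, 0
    for words, targets, ctx in batches:        # ctx: Inception-v3 vectors
        if not use_visual:
            ctx = torch.zeros_like(ctx)        # null visual context
        log_probs = model(words, ctx)          # [T, vocab] log P(w_t | w_<t, c)
        total_nll -= log_probs.gather(-1, targets.unsqueeze(-1)).sum().item()
        n_tokens += targets.numel()
    avg_nll = total_nll / n_tokens
    return avg_nll, math.exp(avg_nll)          # NLL and PPL as reported in Table 1
```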
The improvement in language generalization can be attributed to the visual context information provided during training, enriching its representations over word sequences with knowledge of actual objects and actions. Figure 2 shows the validation perplexity of the ∆-RNN on each language as a function of the first 15 epochs of learning. We observe that throughout learning, the improvement in generalization afforded by the visual context c is persistent. Validation performance was also tracked for the various GRU and LSTM models, where the same trend was also observed (see supplementary material). 4.1 Model Analysis We analyze the decoders of text-only and multimodal models. We examine the parameter matrix U, which is directly involved in calculating the predictions of the underlying generative model. U can be thought of as “transposed embeddings”, an idea that has also been exploited to introduce further regularization into the neural language model learning process (Press and Wolf, 2016; Inan et al., 2016). If we treat each row of this matrix as the learned embedding for a particular word (we assume column-major orientation in implementation), we can calculate its proximity to other embeddings using cosine similarity. Table 3 shows the top ten words for several randomly selected query terms using the decoder parameter matrix. By observing the different sets of nearest-neighbors produced by the ∆-RNN and the multi-modal ∆-RNN (MM-∆-RNN), we can see that the MM-∆-RNN appears to have learned to combine the information from the visual context with the token sequence in its representations. For example, for the query “ocean”, we see that while the ∆-RNN does associate some relevant terms, such as “surfing” and “beach”, it also associates terms with marginal relevance to “ocean” such as “market” and “plays”. Conversely, nearly all of the terms the MM-∆-RNN associates with “ocean” are relevant to the query. The same is true for “kite” and “subway”. For “racket”, while the text-only baseline mostly associates the query with sports terms, especially sports equipment like “bat”, the MM-∆-RNN is able to relate the query to the correct sport, “tennis”. 4.2 Conditional Sampling To see how visual context influences the language model, we sample the conditional generative model. Beam search (size 13) allows us to generate full sentences (Table 2). Words were 5133 English German MT Spanish MT Model (Type) Test-NLL Test-PPL Test-NLL Test-PPL Test-NLL Test-PPL ∆-RNN (L-L) 2.714 15.086 2.836 17.052 2.546 12.755 MM-∆-RNN (LV-LV) 2.645 14.086 2.777 16.082 2.405 11.082 MM-∆-RNN (LV-L) 2.694 14.786 2.808 16.582 2.458 11.682 GRU (L-L) 2.764 15.871 2.854 17.369 2.554 12.866 MM-GRU (LV-LV) 2.654 14.189 2.790 16.285 2.426 11.3089 MM-GRU (LV-L) 2.687 14.689 2.815 16.701 2.466 11.781 LSTM (L-L) 2.722 15.217 2.814 17.070 2.494 12.114 MM-LSTM (LV-LV) 2.645 14.089 2.773 16.001 2.405 11.081 MM-LSTM (LV-L) 2.708 15.002 2.822 16.806 2.487 12.028 BERT+LSTM (L-L) 2.534 12.6011 2.702 14.9127 2.303 10.0011 BERT+MM-LSTM (LV-LV) 2.475 11.8776 2.661 14.3124 2.223 9.2319 BERT+MM-LSTM (LV-L) 2.503 12.2196 2.700 14.8102 2.283 9.8102 Table 1: Generalization performance as measured by negative log likelihood (NLL) and perplexity (PPL). Lower values indicate better performance. Baseline model (L-L) trained and evaluated on linguistic data only. Full model (LV-LV) trained and evaluated on both linguistic and visual data. Blind model (LV-L) trained on both but evaluated on language only. 
The difference between L-L and LV-L illustrates the performance improvement. German and Spanish data are machine-translated (MT) and provide additional, but correlated, evidence. For comparison, Devlin et al. (2018) report a perplexity of 3.23 for their (broad) English test data, using the same base model we use here to define input representations. a skateboarder and person in front of skyscrapers. a person with skateboarder on air. a person doing a trick with skateboarder. a person with camera with blue background. a food bowl on the table a bowl full of food on the table a green and red bowl on the table a salad bowl with chicken a dog on blue bed with blanket. a dog sleeps near wooden table. a dog sleeps on a bed. a dog on some blue blankets. Table 2: Some captions generated by the multi-modal ∆-RNN in English. ranked based on model probabilities. 5 Discussion and Conclusions Training with perceptual context improves multimodal neural models compared to training on language alone. Specifically, augmenting a predictive language model with images that illustrate the sentences being learned enhances its next-word or masked-word prediction ability. The performance improvement persists even in situations devoid of visual input, when the model is used as a pure language model. The near state-of-the-art language model, using BERT, reflects the case of human language acquisition less than do the other models, which were trained “ab initio” in a situated context. BERT is pre-trained on a very large corpus, but it still picked up a performance improvement when finetuned on the visual context and language, as compared to the corpus language signal alone. We do not expect this to be a ceiling for visual augmentation: in the world of training LMs, the MS COCO corpus is, of course, a small dataset. Neural language models, as used here, are contenders as cognitive and psycholinguistic models of the non-symbolic, implicit aspects of language representation. There is a great deal of evidence that something like a predictive language model exists in the human mind. The surprisal of a word or phrase refers to the degree of mismatch between what a human listener expected to be said next and what is actually said, for example, when a garden path sentence forces the listener to abandon a partial, incremental parse (Ferreira and Henderson, 1991; Hale, 2001). In the garden path sen5134 Ocean Kite Subway Racket ∆-RNN +MM ∆-RNN +MM ∆-RNN +MM ∆-RNN +MM surfing boats plane kites train railroad bat bat sandy beach kites airplane passenger train batter players filled pier airplane plane railroad locomotive catcher batter beach wetsuit surfboard airplanes trains trains skateboard swing market cloth planes planes gas steam umpire catcher crowded surfing airplanes airliner commuter gas soccer hitter topped windsurfing boats helicopter trolley commuter women ball plays boardwalk jet jets locomotive passenger pedestrians umpire cross flying aircraft biplane steam crowded players tennis snowy biplane jets jet it’s trolley uniform tatoos Table 3: The ten words most closely related to the bolded query word, rank ordered, trained without (∆-RNN) and with (+MM) visual input. tence “The horse raced past the barn fell”, the final word “fell” forces the reader to revise their initial interpretation of “raced” as the active verb (Bever, 1970). More generally, the idea of predictive coding holds that the mind forms expectations before perception occurs (see Clark, 2013, for a review). How these predictions are formed is unclear. 
Predictive language models trained with a generic neural architecture, without specific linguistic universals, are a reasonable candidate for a model of predictive coding in language. This does not imply neuropsychological realism of the low-level representations or learning algorithms, and we cannot advocate for a specific neural architecture as being most plausible. However, we can show that an architecture that predicts linguistic input well learns better when its input mimics that of a human language learner. A theory of human language processing might distinguish between symbolic language knowledge and processes that implement compositionality to produce semantics on the one hand, and implicit processes that leverage sequences and associations to produce expectations. With respect to acquiring the latter, implicit and predictive model, we note that children are exposed to a rich sensory environment, one more detailed than what is provided to our model here. If even static visual input alone improves language acquisition, then what could a sensorily rich environment achieve? When a multi-modal learner is considered, then, perhaps, the language acquisition stimulus that has been famously labeled to be rather poor (Chomsky, 1959; Berwick et al., 2013), is quite rich after all. Acknowledgments We would like to thank Tomas Mikolov, Emily Pitler, Zixin Tang, and Saranya Venkatraman for comments. Part of this work was funded by the National Science Foundation (BCS-1734304 to D. Reitter). References Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition, 164:116 – 143. Afra Alishahi, Afsaneh Fazly, and Suzanne Stevenson. 2008. Fast mapping in word learning: What probabilities tell us. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 57–64. Association for Computational Linguistics. Marco Baroni. 2016. Grounding distributional semantics in the visual world. Language and Linguistics Compass, 10(1):3–13. Lawrence W Barsalou. 1999. Perceptions of perceptual symbols. Behavioral and Brain Sciences, 22(4):637–660. Lawrence W Barsalou. 2008. Grounded cognition. Annual Review of Psychology, 59:617–645. Robert C Berwick, Noam Chomsky, and Massimo Piattelli-Palmarini. 2013. Poverty of the stimulus stands: Why recent challenges fail. In Rich Languages From Poor Inputs, chapter 1, pages 19–42. Oxford University Press. Thomas G Bever. 1970. The cognitive basis for linguistic structures. In Cognition and the development of language, pages 279–362. 5135 Irving Biederman. 1987. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Noam Chomsky. 1959. A review of BF Skinner’s verbal behavior. Language, 35(1):26–58. Andy Clark. 2013. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and brain sciences, 36(3):181–204. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Fernanda Ferreira and John M Henderson. 1991. Recovery from misanalyses of garden-path sentences. Journal of Memory and Language, 30(6):725–745. 
Michael C Frank, Noah D Goodman, and Joshua B Tenenbaum. 2008. A Bayesian framework for crosssituational word-learning. In Advances in neural information processing systems, pages 457–464. Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. Devise: A deep visual-semantic embedding model. In Advances in neural information processing systems, pages 2121–2129. James G Greeno and Joyce L Moore. 1993. Situativity and symbols: Response to Vera and Simon. Cognitive Science, 17(1):49–59. John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–8, Pittsburgh, PA. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462. Brendan T Johns and Michael N Jones. 2012. Perceptual inference through global lexical similarity. Topics in Cognitive Science, 4(1):103–120. Brent Kievit-Kylar and Michael Jones. 2011. The semantic pictionary project. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, pages 2229–2234, Austin, TX. Cognitive Science Society. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539. Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? cross-modal mapping between distributional semantics and the visual world. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1403– 1414. Angeliki Lazaridou, Georgiana Dinu, Adam Liska, and Marco Baroni. 2015a. From visual attributes to adjectives through decompositional distributional semantics. Transactions of the Association for Computational Linguistics, 3:183–196. Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015b. Combining language and vision with a multimodal skip-gram model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 153–163, Denver, Colorado. Association for Computational Linguistics. Alexander G. Ororbia II, Tomas Mikolov, and David Reitter. 2017. Learning simpler language models with the differential state framework. Neural Computation, 29(12):3327–3352. Ofir Press and Lior Wolf. 2016. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859. Rajesh PN Rao and Dana H Ballard. 1999. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1):79. Deb Roy and Ehud Reiter. 2005. Connecting language to the world. Artificial Intelligence, 167(1-2):1–12. Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association of Computational Linguistics, 2(1):207–218. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826. 
MK Tanenhaus, MJ Spivey-Knowlton, KM Eberhard, and JC Sedivy. 1995. Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217):1632–1634. 5136 Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156–3164. IEEE. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5137–5154 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5137 Relating Simple Sentence Representations in Deep Neural Networks and the Brain Sharmistha Jat1∗ Hao Tang2 Partha Talukdar1 Tom Mitchell2 1Indian Institute of Science, Bangalore 2School of Computer Science, Carnegie Mellon University {sharmisthaj,ppt}@iisc.ac.in [email protected], [email protected] Abstract What is the relationship between sentence representations learned by deep recurrent models against those encoded by the brain? Is there any correspondence between hidden layers of these recurrent models and brain regions when processing sentences? Can these deep models be used to synthesize brain data which can then be utilized in other extrinsic tasks? We investigate these questions using sentences with simple syntax and semantics (e.g., The bone was eaten by the dog.). We consider multiple neural network architectures, including recently proposed ELMo and BERT. We use magnetoencephalography (MEG) brain recording data collected from human subjects when they were reading these simple sentences. Overall, we find that BERT’s activations correlate the best with MEG brain data. We also find that the deep network representation can be used to generate brain data from new sentences to augment existing brain data. To the best of our knowledge, this is the first work showing that the MEG brain recording when reading a word in a sentence can be used to distinguish earlier words in the sentence. Our exploration is also the first to use deep neural network representations to generate synthetic brain data and to show that it helps in improving subsequent stimuli decoding task accuracy. 1 Introduction Deep learning methods for natural language processing have been very successful in a variety of Natural Language Processing (NLP) tasks. However, the representation of language learned by such methods is still opaque. The human brain is an excellent language processing engine, and the brain representation of language is of course very effective. Even though both brain and deep ∗This research was carried out during a research internship at the Carnegie Mellon University. learning methods are representing language, the relationships among these representations are not thoroughly studied. Wehbe et al. (2014b) and Hale et al. (2018) studied this question in some limited capacity. Wehbe et al. (2014b) studied the processing of a story context at a word level during language model computation. Hale et al. (2018) studied the syntactic composition in RNNG model (Dyer et al., 2016) with human encephalography (EEG) data. We extend this line of research by investigating the following three questions: (1) what is the relationship between sentence representations learned by deep learning networks and those encoded by the brain; (2) is there any correspondence between hidden layer activations in these deep models and brain regions; and (3) is it possible for deep recurrent models to synthesize brain data so that they can effectively be used for brain data augmentation. In order to evaluate these questions, we focus on representations of simple sentences. We employ various deep network architectures, including recently proposed ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) networks. We use MagnetoEncephaloGraphy (MEG) brain recording data of simple sentences as the target reference. 
We then correlate the representations learned by these various networks with the MEG recordings. Overall, we observe that BERT representations are the most predictive of MEG data. We also observe that the deep network models are effective at synthesizing brain data which are useful in overcoming data sparsity in stimuli decoding tasks involving brain data. In summary, in this paper we make the following contributions. • We initiate a study to relate representations of simple sentences learned by various deep networks with those encoded in the brain. We establish correspondences between activations in deep network layers with brain ar5138 eas. • We demonstrate that deep networks are capable of predicting change in brain activity due to differences in previously processed words in the sentence. • We demonstrate effectiveness of using deep networks to synthesize brain data for downstream data augmentation. We have made our code and data1 publicly available to support further research in this area. 2 Datasets In this section, we describe the MEG dataset and Simple Sentence Corpus used in the paper. 2.1 MEG Dataset Magnetoencephalography (MEG) is a noninvasive functional brain imaging technique which records magnetic fields produced by electrical currents in the brain. Sensors in the MEG helmet allow for recording of magnetic fluctuations caused by changes in neural activity of the brain. For the experiments in this paper, we used three different MEG datasets collected when subjects were shown simple sentences as stimulus. These datasets are summarized in Table 1, please see (Rafidi, 2014) for more details. Additional dataset details are mentioned in appendix section A.1. In the MEG helmet, 306 sensors were distributed over 102 locations and sampled at 1kHz. Native English speaking subjects were asked to read simple sentences. Each word within a sentence was presented for 300ms with 200ms subsequent rest. To reduce noise in the brain recordings, we represent a word’s brain activity by averaging 10 sentence repetitions (Sudre et al., 2012). Comprehension questions followed 10% of sentences, to ensure semantic engagement. MEG data was acquired using a 306 channel Elekta Neuromag device. Preprocessing included spatial filtering using temporal signal space separation (tSSS), low-pass filtering 150Hz with notch filters at 60 and 120Hz, and downsampling to 500Hz (Wehbe et al., 2014b). Artifacts from tSSS-filtered same-day empty room measurements, ocular and cardiac artifacts were removed via Signal Space Projection (SSP). 1https://github.com/SharmisthaJat/ ACL2019-SimpleSentenceRepr-DNN-Brain Dataset #Sentences Voice Repetition PassAct1 32 P+A 10 PassAct2 32 P+A 10 Act3 120 A 10 Table 1: MEG datasets used in this paper. Column ‘Voice’ refers to the sentence voice, ‘P’ is for passive sentences and ‘A’ is for active. Repetition is the number of times the human subject saw a sentence. For our experiments, we average MEG data corresponding to multiple repetitions of a single sentence. 2.2 Simple Sentence Corpus In this paper, we aim to understand simple sentence processing in deep neural networks (DNN) and the brain. In order to train DNNs to represent simple sentences, we need a sizeable corpus of simple sentences. While the MEG datasets described in Section 2.1 contain a few simple sentences, that set is too small to train DNNs effectively. 
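Before the corpus description continues, a short aside on the per-word MEG representation from §2.1: the recordings for a sentence's 10 repetitions are averaged, and each word's 306-sensor signal can further be reduced to a few time-window means (the 306 × 5 representation used in the encoding model later). The array shapes and helper below are illustrative assumptions.

```python
import numpy as np

def word_meg_features(repetitions, n_windows=5):
    """Average a word's MEG response over its sentence repetitions
    (Section 2.1) and then over fixed-width time windows.
    `repetitions` is assumed to have shape [n_reps, sensors, time]."""
    avg = repetitions.mean(axis=0)              # average the 10 repetitions
    sensors, time = avg.shape
    width = time // n_windows                   # e.g. 100 ms windows
    cols = [avg[:, i * width:(i + 1) * width].mean(axis=1)
            for i in range(n_windows)]
    return np.stack(cols, axis=1)               # [sensors, n_windows]
```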
In order to address this, we created a new Simple Sentence Corpus (SSC), consisting of a mix of simple active and passive sentences of the form “the woman encouraged the girl” and “the woman was encouraged by the boy”, respectively. The SSC dataset consists of 256,145 sentences constructed using the following two sets. • Wikipedia: We processed the 2009 Wikipedia dataset to get sentences matching the following patterns. “the [noun+] was [verb+] by the [noun+]” “the [noun+] [verb+] the [noun+]” If the last word in the pattern matched is not noun, then we retain the additional dependent clause in the sentence. We were able to extract 117,690 active, and 8210 passive sentences from wikipedia. • NELL triples: In order to ensure broader coverage of Subject-Verb-Object (SVO) triples in our sentence corpus, we used the NELL SVO triples2 (Talukdar et al., 2012). We subsample SVO triples based on their frequency (threshold = 6), a frequent verb list, and Freebase to get meaningful sentences. Any triple with subject or object or verb not in Freebase is discarded from the triple set. – Active sentence: Convert the verb to its past tense and concatenate the triple us2NELL SVO triples: http://rtw.ml.cmu.edu/ resources/svo/ 5139 Figure 1: Encoding model for MEG data. 306 channel 500ms MEG signal for a single word was compressed to 306 × 5 by averaging 100ms data into a single column. This MEG brain recording data is then encoded from text representation vector to brain activity using ridge regression. The evaluation is done using 5 fold cross-validation. Please see Section 4 for more details. ing the following pattern: “the [subject] [verb-past-tense] the [object]”. – Passive sentence: Concatenate the triple using pattern: “the [object] was [verbpast-tense] by the [subject]” We generate 86,452 active and 43,793 passive sentences in total from the NELL triples. We train our deep neural network models with 90% of sentences in this dataset and test on the remaining 10%. We used the spaCy (Honnibal and Montani, 2017) library to predict POS tags for words in this dataset. 3 Methods We test correlations between brain activity and deep learning model activations (LeCun et al., 2015) for a given sentence using a classification task, similar to previous works (Mitchell et al., 2008; Wehbe et al., 2014a,b). If we are able to predict brain activity from the neural network activation, then we hypothesize that there exists a relationship between the process captured by the neural network layer and the brain. The schematic of our encoding approach is shown in Figure 1. We investigate various deep neural network models using context sensitivity tests to evaluate their performance in predicting brain activity. Working with these models and their respective training assumptions help us in understanding which assumption contributes to the correlations with the brain activity data. We process the sentences incrementally for each model to prevent information from future words from affecting the current representation, in line with how information is processed by the brain. For example, in the sentence “the dog ate the biscuit”, the representation of the word “ate” is calculated by processing sentence segment “the dog ate” and taking the last representation in each layer as the context for the word “ate”. The following embedding models are used to represent sentences. • Random Embedding Model: In this model, we represent each word in a context by a randomly generated 300-dimensional vector. 
Each dimension is uniformly sampled between [0,1]. The results from this model help us establish the random baseline. • GloVe Additive Embedding Model: This model represents a word context as the average of the current word’s GloVe embedding (Pennington et al., 2014) and the previous word context. The first word in a sentence is initialized with its GloVe embedding as context. • Simple Bi-directional LSTM Language Model: We build a language model following (Inan et al., 2016). Given a sequence of words w1 . . . wt, we predict the next word wt+1 using a two layer bidirectional-LSTM model (Hochreiter and Schmidhuber, 1997). The model is trained on the simple language corpus data as described in Section 2.1 with a cross-entropy loss. We evaluate our model on 10% held out text data. The perplexity for the Bi-directional Language model is 9.97 on test data (the low perplexity value is due to the simple train and test dataset). • Multi-task Model: Motivated by the brain’s multitask capability, we build a model to predict next word and POS tag information. The 5140 multitask model is a simple two layer bidirectional LSTM model with separate linear layers predicting each of the tasks given the output of the last LSTM layer (Figure 2). The model is trained on the simple sentence corpus data as described in Section 2.1 with a cross-entropy loss. The model’s accuracy is 96.9% on the POS-tag prediction task and has perplexity of 9.09 on the 10% test data. The high accuracy and low perplexity are due to the simple nature of our language dataset. • ELMO (Peters et al., 2018): ELMo is a recent state-of-the-art deep contextualized word representation method which models a word’s features as internal states of a deep bidirectional language model (biLM) pretrained on a large text corpus. The contextualized word vectors are able to capture interesting word characteristics like polysemy. ELMO has been shown to improve performance across multiple tasks, such sentiment analysis and question answering. • BERT (Devlin et al., 2019): BERT uses a novel technique called Masked Language Model (MLM). MLM randomly masks some tokens inputs and then predicts them. Unlike previous models, this technique can use both left and right context to predict the masked token. The training also predicts the next sentence. The embedding in this model consists of 3 components: token embedding, sentence embedding and transformer positional embedding. Due to the presence of sentence embeddings, we observe an interesting performance of the embedding layer in our experiments. 4 Experiments and Results With human brain as the reference language processing engine, we investigate the relationship between deep neural network representation and brain activity recorded while processing the same sentence. For this task, we perform experiments at both the macro and micro sentence context level. The macro-context experiments evaluate the overall performance of deep neural networks in predicting brain data for input words (all words, nouns, verbs etc.). The micro-context experiments, by contrast, focus on evaluating the performance of deep neural network representations in Figure 2: Architecture diagram for the simple multitask model. The second LSTM layer’s output is processed by 2 linear layers each producing the next-word and the POS-tag prediction. 
We process each sentence incrementally to get the prediction for word at the nth position, this helps in removing forward bias from future words and therefore is consistent with the information our brain receives when processing the same sentence. Our Simple Bi-directional LSTM language model also has a similar architecture with just one output linear layer for next word prediction. detecting minor changes in sentence context prior to the token being processed. Regression task: Similar to previous research (Mitchell et al., 2008; Wehbe et al., 2014b), we use a classification task to align model representations with brain data. MEG data (Section 2.1) is used for these experiments. The task classifies between a candidate word and the true word a subject is reading at the time of brain activity recording. The classifier uses an intermediate regression step to predict the MEG activity from deep neural network representation for the true and the candidate word. The classifier then chooses the word with least Euclidean distance between the predicted and the true brain activity. A correct classification suggests that the deep neural network representation captures important information to differentiate between brain activity at words in different contexts. Detailed steps of this process are described as follows. Regression training: We perform regression from the neural-network representation (for each layer) to the brain activity for the same input words in context. We normalized, preprocessed 5141 and trained on the MEG data as described by (Wehbe et al., 2014b) (Section 2.3.2). We average the signal from every sensor (total 306) over 100ms non-overlapping windows, yielding a 306×5 sized MEG data for each word. To train the regression model, we take the training portion of the data in each fold, (X, Y ), in the tuple (xi, yi), xi is the layer representation for an input word i in a neural network model, and yi is the corresponding MEG recording of size 1530 (flattened 306*5). The Ridge regression model (f) (Pedregosa et al., 2011) is learned with generalized cross-validation to select λ parameter (Golub et al., 1979). Ridge regression model’s α parameter is selected from range [0.1, . . . , 100, 1000]. The trained regression model is used to estimate MEG activity from the stimulus features, i.e., ˆyi = f(xi). Regression testing: The trained regression model is used to predict ˆyi for each word stimulus (xi) in the test fold during cross-validation. We perform a pair-wise test for the classification accuracy (Acc) (Mitchell et al., 2008). The chance accuracy of this measure is 0.5. We use Euclidean distance (Edist) as given in (1) for the measure. Acc =      1, if Edist(f(xi), yi) + Edist(f(xj), yj) ≤Edist(f(xi), yj) + Edist(f(xj), yi) 0, otherwise (1) 4.1 Macro-context Experiments The macro-context experiments aggregate classification performance of each model’s layer on the entire stimuli set. We also evaluate on smaller sets such as only the nouns, verbs, passive sentence words, active sentence words, etc. The macro experiments help us to compare all the models on a large stimuli set. 
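A minimal sketch of this encoding-and-evaluation pipeline is given below, assuming the per-word network features X (one row per stimulus word, taken from a single layer) and the flattened 306 x 5 MEG responses Y have already been assembled as NumPy arrays; the function names and the use of scikit-learn's RidgeCV are illustrative stand-ins for the authors' implementation, not their released code.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def pairwise_accuracy(y_pred, y_true):
    """2-vs-2 test of Eq. (1): a pair (i, j) counts as correct when the
    matched assignment of predicted to true MEG patterns has a smaller
    total Euclidean distance than the mismatched assignment (chance = 0.5)."""
    correct, total = 0, 0
    for i, j in combinations(range(len(y_true)), 2):
        matched = (np.linalg.norm(y_pred[i] - y_true[i])
                   + np.linalg.norm(y_pred[j] - y_true[j]))
        mismatched = (np.linalg.norm(y_pred[i] - y_true[j])
                      + np.linalg.norm(y_pred[j] - y_true[i]))
        correct += int(matched <= mismatched)
        total += 1
    return correct / total

def evaluate_layer(X, Y, n_folds=5, alphas=(0.1, 1.0, 10.0, 100.0, 1000.0)):
    """X: (n_words, layer_dim) layer representations for the stimuli.
    Y: (n_words, 1530) flattened 306 x 5 MEG responses.
    Fits a ridge regression from features to MEG activity in each fold and
    returns the mean cross-validated 2-vs-2 accuracy."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_folds, shuffle=True,
                                     random_state=0).split(X):
        model = RidgeCV(alphas=alphas)          # ridge with built-in CV over alpha
        model.fit(X[train_idx], Y[train_idx])   # regress layer features onto MEG data
        scores.append(pairwise_accuracy(model.predict(X[test_idx]), Y[test_idx]))
    return float(np.mean(scores))
```

In the experiments this procedure is repeated for every subject and every network layer, so in practice a call like evaluate_layer(X, Y) would sit inside a loop over layers and subjects.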
In summary, we observe the following: (1) the intermediate layers of state-ofthe-art deep neural network models are most predictive of brain activity (Jain and Huth (2018) also observe this on a 3 layer LSTM language model), (2) in-context representations are better at predicting brain activity than out-of-context representations (embeddings), and (3) Temporal lobe is predicted with highest accuracy from deep neural network representations. Detailed Observations: The results of pairwise classification tests for various models are presented in Figure 3. All the results reported in this section are for PassAct1 dataset. From the figure, we observe that BERT and ELMo outperform the simple models in predicting brain activity data. In the neural network language models, the middle layers perform better at predicting brain activity than the shallower or deeper layers. This could be due to the fact that the shallower layers represent low-level features and the deeper layers represent more task-oriented features. We tested this hypothesis by examining the performance scores at each lobe of the brain. For each area, we tested the left and right hemispheres independently and compared these performances with the bilateral frontal lobe as well as the activity across all regions. In particular, we examined the primary visual areas (left and right occipital lobe), speech and language processing areas (left temporal) and verbal memory (right temporal), sensory perception (left parietal) and integration (right parietal), language related movements (left frontal) and nonverbal functioning (right frontal). The frontal lobe was tested bilaterally as it is associated with higher level processing such as problem solving, language processing, memory, judgement, and social behavior. From our results, we observe that lower layers such as BERT layer 5 have very high accuracy for right occipital and left occipital lobe associated with low-level visual processing task. In contrast, higher layers such as linear layers in the Multitask Model and in Language Model have the highest accuracy in the left temporal region of the brain. Figure 4 shows the pairwise classification accuracy for a given brain region for best layers from each model. The accuracy is highest in left temporal region, responsible for syntactic and semantic processing of language. These results establish correspondences between representations learned by deep neural methods and those in the brain. Further experiments are needed to improve our understanding of this relationship. We performed additional experiments to predict on a restricted stimuli set. In each of these experiments, a subset of stimuli, for example active sentences, passive sentences, noun, and verb stimuli were used in classification training and testing. Detailed results for this experiment are documented in the appendix section (Figure 9). From the results, we observe that active sentences are predicted better (best accuracy = 0.93) than passive sentences (best accuracy = 0.87). This might be attributed to the nature of training datasets for 5142 Figure 3: Pairwise classification accuracy of brain activity data predicted from various model layer representations. We average 4 consecutive layers of BERT into one value. We find that BERT and ELMO model layers perform the best. The middle layers of most models and BERT, in particular, are good at predicting brain activity. Read ‘ f’ as forward layer and ‘Emb’ as the embedding layer. 
Figure 4: Pairwise accuracy of various brain regions from some selected deep neural network model layers. The left part of the brain which is considered central to language understanding is predicted with higher accuracy, especially left temporal region (L = left, R = right). deep neural networks, as active sentences are dominant in the training data of most of the pre-trained models. We also observe that for passive sentences, our simple multitask model (trained using about 250K active and passive sentences) has a lower performance gap between active and passive sentence as compared to ELMO and BERT models. This may be due to a more balanced active and passive sentence used to train the multitask model. Noun stimuli are predicted with the highest accuracy of 0.81, while the accuracy for verbs is 0.65. Both Multitask and ELMo models dominate verb prediction results, while BERT lags in this category. Further experiments should be done to compare the ability of Transformer (Vaswani et al., 2017) versus Recurrent Neural Network based models to represent verbs. 4.2 Micro-context Experiments In these micro-context experiments, we evaluate if our models are able to retain information from words in the sentence prior to the word being processed. For such context sensitivity tests, we only use the first repetition of the sentence shown to human subjects. This helps to ensure that the sentence has not been memorized by the subjects, which might affect the context sensitivity tests. Training: The micro-context experiment setup is illustrated in Figure 5. To train the regression model, each training instance corresponding to a word has the form (xi, yi), where xi is the layer representation for an input word i in a neural network model, and yi is the corresponding MEG brain recording data of size 1530 (flattened 306 × 5). During testing, we restrict the pairwise tests to word pairs (xi, xj) which satisfy some conditions. For example in noun context sensitivity test, the pair of words should be such that, they appear in a sentence with the same words except the noun. We describe these candidate word test pairs, in detail, in the following sections. In each of the following sensitivity tests, we perform a pair-wise accuracy test among the same candidate word (bold items) from sentences which 5143 Figure 5: Experimental setup for micro-context tests. Given two sentences with similar words except one in the past (underlined), the test evaluates if the deep neural network model representation contains sufficient information to tell the two words apart. Please see Section 4.2 for more details. are identical except for one word (underlined items). We vary the non-identical word type (noun, verb, adjective, determiner) among the two sentences to test the contribution of each of these word types to the context representation further in sentence. This test helps us understand what parts of the context are retained or forgotten by the neural network model representation. Detailed results of each test are included in the appendix section (Figure 10). Please note that the part of BERT word embedding is the sentence embedding, therefore the BERT embedding performs better than 0.5, unlike other embeddings. 4.2.1 Noun sensitivity “The dog ate the” vs. “The girl ate the” For the PassAct1 dataset, we observe that simple GloVe additive model (classification accuracy = 0.52) loses information about the noun while it is retained by most layers of other models like BERT (accuracy = 0.92), ELMo (accuracy = 0.91). 
Higher level layers, such as linear layer for POStag prediction (accuracy = 0.65), also perform poorly. This seems obvious due to the task it solves which focuses on POS-tag property at the word ‘the’ rather than the previous context. In summary, we observe that the language model context preserves noun information well. 4.2.2 Verb sensitivity “The dog saw the” vs. “The dog ate the” For the PassAct1 dataset, we observe that similar to noun sensitivity, most language model layers (accuracy = 0.92), except for simple GloVe Additive model, preserve the verb memory. By design, the GloVe Additive model retains little context from the past words, and therefore the result verifies the experiment setup. 4.2.3 First determiner sensitivity “A dog” vs. “The dog” For the PassAct2 dataset, we observe that determiner information is retained well by most layers. However, the shallow layers retain information better than the deeper layers. For example, BERT layer 3 (accuracy = 0.82), Multitask lstm 0 backward (accuracy = 0.82), BERT Layer 18/19 (accuracy 0.78). Since the earlier layers have a higher correlation with shallow feature processing, the determiner information may be useful for the early features in neural network representation. 4.2.4 Adjective sensitivity “The happy child” vs. “The child” For the Act3 dataset, we observe that middle layers of most models (BERT, Multitask) retain the adjective information well. However, surprisingly simple multitask model (lstm 1 forward layer accuracy = 0.89) retains adjective information better than BERT model (layer 7 accuracy = 0.84). This could be due to the importance of adjective in context for POS tag prediction. This result encourages the design of language models with diverse cost functions based on the kind of sentence context information that needs to be preserved in the final task. 4.2.5 Visualisation We visualise the average agreement of model predicted brain activity (from BERT layer 18) and true brain activity for candidate stimuli in microsensitivity tests. Please note that the microsensitivity tests predict brain activity for stimuli with almost similar past context except one word, this makes the task harder. We preprocess the brain activity values to be +1 for all positive values and -1 for all negative values. The predicted brain 5144 activity (y ′) and the true brain activity (y) are then compared to form an agreement activity (y ′′), resulting in a zero value for all locations where the sign predicted was incorrect. We average these agreement activities (y ′′) for all test examples in a cross-validation fold to form a single activity image (Y ′′). Figure 8 shows Y ′′ for the word ‘the’ in noun-sensitivity tests Section 4.2.1 (additional results are in the appendix section). We observe that our model prediction direction agrees with brain prediction direction in most of the brain regions. This shows that our neural network layer representation can preserve information from earlier words in the sentence. 4.3 Semi-supervised training using synthesized brain activity In this section, we consider the question of whether previously trained linear regression model (X1), which predicts brain activity for a given sentence, can be used to produce useful synthetic brain data (i.e., sentence-brain activity pairs). Constraints like high cost of MEG recording and physical limits on an individual subject during data collection, favor such synthetic data generation. 
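At its core, the synthesis step amounts to applying the already-fitted encoder to network features of newly generated sentences and stacking the result onto the real recordings before training the downstream decoder. The sketch below is a hedged illustration of that idea: encoder_x1 stands for a fitted scikit-learn-style regressor, the feature matrices are placeholders, and the helper names are hypothetical rather than taken from the authors' code.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def synthesize_meg(encoder_x1, new_sentence_features):
    """Apply a previously fitted encoder (network features -> MEG activity)
    to features of newly generated sentences, producing synthetic
    sentence-brain pairs with the same 1530-dimensional layout."""
    return encoder_x1.predict(new_sentence_features)

def train_augmented_decoder(features_real, meg_real, features_new, meg_synthetic):
    """Train the downstream model on real MEG recordings augmented with the
    synthetic ones (here a simple ridge regression stands in for it)."""
    X = np.vstack([features_real, features_new])   # stimulus features (e.g. GloVe vectors)
    Y = np.vstack([meg_real, meg_synthetic])       # real + synthetic MEG patterns
    decoder = RidgeCV(alphas=(0.1, 1.0, 10.0, 100.0, 1000.0))
    return decoder.fit(X, Y)
```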
We evaluate effectiveness of this synthetically generated brain data for data augmentation in the stimulus prediction task (Mitchell et al., 2008). Specifically, we train a decoding model (X2) to predict brain activity during a stimulus reading based on GloVe vectors for nouns. We consider two approaches. In the first approach, the same brain activity data as in previous sections was used. In the second approach, the real brain activity data is augmented with the synthetic activities generated by the regression model (X1). In our experiment, we generate new sentences using the same vocabulary as the original sentences in the PassAct1 dataset. Details of the original 32 sentences (Section A.1.1) along with the 160 generated sentences (Section A.1.2) are given in the appendix section. We process the 160 generated sentences with BERT layer 18 to get word stimulus features in context. The encoding model (X1) was trained using the PassAct1 dataset. Please note that BERT layer 18 was chosen based on the high accuracy results on macrocontext tests, therefore the layer aligned well with the whole brain activity. The choice of representation (deep neural network layer) to encode brain activity should be done carefully, as each representation may be good at encoding different parts of brain. A good criteria for representation selection requires further research. To demonstrate the efficacy of the synthetic dataset, we present the accuracy in predicting noun (or verb) stimuli from observed MEG activity with and without the additional synthetic MEG data. With linear ridge regression model (X2), a GloVe (Pennington et al., 2014) feature to brainactivity prediction models were trained to predict the MEG activity when a word is observed . To test the model performance, we calculate the accuracy of the predicted brain activity given the true brain activity during a word processing (Equation 1). All the experiments use 4-fold cross-validation. Figure 7 shows the increase in the noun/verb prediction accuracy with additional synthetically generated data. The statistical significance is calculated over 400 random label permutation tests. To summarize, these results show the utility of using previously trained regressor model to produce synthetic training data to improve accuracy on additional tasks. Given the high cost of collecting MEG recordings from human subjects and their individual capacity to complete the task, this data augmentation approach may provide an effective alternative in many settings. 5 Related Work Usage of machine learning models in neuroscience has been gaining popularity. Methods in this field use features of words and contexts to predict brain activity using various techniques (Agrawal et al., 2014). Previous research have used functional magnetic resonance imaging (FMRI) (Glover, 2011) and Magnetoencephalography (MEG) (Hmlinen et al., 1993) to record brain activity. Prefrontal cortex in rhesus monkeys was studied in Mante et al. (2013). They showed that an appropriately trained recurrent neural network model reproduces key physiological observations and suggests a new mechanism of input selection and integration. Barak (2017) argues that RNNs with reverse engineering can provide a framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool. Prior research by Mitchell et al. (2008), Wehbe et al. (2014b), Jain and Huth (2018), Hale et al. (2018), Pereira et al. (2018), and Sun et al. 
(2019) have established a general correspondence between a computational model and brain’s re5145 Figure 6: Average sign agreement activity for noun sensitivity stimuli ‘the’. The red and blue colored areas are the +ive and -ive signed brain region agreement respectively, while the white colored region displays brain regions with prediction error. We observe that in most regions of the brain, the predicted and true activity agree on the activity sign, thereby providing evidence that deep learning representations can capture useful information about language processing consistent with the brain recording. (a) Noun prediction results (b) Verb prediction results Figure 7: Accuracy with and without synthetically generated MEG brain data on two stimuli prediction tasks: (a) Nouns (left) and (b) Verbs (right). We trained two models – one using true MEG brain recording and the other using both true and synthetically generated MEG brain data (Augmented data model). We observe that the augmented data model results in accuracy improvement on both tasks, on average 2.1% per subject for noun prediction and 2.4% for verb. Accuracy (chance) is the random permutation test accuracy, with the green shaded area representing standard deviation. Please see Section 4.3 for details. sponse to naturalistic language. We follow these prior research in our analysis work and extend the results by doing a fine-grained analysis of the sentence context. Additionally, we also use deep neural network representations to generate synthetic brain data for extrinsic experiments. 6 Conclusion In this paper, we study the relationship between sentence representations learned by deep neural network models and those encoded by the brain. We encode simple sentences using multiple deep networks, such as ELMo, BERT, etc. We make use of MEG brain imaging data as reference. Representations learned by BERT are the most effective in predicting brain activity. In particular, most models are able to predict activity in the left temporal region of the brain with high accuracy. This brain region is also known to be responsible for processing syntax and semantics for language understanding. To the best of our knowledge, this is the first work showing that the MEG data, when reading a word in a sentence, can be used to distinguish earlier words in the sentence. Encouraged by these findings, we use deep networks to generate synthetic brain data to show that it helps in improving accuracy in a subsequent stimulus decoding task. Such data augmentation approach is very promising as actual brain data collection in large quantities from human subjects is an expensive and labor-intensive process. We are hopeful that the ideas explored in the paper will promote further research in understanding relationships between representations learned by deep models and the brain during language processing tasks. 7 Acknowledgments This work was supported by The Government of India (MHRD) scholarship and BrainHub CMUIISc Fellowship awarded to Sharmistha Jat. We thank Dan Howarth and Erika Laing for help with MEG data preprocessing. 5146 References Pulkit Agrawal, Dustin Stansbury, Jitendra Malik, and Jack L. Gallant. 2014. Pixels to voxels: Modeling visual representation in the human brain. CoRR, abs/1407.5104. Omri Barak. 2017. Recurrent neural networks as versatile tools of neuroscience research. Current Opinion in Neurobiology, 46:1 – 6. Computational Neuroscience. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL. Gary H Glover. 2011. Overview of functional magnetic resonance imaging. Neurosurgery clinics of North America, 22(2):133–vii. Gene H. Golub, Michael Heath, and Grace Wahba. 1979. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21(2):215–223. John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan Brennan. 2018. Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2727–2736. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Matti Hmlinen, Riitta Hari, Risto Ilmoniemi, Jukka Knuutila, and Olli V. Lounasmaa. 1993. Magnetoencephalography: Theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys., 65:413–. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. CoRR, abs/1611.01462. Shailee Jain and Alexander Huth. 2018. Incorporating context into language encoding models for fmri. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 6628–6637. Curran Associates, Inc. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521:436. Valerio Mante, David Sussillo, Krishna V. Shenoy, and William T. Newsome. 2013. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503:78 EP –. Tom M. Mitchell, Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave, Robert A. Mason, and Marcel Adam Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191–1195. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python . Journal of Machine Learning Research, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532– 1543. Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J. Gershman, Nancy Kanwisher, Matthew Botvinick, and Evelina Fedorenko. 2018. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1):963. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Nicole Rafidi. 2014. The role of syntax in semantic processing: A study of active and passive sentences. [Online; accessed 2-March-2019]. Gustavo Sudre, Dean Pomerleau, Mark Palatucci, Leila Wehbe, Alona Fyshe, Riitta Salmelin, and Tom Mitchell. 2012. Tracking neural coding of perceptual and semantic features of concrete nouns. NeuroImage, 62:451–63. 
Jingyuan Sun, Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2019. Towards sentence-level brain decoding with distributed representations. AAAI Press. Partha Pratim Talukdar, Derry Wijaya, and Tom Mitchell. 2012. Acquiring temporal constraints between relations. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12, pages 992–1001, New York, NY, USA. ACM. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. 5147 Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. 2014a. Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PloS one, 9:e112575. Leila Wehbe, Ashish Vaswani, Kevin Knight, and Tom M. Mitchell. 2014b. Aligning context-based statistical models of language with brain activity during reading. In EMNLP, pages 233–243. ACL. 5148 A Appendices A.1 Dataset details Following are the sentences used in the paper for experiments described in Section 4. We list down the sentences in PassAct1 dataset and the generated sentences in the sections Section A.1.1 and Section A.1.2 respectively. The two datasets are disjoint in terms of the sentences they contain, but are built using the same vocabulary. Datasets PassAct2 dataset and Act3 dataset are detailed in subsections A.1.3 and A.1.4 respectively. A.1.1 PassAct1 dataset sentences the boy was liked by the girl the girl was watched by the man the man was despised by the woman the woman was encouraged by the boy the girl was liked by the woman the man was despised by the boy the girl was liked by the boy the boy was watched by the woman the man was encouraged by the girl the woman was despised by the man the woman was watched by the boy the girl was encouraged by the woman the man was despised by the girl the boy was liked by the man the boy was watched by the girl the woman was encouraged by the man the man despised the woman the girl encouraged the man the man liked the boy the girl despised the man the woman encouraged the girl the boy watched the woman the man watched the girl the girl liked the boy the woman despised the man the boy encouraged the woman the woman liked the girl the boy despised the man the man encouraged the woman the girl watched the boy the woman watched the boy the boy liked the girl A.1.2 PassAct1 dataset artificially generated sentences the girl was despised by the man the man despised the girl the man was liked by the girl the girl was liked by the man the girl liked the man the man liked the girl the girl was encouraged by the man the man encouraged the girl the man was watched by the girl the girl watched the man the boy was despised by the man the man despised the boy the man was liked by the boy the boy liked the man the man was encouraged by the boy the boy was encouraged by the man the boy encouraged the man the man encouraged the boy the man was watched by the boy the boy was watched by the man the boy watched the man the man watched the boy the man was despised by the women the women was despised by the man the women despised the man the man despised the women the man was liked by the women the women was liked by the man the women liked the man the man liked the women 
the man was encouraged by the women the women was encouraged by the man the women encouraged the man the man encouraged the women the man was watched by the women the women was watched by the man the women watched the man the man watched the women the girl was despised by the man the man despised the girl the girl was liked by the man the man was liked by the girl the man liked the girl the girl liked the man the girl was encouraged by the man the man encouraged the girl the man was watched by the girl the girl watched the man the girl was despised by the boy the boy was despised by the girl the boy despised the girl the girl despised the boy the girl was encouraged by the boy 5149 the boy was encouraged by the girl the boy encouraged the girl the girl encouraged the boy the girl was watched by the boy the boy watched the girl the girl was despised by the women the women was despised by the girl the women despised the girl the girl despised the women the girl was liked by the women the women was liked by the girl the women liked the girl the girl liked the women the girl was encouraged by the women the women was encouraged by the girl the women encouraged the girl the girl encouraged the women the girl was watched by the women the women was watched by the girl the women watched the girl the girl watched the women the boy was despised by the man the man despised the boy the man was liked by the boy the boy liked the man the boy was encouraged by the man the man was encouraged by the boy the man encouraged the boy the boy encouraged the man the boy was watched by the man the man was watched by the boy the man watched the boy the boy watched the man the boy was despised by the girl the girl was despised by the boy the girl despised the boy the boy despised the girl the boy was encouraged by the girl the girl was encouraged by the boy the girl encouraged the boy the boy encouraged the girl the girl was watched by the boy the boy watched the girl the boy was despised by the women the women was despised by the boy the women despised the boy the boy despised the women the boy was liked by the women the women was liked by the boy the women liked the boy the boy liked the women the boy was encouraged by the women the women was encouraged by the boy the women encouraged the boy the boy encouraged the women the boy was watched by the women the women was watched by the boy the women watched the boy the boy watched the women the women was despised by the man the man was despised by the women the man despised the women the women despised the man the women was liked by the man the man was liked by the women the man liked the women the women liked the man the women was encouraged by the man the man was encouraged by the women the man encouraged the women the women encouraged the man the women was watched by the man the man was watched by the women the man watched the women the women watched the man the women was despised by the girl the girl was despised by the women the girl despised the women the women despised the girl the women was liked by the girl the girl was liked by the women the girl liked the women the women liked the girl the women was encouraged by the girl the girl was encouraged by the women the girl encouraged the women the women encouraged the girl the women was watched by the girl the girl was watched by the women the girl watched the women the women watched the girl the women was despised by the boy the boy was despised by the women the boy despised the women the women despised the boy 
the women was liked by the boy the boy was liked by the women the boy liked the women the women liked the boy the women was encouraged by the boy the boy was encouraged by the women the boy encouraged the women 5150 the women encouraged the boy the women was watched by the boy the boy was watched by the women the boy watched the women the women watched the boy A.1.3 PassAct2 dataset sentences the monkey inspected the peach a monkey touched a school the school was inspected by the student a peach was touched by a student the peach was inspected by the monkey a school was touched by a monkey a doctor inspected a door the doctor touched the hammer the student found a door a student kicked the hammer the student inspected the school a student touched a peach a monkey found the hammer the monkey kicked a door a dog inspected a hammer the dog touched the door a dog found the peach the dog kicked a school the doctor found a school a doctor kicked the peach a school was kicked by the dog the peach was found by a dog the door was touched by the dog a hammer was inspected by a dog the peach was kicked by a doctor a school was found by the doctor the hammer was touched by the doctor a door was inspected by a doctor the hammer was kicked by a student a door was found by the student the hammer was found by a monkey a door was kicked by the monkey A.1.4 Act3 dataset sentences the teacher broke the small camera the student planned the protest the student walked along the long hall the summer was hot the storm destroyed the theater the storm ended during the morning the duck flew the duck lived at the lake the activist dropped the new cellphone the editor carried the magazine to the meeting the boy threw the baseball over the fence the bicycle blocked the green door the boat crossed the small lake the boy held the football the bird landed on the bridge the bird was red the reporter wrote about the trial the red plane flew through the cloud the red pencil was on the desk the reporter met the angry doctor the reporter interviewed the politician during the debate the tired lawyer visited the island the tired jury left the court the artist found the red ball the artist hiked along the mountain the angry lawyer left the office the army built the small hospital the army marched past the school the artist drew the river the actor gave the football to the team the angry activist broke the chair the cellphone was black the company delivered the computer the priest approached the lonely family the patient put the medicine in the cabinet the pilot was friendly the policeman arrested the angry driver the policeman read the newspaper the politician celebrated at the hotel the trial ended in spring the tree grew in the park the tourist hiked through the forest the activist marched at the trial the tourist ate bread on vacation the vacation was peaceful the dusty feather landed on the highway the accident destroyed the empty lab the horse kicked the fence the happy girl played in the forest the guard slept near the door the guard opened the window the glass was cold the green car crossed the bridge the voter read about the election the wealthy farmer fed the horse the wealthy family celebrated at the party the window was dusty the boy kicked the stone along the street the old farmer ate at the expensive hotel 5151 the man saw the fish in the river the man saw the dead mouse the man read the newspaper in church the lonely patient listened to the loud television the girl dropped the shiny dime the couple laughed at dinner 
the council read the agreement the couple planned the vacation the fish lived in the river the flood damaged the hospital the big horse drank from the lake the corn grew in spring the woman bought medicine at the store the woman helped the sick tourist the woman took the flower from the field the worker fixed the door at the church the businessman slept on the expensive bed the businessman lost the computer at the airport the businessman laughed in the theater the chicken was expensive at the restaurant the lawyer drank coffee the judge met the mayor the judge stayed at the hotel during the vacation the jury listened to the famous businessman the hurricane damaged the boat the journalist interviewed the judge the dog ate the egg the doctor helped the injured policeman the diplomat bought the aggressive dog the council feared the protest the park was empty in winter the parent watched the sick child the cloud blocked the sun the coffee was hot the commander ate chicken at dinner the commander negotiated with the council the commander opened the heavy door the old judge saw the dark cloud the young engineer worked in the office the farmer liked soccer the mob approached the embassy the mob damaged the hotel the minister spoke to the injured patient the minister visited the prison the minister found cash at the airport the minister lost the spiritual magazine the mouse ran into the forest the parent took the cellphone the soldier delivered the medicine during the flood the soldier arrested the injured activist the small boy feared the storm the egg was blue the editor gave cash to the driver the editor damaged the bicycle the expensive camera was in the lab the engineer built the computer the family survived the powerful hurricane the child held the soft feather the clever scientist worked at the lab the author interviewed the scientist after the flood the artist shouted in the hotel 5152 (a) Verb sign agreement image between true and predicted brain activations (b) Adjective sign agreement image between true and predicted brain activations (c) Determiner sign agreement image between true and predicted brain activations Figure 8: Sign agreement image for verb, determiner and adjective sensitivity test stimuli. The red and blue colored areas are the +ive and -ive signed brain region agreement. While, the white colored region displays brain regions with prediction error. We observe that in most regions of the brain the predicted and true image agree on the activity sign, thereby proving that deep learning representations can capture useful information about language processing. 5153 Figure 9: Pairwise Accuracy of predicting brain encodings for noun, verb, passive & active sentences. For each of the category the Ridge regression model is learned and tested on the stimulus subset like only nouns or only passive sentences. The color of a cell represents the value within overall accuracy scale with red indicating small values, yellow intermediate and green high values. We observe that Nouns are predicted better than verbs. And active sentences are predicted better than passive sentences. 5154 Figure 10: Micro-context sensitivity test results for all the layers. The color of a cell represents the value within overall accuracy scale with red indicating small values, yellow intermediate and green high values. We observe that noun and verbs are retained in the context with same accuracy followed by determiner and then adjective.
2019
507
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5155–5165 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5155 Modeling affirmative and negated action processing in the brain with lexical and compositional semantic models Vesna G. Djokic♠ Jean Maillard♣ Luana Bulat♣ Ekaterina Shutova♦ ♠Department of Neuroscience, University of Southern California, USA ♣Dept. of Computer Science & Technology, University of Cambridge, United Kingdom ♦ILLC, University of Amsterdam, The Netherlands [email protected], [email protected], [email protected], [email protected] Abstract Recent work shows that distributional semantic models can be used to decode patterns of brain activity associated with individual words and sentence meanings. However, it is yet unclear to what extent such models can be used to study and decode fMRI patterns associated with specific aspects of semantic composition such as the negation function. In this paper, we apply lexical and compositional semantic models to decode fMRI patterns associated with negated and affirmative sentences containing hand-action verbs. Our results show reduced decoding (correlation) of sentences where the verb is in the negated context, as compared to the affirmative one, within brain regions implicated in action-semantic processing. This supports behavioral and brain imaging studies, suggesting that negation involves reduced access to aspects of the affirmative mental representation. The results pave the way for testing alternate semantic models of negation against human semantic processing in the brain. 1 Introduction Computational semantic models are increasingly being evaluated in their ability to capture aspects of human semantic processing, including similarity and association judgments (De Deyne et al., 2016) and semantic representation in the brain (Bulat et al., 2017). Prior work shows that distributional semantic models (DSMs) are able to decode functional magnetic resonance imaging (fMRI) patterns associated with the meaning of concrete words (Anderson et al., 2013). Relevant to our work, Carota et al. (2017) showed that the similarity structure of DSMs for action words correlates with that of fMRI patterns in brain regions implicated in action-semantic processing. More recent studies have also investigated the ability of DSMs to predict fMRI patterns of sentential meanings (Pereira et al., 2018) and larger narrative text passages (Wehbe et al., 2014; Huth et al., 2016). They have shown that encoding models based on word embeddings are able to capture subtle aspects of sentence meaning in the brain, even when these models are oblivious of word order and syntactic structure. While promising, none of this research has so far systematically investigated specific semantic composition phenomena and processing at the syntax-semantic interface, such as that of the negation function. Negation is a fundamental abstraction necessary for efficient reasoning and communication (Horn, 1989). Although it is typically marked syntactically, the semantics of negation in natural language usage has proven to be rather challenging to pinpoint (Speranza and Horn, 2010). In logical negation, the negation operator has been succinctly described as a truth-functional operation, reversing the truth value of a sentence. 
On the other hand, from a pragmatic point of view, the primary function of negation is to direct attention to an alternative meaning and can thus be, more generally, compared to our ability for counterfactual thinking (Hasson and Glucksberg, 2006). It is also often assumed that negation entails affirmation (as it is always positive by default), yet the extent to which the the affirmative situation need be processes is debated (Orenes et al., 2014). Despite the intuition that negated meanings are indeed quite distinct from their affirmative counterparts, there is still no comprehensive account of how the brain represents negated entities. Neuroscientific studies on negation have predominantly focused on studying negation of action-related sentences and suggest that negation blocks access to aspects of the affirmative representation (Papeo et al., 2016). For exam5156 ple, negation of action-related sentences or imperatives involves decreased activity in motor systems of the brain implicated in action semantics when compared to the affirmative context (Tettamanti et al., 2008; Tomasino et al., 2010). However, overall reduced activation does not necessarily equate to a lack of information across patterns of activated or deactivated voxels in a brain region (Kriegeskorte et al., 2008). More importantly, the degree to which negation of action-related sentences impacts access to lexico-semantic representations and semantic similarity in the brain is not yet well understood. To contribute to our understanding of negation and its modeling, we investigate the extent to which lexical and compositional semantic models can decode fMRI patterns of negated and affirmative action sentences in the brain using similarity-based decoding (Anderson et al., 2016). We also test the extent to which the representational similarity structure (Kriegeskorte et al., 2008) of DSMs of actionverbs correlates with that of fMRI patterns associated with negated versus affirmative sentences containing hand-action verbs. We focus on motor areas and classical language-related brain regions implicated in action-semantic processing (e.g., understanding action words and sentences) (Pulvermuller, 2005; Kemmerer, 2015). DSMs have proven successful in modeling aspects of semantic composition in the context of the natural language inference task (Bowman et al., 2015b). Although the modeling of logical negation using DSMs is wrought with challenges (Kruszewski et al., 2017), current state-of-the-art neural network based models appear to capture elements of markedness asymmetry in negation (Li et al., 2016) and, presumably, implicitly model negation at some level. In our experiments, we investigate the extent to which DSMs are able to decode (correlate with) fMRI patterns associated with the reading of sentences containing negated and affirmative action verbs. We experiment with (1) word-level representations of action verbs; and (2) compositional semantic models (based on addition of word-level representations and long short-term memory (LSTM) networks). In agreement with previous work, our results show that distributional representations of action verbs (and to some extent verb-object phrases) show reduced decoding for negated versus affirmative action sentences. This is also reflected as a reduced correlation between the similarity structure of DSMs of action verbs and fMRI patterns of negated as compared to affirmative action sentences. 
Importantly, we show for the first time that negation impacts semantic similarity in motor areas, but also to some extent language-related brain regions. These findings lend further support to the hypothesis that negation may involve reduced access to aspects of the affirmative mental representation. 2 Related Work Decoding brain activity Mitchell et al. (2008) were the first to show that DSMs based on cooccurrence counts with 25 sensorimotor verbs (e.g. see, hear, taste) can predict fMRI patterns associated with the meaning of concrete nouns. Later research has demonstrated that a range of DSMs can decode fMRI patterns of concrete nouns (Murphy et al., 2012; Anderson et al., 2013; Bulat et al., 2017) and, more recently, abstract nouns (Anderson et al., 2017). Most relevant to our study, Carota et al. (2017) showed that the similarity structure of a Latent Semantic Analysis (LSA) model for action words (nouns and verbs) correlates with that of fMRI patterns in motor areas (left precentral gyrus (LPG)) and classical language-related brain regions (left inferior frontal gyrus (LIFG), left posterior middle temporal gyurs (LMTP)) implicated in lexico-semantic processing (Binder et al., 2009). Moving beyond words, other studies have shown that DSMs can predict brain activity patterns associated with larger linguistic units (Wehbe et al., 2014; Huth et al., 2016; Pereira et al., 2018). For example, Pereira et al. (2018) showed that a regression model mapping between fMRI patterns of words and their word embeddings could synthesize vector representations for novel sentences that correlate with the average of the word embeddings of the sentence. Working with larger text fragments, Wehbe et al. (2014) and Huth et al. (2016) have been able to predict neural activity associated with the processing of narratives in the brain using encoding models with word embeddings (also syntactic markers) as features. Although these findings suggest that DSMs are able to predict fMRI patterns associated with the processing of compositional meanings, they do not reveal to what extent the models capture specific compositional phenomena nor the specific impact 5157 of linguistic context on semantic representation in the brain. Our work extends this line of research to study individual aspects of semantic composition, focusing on the negation function. Modeling negation in NLP Kruszewski et al. (2017) contrast logical negation, which captures the idea of the complement of a set, with conversational negation, the phenomenon by which negation identifies a set of alternative plausible utterances: i.e., the assertion “this is not a dog” suggests that the speaker may have been talking about other mammals, but is unlikely to have been talking about a skyscraper. They argue that distributional semantics is a good fit to model conversational negation. Their focus is on compositional distributional methods, which model the negation of nouns via linear transformations. This approach, unlike those used in the present work, relies on the availability of parsed training data. The effect of negation has also been studied in recurrent neural network models for sentiment classification: Li et al. (2016) observe that their LSTM model does not simply learn a fixed transformation for “not”, but rather manages to capture differences in the composition of different words; while Wang et al. (2015) study the behaviour of the LSTM gates in response to negation, showing the network’s ability to simulate complex linguistic phenomena. 
Both groups of authors, like us, focus on LSTM networks, but their models were trained on a sentiment analysis task. We chose a natural language inference task, as it has over an order of magnitude more training data, and requires models to learn a full range of logical and commonsense inferences (Bowman et al., 2015a). Neurocognitive processing of negation Neuroimaging studies show that negated hand action sentences (e.g., Now I don’t push the button) and negative imperatives (e.g., Don’t write) involve decreased activity in motor systems of the brain compared to the same sentences in the affirmative context (Tettamanti et al., 2008; Tomasino et al., 2010). Importantly, Papeo et al. (2016) using Transcranial Magnetic Stimulation (TMS) provide evidence that negation of action-related imperatives involves an immediate reduction of motor (cortical-spinal) excitability for negated compared to affirmative sentences as early as at the initial semantic access stage. Interestingly, the authors show that this suppression does not necessarily reflect neural inhibition in motor areas in contrast to prior studies suggesting a link between action negation and the inhibition of actions (de Vega et al., 2016). These findings seem in some regards contrary to the predictions of linguistic theories of negation. For example, it has been suggested that, at some level, negation must involve processing of the affirmative situation followed by either its modification or rejection (Russell, 1948). Specifically, Kaup et al. (2007) suggest that the abstract syntactic negation marker may act to reverse the truth value of a sentence through a two-step simulation process involving first, a simulation of the affirmative situation, and, subsequently, a simulation of the actual state of affairs, leading eventually to the suppression of the affirmative situation. While a few behavioral studies have found evidence in favor of the idea that negation involves a simulation of the affirmative situation (Kaup et al., 2007), it has been argued that these effects may be the result of task-induced cognitive strategies (Papeo et al., 2016). On the whole, behavioral and neuroscientific findings do not paint a complete picture of negation, but they suggest that access to some aspects of the affirmative semantic representation in the brain are being immediately reduced (or blocked). Given the above, we might expect to see significant differences in the way in which the semantic similarity of DSM models for actionwords and sentences is reflected across the brain areas implicated in action-semantics when comparing affirmative and negated actions. 3 Brain Imaging Data We use the fMRI data by Djokic et al. (forthcoming), who investigated negation of literal and metaphoric actions in the brain. Participants Fifteen healthy adults (8 female, ages 18 to 35) took part in the study. All subjects were right-handed, native English speakers. Stimuli Thirty-one unique hand-action verbs were used to create 40 affirmative literal (AL), 40 negated literal (NL), 40 affirmative metaphor (AM), and 40 negated metaphor (NM). Each verb was repeated once for each condition, except 9 verbs which were repeated twice for each condition. Additionally, 40 affirmative literal paraphrases of the metaphor were created. All sentences are in the 3rd person singular, present tense, progressive (Figure 1). Stimuli were created by 5158 Condition Sentence Affirm. Literal She’s pushing the wheelbarrow Negated Literal He’s not pushing the carriage Affirm. 
Metaphor She’s pushing the agenda Negated Metaphor He’s not pushing the idea Figure 1: Sample stimuli for the verb push the authors of the study and normed for psycholinguistic variables in a separate experiment. Experimental Paradigm Subjects were instructed to passively read the object of the sentence (e.g. ‘the yellow lemon’), briefly shown on screen first, followed by the sentence (e.g. ‘She’s squeezing the lemon’). Catch trials were included that contained a semantically incongruent object (e.g., ‘the wooden table’, ‘She’s eating the table’). Participant’s recall of catch trials (and non-catch) trials was tested to ensure participants were paying attention. The object was shown on screen for 2 s, followed by a 0.5 s interval, then the sentence was presented for 4 s followed by a rest of 8 s. A total of 5 runs were completed, each lasting 10.15 minutes (3 subjects only completed 4 runs). Stimulus presentation was pseudo-randomized (i.e., such that sentences with the same verb were not shown in succession). fMRI Data Acquisition fMRI images were acquired with a Siemens MAGNETOM Trio 3T System with a 32-channel head matrix coil. Highresolution anatomical scans were acquired with a structural T1-weighted magnetization prepared rapid gradient echo (MPRAGE) with TR=1950 ms, TE=2.26 ms, flip angle 10◦, 256 × 256 mm matrix, 1 mm resolution, and 208 coronal slices. Whole brain functional images were obtained with a T2* weighted single-shot gradient-recalled echo-planar sequence (EPI) using blood oxygenation-level-dependent contrast with TR=2000 ms, TE=30 ms, flip angle 90◦, 64 × 64 mm matrix, 3.5 mm resolution. Each functional image consisted of 37 contiguous axial slices, acquired in interleaved mode. 4 Semantic models All our semantic models are based on GloVe (Pennington et al., 2014) word embeddings. We use the 100-dimensional word vectors provided by the authors, trained on Wikipedia and Gigaword corpora.1 We investigate the following models: 1https://nlp.stanford.edu/projects/glove/ Verb In this model, stimulus phrases are represented as the individual D-dimensional word embeddings of their verb. Addition This model takes the embeddings of the verb and object of the phrase, and computes the phrase representation as their average. LSTM As a more sophisticated compositional model, we take the long short-term memory (LSTM) recurrent neural network architecture (Hochreiter and Schmidhuber, 1997). Due to the lack of a large training set, directly training the LSTM model for our specific task (i.e. brain decoding) was not possible. Instead, we trained the LSTM on a natural language inference task (Bowman et al., 2015a), as it is a complex semantic task where we expect rich meaning representations to play an important role. Given two sentences, the goal of natural language inference is to decide whether the first entails or contradicts the second, or whether they are unrelated. We used the LSTM to compute hidden representations for each sentence, and then used a single-layer perceptron classifier as in Bowman (2016) to predict the correct relationship. The inputs were the same 100-dimensional word embeddings used for the other models, and were updated during training. The model was optimised using Adam (Kingma and Ba, 2014). We extracted the 100-dimensional hidden representations learnt by the LSTM for the verb-object phrases in our stimulus set. 
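To make the three models concrete, here is a minimal sketch (our own Python/NumPy illustration, not the authors' code) of how the VERB and ADDITION representations can be derived from pretrained GloVe vectors; the file path and example phrases are placeholders. The LSTM representation would instead be read off the hidden state of the NLI-trained encoder, which is omitted here.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def verb_model(glove, verb, obj):
    """VERB model: the phrase is represented by the verb embedding alone."""
    return glove[verb]

def addition_model(glove, verb, obj):
    """ADDITION model: average of the verb and object embeddings."""
    return (glove[verb] + glove[obj]) / 2.0

# Placeholder path and phrases, for illustration only.
glove = load_glove("glove.6B.100d.txt")
for verb, obj in [("push", "wheelbarrow"), ("squeeze", "lemon")]:
    print(verb, obj, addition_model(glove, verb, obj)[:5])
```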
5 Brain activity decoding 5.1 fMRI data preprocessing We restricted analysis to the 12 subjects that completed all runs (3 out of 15 subjects scanned only completed 4 out of 5 runs). The runs were combined across time to form each subject’s dataset. The functional data was co-registered with the MPRAGE structural image, high-pass filtered (90 secs) and motion corrected to the middle slice using the fMRI software FSL2. Lastly each dataset was linearly detrended and (baseline) normalized per run using PyMVPA3. 5.2 Estimation of fMRI Patterns GLM Modeling The Blood oxygenation level dependent (BOLD) signal response was estimated 2Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB’s) Software Library, https://fsl.fmrib.ox.ac.uk/fsl 3http://www.pymvpa.org/ 5159 using the general linear model (GLM) with the hemodynamic response function (HRF) regressor with PyMVPA. The entire stimulus duration for each object and action-related sentence was modeled as an event lasting six seconds (3 TRs) after taking into account the hemodynamic lag. This gave a response amplitude (Beta) estimate for each sentence resulting in voxel-wise Beta maps that were normalized to Z-scores. Verbs Estimated fMRI patterns were calculated for each of the thirty-one action-verbs by combining action-related sentences with the same action-verb across all stimuli, irrespective of sentence context (All Verbs). Estimated fMRI patterns for action-verbs presented in an affirmative context (Aff Verbs) were obtained by combining only affirmative sentences containing the same action-verbs. Similarly, fMRI estimates for action-verbs in a negative context (Neg Verbs) were obtained by combining negative sentences containing the same action-verbs. In all three cases, estimated brain responses for sentences containing the same action-verbs were averaged together across runs to yield voxel-wise Z-score maps for each of the thirty-one verb presentations and used to perform similarity-based analysis within each subject’s native functional space. We performed voxel selection by selecting the top fifteen percent of voxels that had the highest correlation stability across runs using All Verbs. Stimulus Phrases Estimated fMRI patterns for individual action sentences in each condition (affirmative literal (AL), affirmative metaphor (AM), negated literal (NL), and negated metaphor (NM)), were calculated, separately, by modeling unique action sentences within a condition as separate events. Analysis was restricted to only sentences within each condition representative of the 31 unique verbs. We performed voxel selection by selecting the top fifteen percent of voxels with the greatest correlation stability across runs between sentences in the specific condition being modeled. 5.3 Definition of Regions of Interest We selected a priori regions of interest (ROIs) implicated in action semantics to perform our analysis. This includes 1) left precentral gyrus (LPG), implicated in sensorimotor processing (i.e., motoric features) (Pulvermuller, 2005); 2) left middle temporal gyrus, posterior (LMTP); 3) left inferior frontal gyrus (LIFG), the latter two Figure 2: Neural and semantic correlation coefficient matrices. In the study the number of verbs is 31. implicated in language processing (i.e., lexicalsemantics/syntax) (Fedorenko et al., 2011). ROIs were created using the Harvard-Oxford Cortical Structural Probabilistic Atlases thresholded at 25% in FSL. 
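As a concrete illustration of the correlation-stability voxel selection described in Section 5.2, here is a minimal sketch (our own, not the PyMVPA pipeline); the array layout, with one Z-scored pattern per condition and run, is an assumption.

```python
import numpy as np

def select_stable_voxels(patterns, keep_fraction=0.15):
    """patterns: (n_runs, n_conditions, n_voxels) Z-scored response estimates.
    A voxel's stability is its mean across-run Pearson correlation of the
    response profile over conditions; the top `keep_fraction` voxels are kept."""
    n_runs, _, n_voxels = patterns.shape
    stability = np.zeros(n_voxels)
    for v in range(n_voxels):
        profiles = patterns[:, :, v]          # one across-condition profile per run
        corrs = [np.corrcoef(profiles[i], profiles[j])[0, 1]
                 for i in range(n_runs) for j in range(i + 1, n_runs)]
        stability[v] = np.mean(corrs)
    n_keep = int(round(keep_fraction * n_voxels))
    return np.argsort(stability)[::-1][:n_keep]   # indices of retained voxels
```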
Masks were transformed from the Montreal Neurological Institute (MNI) standard space into the subject’s native functional space. 5.4 Representational Similarity Analysis Representational similarity analysis (RSA) is a multivariate approach to fMRI data analysis and avoids model over-fitting and dependence on learning parameters when dealing with high-dimensional data (Kriegeskorte et al., 2008). It calculates a global measure comparing the similarity structures of neural and model-based stimuli representations. The neural and semantic model vectors are first transformed into an abstracted similarity space by computing a similarity matrix from the brain activity vectors (N stimuli × N stimuli) and a similarity matrix from the semantic model-based vectors (N stimuli × N stimuli), as shown in Figure 2. The similarities are computed using the Pearson correlation coefficient, following Kriegeskorte et al. (2008). The elements in the neural and semantic correlation matrices are then converted into correlation distances (1 − r), leaving zeros on the diagonal. The resulting matrices are referred to as representational dissimilarity matrices (RDMs) and indicate the degree to which conditions can be distinguished from each other (i.e., distance in high-dimensional similarity space). An overall (dis)similarity measure is given by the strength of Spearman’s rank correlation between the vectorized below-diagonal triangle of the model RDM and the vectorized below-diagonal triangle of the neural RDM, giving an overall indication of the correspondence between the representational information carried in the brain and in the model. We used a one-sided Wilcoxon signed-rank test to test whether correlations across subjects were significantly greater than zero. The False Discovery Rate (FDR) procedure (Benjamini and Hochberg, 1995) was used to correct for multiple testing. 5.5 Group-level Similarity-based Decoding We used similarity-based decoding (Anderson et al., 2016), based on RSA, to investigate whether our semantic models can decode fMRI patterns of action-related sentences. In similarity-based decoding, neural and semantic models are first each projected to a similarity space, in the same manner as in RSA, allowing decoding to be performed in a common unit space. Following Anderson et al. (2016), we perform leave-two-out decoding (for n = 31, possible pairs = 465). Given a pair of stimuli, the neural and semantic similarity codes for each stimulus are obtained by extracting the relevant labeled column vector from the neural similarity matrix and the semantic similarity matrix, respectively. These similarity codes are further reduced by removing the entries referring to that pair, to avoid auto-correlations. These reduced neural and semantic similarity codes are then correlated with each other. If the correct labeling scheme (i.e., when the neural and semantic codes have the same label) yields a higher sum of correlation coefficients than the incorrect labeling (i.e., when they don’t match), this is counted as a correct classification, otherwise as incorrect. The decoding accuracy is calculated as the number of correct classifications over the number of possible pairs. We performed group-level similarity-based decoding in which, prior to the decoding step, the neural similarity codes of each subject are averaged together to yield one single group-level neural similarity code, as in Anderson et al. (2016).
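The RSA computation described in Section 5.4 can be sketched as follows (a simplified NumPy/SciPy illustration under our own naming, not the authors' code): correlation-distance RDMs are built for the neural and model vectors and compared via Spearman correlation of their below-diagonal triangles.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(vectors):
    """Correlation-distance RDM (1 - Pearson r) for an (n_items, n_features) array."""
    return 1.0 - np.corrcoef(vectors)

def rsa_score(neural_vectors, model_vectors):
    """Spearman correlation between the below-diagonal triangles of the two RDMs."""
    tril = np.tril_indices(len(neural_vectors), k=-1)
    rho, _ = spearmanr(rdm(neural_vectors)[tril], rdm(model_vectors)[tril])
    return rho

# Across subjects, the per-subject rho values would then be tested against zero
# with a one-sided Wilcoxon signed-rank test (scipy.stats.wilcoxon) and
# FDR-corrected over the ROIs/conditions tested.
```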
Leave-two-out decoding was then performed using group-level neural similarity and model-based similarity codes as described above. Statistical significance of group-level decoding accuracies was assessed using permutation testing as in Anderson et al. (2016). The rows and columns of the model-based correlation matrix were shuffled to remove relationships between the stimulus label and its model-based similarity code, while the neural correlation matrix was held fixed. Classification accuracies were obtained using the randomly shuffled data. This procedure was repeated 10,000 times to obtain a null distribution of decoding accuracies, reflecting expected chancelevel accuracies with random labeling. The null hypothesis is that there is no relationship between the model-based and the group-level neural similarity codes of our stimuli. The p-value for each accuracy was calculated as the proportion of scores equal to or larger than that accuracy score. 6 Experiments and Results 6.1 Verb Model Representational Similarity Analysis We used RSA to obtain a measure of relatedness between our fMRI patterns for 31 verbs and the semantic similarity of the VERB model. We performed a condition-based analysis, comparing three types of neural estimates of the verbs: 1) All Verbs, 2) Aff Verbs, and 3) Neg Verbs. We correlated the RDMs for each condition of the neural estimates of the verbs (All Verbs, Aff Verbs, and Neg Verbs) separately with the RDM of the VERB model. Each analysis was performed within the a priori-defined ROIs (LPG, LIFG, and LMTP). Significant correlations (greater than zero) across subjects were found between the dissimilarity structures of the neural estimates for All Verbs and the VERB model in the LPG (r = 0.04, p < 0.01), LIFG (r = 0.04, p < 0.01), but not the LMTP (Table 1). Similarly, the Aff Verbs neural estimates showed significant correlations with the VERB model in the LPG (r = 0.04, p < 0.01), LIFG (r = 0.05, p < 0.01) and not the LMTP. In contrast, we did not find that Neg Verbs triggered any significant correlations with the VERB model in the ROIs tested. Moreover, Aff Verbs showed greater overall correlations with the VERB model when compared to Neg Verbs (as assessed by two-tailed paired Wilcoxon Sign Rank test) within the LPG and the LIFG (p < 0.05), but not the LMTP. These results suggest that (1) the semantic similarity of the VERB model correlates with fMRI patterns of sentences containing the same action verb (irrespective of polarity) in motor (LPG) and the language-related brain region (LIFG) (2) neural estimates for Neg Verbs show a reduced sensitivity to the similarity structure of the VERB model compared to Aff Verbs in the same ROIs, mainly motor (LPG) and the language-related brain region (LIFG). This suggests that negation involves reduced access to sensorimotor and lexico-semantic representations associated with the affirmative representation. 5161 Region All Aff Neg LPG 0.04(0.00) 0.04(0.00) -0.01(0.83) LIFG 0.04(0.00) 0.05(0.00) 0.00(0.21) LMTP 0.01(0.24) 0.01(0.18) 0.01(0.07) Table 1: RSA with VERB Model: Significant Spearman’s rank correlation coefficients and p-value in bold. Region All Aff Neg LPG 0.09(0.02) 0.09(0.00) -0.03(0.77) LIFG 0.05(0.00) 0.08(0.00) 0.04(0.11) LMTP 0.07(0.02) 0.10(0.00) 0.01(0.21) Table 2: RSA with VERB model for restricted set of verbs: Significant Spearman’s rank correlation coefficients and p-value in bold. 
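For concreteness, the group-level leave-two-out decoding and the permutation test of Section 5.5 can be sketched as follows (a simplified illustration under our own naming; the similarity matrices are assumed to have been computed and, for the group analysis, averaged over subjects beforehand).

```python
import itertools
import numpy as np

def leave_two_out_accuracy(neural_sim, model_sim):
    """neural_sim, model_sim: (n, n) correlation matrices over the same n stimuli."""
    n = neural_sim.shape[0]
    correct = 0
    pairs = list(itertools.combinations(range(n), 2))
    for i, j in pairs:
        keep = [k for k in range(n) if k not in (i, j)]  # drop the pair's own entries
        r = lambda a, b: np.corrcoef(a, b)[0, 1]
        matched = r(neural_sim[keep, i], model_sim[keep, i]) + \
                  r(neural_sim[keep, j], model_sim[keep, j])
        swapped = r(neural_sim[keep, i], model_sim[keep, j]) + \
                  r(neural_sim[keep, j], model_sim[keep, i])
        correct += int(matched > swapped)
    return correct / len(pairs)

def permutation_pvalue(neural_sim, model_sim, n_perm=1000, seed=0):
    """Null distribution from shuffling rows/columns of the model matrix
    (the paper uses 10,000 permutations; fewer are used here for speed)."""
    rng = np.random.default_rng(seed)
    observed = leave_two_out_accuracy(neural_sim, model_sim)
    null = np.array([leave_two_out_accuracy(neural_sim, model_sim[np.ix_(p, p)])
                     for p in (rng.permutation(len(model_sim))
                               for _ in range(n_perm))])
    return observed, float(np.mean(null >= observed))
```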
We performed an additional analysis restricted to nine verbs, for which we had the maximal number of sentences with these same verbs (giving an improved signal-to-noise ratio). We observed a stronger but similar pattern, with significant correlations for All Verbs in the LPG (r = 0.09, p < 0.05), LIFG (r = 0.05, p < 0.01), and also within the LMTP (r = 0.07, p < 0.05) (Table 2). Similarly, for Aff Verbs we found significant correlations across the LPG (r = 0.09, p < 0.01), LIFG (r = 0.08, p < 0.01), and also within the LMTP (r = 0.10, p < 0.01). These results are in line with work showing semantic category effects for action-words in brain regions implicated in action semantics (Carota et al., 2017), extending this to action sentences. Similar to the previous analysis, we did not find any significant correlations with the Neg Verbs in any of the ROIs tested (Table 2). In the restricted analysis only the LPG (p < 0.05) (as opposed to both the LPG and LIFG) showed greater correlations for Aff Verbs than Neg Verbs, in line with work showing that action negation impacts modal (e.g., motor) areas (Ghio et al., 2018). Group-level Similarity-based Decoding We also performed the same condition-based analysis with group-level similarity-based decoding, allowing us to observe systematic patterns across subjects more generally. Table 3 shows the decoding accuracy obtained for each ROI at the group level in the condition-based analysis.
Region  All        Aff        Neg
LPG     72 (0.00)  66 (0.01)  53 (0.33)
LIFG    64 (0.02)  65 (0.01)  42 (0.77)
LMTP    51 (0.37)  52 (0.35)  64 (0.02)
Table 3: Group-Level Similarity-based decoding with VERB. Significant accuracies (%) and p-value in bold.
Overall, findings are in line with the RSA results, with significant decoding accuracies found for All Verbs in the LPG (Acc = 0.72, p < 0.01) and LIFG (Acc = 0.64, p < 0.05), as well as similar significant decoding accuracies for Aff Verbs in the LPG (Acc = 0.66, p < 0.05) and LIFG (Acc = 0.65, p < 0.05). Although the Neg Verbs did not show significant decoding in the LPG and LIFG, we observed significant decoding within the LMTP for Neg Verbs (Acc = 0.64, p < 0.05). The above finding, coupled with the fact that in the RSA analysis we never observed significant correlation differences between Neg Verbs and Aff Verbs in the LMTP, may suggest that this area is less impacted by polarity. 6.2 Addition and LSTM Models Group-level Similarity-Based Decoding As an exploratory component of our study, we also performed group-level similarity-based decoding for the 31 sentences that each contained a unique verb for each condition type (i.e., AL, NL, AM, NM), separately, allowing us to assess the ability of compositional semantic models (ADDITION and LSTM models) to decode different kinds of negated and affirmative sentences. We observed that the ADDITION model showed significant decoding in the LPG (Acc = 0.64, p < 0.05) and LIFG (Acc = 0.65, p < 0.05) for the affirmative literal condition (AL) but not in the negated condition (NL) (Table 4). Interestingly, while we found significant decoding accuracies for the affirmative metaphor condition (AM) in the LPG and LMTP, we also observed significant decoding accuracies for the negated metaphor condition (NM) within the LPG (Acc = 0.70, p < 0.01) and LIFG (Acc = 0.64, p < 0.05). For the LSTM model we showed significant decoding in the LPG for the affirmative literal condition (AL) (Acc = 0.67, p < 0.05) and affirmative metaphoric condition (AM) (Acc = 0.73, p < 0.01) but not for the negated conditions (NL, NM) (Table 5).
Significant decoding was also found in the LMTP, but only for the AM condition (Acc = 0.70, p < 0.01). The results suggest reduced decoding for the negated as compared to affirmative literal conditions primarily in sensorimotor brain areas, in line with our previous RSA findings at the verb level, with more mixed results for the LIFG and LMTP. Given that the ADDITION model appears to be sensitive to negated metaphoric actions within the LPG and LIFG, this may not be the case for the negated metaphoric condition.
Region  AL         NL         AM         NM
LPG     64 (0.01)  59 (0.13)  73 (0.00)  70 (0.00)
LIFG    65 (0.01)  49 (0.55)  53 (0.33)  64 (0.02)
LMTP    58 (0.15)  55 (0.24)  70 (0.00)  55 (0.24)
Table 4: Group-Level Similarity-based decoding with ADDITION. Significant accuracies and p-value in bold.
Region  AL         NL         AM         NM
LPG     67 (0.01)  60 (0.10)  71 (0.00)  56 (0.20)
LIFG    50 (0.48)  51 (0.41)  61 (0.08)  62 (0.06)
LMTP    56 (0.22)  48 (0.58)  75 (0.00)  54 (0.34)
Table 5: Group-Level Similarity-based decoding with LSTM. Significant accuracies (%) and p-value in bold.
7 Discussion Representational similarity analysis showed that the semantic similarity structure provided by the VERB model corresponded well with the neural similarity of sentences containing the same action-verbs (All Verbs) within motor (LPG) and language-related brain regions (LIFG, LMTP), both implicated in action-semantic processing (Pulvermuller, 2005). Crucially, when looking at the specific impact of sentential context, we found that the fMRI response patterns for negated action-verbs (Neg Verbs) showed significantly reduced correlations with the VERB model compared to the affirmative action-verbs (Aff Verbs), mainly in the LPG and LIFG. Similarly, when performing a group-level similarity-based decoding analysis, we also found evidence suggesting reduced decoding accuracies for Neg Verbs compared to Aff Verbs within the LPG and LIFG. Taken together, these findings provide support for previous neuroscientific studies suggesting that negation manifests foremost as reduced access to motor areas implicated in coding sensorimotor features of action verbs (Tettamanti et al., 2008; Tomasino et al., 2010; Papeo et al., 2016). However, they also provide compelling evidence in support of the idea that the modulatory impact of negation may extend to areas of the language network. Lastly, our experiments with compositional models show that some of these effects may carry over to more complex models. Our RSA findings for All Verbs (and also Aff Verbs) are consistent with the work of Carota et al. (2017), who showed that an LSA model reflecting semantic category information about both verbs and objects associated with actions (e.g., tools and foods) significantly correlated with the similarity of fMRI patterns for verbs and objects in the LPG and LIFG (and to a lesser extent the LMTP). When this analysis was restricted to only action verbs, the LIFG was predominantly sensitive to the semantic similarity of action verbs. It is likely that our results for All Verbs (irrespective of polarity) are more closely aligned with their results for verbs and objects associated with actions, given that our action verbs were presented in a sentence context that included information about the object. Notably, we found a modulatory impact of negation in both sensorimotor (LPG) and, to some extent, the language-related brain region (LIFG).
The LIFG has been implicated in lexical-semantic similarity in the brain, but also in the selection of competing semantic alternatives (Thompson-Schill et al., 1997; Carota et al., 2017). For example, the LIFG may be important for event prediction, such as knowing which words (objects or tools) are implied by a given action verb (Carota et al., 2017). This provides further support for the hypothesis that negation involves reduced access to the affirmative mental representation. Importantly, this involves not only reduced access to motoric features, but also reduced access to lexico-semantic relations in language-related brain regions. The LMTP may have been less impacted by action negation as it is more closely associated with higher-level object processing (Devereux et al., 2013) and, therefore, possibly captures less of the overall semantic variance associated with any given action verb. Moreover, in our study we focused on neural estimates of action verbs irrespective of their specific objects. Thus, the LPG and LIFG may more closely reflect action-semantic variance and show a greater modulatory effect of negation. However, given that similarity-based analysis is sensitive to the semantic distance of the stimuli in question, future work should investigate polarity decoding with verb-object phrases with maximal semantic variance (e.g., action verbs associated with distinct effectors and object-directed goals). Lastly, when testing compositional models we also observed that significant decoding accuracies were predominantly found in motor areas (LPG) for affirmative conditions. Interestingly, we did observe an exception to this for negated action verbs that were also used in a metaphorical context, possibly suggesting that compositional models are better able to capture motor features associated with metaphorical meanings on the whole, but this would need further investigation. Our main finding of a modulatory impact of negation on motor and, to some extent, language-related brain regions is in line with the earlier work of Tettamanti et al. (2008), who found a reduction in activations within left-hemispheric frontal-temporal-parietal areas implicated in the representation of actions for negative compared to positive action sentences (but see Ghio et al., 2018). Importantly, however, our results do not rule out the possibility that other brain regions may correlate with the VERB model. Recent neuroscientific work suggests that negation not only modulates modality-specific brain regions but also brain areas implicated in syntactic processing and cognitive control (Ghio et al., 2018). It is possible that prefrontal areas implicated in control and working memory may act as an intermediate stage in charge of assigning polarity and temporarily holding a representation of the affirmative situation. We are currently investigating this possibility through a whole-brain searchlight analysis, but note that the temporal resolution of fMRI may possibly hinder detection of any intermediate processing steps. In this study we provide support for the idea that negation may be mediated in part by reducing (or blocking) access to aspects of the affirmative representation. This may provide a ‘default’ negation meaning (Papeo et al., 2016), as well as allow competing or cooperating semantic alternatives to emerge.
On the other hand, it is also possible that the results reflect a more ‘categorical’ representation of negation and that the current semantic models are merely not a suitable represenation for the negated meaning. Future work will need to understand the mechanisms by which negation modulates semantic similarity and lexico-semantic relations in brain regions implicated in action-semantics and how this gives rise to a negated meaning. It would be interesting to test alternate models for negation that can simultaneously explain, for example, why the verb ‘grasping’ has a more crystallized meaning than its negation ‘not grasping’, whose meaning may also depend to a greater extent on the specific linguistic (or extralinguistic) context. A fruitful avenue of research may be to investigate the extent to which contextual representations of LSTM models in the context of a sentiment classification task can be used to predict fMRI activations for positive versus negative affective phrases. Predicting sentiment is intimately tied to polarity (e.g., ‘good’ versus ‘not good’) and the relationship between affective words and their negated counterparts near orthogonal. Prior work shows the role of LSTM gates in modeling negation in sentiment prediction in part by locally minimizing the input of the negated affective word (Wang et al., 2015), providing insight into the role of learned contextual information in building the negated meaning. The sentiment test case may offer a means to measure how changes in contextual representations relevant to the semantic modeling of negation can contribute directly to predicting brain activity associated with negation processing. Alternatively, Kruszewski et al. (2017) show that conversational negation can be modeled with a distributional approach, acting like a ‘graded similarity function’ that prompts a search for ‘similar’ alternative meanings. Although prior psycholinguistics work on negation consistently shows evidence to suggest that negation reduces access to the affirmative representation, at least one study showed that this is not the case for entities semantically related to the negated representation (MacDonald and Just, 1989). This more closely aligns with the idea that some dimensions of the affirmative representation are being processed while others reduced, possibly due to competing semantic alternatives. Thus, future work should also investigate whether modeling negation as a set of alternative meanings can further show the impact of negation on semantic representation in the brain. 8 Conclusion In our work, we show for the first time that sensorimotor and to some extent language-related brain regions that correlate with distributional semantic models of action verbs may be impacted by negation. We also show that this effect may extend to more complex compositional models (in motor brain regions). Our work paves the way towards understanding the extent to which human meaning representation is impacted by negation. This finding can in turn inform the design of distributional models dealing with verb negation, for instance when modelling negation as a space of alternative meanings. 5164 References Andrew J Anderson, Elia Bruni, Ulisse Bordignon, Massimo Poesio, and Marco Baroni. 2013. Of words, eyes and brains: Correlating image-based distributional semantic models with neural representations of concepts. In EMNLP, pages 1960–1970. Andrew J Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. 
Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns. Transactions of the Association for Computational Linguistics, 5:17–30. Andrew J Anderson, Benjamin D Zinszer, and Rajeev DS Raizada. 2016. Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities. NeuroImage, 128:44–53. Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B., pages 289–300. Jeffrey R Binder, Rutvik H Desai, William W Graves, and Lisa L Conant. 2009. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex, 19(12):2767–96. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015a. A large annotated corpus for learning natural language inference. CoRR, abs/1508.05326. Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466–1477, Berlin, Germany. Association for Computational Linguistics. Samuel R Bowman, Christopher Potts, and Christopher D Manning. 2015b. Learning distributed word representations for natural logic reasoning. Knowledge representation and reasoning: Integrating symbolic and neural approaches: Papers from the 2015 AAAI Spring Symposium., pages 289–300. Luana Bulat, Stephen Clark, and Ekaterina Shutova. 2017. Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1092–1102, Copenhagen, Denmark. Association for Computational Linguistics. Francesa Carota, Nikolaus Kriegeskorte, Hamed Nili, and Friedemann Pulvermuller. 2017. Representational similarity mapping of distributional semantics in left inferior frontal, middle temporal, and motor cortex. Cerebral Cortex, 27(1):294–309. Simon De Deyne, Amy Perfors, and Daniel J Navarro. 2016. Predicting human similarity judgments with distributional models: The value of word associations. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1861–1870. Barry Devereux, Lorraine Tyler, Jeroen Geertzen, and Billi Randall. 2013. The centre for speech, language and the brain (cslb) concept property norms. Behavior research methods, pages 1–9. Vesna G Djokic, Ekaterina Shutova, Elisabeth Wehling, Benjamin Bergen, and Lisa Aziz-Zadeh. forthcoming. Affirmation and negation of metaphorical actions in the brain. Evelina Fedorenko, Michael K Behra, and Nancy Kanwisher. 2011. Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 108(39):16428–33. Marta Ghio, Karolin Haegert, Matilde M Vaghi, and Marco Tettamanti. 2018. Sentential negation of abstract and concrete conceptual categories: a brain decoding multivariate pattern analysis study. Philosophical Transactions B, (373). Uri Hasson and Sam Glucksberg. 2006. Does understanding negation entail affirmation? an examination of negated metaphors. Journal of Pragmatics, 38:1015–1032. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. 
Neural Comput., 9(8):1735– 1780. Laurence R Horn. 1989. A Natural History of Negation. University of Chicago Press, Chicago. Alexander G Huth, Wendy A de Heer, Thomas L Griffiths, Frederic E Theunissen, and Jack L Gallant. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532:453. Barbara Kaup, Richard H Yaxley, Carol J Madden, Rolf A Zwaan, and Jana L¨udtke. 2007. Experiential simulations of negated text information. Quarterly Journal of Experimental Psychology, 60(7):976– 990. David Kemmerer. 2015. Are the motor features of verb meanings represented in the precentral motor cortices? yes, but within the context of a flexible, multilevel architecture for conceptual knowledge. Psychonomic Bulletin Review, 22:1068–1075. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Nikolaus Kriegeskorte, Mur Marieke, and Peter Bandettini. 2008. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2(4):4. 5165 German Kruszewski, Denis Paperno, Raffaella Bernardi, and Marco Baroni. 2017. There is no logical negation here, but there are alternatives: Modeling conversational negation with distributional semantics. Association for Computational Linguistics, 42(4):637–660. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In HLT-NAACL, pages 681–691. Maryellen C MacDonald and Marcel A Just. 1989. Changes in activation levels with negation. Journal of Experimental Psychology. Learning, Memory, and Cognition, 15(4):633–642. Tom M Mitchell, Svetlana V Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L Malave, Robert A Mason, and Marcel A Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191–1195. Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Selecting corpus-semantic models for neurolinguistic decoding. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, pages 114–123. Association for Computational Linguistics. Isabel Orenes, David Beltran, and Carlos Santamaria. 2014. How negation is understood: Evidence from the visual world paradigm. Journal of Memory and Language, 74:36–45. Liuba Papeo, Jean-Remy Hochmann, and Lorella Battelli. 2016. The default computation of negated meanings. Journal of Cognitive Neuroscience, 28(12):1980–1986. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J Gershman, Nancy Kanwisher, Matthew Botvinick, and Evelina Fedorenko. 2018. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9:963. Friedemann Pulvermuller. 2005. Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6:576–582. Bertrand Russell. 1948. Human knowledge: Its scope and limits. Simon & Schuster, New York. John L Speranza and Laurence R Horn. 2010. A brief history of negation. Journal of Applied Logic, 8(3):277–301. Marco Tettamanti, Rosa Manenti, Pasquale A Della Rosa, Andrea Falini, Daniela Perani, Stefano F Cappa, and Andrea Moro. 2008. Negation in the brain: Modulating action representations. NeuroImage, 43(2):358–367. Sharon L Thompson-Schill, Mark D’Esposito, Geoffrey K Aguirre, and Martha J Farah. 1997. 
Role of left prefrontal cortex in retrieval of semantic knowledge: a re-evaluation. Proc Natl Acad Sci., 94:14792–14797. Barbara Tomasino, Peter H Weiss, and Gereon R Fink. 2010. To move or not to move: Imperatives modulate action-related verb processing in the motor system. Neuroscience, 169(1):246–258. Manuel de Vega, Yvrena Morera, Immaculada Le´on, David Beltr´an, Pialr Casado, and Manuel Mart´ınLoeches. 2016. Sentential negation might share neurophysiological mechanisms with action inhibition. Evidence from frontal theta rhythm. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 36(22):6002–6010. Xin Wang, Yuanchao Liu, Chengjie SUN, Baoxun Wang, and Xiaolong Wang. 2015. Predicting polarities of tweets by composing word embeddings with long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1343–1353, Beijing, China. Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. 2014. Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PLOS ONE, 9:11.
2019
508
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5166–5175 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5166 Word-order biases in deep-agent emergent communication Rahma Chaabouni1,2, Eugene Kharitonov1, Alessandro Lazaric1, Emmanuel Dupoux1,2 and Marco Baroni1,3 1Facebook A.I. Research 2Cognitive Machine Learning (ENS - EHESS - PSL Research University - CNRS - INRIA) 3ICREA {rchaabouni,kharitonov,lazaric,dpx,mbaroni}@fb.com Abstract Sequence-processing neural networks led to remarkable progress on many NLP tasks. As a consequence, there has been increasing interest in understanding to what extent they process language as humans do. We aim here to uncover which biases such models display with respect to “natural” word-order constraints. We train models to communicate about paths in a simple gridworld, using miniature languages that reflect or violate various natural language trends, such as the tendency to avoid redundancy or to minimize long-distance dependencies. We study how the controlled characteristics of our miniature languages affect individual learning and their stability across multiple network generations. The results draw a mixed picture. On the one hand, neural networks show a strong tendency to avoid long-distance dependencies. On the other hand, there is no clear preference for the efficient, non-redundant encoding of information that is widely attested in natural language. We thus suggest inoculating a notion of “effort” into neural networks, as a possible way to make their linguistic behavior more humanlike. 1 Introduction Deep neural networks, and in particular “sequence-to-sequence” (Seq2Seq, Sutskever et al., 2014) LSTM recurrent networks, attained astounding successes in many linguistic domains (Goldberg, 2017), but we still have a poor understanding of their language processing mechanisms (Lake and Baroni, 2018). We study here whether word-order constraints commonly observed in natural language are also found as “inductive” biases in recurrent networks. We consider three such constraints. The first is temporal iconicity, defined as the tendency of clauses denoting events to reflect the chronological order of the denoted events (as in Caesar’s veni, vidi, vici; Greenberg, 1963; Haiman, 1980; Newmeyer, 1992; Radden and Dirven, 2007; Diessel, 2008; Marcus and Calude, 2010; de Ruiter et al., 2018). The second is the need to disambiguate the role of sentence constituents, that can be achieved either by means of fixed-word order (e.g., in an SVO language the first noun phrase denotes the subject), or by overting morphological markers (e.g., the subject is marked with nominative case). As the two mechanisms are redundant, a trade-off is generally observed, where languages preferentially adopt one or the other (Comrie, 1981; Blake, 2001). Finally, we consider the general tendency of languages to avoid or minimize long-distance dependencies (Hawkins, 1994; Gibson, 1998; Futrell et al., 2015). As Futrell et al. (2015) observe, “I checked [it] out”, with one word intervening between the verb and the particle it composes with, ‘is easier or more efficient to produce and comprehend’ than “I checked [the place you recommended] out”, with four intervening words. We test whether such constraints affect LSTMbased Seq2Seq models. 
To this end, we train them as agents in a simple 2D gridworld environment, in which they give and receive navigation instructions in hand-designed artificial languages satisfying or violating the constraints. We first study which languages are harder to learn for individual agents. Then, we look at the cultural transmission of language characteristics through multiple agent generations by means of the iterated learning paradigm (Kirby et al., 2014).1 Our results suggest a mixed picture. LSTM agents are partially affected by natural constraints, both in terms of learning difficulty and stability of patterns through evolution. For example, they 1Code link: https://github.com/ facebookresearch/brica. 5167 show a strong tendency to avoid long-distance dependencies. Still, some patterns are considerably different from those encountered in human language. In particular, LSTMs generally have a preference for the reverse version of an iconic language, and only show a weak tendency towards avoidance of redundant coding. 2 Related work There is increasing interest in applying methods from linguistics and psychology to gain insights on the functioning of language processing networks, as witnessed by the recent BlackBoxNLP workshop at EMNLP 2018 (Linzen et al., 2018). In this context, researchers have looked at how trained models solve different NLP tasks characterizing their outputs and internal representation. We instead focus directly on uncovering their “innate” biases while learning a task. We study whether LSTM-based Seq2Seq models deployed as communicating agents are subject to some of the natural pressures that characterize the typology and evolution of human languages. In this respect, we connect to the recent research line on language emergence in deep network agents that communicate to accomplish a task (e.g., Jorge et al., 2016; Havrylov and Titov, 2017; Kottur et al., 2017; Lazaridou et al., 2017; Choi et al., 2018; Evtimova et al., 2018; Lazaridou et al., 2018; Mordatch and Abbeel, 2018). Most of this work provides the agents with a basic communication channel, and evaluates task success and the emerging communication protocol in an entirely bottom-up fashion. We train instead our agents to communicate with simple languages possessing the properties we want to study, and look at whether such properties make the languages easier or harder to learn. Other studies (Lee et al., 2017b,a) had also seeded their agents with (real) languages, but for different purposes (letting them develop translation skills). We introduce miniature artificial languages that respect or violate specific constraints. Other studies have used such languages with human subjects to test hypotheses about the origin of crosslinguistically frequent patterns (see Fedzechkina et al., 2016b, for a survey). We follow this approach to detect biases in Seq2Seq models. We specifically rely on two different measures. First, we evaluate the speed of learning a particular language, assuming that the faster it is, the easier its properties are for the agent (e.g., Tily et al., 2011; Hupp et al., 2009). Second, we look at the cultural evolution of a language by means of the iterated language learning paradigm (see Kirby et al., 2014, for a survey). That is, we investigate the changes that modern Seq2Seq networks exposed to a language through multiple generations introduce, checking which biases they expose. 
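Schematically, the two evaluation regimes just described can be summarized as follows (a toy sketch with our own placeholder names, not the actual training code, whose details are given in Section 3).

```python
def epochs_to_threshold(val_accuracy, threshold=0.95):
    """Learning-speed proxy: first epoch at which validation accuracy
    reaches `threshold` (None if it never does)."""
    for epoch, acc in enumerate(val_accuracy):
        if acc >= threshold:
            return epoch
    return None

def iterated_learning(ground_truth_corpus, train_agent, sample_corpus, n_generations=5):
    """Generic iterated-learning loop: generation 0 learns from the ground-truth
    corpus; each later generation learns from data produced by its (frozen) parent."""
    corpus, agents = ground_truth_corpus, []
    for _ in range(n_generations):
        agent = train_agent(corpus)       # train until convergence, then freeze
        agents.append(agent)
        corpus = sample_corpus(agent)     # parent-generated trajectory-utterance pairs
    return agents

# Toy usage with trivial stand-ins for the Seq2Seq training and sampling steps.
agents = iterated_learning(
    ground_truth_corpus=[("LEFT LEFT", "left 2")],
    train_agent=lambda corpus: dict(corpus),
    sample_corpus=lambda agent: list(agent.items()),
    n_generations=3,
)
```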
3 Experimental setup 3.1 Languages Our environment is characterized by trajectories of 4 oriented actions (LEFT, RIGHT, UP, DOWN). A trajectory contains from 1 to 5 segments, each composed of maximally 3 steps in the same direction. A possible 3-segment trajectory is: LEFT LEFT RIGHT UP UP UP, with (LEFT LEFT), (RIGHT), and (UP UP UP) being its segments. Fixed- and free-order languages In a fixedorder language, a segment is denoted by a phrase made of a command (C) and a quantifier (Q). An utterance specifies an order for the phrases. For example, in the forward-iconic language, 3-phrase utterances are generated by the following rules: (1) U →P1 P2 P3 P(1|2|3) →C Q C →(left|right|up|down) Q →(1|2|3) Shorter and longer utterances are generated analogously (a N-phrase utterance always has form P1 P2 . . . PN). Importantly, the interpretation function associates PN to the N-th segment in a trajectory, hence the temporal iconicity of the grammar. For example, the utterance “left 2 right 1 up 3” denotes the 3-segment trajectory: LEFT LEFT RIGHT UP UP UP. The backward-iconic language is analogous, but phrases are interpreted right-to-left. Noniconic languages use the same interpretation function associating PN to the N-th segment, but now the grammar licenses phrases in a fixed order different from that of the trajectory. For example, 3phrase utterances might be generated by U →P2 P3 P1 (the trajectory above would be expressed by: “right 1 up 3 left 2”). Relative phrase ordering is fixed across utterances irrespective of length. For example, 2-phrase utterances in the language we just illustrated must be generated by U→P2 P1, to respect the fixed-relative-ordering constraint for 5168 P2 and P1 with respect to the 3-phrase rule. Fixed-order languages with (temporal ordering) markers use the same utterance rules, but now each phrase PN is also associated with an unambiguous marker. For example, the iconic+markers language obeys the first rule in (1), but the phrases are expanded by: (2) P1 →first C Q P2 →second C Q P3 →third C Q In the iconic+markers language, the trajectory above is expressed by “first left 2 second right 1 third up 3”. A free-order language licenses the same phrase structures as a fixed-order language and it uses the same interpretation function, but now there are rules expanding utterances with all possible phrase permutations (e.g., 3-phrase utterances are licensed by 6 rules: U →P1 P2 P3, U →P1 P3 P2, . . .).2 Both “second right 1 third up 3 first left 2” and “third up 3 second right 1 first left 2” are acceptable utterances in the free-order language with markers. Examples of trajectoryto-utterance mappings of these artificial languages are provided in Supplementary Long-distance language We consider a longdistance language where any phrase can be split and wrapped around a single other phrase so that a long-distance dependency is created between the components of the outermost phrase.3 We treat long-distance dependencies as optional, as in languages in which they are optionally triggered, e.g., by information structure factors. We compare the long-distance language to a local free-order language lacking the long-distance split construction. Since the long-distance option causes a combinatorial explosion of possible orders, we limit trajectories to 3 segments. At the same time, to have two languages partially comparable in terms of variety of allowed constructions, we extend the grammars of both to license free order within a phrase. 
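As a concrete illustration of the fixed- and free-order grammars above, here is a minimal sketch (our own encoding of segments as (command, count) pairs; the function names are not from the paper, and the long-distance split construction discussed in the surrounding text is omitted).

```python
import random

MARKERS = ["first", "second", "third", "fourth", "fifth"]

def phrases(trajectory):
    """Turn segments like [("LEFT", 2), ("UP", 3)] into phrases ["left 2", "up 3"]."""
    return [f"{command.lower()} {count}" for command, count in trajectory]

def forward_iconic(trajectory):
    """Phrases follow the temporal order of the segments."""
    return " ".join(phrases(trajectory))

def backward_iconic(trajectory):
    """Phrases are interpreted (and emitted) right-to-left."""
    return " ".join(reversed(phrases(trajectory)))

def iconic_with_markers(trajectory):
    """Each phrase is prefixed by an unambiguous temporal-ordering marker."""
    return " ".join(f"{MARKERS[i]} {p}" for i, p in enumerate(phrases(trajectory)))

def free_order_with_markers(trajectory, rng=random):
    """Sample one of the licensed permutations of the marked phrases."""
    marked = [f"{MARKERS[i]} {p}" for i, p in enumerate(phrases(trajectory))]
    rng.shuffle(marked)
    return " ".join(marked)

traj = [("LEFT", 2), ("RIGHT", 1), ("UP", 3)]
print(forward_iconic(traj))        # left 2 right 1 up 3
print(iconic_with_markers(traj))   # first left 2 second right 1 third up 3
```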
Finally, markers are prefixed to both the command and the quantifier, to avoid ambiguities in the longdistance case. Summarizing, the local language is similar to the free-order+markers one above, but markers are repeated before each phrase element, 2Equivalently, a free-order language is generated in two stages from a fixed-order one through a scrambling process. 3 Note also that this language is projective, excluding cross-dependencies. and extra rules allow the quantifier to precede or go after the command, e.g., both of the following structures are permitted: P1 →first Q first C; P1 →first C first Q (“first left first 2”; “first 2 first left”). The long-distance grammar further includes rules where P1 has been split in two parts, such as: (3) U →first C1 P2 first Q1 P3 U →first Q1 P2 first C1 P3 with C1 and Q1 expandable into the usual terminals (LEFT, RIGHT...and 1, 2, 3, respectively).4 The interpretation function associates a discontinuous {CN, QN} phrase with the N-th segment in the trajectory. The first rule in (3) licenses the utterance “first left second right second 1 first 2 third up third 3”, denoting the example trajectory at the beginning of this section. Similar rules are introduced for all possible splits of a phrase around another phrase (e.g., the elements of P2 around P1, those of P1 around P3, etc.). Only one split is allowed per-utterance. Examples of trajectory-toutterance mappings in the long and local-distance languages are provided in Supplementary. Datasets We generate sentences associated to all possible trajectories in the environment (88572 in the fixed- and free-order language environment, 972 in the local- and long-distance environment experiments). We randomly split all possible distinct trajectory-utterance pairs into training (80%) and test/validation sections (10% each). 3.2 Models Architecture The agents are Encoder-Decoder Seq2Seq architectures (Cho et al., 2014; Sutskever et al., 2014) with single-layer LSTM recurrent units (Hochreiter and Schmidhuber, 1997). In light of the interactive nature of language, an agent is always trained to be both a Speaker, taking a trajectory as input and producing an utterance describing it, and as a Listener, executing the trajectory corresponding to an input utterance. Input and output vocabularies are identical, and contain all possible actions and words.5 When an agent plays the Speaker role, it uses input action representations and output word representations, and conversely in the Listener role. We tie the embed4Equivalently, long-distance constructions are derived by movement rules from canonical underlying structures. 5Word and action symbols are disjoint, e.g., the action symbol ‘LEFT’ is different from the word symbol ’left’. 5169 dings of the encoder input and of the decoder output (Press and Wolf, 2016) making input and output representations of words and actions coincide. As a result, Speaker training affects the representations used in Listener mode and vice versa. Experiments without tying (not reported) show similar results with slower convergence. We additionally explore a standard attention mechanism (Bahdanau et al., 2014). Training We consider two scenarios. In individual learning, an agent is taught a language by interacting with a hard-coded ground-truth “teacher”, represented by the training corpus. In the iterated learning setup, a lineage of agents is trained to speak and listen by interacting with a “parent” agent. 
After convergence, an agent is fixed and used as a parent to train the next child. Individual learning We synchronously train the agent to speak (from trajectory t to utterance u) and listen (from utterance u to trajectory t). Training the Listener is similar to standard Seq2Seq training with teacher forcing (Goodfellow et al., 2016, p. 376). We change the training procedure for the Speaker direction, as we must handle oneto-many trajectory-to-utterance mappings in freeorder languages. We describe it below. For each trajectory, we consider all corresponding utterances equally probable. Given a trajectory input, an agent must be able to produce, with equal probability, all utterances that correspond to the input. To achieve this, taking inspiration from the multi-label learning literature, we fit the agent’s output distribution to minimize KL-divergence from the uniform over target utterances. We adopt the “Na¨ıve” method proposed by Jin and Ghahramani (2003) (see Supplementary for how we derive the loss function in Eq. (4)). Formally, our languages map trajectories tj to one (fixed-order) or multiple (free-order) utterances {u}j = {u1 j, u2 j, . . .}. The trajectory t is fed into the encoder, which produces a representation of the action sequence. Next, the latter is fed into the decoder along with the start-of-thesequence element u0 = sos. At each step, the decoder’s output layer defines a categorical distribution pθ(uk|uk−1, hk) over the next output word uk. This distribution is conditioned by the previous word uk−1 and the hidden state hk. As with the Listener, we use teacher forcing, so that the distribution of each word is conditioned by the ground-truth terms coming before it. Overall, the model parameters θ are optimized to minimize the loss L over (tj, {u}j): L = − X j 1 nj X u∈{u}j |u| X k=1 log pθ(uk|uk−1, hj,k) (4) In Eq. (4), nj denotes the number of target utterances for the jth example, nj = |{u}j|; u iterates over the utterances {u}j; and uk enumerates words in the utterance u as k varies. As the number of ground-truth utterances {u}j can be high, we sub-sample n = 6 when training free- and fixed-order languages.6 This considerably speeds up training without significantly harming performance. We use all the possible utterances when training on long-distance languages (n equals the the number of all possible utterances). For all studied languages, we perform a grid search over hidden layer [16,20] and batch sizes [16,32], and report test set results of the best validation configuration for each language reinitialized with 5 different seeds. We stop training if development set accuracy does not increase for 5 epochs or when 500 epochs are reached. In all scenarios, the optimization is performed with the Amsgrad (Reddi et al., 2018) which is an improved version of the standard Adam (Kingma and Ba, 2014); we did not experiment with other optimizers. We use the algorithm with its default parameters, as implemented in Pytorch (Paszke et al., 2017). Iterated learning At “generation 0” agent Aθ0 is trained individually as described above. Once Aθ0 is trained, we fix its parameters and use it to train the next-generation agent, Aθ1. Aθ1, after training, is in its turn fixed and used to train the next agent Aθ2, etc. At each iteration, the child agent Aθi+1 is trained to imitate its parent Aθi as follows. Suppose that, given t, the parent agent produces n7 utterances {ˆu} = {ˆu1, ˆu2, ...ˆun} (these utterances are obtained by sampling from the parent’s decoder and can be identical). 
Then, we train the child agent to: (a) listen: map each utterance ûj to the trajectory t, and (b) speak: given
6Sampling is trivial in the latter case, since {u}j contains a single utterance. Note that in this case the loss L reduces to the negative log-likelihood. This allows us to use the same loss function for free- and fixed-order languages.
7We use the same number n defined in individual learning section.
5170
[Figure: model diagram of the Seq2Seq agent, showing the Encoder and Decoder and the mapping between a trajectory t and utterances u / ûi; the equation images embedded in the figure could not be recovered from the source extraction.]
">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaF qRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqn sVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazp jR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCH a6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> Decoder Encoder u <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> t <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 
34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> u <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> t <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 
34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> ˆt <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> ˆui+1 <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrL 
Zbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg67RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6V uonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANbiHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</late xit> <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrL Zbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg67RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6V uonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANbiHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</late xit> <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrL Zbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg67RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6V uonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANbiHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</late xit> <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE=">AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrL Zbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg67RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6V uonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANbiHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</late xit> Decoder Encoder u <latexit sha1_base64="HdrHs+9WrEY+c 6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ +3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weF R+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fipm Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8 Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfk yFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oA UMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c 6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ +3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weF R+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fipm Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8 Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfk yFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oA UMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c 6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ +3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weF R+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fipm Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8 Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfk yFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oA UMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c 
6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ +3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weF R+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1Fipm Q7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8 Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6ImrFe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfk yFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oA UMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4a2M+Q=</latexit> t <latexit sha1_base64="fInOqGTCrWkR GJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7a duNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5 LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnX qnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfz QqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dv tDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgAR wQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fInOqGTCrWkR GJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7a duNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5 LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnX qnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfz QqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dv tDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgAR wQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fInOqGTCrWkR GJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7a duNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5 LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnX qnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfz QqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dv tDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgAR wQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fInOqGTCrWkR GJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7a duNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5 LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnX qnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfz QqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dv tDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgAR wQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> u <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 
3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> t <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> ˆui+1 <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE="> AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrLZbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg6 7RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48 LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6VuonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZ UhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANb iHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</latexit> <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE="> 
AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrLZbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg6 7RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48 LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6VuonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZ UhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANb iHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</latexit> <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE="> AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrLZbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg6 7RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48 LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6VuonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZ UhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANb iHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</latexit> <latexit sha1_base64="BbylQ/NfIdOEPpjSog8nzMg5UVE="> AB9HicbVBNS8NAEJ3Ur1q/qh69LBZBEoigh6LXjxWsB/QhrLZbtulm03cnRKyO/w4kERr/4Yb/4bt20O2vpg4PHeDPzglgKg6 7RTW1jc2t4rbpZ3dvf2D8uFR0SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpv5rQnXRkTqEacx90M6VGIgGEUr+d0RxTJeqm48 LJeueJW3TnIKvFyUoEc9V75q9uPWBJyhUxSYzqeG6OfUo2CSZ6VuonhMWVjOuQdSxUNufHT+dEZObNKnwibUshmau/J1IaGjMNA9sZ UhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyrZELzl1dJ87LquVXv4apSu83jKMIJnMI5eHANb iHOjSAwRM8wyu8ORPnxXl3PhatBSefOY/cD5/ANPEkh4=</latexit> t <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY= ">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaF qRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqn sVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazp jR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCH a6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY= ">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaF qRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqn sVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazp jR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCH a6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY= ">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaF qRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqn sVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazp jR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCH a6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY= ">AB6HicbVBNS8NAEJ34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaF qRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqn sVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazp jR0Cx7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCH a6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgKYz4</latexit> Decoder Encoder u <latexit 
sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> t <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit 
sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> ˆt <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> <latexit sha1_base64="qTVnGIGoy9VTmA6RVP920/L8OkE=">AB7nicbVDLSgNBEOz1GeMr6tHLYBA8hV0R9Bj 04jGCeUCyhNnJbDJkdmaZ6RXCko/w4kERr36PN/GSbIHTSxoKq6e6KUiks+v63t7a+sbm1Xdop7+7tHxWjo5bVmeG8SbTUptORC2XQvEmCpS8kxpOk0jydjS+m/ntJ26s0OoRJykPEzpUIhaMopPavRHFHKf9StWv+XOQVRIUpAoFGv3KV 2+gWZwhUxSa7uBn2KYU4OCST4t9zLU8rGdMi7jiqacBvm83On5NwpAxJr40ohmau/J3KaWDtJIteZUBzZW8m/ud1M4xvwlyoNEOu2GJRnEmCmsx+JwNhOEM5cYQyI9ythI2oQxdQmUXQrD8ipXdYCvxY8XFXrt0UcJTiFM7iAK6hDv fQgCYwGMzvMKbl3ov3rv3sWhd84qZE/gD7/MHqiPxQ=</latexit> u <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit 
sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> <latexit sha1_base64="HdrHs+9WrEY+c6wp70bq3BGtMmw=">AB6HicbVBNS8NAEJ 3Ur1q/qh69LBbBU0lEqMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm/GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNA oEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipmQ7KFbfqLkDWiZeTCuRoDMpf/WHM0gilYJq3fPcxPgZVYzgbNSP9WYUDahI+xZKmE2s8Wh87IhVWGJIyVLWnIQv09kdFI62kU2M6Imr Fe9ebif14vNeGNn3GZpAYlWy4KU0FMTOZfkyFXyIyYWkKZ4vZWwsZUWZsNiUbgrf68jpX1U9t+o1ryv12zyOIpzBOVyCBzWowz0oAUMEJ7hFd6cR+fFeXc+lq0FJ585hT9wPn8A4 a2M+Q=</latexit> t <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> <latexit sha1_base64="fInOqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ 34WetX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2Dw5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHE pshaO7md96Qm1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/AsRkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtulb4bJdqWIneu/p6YsNiYcRzazpjR0C x7M/E/r5NRdBNMhEozQsUXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQvK75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5dz4WrWtOPnMCf+B8/gDgK Yz4</latexit> ˆui+2 <latexit sha1_base64="MRmpHo3STlnq5CjHCed6SOn/1K4=">AB9HicbVBNS8NAEJ34WetX1aOXxSIQkmKoMeiF48V7Ae0oW y23bpZhN3J4US8ju8eFDEqz/Gm/GbZuDtj4YeLw3w8y8IJbCoOt+O2vrG5tb24Wd4u7e/sFh6ei4aJEM95gkYx0O6CGS6F4AwVK3o41p2EgeSsY38381oRrIyL1iNOY+yEdKjEQjKV/O6IYpkvVRcVrNeqexW3DnIKvFyUoYc9V7pq9uPWBJyhUxSYzqeG6OfUo2CS Z4Vu4nhMWVjOuQdSxUNufHT+dEZObdKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyraELzl1dJs1rx3Ir3cFWu3eZxFOAUzuACPLiGtxDHRrA4Ame4RXenInz4rw7H4vWNSefOYE/cD5/ANVJkh8=< /latexit> <latexit 
sha1_base64="MRmpHo3STlnq5CjHCed6SOn/1K4=">AB9HicbVBNS8NAEJ34WetX1aOXxSIQkmKoMeiF48V7Ae0oW y23bpZhN3J4US8ju8eFDEqz/Gm/GbZuDtj4YeLw3w8y8IJbCoOt+O2vrG5tb24Wd4u7e/sFh6ei4aJEM95gkYx0O6CGS6F4AwVK3o41p2EgeSsY38381oRrIyL1iNOY+yEdKjEQjKV/O6IYpkvVRcVrNeqexW3DnIKvFyUoYc9V7pq9uPWBJyhUxSYzqeG6OfUo2CS Z4Vu4nhMWVjOuQdSxUNufHT+dEZObdKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyraELzl1dJs1rx3Ir3cFWu3eZxFOAUzuACPLiGtxDHRrA4Ame4RXenInz4rw7H4vWNSefOYE/cD5/ANVJkh8=< /latexit> <latexit sha1_base64="MRmpHo3STlnq5CjHCed6SOn/1K4=">AB9HicbVBNS8NAEJ34WetX1aOXxSIQkmKoMeiF48V7Ae0oW y23bpZhN3J4US8ju8eFDEqz/Gm/GbZuDtj4YeLw3w8y8IJbCoOt+O2vrG5tb24Wd4u7e/sFh6ei4aJEM95gkYx0O6CGS6F4AwVK3o41p2EgeSsY38381oRrIyL1iNOY+yEdKjEQjKV/O6IYpkvVRcVrNeqexW3DnIKvFyUoYc9V7pq9uPWBJyhUxSYzqeG6OfUo2CS Z4Vu4nhMWVjOuQdSxUNufHT+dEZObdKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyraELzl1dJs1rx3Ir3cFWu3eZxFOAUzuACPLiGtxDHRrA4Ame4RXenInz4rw7H4vWNSefOYE/cD5/ANVJkh8=< /latexit> <latexit sha1_base64="MRmpHo3STlnq5CjHCed6SOn/1K4=">AB9HicbVBNS8NAEJ34WetX1aOXxSIQkmKoMeiF48V7Ae0oW y23bpZhN3J4US8ju8eFDEqz/Gm/GbZuDtj4YeLw3w8y8IJbCoOt+O2vrG5tb24Wd4u7e/sFh6ei4aJEM95gkYx0O6CGS6F4AwVK3o41p2EgeSsY38381oRrIyL1iNOY+yEdKjEQjKV/O6IYpkvVRcVrNeqexW3DnIKvFyUoYc9V7pq9uPWBJyhUxSYzqeG6OfUo2CS Z4Vu4nhMWVjOuQdSxUNufHT+dEZObdKnwibUshmau/J1IaGjMNA9sZUhyZW8m/ud1Ehzc+KlQcYJcscWiQSIJRmSWAOkLzRnKqSWUaWFvJWxENWVocyraELzl1dJs1rx3Ir3cFWu3eZxFOAUzuACPLiGtxDHRrA4Ame4RXenInz4rw7H4vWNSefOYE/cD5/ANVJkh8=< /latexit> A✓ <latexit sha1_base64="o1x+fwDrJlqWKwnsorFfebhaxFE=">AB 8XicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHqxWMFW4tKZvtpF262YTdiVBC/4UXD4p49d9489+4bXPQ1gcDj/dmJkXJFIY8rxvp7Cyura+Udws bW3v7O6V9w+aJk41xwaPZaxbATMohcIGCZLYSjSyKJD4EIxupv7DE2ojYnVP4wS7ERsoEQrOyEqPV72sQ0MkNumVK17Vm8FdJn5OKpCj3it/dfox TyNUxCUzpu17CXUzpklwiZNSJzWYMD5iA2xbqliEpvNLp64J1bpu2GsbSlyZ+rviYxFxoyjwHZGjIZm0ZuK/3ntlMLbiZUkhIqPl8UptKl2J2 +7/aFRk5ybAnjWthbXT5kmnGyIZVsCP7iy8ukeVb1vap/d16pXedxFOEIjuEUfLiAGtxCHRrAQcEzvMKbY5wX5935mLcWnHzmEP7A+fwBrViQ6A= =</latexit> <latexit sha1_base64="o1x+fwDrJlqWKwnsorFfebhaxFE=">AB 8XicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHqxWMFW4tKZvtpF262YTdiVBC/4UXD4p49d9489+4bXPQ1gcDj/dmJkXJFIY8rxvp7Cyura+Udws bW3v7O6V9w+aJk41xwaPZaxbATMohcIGCZLYSjSyKJD4EIxupv7DE2ojYnVP4wS7ERsoEQrOyEqPV72sQ0MkNumVK17Vm8FdJn5OKpCj3it/dfox TyNUxCUzpu17CXUzpklwiZNSJzWYMD5iA2xbqliEpvNLp64J1bpu2GsbSlyZ+rviYxFxoyjwHZGjIZm0ZuK/3ntlMLbiZUkhIqPl8UptKl2J2 +7/aFRk5ybAnjWthbXT5kmnGyIZVsCP7iy8ukeVb1vap/d16pXedxFOEIjuEUfLiAGtxCHRrAQcEzvMKbY5wX5935mLcWnHzmEP7A+fwBrViQ6A= =</latexit> <latexit sha1_base64="o1x+fwDrJlqWKwnsorFfebhaxFE=">AB 8XicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHqxWMFW4tKZvtpF262YTdiVBC/4UXD4p49d9489+4bXPQ1gcDj/dmJkXJFIY8rxvp7Cyura+Udws bW3v7O6V9w+aJk41xwaPZaxbATMohcIGCZLYSjSyKJD4EIxupv7DE2ojYnVP4wS7ERsoEQrOyEqPV72sQ0MkNumVK17Vm8FdJn5OKpCj3it/dfox TyNUxCUzpu17CXUzpklwiZNSJzWYMD5iA2xbqliEpvNLp64J1bpu2GsbSlyZ+rviYxFxoyjwHZGjIZm0ZuK/3ntlMLbiZUkhIqPl8UptKl2J2 +7/aFRk5ybAnjWthbXT5kmnGyIZVsCP7iy8ukeVb1vap/d16pXedxFOEIjuEUfLiAGtxCHRrAQcEzvMKbY5wX5935mLcWnHzmEP7A+fwBrViQ6A= =</latexit> <latexit sha1_base64="o1x+fwDrJlqWKwnsorFfebhaxFE=">AB 8XicbVBNS8NAEJ3Ur1q/qh69BIvgqSQi6LHqxWMFW4tKZvtpF262YTdiVBC/4UXD4p49d9489+4bXPQ1gcDj/dmJkXJFIY8rxvp7Cyura+Udws bW3v7O6V9w+aJk41xwaPZaxbATMohcIGCZLYSjSyKJD4EIxupv7DE2ojYnVP4wS7ERsoEQrOyEqPV72sQ0MkNumVK17Vm8FdJn5OKpCj3it/dfox TyNUxCUzpu17CXUzpklwiZNSJzWYMD5iA2xbqliEpvNLp64J1bpu2GsbSlyZ+rviYxFxoyjwHZGjIZm0ZuK/3ntlMLbiZUkhIqPl8UptKl2J2 
+7/aFRk5ybAnjWthbXT5kmnGyIZVsCP7iy8ukeVb1vap/d16pXedxFOEIjuEUfLiAGtxCHRrAQcEzvMKbY5wX5935mLcWnHzmEP7A+fwBrViQ6A= =</latexit> A✓i+1 <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> A✓i+2 <latexit sha1_base64="6RKUhAcyXA+D84NWAwZkzMBjZLw=">AB+XicbVBNS8NAEJ34 WetX1KOXYBEoSRF0GPVi8cK9gPaEDbTbt0swm7k0IJ/SdePCji1X/izX/jts1BWx8MPN6bYWZemAqu0XW/rbX1jc2t7dJOeXdv/+DQPjpu6SRTlDVpIhLVCYlmgkvWRI6CdVLFSBwK1g5 H9zO/PWZK80Q+4SRlfkwGkecEjRSYNu3Qd7DIUMS5PyNp0GdsWtunM4q8QrSAUKNAL7q9dPaBYziVQrbuem6KfE4WcCjYt9zLNUkJHZMC6hkoSM+3n8unzrlR+k6UKFMSnbn6eyInsda TODSdMcGhXvZm4n9eN8Poxs+5TDNki4WRZlwMHFmMTh9rhFMTGEUMXNrQ4dEkUomrDKJgRv+eV0qpVPbfqPV5V6ndFHCU4hTO4A+uoQ4P0IAmUBjDM7zCm5VbL9a79bFoXbOKmRP4A+ vzB1RCk3I=</latexit> <latexit sha1_base64="6RKUhAcyXA+D84NWAwZkzMBjZLw=">AB+XicbVBNS8NAEJ34 WetX1KOXYBEoSRF0GPVi8cK9gPaEDbTbt0swm7k0IJ/SdePCji1X/izX/jts1BWx8MPN6bYWZemAqu0XW/rbX1jc2t7dJOeXdv/+DQPjpu6SRTlDVpIhLVCYlmgkvWRI6CdVLFSBwK1g5 H9zO/PWZK80Q+4SRlfkwGkecEjRSYNu3Qd7DIUMS5PyNp0GdsWtunM4q8QrSAUKNAL7q9dPaBYziVQrbuem6KfE4WcCjYt9zLNUkJHZMC6hkoSM+3n8unzrlR+k6UKFMSnbn6eyInsda TODSdMcGhXvZm4n9eN8Poxs+5TDNki4WRZlwMHFmMTh9rhFMTGEUMXNrQ4dEkUomrDKJgRv+eV0qpVPbfqPV5V6ndFHCU4hTO4A+uoQ4P0IAmUBjDM7zCm5VbL9a79bFoXbOKmRP4A+ vzB1RCk3I=</latexit> <latexit sha1_base64="6RKUhAcyXA+D84NWAwZkzMBjZLw=">AB+XicbVBNS8NAEJ34 WetX1KOXYBEoSRF0GPVi8cK9gPaEDbTbt0swm7k0IJ/SdePCji1X/izX/jts1BWx8MPN6bYWZemAqu0XW/rbX1jc2t7dJOeXdv/+DQPjpu6SRTlDVpIhLVCYlmgkvWRI6CdVLFSBwK1g5 
H9zO/PWZK80Q+4SRlfkwGkecEjRSYNu3Qd7DIUMS5PyNp0GdsWtunM4q8QrSAUKNAL7q9dPaBYziVQrbuem6KfE4WcCjYt9zLNUkJHZMC6hkoSM+3n8unzrlR+k6UKFMSnbn6eyInsda TODSdMcGhXvZm4n9eN8Poxs+5TDNki4WRZlwMHFmMTh9rhFMTGEUMXNrQ4dEkUomrDKJgRv+eV0qpVPbfqPV5V6ndFHCU4hTO4A+uoQ4P0IAmUBjDM7zCm5VbL9a79bFoXbOKmRP4A+ vzB1RCk3I=</latexit> <latexit sha1_base64="6RKUhAcyXA+D84NWAwZkzMBjZLw=">AB+XicbVBNS8NAEJ34 WetX1KOXYBEoSRF0GPVi8cK9gPaEDbTbt0swm7k0IJ/SdePCji1X/izX/jts1BWx8MPN6bYWZemAqu0XW/rbX1jc2t7dJOeXdv/+DQPjpu6SRTlDVpIhLVCYlmgkvWRI6CdVLFSBwK1g5 H9zO/PWZK80Q+4SRlfkwGkecEjRSYNu3Qd7DIUMS5PyNp0GdsWtunM4q8QrSAUKNAL7q9dPaBYziVQrbuem6KfE4WcCjYt9zLNUkJHZMC6hkoSM+3n8unzrlR+k6UKFMSnbn6eyInsda TODSdMcGhXvZm4n9eN8Poxs+5TDNki4WRZlwMHFmMTh9rhFMTGEUMXNrQ4dEkUomrDKJgRv+eV0qpVPbfqPV5V6ndFHCU4hTO4A+uoQ4P0IAmUBjDM7zCm5VbL9a79bFoXbOKmRP4A+ vzB1RCk3I=</latexit> A✓i+1 <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> <latexit sha1_base64="AF+TqMwdjC2hGAmkd2mT6fv+0/s=">AB+XicbVBNS8NAEN3U r1q/oh69BIsgCURQY9VLx4r2A9oQ9hsp+3SzSbsTgol5J948aCIV/+JN/+N2zYHbX0w8Hhvhpl5YSK4Rtf9tkpr6xubW+Xtys7u3v6BfXjU0nGqGDRZLGLVCakGwSU0kaOATqKARqGAdji +n/ntCSjNY/mE0wT8iA4lH3BG0UiBbd8GWQ9HgDTI+IWX54FdWvuHM4q8QpSJQUagf3V68csjUAiE1Trucm6GdUIWcC8kov1ZBQNqZD6BoqaQTaz+aX586ZUfrOIFamJDpz9fdERiOtp1F oOiOKI73szcT/vG6Kgxs/4zJESRbLBqkwsHYmcXg9LkChmJqCGWKm1sdNqKMjRhVUwI3vLq6R1WfPcmvd4Va3fFXGUyQk5JefEI9ekTh5IgzQJIxPyTF7Jm5VZL9a79bFoLVnFzDH5A+ vzB1K8k3E=</latexit> t <latexit sha1_base64="fIn OqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34We tX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe /Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2D w5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm 1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/As RkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtul b4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQs UXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQv K75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5d z4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fIn OqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34We tX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe 
/Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2D w5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm 1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/As RkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtul b4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQs UXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQv K75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5d z4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fIn OqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34We tX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe /Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2D w5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm 1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/As RkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtul b4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQs UXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQv K75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5d z4WrWtOPnMCf+B8/gDgKYz4</latexit> <latexit sha1_base64="fIn OqGTCrWkRGJFOZWK1l6FLBY=">AB6HicbVBNS8NAEJ34We tX1aOXYBE8lUQEPRa9eGzBfkAbymY7aduNmF3IpTSX+DFgyJe /Une/Ddu2xy09cHA470ZuaFqRSGPO/bWVvf2NzaLuwUd/f2D w5LR8dNk2SaY4MnMtHtkBmUQmGDBElspxpZHEpshaO7md96Qm 1Eoh5onGIQs4ESkeCMrFSnXqnsVbw53FXi56QMOWq90le3n/As RkVcMmM6vpdSMGaBJc4LXYzgynjIzbAjqWKxWiCyfzQqXtul b4bJdqWIneu/p6YsNiYcRzazpjR0Cx7M/E/r5NRdBNMhEozQs UXi6JMupS4s6/dvtDISY4tYVwLe6vLh0wzTjabog3BX35lTQv K75X8etX5eptHkcBTuEMLsCHa6jCPdSgARwQnuEV3pxH58V5d z4WrWtOPnMCf+B8/gDgKYz4</latexit> Generation i <latexit sha1_base64="B+3+z04m5q8yd8YVlhrjuNya9tk=">ACBHicbVDLSgMxFM 3UV62vqstugkVwNcyIosuiC1WsA9oh5J7ShSWZIMkIZunDjr7hxoYhbP8Kdf2M67UJbDwTOPedebu4JE8608bxvp7Cyura+UdwsbW3v7O6V9w+aOk4VhQaNeazaIdHAmYSGYZDO1 FARMihFY6up37rAZRmsbw34wQCQaSRYwSY6VeuXIDElRe4O5QJ4RC5rvnQkw65WrnuvlwMvEn5MqmqPeK391+zFNBUhDOdG643uJCTKiDKMcJqVuqsFuGJEBdCyVRIAOsvyICT62Sh 9HsbJPGpyrvycyIrQei9B2CmKGetGbiv95ndREl0HGZJIakHS2KEo5NjGeJoL7TAE1fGwJoYrZv2I6JIpQY3Mr2RD8xZOXSfPU9T3Xvzur1q7mcRBR2hE+SjC1RDt6iOGoiR/SMX tGb8+S8O/Ox6y14MxnDtEfOJ8/+c2XpA=</latexit> <latexit sha1_base64="B+3+z04m5q8yd8YVlhrjuNya9tk=">ACBHicbVDLSgMxFM 3UV62vqstugkVwNcyIosuiC1WsA9oh5J7ShSWZIMkIZunDjr7hxoYhbP8Kdf2M67UJbDwTOPedebu4JE8608bxvp7Cyura+UdwsbW3v7O6V9w+aOk4VhQaNeazaIdHAmYSGYZDO1 FARMihFY6up37rAZRmsbw34wQCQaSRYwSY6VeuXIDElRe4O5QJ4RC5rvnQkw65WrnuvlwMvEn5MqmqPeK391+zFNBUhDOdG643uJCTKiDKMcJqVuqsFuGJEBdCyVRIAOsvyICT62Sh 9HsbJPGpyrvycyIrQei9B2CmKGetGbiv95ndREl0HGZJIakHS2KEo5NjGeJoL7TAE1fGwJoYrZv2I6JIpQY3Mr2RD8xZOXSfPU9T3Xvzur1q7mcRBR2hE+SjC1RDt6iOGoiR/SMX tGb8+S8O/Ox6y14MxnDtEfOJ8/+c2XpA=</latexit> <latexit sha1_base64="B+3+z04m5q8yd8YVlhrjuNya9tk=">ACBHicbVDLSgMxFM 3UV62vqstugkVwNcyIosuiC1WsA9oh5J7ShSWZIMkIZunDjr7hxoYhbP8Kdf2M67UJbDwTOPedebu4JE8608bxvp7Cyura+UdwsbW3v7O6V9w+aOk4VhQaNeazaIdHAmYSGYZDO1 FARMihFY6up37rAZRmsbw34wQCQaSRYwSY6VeuXIDElRe4O5QJ4RC5rvnQkw65WrnuvlwMvEn5MqmqPeK391+zFNBUhDOdG643uJCTKiDKMcJqVuqsFuGJEBdCyVRIAOsvyICT62Sh 9HsbJPGpyrvycyIrQei9B2CmKGetGbiv95ndREl0HGZJIakHS2KEo5NjGeJoL7TAE1fGwJoYrZv2I6JIpQY3Mr2RD8xZOXSfPU9T3Xvzur1q7mcRBR2hE+SjC1RDt6iOGoiR/SMX tGb8+S8O/Ox6y14MxnDtEfOJ8/+c2XpA=</latexit> <latexit sha1_base64="B+3+z04m5q8yd8YVlhrjuNya9tk=">ACBHicbVDLSgMxFM 3UV62vqstugkVwNcyIosuiC1WsA9oh5J7ShSWZIMkIZunDjr7hxoYhbP8Kdf2M67UJbDwTOPedebu4JE8608bxvp7Cyura+UdwsbW3v7O6V9w+aOk4VhQaNeazaIdHAmYSGYZDO1 FARMihFY6up37rAZRmsbw34wQCQaSRYwSY6VeuXIDElRe4O5QJ4RC5rvnQkw65WrnuvlwMvEn5MqmqPeK391+zFNBUhDOdG643uJCTKiDKMcJqVuqsFuGJEBdCyVRIAOsvyICT62Sh 9HsbJPGpyrvycyIrQei9B2CmKGetGbiv95ndREl0HGZJIakHS2KEo5NjGeJoL7TAE1fGwJoYrZv2I6JIpQY3Mr2RD8xZOXSfPU9T3Xvzur1q7mcRBR2hE+SjC1RDt6iOGoiR/SMX tGb8+S8O/Ox6y14MxnDtEfOJ8/+c2XpA=</latexit> Generation i + 1 <latexit 
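To make this transmission protocol concrete, the sketch below walks one lineage through a few generations. It is a minimal, non-neural illustration: the ToyAgent class, its dictionary-based "learning", and the hand-specified generation-0 language are assumptions made only for the example, standing in for the Seq2Seq Speaker/Listener agents and their gradient-based training.

    # A minimal, non-neural sketch of the transmission loop in Fig. 1. The
    # ToyAgent class and its dictionary-based "learning" are illustrative
    # assumptions; the paper's agents are Seq2Seq networks trained by
    # gradient descent on the same (trajectory, utterance) pairs.

    class ToyAgent:
        """Stands in for an agent with a Speaker and a Listener module."""
        def __init__(self):
            self.speak_map = {}   # trajectory -> utterance
            self.listen_map = {}  # utterance  -> trajectory

        def learn(self, pairs):
            # (a) listen: map each utterance to its trajectory;
            # (b) speak: map each trajectory to an utterance produced by the parent.
            for trajectory, utterance in pairs:
                self.listen_map[utterance] = trajectory
                self.speak_map[trajectory] = utterance

        def speak(self, trajectory):
            return self.speak_map[trajectory]

        def listen(self, utterance):
            return self.listen_map[utterance]

    def transmit(parent, trajectories):
        """One generation: train a fresh child on (trajectory, parent utterance) pairs."""
        child = ToyAgent()
        child.learn([(t, parent.speak(t)) for t in trajectories])
        return child

    # Generation 0: a hand-specified parent language over two toy trajectories.
    trajectories = [("left", "left", "up"), ("up", "right")]
    parent = ToyAgent()
    parent.learn([(trajectories[0], ("l", "l", "u")),
                  (trajectories[1], ("u", "r"))])

    # Each former child becomes the parent of the next generation.
    for generation in range(3):
        child = transmit(parent, trajectories)
        assert all(child.listen(child.speak(t)) == t for t in trajectories)
        parent = child

Nothing in this loop itself forces the child's language to be identical to its parent's: the child only has to produce utterances acceptable under the parent's language, which is what lets the language drift toward the learner's biases across generations, as discussed next.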
sha1_base64="vQ84JsIk0Elo73102q+KzY5YLA=">ACBnicbVDLSgMxFM 34rPU16lKEYBEoUxE0WXRhS4r2Ae0pWTS2zY0yQxJRihDV278FTcuFHrN7jzb0ynXWjrgcC59zLzT1hLixQfDtLSwuLa+s5tby6xubW9v+zm7VRIlmUGRiHQ9pAYEV1Cx3Aqox qoDAXUwsH12K89gDY8Uvd2GENL0p7iXc6odVLbP7gBTorcLNvYsogJcVzKUeYn5C2XwiKQY8T8iUFNAU5b/1exELJGgLBPUmAYJYtKqbacCRjlm4kBt2NAe9BwVFEJpVmZ4zwkV M6uBtp95TFmfp7IqXSmKEMXaektm9mvbH4n9dIbPeylXIVJxYUmyzqJgLbCI8zwR2ugVkxdIQyzd1fMetTZl1yeVdCGT25HlSPS2SoEjuzgqlq2kcObSPDtExIugCldAtKqMKYugRP aNX9OY9eS/eu/cxaV3wpjN76A+8zx/jvZgU</latexit> <latexit sha1_base64="vQ84JsIk0Elo73102q+KzY5YLA=">ACBnicbVDLSgMxFM 34rPU16lKEYBEoUxE0WXRhS4r2Ae0pWTS2zY0yQxJRihDV278FTcuFHrN7jzb0ynXWjrgcC59zLzT1hLixQfDtLSwuLa+s5tby6xubW9v+zm7VRIlmUGRiHQ9pAYEV1Cx3Aqox qoDAXUwsH12K89gDY8Uvd2GENL0p7iXc6odVLbP7gBTorcLNvYsogJcVzKUeYn5C2XwiKQY8T8iUFNAU5b/1exELJGgLBPUmAYJYtKqbacCRjlm4kBt2NAe9BwVFEJpVmZ4zwkV M6uBtp95TFmfp7IqXSmKEMXaektm9mvbH4n9dIbPeylXIVJxYUmyzqJgLbCI8zwR2ugVkxdIQyzd1fMetTZl1yeVdCGT25HlSPS2SoEjuzgqlq2kcObSPDtExIugCldAtKqMKYugRP aNX9OY9eS/eu/cxaV3wpjN76A+8zx/jvZgU</latexit> <latexit sha1_base64="vQ84JsIk0Elo73102q+KzY5YLA=">ACBnicbVDLSgMxFM 34rPU16lKEYBEoUxE0WXRhS4r2Ae0pWTS2zY0yQxJRihDV278FTcuFHrN7jzb0ynXWjrgcC59zLzT1hLixQfDtLSwuLa+s5tby6xubW9v+zm7VRIlmUGRiHQ9pAYEV1Cx3Aqox qoDAXUwsH12K89gDY8Uvd2GENL0p7iXc6odVLbP7gBTorcLNvYsogJcVzKUeYn5C2XwiKQY8T8iUFNAU5b/1exELJGgLBPUmAYJYtKqbacCRjlm4kBt2NAe9BwVFEJpVmZ4zwkV M6uBtp95TFmfp7IqXSmKEMXaektm9mvbH4n9dIbPeylXIVJxYUmyzqJgLbCI8zwR2ugVkxdIQyzd1fMetTZl1yeVdCGT25HlSPS2SoEjuzgqlq2kcObSPDtExIugCldAtKqMKYugRP aNX9OY9eS/eu/cxaV3wpjN76A+8zx/jvZgU</latexit> <latexit sha1_base64="vQ84JsIk0Elo73102q+KzY5YLA=">ACBnicbVDLSgMxFM 34rPU16lKEYBEoUxE0WXRhS4r2Ae0pWTS2zY0yQxJRihDV278FTcuFHrN7jzb0ynXWjrgcC59zLzT1hLixQfDtLSwuLa+s5tby6xubW9v+zm7VRIlmUGRiHQ9pAYEV1Cx3Aqox qoDAXUwsH12K89gDY8Uvd2GENL0p7iXc6odVLbP7gBTorcLNvYsogJcVzKUeYn5C2XwiKQY8T8iUFNAU5b/1exELJGgLBPUmAYJYtKqbacCRjlm4kBt2NAe9BwVFEJpVmZ4zwkV M6uBtp95TFmfp7IqXSmKEMXaektm9mvbH4n9dIbPeylXIVJxYUmyzqJgLbCI8zwR2ugVkxdIQyzd1fMetTZl1yeVdCGT25HlSPS2SoEjuzgqlq2kcObSPDtExIugCldAtKqMKYugRP aNX9OY9eS/eu/cxaV3wpjN76A+8zx/jvZgU</latexit> Figure 1: Iterated learning. Language is transmitted to a child agent Aθi+1 by teaching it to speak imitating the utterances of parent Aθi given the same input trajectories (dashed lines) and to listen to the parent utterances, converting them to trajectories (continuous lines). After training, former child Aθi+1 becomes the parent of a new agent Aθi+2. the trajectory t, produce the utterance ˆu that is within {ˆu} (Fig. 1). Importantly, even if the parent’s parameters are fixed at each generation, the child agent is allowed, while achieving perfect accuracy, to introduce changes into its’ parent language, making the latter more closely aligned with its “innate” biases. 8 Importantly, the language is not forced to remain stationary across generations. Evaluation We evaluate agents both as Listeners and as Speakers. The former is standard, as each input u maps to a single output t. Since the Speaker can be one-to-many, in order to obtain a single prediction u given trajectory t, we predict at each time step k a word u∗ k = arg maxuk(pθ(uk|u∗ k−1, hk)). This word is fed to the next unit of the decoder, and so on until u∗ K = eos. The final prediction ˆu∗is then defined as the sequence [u∗ 1, u∗ 2...u∗ K], and compared to M samples from the true distribution P(u|t). If ˆu∗matches one of the true samples, the agent succeeds, otherwise it fails (in iterated learning, P(u|t) corresponds to the parent’s distribution). 
4 Experiments

4.1 Iconicity, word order, and markers

We compare languages with fixed and free order, with and without markers. Experiments with humans have shown that, as listeners, children perform better with iconic sentences than non-iconic ones (de Ruiter et al., 2018). We check whether Seq2Seq networks show similar preferences in terms of learning speed and diachronic persistence. We compare in particular the forward-iconic order with the backward-iconic language, and three randomly selected non-iconic languages where the relation between segment and phrase order is fixed but arbitrary.

Concerning the relation between fixed order and markers, typological studies show a trade-off between these cues. For example, languages with flexible word order (e.g., Japanese and Russian) often use case to mark grammatical function, whereas languages with fixed word order (such as English and Mandarin) often lack case marking (Blake, 2001; Comrie, 1981). This might be explained by a universal preference for efficient and non-redundant grammatical coding (Fedzechkina et al., 2016a; Qian and Jaeger, 2012; Zipf, 1949). Seq2Seq agents might show similar preferences when tested as Speakers. That is, they might show a learning and preservation preference for either fixed no-marking languages or free marking languages.

Individual learning. Fig. 2 shows test accuracy during learning for each language type. The no-attention agent has a preference for the backward-iconic language both in speaking and listening. This is in line with the observation that Seq2Seq machine translation models work better when the source is presented in reverse order, as it makes the optimization problem easier by introducing shorter-term dependencies (Sutskever et al., 2014). The (forward) iconic order is better than the non-iconic ones in the speaking direction only. The attention-enhanced model shows much faster convergence to near-perfect communication, with less room for clear biases to emerge. Still, we observe some interesting initial preferences.

Figure 2: Iconicity / Fixed vs. free order: Mean test set accuracy as a function of training epoch ((a) Speaker: no attention; (b) Listener: no attention; (c) Speaker: attention; (d) Listener: attention). Error bars represent standard deviation over five random seeds. The NonIconic-average curve pools measurements for 3 non-iconic languages, each with five runs. Chance accuracy is represented by the horizontal dotted line.
The continuous lines represent languages without markers, while the dashed lines represent languages with markers.

In speaking mode, the agent learns fastest with the forward-iconic language, followed by the backward one. The non-iconic language without markers is the most difficult to learn, as expected. On the other hand, in listening mode we encounter again a preference for backward iconicity. Only the attention agent in speaking mode shows a trade-off between order and marker coding, with a preference for marker-free fixed-order iconic languages over their counterparts with markers, and for the free-order language with markers over the marker-less one. Only the non-iconic languages violate the trend: arguably, though, non-iconic order coding is so sub-optimal that redundant markers are justified in this case. In listening mode, this agent shows the expected preference for markers in the free-order case (as the free-order language without markers is massively ambiguous, with most utterances mapping to multiple trajectories). However, among the fixed-order languages, both the backward and the non-iconic ones prefer redundant coding. The agent without attention also displays a preference for free order + markers in listening mode (while it has serious difficulties learning to speak this language), but no clear avoidance of redundant coding in either mode. In sum, we confirm a preference for iconic orders. Only the attention-enhanced agent in speaking mode displays avoidance of redundant coding.

Iterated learning. In iterated learning, we might expect the lineage of agents that starts with less natural non-iconic languages to either converge to speak more iconic ones, or possibly to drift into low communication accuracy. We moreover expect redundant coding to fade, with fixed-order + markers languages either evolving free order or losing markers. Regarding the free-word-order marked language, we expect it to either converge to a fixed order (possibly iconic) while losing its markers, as in the historical development from Old English (a language with flexible constituent order and rich case marking) to Modern English (a language with fixed constituent order and a rudimentary case system) (Traugott, 1972), or to remain stable while maintaining good communication accuracy. We focus on the attention agent, as the no-attention one converges too slowly for multiple-generation experiments. We simulate 10 generations, repeating each experiment with 5 different initialization seeds. For non-iconic orders, we sample the same 3 languages sampled for individual learning.

For fixed-order languages, we do not observe any change in accuracy or behavior in the listener direction (the last-generation child perfectly parses the initial language). However, we observe in speaker mode a (relatively small) decrease in accuracy across generations, which, importantly, affects the most natural language (forward iconic without markers) the least, and the most difficult language (non-iconic without markers) the most (results are in the Supplementary). Again, we observe a (weak) tendency for the attention agent to yield to the expected natural pressures. We counted the overall number of markers produced by children in speaker mode after convergence, for all test trajectories in all languages with redundant coding. It was always constant, showing no trend towards losing markers to avoid redundant coding. Similarly, there was no tendency, across generations, to start producing multiple utterances in response to the same test trajectory.
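The multiple-generation protocol can be summarized in a short sketch. This is a minimal outline under stated assumptions: `train_child` and the agents' `speak` method stand in for the actual Seq2Seq training and decoding routines, which are not spelled out in the text.

```python
def iterated_learning(initial_parent, trajectories, train_child, n_generations=10):
    """Transmit a language across generations: each child is trained to imitate its
    parent's utterances (speaking) and to map parent utterances back to trajectories
    (listening); after training, the child becomes the parent of the next generation."""
    parent = initial_parent
    for generation in range(n_generations):
        # The parent produces the training signal for this generation;
        # its parameters stay frozen while the child is trained.
        parent_utterances = [parent.speak(t) for t in trajectories]
        child = train_child(trajectories, parent_utterances)
        parent = child  # the former child becomes the new parent
    return parent
```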
In the evolution of the free-order language with markers, accuracy was relatively stable in both speaking and listening (99.82% and 100%, respectively, for the last-generation agent, averaging across 25 runs; we run more simulations in this case because the final language depends on the initial seed, and hence there is high variance with only 5 runs: specifically, we start with 5 different parents and simulate 10 generations, repeating each experiment with 5 different seeds). However, we noticed that across generations the language becomes more fixed, with some preferred orders emerging. Fig. 3 quantifies this in terms of the entropy of the observed phrase-order probabilities across all test set trajectories (the lower the entropy, the more skewed the distribution). There is already a clear decrease for the first agent with respect to the ground-truth distribution, and the trend continues across generations.

Figure 3: Phrase-order entropy in attention Speaker utterances given test set trajectories, as a function of training generation (-1 represents the initial ground-truth distribution). The curve represents the mean across 25 runs, with error bars for standard deviations.

We analyzed the distribution of Speaker utterances for the longest (5-segment) test trajectories in the last generation. We found that, out of 120 possible phrase orders, no last-generation agent used more than 10. This is in line with the typological observation that even non-configurational languages favor (at least statistically) certain orders (Hale, 1992; Mithun, 1992), and thus an equiprobable distribution of orders, as is the case in our free word-order + markers language, is unlikely. The "survivor" orders of the last generation were not necessarily iconic but depended notably on the seed. The absence of a clear preference for a specific order could be explained by the fact that attention-enhanced agents, as we saw, can learn any fixed-order language very fast. In this case, the seed of one generation, by randomly skewing the statistics in favor of one order or the other, can significantly impact the preference toward the favored order, which will then spread diachronically throughout the whole iteration.
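The phrase-order entropy reported in Fig. 3 can be computed in a few lines of Python. This is a sketch under the assumption that each produced utterance has already been reduced to a tuple giving the order of its phrases (for instance, by reading off the phrase markers); that conversion step is not shown and its exact form is not specified in the text.

```python
import math
from collections import Counter

def phrase_order_entropy(phrase_orders):
    """Shannon entropy (in bits) of the distribution over phrase orders observed
    in the Speaker's utterances for the test trajectories.

    phrase_orders: list of tuples, each giving the order in which the phrases
    of one trajectory were produced, e.g. (2, 0, 1, 3)."""
    counts = Counter(phrase_orders)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A perfectly fixed order has zero entropy; two equiprobable orders give 1 bit.
print(phrase_order_entropy([(0, 1, 2)] * 10))            # 0.0
print(phrase_order_entropy([(0, 1, 2), (2, 1, 0)] * 5))  # 1.0
```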
4.2 Local vs. long-distance

We finally contrast the long-distance and local languages described in Section 3.1. In accordance with the linguistic literature (see Introduction), we predict that the long-distance language will be harder to learn, and that it will tend to reduce long-distance constructions in diachrony. Although evidence for distance minimization typically comes from production experiments (e.g., Futrell et al., 2015), we expect long-distance constructions to also be harder in perception, as they cannot be fully incrementally processed and require keeping material in memory for longer spans.

Individual learning. As the long-distance language includes all utterances from the local language, it might be trivially harder to learn. To account for this, we construct a set of control languages by randomly sampling, for each trajectory, the same number of possible utterances for the local and long-distance controls. We report averaged results for 3 such languages of both kinds. Details on their construction are in the Supplementary.

Figure 4: Long vs. local distance: Mean test set accuracy as a function of training epoch ((a) Speaker: attention; (b) Listener: attention). The error bars correspond to the standard deviation, calculated over five random seeds.

Fig. 4 shows test set accuracy across 300 training epochs for the attention model. The results, for speaking and listening, confirm the preference for the local language. The control languages are harder to learn, as they impose an arbitrary constraint on free word order, but they display the preference for the local language even more clearly. Overall, we see a tendency for listening to be easier than speaking, but this cuts across the local/long-distance division, and it seems to be a more general consequence of free-order languages with markers being easier in parsing than in production (cf. the no-attention agent results in Fig. 2). Results without attention (not shown) are comparable in general, although the listener/speaker asymmetry is sharper, with no difference in difficulty among the 4 languages when listening.

Iterated learning. We study multiple-generation transmission of the long-distance language with the attention agent. To deal with the problem of the skewed relative frequency of long-distance and entirely local utterances, the Speaker direction is trained by ensuring that the output utterance set {u} for each input trajectory t contains the same number of long-distance and local constructions. This is achieved by sub-sampling n = 48 long-distance utterances to match the number of possible local constructions.

Figure 5: Frequency of the local and long-distance utterances produced by the attention Speaker as a function of training epoch. The input trajectories are taken from the test set. Test set accuracies for the four generations shown (Generations 0, 1, 3, and 7): 99.99%, 87.62%, 84.54%, 79.38%. At Generation 0, fewer epochs were run due to early stopping.

Fig. 5 shows the relative frequency across generations of local and long-distance utterances produced by the agent as a Speaker as a function of training (one representative seed of 5). As predicted, a clear preference for local constructions emerges, confirming the presence of a distance-minimization bias in Seq2Seq models.

5 Discussion

We studied whether word-order constraints widely attested in natural languages affect learning and diachronic transmission in Seq2Seq agents. We found that some trends follow natural patterns, such as the tendency to limit word order to few configurations, and long-distance dependency minimization. In other ways, our agents depart from typical human language patterns. For example, they exhibit a preference for a backward order, and there are only weak signs of a trade-off between different ways to encode constituent roles, with redundant solutions often being preferred.

The research direction we introduced might lead to a better understanding of the biases that affect the linguistic behaviour of LSTMs and similar models. This could help current efforts towards the development of artificial agents that communicate to solve a task, with the ultimate goal of developing AIs that can talk with humans. It has been observed that the communication protocol emerging in such simulations is very different from human language (e.g., Kottur et al., 2017; Lewis et al., 2017; Bouchacourt and Baroni, 2018).
A better understanding of what are the “innate” biases of standard models in highly controlled setups, such as the one studied here, should complement large-scale simulations, as part of the effort to develop new methods to encourage the emergence of more human-like language. For example, our results suggest that current neural networks, as they are not subject to human-like least-effort constraints, might not display the same trend towards efficient communication that we encounter in natural languages. How to incorporate “effort”-based pressures in neural networks is an exciting direction for future work. 6 Acknowledgments We would like to thank Roger Levy, Diane Bouchacourt, Alex Cristea, Kristina Gulordava and Armand Joulin for their very helpful feedback. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Barry Blake. 2001. Case. MIT Press, Cambridge, MA. Diane Bouchacourt and Marco Baroni. 2018. How agents see things: On visual representations in an emergent language game. In Proceedings of EMNLP, pages 981–985, Brussels, Belgium. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Edward Choi, Angeliki Lazaridou, and Nando de Freitas. 2018. Compositional obverter communication learning from raw visual input. In Proceedings of ICLR Conference Track, Vancouver, Canada. Benrard Comrie. 1981. Language Universals and Linguistic Typology. Blackwell, Malden, MA. Laura de Ruiter, Anna Theakston, Silke Brandt, and Elena Lieven. 2018. Iconicity affects children’s comprehension of complex sentences: The role of semantics, clause order, input and individual differences. Cognition, 171:202–224. Holger Diessel. 2008. Iconicity of sequence: A corpusbased analysis of the positioning of temporal adverbial clauses in English. Cognitive Linguistics, 19(3):465–490. Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. 2018. Emergent communication in a multi-modal, multi-step referential game. In Proceedings of ICLR Conference Track, Vancouver, Canada. Maryia Fedzechkina, Elissa Newport, and T. Florian Jaeger. 2016a. Balancing effort and information transmission during language acquisition: Evidence from word order and case marking. Cognitive Science, 41:n/a–n/a. Maryia Fedzechkina, Elissa Newport, and T. Florian Jaeger. 2016b. Miniature artificial language learning as a complement to typological data, pages 211– 232. Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015. Large-scale evidence of dependency length minimization in 37 language. Proceedings of the National Academy of Sciences, 112(33):10336– 10341. Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1–76. Yoav Goldberg. 2017. Neural Network Methods for Natural Language Processing. Morgan & Claypool, San Francisco, CA. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. Joseph Greenberg. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph Greenberg, editor, Universals of Human Language, pages 73–113. MIT Press, Cambridge, MA. John Haiman. 1980. The iconicity of grammar: Isomorphism and motivation. Language, 56(3):515– 540. Kenneth Hale. 1992. 
Basic word order in two ‘free word order’ languages. In Doris Payne, editor, Pragmatics of word order flexibility, pages 63–82. John Benjamins, Amsterdam, the Netherlands. Serhii Havrylov and Ivan Titov. 2017. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In Proceedings of NIPS, pages 2149–2159, Long Beach, CA, USA. John Hawkins. 1994. A Performance Theory of Order and Constituency. Cambridge University Press, Cambridge, UK. 5175 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Julie M. Hupp, Vladimir M. Sloutsky, and Peter W. Culicover. 2009. Evidence for a domain-general mechanism underlying the suffixation preference in language. Language and Cognitive Processes, 24(6):876–909. Rong Jin and Zoubin Ghahramani. 2003. Learning with multiple labels. In Advances in neural information processing systems, pages 921–928. Emilio Jorge, Mikael K˚ageb¨ack, and Emil Gustavsson. 2016. Learning to play Guess Who? and inventing a grounded language as a consequence. In Proceedings of the NIPS Deep Reinforcement Learning Workshop, Barcelona, Spain. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Simon Kirby, Tom Griffiths, and Kenny Smith. 2014. Iterated learning and the evolution of language. Current Opinion in Neurobiology, 28:108–114. Satwik Kottur, Jos´e Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge ‘naturally’ in multi-agent dialog. In Proceedings of EMNLP, pages 2962–2967, Copenhagen, Denmark. Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of ICML, pages 2879–2888, Stockholm, Sweden. Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. In Proceedings of ICLR Conference Track, Vancouver, Canada. Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-agent cooperation and the emergence of (natural) language. In Proceedings of ICLR Conference Track, Toulon, France. Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. 2017a. Emergent translation in multi-agent communication. arXiv preprint arXiv:1710.06922. Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2017b. Answerer in questioner’s mind for goaloriented visual dialogue. Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? End-to-end learning of negotiation dialogues. In Proceedings of EMNLP, pages 2443–2453, Copenhagen, Denmark. Tal Linzen, Grzegorz Chrupała, and Afra Alishahi, editors. 2018. Proceedings of the EMNLP BlackboxNLP Workshop. ACL, Brussels, Belgium. Solomon Marcus and Andreea Calude. 2010. Syntactic iconicity, within and beyond its accepted principles. Revue Roumaine de Linguistique, 55(1):19–44. Marianne Mithun. 1992. Is basic word order universal? In Doris Payne, editor, Pragmatics of word order flexibility, pages 15–61. John Benjamins, Amsterdam, the Netherlands. Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In Thirty-Second AAAI Conference on Artificial Intelligence. Frederick Newmeyer. 1992. Iconicity and generative grammar. Language, 68(4):756–796. 
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. Ofir Press and Lior Wolf. 2016. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859. Ting Qian and T Florian Jaeger. 2012. Cue effectiveness in communicatively efficient discourse production. Cognitive science, 36(7):1312–1336. G¨unter Radden and Ren´e Dirven. 2007. Cognitive English Grammar. John Benjamins, Amsterdam, the Netherlands. Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. 2018. On the convergence of Adam and beyond. In International Conference on Learning Representations. Laura E. de Ruiter, Anna L. Theakston, Silke Brandt, and Elena V.M. Lieven. 2018. Iconicity affects children’s comprehension of complex sentences: The role of semantics, clause order, input and individual differences. Cognition, 171:202 – 224. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112, Montreal, Canada. Harry Tily, Michael C Frank, and T. Florian Jaeger. 2011. The learnability of constructed languages reflects typological patterns. pages 1364–1369. E. C. Traugott. 1972. IA history of English syntax. New York: Holt, Rinehart and Winston. George Zipf. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley, Boston, MA.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 537–546 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 537 Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and Classification Minghao Hu†, Yuxing Peng†, Zhen Huang†, Dongsheng Li†, Yiwei Lv§ † National University of Defense Technology, Changsha, China § University of Macau, Macau, China {huminghao09,pengyuxing,huangzhen,dsli}@nudt.edu.cn Abstract Open-domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence. Prior work typically formulates this task as a sequence tagging problem. However, such formulation suffers from problems such as huge search space and sentiment inconsistency. To address these problems, we propose a span-based extract-then-classify framework, where multiple opinion targets are directly extracted from the sentence under the supervision of target span boundaries, and corresponding polarities are then classified using their span representations. We further investigate three approaches under this framework, namely the pipeline, joint, and collapsed models. Experiments on three benchmark datasets show that our approach consistently outperforms the sequence tagging baseline. Moreover, we find that the pipeline model achieves the best performance compared with the other two models. 1 Introduction Open-domain targeted sentiment analysis is a fundamental task in opinion mining and sentiment analysis (Pang et al., 2008; Liu, 2012). Compared to traditional sentence-level sentiment analysis tasks (Lin and He, 2009; Kim, 2014), the task requires detecting target entities mentioned in the sentence along with their sentiment polarities, thus being more challenging. Taking Figure 1 as an example, the goal is to first identify “Windows 7” and “Vista” as opinion targets and then predict their corresponding sentiment classes. Sentence: I love [Windows 7]+ which is a vast improvment over [Vista]-. Targets: Windows 7, Vista Polarities: positive, negative Figure 1: Open-domain targeted sentiment analysis. Typically, the whole task can be decoupled into two subtasks. Since opinion targets are not given, we need to first detect the targets from the input text. This subtask, which is usually denoted as target extraction, can be solved by sequence tagging methods (Jakob and Gurevych, 2010; Liu et al., 2015; Wang et al., 2016a; Poria et al., 2016; Shu et al., 2017; He et al., 2017; Xu et al., 2018). Next, polarity classification aims to predict the sentiment polarities over the extracted target entities (Jiang et al., 2011; Dong et al., 2014; Tang et al., 2016a; Wang et al., 2016b; Chen et al., 2017; Xue and Li, 2018; Li et al., 2018; Fan et al., 2018). Although lots of efforts have been made to design sophisticated classifiers for this subtask, they all assume that the targets are already given. Rather than using separate models for each subtask, some works attempt to solve the task in a more integrated way, by jointly extracting targets and predicting their sentiments (Mitchell et al., 2013; Zhang et al., 2015; Li et al., 2019). The key insight is to label each word with a set of target tags (e.g., B, I, O) as well as a set of polarity tags (e.g., +, -, 0), or use a more collapsed set of tags (e.g., B+, I-) to directly indicate the boundary of targeted sentiment, as shown in Figure 2(a). 
As a result, the entire task is formulated as a sequence tagging problem, and solved using either a pipeline model, a joint model, or a collapsed model under the same network architecture. However, the above annotation scheme has several disadvantages in target extraction and polarity classification. Lee et al. (2016) show that, when using BIO tags for extractive question answering tasks, the model must consider a huge search space due to the compositionality of labels (the power set of all sentence words), thus being less effective. As for polarity classification, the sequence tagging scheme turns out to be problematic for two reasons. First, tagging polarity over each word 538 Sentence: Pipeline/ Joint: I love Windows 7 ... over Vista . O O B I O B O 0 0 + + 0 0 Collapsed: O O B+ I+ O BO (a) Sequence tagging. The B/I/O labels indicate target span boundaries, while +/-/0 refer to sentiment polarities. Sentence: Pipeline/ Joint: I love Windows 7 ... over Vista . Target start: 3, 11 Target end: 4, 11 Collapsed: Polarity: +, Target start: 3+, 11- Target end: 4+, 11(b) Span-based labeling. The number denotes the start/end position of the given target in the sentence. Figure 2: Comparison of different annotation schemes for the pipeline, joint, and collapsed models. ignores the semantics of the entire opinion target. Second, since predicted polarities over target words may be different, the sentiment consistency of multi-word entity can not be guaranteed, as mentioned by Li et al. (2019). For example, there is a chance that the words “Windows” and “7” in Figure 2(a) are predicted to have different polarities due to word-level tagging decisions. To address the problems, we propose a spanbased labeling scheme for open-domain targeted sentiment analysis, as shown in Figure 2(b). The key insight is to annotate each opinion target with its span boundary followed by its sentiment polarity. Under such annotation, we introduce an extract-then-classify framework that first extracts multiple opinion targets using an heuristic multispan decoding algorithm, and then classifies their polarities with corresponding summarized span representations. The advantage of this approach is that the extractive search space can be reduced linearly with the sentence length, which is far less than the tagging method. Moreover, since the polarity is decided using the targeted span representation, the model is able to take all target words into account before making predictions, thus naturally avoiding sentiment inconsistency. We take BERT (Devlin et al., 2018) as the default backbone network, and explore two research questions. First, we make an elaborate comparison between tagging-based models and span-based models. Second, following previous works (Mitchell et al., 2013; Zhang et al., 2015), we compare the pipeline, joint, and collapsed models under the span-based labeling scheme. Extensive experiments on three benchmark datasets show that our models consistently outperform sequence tagging baselines. In addition, the pipeline model firmly improves over both the joint and collapsed models. Source code is released to facilitate future research in this field1. 1https://github.com/huminghao16/SpanABSA 2 Related Work Apart from sentence-level sentiment analysis (Lin and He, 2009; Kim, 2014), targeted sentiment analysis, which requires the detection of sentiments towards mentioned entities in the open domain, is also an important research topic. As discussed in §1, this task is usually divided into two subtasks. 
The first is target extraction for identifying entities from the input sentence. Traditionally, Conditional Random Fields (CRF) (Lafferty et al., 2001) have been widely explored (Jakob and Gurevych, 2010; Wang et al., 2016a; Shu et al., 2017). Recently, many works concentrate on leveraging deep neural networks to tackle this task, e.g., using CNNs (Poria et al., 2016; Xu et al., 2018), RNNs (Liu et al., 2015; He et al., 2017), and so on. The second is polarity classification, assuming that the target entities are given. Recent works mainly focus on capturing the interaction between the target and the sentence, by utilizing various neural architectures such as LSTMs (Hochreiter and Schmidhuber, 1997; Tang et al., 2016a) with attention mechanisms (Wang et al., 2016b; Li et al., 2018; Fan et al., 2018), CNNs (Xue and Li, 2018; Huang and Carley, 2018), and Memory Networks (Tang et al., 2016b; Chen et al., 2017; Li and Lam, 2017).

Rather than solving these two subtasks with separate models, a more practical approach is to directly predict the sentiment towards an entity along with discovering the entity itself. Specifically, Mitchell et al. (2013) formulate the whole task as a sequence tagging problem and propose to use CRF with hand-crafted linguistic features. Zhang et al. (2015) further leverage these linguistic features to enhance a neural CRF model. Recently, Li et al. (2019) have proposed a unified model that contains two stacked LSTMs along with carefully-designed components for maintaining sentiment consistency and improving target word detection. Our work differs from these approaches in that we formulate this task as a span-level extract-then-classify process instead.

Figure 3: An overview of the proposed framework. Word embeddings are fed to the BERT encoder (Devlin et al., 2018), which contains L pre-trained Transformer blocks (Vaswani et al., 2017). The last block's hidden states are used to (a) propose one or multiple candidate targets based on the probabilities of the start and end positions ((a) Multi-target extractor), and (b) predict the sentiment polarity using the span representation of the given target ((b) Polarity classifier).

The proposed span-based labeling scheme is inspired by recent advances in machine comprehension and question answering (Seo et al., 2017; Hu et al., 2018), where the task is to extract a continuous span of text from the document as the answer to the question (Rajpurkar et al., 2016). To solve this task, Lee et al. (2016) investigate several prediction strategies, such as BIO prediction and boundary prediction, and the results show that predicting the two endpoints of the answer is more beneficial than the tagging method. Wang and Jiang (2017) explore two answer prediction methods, namely the sequence method and the boundary method, finding that the latter performs better. Our approach is related to this line of work. However, unlike these works that extract one span as the final answer, our approach is designed to dynamically output one or multiple opinion targets.
3 Extract-then-Classify Framework

Instead of formulating the open-domain targeted sentiment analysis task as a sequence tagging problem, we propose to use a span-based labeling scheme as follows: given an input sentence x = (x_1, ..., x_n) with length n, and a target list T = {t_1, ..., t_m}, where the number of targets is m and each target t_i is annotated with its start position, its end position, and its sentiment polarity, the goal is to find all targets in the sentence as well as predict their polarities.

The overall illustration of the proposed framework is shown in Figure 3. The basis of our framework is the BERT encoder (Devlin et al., 2018): we map word embeddings into contextualized token representations using pre-trained Transformer blocks (Vaswani et al., 2017) (§3.1). A multi-target extractor is first used to propose multiple candidate targets from the sentence (§3.2). Then, a polarity classifier is designed to predict the sentiment towards each extracted candidate using its summarized span representation (§3.3). We further investigate three different approaches under this framework, namely the pipeline, joint, and collapsed models, in §3.4.

3.1 BERT as Backbone Network

We use Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), a pre-trained bidirectional Transformer encoder that achieves state-of-the-art performance across a variety of NLP tasks, as our backbone network. We first tokenize the sentence x using a 30,522-wordpiece vocabulary, and then generate the input sequence x̃ by concatenating a [CLS] token, the tokenized sentence, and a [SEP] token. Then, for each token x̃_i in x̃, we convert it into vector space by summing the token, segment, and position embeddings, thus yielding the input embeddings h^0 ∈ R^{(n+2)×h}, where h is the hidden size. Next, we use a series of L stacked Transformer blocks to project the input embeddings into a sequence of contextual vectors h^i ∈ R^{(n+2)×h} as:

h^i = TransformerBlock(h^{i−1}), ∀i ∈ [1, L]

Here, we omit an exhaustive description of the block architecture and refer readers to Vaswani et al. (2017) for more details.

3.2 Multi-Target Extractor

The multi-target extractor aims to propose multiple candidate opinion targets (Figure 3(a)). Rather than finding targets via sequence tagging methods, we detect candidate targets by predicting the start and end positions of the target in the sentence, as suggested in extractive question answering (Wang and Jiang, 2017; Seo et al., 2017; Hu et al., 2018). We obtain the unnormalized score as well as the probability distribution of the start position as:

g^s = w_s h^L , p^s = softmax(g^s)

where w_s ∈ R^h is a trainable weight vector. Similarly, we can get the probability of the end position along with its confidence score by:

g^e = w_e h^L , p^e = softmax(g^e)

During training, since each sentence may contain multiple targets, we label the span boundaries for all target entities in the list T. As a result, we can obtain a vector y^s ∈ R^{(n+2)}, where each element y^s_i indicates whether the i-th token starts a target, and also get another vector y^e ∈ R^{(n+2)} for labeling the end positions. Then, we define the training objective as the sum of the negative log probabilities of the true start and end positions on the two predicted probabilities as:

L = − Σ_{i=1}^{n+2} y^s_i log(p^s_i) − Σ_{j=1}^{n+2} y^e_j log(p^e_j)

At inference time, previous works choose the span (k, l) (k ≤ l) with the maximum value of g^s_k + g^e_l as the final prediction.
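The scoring functions and the multi-position training objective above translate directly into a few lines of PyTorch. The snippet below is a minimal sketch rather than the authors' released implementation: the class and function names are mine, and the single weight vectors w_s and w_e are modeled as bias-free linear layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTargetExtractorHead(nn.Module):
    """Scores every token as a potential target start / end on top of the BERT outputs."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.w_s = nn.Linear(hidden_size, 1, bias=False)  # start scorer (w_s)
        self.w_e = nn.Linear(hidden_size, 1, bias=False)  # end scorer (w_e)

    def forward(self, h_last):                 # h_last: (batch, n + 2, hidden)
        g_s = self.w_s(h_last).squeeze(-1)     # (batch, n + 2) unnormalized start scores
        g_e = self.w_e(h_last).squeeze(-1)     # (batch, n + 2) unnormalized end scores
        return g_s, g_e

def extraction_loss(g_s, g_e, y_s, y_e):
    """Sum of negative log probabilities of all gold start and end positions.
    y_s and y_e are multi-hot vectors marking every target boundary in the sentence."""
    log_p_s = F.log_softmax(g_s, dim=-1)
    log_p_e = F.log_softmax(g_e, dim=-1)
    return -(y_s * log_p_s).sum(-1).mean() - (y_e * log_p_e).sum(-1).mean()
```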
However, such a decoding method is not suitable for the multi-target extraction task. Moreover, simply taking the top-K spans according to the addition of the two scores is also not optimal, as multiple candidates may refer to the same text. Figure 4 gives a qualitative example to illustrate this phenomenon.

Sentence: Great food but the service was dreadful!
Targets: food, service
Predictions: food but the service, food, Great food, service, service was dreadful, ...
Figure 4: An example showing that there are many redundant spans in the top-K predictions.

To adapt to multi-target scenarios, we propose a heuristic multi-span decoding algorithm, as shown in Algorithm 1. For each example, top-M indices are first chosen from the two predicted scores g^s and g^e (line 2), and the candidate span (s_i, e_j) (denoted as r_l) along with its heuristic-regularized score u_l are then added to the lists R and U respectively, under the constraints that the end position is no less than the start position and that the addition of the two scores exceeds a threshold γ (lines 3-8). Note that we heuristically calculate u_l as the sum of the two scores minus the span length (line 6), which turns out to be critical to the performance, as targets are usually short entities. Next, we prune redundant spans in R using the non-maximum suppression algorithm (Rosenfeld and Thurston, 1971). Specifically, we remove the span r_l that possesses the maximum score u_l from the set R and add it to the set O (lines 10-11). We also delete any span r_k that overlaps with r_l, which is measured with the word-level F1 function (lines 12-14). This process is repeated for the remaining spans in R, until R is empty or top-K target spans have been proposed (line 9).

Algorithm 1 Heuristic multi-span decoding
Input: g^s, g^e, γ, K
  g^s denotes the scores of start positions; g^e denotes the scores of end positions;
  γ is a minimum score threshold; K is the maximum number of proposed targets
1: Initialize R, U, O = {}, {}, {}
2: Get top-M indices S, E from g^s, g^e
3: for s_i in S do
4:   for e_j in E do
5:     if s_i ≤ e_j and g^s_{s_i} + g^e_{e_j} ≥ γ then
6:       u_l = g^s_{s_i} + g^e_{e_j} − (e_j − s_i + 1)
7:       r_l = (s_i, e_j)
8:       R = R ∪ {r_l}, U = U ∪ {u_l}
9: while R ≠ {} and size(O) < K do
10:   l = arg max U
11:   O = O ∪ {r_l}; R = R − {r_l}; U = U − {u_l}
12:   for r_k in R do
13:     if f1(r_l, r_k) ≠ 0 then
14:       R = R − {r_k}; U = U − {u_k}
15: return O
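As a complement to the listing above, here is a minimal, runnable NumPy sketch of Algorithm 1. It is not the authors' released code: the function names are mine, the scores are for a single sentence, and span overlap is tested as "any shared token", which is equivalent to checking that the word-level F1 of two spans is non-zero.

```python
import numpy as np

def spans_overlap(a, b):
    """True iff the two (start, end) spans share any token,
    i.e. their word-level F1 would be non-zero."""
    return not (a[1] < b[0] or b[1] < a[0])

def multi_span_decode(g_s, g_e, gamma, K=10, M=20):
    """Heuristic multi-span decoding (Algorithm 1) for one sentence.
    g_s, g_e: 1-D arrays of unnormalized start / end scores per token."""
    starts = np.argsort(g_s)[::-1][:M]   # top-M start indices
    ends = np.argsort(g_e)[::-1][:M]     # top-M end indices
    spans, scores = [], []
    for s_i in starts:
        for e_j in ends:
            if s_i <= e_j and g_s[s_i] + g_e[e_j] >= gamma:
                spans.append((int(s_i), int(e_j)))
                # length heuristic: penalize long candidates, since targets are short
                scores.append(g_s[s_i] + g_e[e_j] - (e_j - s_i + 1))
    selected = []
    while spans and len(selected) < K:   # non-maximum suppression
        best = int(np.argmax(scores))
        chosen = spans[best]
        selected.append(chosen)
        kept = [i for i in range(len(spans))
                if i != best and not spans_overlap(chosen, spans[i])]
        spans = [spans[i] for i in kept]
        scores = [scores[i] for i in kept]
    return selected
```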
3.3 Polarity Classifier

Typically, polarity classification is solved using either sequence tagging methods or sophisticated neural networks that separately encode the target and the sentence. Instead, we propose to summarize the target representation from the contextual sentence vectors according to its span boundary, and use feed-forward neural networks to predict the sentiment polarity, as shown in Figure 3(b). Specifically, given a target span r, we calculate a summarized vector v using the attention mechanism (Bahdanau et al., 2014) over the tokens in its corresponding bound (s_i, e_j), similar to Lee et al. (2017) and He et al. (2018):

α = softmax(w_α h^L_{s_i:e_j})
v = Σ_{t=s_i}^{e_j} α_{t−s_i+1} h^L_t

where w_α ∈ R^h is a trainable weight vector. The polarity score is obtained by applying two linear transformations with a Tanh activation in between, and is normalized with the softmax function to output the polarity probability as:

g^p = W_p tanh(W_v v) , p^p = softmax(g^p)

where W_v ∈ R^{h×h} and W_p ∈ R^{k×h} are two trainable parameter matrices. We minimize the negative log probability of the true polarity on the predicted probability as:

J = − Σ_{i=1}^{k} y^p_i log(p^p_i)

where y^p is a one-hot label indicating the true polarity, and k is the number of sentiment classes. During inference, the polarity probability is calculated for each candidate target span in the set O, and the sentiment class that possesses the maximum value in p^p is chosen.

3.4 Model Variants

Following Mitchell et al. (2013); Zhang et al. (2015), we investigate three kinds of models under the extract-then-classify framework:

Pipeline model. We first build a multi-target extractor where a BERT encoder is exclusively used. Then, a second backbone network is used to provide contextual sentence vectors for the polarity classifier. The two models are separately trained and combined as a pipeline during inference.

Joint model. In this model, each sentence is fed into a shared BERT backbone network that finally branches into two sibling output layers: one for proposing multiple candidate targets and another for predicting the sentiment polarity over each extracted target. A joint training loss L + J is used to optimize the whole model. The inference procedure is the same as in the pipeline model.

Collapsed model. We combine target span boundaries and sentiment polarities into one label space. For example, the sentence in Figure 2(b) has a positive span (3+, 4+) and a negative span (11-, 11-). We then modify the multi-target extractor so that it produces three sets of probabilities of the start and end positions, where each set corresponds to one sentiment class (e.g., p^{s+} and p^{e+} for positive targets). Then, we define three objectives to optimize towards each polarity. During inference, the heuristic multi-span decoding algorithm is performed on each set of scores (e.g., g^{s+} and g^{e+}), and the output sets O+, O−, and O0 are aggregated as the final prediction.

4 Experiments

4.1 Setup

Datasets. We conduct experiments on three benchmark sentiment analysis datasets, as shown in Table 1. LAPTOP contains product reviews from the laptop domain in the SemEval 2014 ABSA challenge (Pontiki et al., 2014). REST is the union set of the restaurant domain from SemEval 2014, 2015 and 2016 (Pontiki et al., 2015, 2016). TWITTER is built by Mitchell et al. (2013), consisting of Twitter posts. Following Zhang et al. (2015); Li et al. (2019), we report the ten-fold cross-validation results for TWITTER, as there is no train-test split. For each dataset, the gold target span boundaries are available, and the targets are labeled with three sentiment polarities, namely positive (+), negative (-), and neutral (0).

Table 1: Dataset statistics. '#Sent' and '#Targets' denote the number of sentences and targets, respectively. '+', '-', and '0' refer to the positive, negative, and neutral sentiment classes.
Dataset    #Sent   #Targets   #+      #-      #0
LAPTOP     1,869   2,936      1,326   990     620
REST       3,900   6,603      4,134   1,538   931
TWITTER    2,350   3,243      703     274     2,266

Metrics. We adopt the precision (P), recall (R), and F1 score as evaluation metrics. A predicted target is correct only if it exactly matches the gold target entity and the corresponding polarity. To separately analyze the performance of the two subtasks, precision, recall, and F1 are also used for the target extraction subtask, while the accuracy (ACC) metric is applied to polarity classification.

Model settings. We use the publicly available BERT_LARGE model (https://github.com/google-research/bert) as our backbone network,
Model LAPTOP REST TWITTER Prec. Rec. F1 Prec. Rec. F1 Prec. Rec.
F1 UNIFIED 61.27 54.89 57.90 68.64 71.01 69.80 53.08 43.56 48.01 TAG-pipeline 65.84 67.19 66.51 71.66 76.45 73.98 54.24 54.37 54.26 TAG-joint 65.43 66.56 65.99 71.47 75.62 73.49 54.18 54.29 54.20 TAG-collapsed 63.71 66.83 65.23 71.05 75.84 73.35 54.05 54.25 54.12 SPAN-pipeline 69.46 66.72 68.06 76.14 73.74 74.92 60.72 55.02 57.69 SPAN-joint 67.41 61.99 64.59 72.32 72.61 72.47 57.03 52.69 54.55 SPAN-collapsed 50.08 47.32 48.66 63.63 53.04 57.85 51.89 45.05 48.11 Table 2: Main results on three benchmark datasets. A BERTLARGE backbone network is used for both the “TAG” and “SPAN” models. State-of-the-art results are marked in bold. and refer readers to Devlin et al. (2018) for details on model sizes. We use Adam optimizer with a learning rate of 2e-5 and warmup over the first 10% steps to train for 3 epochs. The batch size is 32 and a dropout probability of 0.1 is used. The number of candidate spans M is set as 20 while the maximum number of proposed targets K is 10 (Algorithm 1). The threshold γ is manually tuned on each dataset. All experiments are conducted on a single NVIDIA P100 GPU card. 4.2 Baseline Methods We compare the proposed span-based approach with the following methods: TAG-{pipeline, joint, collapsed} are the sequence tagging baselines that involve a BERT encoder and a CRF decoder. “pipeline” and “joint” denote the pipeline and joint approaches that utilize the BIO and +/-/0 tagging schemes, while “collapsed” is the model following the collapsed tagging scheme (Figure 2(a)). UNIFIED (Li et al., 2019) is the current stateof-the-art model on targeted sentiment analysis3. It contains two stacked recurrent neural networks enhanced with multi-task learning and adopts the collapsed tagging scheme. We also compare our multi-target extractor with the following method: DE-CNN (Xu et al., 2018) is the current stateof-the-art model on target extraction, which combines a double embeddings mechanism with convolutional neural networks (CNNs)4. Finally, the polarity classifier is compared with the following methods: 3https://github.com/lixin4ever/E2E-TBSA 4https://www.cs.uic.edu/hxu/ MGAN (Fan et al., 2018) uses a multi-grained attention mechanism to capture interactions between targets and sentences for polarity classification. TNet (Li et al., 2018) is the current state-of-the-art model on polarity classification, which consists of a multi-layer context-preserving network architecture and uses CNNs as feature extractor5. 4.3 Main Results We compare models under either the sequence tagging scheme or the span-based labeling scheme, and show the results in Table 2. We denote our approach as “SPAN”, and use BERTLARGE as backbone networks for both the “TAG” and “SPAN” models to make the comparison fair. Two main observations can be obtained from the Table. First, despite that the “TAG” baselines already outperform previous best approach (“UNIFIED”), they are all beaten by the “SPAN” methods. The best span-based method achieves 1.55%, 0.94% and 3.43% absolute gains on three datasets compared to the best tagging method, indicating the efficacy of our extract-then-classify framework. Second, among the span-based methods, the SPAN-pipeline achieves the best performance, which is similar to the results of Mitchell et al. (2013); Zhang et al. (2015). This suggests that there is only a weak connection between target extraction and polarity classification. 
The conclusion is also supported by the result of SPANcollapsed method, which severely drops across all datasets, implying that merging polarity labels into target spans does not address the task effectively. 5https:// github.com/lixin4ever/TNet 543 Model LAPTOP REST TWITTER DE-CNN 81.59 TAG 85.20 84.48 73.47 SPAN 83.35 82.38 75.28 Table 3: F1 comparison of different approaches for target extraction. Figure 5: F1 on LAPTOP and REST w.r.t different sentence lengths for target extraction. 4.4 Analysis on Target Extraction To analyze the performance on target extraction, we run both the tagging baseline and the multitarget extractor on three datasets, as shown in Table 3. We find that the BIO tagger outperforms our extractor on LAPTOP and REST. A likely reason for this observation is that the lengths of input sentences on these datasets are usually small (e.g., 98% of sentences are less than 40 words in REST), which limits the tagger’s search space (the power set of all sentence words). As a result, the computational complexity has been largely reduced, which is beneficial for the tagging method. In order to confirm the above hypothesis, we plot the F1 score with respect to different sentence lengths in Figure 5. We observe that the performance of BIO tagger dramatically decreases as the sentence length increases, while our extractor is more robust for long sentences. Our extractor manages to surpass the tagger by 16.1 F1 and 1.0 F1 when the length exceeds 40 on LAPTOP and REST, respectively. The above result demonstrates that our extractor is more suitable for long sentences due to the fact that its search space only increases linearly with the sentence length. Since a trade-off between precision and recall can be adjusted according to the threshold γ in our extractor, we further plot the precision-recall curves under different ablations to show the effects of heuristic multi-span decoding algorithm. As can be seen from Figure 6, ablating the length Figure 6: Precision-recall curves on LAPTOP and REST for target extraction. “NMS” and “heuristics” denote the non-maximum suppression and the length heuristics in Algorithm 1. heuristics results in consistent performance drops across two datasets. By sampling incorrect predictions we find that there are many targets closely aligned with each other, such as “perfect [size]+ and [speed]+”, “[portions]+ all at a reasonable [price]+”, and so on. The model without length heuristics is very likely to output the whole phrase as a single target, thus being totally wrong. Moreover, removing the non-maximum suppression (NMS) leads to significant performance degradations, suggesting that it is crucial to prune redundant spans that refer to the same text. 4.5 Analysis on Polarity Classification To assess the polarity classification subtask, we compare the performance of our span-level polarity classifier with the CRF-based tagger in Table 5. The results show that our approach significantly outperforms the tagging baseline by achieving 9.97%, 8.15% and 15.4% absolute gains on three datasets, and firmly surpasses previous stateof-the-art models on LAPTOP. The large improvement over the tagging baseline suggests that detecting sentiment with the entire span representation is much more beneficial than predicting polarities over each word, as the semantics of the given target has been fully considered. To gain more insights on performance improvements, we plot the accuracy of both methods with respect to different target lengths in Figure 7. 
We find that the accuracy of span-level classifier only drops a little as the number of words increases on the LAPTOP and REST datasets. The performance of tagging baseline, however, significantly decreases as the target becomes longer. It demonstrates that the tagging method indeed suffers from the sentiment inconsistency problem when it comes to multi-word target entities. Our 544 Sentence TAG SPAN 1. I thought the transition would be difficult at best and would take some time to fully familiarize myself with the new [Mac ecosystem]0. [ecosystem]+ (7) [Mac ecosystem]0 2. I would normally not finish the [brocolli]+ when I order these kinds of food but for the first time, every piece was as eventful as the first one... the [scallops]+ and [prawns]+ was so fresh and nicely cooked. [brocolli]- (7), [scallops and prawns]+ (7), [food]0 (7) [brocolli]+, [scallops]+, [prawns]+ 3. I like the [brightness]+ and [adjustments]+. [brightness]+, [adjustments]+ [brightness]+, None (7) 4. The [waiter]- was a bit unfriendly and the [feel]- of the restaurant was crowded. [waiter]-, [feel][waiter]-, None (7) 5. However, it did not have any scratches, zero [battery cycle count]+ (pretty surprised), and all the [hardware]+ seemed to be working perfectly. [battery cycle count]0 (7), [hardware]+ [battery cycle count]+, [hardware]+ 6. I agree that dining at [Casa La Femme]- is like no other dining experience! [Casa La Femme]+ (7) [Casa La Femme]Table 4: Case study. The extracted targets are wrapped in brackets with the predicted polarities given as subscripts. Incorrect predictions are marked with 7. Model LAPTOP REST TWITTER MGAN 75.39 TNet 76.54 TAG 71.42 81.80 59.76 SPAN 81.39 89.95 75.16 Table 5: Accuracy comparison of different approaches for polarity classification. span-based method, on the contrary, can naturally alleviate such problem because the polarity is classified by taking all target words into account. 4.6 Case Study Table 4 shows some qualitative cases sampled from the pipeline methods. As observed in the first two examples, the “TAG” model incorrectly predicts the target span by either missing the word “Mac” or proposing a phrase across two targets (“scallps and prawns”). A likely reason of its failure is that the input sentences are relatively longer, and the tagging method is less effective when dealing with them. But when it comes to shorter inputs (e.g., the third and the fourth examples), the tagging baseline usually performs better than our approach. We find that our approach may sometimes fail to propose target entities (e.g., “adjustments” in (3) and “feel” in (4)), which is due to the fact that a relatively large γ has been set. As a result, the model only makes cautious but confident predictions. In contrast, the tagging method does not rely on a threshold and is observed to have a higher recall. For example, it additionally predicts the entity “food” as a target in the second example. Moreover, we find that the tagging method sometimes fails to predict the correct senFigure 7: Accuracy on LAPTOP and REST w.r.t different number of target words for polarity classification. timent class, especially when the target consists of multiple words (e.g., “battery cycle count” in (5) and “Casa La Femme” in (6)), indicating the tagger can not effectively maintain sentiment consistency across words. Our polarity classifier, however, can avoid such problem by using the target span representation to predict the sentiment. 
5 Conclusion We re-examine the drawbacks of sequence tagging methods in open-domain targeted sentiment analysis, and propose an extract-then-classify framework with the span-based labeling scheme instead. The framework contains a pre-trained Transformer encoder as the backbone network. On top of it, we design a multi-target extractor for proposing multiple candidate targets with an heuristic multispan decoding algorithm, and introduce a polarity classifier that predicts the sentiment towards each candidate using its summarized span representation. Our approach firmly outperforms the sequence tagging baseline as well as previous stateof-the-art methods on three benchmark datasets. Model analysis reveals that the main performance improvement comes from the span-level polarity classifier, and the multi-target extractor is more 545 suitable for long sentences. Moreover, we find that the pipeline model consistently surpasses both the joint model and the collapsed model. Acknowledgments We thank the anonymous reviewers for their insightful feedback. We also thank Li Dong for his helpful comments and suggestions. This work was supported by the National Key Research and Development Program of China (2016YFB1000101). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of ACL. Feifan Fan, Yansong Feng, and Dongyan Zhao. 2018. Multi-grained attention network for aspect-level sentiment classification. In Proceedings of EMNLP. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. arXiv preprint arXiv:1805.04787. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Proceedings of ACL. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehension. In Proceedings of IJCAI. Binxuan Huang and Kathleen Carley. 2018. Parameterized convolutional neural networks for aspect level sentiment classification. In Proceedings of EMNLP. Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single-and cross-domain setting with conditional random fields. In Proceedings of EMNLP. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of ACL. Yoon Kim. 2014. Convolutional neural networks for sentence classification. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. arXiv preprint arXiv:1707.07045. 
Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of ACL. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In Proceedings of AAAI. Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of EMNLP. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of CIKM. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1–167. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Finegrained opinion mining with recurrent neural networks and word embeddings. In Proceedings of EMNLP. Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In Proceedings of EMNLP. Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends R⃝in Information Retrieval, 2(1–2):1–135. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, ALSmadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph´ee De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of SemEval-2016. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of SemEval 2015. 546 Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of SemEval-2014. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems, 108:42–49. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP. Azriel Rosenfeld and Mark Thurston. 1971. Edge and curve detection for visual scene analysis. IEEE Transactions on computers, (5):562–569. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of ICLR. Lei Shu, Hu Xu, and Bing Liu. 2017. Lifelong learning crf for supervised aspect extraction. In Proceedings of the ACL. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sentiment classification. In Proceedings of COLING. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. arXiv preprint arXiv:1605.08900. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In Proceedings of ICLR. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of EMNLP. Yequan Wang, Minlie Huang, Li Zhao, et al. 2016b. 
Attention-based lstm for aspect-level sentiment classification. In Proceedings of EMNLP. Hu Xu, Bing Liu, Lei Shu, and Philip S Yu. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. In Proceedings of ACL. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. arXiv preprint arXiv:1805.07043. Meishan Zhang, Yue Zhang, and Duy Tin Vo. 2015. Neural networks for open domain targeted sentiment. In Proceedings of EMNLP.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5176–5181 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5176 NNE: A Dataset for Nested Named Entity Recognition in English Newswire Nicky Ringland1 † Xiang Dai1,2 ‡ Ben Hachey1,3 Sarvnaz Karimi2 Cecile Paris2 James R. Curran1 1University of Sydney, Sydney, Australia 2CSIRO Data61, Sydney, Australia 3Digital Health CRC, Sydney, Australia †[email protected][email protected] Abstract Named entity recognition (NER) is widely used in natural language processing applications and downstream tasks. However, most NER tools target flat annotation from popular datasets, eschewing the semantic information available in nested entity mentions. We describe NNE—a fine-grained, nested named entity dataset over the full Wall Street Journal portion of the Penn Treebank (PTB). Our annotation comprises 279,795 mentions of 114 entity types with up to 6 layers of nesting. We hope the public release of this large dataset for English newswire will encourage development of new techniques for nested NER. 1 Introduction Named entity recognition—the task of identifying and classifying entity mentions in text—plays a crucial role in understanding natural language. It is used for many downstream language processing tasks, e.g., coreference resolution, question answering, summarization, entity linking, relation extraction and knowledge base population. However, most NER tools are designed to capture flat mention structure over coarse entity type schemas, reflecting the available annotated datasets. Focusing on flat mention structures ignores important information that can be useful for downstream tasks. Figure 1 includes examples of nested named entities illustrating several phenomena: • Entity-entity relationships can be embedded in nested mentions. For instance, the location of the ‘Ontario Supreme Court’ is indicated by the embedded STATE mention ‘Ontario’; • Entity attribute values can be embedded in nested mentions. For instance, the title is the embedded ROLE ‘Former U.N. Ambassador’, which also encodes the employment relation ... the Ontario Supreme Court said it will postpone ... state government Former U.N. Ambassador Jeane Kirkpatrick ... org:other role first name role per role per ... this wealthy Southern California beach community ... state region Figure 1: Example nested mentions in NNE. between the PERSON ‘Jane Kirkpatrick‘ and ORG ‘U.N.’; • Part-whole relationships can be encoded in nested mention structure. For instance, the REGION ‘Southern California’ is part of the STATE ‘California’. Recent work has demonstrated increasing interest in nested entity structure, including local approaches (Xu et al., 2017; Sohrab and Miwa, 2018), hypergraph-based approaches (Lu and Roth, 2015; Muis and Lu, 2017; Katiyar and Cardie, 2018; Wang and Lu, 2018), cascaded approaches (Alex et al., 2007; Ju et al., 2018), and parsing approaches (Finkel and Manning, 2009; Wang et al., 2018). See Dai (2018) for a survey. Yet these techniques have seen little translation from the research literature to toolsets or downstream applications. To facilitate ongoing research on nested NER, we introduce NNE—a large, manuallyannotated, nested named entity dataset over English newswire. This new annotation layer over the Wall Street Journal portion of the PTB includes 279,795 mentions. All mentions are annotated, including nested structures with depth as high as 5177 six layers. 
A fine-grained entity type schema is used, extending the flat BBN (Weischedel and Brunstein, 2005) annotation from 64 to 114 entity types. We are publicly releasing the standoff annotations along with detailed annotation guidelines and scripts for knitting annotations onto the underlying PTB corpus.1 Benchmark results using recent state-of-the-art approaches demonstrate that good accuracy is possible, but complexity and run time are open challenges. As a new layer over the already rich collection of PTB annotations, NNE provides an opportunity to explore joint modelling of nested NER and other tasks at an unprecedented scale and detail. 2 The NNE dataset Annotation Scheme: BBN (Weischedel and Brunstein, 2005) is a pronoun coreference and entity type corpus, annotated with 64 types of entities, numerical and time expressions. We use its flat entity schema as a starting point to design our schema. We analyzed existing BBN annotations to develop and automatically apply structured preannotation for predictable entity types. Additional fine-grained categories and further structural elements of entities, inspired by Sekine et al. (2002) and Nothman et al. (2013), are used to augment the BBN schema. We adhere to the following general principles when annotating nested named entities in the corpus: • Annotate all named entities, all time and date (TIMEX) and numerical (NUMEX) entities, including all non-sentence initial words in title case, and instances of proper noun mentions that are not capitalized. • Annotate all structural elements of entities. These elements could be other entities, such as ‘Ontario’ (STATE) in ‘Ontario Supreme Court’ (GOVERNMENT), or structural components such as ‘40’ (CARDINAL) and ‘miles’ (UNIT) in ‘40 miles’ (QUANTITY:1D), as well as the internal structure induced by syntactic elements, such as coordination. • Add consistent substructure to avoid spurious ambiguity. For example, the token ‘Toronto’, which is a CITY, would be labeled as part 1https://github.com/nickyringland/nested named entities of an ORG:EDU organization span ‘University of Toronto’. We add layers of annotations to allow each token to be annotated as consistently as possible, e.g., [University of [Toronto]CITY]ORG:EDU. • Add additional categories to avoid category confusion. Some entities are easy to identify, but difficult to categorize consistently. For instance, a hotel (or any business at a fixed location) has both organizational and locative qualities, or is at least treated metonymously as a location. Rather than requiring annotators to make an ambiguous decision, we elect to add category HOTEL to simplify the individual annotation decision. We also apply this principle when adding MEDIA, FUND, and BUILDING categories. • Pragmatic annotation. Many annotation decisions are ambiguous and difficult, thus may require substantial research. For instance, knowing that ‘The Boeing Company’ was named after founder ‘William E. Boeing’ would allow us to annotate ‘Boeing’ with an embedded PERSON entity. However, this does not apply for other companies, such as ‘Sony Corporation’. To let annotation decisions be made without reference to external knowledge, we label all tokens that seem to be the names of people as NAME, regardless of whether they are actually a person’s name. Entity types and mention frequencies can be found in Appendix A. See Ringland (2016) for annotation guidelines and extended discussion of annotation decisions. 
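To make the standoff representation concrete, nested mentions can be stored simply as token-offset spans with entity types, where nesting is containment between spans. The sketch below is illustrative only; the field names and offsets do not reflect the released file format.

```python
# Minimal sketch of nested mentions as standoff (start, end, type) spans.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mention:
    start: int   # token offset (inclusive)
    end: int     # token offset (exclusive)
    type: str    # one of the entity types in the schema

# "... at the University of Toronto ..." with hypothetical token offsets:
mentions = [
    Mention(2, 5, "ORG:EDU"),  # [University of [Toronto]CITY]ORG:EDU
    Mention(4, 5, "CITY"),     # Toronto, nested inside the ORG:EDU span
]

def nested_in(inner: Mention, outer: Mention) -> bool:
    """True if `inner` is properly contained in `outer` (identical spans excluded)."""
    return (outer.start <= inner.start and inner.end <= outer.end
            and (inner.start, inner.end) != (outer.start, outer.end))

assert nested_in(mentions[1], mentions[0])
```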
Annotation Process: Although some existing annotation tools allow nested structures (e.g., Brat (Stenetorp et al., 2012)), we built a custom tool that allowed us to create a simple and fast way to add layers of entities, and suggest reusing existing structured annotations for the same span. Using the annotations from BBN as underlying annotations, the annotator is shown a screen with the target sentence, as well as the previous and next sentences, if any. A view of the whole article is also possible to help the annotator with contextual cues. When annotators select a span, they are prompted with suggestions based on their own previous annotations, and common entities. Some entities are repeated frequently in an article, 5178 Depth Number % Three most frequent categories 1 118,525 45.5 CORP (22,752), DATE (15,927), PER (13,460) 2 106,144 40.8 CARDINAL (19,834), NAME (18,640), UNIT (14,871) 3 31,573 12.1 CARDINAL (11,697), MULT (5,859), NAME (3,450) 4 3,813 1.5 CARDINAL (1,650), MULT (1,041), UNIT (400) 5 327 0.1 CARDINAL (154), MULT (96), UNIT (51) 6 4 0.0 UNIT (1), CITY-STATE (1), MULT (1) Table 1: Number of spans at each layer of nesting with their most frequent categories. or over many articles in the corpus. The annotation tool allows a user to add a specified annotation to all strings matching those tokens in the same article, or in all articles. Four annotators, each with a background in linguistics and/or computational linguistics were selected and briefed on the annotation task and purpose. The WSJ portion of the PTB consists of 25 sections (00–24). Each annotator started with a subset of section 00 as annotation training, and was given feedback before moving on to other sections. Weekly meetings were held with all annotators to discuss ambiguities in the guidelines, gaps in the annotation categories, edge cases and ambiguous entities and to resolve discrepancies. Total annotation time for the corpus was 270 hours, split between the four annotators. Sections 00 and 23 were doubly annotated, and section 02 was annotated by all four annotators. An additional 17 hours was used for adjudicating these sections annotated by multiple annotators. Dataset Analysis: The resulting NNE dataset includes a large number of entity mentions of substantial depth, with more than half of mentions occurring inside another mentions. Of the 118,525 top-level entity mentions, 47,020 (39.6%) do not have any nested structure embedded. The remaining 71,505 mentions contain 161,270 mentions, averaging 2.25 structural mentions per each of these top-layer entity mentions. Note that one span can be assigned multiple entity types. For example, the span ‘1993’ can be annotated as both DATE and YEAR. In NNE, 19,144 out of 260,386 total spans are assigned multiple types. Table 1 lists the number of spans occurring at each depth. To measure how clearly the annotation guidelines delineate each category, and how reliable our annotations are, inter-annotator agreement was calculated using annotations on Section 02, which was annotated by all four annotators. An adjudicated version was created by deciding a correct existing candidate label from within the four possibilities, or by adjusting one of them on a token level. For the purposes of inter-annotator agreement, a tag stack is calculated for each word, essentially flattening each token’s nested annotation structure into one label. For example, the tag of token ‘California’ in the third sentence of Figure 1 is STATE REGION, while ‘beach’ is O O. 
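A rough sketch of this flattening, assuming mentions are given as token-offset spans, stacks are ordered innermost-first, and shorter stacks are padded with O (the adjudication tooling itself may differ):

```python
# Flatten nested span annotations into one per-token "tag stack" label.
def tag_stacks(n_tokens, mentions):
    # mentions: list of (start, end, type) with end exclusive
    stacks = [[] for _ in range(n_tokens)]
    # shorter (inner) spans first so inner labels precede outer ones
    for start, end, etype in sorted(mentions, key=lambda m: m[1] - m[0]):
        for t in range(start, end):
            stacks[t].append(etype)
    depth = max((len(s) for s in stacks), default=0)
    return [" ".join(s + ["O"] * (depth - len(s))) for s in stacks]

# "... this wealthy Southern California beach community ..."
tokens = ["this", "wealthy", "Southern", "California", "beach", "community"]
spans = [(2, 4, "REGION"), (3, 4, "STATE")]  # [Southern [California]STATE]REGION
print(tag_stacks(len(tokens), spans)[3])     # -> "STATE REGION"
print(tag_stacks(len(tokens), spans)[4])     # -> "O O"
```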
Agreement using Fleiss’ kappa over all tokens is 0.907. Considering only tokens that are part of at least one mention according to at least one annotator, Fleiss’ kappa is 0.832. Both results are above the 0.8 threshold for good reliability (Carletta, 1996). Average precision, recall and F1 score across four annotators with respect to the adjudicated gold standard are 94.3, 91.8 and 93.0. 3 Benchmark results We evaluate three existing NER models on our dataset: (1) the standard BiLSTM-CRF model which can handle only flat entities (Lample et al., 2016); (2) hypergraph-based (Wang and Lu, 2018); and, (3) transition-based (Wang et al., 2018) models. The latter two models were proposed to recognize nested mentions. We follow CoNLL evaluation schema in requiring an exact match of mention start, end and entity type (Sang and Meulder, 2003). We use sections 02 as development set, sections 23 and 24 as test set, and the remaining sections as training set. The model that performs best on the development set is evaluated on the test set for the final result. Since the standard BiLSTM-CRF model cannot handle nested entities, we use either the outermost (BiLSTMCRF-TOP in Table 2) or the innermost mentions (BiLSTM-CRF-BOTTOM) for training. We also combine the outputs from these two flat NER models, and denote the result as BiLSTM-CRFBOTH. From Table 2, we can see that single flat NER models can achieve high precision but suffer from low recall. For example, the model pretrained on outermost (top) mentions has 38.0 recall, as 5179 P R F1 BiLSTM-CRF-TOP 89.9 38.0 53.5 BiLSTM-CRF-BOTTOM 93.8 62.0 74.7 BiLSTM-CRF-BOTH 92.2 85.8 88.9 Hypergraph 91.8 91.0 91.4 Transition 77.4 70.1 73.6 Table 2: NER results on NNE using different methods. around 60% of mentions are nested within others. The hypergraph-based model performs best on our dataset, presumably because it can capture mentions from different levels and does not suffer from issues of structural ambiguity during inference (Muis and Lu, 2017; Wang and Lu, 2018). However, its decoding speed of 9 words per second is slow due to the large number of entity categories of our dataset.2 The transition-based method has a higher decode speed of 57 words per second, but has much lower precision than flat NER models. 4 Related Work Other corpora with nested entities: We briefly compare existing annotated English corpora involving nested entities. A comparison of statistics between our dataset and two widely used benchmark datasets is shown in Table 3. The ACE corpora (Mitchell et al., 2004; Walker et al., 2005) consist of data of various types annotated for entities, relations and events. The entity component of ACE is framed in terms of nominal modification, and nested mentions are only annotated in nominal mentions, not inside other named entity mentions. For example, in ACE2005, ‘Secretary of Homeland Security Tom Ridge’ is annotated as a PERSON, containing two other PERSON annotations: ‘Secretary’ and ‘Secretary of Homeland Security’. In contrast, our annotations capture more interactions between different semantic spans: PERSON consisting of ROLE and NAME, and ROLE containing GOVERNMENT. The GENIA corpus (Kim et al., 2003) is a richly-annotated corpus for bio-text mining that has 36 entity types among 2,000 MEDLINE abstracts. 
Due to the biomedical domain’s specialized terminology and complex naming conventions, entities of interest, such as genes, proteins or 2The decoding time complexity of the method proposed by Wang and Lu (2018) is O(cmn), where m is the number of entity types, n is the sentence length, and c is the maximal mention length. Item NNE GENIA ACE2005 Documents 2,312 2,000 464 Sentences 49,208 18,546 12,548 Sentences 32,387 9,533 4,266 w. nesting Tokens 1.1M 0.5M 0.3M Mentions 279,795 92,681 30,966 Entity types 114 36 7 Mentions 5.69 4.99 2.46 per sentence Top-level mentions 118,525 76,582 23,464 Maximum depth 6 4 6 Table 3: A comparison between NNE and two commonly used corpora with nested entities. disease names, often nest. For example, the RNA ‘CIITA mRNA’ contains a DNA mention ‘CIITA’. In addition to these two commonly used nested entity corpora, Byrne (2007) and Alex et al. (2007) introduced datasets with nested entities in historical archive and biomedical domains, respectively. However, their datasets are not publicly available. Four percent of entity mentions annotated in the English entity discovery and linking task in TACKBP track include nesting (Ji et al., 2014). Resources built on the PTB: A lots of effort has been made on adding syntactic and semantic information to the PTB (Marcus et al., 1993). PropBank (Kingsbury et al., 2002) extended the PTB with the predicate argument relationships between verbs and their arguments. NomBank (Meyers et al., 2004) extended the argument structure for instances of common nouns. Vadas and Curran (2007), and Ficler and Goldberg (2016) extended the PTB with noun phrase and coordination annotations, respectively. Our dataset is built on top of the PTB and enriches the full ecosystem of resources and systems that stem from it. 5 Summary We present NNE, a large-scale, nested, finegrained named entity dataset. We are optimistic that NNE will encourage the development of new NER models that recognize structural information within entities, and therefore understand finegrained semantic information captured. Additionally, our annotations are built on top of the PTB, so that the NNE dataset will allow joint learning models to take advantage of semantic and syntactic annotations, and ultimately to understand and exploit the true structure of named entities. 5180 Acknowledgments We would like to thank annotators for their excellent work: Kellie Webster, Vivian Li, Joanne Yang and Kristy Hughes. We also thank three anonymous reviewers for their insightful comments. References Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In BioNLP, pages 65–72. Kate Byrne. 2007. Nested named entity recognition in historical archive text. In ICSC, pages 589–596. Jean Carletta. 1996. Assessing agreement on classification tasks: The Kappa statistic. Comput. Linguist., 22(2):249–254. Xiang Dai. 2018. Recognizing complex entity mentions: A review and future directions. In ACL-SRW, pages 37–44. Jessica Ficler and Yoav Goldberg. 2016. Coordination annotation extension in the Penn tree bank. In ACL, pages 834–842. Jenny Rose Finkel and Christopher Manning. 2009. Nested named entity recognition. In EMNLP, pages 141–150. Heng Ji, Joel Nothman, and Ben Hachey. 2014. Overview of TAC-KBP2014 entity discovery and linking tasks. In TAC, pages 1333–1339. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In NAACL, pages 1446–1459. Arzoo Katiyar and Claire Cardie. 2018. 
Nested named entity recognition revisited. In NAACL, pages 861– 871. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun ichi Tsujii. 2003. GENIA corpus a semantically annotated corpus for bio-textmining. Bioinformatics, 19:i180– i182. Paul Kingsbury, Martha Palmer, and Mitch Marcus. 2002. Adding predicate argument structure to the Penn Treebank. In HLT, pages 252–256. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL, pages 260–270. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In EMNLP, pages 857–867. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Comput. Linguist., 19. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The nombank project: An interim report. Alexis Mitchell, Stephanie Strassel, Shudong Huang, and Ramez Zakhary. 2004. ACE 2004 multilingual training corpus. LDC. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In EMNLP, pages 2608– 2618. Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning multilingual named entity recognition from Wikipedia. Artif. Intell., 194:151–175. Nicky Ringland. 2016. Structured Named Entities. Ph.D. thesis, University of Sydney. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In CoNLL. Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In LREC. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In EMNLP, pages 2843–2849. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In EACL, pages 102–107. David Vadas and James Curran. 2007. Adding noun phrase structure to the Penn Treebank. In ACL, pages 240–247. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2005. ACE 2005 multilingual training corpus. LDC. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In EMNLP, pages 204–214. Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In EMNLP, pages 1011–1017. Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. LDC. Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In ACL, pages 1237–1247. 
A Full annotation scheme 5181 Category Frequency Category Frequency Category Frequency CARDINAL 43873 STREET 475 QUANTITY2D 81 NAME 28537 GRPORG 437 PRODUCTFOOD 80 ORGCORP 23339 ORGPOLITICAL 436 SUBURB 78 UNIT 19289 VEHICLE 432 GRPLOC 63 DATE 17381 LAW 419 HOTEL 55 PER 14960 ORGEDU 411 QUANTITYOTHER 55 DURATION 13655 CONTINENT 354 FUND 54 MONEY 12640 BUILDING 346 SONG 54 MULT 7851 SEASON 337 SPACE 53 FIRST 6797 GPE 333 RIVER 52 CITY 6723 FOLD 313 WAR 51 PERCENT 6542 MIDDLE 313 CHEMICAL 45 REL 6170 TIME 296 BRIDGE 44 CORPJARGON 5560 WEIGHT 293 PLAY 42 HON 5524 OCEAN 291 STADIUM 37 NATIONALITY 5193 LOCATIONOTHER 261 AWARD 36 GOVERNMENT 4674 EVENT 260 ORGRELIGIOUS 35 COUNTRY 4047 DISEASE 246 AIRPORT 32 QUAL 3903 QUANTITY1D 220 ANIMATE 29 YEAR 3421 CITYSTATE 220 GOD 29 MONTH 3385 WOA 207 HOSPITAL 25 STATE 3245 TVSHOW 172 ATTRACTION 24 ORDINAL 2590 ELECTRONICS 167 WEAPON 23 IPOINTS 2395 SPORTSTEAM 166 MUSEUM 17 ROLE 2368 DATEOTHER 164 ENERGY 17 RATE 2141 QUANTITY3D 156 SPEED 14 MEDIA 1712 NAMEMOD 155 PAINTING 13 DAY 1631 GRPPER 154 BAND 10 NUMDAY 1495 BOOK 149 SPORTSSEASON 8 INI 1445 ARMY 139 SCINAME 7 NORPOTHER 1247 FACILITY 129 ADDRESSNON 3 ORGOTHER 1099 PRODUCTDRUG 116 ALBUM 3 PERIODIC 1066 HURRICANE 107 TEMPERATURE 2 REGION 864 SPORTSEVENT 100 NATURALDISASTER 2 NORPPOLITICAL 731 RELIGION 99 CONCERT 2 AGE 661 NICKNAME 96 STATION 1 INDEX 657 LANGUAGE 92 BORDER 1 PRODUCTOTHER 656 FILM 89 CHANNEL 1
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182–5192 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5182 Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks Hongyu Lin1,3, Yaojie Lu1,3, Xianpei Han1,2,∗, Le Sun1,2 1Chinese Information Processing Laboratory 2State Key Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences, Beijing, China 3University of Chinese Academy of Sciences, Beijing, China {hongyu2016,yaojie2017,xianpei,sunle}@iscas.ac.cn Abstract Sequential labeling-based NER approaches restrict each word belonging to at most one entity mention, which will face a serious problem when recognizing nested entity mentions. In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., although a mention can nest other mentions, they will not share the same head word. Specifically, we propose Anchor-Region Networks (ARNs), a sequence-to-nuggets architecture for nested mention detection. ARNs first identify anchor words (i.e., possible head words) of all mentions, and then recognize the mention boundaries for each anchor word by exploiting regular phrase structures. Furthermore, we also design Bag Loss, an objective function which can train ARNs in an end-toend manner without using any anchor word annotation. Experiments show that ARNs achieve the state-of-the-art performance on three standard nested entity mention detection benchmarks. 1 Introduction Named entity recognition (NER), or more generally entity mention detection1, aims to identify text spans pertaining to specific entity types such as Person, Organization and Location. NER is a fundamental task of information extraction which enables many downstream NLP applications, such as relation extraction (GuoDong et al., 2005; Mintz et al., 2009), event extraction (Ji and Grishman, 2008; Li et al., 2013) and machine reading comprehension (Rajpurkar et al., 2016; Wang et al., 2016). Previous approaches (Zhou and Su, 2002; Chieu and Ng, 2002; Bender et al., 2003; Settles, 2004; ∗Corresponding author. 1In entity mention detection, a mention can be either a named, nominal or pronominal reference of an entity (Katiyar and Cardie, 2018). The minister of the department of education convened a meeting. ORG PER Figure 1: An example of nested entity mentions. Due to the nested structure, “the”,“department”,“of” and “education” belong to both PER and ORG mentions. Lample et al., 2016) commonly regard NER as a sequential labeling task, which generate label sequence for each sentence by assigning one label to each token. These approaches commonly restrict each token belonging to at most one entity mention and, unfortunately, will face a serious problem when recognizing nested entity mentions, where one token may belong to multiple mentions. For example in Figure 1, an Organization entity mention “the department of education” is nested in another Person entity mention “the minister of the department of education”. Nested entity mentions are very common. For instance, in the well-known ACE2005 and RichERE datasets, more than 20% of entity mentions are nested in other mentions. Therefore, it is critical to consider nested mentions for real-world applications and downstream tasks. 
In this paper, we propose a sequence-to-nuggets approach, named as Anchor-Region Networks (ARNs), which can effectively detect all entity mentions by modeling and exploiting the headdriven phrase structures (Pollard and Sag, 1994; Collins, 2003) of them. ARNs originate from two observations. First, although an entity mention can nest other mentions, they will not share the same head word. And the head word of a mention can provide strong semantic evidence for its entity type (Choi et al., 2018). For example in Figure 1, although the ORG mention is nested in the PER mention, they have different head words “department” and “minister” respectively, and these head words strongly indicate their corresponding entity types to be ORG and PER. Second, entity men5183 Anchor words Sentence Mention Nuggets The minister of the department of education convened a meeting. … PER … ORG … minister … department … The minister ... education… …the department of education… [The minister ... education] [the department of education] PER ORG PER ORG Anchor Detector Region Recognizer Figure 2: The overall architecture of ARNs. Here “minister” and “department” are detected anchor words for two mentions respectively. tions mostly have regular phrase structures. For the two mentions in Figure 1, they share the same “DET NN of NP” structure, where the NN after DET are their head words. Based on above observations, entity mentions can be naturally detected in a sequence-to-nuggets manner by 1) identifying the head words of all mentions in a sentence; and 2) recognizing entire mention nuggets centered at detected head words by exploiting regular phrase structures of entity mentions. To this end, we propose ARNs, a new neural network-based approach for nested mention detection. Figure 2 shows the architecture of ARNs. First, ARNs employs an anchor detector network to identify whether each word is a head word of an entity mention, and we refer the detected words as anchor words. After that, a region recognizer network is used to determine the mention boundaries centering at each anchor word. By effectively capturing head-driven phrase structures of entity mentions, the proposed ARNs can naturally address the nested mention problem because different mentions have different anchor words, and different anchor words correspond to different mention nuggets. Furthermore, because the majority of NER datasets are not annotated with head words, they cannot be directly used to train our anchor detector. To address this issue, we propose Bag Loss, an objective function which can be used to train ARNs in an end-to-end manner without any anchor word annotation. Specifically, our Bag Loss is based on at-least-one assumption, i.e., each mention should have at least one anchor word, and the anchor word should strongly indicate its entity type. Based on this assumption, Bag Loss can automatically select the best anchor word within each mention during training, according to the association between words and the entity type of the mention. For example, given an ORG training instance “the department of education”, Bag Loss will select “department” as the anchor word of this mention based on its tight correlation with type ORG. While other words in the mention, such as “the” and “of”, will not be regarded as anchor words, because of their weak association with ORG type. We conducted experiments on three standard nested entity mention detection benchmarks, including ACE2005, GENIA and TAC-KBP2017 datasets. 
Experiments show that ARNs can effectively detect nested entity mentions and achieve the state-of-the-art performance on all above three datasets. For better reproduction, we openly release the entire project at github.com/ sanmusunrise/ARNs. Generally, our main contributions are: • We propose a new neural network architecture named as Anchor-Region Networks. By effectively modeling and leveraging the headdriven phrase structures of entity mentions, ARNs can naturally handle the nested mention detection problem and achieve the stateof-the-art performance on three benchmarks. To the best of our knowledge, this is the first work which attempts to exploit the headdriven phrase structures for nested NER. • We design an objective function, named as Bag Loss. By exploiting the association between words and entity types, Bag Loss can effectively learn ARNs in an end-to-end manner, without using any anchor word annotation. • Head-driven phrase structures are widely spread in natural language. This paper proposes an effective neural network-based solution for exploiting this structure, which can potentially benefit many NLP tasks, such as semantic role labeling (Zhou and Xu, 2015; He et al., 2017) and event extraction (Chen et al., 2015; Lin et al., 2018). 2 Related Work Nested mention detection requires to identify all entity mentions in texts, rather than only outmost mentions in conventional NER. This raises a critical issue to traditional sequential labeling models 5184 because they can only assign one label to each token. To address this issue, mainly two kinds of methods have been proposed. Region-based approaches detect mentions by identifying over subsequences of a sentence respectively, and nested mentions can be detected because they correspond to different subsequences. For this, Finkel and Manning (2009) regarded nodes of parsing trees as candidate subsequences. Recently, Xu et al. (2017) and Sohrab and Miwa (2018) tried to directly classify over all subsequences of a sentence. Besides, Wang et al. (2018) proposed a transition-based method to construct nested mentions via a sequence of specially designed actions. Generally, these approaches are straightforward for nested mention detection, but mostly with high computational cost as they need to classify over almost all sentence subsequences. Schema-based approaches address nested mentions by designing more expressive tagging schemas, rather than changing tagging units. One representative direction is hypergraph-based methods (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018), where hypergraphbased tags are used to ensure nested mentions can be recovered from word-level tags. Besides, Muis and Lu (2017) developed a gap-based tagging schema to capture nested structures. However, these schemas should be designed very carefully to prevent spurious structures and structural ambiguity (Wang and Lu, 2018). But more expressive, unambiguous schemas will inevitably lead to higher time complexity during both training and decoding. Different from previous methods, this paper proposes a new architecture to address nested mention detection. Compared with region-based approaches, our ARNs detect mentions by exploiting head-driven phrase structures, rather than exhaustive classifying over subsequences. Therefore ARNs can significantly reduce the size of candidate mentions and lead to much lower time complexity. 
Compared with schema-based approaches, ARNs can naturally address nested mentions since different mentions will have different anchor words. There is no need to design complex tagging schemas, no spurious structures and no structural ambiguity. Furthermore, we also propose Bag Loss, which can train ARNs in an end-to-end manner without any anchor word annotation. The design of Bag Loss is partially inspired by multi-instance learning (MIL) (Zhou and Zhang, 2007; Zhou et al., 2009; Surdeanu et al., 2012), but with a different target. MIL aims to predict a unified label of a bag of instances, while Bag Loss is proposed to train ARNs whose anchor detector is required to predict the label of each instance. Therefore previous MIL methods are not suitable for training ARNs. 3 Anchor-Region Networks for Nested Entity Mention Detection Given a sentence, Anchor-Region Networks detect all entity mentions in a two-step paradigm. First, an anchor detector network identifies anchor words and classifies them into their corresponding entity types. After that, a region recognizer network is applied to recognize the entire mention nugget centering at each anchor word. In this way, ARNs can effectively model and exploit head-driven phrase structures of entity mentions: the anchor detector for recognizing possible head words and the region recognizer for capturing phrase structures. These two modules are jointly trained using the proposed Bag Loss, which learns ARNs in an end-to-end manner without using any anchor word annotation. This section will describe the architecture of ARNs. And Bag Loss will be introduced in the next section. 3.1 Anchor Detector An anchor detector is a word-wise classifier, which identifies whether a word is an anchor word of an entity mention of specific types. For the example in Figure 1, the anchor detector should identify that “minister” is an anchor word of a PER mention and “department” is an anchor word of an ORG mention. Formally, given a sentence x1, x2, ..., xn, all words are first mapped to a sequence of word representations x1, x2, ..., xn where xi is a combination of word embedding, part-of-speech embedding and character-based representation of word xi following Lample et al. (2016). Then we obtain a context-aware representation hA i of each word xi using a bidirectional LSTM layer: −→ hA i = LSTM(xi, −−→ hA i−1) ←− hA i = LSTM(xi, ←−− hA i+1) hA i = [ −→ hA i ; ←− hA i ] (1) The learned representation hA i is then fed into a multi-layer perceptron(MLP) classifier, which 5185 computes the scores OA i of the word xi being an anchor word of specific entity types (or NIL if this word is not an anchor word): OA i = MLP(hA i ) (2) where OA i ∈R|C| and |C| is the number of entity types plus one NIL class. Finally a softmax layer is used to normalize OA i to probabilities: P(cj|xi) = eOA ij P|C| k=1 eOA ik (3) where OA ij is the jth element in OA i , P(cj|xi) is the probability of word xi being an anchor word of class cj. Note that because different mentions will not share the same anchor word, the anchor detector can naturally solve nested mention detection problem by recognizing different anchor words for different mentions. 3.2 Region Recognizer Given an anchor word, ARNs will determine its exact mention nugget using a region recognizer network. 
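Before detailing the region recognizer, note that the anchor detector of Equations (1)–(3) reduces to a per-token classifier over entity types plus NIL. The following is a minimal PyTorch sketch with placeholder dimensions, omitting the POS embedding and character-level features used in the full model:

```python
# Minimal sketch of the anchor detector (Equations 1-3); dimensions are placeholders.
import torch
import torch.nn as nn

class AnchorDetector(nn.Module):
    def __init__(self, input_dim=200, hidden_dim=128, num_classes=8):
        super().__init__()
        # num_classes = |C| = number of entity types + NIL
        self.bilstm = nn.LSTM(input_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                                 nn.Tanh(),
                                 nn.Linear(hidden_dim, num_classes))

    def forward(self, x):                       # x: (batch, seq_len, input_dim)
        h, _ = self.bilstm(x)                   # context-aware representations h^A_i
        logits = self.mlp(h)                    # scores O^A_i over the |C| classes
        return torch.softmax(logits, dim=-1)    # P(c_j | x_i)
```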
For the example in Figure 1, the region recognizer will recognize that “the minister of the department of education” is the mention nugget for anchor word “minister” and “the department of education” is the mention nugget for anchor word “department”. Inspired by the recent success of pointer networks (Vinyals et al., 2015; Wang and Jiang, 2016), this paper designs a pointer-based architecture to recognize the mention boundaries centering at an anchor word. That is, our region recognizer will detect the mention nugget “the department of education” for anchor word “department” by recognizing “the” to be the left boundary and “education” to be the right boundary. Similar to the anchor detector, a bidirectional LSTM layer is first applied to obtain the contextaware representation hR i of word xi. For recognizing mention boundaries, local features commonly play essential roles. For instance, a noun before a verb is an informative boundary indicator for entity mentions. To capture such local features, we further introduce a convolutional layer upon hR i : ri = tanh(W hR i−k:i+k + b) (4) where hR i−k:i+k is the concatenation of vectors from hR i−k to hR i+k, W and b are the convolutional kernel and the bias term respectively. k is the (one-side) window size of convolutional layer. Finally, for each anchor word xi, we compute its left mention boundary score Lij and right mention boundary score Rij at word xj by Lij = tanh(rT j Λ1hR i + U1rj + b1) Rij = tanh(rT j Λ2hR i + U2rj + b2) (5) In the above two equations, the first term within the tanh function computes the score of word xj serving as the left/right boundary of a mention centering at word xi. And the second term models the possibility of word xj itself serving as the boundary universally. After that, we select the best left boundary word xj and best right boundary word xk for anchor word xi, and the nugget {xj, ..., xi, ..., xk} will be a recognized mention. 4 Model Learning with Bag Loss This section describes how to train ARNs using existing NER datasets. The main challenge here is that current NER corpus are not annotated with anchor words of entity mentions, and therefore they cannot be directly used to train the anchor detector. To address this problem, we propose Bag Loss, an objective function which can effectively learn ARNs in an end-to-end manner, without using any anchor word annotation. Intuitively, one naive solution is to regard all words in a mention as its anchor words. However, this naive solution will inevitably result in two severe problems. First, a word may belong to different mentions when nested mentions exist. Therefore this naive solution will lead to ambiguous and noisy anchor words. For the example in Figure 1, it is unreasonable to annotate the word “department” as an anchor word of both PER and ORG mentions, because it has little association to PER type although the PER mention also contains it. Second, many words in a mention are just function words, which are not associated with its entity type. For example, words “the”,“of” and “education” in “the department of education” are not associated with its type ORG. Therefore annotating them as anchor words of the ORG mention will introduce remarkable noise. To resolve the first problem, we observe that a word can only be the anchor word of the innermost mention containing it. This is because a mention nested in another mention can be regarded as a replaceable component, and changing it will not affect the structure of outer mentions. 
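For concreteness, the boundary scoring of Equations (4)–(5) above can be sketched as follows. Dimensions are placeholders and, for brevity, a single anchor position shared across the batch is assumed; the full model may differ in details.

```python
# Rough sketch of Equations (4)-(5): score every word as the left/right
# boundary of the mention centered at a given anchor word.
import torch
import torch.nn as nn

class RegionRecognizer(nn.Module):
    def __init__(self, input_dim=200, hidden_dim=128, conv_dim=128, window=3):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden_dim, conv_dim,
                              kernel_size=2 * window + 1, padding=window)
        self.left_bi = nn.Bilinear(conv_dim, 2 * hidden_dim, 1)   # r_j^T Lambda_1 h_i
        self.left_u = nn.Linear(conv_dim, 1)                      # U_1 r_j + b_1
        self.right_bi = nn.Bilinear(conv_dim, 2 * hidden_dim, 1)
        self.right_u = nn.Linear(conv_dim, 1)

    def forward(self, x, anchor_idx):            # x: (batch, seq_len, input_dim)
        h, _ = self.bilstm(x)                                          # h^R_i
        r = torch.tanh(self.conv(h.transpose(1, 2))).transpose(1, 2)   # local features r_j
        h_i = h[:, anchor_idx].unsqueeze(1).expand(-1, r.size(1), -1).contiguous()
        L = torch.tanh(self.left_bi(r, h_i) + self.left_u(r)).squeeze(-1)    # L_ij
        R = torch.tanh(self.right_bi(r, h_i) + self.right_u(r)).squeeze(-1)  # R_ij
        return L.argmax(dim=-1), R.argmax(dim=-1)   # best left / right boundary words
```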
For the case in Figure 1, if we replace the nested mention “the department of education” by other ORG mention(e.g., changing it to “State”), the type of the 5186 [ The minister of [ the department of education ]ORG ]PER convened a meeting. ORG PER NIL B0=B1=B2={The, minister, of} → PER NIL NIL B3=B4=B5=B6 ={the, department, of education} →ORG B7={convened} → NIL B8={a} → NIL B9={meeting} → NIL Figure 3: An illustration of bags. Bi represents the bag where word xi is in. This sentence forms five bags, two of which correspond to two entity mentions and three of which correspond to NIL. outer mention will not change. Therefore, words in a nested mention should not be regarded as the anchor word of outer mentions, and therefore a word can only be assigned as the anchor word of the innermost mention containing it. To address the second problem, we design Bag Loss based on the at-least-one assumption, i.e., for each mention at least one word should be regarded as its anchor word. Specifically, we refer to all words belonging to the same innermost mention as a bag. And the type of the bag is the type of that innermost mention. For example, in Figure 3,{the, minister, of} will form a PER bag, and {the, department, of education} will form an ORG bag. Besides, each word not covered by any mention will form a one-word bag with NIL type. So there are three NIL bags in Figure 3, including {convened}, {a} and {meeting}. Given a bag, Bag Loss will make sure that at least one word in each bag will be selected as its anchor word, and be assigned to the bag type. While other words in that bag will be classified into either the bag type or NIL. Bag Loss selects anchor words according to their associations with the bag type. That is, only words highly related to the bag type (e.g., “department” in “the department of education”) will be trained towards the bag type, and other irrelevant words (e.g., “the” and “of” in the above example) will be trained towards NIL. Bag Loss based End-to-End Learning. For ARNs, each training instance is a tuple x = (xi, xj, xk, ci), where xj, ..., xk is an entity mention with left boundary xj and right boundary xk. cj is its entity type and word xi is a word in this mention’s bag2. For each instance, Bag loss considers two situations: 1) If xi is its anchor word, the loss will be the sum of the anchor detector loss (i.e., the loss of correctly classifying xi into its bag type ci) and the region recognizer loss 2For words not in any mention, we define xj = xk = xi and ci = NIL, but their boundary will not be considered during optimization according to Equation (7). (i.e., the loss of correctly recognizing the mention boundary xj and xk); 2) If xi is not its anchor word, the loss will be only the anchor detector loss (i.e., correctly classifying xi into NIL). The final loss for this instance is a weighted sum of the loss of these two situations, where the weight are determined using the association between word xi and the bag type ci compared with other words in the same bag. Formally, Bag Loss is written as: L(xi; θ) = ωi · [−log P(ci|xi) + LR(xi; θ)] + (1 −ωi) · [−log P(NIL|xi)] (6) where −log P(ci|xi) is the anchor detector loss. LR(xi; θ) = Lleft(xi; θ) + Lright(xi; θ) is the loss for the region recognizer measuring how preciously the region recognizer can identify the boundaries centered at anchor word xi. 
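Taking the region-recognizer boundary loss and the weight ω_i as given (their definitions follow in Equations (7) and (8)), the per-word objective of Equation (6) is an ω-weighted interpolation between the anchor-word case and the NIL case; a minimal sketch:

```python
# Per-word Bag Loss of Equation (6); `boundary_loss` plays the role of
# L^R(x_i) from Equation (7) and `omega` that of w_i from Equation (8).
# Both are taken here as given tensors.
import torch

def bag_loss_word(p_bag_type, p_nil, boundary_loss, omega):
    anchor_term = -torch.log(p_bag_type) + boundary_loss  # x_i treated as an anchor word
    nil_term = -torch.log(p_nil)                          # x_i treated as NIL
    return omega * anchor_term + (1.0 - omega) * nil_term
```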
We define Lleft(xi; θ) using max-margin loss: Lleft(xi; θ) = ( 0, ci = NIL max(0, γ−Lij + max t̸=j Lit), ci ̸= NIL (7) where γ is a hyper-parameter representing the margin, and Lright(xi; θ) is similarly defined. Besides, ωi in Equation (6) measures the correlation between word xi and the bag type ci. Compared with other words in the same bag, a word xi should have larger wi if it has a tighter association with the bag type. Therefore, ωi can be naturally defined as: ωi = [ P(ci|xi) maxxt∈Bi P(ci|xt)]α. (8) where Bi denotes the bag xi belonging to, i.e., all words that share the same innermost mention with xi. α is a hyper-parameter controlling how likely a word will be regarded as an anchor word rather than regarded as NIL. α = 0 means that all words are annotated with the bag type. And α →+∞ means that Bag Loss will only choose the word with highest P(ci|xi) as anchor word, while all other words in the same bag will be regarded as NIL. Consequently, Bag Loss guarantees that 5187 at least one anchor word (the one with highest P(ci|xi), and its corresponding wi will be 1.0) will be selected for each bag. For other words that are not associated with the type (the ones with low P(ci|xi)), Bag Loss can make it to automatically learn towards NIL during training. 5 Experiments 5.1 Experimental Settings We conducted experiments on three standard English entity mention detection benchmarks with nested mentions: ACE2005, GENIA and TACKBP2017 (KBP2017) datasets. For ACE2005 and GENIA, we used the same setup as previous work (Ju et al., 2018; Wang et al., 2018; Wang and Lu, 2018; Katiyar and Cardie, 2018). For KBP2017, we evaluated our model on the 2017 English evaluation dataset (LDC2017E55), using previous RichERE annotated datasets (LDC2015E29, LDC2015E68, LDC2016E31 and LDC2017E02) as the training set except 20 randomly sampled documents reserved as development set. Finally, there were 866/20/167 documents for KBP2017 train/dev/test set. In ACE2005, GENIA and KBP2017, there are 22%, 10% and 19% mentions nested in other mentions respectively. We used Stanford CoreNLP toolkit (Manning et al., 2014) to preprocess all documents for sentence splitting and POS tagging. Adadelta update rule (Zeiler, 2012) is applied for optimization. Word embeddings are initialized with pretrained 200-dimension Glove (Pennington et al., 2014) vectors3. Hyper-parameters are tuned on the development sets4 apart from α in Equation (8), which will be further discussed in Section 5.4. 5.2 Baselines We compare ARNs with following baselines5: • Conventional CRF models, including LSTMCRF (Lample et al., 2016) and Multi-CRF. LSTM-CRF is a classical baseline for NER, which doesn’t consider nested mentions so only outmost mentions are used for training. MultiCRF is similar to LSTM-CRF but learns one 3http://nlp.stanford.edu/data/glove. 6B.zip 4The hyper-parameter configures are openly released together with our source code at github.com/ sanmusunrise/ARNs. 5As Wang and Lu (2018) reported, neural network-based baselines significantly outperform all non-neural methods. So we only compared with neural network-based baselines. model for each entity type, and thus is able to recognize nested mentions if they have different types. • Region-based methods, including FOFE (Xu et al., 2017), Cascaded-CRF (Ju et al., 2018) and a transition model (refered as Transition) proposed by Wang et al. (2018). FOFE directly classifies over all sub-sequences of a sentence and thus all potential mentions can be considered. 
Cascaded-CRF uses several stacked CRF layers to recognize nested mentions at different levels. Transition constructs nested mentions through a sequence of actions. • Hypergraph-based methods, including the LSTM-Hypergraph (LH) model (Katiyar and Cardie, 2018) and the Segmental Hypergraph (SH) by Wang and Lu (2018). LH used an LSTM model to learn features and then decode them into a hypergraph. SH further considered the transition between labels to alleviate labeling ambiguity, which is the state-of-the-art in both ACE2005 and GENIA6 datasets. Besides, we also compared the performance of ARNs with the best system in TAC-KBP 2017 Evaluation (Ji et al., 2017). The same as all previous studies, models are evaluated using microaveraged Precision(P), Recall(R) and F1-score. To balance time complexity and performance, Wang and Lu (2018) proposed to restrict the maximum length of mentions to 6, which covers more than 95% mentions. So we also compared to baselines where the maximum length of mention is restricted or unrestricted. Besides, we also compared the decoding time complexity of different methods. 5.3 Overall Results Table 1 shows the overall results on ACE2005, GENIA and KBP2017 datasets. From this table, we can see that: 1) Nested mentions have a significant influence on NER performance and are required to be specially treated. Compared with LSTMCRF and Multi-CRF baselines, all other methods dealing with nested mentions achieved significant F1-score improvements. So it is critical to take nested mentions into consideration for real-world applications and downstream tasks. 6Even Sohrab and Miwa (2018) reported a higher performance on GENIA, their experimental settings are obviously different from other baselines. As they didn’t release their dataset splits and source code, we are unable to compare it with listed baselines. 5188 ACE2005 GENIA KBP2017 Time Model P R F1 P R F1 P R F1 Complexity LSTM-CRF (Lample et al., 2016) 70.3 55.7 62.2 75.2 64.6 69.5 71.5 53.3 61.1 O(mn) Multi-CRF 69.7 61.3 65.2 73.1 64.9 68.8 69.7 60.8 64.9 O(mn) FOFE(c=6) (Xu et al., 2017) 76.5 66.3 71.0 75.4 67.8 71.4 81.8 62.0 70.6 O(mn2) FOFE(c=n) (Xu et al., 2017) 76.9 62.0 68.7 74.0 65.5 69.5 79.1 62.5 69.8 O(mn2) Transition (Wang et al., 2018) 74.5 71.5 73.0 78.0 70.2 73.9 74.7 67.0 70.1 O(mn) Cascaded-CRF (Ju et al., 2018) 74.2 70.3 72.2 78.5 71.3 74.7 LH (Katiyar and Cardie, 2018) 70.6 70.4 70.5 79.8 68.2 73.6 O(mn) SH(c=6) (Wang and Lu, 2018) 75.9 70.0 72.8 76.8 71.8 74.2 73.3 65.8 69.4 O(cmn) SH(c=n) (Wang and Lu, 2018) 76.8 72.3 74.5 77.0 73.3 75.1 79.2 66.5 72.3 O(mn2) KBP2017 Best (Ji et al., 2017) 72.6 73.0 72.8 Anchor-Region Networks (c=6) 75.2 72.5 73.9 75.2 73.3 74.2 76.2 71.5 73.8 O(mn + ck) Anchor-Region Networks (c=n) 76.2 73.6 74.9 75.8 73.9 74.8 77.7 71.8 74.6 O(mn + nk) Table 1: Overall experiment results on ACE2005, GENIA and KBP2017 datasets. c is the maximum length of mention and n refers to the length of sentence. For time complexity, m denotes the number of class and k denotes the average number of anchor words in each sentence(k << n). The time complexity of Cascaded-CRF depends on datasets so is not listed here. 2) Our Anchor-Region Networks can effectively resolve the nested mention detection problem, and achieved the state-of-the-art performance in all three datasets. On ACE2005 and GENIA, ARNs achieved the state-of-the-art performance on both the restricted and the unrestricted mention length settings. On KBP2017, ARNs outperform the top-1 system in the 2017 Evaluation by a large margin. 
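As a reference for the numbers reported in Table 1, mention-level micro-averaged precision, recall and F1 with exact-match scoring (a predicted mention counts only if its start, end and entity type all match a gold mention) can be computed as in the sketch below.

```python
# Minimal sketch of micro-averaged exact-match mention evaluation.
def micro_prf(gold_mentions, pred_mentions):
    # both arguments: one set of (start, end, type) tuples per sentence
    tp = sum(len(g & p) for g, p in zip(gold_mentions, pred_mentions))
    n_pred = sum(len(p) for p in pred_mentions)
    n_gold = sum(len(g) for g in gold_mentions)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```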
This verifies the effectiveness of our new architecture. 3) By modeling and exploiting head-driven phrase structure of entity mentions, ARNs reduce the computational cost significantly. ARNs only detect nuggets centering at detected anchor words. Note that for each sentence, the number of potential anchor words k is significantly smaller than the sentence length n. Therefore the computational cost of our region recognizer is significantly lower than that of traditional regionbased methods which perform classification on all sub-sequences, as well as hypergraph-based methods which introduced structural dependencies between labels to prevent structural ambiguity (Wang and Lu, 2018). Furthermore, ARNs are highly parallelizable if we replace the BiLSTM context encoder with other parallelizable context encoder architecture (e.g., Transformer (Vaswani et al., 2017)). 5.4 Effects of Bag Loss In this section, we investigate effects of Bag Loss by varying the values of hyper-parameter α in Equation (8) on the system performance. Figure 4 shows the F1 curves on both ACE2005 and 71 73 75 77 79 0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 ACE2005 KBP2017 α Figure 4: The F1-score w.r.t. different α in Bag Loss on development sets. When α = 0, the model ablates Bag Loss and will treat all words in the same innermost mention as anchor words during training. KBP2017 datasets when α varies. We can see that: 1) Bag Loss is effective for anchor word selection during training. In Figure 4, setting α to 0 significantly undermines the performance. Note that setting α to 0 is the same as ablating Bag Loss, i.e., the model will treat all words in the same innermost mention as anchor words. This result further verifies the necessity of Bag Loss. That is, because not all words in a mention are related to its type, it will introduce remarkable noise by regarding all words in mentions as anchor words. 2) Bag Loss is not sensitive to α when it is larger than a threshold. In Figure 4, our systems achieve nearly the same performance when α > 0.8. We find that this is because our model can predict anchor word in a very sharp probability distribution, so slight change of α does not make a big difference. Therefore, in all our 5189 Type Most Frequent Anchor Words PER I, you, he, they, we, people, president, Mandela, family, officials ORG government, Apple, they, its, Nokia, company, Microsoft, military, party, bank FAC building, home, prison, house, store, factories, factory, school, streets, there GPE country, China, U.S., US, Cyprus, our, state, countries, Syria, Russia LOC world, moon, areas, space, European, Europe, area, region, places, border NIL the, a, of, ’s, in, and, to, his, who, former Table 2: The top-10 most frequent anchor words of each type on KBP2017 datasets. Line NIL shows most frequent words that appears in a mention but are not regarded as anchor words. experiments we empirically set α = 1 without special declaration. This also verified that Bag Loss can discover head-driven phrase structure steadily without using anchor word annotations. 5.5 Further Discussion on Bag Loss and Marginalization-based Loss One possible alternative solution for Bag Loss is to regard the anchor word as a hidden variable, and obtain the likelihood of each mention by marginalizing over all words in the mention nugget with P(c, xj, xk) = X xi P(xi, c)P(xj, xk|xi, c). (9) For P(xi, c), if we assume that the prior for each word being the anchor word is equal, it can be refactorized by P(xi, c) = P(c|xi)P(xi) ∝P(c|xi). 
(10) However, we find that this approach does not work well in practice. This may because that, as we mentioned above, the prior probability of each word being the anchor word should not be equal. Words with highly semantic relatedness to the types are more likely to be the anchor word. Furthermore, this marginalization-based training object can only guarantee that words being regarded as the anchor words are trained towards the mention type, but will not encourage the other irrelevant words in the mention to be trained towards NIL. Therefore, compared with Bag Loss, the marginalization-based solution can not achieve the promising results for ARNs training. 5.6 Analysis on Anchor Words To analyze the detected anchor words, Table 2 shows the most common anchor words for all entity types. Besides, words that frequently appear in a mention but being recognized as NIL are also presented. We can see that the top-10 anchor ACE2005 GENIA KBP2017 Anchor Detector 82.9 82.7 83.0 Entire ARNs 74.9 74.8 74.6 ∆ 8.0 7.9 8.4 Table 3: F1-scores gap between the anchor detector and the entire ARNs (anchor + region). … was [a man of [African] appearance, about 30 years old , with a small beard] . PER LOC LOC PER Figure 5: A representative error case of ARNs, where the right boundary of the PER mention is misclassified. Braces above the sentence indicate the output of ARNs, and brackets in the sentence represent the golden annotation. We find that the majority of errors occur because of the long-term dependencies stemming from postpositive attributive and attributive clauses. words of each type are very convincing: all these words are strong indicators of their entity types. Besides, we can see that frequent NIL words in entity mentions are commonly function words, which play significant role in the structure of mention nuggets (e.g., “the” and “a” often indicates the start of an entity mention) but have little semantic association with entity types. This supports our motivation and further verifies the effectiveness of Bag Loss for anchor word selection. 5.7 Error Analysis This section conducts error analysis on ARNs. Table 3 shows the performance gap between the anchor detector and the entire ARNs. We can see that there is still a significant performance gap from the anchor detector to entire ARNs. That is, there exist a number of mentions whose anchor words are correctly detected by the anchor detector but their boundaries are mistakenly recognized by the region recognizer. To investigate the reason 5190 behind this above performance gap, we analyze these cases and find that most of these errors stem from the existence of postpositive attributive and attributive clauses. Figure 5 shows an error case stemming from postpositive attributive. These cases are quite difficult for neural networks because long-term dependencies between clauses need to be carefully considered. One strategy to handle these cases is to introduce syntactic knowledge, which we leave as future work for improving ARNs. 6 Conclusions and Future Work This paper proposes Anchor-Region networks, a sequence-to-nuggets architecture which can naturally detect nested entity mentions by modeling and exploiting head-driven phrase structures of entity mentions. Specifically, an anchor detector is first used to detect the anchor words of entity mentions and then a region recognizer is designed to recognize the mention boundaries centering at each anchor word. 
Furthermore, we also propose Bag Loss to train ARNs in an end-to-end manner without using any anchor word annotation. Experiments show that ARNs achieve the state-of-theart performance on all three benchmarks. As the head-driven structures are widely spread in natural language, the solution proposed in this paper can also be used for modeling and exploiting this structure in many other NLP tasks, such as semantic role labeling and event extraction. Acknowledgments We sincerely thank the reviewers for their insightful comments and valuable suggestions. Moreover, this work is supported by the National Natural Science Foundation of China under Grants no. 61433015, 61572477 and 61772505; the Projects of the Chinese Language Committee under Grants no. WT135-24; and the Young Elite Scientists Sponsorship Program no. YESS20160177. References Oliver Bender, Franz Josef Och, and Hermann Ney. 2003. Maximum entropy models for named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLTNAACL 2003-Volume 4, pages 148–151. Association for Computational Linguistics. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 167–176. Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of the 19th international conference on Computational linguisticsVolume 1, pages 1–7. Association for Computational Linguistics. Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 87–96. Association for Computational Linguistics. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguistics, 29(4):589–637. Jenny Rose Finkel and Christopher D Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 141–150. Association for Computational Linguistics. Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427–434. Association for Computational Linguistics. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and whats next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 473–483. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. Proceedings of ACL-08: HLT, pages 254–262. Heng Ji, Xiaoman Pan, Boliang Zhang, Joel Nothman, James Mayfield, Paul McNamee, Cash Costello, and Sydney Informatics Hub. 2017. Overview of tackbp2017 13 languages entity discovery and linking. In Proceedings of the Tenth Text Analysis Conference (TAC2017). Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459. Association for Computational Linguistics. 5191 Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 73–82. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2018. Nugget proposal networks for chinese event detection. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1565–1574. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Carl Pollard and Ivan A Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Burr Settles. 2004. Biomedical named entity recognition using conditional random fields and rich feature sets. In Proceedings of the international joint workshop on natural language processing in biomedicine and its applications, pages 104–107. Association for Computational Linguistics. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843–2849. Association for Computational Linguistics. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. 
In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–214. Association for Computational Linguistics. Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011–1017. Association for Computational Linguistics. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211. Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. 5192 In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1237–1247. Association for Computational Linguistics. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. GuoDong Zhou and Jian Su. 2002. Named entity recognition using an hmm-based chunk tagger. In proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 473–480. Association for Computational Linguistics. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1127–1137. Zhi-Hua Zhou, Yu-Yin Sun, and Yu-Feng Li. 2009. Multi-instance learning by treating instances as noniid samples. In Proceedings of the 26th annual international conference on machine learning, pages 1249–1256. ACM. Zhi-Hua Zhou and Min-Ling Zhang. 2007. Multiinstance multi-label learning with application to scene classification. In Advances in neural information processing systems, pages 1609–1616.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5193–5202 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5193 Improving Textual Network Embedding with Global Attention via Optimal Transport Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, Lawrence Carin [email protected] Abstract Constituting highly informative network embeddings is an important tool for network analysis. It encodes network topology, along with other useful side information, into lowdimensional node-based feature representations that can be exploited by statistical modeling. This work focuses on learning contextaware network embeddings augmented with text data. We reformulate the networkembedding problem, and present two novel strategies to improve over traditional attention mechanisms: (i) a content-aware sparse attention module based on optimal transport, and (ii) a high-level attention parsing module. Our approach yields naturally sparse and self-normalized relational inference. It can capture long-term interactions between sequences, thus addressing the challenges faced by existing textual network embedding schemes. Extensive experiments are conducted to demonstrate our model can consistently outperform alternative state-of-the-art methods. 1 Introduction When performing network embedding, one maps network nodes into vector representations that reside in a low-dimensional latent space. Such techniques seek to encode topological information of the network into the embedding, such as affinity (Tang and Liu, 2009), local interactions (e.g, local neighborhoods) (Perozzi et al., 2014), and high-level properties such as community structure (Wang et al., 2017). Relative to classical network-representation learning schemes (Zhang et al., 2018a), network embeddings provide a more fine-grained representation that can be easily repurposed for other downstream applications (e.g., node classification, link prediction, content recommendation and anomaly detection). For real-world networks, one naturally may have access to rich side information about each node. Of particular interest are textual networks, where the side information comes in the form of natural language sequences (Le and Lauw, 2014). For example, user profiles or their online posts on social networks (e.g., Facebook, Twitter), and documents in citation networks (e.g., Cora, arXiv). The integration of text information promises to significantly improve embeddings derived solely from the noisy, sparse edge representations (Yang et al., 2015). Recent work has started to explore the joint embedding of network nodes and the associated text for abstracting more informative representations. Yang et al. (2015) reformulated DeepWalk embedding as a matrix factorization problem, and fused text-embedding into the solution, while Sun et al. (2016) augmented the network with documents as auxiliary nodes. Apart from direct embedding of the text content, one can first model the topics of the associated text (Blei et al., 2003) and then supply the predicted labels to facilitate embedding (Tu et al., 2016). Many important downstream applications of network embeddings are context-dependent, since a static vector representation of the nodes adapts to the changing context less effectively (Tu et al., 2017). 
For example, the interactions between social network users are context-dependent (e.g., family, work, interests), and contextualized user profiling can promote the specificity of recommendation systems. This motivates context-aware embedding techniques, such as CANE (Tu et al., 2017), where the vector embedding dynamically depends on the context. For textual networks, the associated texts are natural candidates for context. CANE introduced a simple mutual attention weighting mechanism to derive context-aware dynamic embeddings for link prediction. Following 5194 the CANE setup, WANE (Shen et al., 2018) further improved the contextualized embedding, by considering fine-grained text alignment. Despite the promising results reported thus far, we identify three major limitations of existing context-aware network embedding solutions. First, mutual (or cross) attentions are computed from pairwise similarities between local text embeddings (word/phrase matching), whereas global sequence-level modeling is known to be more favorable across a wide range of NLP tasks (MacCartney and Manning, 2009; Liu et al., 2018; Malakasiotis and Androutsopoulos, 2007; Guo et al., 2018). Second, related to the above point, low-level affinity scores are directly used as mutual attention without considering any high-level parsing. Such an over-simplified operation denies desirable features, such as noise suppression and relational inference (Santoro et al., 2017), thereby compromising model performance. Third, mutual attention based on common similarity measures (e.g., cosine similarity) typically yields dense attention matrices, while psychological and computational evidence suggests a sparse attention mechanism functions more effectively (Martins and Astudillo, 2016; Niculae and Blondel, 2017). Thus such naive similarity-based approaches can be suboptimal, since they are more likely to incorporate irrelevant word/phrase matching. This work represents an attempt to improve context-aware textual network embedding, by addressing the above issues. Our contributions include: (i) We present a principled and moregeneral formulation of the network embedding problem, under reproducing kernel Hilbert spaces (RKHS) learning; this formulation clarifies aspects of the existing literature and provides a flexible framework for future extensions. (ii) A novel global sequence-level matching scheme is proposed, based on optimal transport, which matches key concepts between text sequences in a sparse attentive manner. (iii) We develop a high-level attention-parsing mechanism that operates on top of low-level attention, which is capable of capturing long-term interactions and allows relational inference for better contextualization. We term our model Global Attention Network Embedding (GANE). To validate the effectiveness of GANE, we benchmarked our models against state-of-theart counterparts on multiple datasets. Our models consistently outperform competing methods. 2 Problem setup We introduce basic notation and definitions used in this work. Textual networks. Let G = (V, E, T ) be our textual network, where V is the set of nodes, E ⊆V × V are the edges between the nodes, and T = {Sv}v∈V are the text data associated with each node. We use Sv = [ω1, · · · , ωnv] to denote the token sequence associated with node v ∈V, of length nv = |Sv| where | · | denotes the counting measure. To simplify subsequent discussion, we assume all tokens have been pre-embedded in a p-dimensional feature space. 
As such, Sv can be directly regarded as a Rp×nv matrix tensor. We use {u, v} to index the nodes throughout the paper. We consider directed unsigned graphs, meaning that for each edge pair (u, v) ∈E there is a nonnegative weight wuv associated with it, and wuv does not necessarily equal wvu. Textual network embedding. The goal of textual network embedding is to identify a ddimensional embedding vector zv ∈Rd for each node v ∈V, which encodes network topology (E) via leveraging information from the associated text (T ). In mathematical terms, we want to learn an encoding (embedding) scheme ZG ≜{zv = Enc(v; G)}v∈V and a probabilistic decoding model with likelihood pθ(E; Z), where E ⊆V × V is a random network topology for node set V, such that the likelihood for the observed topology pθ(E|ZG) is high. Note that for efficient coding schemes, the embedding dimension is much smaller than the network size (i.e., d ≪|V|). In a more general setup, the decoding objective can be replaced with pθ(A|Z), where A denotes observed attributes of interest (e.g., node label, community structure, etc.). Context-aware embedding. One way to promote coding efficiency is to contextualize the embeddings. More specifically, the embeddings additionally depend on an exogenous context c. To distinguish it from the context-free embedding zu, we denote the context-aware embedding as zu|c, where c is the context. For textual networks, when the embedding objective is network topology reconstruction, a natural choice is to treat the text as context (Tu et al., 2017). In particular, when modeling the edge wuv, Sv and Su are respectively treated as the context for context-aware embeddings zu|c and zv|c, which are then used in the prediction of edge likelihood. 5195 Attention & text alignment. Much content can be contained in a single text sequence, and retrieving them with a fixed length feature vector can be challenging. A more flexible solution is to employ an attention mechanism, which only attends to content that is relevant to a specific query (Vaswani et al., 2017). Specifically, attention models leverage a gating mechanism to de-emphasize irrelevant parts in the input; this method pools information only from the useful text, which is also a fixed length vector but that only encodes information with respect to one specific content (Santos et al., 2016). Popular choices of attention include normalized similarities in the feature space (e.g., Softmax normalized cosine distances). For two text sequences, one can build a mutual attention by cross-relating the content from the respective text (Santoro et al., 2017). In text alignment, one further represents the content from one text sequence using the mutual attention based attentive-pooling on the other sequence (Shen et al., 2018). Optimal transport (OT). Consider µ = {(xi, µi)}n i=1 and ν = {(yj, νj)}m j=1, a set of locations and their associated nonnegative mass (we assume P i µi = P j νj = 1). We call π ∈Rn×m + a valid transport plan if it properly redistributes mass from µ to ν, i.e., P i πij = νj and P j πij = µi. In other words, π breaks mass at {xi} into smaller parts and transports πij units of xi to yj. Given a cost function c(x, y) for transporting unit mass from x to y, discretized OT solves the following constrained optimization for an optimal transport plan π∗(Peyr´e et al., 2017): Dc(µ, ν) = inf π∈Π(µ,ν)    X ij πijc(xi, yj)   , (1) where Π(µ, ν) denotes the set of all viable transport plans. 
Note that c(x, y) is a distance metric on X, and Dc(µ, ν) induces a distance metric on the space of probability distributions supported on X, commonly known as the Wasserstein distance (Villani, 2008). Popular choices of cost include Euclidean cost ∥x −y∥2 2 for general probabilistic learning (Gulrajani et al., 2017) and cosine similarity cost cos(x, y) for natural language models (Chen et al., 2018). Computationally, OT plans are often approximated with Sinkhorn-type iterative schemes (Cuturi, 2013). Algorithm 1 summarizes a particular variant used in our study (Xie et al., 2018). Algorithm 1 Optimal transport solver (SolveOT) 1: Input: Sentence matrices S = {wi}n 1 , S′ = {w′ j}m 1 and generalized stepsize 1/β, 2: σ = 1 m1m, T(1) = 1n1m ⊤ 3: Cij = c(zi, z′ j), Aij = e− Cij β 4: for t = 1, 2, 3 . . . do 5: Q = A ⊙T(t) // ⊙is Hadamard product 6: for k = 1, . . . K do // K = 1 in practice 7: δ = 1 nQσ , σ = 1 mQ⊤δ 8: end for 9: T(t+1) = diag(δ)Qdiag(σ) 10: end for 11: Return T 3 Proposed Method 3.1 Model framework overview To capture both the topological information (network structure E) and the semantic information (text content T ) in the textual network embedding, we explicitly model two types of embeddings for each node v ∈V: (i) the topological embedding zt u, and (ii) the semantic embedding zs u. The final embedding is constructed by concatenating the topological and semantic embeddings, i.e., zu = [zt u; zs u]. We consider the topological embedding zt as a static property of the node, fixed regardless of the context. On the other hand, the semantic embedding zs dynamically depends on the context, which is the focus of this study. Motivated by the work of (Tu et al., 2017), we consider the following probabilistic objective to train the network embeddings: ℓ(Θ) = Ee∼E {ℓ(e; Θ)} , (2) where e = (u, v) represents sampled edges from the network and Θ = {Z, θ} is the collection of model parameters. The edge loss ℓ(e; Θ) is given by the cross entropy ℓ(euv; Θ) = −wuv log pΘ(u|v), (3) where pΘ(u|v) denotes the conditional likelihood of observing a (weighted) link between nodes u and v, with the latter serving as the context. More specifically, pΘ(u|v) = ⟨zu, zv⟩−log(Z), (4) where Z = P u′∈V exp(⟨zu′, zv⟩) is the normalizing constant and ⟨·, ·⟩is an inner product operation, to be defined momentarily. Note here we have suppressed the dependency on Θ to simplify notation. To capture both the topological and semantic information, along with their interactions, we propose to use the following decomposition for our inner product term: 5196 ⟨zu, zv⟩= ⟨zt u, zt v⟩tt | {z } topology + ⟨zs u, zs v⟩ss | {z } semantic + ⟨zt u, zs v⟩ts + ⟨zs u, zt v⟩st | {z } interaction (5) Here we use ⟨za u, zb v⟩ab , a, b ∈{s, t} to denote the inner product evaluation between the two feature embeddings za u and zb v, which can be defined by a semi-positive-definite kernel function κab(za u, zb v) (Alvarez et al., 2012), e.g., Euclidean kernel, Gaussian RBF, IMQ kernel, etc. Note that for a ̸= b, za u and zb v do not reside on the same feature space. As such, embeddings are first mapped to the same feature space for inner product evaluation. In this study, we use the Euclidean kernel ⟨x1, x2⟩X = xT 1 x2 for inner product evaluation with x1, x2 ∈X ⊆ Rd, and linear mapping ⟨x, y⟩XY = ⟨x, Ay⟩X , where A ∈Rd×d′ for feature space realignment with x ∈X ⊆ Rd, y ∈Y ⊆Rd′. Here A is a trainable parameter, and throughout this paper we omit the bias terms in linear maps to avoid notational clutter. 
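To make the scoring function concrete, the following is a minimal PyTorch-style sketch of the decomposed inner product in Eq. (5), assuming the Euclidean kernel for the same-space terms and bias-free trainable linear maps for the two interaction terms; the class and argument names are ours, and the dimensionalities are placeholders rather than the settings used in our experiments.

import torch
import torch.nn as nn

class EdgeScore(nn.Module):
    """Sketch of the four-term inner product of Eq. (5)."""

    def __init__(self, d_topo, d_sem):
        super().__init__()
        # Bias-free maps realigning one feature space to the other (the matrix A in the text).
        self.A_ts = nn.Linear(d_sem, d_topo, bias=False)   # semantic -> topological space
        self.A_st = nn.Linear(d_topo, d_sem, bias=False)   # topological -> semantic space

    def forward(self, zt_u, zs_u, zt_v, zs_v):
        topology = (zt_u * zt_v).sum(-1)                    # <z_u^t, z_v^t>_tt
        semantic = (zs_u * zs_v).sum(-1)                    # <z_u^s, z_v^s>_ss
        interaction = (zt_u * self.A_ts(zs_v)).sum(-1) + (zs_u * self.A_st(zt_v)).sum(-1)
        return topology + semantic + interaction

The returned score plays the role of ⟨zu, zv⟩ in the conditional likelihood of Eq. (4) and, after negative sampling, in the surrogate objective of Eq. (6).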
Note that our solution differs from existing network-embedding models in that: (i) our objective is a principled likelihood loss, while prior works heuristically combine the losses of four different models (Tu et al., 2017), which may fail to capture the non-trivial interactions between the fixed and dynamic embeddings; and (ii) we present a formal derivation of network embedding in a reproducing kernel Hilbert space. Negative sampling. Direct optimization of (3) requires summing over all nodes in the network, which can be computationally infeasible for largescale networks. To alleviate this issue, we consider other more computationally efficient surrogate objectives. In particular, we adopt the negative sampling approach (Mikolov et al., 2013), which replaces the bottleneck Softmax with a more tractable approximation given by log p(v|u) ≈log σ(⟨zu, zv⟩)+ PK j=1 Evk∼pn[log σ(−⟨zu, zvk⟩)], (6) where σ(x) = 1 1+exp(−x) is the sigmoid function, and pn(v) is a noise distribution over the nodes. Negative sampling can be considered as a special variant of noise contrastive estimation (Gutmann and Hyv¨arinen, 2010), which seeks to recover the ground-truth likelihood by contrasting Transport matrix ! Text "# Text "$ "$←# "#←$ Aggregation &#|$ &$|# Aggregation Optimal transport Figure 1: Schematic of the proposed mutual attention mechanism. In this setup, bag-of-words feature matchings are explicitly abstracted to infer the relationship between vertices. data samples with noise samples, thereby bypassing the need to compute the normalizing constant. As the number of noise samples K goes to infinity, this approximation becomes exact1 (Goldberg and Levy, 2014). Following the practice of Mikolov et al. (2013), we set our noise distribution to pn(v) ∝d 3 4v , where dv denotes the out-degree of node v. Context matching. We argue that a key to the context-aware network embedding is the design of an effective attention mechanism, which crossmatches the relevant content between the node’s associated text and the context. Over-simplified dot-product attention limits the potential of existing textual network embedding schemes. In the following sections, we present two novel, efficient attention designs that fulfill the desiderata listed in our Introduction. Our discussion follows the setup used in CANE (Tu et al., 2017) and WANE (Shen et al., 2018), where the text from the interacting node is used as the context. Generalization to other forms of context is straightforward. 3.2 Optimal-transport-based matching We first consider reformulating content matching as an optimal transport problem, and then repurpose the transport plan as our attention score to aggregate context-dependent information. More specifically, we see a node’s text and context as two (discrete) distributions over the content space. Related content will be matched in the sense that they yield a higher weight in the optimal transport plan π∗. The following two properties make the optimal transport plan more appealing for use as attention score. (i) Sparsity: when solved exactly, π∗is a sparse matrix with at most (2m −1) non-zero elements, where m is the number of 1This is a non-trivial result, for completeness we provide an intuitive justification in Supplementary Material. 5197 contents (Brualdi et al. (1991), §8.1.3); (ii) Selfnormalized: row-sum and column-sum equal the respective marginal distributions. 
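Before describing how the plan is used, we provide a short NumPy sketch of the SolveOT routine of Algorithm 1; the cosine-based transport cost and the values of β, the number of outer iterations, and K are illustrative assumptions rather than prescribed settings.

import numpy as np

def cosine_cost(S, S_prime):
    # C[i, j] = 1 - cos(w_i, w'_j); a cosine-based cost, as is common for text.
    S_n = S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-8)
    Sp_n = S_prime / (np.linalg.norm(S_prime, axis=1, keepdims=True) + 1e-8)
    return 1.0 - S_n @ Sp_n.T

def solve_ot(S, S_prime, beta=0.5, n_iters=50, K=1):
    # S: (n, p) embedded tokens of the node text; S_prime: (m, p) embedded context tokens.
    # Both sequences are treated as uniform discrete distributions (mass 1/n and 1/m).
    n, m = S.shape[0], S_prime.shape[0]
    sigma = np.full(m, 1.0 / m)          # line 2 of Algorithm 1
    T = np.ones((n, m))
    C = cosine_cost(S, S_prime)          # line 3
    A = np.exp(-C / beta)
    for _ in range(n_iters):
        Q = A * T                        # Hadamard product (line 5)
        for _ in range(K):               # K = 1 in practice
            delta = 1.0 / (n * (Q @ sigma))
            sigma = 1.0 / (m * (Q.T @ delta))
        T = np.diag(delta) @ Q @ np.diag(sigma)
    return T                             # (n, m) transport plan, reused as mutual attention

The returned matrix plays the role of the transport plan Tuv computed below.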
Implementation-wise, we first feed embedded text sequence Su and context sequence Sv into our OT solver to compute the OT plan, Tuv = SolveOT(Su, Sv) ∈Rnu×nv. (7) Note that here we treat pre-embedded sequence Su as nu point masses in the feature space, each with weight 1/nu, and similarly for Sv. Next we “transport” the semantic content from context Sv according to the estimated OT plan with matrix multiplication Su←v = TuvSv ∈Rnu×p , (8) where we have treated Sv as a Rnv×p matrix. Intuitively, this operation aligns the context with the target text sequence via averaging the context semantic embeddings with respect to the OT plan for each content element in Su. To finalize the contextualized embedding, we aggregate the information from both Su and the aligned Su←v with an operator Fagg, zu|v = Fagg(Su, Su←v) ∈Rd×1. (9) In this case, we practice the following simple aggregation strategy: first concatenate Su and the aligned Su←v along the feature dimension, and then take max-pooling along the temporal dimension to reduce the feature vector into a 2p vector, followed by a linear mapping to project the embedding vector to the desired dimensionality. 3.3 Attention parsing Direct application of attention scores based on a low-level similarity-based matching criteria (e.g., dot-product attention) can be problematic in a number of ways: (i) low-level attention scores can be noisy (i.e., spurious matchings), and (ii) similarity-matching does not allow relational inference. To better understand these points, consider the following cases. For (i), if the sequence embeddings used do not explicitly address the syntactic structure of the text, a relatively dense attention score matrix can be expected. For (ii), consider the case when the context is a query, and the matching appears as a cue in the node’s text data; then the information needed is actually in the vicinity rather than the exact matching location (e.g., shifted a few steps ahead). Inspired by the work of Wang et al. (2018), we propose a new mechanism called attention parsing to address the aforementioned issues. As the name suggests, attention parsing recalibrates the raw low-level attention scores to better integrate the information. To this end, we conceptually treat the raw attention matrix Traw as a two-dimensional image and apply convolutional filters to it: H = ReLU(Conv2d(Traw, WF )) ∈Rnu×nv×c , (10) where WF ∈Rh×w×c denotes the filter banks with h, w and c respectively as window sizes and channel number. We can stack more convolutional layers, break sequence embedding dimensions to allow multi-group (channel) low-level attention as input, or introduce more-sophisticated model architectures (e.g., ResNet (He et al., 2016), Transformer (Vaswani et al., 2017), etc.) to enhance our model. For now, we focus on the simplest model described above, for the sake of demonstration. With H ∈Rnu×nv×c as the high-level representation of attention, our next step is to reduce it to a weight vector to align information from the context Sv. We apply a max-pooling operation with respect to the context dimension, followed by a linear map to get the logits h ∈Rnu×1 of the weights h = MaxPool(H, column) · B, (11) where B ∈Rc×1 is the projection matrix. Then the parsed attention weight w is obtained by w = Softmax(h) ∈Rnu×1 , (12) which is used to compute the aligned context embedding su←v = wT Sv ∈R1×p. 
(13) Note that here we compute a globally aligned context embedding vector su←v, rather than one for each location in Su as described in the last section (Su←v). In the subsequent aggregation operation, su←v is broadcasted to all the locations in Su. We call this global alignment, to distinguish it from the local alignment strategy described in the last section. Both alignment strategies have their respective merits, and in practice they can be directly combined to produce the final context-aware embedding. 4 Related Work Network embedding models. Prior network embedding solutions can be broadly classified into two categories: (i) topology embedding, which 5198 only uses the link information; and (ii) fused embedding, which also exploits side information associated with the nodes. Methods from the first category focus on encoding high-order network interactions in a scalable fashion, such as LINE (Tang et al., 2015), DeepWalk (Perozzi et al., 2014). However, models based on topological embeddings alone often ignore rich heterogeneous information associated with the vertices. Therefore, the second type of model tries to incorporate text information to improve network embeddings. For instance, TADW (Yang et al., 2015), CENE (Sun et al., 2016), CANE (Tu et al., 2017), WANE (Shen et al., 2018), and DMTE (Zhang et al., 2018b). Optimal Transport in NLP. OT has found increasing application recently in NLP research. It has been successfully applied in many tasks, such as topic modeling (Kusner et al., 2015), text generation (Chen et al., 2018), sequence-to-sequence learning (Chen et al., 2019), and word-embedding alignment (Alvarez-Melis and Jaakkola, 2018). Our model is fundamentally different from these existing OT-based NLP models in terms of how OT is used: these models all seek to minimize OT distance to match sequence distributions, while our model used the OT plan as an attention mechanism to integrate context-dependent information. Attention models. Attention was originally proposed in QA systems (Weston et al., 2015) to overcome the limitations of the sequential computation associated with recurrent models (Hochreiter et al., 2001). Recent developments, such as the Transformer model (Vaswani et al., 2017), have popularized attention as an integral part of compelling sequence models. While simple attention mechanisms can already improve model performance (Bahdanau et al., 2015; Luong et al., 2015), significant gains can be expected from more delicate designs (Yang et al., 2016; Li et al., 2015). Our treatment of attention is inspired by the LEAM model (Wang et al., 2018), which significantly improves mutual attention in a computationally efficient way. 5 Experiments 5.1 Experimental setup Datasets and tasks. We consider three benchmark datasets: (i) Cora2, a paper citation network with text information, built by McCallum 2https://people.cs.umass.edu/ ˜mccallum/data.html Cora Hepth Zhihu #vertices 2,227 1,038 10,000 #edges 5,214 1,990 43,894 #avg text len 90 54 190 #labels 7 NA NA Table 1: Dataset statistics. et al. (2000). We prune the dataset so that it only has papers on the topic of machine learning. (ii) Hepth3, a paper citation network from Arxiv on high energy physics theory, with paper abstracts as text information. (iii) Zhihu, a Q&A network dataset constructed by (Tu et al., 2017), which has 10,000 active users with text descriptions and their collaboration links. Summary statistics of these three datasets are summarized in Table 1. 
Preprocessing protocols from prior studies are used for data preparation (Shen et al., 2018; Zhang et al., 2018b; Tu et al., 2017). For quantitative evaluation, we tested our model on the following tasks: (a) Link prediction, where we deliberately mask out a portion of the edges to see if the embedding learned from the remaining edges can be used to accurately predict the missing edges. (b) Multi-label node classification, where we use the learned embedding to predict the labels associated with each node. Note that the label information is not used in our embedding. We also carried out ablation study to identify the gains. In addition to the quantitative results, we also visualized the embedding and the attention matrices to qualitatively verify our hypotheses. Evaluation metrics. For the link prediction task, we adopt the area under the curve (AUC) score to evaluate the performance, AUC is employed to measure the probability that vertices in existing edges are more similar than those in the nonexistent edge. For each training ratio, the experiment is executed 10 times and the mean AUC scores are reported, where higher AUC indicates better performance. For multi-label classification, we evaluate the performance with Macro-F1 scores. The experiment for each training ratio is also executed 10 times and the average Macro-F1 scores are reported, where a higher value indicates better performance. Baselines. To demonstrate the effectiveness of the proposed solutions, we evaluated our model along with the following strong baselines. (i) Topology only embeddings: MMB (Airoldi et al., 3https://snap.stanford.edu/data/ cit-HepTh.html 5199 Cora Hepth %Training Edges 15% 35% 55% 75% 95% 15% 35% 55% 75% 95% MMB 54.7 59.5 64.9 71.1 75.9 54.6 57.3 66.2 73.6 80.3 node2vec 55.9 66.1 78.7 85.9 88.2 57.1 69.9 84.3 88.4 89.2 LINE 55.0 66.4 77.6 85.6 89.3 53.7 66.5 78.5 87.5 87.6 DeepWalk 56.0 70.2 80.1 85.3 90.3 55.2 70.0 81.3 87.6 88.0 Naive combination 72.7 84.9 88.7 92.4 94.0 78.7 84.7 88.7 92.1 92.7 TADW 86.6 90.2 90.0 91.0 92.7 87.0 91.8 91.1 93.5 91.7 CENE 72.1 84.6 89.4 93.9 95.5 86.2 89.8 92.3 93.2 93.2 CANE 86.8 92.2 94.6 95.6 97.7 90.0 92.0 94.2 95.4 96.3 DMTE 91.3 93.7 96.0 97.4 98.8 NA NA NA NA NA WANE 91.7 94.1 96.2 97.5 99.1 92.3 95.7 97.5 97.7 98.7 GANE-OT 92.0 95.7 97.3 98.6 99.2 93.4 97.0 97.9 98.2 98.8 GANE-AP 94.0 97.2 98.0 98.8 99.3 93.8 97.3 98.1 98.4 98.9 Table 2: AUC scores for link prediction on the Cora and Hepth dataset. %Training Edges 15% 25% 35% 45% 55% 65% 75% 85% 95% DeepWalk 56.6 58.1 60.1 60.0 61.8 61.9 63.3 63.7 67.8 node2vec 54.2 57.1 57.3 58.3 58.7 62.5 66.2 67.6 68.5 LINE 52.3 55.9 59.9 60.9 64.3 66.0 67.7 69.3 71.1 MMB 51.0 51.5 53.7 58.6 61.6 66.1 68.8 68.9 72.4 Naive combination 55.1 56.7 58.9 62.6 64.4 68.7 68.9 69.0 71.5 TADW 52.3 54.2 55.6 57.3 60.8 62.4 65.2 63.8 69.0 CENE 56.2 57.4 60.3 63.0 66.3 66.0 70.2 69.8 73.8 CANE 56.8 59.3 62.9 64.5 68.9 70.4 71.4 73.6 75.4 DMTE 58.4 63.2 67.5 71.6 74.0 76.7 78.5 79.8 81.5 WANE 58.7 63.5 68.3 71.9 74.9 77.0 79.7 80.0 82.6 GANE-OT 61.6 66.4 70.8 73.0 77.3 80.6 80.4 81.8 83.2 GANE-AP 64.6 69.4 72.8 74.2 79.1 82.6 81.8 83.0 84.3 Table 3: AUC scores for link prediction on the Zhihu dataset. 2008), DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015), Node2vec (Grover and Leskovec, 2016). (ii) Joint embedding of topology & text: Naive combination, TADW (Yang et al., 2015), CENE (Sun et al., 2016), CANE (Tu et al., 2017), WANE (Shen et al., 2018), DMTE (Zhang et al., 2018b). 
A brief summary of these competing models is provided in the Supplementary Material (SM). 5.2 Results We consider two variants of our model, denoted as GANE-OT and GANE-AP. GANE-OT employs the most basic OT-based attention model, specifically, global word-by-word alignment model; while GANE-AP additionally uses a one-layer convolutional neural network for the attention parsing. Detailed experimental setups are described in the SM. Link prediction. Tables 2 and 3 summarize the results from the link-prediction experiments on all three datasets, where a different ratio of edges are used for training. Results from models other than GANE are collected from Tu et al. (2017), Shen et al. (2018) and Zhang et al. (2018b). We have also repeated these experiments on our own, and the results are consistent with the ones reported. Note that Zhang et al. (2018b) did not report results on DMTE. Both GANE variants consistently outperform competing solutions. In the low-training-sample regime our solutions lead by a large margin, and the performance gap closes as the number of training samples increases. This indicates that our OT-based mutual attention framework can yield more informative textual representations than other methods. Note that GANE-AP delivers better results compared with GANE-OT, suggesting the attention parsing mechanism can further improve the low-level mutual attention matrix. More results on Cora and Hepth are provided in the SM. Multi-label Node Classification. To further evaluate the effectiveness of our model, we consider multi-label vertex classification. Following the setup described in (Tu et al., 2017), we first computed all context-aware embeddings. Then we averaged over each node’s context-aware embeddings with all other connected nodes, to obtain a global embedding for each node, i.e., zu = 1 du P v zu|v, where du denotes the degree of node u. A linear SVM is employed, instead of a 5200 %training labels 10% 30% 50% 70% LINE 53.9 56.7 58.8 60.1 TADW 71.0 71.4 75.9 77.2 CANE 81.6 82.8 85.2 86.3 DMTE 81.8 83.9 86.3 87.9 WANE 81.9 83.9 86.4 88.1 GANE-OT 82.0 84.1 86.6 88.3 GANE-AP 82.3 84.2 86.7 88.5 Table 4: Test Macro-F1 scores for multi-label node classification on Cora. Figure 2: n-gram length VS AUC on Cora. Figure 3: t-SNE visualization on Cora dataset. sophisticated deep classifier, to predict the label attribute of a node. We randomly sample a portion of labeled vertices with embeddings (10%, 30%, 50%, 70%) to train the classifier, using the rest of the nodes to evaluate prediction accuracy. We compare our results with those from other state-of-the-art models in Table 4. The GANE models delivered better results compared with their counterparts, lending strong evidence that the OT attention and attention parsing mechanism promise to capture more meaningful representations. Ablation study. We further explore the effect of n-gram length in our model (i.e., the filter size for the covolutional layers used by the attention parsing module). In Figure 2 we plot the AUC scores for link prediction on the Cora dataset against varying n-gram length. The performance peaked around length 20, then starts to drop, indicating a moderate attention span is more preferable. Similar results are observed on other datasets (results not shown). Experimental details on the ablation study can be found in the SM. low WANE Ours high Figure 4: Mutual attention between two nodes in Cora. Left: WANE attention. Right: OT attention (ours). 5.3 Qualitative Analysis Embedding visualization. 
We employed tSNE (Maaten and Hinton, 2008) to project the network embeddings for the Cora dataset in a twodimensional space using GANE-OT, with each node color coded according to its label. As shown in Figure 3, papers clustered together belong to the same category, with the clusters well-separated from each other in the network embedding space. Note that our network embeddings are trained without any label information. Together with the label classification results, this implies our model is capable of extracting meaningful information from both context and network topological. Attention matrix comparison. To verify that our OT-based attention mechanism indeed produces sparse attention scores, we visualized the OT attention matrices and compared them with those simarlity-based attention matrices (e.g., WANE). Figure 4 plots one typical example. Our OT solver returns a sparse attention matrix, while dot-product-based WANE attention is effectively dense. This underscores the effectiveness of OTbased attention in terms of noise suppression. 6 Conclusion We have proposed a novel and principled mutualattention framework based on optimal transport (OT). Compared with existing solutions, the attention mechanisms employed by our GANE model enjoys the following benefits: (i) it is naturally sparse and self-normalized, (ii) it is a global sequence matching scheme, and (iii) it can capture long-term interactions between two sentences. These claims are supported by experimental evidence from link prediction and multi-label vertex classification. Looking forward, our attention mechanism can also be applied to tasks such as relational networks (Santoro et al., 2017), natural language inference (MacCartney and Manning, 2009), and QA systems (Zhou et al., 2015). 5201 Acknowledgments This research was supported in part by DARPA, DOE, NIH, ONR and NSF. References Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. 2008. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9(Sep):1981–2014. Mauricio A Alvarez, Lorenzo Rosasco, Neil D Lawrence, et al. 2012. Kernels for vector-valued functions: A review. Foundations and Trends R⃝in Machine Learning, 4(3):195–266. David Alvarez-Melis and Tommi S Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. arXiv preprint arXiv:1809.00013. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Richard A Brualdi, Herbert J Ryser, et al. 1991. Combinatorial matrix theory, volume 39. Springer. Liqun Chen, Shuyang Dai, Chenyang Tao, Dinghan Shen, Zhe Gan, Haichao Zhang, Yizhe Zhang, and Lawrence Carin. 2018. Adversarial text generation via feature-mover’s distance. In NIPS. Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Improving sequence-to-sequence learning via optimal transport. In ICLR. Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS. Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving mikolov et al.’s negativesampling word-embedding method. arXiv preprint arXiv:1402.3722. Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In KDD. 
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of Wasserstein GANs. In NIPS. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft layer-specific multi-task summarization with entailment and question generation. In ACL. Michael Gutmann and Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, J¨urgen Schmidhuber, et al. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In ICML. Tuan MV Le and Hady W Lauw. 2014. Probabilistic latent document network embedding. In 2014 IEEE International Conference on Data Mining, pages 270–279. IEEE. Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In ACL. Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for natural language inference. arXiv preprint arXiv:1804.07888. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv:1508.04025. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Bill MacCartney and Christopher D Manning. 2009. Natural language inference. Prodromos Malakasiotis and Ion Androutsopoulos. 2007. Learning textual entailment using svms and string similarity measures. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 42–47. Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In ICML. Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. 2000. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In NIPS. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In KDD. 5202 Gabriel Peyr´e, Marco Cuturi, et al. 2017. Computational optimal transport. Technical report. Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. In NIPS. Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. arXiv preprint arXiv:1602.03609. Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Improved semantic-aware network embedding with fine-grained word alignment. In EMNLP. Xiaofei Sun, Jiang Guo, Xiao Ding, and Ting Liu. 2016. A general framework for content-enhanced network representation learning. In arXiv preprint arXiv:1610.02906. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale information network embedding. In WWW. Lei Tang and Huan Liu. 2009. Relational learning via latent social dimensions. In KDD. 
Cunchao Tu, Han Liu, Zhiyuan Liu, and Maosong Sun. 2017. CANE: Context-aware network embedding for relation modeling. In ACL. Cunchao Tu, Weicheng Zhang, Zhiyuan Liu, Maosong Sun, et al. 2016. Max-margin deepwalk: Discriminative learning of network representation. In IJCAI. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. C´edric Villani. 2008. Optimal Transport: Old and New. Springer. Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Joint embedding of words and labels for text classification. In ACL. Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community preserving network embedding. In AAAI. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In ICLR. Yujia Xie, Xiangfeng Wang, Ruijia Wang, and Hongyuan Zha. 2018. A fast proximal point method for Wasserstein distance. In arXiv:1802.04307. Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Chang. 2015. Network representation learning with rich text information. In IJCAI. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL. Daokun Zhang, Jie Yin, Xingquan Zhu, and Chengqi Zhang. 2018a. Network representation learning: A survey. IEEE transactions on Big Data. Xinyuan Zhang, Yitong Li, Dinghan Shen, and Lawrence Carin. 2018b. Diffusion maps for textual network embedding. In NIPS. Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community question answering. In ACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5203–5213 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5203 Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin and Debasis Ganguly IBM Research – Ireland Dublin, Ireland {yhou|charlesj|mgleize|fbonin|debasga1}@ie.ibm.com Abstract While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain. 1 Introduction Recent years have witnessed a significant increase in the number of laboratory-based evaluation benchmarks in many of scientific disciplines, e.g., in the year 2018 alone, 140,616 papers were submitted to the pre-print repository arXiv1 and among them, 3,710 papers are under the Computer Science – Computation and Language category. This massive increase in evaluation benchmarks (e.g., in the form of shared tasks) is particularly true for an empirical field such as NLP, which strongly encourages the research community to develop a set of publicly available benchmark tasks, datasets and tools so as to reinforce reproducible experiments. Researchers have realized the importance of conducting meta-analysis of a number of comparable publications, i.e., the ones which use similar, if not identical, experimental settings, from shared tasks and proceedings, as shown by special issues 1https://arxiv.org/ dedicated to analysis of reproducibility in experiments (Ferro et al., 2018), or by detailed comparative analysis of experimental results reported on the same dataset in published papers (Armstrong et al., 2009). A useful output of this meta-analysis is often a summary of the results of a comparable set of experiments (in terms of the tasks they are applied on, the datasets on which they are tested and the metrics used for evaluation) in a tabular form, commonly referred to as a leaderboard. Such a meta-analysis summary in the form of a leaderboard is potentially useful to researchers for the purpose of (1) choosing the appropriate existing literature for fair comparisons against a newly proposed method; and (2) selecting strong baselines, which the new method should be compared against. Although recently there has been some effort to manually keep an account of progress on various research fields in the form of leaderboards, either by individual researchers2 or in a moderated crowd-sourced environment by organizations3, it is likely to become increasingly difficult and timeconsuming over the passage of time. In this paper, we develop a model to automatically identify tasks, datasets, evaluation metrics, and to extract the corresponding best numeric scores from experimental scientific papers. 
An illustrative example is shown in Figure 1: given the sample paper shown on the left, which carries out research work on three different tasks (i.e., coreference resolution, named entity recognition, and entity linking), the system is supposed to extract the corresponding Task-Dataset-Metric-Score tuples as shown on the right part in Figure 1. It is noteworthy that we aim to identify a set of pre2https://github.com/sebastianruder/ NLP-progress 3https://paperswithcode.com 5204 Abstract: We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities). Our model is formally a structured conditional random field. Unary factors encode local features from strong baselines for each task. We then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. On the ACE 2005 and OntoNotes datasets, we achieve state-of-the- art results for all three tasks. Moreover, joint modeling improves performance on each task over strong independent baselines. A Joint Model for Entity Analysis: Coreference, Typing, and Linking Task Dataset Evaluation Metric Best Result Named Entity Recognition ACE 2005 (Test) Accuracy 85.60 Entity Linking ACE 2005 (Test) Accuracy 76.78 Coreference Resolution ACE 2005 (Test) Avg. F1 76.35 … … … … … Leaderboard Annotations Figure 1: An illustrative example of leaderboard construction from a sample article. The cue words related to the annotated tasks, datasets, evaluation metrics and the corresponding best scores are shown in blue, red, purple and green, respectively. Note that sometimes the cue words appearing in the article are different from the documentlevel annotations, e.g., Avg. – Avg. F1, NER – Named Entity Recognition. defined Task-Dataset-Metric (TDM) triples from a taxonomy for a paper, and the corresponding cue words appearing in the paper could have a different surface form, e.g., Named Entity Recognition (taxonomy) – Name Tagging (paper). Different from most previous work on information extraction from scientific literature which concentrates mainly on the abstract section or individual paragraphs (Augenstein et al., 2017; G´abor et al., 2018; Luan et al., 2018), our task needs to analyze the entire paper. More importantly, our main goal is to tag papers using TDM triples from a taxonomy and to use these triples to organize papers. We adopt an approach similar to that used for some natural language inference (NLI) tasks (Bowman et al., 2015; Poliak et al., 2018). Specifically, given a scientific paper in PDF format, our system first extracts the key contents from the abstract and experimental sections, as well as from the tables. Then, we identify a set of TaskDataset-Metric (TDM) triples or Dataset-Metric (DM) pairs per paper. Our approach predicts if the textual context matches the TDM/DM label hypothesis, forcing the model to learn the similarity patterns between the text and various TDM triples. For instance, the model will capture the similarities between ROUGE-2 and “Rg-2”. We further demonstrate that our framework is able to generalize to the new (unobserved) TDM triples at test time in a zero-shot TDM triple identification setup. To evaluate our approach, we create a dataset NLP-TDMS which contains around 800 leaderboard annotations for more than 300 papers. 
Experiments show that our model outperforms several baselines by a large margin for extracting TDM triples. We further carry out experiments on a much larger dataset ARC-PDN and demonstrate that our system can support the construction of various leaderboards from a large number of scientific papers in the NLP domain. To the best of our knowledge, our work is the first attempt towards the creation of NLP leaderboards in an automatic fashion. We pre-process both datasets (papers in PDF format) using GROBID (Lopez, 2009) and an in-house PDF table extractor. The processed datasets and code are publicly available at: https://github.com/IBM/science-result-extractor.

2 Related Work

A number of studies have recently explored methods for extracting information from scientific papers. Initial interest was shown in the analysis of citations (Athar and Teufel, 2012a,b; Jurgens et al., 2018) and analysis of the topic trends in the scientific communities (Vogel and Jurafsky, 2012). Gupta and Manning (2011); Gábor et al. (2016) propose unsupervised methods for the extraction of entities such as papers' focus and methodology; similarly, in (Tsai et al., 2013), an unsupervised bootstrapping method is used to identify and cluster the main concepts of a paper. But only in 2017, Augenstein et al. (2017) formalized a new task (SemEval 2017 Task 10) for the identification of three types of entities (called keyphrases, i.e., Tasks, Methods, and Materials) and two relation types (hyponym-of and synonym-of) in a corpus of 500 paragraphs from articles in the domains of Computer Science, Material Sciences and Physics. Gábor et al. (2018) also presented the task of IE from scientific papers (SemEval 2018 Task 7) with a dataset of 350 annotated abstracts.

                                                                      Macro P  Macro R  Macro F1
Table caption                                                            79.2     87.0      82.6
Numeric value + IsBolded + Table caption                                 71.1     77.7      74.0
Numeric value + Row label + Table caption                                55.5     71.4      61.4
Numeric value + Column label + Table caption                             49.8     67.2      55.4
Numeric value + IsBolded + Row label + Column label + Table caption      36.6     60.9      43.0
Table 1: Table extraction results of our table parser on 50 tables from 10 NLP papers in PDF format.

Ammar et al. (2017, 2018); Luan et al. (2017); Augenstein and Søgaard (2017) exploit these datasets to test neural models for IE on scientific literature. Luan et al. (2018) extend those datasets by adding more relation types and cross-sentence relations using coreference links. The authors also develop a framework called Scientific Information Extractor for the extraction of six types of scientific entities (Task, Method, Metric, Material, Other-ScientificTerm and Generic) and seven relation types (Compare, Part-of, Conjunction, Evaluate-for, Feature-of, Used-for, and Hyponym-of). They reach 64.2 F1 on entity recognition and 39.2 F1 on relation extraction. Differently from (Luan et al., 2018), (1) we concentrate on the identification of entities from a taxonomy that are necessary for the reconstruction of leaderboards (i.e., task, dataset, metric); (2) we analyse the entire paper, not only the abstract (the reason being that the score information is rarely contained in the abstract). Our method for TDMS identification resembles some approaches used for textual entailment (Dagan et al., 2006) or natural language inference (NLI) (Bowman et al., 2015). We follow the example of White et al. (2017) and Poliak et al. (2018), who reframe different NLP tasks, including extraction tasks, as NLI problems.
Eichler et al. (2017) and Obamuyide and Vlachos (2018) have both used NLI approaches for relation extraction. Our work differs in the information extracted and consequently in what context and hypothesis information we model. Currently, one of the best performing NLI models (e.g., on the SNLI dataset) for three-way classification is that of Liu et al. (2019). The authors apply deep neural networks and make use of BERT (Devlin et al., 2019), a novel language representation model. They reach an accuracy of 91.1%. Kim et al. (2019) exploit a densely-connected co-attentive recurrent neural network, and reach 90% accuracy. In our scenario, we generate pseudo premises and hypotheses, then apply the standard transformer encoder (Ashish et al., 2017; Devlin et al., 2019) to train two NLI models.

3 Dataset Construction

We create two datasets for testing our approach for task, dataset, metric, and score (TDMS) identification. Both datasets are taken from a collection of NLP papers in PDF format and both require similar pre-processing. First, we parse the PDFs using GROBID (Lopez, 2009) to extract the title, abstract, and for each section, the section title and its corresponding content. Then we apply an improved table parser we developed, built on GROBID's output, to extract all tables containing numeric cells from the paper. Each extracted table contains the table caption and a list of numeric cells. For each numeric cell, we detect whether it has a bold typeface, and associate it to its corresponding row and column headers. For instance, for the sample paper shown in Figure 1, after processing the table shown, we extract the bolded number "85.60" and find its corresponding column headers "{Test, NER}".

We evaluated our table parser on a set of 10 papers from different venues (e.g., EMNLP, Computational Linguistics journal). In total, these papers contain 50 tables with 1,063 numeric content cells. Table 1 shows the results for extracting different table elements. Our table parser achieves a macro F1 score of 82.6 for identifying table captions, and 74.0 macro F1 for extracting tuples of <Numeric value, Bolded Info, Table caption>. In general, it obtains higher recall than precision in all evaluation dimensions. In the remainder of this section we describe our two datasets in detail.

3.1 NLP-TDMS

The content of the NLP-progress Github repository4 provides us with expert annotations of various leaderboards for a few hundred papers in the NLP domain. The repository is organized following a "language-domain/task-dataset-leaderboard" structure. After crawling this information together with the corresponding papers (in PDF format), we clean the dataset manually. This includes: (1) normalizing task names, dataset names, and evaluation metrics across leaderboards created by different experts, e.g., using "F1" to represent "F-score" and "Fscore"; (2) for each leaderboard table, only keeping the best result from the same paper5; (3) splitting a leaderboard table into several leaderboard tables if its column headers represent datasets instead of evaluation metrics. The resulting dataset NLP-TDMS (Full) contains 332 papers with 848 leaderboard annotations.

4 https://github.com/sebastianruder/NLP-progress

                           Full    Exp
Papers                      332    332
Extracted tables           1269   1269
"Unknown" annotations         –     90
Leaderboard annotations     848    606
Distinct leaderboards       168     77
Distinct tasks               35     18
Distinct datasets            99     44
Distinct metrics             72     30
Table 2: Statistics of leaderboard annotations in NLP-TDMS (Full) and NLP-TDMS (Exp).
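Step (1) of the cleaning amounts to mapping surface forms to canonical taxonomy names. The snippet below is a minimal illustration of such a normalizer; the alias table is a made-up example, not the actual resource used to build NLP-TDMS.

```python
# Illustrative normalization of metric surface forms to canonical taxonomy
# names; the entries here are examples, not the full resource.
METRIC_ALIASES = {
    "f-score": "F1",
    "fscore": "F1",
    "f1 score": "F1",
    "rg-2": "ROUGE-2",
}

def normalize_metric(surface_form: str) -> str:
    """Map a metric mention to its canonical name, falling back to the input."""
    key = surface_form.strip().lower()
    return METRIC_ALIASES.get(key, surface_form.strip())

assert normalize_metric("Fscore") == "F1"
assert normalize_metric("BLEU") == "BLEU"  # unknown forms pass through unchanged
```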
Each leaderboard annotation is a tuple containing task, dataset, metric, and score (as shown in Figure 1). In total, we have 168 distinct leaderboards (i.e., <Task, Dataset, Metric> triples) and only around half of them (77) are associated with at least five papers. We treat these manually curated TDM triples as an NLP knowledge taxonomy and we aim to explore how well we can associate a paper to the corresponding TDM triples. We further create NLP-TDMS (Exp) by removing those leaderboards that are associated with fewer than five papers. If all leaderboard annotations of a paper belong to these removed leaderboards, we tag this paper as "Unknown". Table 2 compares statistics of NLP-TDMS (Full) and NLP-TDMS (Exp). All experiments in this paper (except the experiments in the zero-shot setup in Section 7) are on NLP-TDMS (Exp) and going forward we will refer to that only as NLP-TDMS.

5 In this paper, we focus on tagging papers with different leaderboards (i.e., TDM triples). For each leaderboard table, an ideal situation would be to extract all results reported in the same paper and associate them to different methods; we leave this for future work.

3.2 ARC-PDN

To test our model in a more realistic scenario, we create a second dataset ARC-PDN.6 We select papers (in PDF format) published in ACL, EMNLP, and NAACL between 2010 and 2015 from the most recent version of the ACL Anthology Reference Corpus (ARC) (Bird et al., 2008). Table 3 shows statistics about papers and extracted tables in this dataset after the PDF parsing described above.

          #Papers   #Extracted tables
ACL          1958                4537
EMNLP        1167                3488
NAACL         730                1559
Total        3855                9584
Table 3: Statistics of papers and extracted tables in ARC-PDN.

6 PDN comes from the anthology's directory prefixes for ACL, EMNLP, and NAACL, respectively.

4 Method for TDMS Identification

4.1 Problem Definition

We represent each leaderboard as a <Task, Dataset, Metric> triple (TDM triple). Given an experimental scientific paper D, we want to identify relevant TDM triples from a taxonomy and extract the best numeric score for each predicted TDM triple. However, scientific papers are often long documents and only some parts of the document are useful to predict TDM triples and the associated scores. Hence, we define a document representation, called DocTAET, and a table score representation, called SC (score context), as follows:

DocTAET. For each scientific paper, its DocTAET representation contains the following four parts: Title, Abstract, ExpSetup, and TableInfo. Title and Abstract often help in predicting Task. ExpSetup contains all sentences which are likely to describe the experimental setup, which can help to predict Dataset and Metric. We use a few heuristics to extract such sentences.7 Finally, table captions and column headers are important in predicting Dataset and Metric. We collect them in the TableInfo part. Figure 2 (upper right) illustrates the DocTAET extraction for a given paper.

7 A sentence is included in ExpSetup if it: (1) contains any of the following cue words/phrases: {experiment on, experiment in, evaluation(s), evaluate, evaluated, dataset(s), corpus, corpora}; and (2) belongs to a section whose title contains any of the following words: {experiment(s), evaluation, dataset(s)}.
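The heuristics in footnote 7 translate directly into a simple sentence filter. The sketch below illustrates how the DocTAET representation could be assembled from the parsed paper; the function names and data shapes are our own assumptions, not the authors' released implementation.

```python
# Minimal sketch of the ExpSetup sentence filter from footnote 7 and the
# DocTAET concatenation; cue lists follow the footnote, names are ours.
SENTENCE_CUES = ["experiment on", "experiment in", "evaluation", "evaluations",
                 "evaluate", "evaluated", "dataset", "datasets", "corpus", "corpora"]
SECTION_CUES = ["experiment", "experiments", "evaluation", "dataset", "datasets"]

def in_exp_setup(sentence: str, section_title: str) -> bool:
    """Keep a sentence for ExpSetup if it contains a cue phrase and comes
    from a section whose title contains a cue word."""
    s, t = sentence.lower(), section_title.lower()
    return any(cue in s for cue in SENTENCE_CUES) and any(cue in t for cue in SECTION_CUES)

def build_doctaet(title, abstract, sections, table_infos, max_len=None):
    """Concatenate Title, Abstract, filtered ExpSetup sentences, and TableInfo
    (table captions plus column headers) into the DocTAET string.
    `sections` is a list of (section_title, [sentences]) pairs."""
    exp_setup = [sent for sec_title, sents in sections for sent in sents
                 if in_exp_setup(sent, sec_title)]
    doc = " ".join([title, abstract, " ".join(exp_setup), " ".join(table_infos)])
    return doc if max_len is None else doc[:max_len]
```

SC.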
For each table in a scientific paper, we focus on boldfaced numeric scores because they are more likely to be the best scores for the corresponding TDM triples.8 For a specific boldfaced numeric score in a table, its context (SC) contains its corresponding column headers and the table caption. Figure 2 (lower right) shows the extracted SC for the scores 85.60 and 61.71.

4.2 TDMS-IE System

We develop a system called TDMS-IE to associate TDM triples to a given experimental scientific paper. Our system also extracts the best numeric score for each predicted TDM triple. Figure 3 shows the system architecture for TDMS-IE.

4.2.1 TDMS-IE Classification Models

To predict correct TDM triples and associate the appropriate scores, we adopt a natural language inference (NLI) approach (Poliak et al., 2018) and learn a binary classifier for pairs of document contexts and TDM label hypotheses. Specifically, we split the problem into two tasks: (1) given a document representation DocTAET, we would like to predict whether a specific TDM triple can be inferred (e.g., given a document, we infer <Summarization, Gigaword, ROUGE-2>); (2) we predict whether a <Dataset, Metric> tuple (DM) can be inferred given a score context SC.9 This setup has two advantages: first, it naturally captures the inter-relations between different labels by encoding the three types of labels (i.e., task, dataset, metric) into the same hypothesis. Second, similar to approaches for NLI, it forces the model to focus on learning the similarity patterns between DocTAET and various TDM triples. For instance, the model will capture the similarities between ROUGE-2 and "Rg-2".

Recently, a multi-head self-attention encoder (Ashish et al., 2017) has been shown to perform well in various NLP tasks, including NLI (Devlin et al., 2019). We apply the standard transformer encoder (Devlin et al., 2019) to train our models, one for TDM triple prediction and one for score extraction. In the following we describe how we generate training instances for these two models.

8 We randomly choose 10 papers from NLP-TDMS (Full) and compare their TDMS tuple annotations with the results reported in the original tables. We found that 78% (18/23) of the annotated tuples contain boldfaced numeric scores.
9 We look for the relation SC-DM, rather than SC-TDM, because the task is rarely mentioned in SC.

DocTAET-TDM model. Illustrated in Figure 3 (upper left), this model predicts whether a TDM triple can be inferred from a DocTAET. For a set of n TDM triples ({t1, t2, ..., tn}) from a taxonomy, if a paper di (DocTAET) is annotated with t1 and t2, we then generate two positive training instances (di ⇒ t1 and di ⇒ t2) and n − 2 negative training instances (di ⇏ tj, 2 < j ≤ n).

SC-DM model. Illustrated in Figure 3 (lower left), this model predicts whether a score context SC indicates a DM pair. To form training instances, we start with the list of DM pairs ({p1, p2, ..., pm}) from a taxonomy and a paper di, which is annotated with a TDM triple t (containing p1) and a numeric score s. We first try to extract the score contexts (SC) for all bolded numeric scores. If di's annotated score s is equal to one of the bolded scores sk (typically there should not be more than one), we generate a positive training instance (SCsk ⇒ p1). Negative instances can be generated for this context by choosing other DMs not associated with the context, i.e., m − 1 negative training instances (SCsk ⇏ pj, 1 < j ≤ m).
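The training-instance generation for the two entailment models can be sketched in a few lines; the functions below are an illustrative reconstruction of the procedure just described, with names and data shapes of our choosing.

```python
def make_doctaet_tdm_instances(doctaet, gold_triples, taxonomy_triples):
    """Pair a paper's DocTAET with every TDM triple in the taxonomy:
    label 1 if the triple is annotated for the paper, else 0."""
    return [(doctaet, " ".join(t), 1 if t in gold_triples else 0)
            for t in taxonomy_triples]

def make_sc_dm_instances(score_contexts, gold_score, gold_dm, taxonomy_dms):
    """Pair each bolded score's context (SC) with DM pairs. Only the SC whose
    score matches the annotated score forms a positive example with the
    annotated DM; every other combination is negative."""
    instances = []
    for score, context in score_contexts:      # e.g. (85.60, "Test NER Table 1: ...")
        for dm in taxonomy_dms:
            label = 1 if (score == gold_score and dm == gold_dm) else 0
            instances.append((context, " ".join(dm), label))
    return instances
```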
For example, an SC with "ROUGE for anonymized CNN/Daily Mail" might form a positive instance with DM <CNN / Daily Mail, ROUGE-L>, and then a negative instance with DM <Penn Treebank, LAS>. Additional negative training instances come from bolded scores sk which do not match s (e.g., SCsk ⇏ pj, 1 < k, 1 ≤ j ≤ m).

4.2.2 Inference

During the inference stage (see Figure 3 (right)), for a given scientific paper in PDF format, our system first uses the PDF parser and table extractor (described in Section 3) to generate the document representation DocTAET. We also extract all boldfaced scores and their contexts from each table. Next, we apply the DocTAET-TDM model to predict TDM triples among all TDM triple candidates for the paper.10 Then, to extract scores for the predicted TDM triples, we apply the SC-DM model to every extracted score context (SC) and predicted DM pair (taken from the predicted TDM triples). This step tells us how likely it is that a score context suggests a DM pair. Finally, for each predicted TDM triple, we select the score whose context has the highest confidence in predicting a link to the constituent DM pair.

10 The TDM triple candidates could be the valid TDM triples from the training set, or a set of TDM triples from a taxonomy.

[Figure 2: Examples of document representation (DocTAET) and score context (SC) representation. For the sample paper from Figure 1, the DocTAET concatenates the Title, the Abstract, the ExpSetup sentences describing the experimental setup, and the TableInfo (the concatenation of the table caption and column headers for all tables). The SC of the bolded score 85.60 consists of its column headers "Test, NER" together with the caption "Table 1: Results on the ACE 2005 dev and test sets for the INDEP. (Task-specific factors only) and Joint models."; the SC of 61.71 consists of "Avg. F1" together with the caption "Table 4: CoNLL metric scores for our systems on the CoNLL 2012 blind test set."]

[Figure 3: System architecture for TDMS-IE. In the training stage, the DocTAET-TDM entailment model is trained on pairs of a document representation and a <Task, Dataset, Metric> hypothesis, and the SC-DM entailment model on pairs of a score context and a <Dataset, Metric> hypothesis. In the inference stage, TDM triples are first predicted from the DocTAET; then, for each predicted triple, the score whose context has the highest confidence of entailing the constituent <Dataset, Metric> pair is associated with it.]

5 Experimental Setup

5.1 Training/Test Datasets

We split NLP-TDMS (described in Section 3) into training and test sets. The partitioning ensures that every TDM triple annotated in NLP-TDMS appears both in the training and test set, so that a classifier will not have to predict unseen labels (or infer unseen hypotheses). Table 4 shows statistics on these two splits. The 77 leaderboards in this dataset constitute the set of n TDM triples we aim to predict (see Section 4.2). For evaluation, we report macro- and micro-averaged precision, recall, and F1 score for extracting TDM triples and TDMS tuples over papers in the test set.

                          training   test
Papers                         170    162
Extracted tables               679    590
"Unknown" annotations           46     44
Leaderboard annotations        325    281
Distinct leaderboards           77     77
Table 4: Statistics of training/test sets in NLP-TDMS.

5.2 Implementation Details

Both of our models (DocTAET-TDM and SC-DM) have 12 transformer blocks, 768 hidden units, and 12 self-attention heads. For DocTAET-TDM, we first initialize it using BERT-BASE, then fine-tune the model for 3 epochs with a learning rate of 5e-5. During training and testing, the maximum text length is set to 512 tokens. Note that the document representation DocTAET can contain more than 1000 tokens for some scientific papers, often due to very long content in ExpSetup and TableInfo. Therefore, in these cases, we use only the first 150 tokens from ExpSetup and TableInfo, respectively. We initialize the SC-DM model using the trained DocTAET-TDM model. We suspect that DocTAET-TDM already captures some of the relationship between score contexts and DM pairs. After initialization, we continue fine-tuning the model for 3 epochs with a learning rate of 5e-5. For SC-DM, we set a maximum token length of 128 for both training and testing.

5.3 Baselines

In this section, we introduce three baselines against which we can evaluate our method.

StringMatch (SM). Given a paper, for each TDM triple, we first check whether the content of the title, abstract, or introduction contains the name of the task. Then we inspect the contexts of all extracted boldfaced scores to check whether: (1) the name of the dataset is mentioned in the table caption and one of the associated column headers matches the metric name; or (2) the metric name is mentioned in the table caption and one of the associated column headers matches the dataset name. If more than one numeric score is identified during the previous step, we choose the highest or lowest value according to the property of the metric (e.g., accuracy should be high, while perplexity should be low). Finally, if all of the above conditions are satisfied for a given paper, we predict the TDM triple along with the chosen score.
Otherwise, we tag the paper as "Unknown".

Multi-label classification (MLC). For a machine learning baseline, we treat this task as a multi-class, multi-label classification problem where we would like to predict the TDM label for a given paper (as opposed to predicting whether we can infer a given TDM label based on the paper). The class labels are TDM triples and each paper can have multiple TDM labels, as papers may report results from different tasks, datasets, and with different metrics. For this classification we ignore instances with the 'Unknown' label in training because this does not form a coherent class (and would otherwise dominate the other classes). Then, for each paper, we extract bag-of-words features with tf-idf weights from the DocTAET representation described in Section 4. We train a multinomial logistic regression classifier implemented in scikit-learn (Pedregosa et al., 2011) using SAGA optimization (Defazio et al., 2014). In this multi-label setting, the classifier can return an empty set of labels. When this is the case, we take the most likely TDM label as the prediction. After predicting TDM labels, we need a separate baseline classifier to compare to the SC-DM model. Similar to the SC-DM model, the MLC should predict the best score based on the SC. For training this classifier we form instances from triples of paper, score, and SC (as described in Section 4), with a binary label for whether or not this score is the actual leaderboard score from the paper. This version of the training set for classification has 1,647 instances, but is quite skewed with only 67 true labels. This skew is not as problematic because for this baseline we are not classifying whether or not the SC matches the leaderboard score, but instead we simply pick the most likely SC for a given paper.11 The scores chosen (in this case one per paper) are combined with the TDM predictions above to form the final TDMS predictions reported in Section 6.1.

EntityLinking (EL) for TDM triple prediction. We apply the state-of-the-art IE system on scientific literature (Luan et al., 2018) to extract task, material and metric mentions from DocTAET. We then generate possible TDM triples by combining these three types of mentions (note that many combinations could be invalid TDM triples). Finally we link these candidates to the valid TDM triples in a taxonomy12 based on Jaccard similarity. Specifically, we predict a TDM triple for a paper if the similarity score between the triple and a candidate is greater than α (α is estimated on the training set). If no TDM triple is identified, we tag the paper as "Unknown".

11 Papers in the test set have an average of 47.3 scores to choose between.
12 In this experiment, the taxonomy consists of the 77 TDM triples reported in Table 4.

              Macro P  Macro R  Macro F1  Micro P  Micro R  Micro F1
(a) Task + Dataset + Metric Extraction
SM               31.8     30.6      31.0     36.0     19.6      25.4
MLC              42.0     23.1      27.8     42.0     20.9      27.9
EL               18.1     31.8      20.5     24.3     36.3      29.1
TDMS-IE          62.5     75.2      65.3     60.8     76.8      67.8
(b) Task + Dataset + Metric Extraction (excluding papers with "Unknown" annotation)
SM                8.1      6.4       6.9     16.8      7.8      10.6
MLC              56.8     30.9      37.3     56.8     23.8      33.6
EL               24.9     43.6      28.1     29.4     42.0      34.6
TDMS-IE          54.1     65.9      56.6     60.2     73.1      66.0
(c) Task + Dataset + Metric + Score Extraction (excluding papers with "Unknown" annotation)
SM                1.3      1.0       1.1      3.8      1.8       2.4
MLC               6.8      6.1       6.2      6.8      2.9       4.0
TDMS-IE           9.3     11.8       9.9     10.8     13.1      11.8
Table 5: Leaderboard extraction results of TDMS-IE and several baselines on the NLP-TDMS test dataset.
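The linking step of the EL baseline can be sketched as below; treating the Jaccard similarity at the token level, as well as the function names, are our assumptions rather than details taken from the released code.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def link_candidates(candidate_triples, taxonomy_triples, alpha):
    """Link candidate (task, dataset, metric) mentions to taxonomy triples
    whose Jaccard similarity exceeds the threshold alpha."""
    predictions = set()
    for cand in candidate_triples:
        for triple in taxonomy_triples:
            if jaccard(" ".join(cand), " ".join(triple)) > alpha:
                predictions.add(triple)
    return predictions or {"Unknown"}  # no link found: tag the paper as Unknown
```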
6 Experimental Results

6.1 Extraction Results on NLP-TDMS

We evaluate our TDMS-IE on the test dataset of NLP-TDMS. Table 5 shows the results of our model compared to baselines in different evaluation settings: TDM extraction (Table 5a), TDM extraction excluding papers with "Unknown" annotation (Table 5b), and TDMS extraction excluding papers with "Unknown" annotation (Table 5c). TDMS-IE outperforms the baselines by a large margin in all evaluation metrics for the first two evaluation scenarios, where the task is to extract <Task, Dataset, Metric> triples. On test papers with at least one TDM triple annotation, it achieves a macro F1 score of 56.6 and a micro F1 score of 66.0 for predicting TDM triples, versus the 37.3 macro F1 and 33.6 micro F1 of the multi-label classification approach. However, when we add the score extraction (TDMS), even though TDMS-IE outperforms the baselines, the overall performance is still unsatisfactory, underlining the challenging nature of the task. A qualitative analysis showed that many of the errors were triggered by noise from the table parser, e.g., failing to identify bolded numeric scores or column headers (see Table 1). In addition, a few papers bold the numeric scores of methods from previous work when comparing to the state of the art, and our model wrongly predicts these bolded scores for the target TDM triples.

6.2 Ablations

To understand the effect of ExpSetup and TableInfo in the document representation DocTAET for predicting TDM triples, we carry out an ablation experiment. We train and test our system with DocTAET containing only Title+Abstract, Title+Abstract+ExpSetup, and Title+Abstract+TableInfo, respectively. Table 6 reports the results of the different configurations for DocTAET. We observe that both ExpSetup and TableInfo are helpful for predicting TDM triples. It also seems that descriptions from table captions and headers (TableInfo) are more informative than descriptions of experiments (ExpSetup).

Document Representation                  Macro P  Macro R  Macro F1  Micro P  Micro R  Micro F1
Title+Abstract                              11.3     11.3      10.7     47.9     14.2      21.9
Title+Abstract + ExpSetup                   20.8     20.1      19.4     50.0     23.7      32.2
Title+Abstract + TableInfo                  29.6     29.1      28.1     68.6     40.3      50.8
Title+Abstract + ExpSetup + TableInfo       62.5     75.2      65.3     60.8     76.8      67.8
Table 6: Ablation experiment results of TDMS-IE for Task + Dataset + Metric prediction.

6.3 Results on ARC-PDN

To test whether our system can support the construction of various leaderboards from a large number of NLP papers, we apply our model trained on the NLP-TDMS training set to ARC-PDN. We exclude five papers which also appear in the training set and predict TDMS tuples for each paper. The set of 77 candidate TDM triples comes from the training data, and many of these contain datasets that appear only after 2015. Consequently, fewer papers are tagged with these triples. Therefore, for evaluation we manually choose ten TDM triples among all TDM triples with at least ten associated papers. These ten TDM triples cover various research areas in NLP and contain datasets appearing before 2015. For each chosen TDM triple, we rank predicted papers according to the confidence score from the DocTAET-TDM model and manually evaluate the top ten results. Table 7 reports P@1, P@3, P@5, and P@10 for each leaderboard (i.e., TDM triple).

Task:Dataset:Metric                          P@1   P@3   P@5   P@10  #Correct Score  #Wrong Task
Dependency parsing:Penn Treebank:UAS         1.0   1.0   0.8   0.9               2            0
Summarization:DUC 2004 Task 1:ROUGE-2        0.0   0.67  0.8   0.7               0            0
Word sense disambiguation:Senseval 2:F1      0.0   0.0   0.1   0.1               0            0
Word sense disambiguation:SemEval 2007:F1    1.0   1.0   0.8   0.7               1            0
Word segmentation:Chinese Treebank 6:F1      1.0   0.67  0.4   0.2               0            2
Word Segmentation:MSRA:F1                    1.0   0.67  0.6   0.7               2            3
Sentiment analysis:SST-2:Accuracy            1.0   0.67  0.6   0.3               0            3
AMR parsing:LDC2014T12:F1 on All             0.0   0.67  0.4   0.2               0            5
CCG supertagging:CCGBank:Accuracy            1.0   1.0   1.0   0.8               0            1
Machine translation:WMT 2014 EN-FR:BLEU      1.0   0.33  0.2   0.1               0            0
Macro-average                                0.70  0.67  0.57  0.46
Table 7: Results of TDMS-IE for ten leaderboards on ARC-PDN.

The macro average P@1 and P@3 are 0.70 and 0.67, respectively, which is encouraging. Overall, 86% of papers are related to the target task T. We found that most false positives are due to the fact that these papers conduct research on the target task T, but report results on a different dataset or use the target dataset D as a resource to extract features. For instance, most predicted papers for the leaderboard <Machine translation, WMT 2014 EN-FR, BLEU> are papers about machine translation, but these papers report results on the dataset WMT 2012 EN-FR or WMT 2014 EN-DE. For TDMS extraction, only five extracted TDMS tuples are correct. This is a challenging task and more effort is required to address it in the future.

7 Zero-shot TDM Classification

Since our framework in principle captures the similarities between DocTAET and various TDM triples, we expect that it can perform zero-shot classification of new TDM triples at test time. We split NLP-TDMS (Full) into training/test sets. The training set contains 210 papers with 96 distinct TDM triple annotations and the test set contains 108 papers whose TDM triple annotations do not appear in the training set. We train our DocTAET-TDM model on the training set as described in Section 4.2.1. At test time, we use all valid TDM triples from NLP-TDMS (Full) to form the hypothesis space. To improve efficiency, one could also reduce this hypothesis space by focusing on the related Task or Dataset mentioned in the paper. On the test set of this zero-shot TDM classification, our model achieves a macro F1 score of 41.6 and a micro F1 score of 54.9, versus the 56.6 macro F1 and 66.0 micro F1 of the few-shot TDM classification described in Section 6.1.

8 Conclusions

In this paper, we have reported a framework to automatically extract tasks, datasets, evaluation metrics and scores from a set of published scientific papers in PDF format, in order to reconstruct the leaderboards for various tasks. We have proposed a method, inspired by natural language inference, to facilitate learning similarity patterns between labels and the content words of papers. Our first model extracts <Task, Dataset, Metric> (TDM) triples, and our second model associates the best score reported in the paper to the corresponding TDM triple. We created two datasets in the NLP domain to test our system. Experiments show that our model outperforms the baselines by a large margin in the identification of TDM triples. In the future, more effort is needed to extract the best score. In addition, since the work reported in this paper is based on a small TDM taxonomy, we plan to construct a TDM knowledge base and provide an applicable system for a wide range of NLP papers.

Acknowledgments

The authors appreciate the valuable feedback from the anonymous reviewers.
5212 References Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Peters, Joanna Power, Sam Skjonsberg, Lucy Wang, Chris Willhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the literature graph in semantic scholar. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), New Orleans, Louisiana, 1 – 6 June 2018, pages 84–91. Waleed Ammar, Matthew Peters, Chandra Bhagavatula, and Russell Power. 2017. The AI2 system at SemEval-2017 Task 10 (ScienceIE): Semisupervised end-to-end entity and relation extraction. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada, 3 – 4 August 2017, pages 592–596. Timothy G. Armstrong, Alistair Moffat, William Webber, and Justin Zobel. 2009. Improvements that don’t add up: Ad-hoc retrieval results since 1998. In Proceedings of the ACM 18th Conference on Information and Knowledge Management (CIKM 2009), Hong Kong, China, 2–6 November 2009, pages 601–610. Vaswani Ashish, Shazeer Noam, Parmar Niki, Uszkoreit Jakob, Jones Llion, Gomez Aidan N., Kaiser Lukasz, and Polosukhin Illia. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 1–11. Awais Athar and Simone Teufel. 2012a. Contextenhanced citation sentiment detection. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Montr´eal, Qu´ebec, Canada, 3–8 June 2012, pages 597–601. Awais Athar and Simone Teufel. 2012b. Detection of implicit citations for sentiment detection. In Proceedings of the Workshop on Detecting Structure in Scholarly Discourse, Jeju Island, Republic of Korea, 12 July, pages 18–26. Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. SemEval 2017 Task 10: ScienceIE - Extracting keyphrases and relations from scientific publications. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada, 3 – 4 August 2017, pages 546– 555. Isabelle Augenstein and Anders Søgaard. 2017. Multitask learning of keyphrase boundary classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada, 30 July – 4 August 2017, pages 341–346. Steven Bird, Robert Dale, Bonnie Dorr, Bryan Gibson, Mark Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir Radev, and Yee Fan Tan. 2008. The ACL anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. In Proceedings of the 6th International Conference on Language Resources and Evaluation, Marrakech, Morocco, 26 May – 1 June 2008, pages 1755–1759. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015, pages 632–642. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, pages 177–190, Heidelberg, Germany. 
Aaron Defazio, Francis Bach, and Simon LacosteJulien. 2014. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 1646–1654. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, USA, 2–7 June 2019, pages 4171–4186. Kathrin Eichler, Feiyu Xu, Hans Uszkoreit, and Sebastian Krause. 2017. Generating pattern-based entailment graphs for relation extraction. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), Vancouver, Canada, 3 – 4 August 2017, pages 220–229. Nicola Ferro, Norbert Fuhr, and Andreas Rauber. 2018. Introduction to the special issue on reproducibility in information retrieval: Evaluation campaigns, collections, and analyses. Journal of Data and Information Quality, 10(3):9:1–9:4. Kata G´abor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang QasemiZadeh, Ha¨ıfa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. In Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT, New Orleans, Louisiana, June 5-6, 2018, pages 679–688. Sonal Gupta and Christopher Manning. 2011. Analyzing the dynamics of research by extracting key 5213 aspects of scientific papers. In Proceedings of 5th international joint conference on natural language processing, Chiang Mai, Thailand, 8–13 November 2011, pages 1–9. Kata G´abor, Haifa Zargayouna, Davide Buscaldi, Isabelle Tellier, and Thierry Charnois. 2016. Semantic annotation of the ACL anthology corpus for the automatic analysis of scientific literature. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Portoroˇz, Slovenia, 23–28 May 2016. David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391–406. Seonhoon Kim, Jin-Hyuk Hong, Inho Kang, and Nojun Kwak. 2019. Semantic sentence matching with densely-connected recurrent and co-attentive information. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Hawaii, USA, 27 January–1 February 2019. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July– 2 August 2019. Patrice Lopez. 2009. GROBID: combining automatic bibliographic data recognition and term extraction for scholarship publications. In The 13th European Conference on Digital Libraries (ECDL 2009), Corfu, Greece, 27 September 27 – 2 October, 2009, pages 473–474. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October– 4 November 2018, pages 3219–3232. Yi Luan, Mari Ostendorf, and Hannaneh Hajishirzi. 2017. 
Scientific information extraction with semisupervised neural tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 November 2017, pages 2641–2651. Abiola Obamuyide and Andreas Vlachos. 2018. Zeroshot relation classification as textual entailment. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Brussels, Belgium, 1 November 2018, pages 72–78. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October– 4 November 2018, pages 67–81. Chen-Tse Tsai, Gourab Kundu, and Dan Roth. 2013. Concept-based analysis of scientific literature. In Proceedings of the ACM 22nd Conference on Information and Knowledge Management (CIKM 2013), San Francisco, California, 27 October–1 November 2013, pages 1733–1738. Adam Vogel and Dan Jurafsky. 2012. He said, she said: Gender in the acl anthology. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, Jeju Island, Republic of Korea, 10 July, pages 33–41. Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is everything: Recasting semantic resources into a unified evaluation framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP 2017), Taipei, Taiwan, 27 November – 1 December 2017, pages 996– 1005.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5214–5223 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5214 Scaling Up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title Huimin Xu1,2, Wenting Wang2, Xin Mao1,2, Xinyu Jiang1, Man Lan1∗ 1 School of Computer Science and Software Engineering, East China Normal University 2 Alibaba Group {hmxu,xinmao,xyjiang}@stu.ecnu.edu.cn, [email protected] {xinjiu.xhm,nantiao.wwt,sunian.mx}@alibaba-inc.com Abstract Supplementing product information by extracting attribute values from title is a crucial task in e-Commerce domain. Previous studies treat each attribute only as an entity type and build one set of NER tags (e.g., BIO) for each of them, leading to a scalability issue which unfits to the large sized attribute system in real world e-Commerce. In this work, we propose a novel approach to support value extraction scaling up to thousands of attributes without losing performance: (1) We propose to regard attribute as a query and adopt only one global set of BIO tags for any attributes to reduce the burden of attribute tag or model explosion; (2) We explicitly model the semantic representations for attribute and title, and develop an attention mechanism to capture the interactive semantic relations in-between to enforce our framework to be attribute comprehensive. We conduct extensive experiments in real-life datasets. The results show that our model not only outperforms existing state-of-the-art NER tagging models, but also is robust and generates promising results for up to 8, 906 attributes. 1 Introduction Product attributes are vital to e-Commerce as platforms need attribute details to make recommendations and customers need attribute information to compare products and make purchase decisions. However, attribute information is often noisy and incomplete because of the inevitable hurdles posed to retailers by the extremely huge and complex e-Commerce attribute system. On the other hand, product titles which are carefully designed by retailers are packed tightly with details to highlight all important aspects of products. Figure 1 shows the product page of a ‘dress’ from AliExpress1 which is an emerging and fast1https://www.aliexpress.com/ Figure 1: Snapshot of a product page. growth global e-Commerce platform. The product title “2019 Summer Women Button Decorated Print Dress Off-shoulder Party Beach Sundress Boho Spaghetti Long Dresses Plus Size FICUSRONG” contains attribute values: (1) already listed in Item Specifics, such as ‘Women’ for Gender, ‘Summer’ for Season, etc; (2) missing in Item Specifics, such as ‘2019’ for Year, ‘Plus Size’ for Size, etc. In this paper, we are interested in supplementing attribute information from product titles, especially for the real world e-Commerce attribute system with thousands of attributes built-in and new attributes and values popping out everyday. Previous work (Ghani et al., 2006; Ling and Weld, 2012; Sheth et al., 2017) on attribute value extraction suffered from Closed World Assumption which heavily depends on certain pre-defined attribute value vocabularies. These methods were unable to distinguish polysemy values such as ‘camel’ which could be the Color for a sweater rather than its Brand Name, or find new attribute values which have not been seen before. 
More recently, many research works (More, 2016; Zheng et al., 2018) formulate attribute value extraction 5215 problem as a special case of Named Entity Recognition (NER) task (Bikel et al., 1999; Collobert et al., 2011). They adopted sequence tagging models in NER as an attempt to address the Open World Assumption purely from the attribute value point of view. However, such tagging approach still failed to resolve two fundamental challenges in real world e-Commerce domain: Challenge 1. Need to scale up to fit the large sized attribute system in the real world. Product attribute system in e-Commerce is huge and may overlap cross domains because each industry designs its own standards. The attribute size typically falls into the range from tens of thousands to millions, conservatively. For example, Sports & Entertainment category from AliExpress alone contains 344, 373 products (may vary daily) with 77, 699 attributes and 482, 780 values. Previous NER tagging models have to introduce one set of entity tags (e.g., BIO tags) for each attribute. Thus, the large attribute size in reality renders previous works an infeasible choice to model attribute extraction. Moreover, the distribution of attributes is severely skewed. For example, 85% of attributes appear in less than 100 Sports & Entertainment products. Model performance could be significantly degraded for such rarely occurring attributes (e.g., Sleeve Style, Astronomy, etc.) due to insufficient data. Challenge 2. Need to extend Open World Assumption to include new attribute. With the rapid development of e-Commerce, both new attributes and values for newly launched products are emerging everyday. For example, with the recent announcement of ‘foldable mobile phone, a new attribute Fold Type is created to describe how the mobile phone can be folded with corresponding new attribute values ‘inward fold’, ‘outward fold’, etc. Previous NER tagging models view each attribute as a separate entity type and neglect the hidden semantic connections between attributes. Thus, they all fail to identify new attributes with zero manual annotations. In this paper, to address the above two issues, we propose a novel attribute-comprehension based approach. Inspired by Machine Reading Comprehension (MRC), we regard the product title and product attribute as ‘context’ and ‘query’ respectively, then the ‘answer’ extracted from ‘context’ equals to the attribute value wanted. Specifically, we model the contexts of title and attribute respectively, capture the semantic interaction between them by attention mechanism, and then use Conditional Random Fields (CRF) (Lafferty et al., 2001) as output layer to identify the corresponding attribute value. The main contributions of our work are summarized as follows: • Model. To our knowledge, this is the first framework to treat attribute beyond NER type alone but leverage its contextual representation and interaction with title to extract corresponding attribute value. • Learning. Instead of the common BIO setting where each attribute has its own BIO tags, we adopt a novel BIO schema with only one output tag set for all attributes. This is enabled by our model designed to embed attribute contextually rather than attribute tag along. Then learning to extract thousands of attributes first becomes feasible. • Experiments. Extensive experiments in real world dataset are conducted to demonstrate the efficacy of our model. 
The proposed attribute-comprehension based model outperforms state-of-the-art models by average 3% in F1 score. Moreover, the proposed model scales up to 8, 906 attributes with an overall F1 score of 79.12%. This proves its ability to produce stable and promising results for not only low and rare frequency attributes, but also new attributes with zero extra annotations. To the best of our knowledge, this is the first framework to address the two fundamental real world issues for open attribute value extraction: scalability and new-attribute. Our proposed model does not make any assumptions on attribute size, attribute frequencies or the amount of additional annotations needed for new attributes. The rest of the paper is organized as follows. Section 2 gives a formal problem statement for this task. Section 3 depicts our proposed model in details. Section 4 lists the experimental settings of this work. Section 5 reports the experimental results and analysis. Section 6 summarizes the related work, followed by a conclusion in Section 7. 2 Problem Statement In this section, we formally define the attribute value extraction task. Given product title T and 5216 ... ... ...... Word Representation Layer Contextual Embedding Layer Attention Layer CRF Output Layer B ... ... ... ... Title Attribute x O I ... ... LSTM LSTM LSTM BERT LSTM BERT Attention Layer LSTM m t x 2 t x 1 t x 1 a x n a Figure 2: Architecture of the proposed attribute-comprehension open tagging model. attribute A, our goal is to extract corresponding attribute value for A from T. For example, the title and attributes from Figure 1 are given as below: • Product Title: 2019 Summer Women Button Decorated Print Dress Off-shoulder Party Beach Sundress Boho Spaghetti Long Dresses Plus Size FICUSRONG. • Attributes: Season, Gender, Neckline Considering the three attributes of interest, i.e., Season, Gender and Neckline, we aim to obtain ‘Summer’ for Season, ‘Women’ for Gender and ‘NULL’ for Neckline, where the former two attributes are described in title but the latter is not presented in title. Formally, given the product title T = {xt 1, xt 2, . . . , xt m} of length m and attribute A = {xa 1, xa 2, . . . , xa n} of length n, our model outputs the tag sequence y = {y1, y2, . . . , ym}, yi ∈ {B, I, O}, where B and I denote the beginning and inside tokens for the extracted attribute value respectively, and O denotes outside of the value. 3 Attribute-Comprehension Open Tagging Model Previous work on sequence tagging built one model for every attribute with a corresponding set of attribute-specific tags. Such approach is unrealistic on real-life large sized attribute set because of two reasons: (1) it is computationally inefficient to model thousands of attributes; (2) very limited data samples are presented for most attributes resulting in non-guaranteed performance. To tackle the two challenges raised in Section 1, we propose a novel attribute-comprehension based open tagging approach to attribute value extraction. Figure 2 shows the architecture of our proposed model. At first glance, our model, adopting BiLSTM, attention and CRF components, looks similar to previous sequence tagging systems including BiLSTM (Huang et al., 2015) and OpenTag (Zheng et al., 2018). 
But in fact our model is fundamentally different from previous works: instead of regarding the attribute as only a tag, we model the attribute semantically, capture its semantic interaction with the title via an attention mechanism, and then feed the attribute-comprehension title representation to a CRF for final tagging. Next we describe the architecture of our model in detail.

Word Representation Layer. We map each word in the title and attribute to a high-dimensional vector space through the pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), which is the state-of-the-art language representation model. For each word in a sentence, BERT generates a particular word representation which considers the specific context. Formally, BERT encodes the title T and attribute A into sequences of word representations {w_1^t, w_2^t, ..., w_m^t} and {w_1^a, w_2^a, ..., w_n^a}.

Contextual Embedding Layer. Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) address the vanishing gradient problem and are capable of modeling long-term contextual information along the sequence. A bidirectional LSTM (BiLSTM) captures the context from both past and future time steps jointly, while a vanilla LSTM only considers the contextual information from the past. In this work, we adopt two BiLSTMs to model the title and attribute representations individually. One BiLSTM is used to get hidden states as the contextual representation of the title, H^t = {h_1^t, h_2^t, ..., h_m^t}:

h_i^t = \big[\overrightarrow{h_i^t}; \overleftarrow{h_i^t}\big] = \mathrm{BiLSTM}\big(\overrightarrow{h_{i+1}^t}, \overleftarrow{h_{i-1}^t}, w_i^t\big)

Another BiLSTM is used to obtain the attribute representation. Slightly different from the design for the title, we only use the last hidden state of the BiLSTM as the attribute representation h^a, since the length of the attribute is normally much shorter (i.e., no more than 5):

h^a = \big[\overrightarrow{h_n^a}; \overleftarrow{h_n^a}\big] = \mathrm{BiLSTM}\big(\overrightarrow{h_n^a}, \overleftarrow{h_n^a}, w_n^a\big)

Attention Layer. In Natural Language Processing (NLP), the attention mechanism was first used in Neural Machine Translation (NMT) (Bahdanau et al., 2014) and has achieved great success. It is designed to highlight the important information in a sequence, instead of paying attention to everything. OpenTag (Zheng et al., 2018) uses self-attention (Vaswani et al., 2017) to capture the important tokens in the title, but treats the attribute only as a type and neglects the attribute's semantic information. Thus, OpenTag has to introduce one set of tags (Ba, Ia) for each attribute a, leading to its failure to be applicable in e-Commerce, which has tens of thousands of attributes. Different from their work, our model takes the hidden semantic interaction between attribute and title into consideration by computing the similarities between the attribute and each word in the title. This means different tokens in the title are attended to in order to extract values for different attributes, resulting in a different weight matrix. Thus, our model is able to handle huge numbers of attributes with only one set of tags (B, I, O). Even for attributes that have never been seen before, our model is able to identify tokens associated with them from the title by modeling their semantic information. We first compute the similarity between the attribute and each word in the title to obtain the attention vector S = {α_1, α_2, ..., α_m}. The attribute-comprehension title representation is C = S ⊙ H^t, where ⊙ represents element-wise multiplication. This vector indicates the weighted sum of words in the title with respect to the attribute.
The similarity function between two vectors is measured by cosine similarity:

\alpha_i = \mathrm{cosine}\big(h_i^t, h^a\big)

Output Layer. The goal of this task is to predict a tag sequence that marks the position of attribute values in the title. A CRF is often used in sequence tagging models because it captures the dependency between output tags in a neighborhood. For example, if we already know that the tag of a token is I, this decreases the probability of the next token being B. We concatenate the title representation H^t and the attribute-comprehension title C to obtain a matrix M = [H^t; C], which is passed into the CRF layer to predict the tag sequence. Each column vector of M is expected to contain contextual information about the word with respect to the title and attribute. The joint probability distribution of tags y is given by:

\Pr(y \mid T; \psi) \propto \prod_{i=1}^{m} \exp\Big(\sum_{k=1}^{K} \psi_k f_k(y_{i-1}, y_i, M_i)\Big)

where \psi_k is the corresponding weight, f_k is the feature function, and K is the number of features. The final output is the best label sequence y^* with the highest conditional probability:

y^* = \operatorname{argmax}_y \Pr(y \mid u; \psi)

Training. For training this network, we use maximum conditional likelihood estimation:

L(\psi) = \sum_{i=1}^{N} \Pr(y_i \mid u_i; \psi)

where N is the number of training instances.

Groups     Occurrence        # of Attributes   Example of attributes
High       [10,000, ∞)                    10   Gender, Brand Name, Model Number, Type, Material
Sub-high   [1,000, 10,000)                60   Feature, Color, Category, Fit, Capacity
Medium     [100, 1,000)                  248   Lenses Color, Pattern, Fuel, Design, Application
Low        [10, 100)                     938   Heel, Shaft, Sleeve Style, Speed, Carbon Yarn
Rare       [1, 10)                     7,650   Tension, Astronomy, Helmet Light, Flashlight Pouch
Table 1: The statistics and examples of 8,906 attributes with different frequencies in dataset AE-650K.
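Before moving to the experimental setup, the following NumPy sketch illustrates the attention layer and the construction of the CRF input, under one plausible reading of C = S ⊙ H^t (per-word weighting of the title hidden states); it is an illustration, not the authors' Keras/TensorFlow implementation.

```python
import numpy as np

def attribute_attention(H_t: np.ndarray, h_a: np.ndarray) -> np.ndarray:
    """Given title hidden states H_t of shape (m, d) and an attribute vector
    h_a of shape (d,), return M = [H_t; C] of shape (m, 2d), where C weights
    each title hidden state by its cosine similarity to the attribute."""
    norms = np.linalg.norm(H_t, axis=1) * np.linalg.norm(h_a) + 1e-8
    S = (H_t @ h_a) / norms                  # attention vector, shape (m,)
    C = S[:, None] * H_t                     # element-wise weighting, shape (m, d)
    return np.concatenate([H_t, C], axis=1)  # input to the CRF layer, shape (m, 2d)

# Toy usage: 6 title tokens with 4-dimensional hidden states.
M = attribute_attention(np.random.randn(6, 4), np.random.randn(4))
assert M.shape == (6, 8)
```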
For example, one Barlow lens product has value ‘Telescope Eyepiece for Astron2https://www.aliexpress.com/item/32956754932.html Attributes Train Dev Test Brand Name 50,413 5,601 14,055 Material 22,814 2,534 6,355 Color 5,594 621 1,649 Category 5,906 590 1,462 Total 84,727 9,346 23,521 Table 2: Statistics of dataset AE-110K. omy 3. In addition, we find these attributes has “long tail” phenomenon, that is, a small number of general attributes can basically define a product while there are a large number of specific attributes to define products more detailedly. These details are important in the accurate produces recommendation or other personalized services. In order to make fair comparison between our model and previous sequence tagging models which cannot handle huge amounts of attributes, we pick up the four frequent attributes (i.e., Brand Name, Material, Color and Category) to compose the second dataset AE-110k with a total of 117, 594 triples. Table 2 shows the statistics and distributions of attributes in AE-110k. Moreover, since the dataset is automatically constructed based on Exact Match criteria by pairing product title with its attributes and values present in Item Specific, it may involve some noises for positive triples. For example, the title of a ‘dress’ contains ‘long dresses’, the word ‘long’ may be tagged as values for attributes Sleeve Length and Dresses Length simultaneously. Thus we randomly sampled 1, 500 triples from AE-650k for manual evaluation and the accuracy of automatic labeling is 95.6%. This shows that the dataset is high-quality. 3https://www.aliexpress.com/item/32735772355.html 5219 4.2 Evaluation Metrics We use precision, recall and F1 score as evaluation metrics denoted as P, R and F1. We follow Exact Match criteria in which the full sequence of extracted value need to be correct. Clearly, this is a strict criteria as one example gets credit only when the tag of each word is correct. 4.3 Baselines To make the comparison reliable and reasonable, three sequence tagging models serve as baselines due to their reported superior tagging results like OpenTag (Zheng et al., 2018) or their typical representation (Huang et al., 2015). • BiLSTM uses the pre-trained BERT model to represent each word in title, then applies BiLSTM to produce title contextual embedding. Finally, a softmax function is exploited to predict the tag for each word. • BiLSTM-CRF(Huang et al., 2015) is considered to be the pioneer and the state-of-the-art sequence tagging model for NER which uses CRF to model the association of predicted tags. In this baseline, the hidden states generated by BiLSTM are used as input features for CRF layer. • OpenTag(Zheng et al., 2018) is the recent sequence tagging model for this task which adds self-attention mechanism to highlight important information before CRF layer. Since the source code of OpenTag is not available, we implement it using Keras. 4.4 Implementation Details All models are implemented with Tensorflow (Abadi et al., 2016) and Keras (Chollet et al., 2015). Optimization is performed using Adam (Kingma and Ba, 2014) with default parameters. We train up to 20 epochs for each model. The model that performs the best on the development set is then used for the evaluation on the test set. For all models, the word embeddings are pre-trained via BERT and the dimension is 768. The dimension of the hidden states in BiLSTM is set to 512 and the minibatch size is fixed to 256. The BIO tagging strategy is adopted. 
Note that only one global set of BIO tags for any attributes is used in this work. Attributes Models P (%) R (%) F1 (%) Brand Name BiLSTM 95.08 96.81 95.94 BiLSTM-CRF 95.45 97.17 96.30 OpenTag 95.18 97.55 96.35 Our model-110k 97.21 96.68 96.94 Our model-650k 96.94 97.14 97.04 Material BiLSTM 78.26 78.54 78.40 BiLSTM-CRF 77.15 78.12 77.63 Opentag 78.69 78.62 78.65 Our model-110k 82.76 83.57 83.16 Our model-650k 83.30 82.94 83.12 Color BiLSTM 68.08 68.00 68.04 BiLSTM-CRF 68.13 67.46 67.79 Opentag 71.19 70.50 70.84 Our model-110k 75.11 72.61 73.84 Our model-650k 77.55 72.80 75.10 Category BiLSTM 82.74 78.40 80.51 BiLSTM-CRF 81.57 79.94 80.75 Opentag 82.74 80.63 81.67 Our model-110k 84.11 80.80 82.42 Our model-650k 88.11 81.79 84.83 Table 3: Performance comparison between our model and three baselines on four frequent attributes. For baselines, only the performance on AE-110K is reported since they do not scale up to large set of attributes; while for our model, the performances on both AE-110K and AE-650K are reported. 5 Results and Discussion We conduct a series of experiments under various settings with the purposes to (1) make comparison of attribute extraction performance on frequent attributes with existing state-of-the-art models; (2) explore the scalability of our model up to thousands of attributes; and (3) examine the capability of our model in discovering new attributes which have not been seen before. 5.1 Results on Frequent Attributes The first experiment is conducted on four frequent attributes (i.e., with sufficient data) on AE-110k and AE-650k datasets. Table 3 reports the comparison results of our two models (on AE-110k and AE-650k datasets) and three baselines. It is observed that our models are consistently ranked the best over all competing baselines. This indicates that our idea of regarding ‘attribute’ as ‘query’ successfully models the semantic information embedded in attribute which has been ignored by previous sequence tagging models. Besides, different from the self-attention mechanism only in5220 84.1 87.6 83.7 73.7 65.5 50.1 76.1 81.8 76.3 59.9 45.9 27.7 79.1 84.6 79.8 66.1 53.9 35.7 0 10 20 30 40 50 60 70 80 90 100 All High Sub-high Medium Low Rare Micro-P(%) Micro-R(%) Micro-F1(%) Figure 3: Performance of our model on 8, 906 attributes in AE-650K dataset. ‘All’ stands for all attributes while ‘High’, ‘Sub-high’, ‘Medium’, ‘Low’ and ’Rare’ denote the five frequency groups of attributes defined in Table 1, respectively. side title adopted by OpenTag, our interacted similarity between attribute and title does attend to words which are more relevant to current extraction. In addition, our model is the only one that can be applied to AE-650K dataset which contains 8, 906 types of attributes. From Table 3, we compare the performance of our two models trained on different sizes of triples. It is interesting to find that extra training data on other attributes boosts the performances of the target four attributes, and outperforms the best baseline by average 3% in F1 score. We believe the main reason is that all the other attributes in AE-650k can be viewed as relevant tasks from Multi-task (Caruana, 1997) perspective. Usually, the model would take the risk of over-fitting if it is only optimized upon the target attributes due to unavoidable noises in the dataset. 
However, the Multi-task learning implicitly increases training data of other relevant tasks having different noise patterns and can average these noise patterns to obtain a more general representation and thus improve generalization of the model. 5.2 Results on Thousands of Attributes The second experiment is to explore the scalability of models up to thousands of attributes. Clearly, previous sequence tagging models fail to report results on large amounts of tags for attributes. Using a single model to handle large amounts of attributes is one advantage of our model. To verify this characteristic, we compute Micro-P, Micro-R, Micro-F1 on entire test set of AE-650k, as shown in the leftmost set of columns of Figure 3. The performances of our model on 8, 906 attributes reach 84.13%, 76.08% and 79.12%, respectively. Attributes P (%) R (%) F1 (%) Frame Color 63.16 48.00 54.55 Lenses Color 64.29 40.91 50.00 Shell Material 54.05 44.44 48.78 Wheel Material 70.59 37.50 48.98 Product Type 64.86 43.29 51.92 Table 4: Performance of our model in discovering values for new attributes. In order to validate the robustness of our model, we also perform experiments on five attribute frequency groups defined in Table 1. Their results are shown in Figure 3. We observe that our model achieves Micro-F1 of 84.60% and 79.79% for frequent attributes in ‘High’ and ‘Sub-high’ groups respectively. But more importantly, our model achieves good performance (i.e., Micro-F1 66.06% and 53.94% respectively) for less frequent attributes in ‘Medium’ and ‘Low’ groups, and even a promising result (i.e., Micro-F1 35.70%) for ‘Rare’ attributes which are presented less than 10 times. Thus, we are confident to conclude that our model has the ability to handle large amounts of attributes with only a single model. 5.3 Results of Discovering New Attributes To further examine the ability of our model in discovering new attributes which has never been seen before, we select 5 attributes with relatively low occurrences: Frame Color, Lenses Color, Shell Material, Wheel Material, and Product Type. We shuffle the AE-650K dataset to make sure they are not in training and development set, and evaluate the performance for these 5 attributes. Table 4 reports the results of discovering 5 new attributes. It is not surprising to see that our model still achieves acceptable performance (i.e., averaged F1 50.85%) on new attributes with no additional training data. We believe that some data in training set are semantically related to unseen attributes and they provide hints to help the extraction. To further confirm this hypothesis, we map attributes features ha generated by contextual embedding layer into two-dimensional space by tSNE (Rauber et al., 2016), as shown in Figure 4. In Figure 4 the four colors of circles represent the attributes of Color-related,4 Type-related, Materi4‘a-related’ denotes all attributes whose text contains the substring a. 5221 Color Type Material Other Color Lenses Color Frame Color Material Wheel Material Shell Material Type Product Type Fabric Type Plastic Type Figure 4: Distribution between semantically related new and existing attributes. E.g., Shell Material and Wheel Material are new attributes while Material is frequently known attributes. al-related and others respectively, and the areas are proportional to the frequency of attributes. An interesting observation is that Color-related and Material-related attributes are clustered into a small and concentrated area of two-dimensional space, respectively. 
Meanwhile, although Type and Product Type are very close, the distribution of all Type-related attributes is scattered in general. It may be because Type is not a specifically defined concept compared to Color or Material, the meaning of a Type-related attribute is determined by the word paired with Type. Therefore, we select two Type-related attributes adjacent to Material and find they are Fabric Type and Plastic Type. In fact, these two attributes are indeed relevant to the material of products. To verify the ability of our model to handle a larger number of new attributes, we collect additional 20, 532 products from new category Christmas, and form 46, 299 triples as test set. The Christmas test set contains 1, 121 types of attributes, 708 of which are new attributes. Our model achieves Micro-F1 of 66.37% on this test set. This proves that our model has good generalization and is able to transfer to other domains with a large number of new attributes. 5.4 Attention Visualizations To illustrate the attention learned from the product in Figure 1, we plot the heat map of attention vectors S for three attributes (Year, Color and Brand Name) where the lighter the color is the higher the weight is. Since each bar in the heat map represents the importance of a word in the title of each Year Color Brand Name Figure 5: The heat map of attention vector S. attribute, it indirectly affects the prediction decision. By observing Figure 5, we see that our model indeed adjusts the attention vector according to different attributes to highlight the value. 6 Related Work Previous work for attribute value extraction use rule-based extraction techniques (Vandic et al., 2012; Gopalakrishnan et al., 2012) which use domain-specific seed dictionary to spot key phrase. Ghani et al. (2006) predefine a set of product attributes and utilize supervised learning method to extract the corresponding attributes values. An NER system was proposed by Putthividhya and Hu (2011) for extracting product attributes and values. In this work, supervised NER and bootstrapping technology are combined to expand the seed dictionary of attribute values. However, these methods suffer from Limited World Assumption. More (2016) build a similar NER system which leverage existing values to tag new values. With the development of deep neural network, several different neural network methods have been proposed and applied in sequence tagging successfully. Huang et al. (2015) is the first to apply BiLSTM-CRF model to sequence tagging task, but this work employ heavy feature engineering to extract character-level features. Lample et al. (2016) utilize BiLSTM to model both word-level and character-level information rather than hand-crafted features, thus construct end-toend BiLSTM-CRF model for sequence tagging task. Convolutional neural network (CNN) (Le5222 Cun et al., 1989) is employed to model characterlevel information in Chiu and Nichols (2016) which achieves competitive performance for two sequence tagging tasks at that time. Ma and Hovy (2016) propose an end to end LSTM-CNNs-CRF model. Recently, several approaches employ sequence tagging model for attribute value extraction. Kozareva et al. (2016) adopt BiLSTM-CRF model to tag several product attributes from search queries with hand-crafted features. Furthermore, Zheng et al. (2018) propose an end-to-end tagging model utilizing BiLSTM, CRF, and Attention without any dictionary and hand-crafted features. 
Besides extracting attribute value from title, other related tasks have been defined. Nguyen et al. (2011); Sheth et al. (2017); Qiu et al. (2015) extracted attribute-value pairs from specific product description. 7 Conclusion To extract product attribute values in e-Commerce domain, previous sequence tagging models face two challenges, i.e., the huge amounts of product attributes and the emerging new attributes and new values that have not been seen before. To tackle the above issues, we present a novel architecture of sequence tagging with the integration of attributes semantically. Even if the attribute size reaches tens of thousands or even millions, our approach only trains a single model for all attributes instead of building one specific model for each attribute. When labeling new attributes that have not encountered before, by leveraging the learned information from existing attributes which have similar semantic distribution as the new ones, this model is able to extract the new values for new attributes. Experiments on a large dataset prove that this model is able to scale up to thousands of attributes, and outperforms state-of-the-art NER tagging models. Acknowledgements The authors wish to thank all reviewers for their helpful comments and suggestions. This work was supported by Alibaba Group through Alibaba Innovative Research (AIR) Program. This work has been completed during Huimin Xu and Xin Mao’s internship in Alibaba Group. References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2016, Savannah, GA, USA, November 2-4, 2016., pages 265–283. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Daniel M. Bikel, Richard M. Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what’s in a name. Machine Learning, 34(1-3):211–231. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Franc¸ois Chollet et al. 2015. Keras. https:// keras.io. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew E. Fano. 2006. Text mining for product attribute extraction. SIGKDD Explorations, 8(1):41–48. Vishrawas Gopalakrishnan, Suresh Parthasarathy Iyengar, Amit Madaan, Rajeev Rastogi, and Srinivasan H. Sengamedu. 2012. Matching product titles using web-based enrichment. In 21st ACM International Conference on Information and Knowledge Management, CIKM’12, Maui, HI, USA, October 29 - November 02, 2012, pages 605–614. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. 
Neural Computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. 5223 Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Zornitsa Kozareva, Qi Li, Ke Zhai, and Weiwei Guo. 2016. Recognizing salient entities in shopping queries. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 107–111. Association for Computational Linguistics. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Yann LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551. Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 2226, 2012, Toronto, Ontario, Canada. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. CoRR, abs/1608.04670. Hoa Nguyen, Ariel Fuxman, Stelios Paparizos, Juliana Freire, and Rakesh Agrawal. 2011. Synthesizing products for online catalogs. PVLDB, 4(7):409– 418. Duangmanee Putthividhya and Junling Hu. 2011. Bootstrapped named entity recognition for product attribute extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1557–1567. Disheng Qiu, Luciano Barbosa, Xin Luna Dong, Yanyan Shen, and Divesh Srivastava. 2015. DEXTER: large-scale discovery and extraction of product specifications on the web. PVLDB, 8(13):2194– 2205. Paulo E. Rauber, Alexandre X. Falc˜ao, and Alexandru C. Telea. 2016. Visualizing time-dependent data using dynamic t-sne. In Eurographics Conference on Visualization, EuroVis 2016, Short Papers, Groningen, The Netherlands, 6-10 June 2016., pages 73–77. Amit P. Sheth, Axel Ngonga, Yin Wang, Elizabeth Chang, Dominik Slezak, Bogdan Franczyk, Rainer Alt, Xiaohui Tao, and Rainer Unland, editors. 2017. Proceedings of the International Conference on Web Intelligence, Leipzig, Germany, August 23-26, 2017. ACM. Damir Vandic, Jan-Willem van Dam, and Flavius Frasincar. 2012. Faceted product search powered by the semantic web. Decision Support Systems, 53(3):425–437. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 1049–1058.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5224–5233 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5224 Incorporating Linguistic Constraints into Keyphrase Generation Jing Zhao and Yuxiang Zhang ∗ Civil Aviation University of China, Tianjin, China [email protected], [email protected] Abstract Keyphrases, that concisely describe the highlevel topics discussed in a document, are very useful for a wide range of natural language processing tasks. Though existing keyphrase generation methods have achieved remarkable performance on this task, they generate many overlapping phrases (including sub-phrases or super-phrases) of keyphrases. In this paper, we propose the parallel Seq2Seq network with the coverage attention to alleviate the overlapping phrase problem. Specifically, we integrate the linguistic constraints of keyphrases into the basic Seq2Seq network on the source side, and employ the multi-task learning framework on the target side. In addition, in order to prevent from generating overlapping phrases with correct syntax, we introduce the coverage vector to keep track of the attention history and to decide whether the parts of source text have been covered by existing generated keyphrases. The experimental results show that our method can outperform the state-of-the-art CopyRNN on scientific datasets, and is also more effective in news domain. 1 Introduction Automatic keyphrase prediction recommends a set of representative phrases that are related to the main topics discussed in a document (Liu et al., 2009). Since keyphrases can provide a high-level topic description of a document, they are beneficial for a wide range of natural language processing tasks such as information extraction (Wan and Xiao, 2008), text summarization (Zhang et al., 2017) and question answering (Tang et al., 2017). However, the performance of existing methods is still far from satisfactory (Hasan and Ng, 2014). The main reason is that it is very challenging to determine whether a phrase or sets of phrases can ∗Corresponding author accurately capture main topics that are presented in the document. Existing approaches for keyphrase prediction can be broadly divided into extraction and generation methods. The conventional extraction methods directly select important consecutive words or phrases from the target document as keyphrases. This means that the extracted keyphrases must appear in the target document. In comparison with extraction methods, the generation methods choose keyphrases from a predefined vocabulary regardless of whether the generated keyphrases appear in the target document. CopyRNN (Meng et al., 2017) is the first to employ the sequence-tosequence (Seq2Seq) framework (Sutskever et al., 2014) to generate keyphrases for documents. This method is able to predict absent keyphrases that do not appear in the target document. Following the CopyRNN, a few extensions of Seq2Seq framework have been proposed to help better generate keyphrases. Through analyzing the results generated by these approaches, we find out that there are many overlapping phrases of correct (author-labeled) keyphrases. For example, in experimental results of CopyRNN, the authorlabeled keyphrases are “Internet” and “Distributed decision” but the predicted are “Internet held” and “Distributed”, respectively. There are two shortcomings that lie in the overlapping phrases. 
First, the correct keyphrase is not generated but its overlapping phrases are predicted as keyphrases. Second, the existing generation approaches often predict the keyphrase and its overlapping phrases as keyphrases. However, the overlapping phrases of keyphrases are not keyphrases in most cases. The more accurate description for this overlapping problem and shortcomings will be given in the next section, including the problem formulation and seriousness found in experimental results of the state-of-the-art CopyRNN. 5225 Sub-problems and formulations No. Seriousness of the problem (top-k, k=10) |Pi|/|Ol| (%) |P n i |/|On l | (%) n = 1 n = 2 n = 3 n ≥4 p /∈Ok pb ∈Ok 1 6.62 0 2.69 21.49 47.15 pu ∈Ok 2 11.10 23.98 3.30 1.71 0.81 p ∈Ok pb ∈Ok Top(p) > Top(pb) 3 5.58 0 5.09 17.44 17.36 Top(p) < Top(pb) 4 7.25 0 4.73 28.89 17.46 pu ∈Ok Top(p) > Top(pu) 5 1.41 0.85 2.39 0.63 0.24 Top(p) < Top(pu) 6 10.77 9.78 14.84 5.53 1.41 Total 42.73 34.61 33.04 75.69 84.43 Table 1: Problem formulation and seriousness in experimental results of CopyRNN. In this paper, we propose a parallel Seq2Seq network (ParaNet) with the coverage attention to alleviate the overlapping phrase problem. Specifically, we exploit two standalone encoders to encode separately the source text and syntactic constraints into network on the source side, and then applies multi-task learning framework to generate the keyphrases and part-of-speech (POS) tags for words in keyphrases on the target side. Most of keyphrases are noun phrases and they commonly consist of nouns and adjectives. The syntactic constraints are helpful to prevent from generating the overlapping phrases of keyphrases that are not noun phrases, e.g., “internet held” (which contains a verb). In addition, in order to prevent from generating overlapping phrases of keyphrases with correct syntax, we introduce the coverage vector (proposed in (Tu et al., 2016)) to keep track of the attention history and to decide whether the parts of source text have been covered by the existing generated keyphrases. The remaining of this paper is organized as follows. In the next section, we analyze the overlapping phrase problem in existing generation methods. We summarize related methods to keyphrase prediction, especially for keyphrase generation in Section 3. The proposed method is presented in Section 4. Finally, we show the experiments and results before concluding the paper. 2 Analysis of the Overlapping Problem In this section, we first formalize the overlapping phrase problem, and then present its seriousness by analyzing statistics obtained from CopyRNN. Let p=wiwi+1...wi+m be a phrase with lengths m+1 over a finite word dictionary D, i.e., wi ∈D. we define the phrase pb = wi+jwi+j+1...wi+j+k (j ≥0, j + k ≤m) as a sub-phrase of p. Conversely, we define the phrase p as a super-phrase of pb and denote the super-phrase of p as pu. Overlapping relations exist between phrase p and its sub/super-phrase pb/pu. Let Ol be a set of authorlabeled keyphrases, and Ok be a set of the generated keyphrases at top-k predictions, in which each generated phrase may be correct or incorrect. We assume that p is an author-labeled keyphrase, i.e., p ∈Ol, and its sub-phrase pb and superphrase pu are not keyphrases, i.e., pb, pu /∈Ol. Let Top(px) be the rank of predicted keyphrase px in Ok. Top(p) > Top(px) means that the rank of Top(p) is higher than Top(px). The overlapping phrase problem can be divided into two main problems according to whether p is generated at the top-k results. 
These two problems are further subdivided into six sub-problems, formulated as shown in Table 1. The formulations No.1-2 shown in Table 1 mean that the authorlabeled keyphrase p is not predicted, and only one of its sub-phrases pb or super-phrases pu is generated. The formulations No.3-6 in Table 1 mean that the author-labeled keyphrase p and one of its sub-phrases pb or super-phrases pu are generated. In addition, Top(p) < Top(pb/pu) is worse than Top(p) > Top(pb/pu). Note that p, pb and pu are rarely generated simultaneously. We next present the seriousness of this problem through analyzing statistics obtained from experimental results of CopyRNN on dataset KP20k. We first calculate the proportion of the keyphrases suffering from the i-th sub-problem in all correct keyphrases, i.e., |Pi|/|Ol|, where Pi is defined as Pi = {p|p ∈Ol ∧p suffers from the i-th subproblem}, |Pi| and |Ol| are respectively the size of Pi and Ol. We select top-k (k = 10) phrases gen5226 erated by CopyRNN as the final predictions. As the results of |Pi|/|Ol| shown in Table 1, a total of 42.73% keyphrases suffer from this problem. In addition, we calculate the proportion of the keyphrases with the length n which suffer from the i-th sub-problem in all correct keyphrases with the same length, i.e., |P n i |/|On l |, where P n i and On l are the subsets of Pi and Ol, respectively, in which the length of each keyphrase is n (i.e., keyphrase is n-gram). Table 1 also shows the seriousness of the sub-problems of overlapping phrase problem with varying n of n-grams. As the results show, we can observe that the long keyphrases can easily suffer from the sub-phrase problem (i.e., pb ∈Ok) and the short keyphrases can easily suffer from the super-phrase problem (i.e., pu ∈Ok in Table 1). Although the overlapping problem restricts the performance of existing methods, it also gives us an opportunity to help better generate keyphrases as the overlapping phrases are often very close to the correct keyphrases. 3 Related Works As mentioned in Section 1, existing approaches for keyphrase prediction can be broadly divided into extraction and generation methods. The extraction methods can be further classified into supervised and unsupervised approaches. The supervised approaches treat keyphrase extraction as a binary classification task, in which a learning model is trained on the features of labeled keyphrases to determine whether a candidate phrase is a keyphrase (Witten et al., 1999; Medelyan et al., 2009; Gollapalli et al., 2017). In contrast, the unsupervised approaches directly treat keyphrase extraction as a ranking problem, scoring each candidate using different kinds of techniques such as clustering (Liu et al., 2009), or graph-based ranking (Mihalcea and Tarau, 2004; Wan and Xiao, 2008). This work is mainly related to keyphrase generation approaches which have been proven to be effective in the keyphrase prediction task. Following CopyRNN (Meng et al., 2017) which is the first to generate absent keyphrases using Seq2Seq framework, the few extensions have been proposed to help better generate keyphrases. In CopyRNN, model training heavily relies on massive amounts of labeled data, which is often unavailable especially for the new domains. To solve this problem, Ye and Wang (2018) proposed a semi-supervised keyphrase generation model by leveraging both abundant unlabeled data and limited labeled data. CopyRNN does not model the one-to-many relationship between the document and keyphrases. 
Therefore, keyphrase generation only depends on the source document and ignores constraints on the correlation among keyphrases. To overcome this drawback, Chen et al. (2018) proposed a Seq2Seq network with correlation constraints for keyphrase generation. Chen et al. (2019) proposed a title-guided Seq2Seq network to use title of source text to improve performance. However, these methods did not consider the linguistic constraints of keyphrases. 4 Methodology 4.1 Problem Definition Given a text dataset D={xi, pi}N i=1, where xi is a source document, pi = {pi,j}Mi j=1 is the keyphrase set of xi, and N is the number of documents. Both the document xi and keyphrase pi,j are sequences of words, denoted as xi = (x(i) 1 , x(i) 2 , ..., x(i) Li) and pi,j = (y(i,j) 1 , y(i,j) 2 , ..., y(i,j) Lij ), where Li and Lij are the length of word sequence of xi and pi,j. The goal of a keyphrase generation is to design a model to map each document x into the keyphrase set p. 4.2 Model Overview Figure 1 illustrates the overview of the proposed method. The method consists of two components, which are the parallel encoders and decoders. The parallel encoders consist of the word encoder and syntactic information encoder, which are used to compress the source text and its syntactic information into the hidden vectors. The parallel decoders contain the keyphrase decoder and POS tag decoder, which are different decoders and used to generate the keyphrases and POS tags of words in keyphrases. During the training process, these two tasks boost each other providing strong representation for source text. In addition, we employ the coverage attention to alleviate generating the overlapping phrases of keyphrases. 4.3 Basic Seq2Seq Model Our approach is based on a Seq2Seq framework which consists of an encoder and a decoder. Both the encoder and decoder are implemented with recurrent neural networks (RNN). The encoder converts the variable-length source word sequence 5227 Figure 1: The overview of the proposed approach. x = (x1, x2, ..., xL) into a set of hidden representation vector {hi}L i=1, by iterating the following equation: hi = fe(xi, hi−1) (1) where where fe is a non-linear function in encoder. The decoder decompresses the context vector and generate the variable-length target keyphrase y = (y1, y2, ..., yL′) word by word, through the conditional language model: p(yi|y1,...,i−1, x) = g(yi−1, si, ci) (2) where g is a softmax function, and si is a decoder hidden vector calculated as: si =fd(yi−1, si−1, ci) (3) where fd is a non-linear function in decoder. ci is a context vector, calculated as a weight sum over source hidden vector h: ci = PL j=1 αi,jhj (4) αi,j = exp(a(si−1,hj)) PL k=1 exp(a(si−1,hk)) (5) where a(si−1, hj) is an alignment function that measures the similarity between si−1 and hj. Pure generation mode can not predict keyphrase which consists of out-of-vocabulary words. Thus, Meng et al. (2017) first introduced a copy mechanism (Gu et al., 2016) to predict out-of-vocabulary by directly copying words from source text. Consequently, the probability of generating a target word yi (i.e., Equ. 2) is modified as: p(yi|y<i, x) = pg(yi|y<i, x) + pc(yi|y<i, x) (6) where y<i represents y1,...,i−1 and pc is the probability of copying, calculated as: pc(yi|y<i, x) = 1 Z X j:xj=yi exp(φ(xj)), yi ∈X φ(xj) = σ(h⊤ j Wc)si (7) where σ is a non-linear function, X is the set of unique words in source text x, Wc is a learned parameter matrix and Z is the sum for normalization. 
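A simplified reading of the copy mechanism in Equations (6)-(7) is sketched below: generation and copy scores are normalised jointly, and the copy mass of each source position is added onto the probability of the word it contains. Variable names are illustrative and out-of-vocabulary id handling is omitted.

```python
import torch
import torch.nn.functional as F

def copy_augmented_distribution(gen_scores, copy_scores, src_ids, vocab_size):
    """Combine generate and copy modes into one word distribution.

    gen_scores:  (B, V) decoder scores over the fixed vocabulary
    copy_scores: (B, L) scores phi(x_j) for copying each source position
    src_ids:     (B, L) vocabulary ids (int64) of the source tokens
    """
    # joint normalisation over [generate scores ; copy scores]
    probs = F.softmax(torch.cat([gen_scores, copy_scores], dim=-1), dim=-1)
    p_gen, p_copy = probs[:, :vocab_size], probs[:, vocab_size:]
    # scatter the copy probability of every source position onto its word id
    return p_gen.scatter_add(-1, src_ids, p_copy)
```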
4.4 Parallel Seq2Seq Model Most of keyphrases are noun phrases which commonly consist of nouns and adjectives (Gollapalli and Caragea, 2014). Hence, the syntactic information is useful for improving keyphrase generation performance. Although conventional generation model is capable of implicitly learning the syntactic information from source text, it can not capture a lot of deep syntactic structural details (Shi et al., 2016). To overcome this shortcoming, we propose a parallel Seq2Seq model which deeply integrates the following additional syntactic information into the basic Seq2Seq model: • POS tag: Keyphrases are commonly noun phrases with a specified part-of-speech (POS) patterns (Hulth, 2003). In supervised approaches for keyphrase extraction, POS tags assigned to words have been chosen as one type of important syntactic features, used to train the classifier (Hasan and Ng, 2014; Gollapalli et al., 2017). We incorporate the POS tags into Seq2Seq network to capture the syntactic combinations of keyphrases. 5228 Sentence: The framework is useful for deciding the parameter estimation in probabilistic retrieval models POS tags: DT NN VBZ JJ IN VBG DT NN NN IN JJ NN NNS Phrase tags: NP NP VP ADJP PP VP NP NP NP PP NP NP NP Table 2: An example of word sequence with both POS and phrase tags. • Phrase tag: Phrase tags assigned to words are also one type of important syntactic features in supervised extraction approaches, since the words in keyphrase commonly share the same phrase tags (Gollapalli et al., 2017). Therefore, we integrate the phrase tags into Seq2Seq network to capture the inherent syntactic structure of keyphrases. We use Stanford Parser1 (Finkel et al., 2005) to obtain the 32 POS tags and 16 phrase tags of words. An example is shown in Table 2 with both POS and phrase tags, and the author-labeled keyphrase is highlighted in bold. 4.4.1 Parallel Encoders The proposed model encodes word sequence and tag sequences (including POS and phrase tags) in parallel. We use the RNN encoder to produce the set of word hidden vector {hw} from the source document x, and produce the set of syntactic tag hidden vector {ht} from the POS and phrase tags. We create the look-up based embedding matrices for word, POS tag and phrase tag, and concatenate the embeddings of POS tag and phrase tag into a long vector as input of the tag encoder. We employ two methods to combine the word and syntactic tag hidden vectors into a unified hidden vector h. The first method is inspired by the Tree-LSTM (Tai et al., 2015), which can selectively incorporate the information from each child node. The cell and hidden vectors are calculated by following transition equations: ii = σ(Wi whw i + Wi tht i) (8) fw i = σ(Wfw w hw i + Wfw t ht i) (9) ft i = σ(Wft w hw i + Wft t ht i) (10) oi = σ(Wo whw i + Wo t ht i) (11) ui = tanh(Wu whw i + Wu t ht i) (12) ci = ii ⊙ui + fw i ⊙cw i + ft i ⊙ct i (13) hi = oi ⊙tanh(ci) (14) where cw i and ct i are the cell vectors of word and tag, hw i and ht i are the hidden vectors of word and 1https://nlp.stanford.edu/software/lex-parser.shtml tag, and σ is the sigmoid function. Each of ii, fw i , ft i , oi and ui denotes an input gate, a forget gate of word, a forget gate of syntactic tag, an output gate, and a vector for updating the memory cell, respectively. More details are given in (Tai et al., 2015). The second method is the line transformation followed by the hyperbolic tangent function: hi = tanh(Wl whw i + Wl tht i). 
(15) 4.4.2 Parallel Decoders The proposed method consists of two parallel decoders: keyphrase decoder and POS tag decoder. The keyphrase decoder is used to generate a set of keyphrases for documents. Although the keyphrase decoder also can learn syntactic structures of keyphrases to some extent, it fails to capture deep syntactic details. In order to supervise the syntactic combinations of keyphrase, the POS tag decoder is employed to generate a series of POS tags of words in keyphrases. Note that the POS tag decoder in our model serves as a trainingassisted role and is not used in the testing. The probability of predicting each POS tag of word is given as follows: p(ti|t<i, x) = gt(ti−1, st i, ci) (16) where gt is a softmax function, st i is a hidden vector of POS tag decoder. 4.5 Coverage Attention Repetition is a common problem for the Seq2Seq models and is especially serious when generating text sequence, such as machine translation (Tu et al., 2016) and automatic text summarization (See et al., 2017). The reason for this is that the traditional attention mechanisms focus on calculating the attention weight of the current time step, ignoring the distribution of weights in history. There can be no doubt that existing Seq2Seq models for keyphrase generation also suffer from this problem, i.e., generating sub-phrases or superphrases of keyphrases. We employ the coverage 5229 Dataset #PKPs #AKPs #Abs #1-grams #2-grams #3-grams #4-grams #>4-grams Inspec 3,564 1,349 500 510/100 1,743/548 910/399 275/180 126/122 Krapivin 1,299 1,040 400 256/101 700/631 254/233 74/55 15/20 NUS 1,333 1,128 211 434/167 632/576 204/234 53/88 10/63 SemEval 625 841 100 162/107 309/398 113/204 28/60 13/72 KP20k 66,468 39,055 20,000 26,249/6,076 26,755/19,883 10,486/9,196 2,312/2,708 666/1,192 Table 3: Summary of Datasets. model, used in works (Tu et al., 2016; See et al., 2017), to alleviate this problem. In the coverage model, we maintain a coverage vector co to help adjust the future attention through keeping track of the attention history, calculated as: coi,j = coi−1,j + αi,j (17) where the coverage vector coi,j is used to measure the attention coverage degree of word xj at step i. More details are shown in (Tu et al., 2016; See et al., 2017). Finally, we integrate coverage vector the attention mechanism, by modifying the alignment function in Equation (5) as: a(si−1, hj, coi−1,j) = v⊤ c tanh(Wssi−1 + Whhj + Wcocoi−1,j) (18) where vc, Ws, Wh, and Wco are the learnable weight parameters. 4.6 Overall Loss Function Given the set of data pairs {xi, yi}N i=1, where x is the word sequence of the source text, y is the word sequence of its keyphrase, and y is the word of keyphrase y. The loss function consists of two parts. The first is the negative log-likelihood of the target words in keyphrase, calculated as: Lw(θ) = − N X i=1 Li X k=1 log(p(yi k|yi <k, xi; θw)) (19) where Li is the length of keyphrase y, and θw is the parameter of this task. The second loss function is the negative loglikelihood of the POS tags of words in keyphrases, calculated as follows: Lt(θ) = − N X i=1 Li X k=1 log(p(ti k|ti <k, xi; θt)) (20) where t is the POS tag, and θt are the parameter. The final goal is to jointly minimize the two losses with Adam optimizer (Kingma and Ba, 2015): L = (1 −λ)Lw + λLt (21) where λ is a hyper-parameter to tune the impacts of the two tasks. 5 Experiment 5.1 Datasets We use the dataset collected by Meng et al. (2017) from various online digital libraries, which contains about 568K articles2. 
Following Meng et al. (2017), we use about 530K articles for training the model, 20k articles for validating the model, and 20k articles (i.e., KP20k) for testing the model. Similar to Meng et al. (2017), we also test the model on four widely used public datasets from the computer science domain: Inspec (Hulth and Megyesi, 2006), Krapivin (Krapivin et al., 2009), NUS (Nguyen and Kan, 2007), and SemEval-2010 (Kim et al., 2010). The datasets are summarized in Table 3 along with the number of present keyphrase (#PKPs), the number of absent keyphrase (#AKPs), the number of articles (#Abs.), the number of present/absent 1grams, 2-grams, 3-grams, 4-grams and more than 4-grams (#>4-grams), in each collection. 5.2 Experimental Settings In the training dataset, input text is the concatenation of the title and abstract of the scientific articles. Following the work (Meng et al., 2017), all numbers in text are mapped to a special token <digit>. The syntactic tags include 32 POS tags and 16 phrase tags. The size of word vocabulary is set to 50,000, the size of word embeddings is set to 150, and the size of embeddings of two syntactic tags is set to 50. All embeddings are randomly initialized with uniform distribution in [-0.1,0.1], and 2https://github.com/memray/seq2seq-keyphrase 5230 Method Inspec Krapivin NUS SemEval KP20k F1@5 F1@10 F1@5 F1@10 F1@5 F1@10 F1@5 F1@10 F1@5 F1@10 BL∗ 0.223 0.313 0.249 0.216 0.249 0.268 0.176 0.194 0.270 0.230 CopyRNN 0.278 0.342 0.311 0.266 0.334 0.326 0.293 0.304 0.333 0.262 ConNet 0.265 0.321 0.309 0.256 0.336 0.329 0.294 0.302 0.325 0.257 ParaNetL 0.289 0.353 0.326 0.277 0.354 0.342 0.307 0.303 0.351 0.282 ParaNetT 0.292 0.355 0.327 0.281 0.360 0.349 0.313 0.309 0.357 0.287 ParaNetL+CoAtt 0.292 0.354 0.330 0.279 0.357 0.342 0.308 0.306 0.355 0.283 ParaNetT +CoAtt 0.296 0.357 0.329 0.282 0.360 0.350 0.311 0.312 0.360 0.289 Table 4: Comparisons of predicting present keyphrases on five scientific datasets. learned during training. The size of hidden vector is fixed at 300. The weight parameter used to tune the impacts of the two tasks is set to λ=0.3. The initial learning rate of Adam optimizer is set to 10−4, and the dropout rate is set to 0.5. We use the beam search to generate multiple phrases. The max depth of beam search is set to 6, and the beam size is set to 200. 5.3 Comparative Methods We compare our method with extraction and generation approaches. Extraction methods consist of three unsupervised and two supervised methods. Unsupervised extraction methods include TF-IDF, TextRank (Mihalcea and Tarau, 2004) and SingleRank (Wan and Xiao, 2008). Supervised extraction methods include Maui (Medelyan et al., 2009) and KEA (Witten et al., 1999). To clearly represent the experimental results, we select the best-performing method (BL∗) from these extraction baselines with best-performing parameters for each dataset to compare with our method. The generation baselines are state-of-the-art CopyRNN (Meng et al., 2017) and ConNet, which inputs the concatenation of word embeddings and two syntactic tag embeddings into CopyRNN. The proposed method includes four models: (1) ParaNetL, using the hyperbolic tangent function (i.e., Equ. 15) to combine two hidden vectors of words and syntactic tag generated by encoder; (2) ParaNetT , using the tree-LSTM to combine two hidden vectors; (3) ParaNetL+CoAtt, ParaNetL with the coverage attention; (4) ParaNetT +CoAtt, ParaNetT with the coverage attention. 
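The +CoAtt variants above differ from ParaNetL and ParaNetT only in the coverage attention of Section 4.5. One way Equations (17)-(18) can be realised is sketched below; the weight shapes and names are illustrative assumptions rather than the authors' code.

```python
import torch

def coverage_attention(s_prev, H, coverage, W_s, W_h, W_co, v_c):
    """Coverage-aware attention step.

    s_prev:   (B, d)     previous decoder state s_{i-1}
    H:        (B, L, d)  encoder hidden states h_j
    coverage: (B, L)     accumulated past attention weights co_{i-1, .}
    W_s, W_h: (d, k)     projection matrices; W_co, v_c: (k,) vectors
    """
    # e_ij = v_c^T tanh(W_s s_{i-1} + W_h h_j + W_co co_{i-1,j})   (Eq. 18)
    feats = (s_prev @ W_s).unsqueeze(1) + H @ W_h + coverage.unsqueeze(-1) * W_co
    scores = torch.tanh(feats) @ v_c                 # (B, L)
    alpha = torch.softmax(scores, dim=-1)            # attention weights
    new_coverage = coverage + alpha                  # co_i = co_{i-1} + alpha_i  (Eq. 17)
    context = (alpha.unsqueeze(-1) * H).sum(dim=1)   # context vector c_i
    return context, alpha, new_coverage
```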
5.4 Evaluation Metrics Almost all previous works on keyphrase prediction use precision (P), recall (R), F1-score (F1) to evaluate the results (Manning et al., 2010). P = #c #p , R = #c #l , F1 = 2PR P +R, (22) where #c is the number of correctly predicted keyphrases, #p is the total number of predicted keyphrases, and #l is the total number of author-labeled standard keyphrases. Following the study (Meng et al., 2017), we employ top-N macro-averaged F1-score (F1) for evaluating present keyphrases and recall (R) for evaluating absent keyphrases. We use Porter’s stemmer3 to remove words’ suffix before determining the match of two keyphrases. 5.5 Results and Analysis 5.5.1 Prediction of Present Keyphrases The experimental results are shown in Table 4, in which the F1 at top-5 and top-10 predictions are given and the best scores are highlighted in bold. We compare our method with the best-performing extractive method (BL∗), which can only extract the keyphrases that appear in the source text (i.e., present keyphrases). We first compare our proposed method with the conventional keyphrases extraction methods. The results show that even the worst one in our models (i.e., ParaNetL) has a large margin over the bestperforming extraction method (BL∗) on all of the test datasets. Secondly, we further compare our method with CopyRNN, and the results indicate that our worst ParaNetL still achieves better performance than CopyRNN. Note that ConNet does not perform as well as we expect, and is slightly worse than CopyRNN on most datasets. The main reason for this may be that directly concatenating embeddings of two syntactic tags and words introduces much noise into the encoder, such as POS tag of verb. Finally, we compare our different models. From the results shown in Table 4, we can observe that ParaNetT is more effective than ParaNetL. This means that, in combining the word and syntactic 3https://tartarus.org/martin/PorterStemmer/ 5231 Method Inspec Krapivin NUS SemEval KP20k R @ 10 R @50 R @10 R @50 R @10 R @50 R @10 R @50 R @10 R @50 CopyRNN 0.047 0.098 0.113 0.202 0.058 0.116 0.043 0.066 0.125 0.211 ConNet 0.041 0.083 0.094 0.184 0.059 0.117 0.041 0.057 0.119 0.203 ParaNetL 0.047 0.097 0.121 0.208 0.063 0.119 0.043 0.068 0.133 0.224 ParaNetT 0.054 0.098 0.127 0.214 0.069 0.127 0.044 0.069 0.136 0.228 ParaNetL+CoAtt 0.053 0.099 0.125 0.206 0.065 0.123 0.042 0.069 0.134 0.226 ParaNetT +CoAtt 0.060 0.103 0.125 0.214 0.068 0.125 0.044 0.071 0.137 0.228 Table 5: Comparisons of predicting absent keyphrases on five scientific datasets. No. |Pi|/|Ol| (%) |P n i |/|On l | (%) n = 1 n = 2 n = 3 n ≥4 1 5.05-1.57 0 1.93-0.76 16.52-4.97 37.33-9.82 2 9.87-1.23 21.57-2.41 2.77-0.53 1.50-0.21 0.78-0.03 3 4.90-0.68 0 4.47-0.62 14.99-2.45 16.55-0.81 4 5.82-1.43 0 4.23-0.50 21.42-7.47 16.62-0.84 5 1.37-0.04 0.82-0.03 2.33-0.06 0.62-0.01 0.24-0 6 9.82-0.95 8.80-0.98 13.52-1.32 5.33-0.20 1.44+0.03 Total 36.83 -5.90 31.19-3.42 29.25-3.79 60.38-15.31 72.96-11.47 Table 6: Comparisons of seriousness of the overlapping phrase problem between ParaNetT +CoAtt and CopyRNN. tag hidden vectors form encoders, the tree-LSTM model performs better than the hyperbolic tangent function. The reason for this may be that the multiple gating functions in tree-LSTM help ParaNetT to select the useful information from each encoder. In addition, we can observe that coverage attention mechanism can help to gain better performance in generating present keyphrases. 
Among our proposed models, ParaNetT +CoAtt achieves the best performance on almost all test datasets. 5.5.2 Prediction of Absent Keyphrase As mentioned in the work (Meng et al., 2017), the Seq2Seq models can predict absent keyphrases. Therefore, we only compare our method with CopyRNN and ConNet, and evaluate the performance within the recall of the top-10 and top-50 results to see how many absent keyphrases can be correctly predicted. The results are shown in Table 5. As the results show, our worst model (ParaNetL) can correctly predict more absent keyphrases than CopyRNN. The main reason for this may be that the syntactic tags provide more useful information for identifying a part of absent keyphrases which have special syntactic structures. In addition, we note that ConNet is still slightly worse than CopyRNN in predicting absent keyphrases. Finally, we compare our four different models for generating absent keyphrases. From the results shown in Table 5, we can observe that ParaNetT can correctly predict more absent keyphrases than ParaNetL on all test datasets. As the results in the present keyphrase generation, the tree-LSTM model still performs better than the hyperbolic tangent function in the absent keyphrase generation. In addition, we can observe that coverage attention mechanism can help to correctly predict more absent keyphrases. The reason for this may be that the coverage vector can capture long-distance dependencies. This will help to generate the absent keyphrases which are the non-contiguous subsequences of source text. Among our proposed models, ParaNetT +CoAtt perform better than the other three models on most test datasets. 5.5.3 Reduction of Overlapping Phrases As mentioned in the Section 1, the important motivation for this work is to alleviate generating the overlapping phrases of keyphrases. Table 6 shows the same statistics as Table 1, compared between the best performing model ParaNetT +CoAtt and CopyRNN. From the results, we observe that, compared with CopyRNN, ParaNetT +CoAtt can significantly alleviate the overlapping phrase problem, especially for the sub-phrase problems No.1, No.3 and No.4. For example, the proportion of the keyphrases suffering from the overlapping problem in all keyphrases has dropped from 42.73% to 36.83%. In addition, we investigate the proportion 5232 Method F1@10 Method F1@10 TF-IDF 0.270 ParaNetL 0.186 TextRank 0.097 ParaNetT 0.188 SingleRank 0.256 ParaNetL+CoAtt 0.187 CopyRNN 0.164 ParaNetT +CoAtt 0.191 Table 7: Comparisons of different methods on DUC. of the keyphrases with the length n which suffer from the i-th sub-problem in all keyphrases with the same length, i.e., |P n i |/|On l |. We observe that this proportion of 3-grams (n = 3) reduces most significantly by up to 15.31%. In addition to the reduction of the overlapping phrases on KP20k dataset, compared with CopyRNN, ParaNetT +CoAtt can highly rank the correctly predicted keyphrases and rank lowly the overlapping phrases of keyphrases. For example, in the sub-problem No.3, ParaNetT +CoAtt can increase the average ranking of correctly predicted keyphrases from 6.50 to 5.95 at top-10 predictions, and decrease the average ranking of sub-phrases of keyphrases from 2.08 to 2.41. 5.5.4 Cross-Domain Testing CopyRNN and ParaNet are supervised methods, and are trained on a large-scale dataset in specific scientific domain. Similar to the work (Meng et al., 2017), we expect that our supervised method can learn universal language features that are also effective in other corpora. 
We thus test our method on new type of text, to see whether the method will work when being transferred to a different domain. We use the popular news article dataset: DUC-2001 (Wan and Xiao, 2008) for our experiments, which consists of 308 news articles and 2,488 manually labeled keyphrases. The results are shown in Table 7. From these results, we can observe that our models generate a certain number of keyphrases in the new domain,. Though the best ParaNetT +CoAtt falls behind the unsupervised algorithms TF-IDF and SingleRank, the worst ParaNetL significantly outperforms the TextRank and CopyRNN. In addition, we note that the overlapping phrase problem also exists in DUC dataset. In the experiment, ParaNetT +CoAtt can reduce the total proportion of keyphrases suffering from the overlapping phrase problem from 21.96% to 19.13%. 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.16 0.18 0.20 0.22 0.24 0.26 0.28 0.30 0.32 0.34 0.36 F1@10 Inspec KP20k Krapiv in NUS SemEv al DUC Figure 2: The influence of the weight λ (F1@10). 5.5.5 Influence of Weight Parameter In this work, we propose the multi-task Seq2Seq network for keyphrase generation, which jointly learns the dominant task of predicting keyphrases and the auxiliary task of predicting POS tags of keyphrases. We employ the weight parameter λ (in Equ. 21) to tune the impacts of the two tasks. We conduct the experiment to illustrate the influence of the weight parameter λ in ParaNetL, which does not use the coverage attention. The results are shown in Figure 2, in which the F1 at top-10 predictions are given on six datasets. We observe that the performance of ParaNetL is influenced by changes on the parameter λ. In general, the performance slowly increases and then slowly decreases on six datasets as λ grows. The bestperforming settings are λ = 0.5 on news dataset DUC and λ = 0.3 on other five scientific datasets, which are finally used to balance two prediction tasks in the comparison experiments. 6 Conclusion In this study, we propose the parallel Seq2Seq network with the coverage attention to alleviate the overlapping problem (including sub-phrase and super-phrase problems) in existing keyphrase generation methods. In particular, we incorporate the linguistic constraints of keyphrases into the basic Seq2Seq network, and employ multi-task learning framework to enhance generation performance. The experimental results show that the proposed method can significantly outperform the state-ofthe-art CopyRNN on scientific datasets, and is also effective in news domain. References Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018. Keyphrase generation with correlation constraints. In Proceedings of EMNLP, pages 4057–4066. 5233 Wang Chen, Yifan Gao, Jiani Zhang, Irwin King, and Michael R Lyu. 2019. Title-guided encoding for keyphrase generation. In Proceedings of AAAI. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of ACL, pages 363–370. Sujatha Das Gollapalli and Cornelia Caragea. 2014. Extracting keyphrases from research papers using citation networks. In Proceedings of AAAI, pages 1629–1635. Sujatha Das Gollapalli, Xiao-Li Li, and Peng Yang. 2017. Incorporating expert knowledge into keyphrase extraction. In Proceedings of AAAI, pages 3180–3187. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL, pages 1631–1640. 
Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art. In Proceedings of ACL, pages 1262–1273. Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of EMNLP, pages 216–223. Anette Hulth and Be´ata B Megyesi. 2006. A study on automatically extracted keywords in text categorization. In Proceedings of ACL, pages 537–544. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Semeval-2010 task 5 : Automatic keyphrase extraction from scientific articles. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR, pages 1–13. Mikalai Krapivin, Aliaksandr Autaeu, and Maurizio Marchese. 2009. Large dataset for keyphrases extraction. Technical report, University of Trento. Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of EMNLP, pages 257–266. Christopher Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2010. Introduction to information retrieval. Natural Language Engineering, 16(1):100– 103. Olena Medelyan, Eibe Frank, and Ian H Witten. 2009. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of EMNLP, pages 1318–1327. Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of ACL, pages 582–592. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of EMNLP, pages 1318–1327. Thuy Dung Nguyen and Min-Yen Kan. 2007. Keyphrase extraction in scientific publications. In International conference on Asian digital libraries, pages 317–326. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of ACL, pages 1073–1083. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Proceedings of EMNLP, pages 1526–1534. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL, pages 1556–1566. Yixuan Tang, Weilong Huang, Qi Liu, and Beibei Zhang. 2017. Qalink: Enriching text documents with relevant Q&A site contents. In Proceedings of CIKM, pages 3159–3168. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL, pages 76–85. Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of AAAI, pages 855– 860. Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevillmanning. 1999. Kea: Practical automatic keyphrase extraction. In Proceedings of Acm Conference on Digital Libraries, pages 254–255. Hai Ye and Lu Wang. 2018. Semi-supervised learning for neural keyphrase generation. In Proceedings of EMNLP, pages 4142–4153. Yuxiang Zhang, Yaocheng Chang, Xiaoqing Liu, Sujatha Das Gollapalli, Xiaoli Li, and Chunjing Xiao. 2017. Mike: keyphrase extraction by integrating multidimensional information. In Proceedings of CIKM, pages 1349–1358.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5234–5245 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5234 A Unified Multi-task Adversarial Learning Framework for Pharmacovigilance Mining Shweta Yadav, Asif Ekbal, Sriparna Saha, Pushpak Bhattacharyya Indian Institute of Technology Patna Patna, India {shweta.pcs14,asif,sriparna,pb}@iitp.ac.in Abstract The mining of adverse drug reaction (ADR) has a crucial role in the pharmacovigilance. The traditional ways of identifying ADR are reliable but time-consuming, non-scalable and offer a very limited amount of ADR relevant information. With the unprecedented growth of information sources in the forms of social media texts (Twitter, Blogs, Reviews etc.), biomedical literature, and Electronic Medical Records (EMR), it has become crucial to extract the most pertinent ADR related information from these free-form texts. In this paper, we propose a neural network inspired multitask learning framework that can simultaneously extract ADRs from various sources. We adopt a novel adversarial learning-based approach to learn features across multiple ADR information sources. Unlike the other existing techniques, our approach is capable to extracting fine-grained information (such as ‘Indications’, ‘Symptoms’, ‘Finding’, ‘Disease’, ‘Drug’) which provide important cues in pharmacovigilance. We evaluate our proposed approach on three publicly available realworld benchmark pharmacovigilance datasets, a Twitter dataset from PSB 2016 Social Media Shared Task, CADEC corpus and Medline ADR corpus. Experiments show that our unified framework achieves state-of-the-art performance on individual tasks associated with the different benchmark datasets. This establishes the fact that our proposed approach is generic, which enables it to achieve high performance on the diverse datasets. The source code is available here1. 1 Introduction Early detection and monitoring of adverse drug reactions (ADRs) can minimize the deleterious impact on patients and health-care systems (Hakkarainen et al., 2012; Sultana et al., 2013). 1https://bit.ly/2EMln36 For prevention, the drug safety organizations known as pharmacovigilance agencies conduct post-market surveillance to identify the drug’s side effects post-release. However, the majority of the existing ADE surveillance systems utilizes passive spontaneous reporting system databases, such as the Federal Drug Administration’s Adverse Event Reporting System (FAERS) (Li et al., 2014). These systems are often under-reported, biased and delayed. To overcome the limitation of a passive reporting system, active methods to ADR monitoring continuously explores frequently updated ADR data sources (Behrman et al., 2011). The quantity and near-instantaneous nature of social media provide potential opportunities for real-time monitoring of Adverse Drug Reaction (ADR). The fact that this data is up-to-date and is generated by patients overcomes the weaknesses of traditional ADR surveillance techniques (Leaman et al., 2010). Thus, social media could complement traditional information sources for more effective pharmacovigilance studies, as well as potentially serve as an early warning system for unknown ADR, which may be important for a clinical decision. 
Additionally, the high statistically significant correlation (p < 0.001, ρ = 0.75) between FAERS and ADRs (extracted through Twitter data) shows that Twitter is a viable pharmacovigilance data source (Freifeld et al., 2014). With the enormous amount of data generated every day, it is desirable to have an automated ADR extraction system that can ease the work of domain experts to quickly investigate the vast amount of unstructured text and identify emerging trends. This may correspond to mapping previously undiscovered adverse effect with a given drug, or discovering an unforeseen impact to a change in the manufacturing process. However, extracting this information from the unstructured text poses several challenges as follows: 5235 Figure 1: Sample sentences from CADEC (Text1), MEDLINE (Text 2) and Twitter (Text 3,4) dataset. The token in red represents ADR, purple denotes Finding, blue represent Drug name and brown colour text represents Indication. • Multiple Context: Context carries an essential role in determining the semantic labels of the medical concepts. For example, consider the following tweets: Tweet 1: “Advil cured my horrific pain, but made my stomach upset” Tweet 2: “Advil cured my upset stomach but gave me a horrific pain” The above tweets, although have a similar medical concept, their contexts specify the associated class types. In Tweet 1, ‘pain’ refers to the class type Symptom, while in Tweet 2, it refers to ADR. • Multiple word form: Social media text offers some inherently distinct challenges such as containing short word-forms ( eg,“need to sleep 24/7”), misspelled wordforms (eg, “fluoxetine, it just make me so tiered ’), abbreviated words (eg, CIT for Citopram), slangs (eg, “seroquel knocked me out”), implicit sense (eg, “hard time getting some Z’s”), symbols (such as emoticons), and figurative languages (eg, “quetiapine zombie”). This arbitrariness increases the difficulty level in capturing the semantic relationships between the different types. To overcome these limitations, several machine learning and deep learning models are introduced for ADR mining. However, these models are very task-specific and often fail to show reasonable accuracies when these evaluated for some other domains or other annotation schemes. In this paper, we propose a unified multi-task learning (MTL) framework that works on the concept of adversarial learning. Our model is capable of learning several tasks associated with ADR monitoring with different levels of supervisions collectively. The proposed approach differs from the previous studies in two aspects: Firstly, most of the existing methods in multi-task learning attempt to divide the features of different tasks based on task-specific and task-invariant feature space, considering only component-wise parameters. The major drawback of this mechanism is that the common feature space often incorporates the task-specific feature space, leading to feature redundancy. Given this issue in multitask learning (MTL), in our proposed framework we employ adversarial learning (Goodfellow et al., 2014), which helps in eliminating redundant features from the feature space and prevent the contamination between shared and task-specific features. Secondly, we also employ the highway and residual connection whenever necessary to avoid the vanishing gradient problem and improve the performance of our deep neural model (multiheaded attention based stacked recurrent and convolutional neural network). 
Contributions: Contributions of our current work can be summarized as follows: (1) We propose a unified multi-task learning (MTL) framework for pharmacovigilance mining that exploits the capabilities of adversarial learning to learn the shared complementary features across the multiple ADR datasets. To our best knowledge, this is the very first attempt to study the effect of adversarial learning method in MTL environment, especially for pharmacovigilance mining. (2) Our proposed model is capable of automatically identifying the various information (such as Symptom, Finding, Disease, Drug), in addition to the ADR. (3) We validate our proposed framework on three popular benchmark datasets, namely Twitter (Sarker et al., 2016), CADEC (Karimi et al., 2015) and MEDLINE (Gurulingappa et al., 2012a) for pharmacovigilance mining, having different annotation schemes. We extract the following tags: ADR, Drugs, and Indications from the Twitter dataset, ADR, Disease, Drug, Finding; and Symptom from the CADEC dataset; and Drug and ADR mentions from the MEDLINE dataset. Figure-1 shows exemplary sentences from each dataset. (4) Our unified multi-task model achieves the state-of-the-art performance in the ADR labeling and outperforms the strong baseline models for all the other pharmacovigilance labels. 5236 Figure 2: Proposed model architecture for pharmacovigilance mining. (all the neurons representation are hypothetical). The right part of the image describes the Component 1 and Component 2. 2 Related Work Depending upon the source of data, we categorize the previous works as: (i) Biomedical Text and Electronic Medical Record: Several Natural Language Processing (NLP) techniques have been proposed to extract ADRs from the Electronic Medical Record (Wang et al., 2009; Friedman, 2009; Aramaki et al., 2010) and medical case reports (Gurulingappa et al., 2011). Gurulingappa et al. (2012a) adapted machine learning technique for the identification and extraction of potential adverse drug event relations from the MEDLINE case reports. Unlike other spontaneous data sources such as social media, both EMR and medical case reports offer several advantages of having complete records of patients’ medical history, treatment, conditions and the possible risk factors, and is also not restricted to the patients experiencing ADRs (Harpaz et al., 2012b). Recently, a study conducted by (Sarker and Gonzalez, 2015) utilized the data from MEDLINE case reports and Twitter. They proposed several textual features and investigated how the combination of different datasets would increase the performance of identifying ADRs. With the advancement of the neural network technique, (Huynh et al., 2016) investigated multiple neural network (NN) frameworks for ADR classification on both medical case reports and Twitter dataset. (ii) Social Media: Social media offers a very rich and viable source of information for identifying potential ADRs in a real-time. Leaman et al. (2010) conducted very first study utilizing user comments from their social media post. In total, the dataset contains 6, 890 user comments. The research shows that user comments are highly beneficial in uncovering the ADRs. Further works (Gurulingappa et al., 2012b; Benton et al., 2011; Harpaz et al., 2012a) utilized the lexicon-based approach to extract the ADRs. However, these approaches are only restricted to a number of target ADRs. 
Nikfarjam and Gonzalez (2011) exploited a rule-based technique over the naive lexicon-based approach on the same dataset, which was capable of detecting ADRs not included in lexicons. With the emergence of annotated data, several research works have employed supervised machine learning techniques such as Support Vector Machines (SVM) (Sarker and Gonzalez, 2015), Conditional Random Fields (CRF) (Nikfarjam et al., 2015) and Random Forests (Zhang et al., 2016). In recent years, with the introduction of deep learning techniques, most studies utilize deep learning models to predict ADRs. Lee et al. (2017) developed a semi-supervised deep learning model on the Twitter corpus. In particular, they used a Convolutional Neural Network (CNN) for classification. Stanovsky et al. (2017) used a Recurrent Neural Network integrated with knowledge graph embeddings on the CADEC corpus. Their study shows that this integration can make the model more accurate. Tutubalina and Nikolenko (2017) explored the combination of CRF and Recurrent Neural Networks (RNN). Their results show that a CRF can assist the RNN model in capturing the context well. The most relevant work to this study is the one conducted by Chowdhury et al. (2018). They jointly learned three tasks: binary classification, ADR labeling, and indication labeling using an RNN-attention-coverage model.
3 Methodology
With our adversarial multi-task framework, we jointly learn to label the ADR events from multiple ADR datasets. ADR labeling is a sequence labeling problem. For a given input sequence X, the model learns to find the optimal tag sequence y*. Mathematically,
$y^* = \arg\max_{y} P(Y|X)$ (1)
Our proposed adversarial multi-task framework is depicted in Figure 2. Our model comprises five components:
(1) Embedding Layer: It captures the meaning and semantic associations between pharmacovigilance words that appear in the text.
(2) Encoder/Feature Extractor Layer: It generates both task-specific and task-shared features. Each of these feature generator modules consists of a Convolutional Neural Network (CNN) followed by a stacked Bi-Gated Recurrent Unit (GRU). The task-specific feature generator is responsible for capturing the features specific to the task. In the task-shared feature generator, there is an additional adversarial learning component, where the feature extractor (Generator) operates adversarially against a learnable multi-layer perceptron (Discriminator), preventing it from accurately predicting which task the features were generated from.
(3) Concatenation Layer: This is responsible for concatenating the feature representations obtained by both feature extractor modules.
(4) Multi-head Attention Layer: This learns to better encode a given word by looking at the other words in the text.
(5) CRF Layer: This is used to predict the most probable tag sequence.
3.1 Input Text
The input to our model is a sequence of words X = (x1, x2, . . . , xn) corresponding to social-media posts/medical case reports comprising n words.
3.2 Embedding Layer
This layer generates two forms of representations:
Word embedding: maps each word $x_i$ to a low-dimensional vector $w_i \in \mathbb{R}^{d_e}$. We use pre-trained word embeddings of dimension $d_e$.
Character embedding: captures the morphological features. The character embedding can help in capturing the representations of out-of-vocabulary (OOV) words, misspelt words and variations in noun or verb phrases. When it comes to social media text, this issue becomes even more crucial to resolve.
Character embedding is one of the ways to resolve this issue. It allows the model to learn lexical patterns (e.g., suffixes or prefixes), which eventually helps in capturing out-of-vocabulary (OOV) words and other information that is difficult to capture through word embeddings. We employ a CNN for character embedding. Let $C = \{c_1, c_2, \ldots, c_k\}$ be the character sequence of word $x_i$. Each character $c_j$ is represented as a one-hot vector whose length $|C|$ is the number of unique characters in the dataset. The resulting one-hot representations of all the characters in the word are stacked to form a matrix $M \in \mathbb{R}^{k \times |C|}$. Thereafter, we apply several filters of different widths to this matrix. The width of these filters varies from 1 to k, i.e., these filters look at 1- to k-gram character sequences. A max-pooling operation is performed after the convolution operation to pick the most relevant features. We denote this character embedding feature as $c_i$. Finally, the output of the embedding layer for the i-th word is the concatenation of the word embedding $w_i$ and the character embedding $c_i$. For each $x_i \in X$, the embedding layer generates the embedding in the following way:
$e_i = w_i \oplus c_i$ (2)
3.3 Feature Extractor
Our feature extractor utilizes a CNN and a stacked Bi-GRU to encode the output of the Embedding layer. The CNN and the stacked Bi-GRU take the Embedding layer output as input and generate features that further encode the sequence information. Since we employ a stacked Bi-GRU, there could be a vanishing gradient problem. To tackle this, we employ a highway layer (Srivastava et al., 2015), which has shown a significant impact in reducing the vanishing gradient problem in various NLP tasks (Kim et al., 2016; Costa-jussà and Fonollosa, 2016). Let the input sequence to this layer be $E = \{e_1, e_2, \ldots, e_n\}$. A convolution operation is performed over the zero-padded sequence $E_p$. Similar to the character embedding, a set of k filters of size m is applied to the sequence. We obtain the convolved feature $c_t$ at a given time step t, for $t = 1, 2, \ldots, n$:
$c_t = \mathrm{relu}\big(F[e_{t-\frac{m-1}{2}}, \ldots, e_t, \ldots, e_{t+\frac{m-1}{2}}]\big)$ (3)
Then, we generate the feature vectors $C' = [c'_1, c'_2, \ldots, c'_n]$ by applying max pooling on C. Inspired by the success of stacked attentive RNNs in solving other NLP tasks (Wu et al., 2016; Graves et al., 2013; Dyer et al., 2015; Prakash et al., 2016), we use a stacked GRU to encode the input text. The stacked GRU is an extension of the GRU model that has multiple hidden GRU layers. The purpose of using multiple GRU layers is to learn more sophisticated conditional distributions from the data (Bahdanau et al., 2015). In this work, we employ a vertical stacking strategy where the output of the previous GRU layer is fed to the highway layer and the corresponding output is passed as input to the next GRU layer. Let the number of layers in the stacked GRU be L; the GRU then computes the hidden state for each layer $l \in \{1, \ldots, L\}$ as follows:
$h_k^l = \mathrm{GRU}(h_k^{l-1}, h_{k-1}^l)$ (4)
where $h_k^l$ is the hidden state representation at the l-th layer. The input $h_k^0$ to the first layer (l = 1) of the GRU is initialized randomly. The GRU unit of the first layer at the k-th position takes the embedding layer output $e_k$ of the k-th word as input. We compute the forward ($\overrightarrow{h}_k$) and backward ($\overleftarrow{h}_k$) hidden states for each word k in the sentence. The final hidden state at layer $l \in \{1, \ldots, L\}$ is computed by augmenting both hidden states: $z_k^l = [\overrightarrow{h}_k^l \oplus \overleftarrow{h}_k^l]$.
The final input text representation from the stacked Bi-GRU layer is calculated by taking the hidden states of the last layer (L) of the GRU as follows:
$(h_1, h_2, \ldots, h_n) = \big([\overrightarrow{h}_1^L \oplus \overleftarrow{h}_1^L], [\overrightarrow{h}_2^L \oplus \overleftarrow{h}_2^L], \ldots, [\overrightarrow{h}_n^L \oplus \overleftarrow{h}_n^L]\big)$ (5)
We compute the overall input text representation by concatenating the output of the CNN layer $C'$ and of the stacked Bi-GRU (Eq. 5) as follows:
$(z_1, z_2, \ldots, z_n) = \big([c'_1 \oplus h_1], [c'_2 \oplus h_2], \ldots, [c'_n \oplus h_n]\big)$ (6)
The above approach generates the task-specific features and is applied for each task separately. In order to capture the features that are common across tasks, we utilize the same feature extractor framework, which serves as a Generator model, with a feed-forward neural network serving as a Discriminator.
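To make the data flow of one feature extractor branch concrete, the following PyTorch sketch wires a convolutional layer and a stacked bidirectional GRU over the embedded tokens and concatenates their outputs per token, in the spirit of Eqs. 3-6. It is a simplified illustration under assumed dimensions; the highway connections between GRU layers and the max-pooling step described above are omitted for brevity, and it is not the authors' released implementation.

```python
# A minimal sketch of one feature extractor branch (CNN + stacked Bi-GRU),
# assuming illustrative dimensions; highway layers and max pooling are omitted.
import torch
import torch.nn as nn

class FeatureExtractorBranch(nn.Module):
    def __init__(self, emb_dim=300, conv_channels=100, hidden=100, num_layers=4):
        super().__init__()
        # Convolution over the embedded sequence (cf. Eq. 3); padding keeps length n.
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        # Stacked bidirectional GRU (cf. Eq. 4); only the top layer's states are used.
        self.bigru = nn.GRU(emb_dim, hidden, num_layers=num_layers,
                            bidirectional=True, batch_first=True)

    def forward(self, emb):                      # emb: (batch, n, emb_dim)
        c = torch.relu(self.conv(emb.transpose(1, 2))).transpose(1, 2)
        h, _ = self.bigru(emb)                   # (batch, n, 2*hidden), last layer
        return torch.cat([c, h], dim=-1)         # per-token [c'_k + h_k], cf. Eq. 6

# Two such branches would be instantiated in the full model: one per task
# (task-specific) and one shared across all tasks (fed to the discriminator).
```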
3.4 Task Discriminator Layer
Our feature extractor layer generates two types of features: shared and task-specific. Ideally, both feature spaces should be mutually exclusive. To ensure that the task-specific features of a given task do not leak into the shared space, we apply the concept of adversarial training (Goodfellow et al., 2014) to the shared feature space. We follow the same method as introduced by Liu et al. (2017) to keep the shared feature space uncontaminated by task-specific features. To achieve this, a Task Discriminator D is used to map the shared features to an estimate of the task they originate from. In our case, the Task Discriminator is a fully connected layer with a softmax that produces a probability distribution over the tasks to which the shared features may belong. The shared feature extractor (cf. Section 3.3) works as the Generator (G) to generate shared features. The shared feature extractor is made to work in an adversarial way, preventing the discriminator from predicting the task and hence preventing contamination in the shared space. The adversarial loss is used to train the model. Let the shared features (cf. Equation 6) be $\{z_1^s, z_2^s, \ldots, z_n^s\}$. The discriminator can be represented as:
$D(z^s) = \mathrm{softmax}(z_n^s W^d + b^d)$ (7)
where $W^d$ and $b^d$ are the weight matrix and bias, respectively.
3.5 Concatenation Layer
Let us denote the shared and task-specific features for the input text as $z^s = \{z_1^s, z_2^s, \ldots, z_n^s\}$ and $z^t = \{z_1^t, z_2^t, \ldots, z_n^t\}$. Finally, the output of the feature extractor layer is computed as the concatenation of the shared and task-specific features:
$S = (z_1^s \oplus z_1^t,\ z_2^s \oplus z_2^t,\ \ldots,\ z_n^s \oplus z_n^t) = (s_1, s_2, \ldots, s_{n-1}, s_n)$ (8)
3.6 Multi-head Attention Layer
Multi-head attention is used to learn the dependencies between any pair of words in the input text. We apply multi-head attention on the final representation of the input text S as computed in Equation 8. Multi-head attention (Vaswani et al., 2017) can be precisely described as follows:
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{QK^T}{\sqrt{d}}\Big)V$ (9)
where Q, K and V are the query, key and value matrices. In our experiments, all of them are derived from S (after multiplication with the respective learned weights), and d is the dimension of the feature extraction units. Multi-head attention first linearly projects the queries, keys and values t times (once per head) using different linear projections. These projections then perform scaled dot-product attention in parallel. Finally, the attention outputs are concatenated and projected once again to obtain the new representation. Formally, the multi-head attention at head i can be computed by:
$\mathrm{head}_i = \mathrm{Attention}(S W_i^Q, S W_i^K, S W_i^V), \quad S' = W(\mathrm{head}_1 \oplus \mathrm{head}_2 \oplus \ldots \oplus \mathrm{head}_t)$ (10)
where $W_i^Q$, $W_i^K$ and $W_i^V$ are the weight matrices.
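As a concrete illustration of Eq. 9, the short NumPy function below computes scaled dot-product attention over a representation matrix. The dimensions and the self-attention call are illustrative assumptions; the per-head projections and the final concatenation of Eq. 10 are described in the comments rather than implemented.

```python
# A small NumPy illustration of Eq. 9 (scaled dot-product attention).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise token scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attended values

# In the multi-head layer, each head i first projects S with its own
# W_i^Q, W_i^K, W_i^V, runs this function, and the t head outputs are then
# concatenated and projected again (Eq. 10). Here we simply self-attend over S.
n, d = 6, 8
S = np.random.randn(n, d)
S_attended = scaled_dot_product_attention(S, S, S)  # shape (6, 8)
```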
3.7 Conditional Random Field Layer
In sequence labeling problems there is often a dependency between successive labels. Instead of predicting the current label independently through a softmax layer, we employ a CRF (Lafferty et al., 2001) layer, which takes the previous label into account when predicting the current one. First, the attentive feature at a given time step t is projected to another space whose dimension equals the number of output tags. Mathematically, it can be formulated as follows:
$o_t = W^S S'_t + b^S$ (11)
Thereafter, we calculate the score of a given label sequence y as follows:
$\mathrm{score}(y|X) = \sum_{t=1}^{n} \big(A_{y_{t-1}, y_t} + o_{t, y_t}\big)$ (12)
where A is the transition score matrix. Finally, we select the tag sequence with the highest score:
$\hat{y} = \arg\max_{y \in Y} \mathrm{score}(y|X)$ (13)
In the decoding stage, we use the Viterbi algorithm to compute the optimal tag sequence.
4 Experimental Details
4.1 Network Training
We optimize two different losses to train our multi-task model. The first is the task-specific loss $\mathcal{L}_{task}$, which is specific to each task. Apart from the task-specific loss, we also optimize an adversarial loss that trains the network so that the task cannot be correctly predicted from the shared features. For the task-specific loss, we use the negative log-likelihood objective as the loss function for each task. Given the total number of tasks T and N training samples $(x_i, y_i)$ from task $t \in T$, the task loss $\mathcal{L}_{task}$ can be computed as:
$\mathcal{L}_{task} = -\sum_{t=1}^{T}\sum_{i=1}^{N} \log p(\hat{y}_i^t \mid x_i^t)$ (14)
The likelihood $p(\hat{y}_i^t \mid x_i^t)$ is computed as:
$p(\hat{y}_i^t \mid x_i^t) = \frac{e^{\mathrm{score}(\hat{y}_i^t \mid x_i^t)}}{\sum_{y \in Y} e^{\mathrm{score}(y_i^t \mid x_i^t)}}$ (15)
The score(.) function is computed by Equation 12. The adversarial loss trains the shared feature extractor to generate shared features such that the task discriminator layer cannot reliably recognize which task the input text comes from. The adversarial loss $\mathcal{L}_{adv}$ can be computed as follows:
$\mathcal{L}_{adv} = \min_{G} \max_{D} \sum_{t=1}^{T}\sum_{i=1}^{N} d_i^t \log\big[D\big(G(x_i^t)\big)\big]$ (16)
where $d_i^t$ is the gold label indicating the type of the current task and $x_i^t$ is the i-th example of task t. The min-max optimization problem is addressed by the gradient reversal layer (Ganin and Lempitsky, 2015). The final loss of the model is defined as:
$\mathcal{L} = \alpha \times \mathcal{L}_{task} + \beta \times \mathcal{L}_{adv}$ (17)
where α and β are scalar weighting parameters.
4.2 Hyper-parameters
We use pre-trained word embeddings from Pyysalo et al. (2013) of dimension 200 (available at http://evexdb.org/pmresources/vec-space-models/). They are trained on the combination of PubMed and PMC biomedical texts with texts extracted from a recent English Wikipedia dump. We set the maximum length of the input text to 44 and the maximum character length to 10. A CNN-based character embedding of length 100 is used in this experiment. The optimal hidden state dimension of the GRU is set to 100. We use 4 GRU layers to form the stacked GRU layer. The CNN layer uses the filter set {2, 3, 4}. In the multi-head attention layer, we use a total of 4 heads to compute the attentive representation. We set the dropout rate to 0.5. The batch size is set to 16, and the loss weights α and β are set to 0.8 and 0.2, respectively. The Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.01 is used during training to optimize the network weights. The optimal values of the hyper-parameters are obtained through the 10-fold cross-validation experiment.
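Putting Sections 4.1 and 4.2 together, the sketch below shows one common way to realize the gradient reversal layer of Ganin and Lempitsky (2015) and the weighted combination of Eq. 17 in PyTorch, with α = 0.8 and β = 0.2 as stated above. The function and variable names are hypothetical, and the snippet is a hedged illustration of the training objective rather than the authors' actual code.

```python
# A hedged PyTorch sketch of gradient reversal plus the combined loss (Eq. 17).
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)            # identity in the forward pass
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output            # flip gradients into the shared extractor

def combined_loss(task_nll, shared_feats, discriminator, task_id,
                  alpha=0.8, beta=0.2):
    # The discriminator tries to identify the source task; because its input
    # passes through GradReverse, minimizing this term pushes the shared
    # extractor toward task-invariant features (the min-max game of Eq. 16).
    logits = discriminator(GradReverse.apply(shared_feats))
    adv = F.cross_entropy(logits, task_id)
    return alpha * task_nll + beta * adv   # Eq. 17
```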
4.3 Datasets We use three different ADR labeling datasets : PSB 2016 Social Media Shared Task for ADR Extraction dataset (Twitter), CADEC, and MEDLINE to evaluate our multi-task model performance. It is to be noted that our model is trained simultaneously on the different ADR datasets. The different datasets used in the experiment are as follows: 1. Twitter dataset: The first dataset, which we use is the Twitter dataset from PSB 2016 Social Media Shared Task for ADR Extraction task. It contains 572 tweets which are fully annotated for mentions of ADR, tweet ID, start and end offset, UMLS ID, annotated text span and the related drugs. We extracted the following three tags from this dataset: ADR, Drugs, and Indications. 2. CADEC adverse drugs events dataset: The another dataset, which we use is the CADEC adverse drugs event dataset. It contains a total of 1248 sentences containing different tags. Our model extract the following tags from CADEC Corpus: ADR, Disease, Drug, Finding and Symptom. 3. MEDLINE ADR dataset: This ADR corpus was released by Gurulingappa et al. (2012b). It was derived from the MEDLINE case reports3. This case report provides information about the symptoms, signs, diagnosis, treatment and follow-up of individual patients. This corpus contains 2972 documents with 3https://www.nlm.nih.gov/bsd/indexing/ training/PUB_050 20967 sentences. Out of which, 4272 sentences are annotated with names and relationships between drugs, adverse effects and dosages. Our model extract the Drug and ADR mentions in the sentences. 5 Result and Analysis We evaluate the pharmacovigilance labeling tasks in terms of Precision, Recall and F1-Score. Unlike the existing system, we evaluate the performance of our model, using the exact matching scheme, where a prediction sequence is counted as correct only if all the sequence labels are predicted correctly. We will begin by first describing the baselines models, followed by the results obtained from the proposed model and then present the analysis of the results. 5.1 Baselines We compare our adversarial multi-task model with the following state-of-the-art baselines. It is to be noted that these baselines are re-implementation of the state-of-the-art methods for ADR extraction. (1) ST-BLSTM: This is a single task model for ADR labeling with Bi-LSTM as sentence encoder. In our experiment, we build the individual model for each dataset. (2) ST-CNN: This model is similar to baseline STBLSTM, but instead of using Bi-LSTM for sentence encoder, we use CNN with filters: {2, 3, 4}. (3) CRNN: In this model CNN and LSTM are together used for sentence encoder (Huynh et al., 2016). We adopt the same architecture for ADR extraction by classifying each token of the sentence into a pre-defined set of tags. (4) RCNN: This model is similar to the third baseline, but here we extract the LSTM feature first and then pass these features as the input to the CNN network. (5) MT-BLSTM: It is a multi-task model (Chowdhury et al., 2018) with a shared Bi-LSTM layer across the task for sentence encoder and taskspecific Bi-LSTM for each task. The final representation is obtained by concatenating shared and task-specific Bi-LSTM. (6) MT-Atten-BLSTM: This baseline model (Chowdhury et al., 2018) is similar to the MTBLSTM. The sentence encoder of this model is also equipped with the word level attention mechanism. 
Models | Twitter (P / R / F1) | CADEC (P / R / F1) | MEDLINE (P / R / F1)
ST-BLSTM | 57.7 / 56.8 / 57.3 | 52.9 / 49.4 / 51.1 | 71.65 / 72.19 / 71.91
ST-CNN | 63.8 / 65.8 / 67.1 | 39.7 / 42.7 / 42.0 | 66.88 / 73.81 / 70.17
CRNN (Huynh et al., 2016) | 61.1 / 62.4 / 64.9 | 49.5 / 46.9 / 48.2 | 71.0 / 77.3 / 75.5
RCNN (Huynh et al., 2016) | 57.6 / 58.7 / 63.6 | 42.4 / 44.9 / 43.6 | 73.5 / 72.0 / 74.0
MT-BLSTM (Chowdhury et al., 2018) | 65.57 / 61.02 / 63.19 | 60.50 / 55.16 / 57.62 | 72.72 / 75.49 / 74.0
MT-Atten-BLSTM (Chowdhury et al., 2018) | 62.26 / 69.62 / 65.73 | 56.63 / 60.0 / 58.27 | 75.08 / 81.06 / 77.95
Proposed Model | 68.78 / 70.81 / 69.69 | 64.33 / 67.03 / 65.58 | 81.97 / 82.61 / 82.18
Table 1: Result comparison of the proposed method with the state-of-the-art baseline methods. Here, 'P', 'R' and 'F1' denote Precision, Recall and F1-Score. The results on CADEC and MEDLINE are on 10-fold cross validation; for the Twitter dataset, we use the train and test sets as provided by the PSB 2016 shared task.

Model Components | Twitter | CADEC | MEDLINE
Proposed Model | 69.69 | 65.58 | 82.18
– Character Embedding | 67.63 (2.06 ↓) | 56.10 (9.48 ↓) | 76.34 (5.84 ↓)
– Multi-head Attention | 68.65 (1.04 ↓) | 60.51 (5.07 ↓) | 77.71 (4.47 ↓)
– Adversarial Learning | 68.11 (1.58 ↓) | 58.57 (7.01 ↓) | 71.21 (10.97 ↓)
Table 2: Ablation study on all the datasets. The values in brackets show the absolute decrease (↓) in the proposed model's F1-Score when the respective component is removed, i.e., the contribution of that component to our proposed model.

5.2 Results
The extensive results of our proposed model, with comparisons to the state-of-the-art baseline techniques, are reported in Table 1. Our proposed model outperforms the state-of-the-art baseline techniques by fair margins in terms of precision, recall and F1-Score on all the datasets. In our first experiment, we train two models (i.e., Single-Task BLSTM and Multi-Task BLSTM) to analyze the effect of the multi-task model (MT-BLSTM) over a single-task model (ST-BLSTM). On all three datasets, we can see from Table 1 that the multi-task framework with its sharing scheme helps boost the performance of the system. We observe performance improvements of 5.89, 6.52 and 2.09 F1-Score points on the Twitter, CADEC and MEDLINE datasets, respectively. Similar improvements are also observed in terms of precision and recall. In comparison to the fifth baseline (MT-BLSTM), our proposed method achieves improvements of 6.5, 7.96 and 8.18 F1-Score points on the Twitter, CADEC and MEDLINE datasets, respectively. This shows the robustness of our proposed multi-task method. We also compare our proposed system with the MT-Atten-BLSTM model. The results show performance improvements of 3.96, 7.31 and 4.23 F1-Score points for the Twitter, CADEC and MEDLINE datasets, respectively. The improvements over all the baseline methods are statistically significant (p < 0.05).
5.3 Ablation Study
To analyze the impact of the various components of our model, we perform an ablation study (cf. Table 2) by removing one component at a time from the proposed model and evaluating the performance on all three datasets. Character embedding is found to be the most crucial component on the Twitter and CADEC datasets, as both come from social media text and are characterized by short texts and out-of-vocabulary words. To prove our hypothesis (that introducing adversarial learning in the multi-task framework can make the shared space contain only task-invariant features), we exclude the adversarial loss from our proposed framework. We observe a significant decline in performance.
This depicts that making the task shared space free from the contamination of task-specific feature, can significantly improve the performance of the system. Removal of the multi-head attention also lead to drop of an average 4% F1-Score points across all the datasets. 6 Analysis To get a deeper insight into how our multi-task model performs over the state-of-the-art multitask baseline model, we sample few sentences from all the three datasets. In the Table-3, we demonstrate the capability of our model in correctly predicting all the labels, while the MTLSTM and MT-LSTM-atten make the incorrect prediction. In the sentence 1 due to the sharing scheme, bipolar was correctly labeled as Indication. 5242 Sentence 1 fluoxetine and quet combo zombified me ahh the med merrygoround bipolar Actual Labels B-Drug O B-Drug O B-ADR O O O O O B-Indication MT-LSTM B-Drug O B-Drug O O O O O O O O MT-LSTM-Atten B-Drug O B-Drug O B-ADR O O O O O O Proposed Approach B-Drug O B-Drug O B-ADR O O O O O B-Indication Sentence 2 clozapine-induced tonic-clonic seizure managed with valproate implication for clinical care Actual Labels B-Drug B-ADR I-ADR O O O O O O O MT-LSTM B-Drug O O O O O O O O O MT-LSTM-Atten B-Drug O B-ADR O O B-ADR O O O O Proposed Approach B-Drug B-ADR I-ADR O O O O O O O Table 3: Comparison of the predictions of the proposed approach with the baseline models. Type-1 Sentence 1 too much zoloft and seroquel to get the horn my life is lie Actual O O B-Drug O B-Drug O O O B-ADR I-ADR I-ADR O O Predicted O O B-Drug O B-Drug O O O O O O O O Type-2 Sentence 2 pain in upper right arm could not sleep on it or move it behind my back Actual B-ADR I-ADR I-ADR I-ADR I-ADR O O O O O O O O O O O Predicted B-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR I-ADR Type-3 Sentence 3 terrible joint pain could not move shoulder hip hurt Actual O B-ADR B-ADR B-ADR I-ADR I-ADR I-ADR B-ADR I-ADR Predicted B-ADR I-ADR I-ADR B-ADR I-ADR I-ADR I-ADR I-ADR I-ADR Table 4: Exemplar description of various types of error. Here, Type-1 represent the error due ‘Presence of implicit mention’. Type-2 represent the error due to ‘Issue in annotation’ and Type-3 represents the error of type ‘Boundary detection problem’. In the sentence 2, we observe that, only MTLSTM-Atten model is able to predict the partial ADR (i.e. seizure instead of tonic-clonic seizure.), while our model is able to predict the full ADR phrase correctly. 6.1 Error Analysis In this subsection, we analyze the different sources of errors which lead to mis-classification. We closely study the false positive and false negative instances and come up with the following observations: (1) Presence of implicit mention: We observe that in the Twitter dataset user often tends to use very implicit and creative language to describe their adverse drug reaction. For e.g., in the sentence-1 of Table-4, user describes his ADR as ‘horn my life’ by taking drug (zoloft and seroquel). (2) Issue in annotation: For the CADEC dataset, we observe some of the sentences are not completely tagged. For e.g., in the sentence-2 of Table4, here ‘could not sleep’, ‘move it behind my back’ is also an ADR, in addition to ‘pain in upper right arm’. However, the first two ADRs are not labeled in the dataset. (3) Boundary detection problem: We also observe that, our system sometimes fails to detect the proper boundary. 
This might be because of the task sharing feature, which learns the feature distributions across the dataset which may not be correct for the given dataset as shown in sentence3 of Table-4. 7 Conclusion In this paper, we have proposed an end-to-end multi-task framework that provides a unified solution for pharmacovigilance mining. We have utilized an adversarial training based multi-task framework, which ensures that task-specific and task shared features are not contaminated. We evaluated this framework on three benchmark pharmacovigilance datasets. Our results demonstrate the capability of our model across all the datasets. In future, we would like to assist the model with multiple linguistic aspects of social media text like figurative languages. Acknowledgement Sriparna Saha and Asif Ekbal gratefully acknowledge the Young Faculty Research Fellowship (YFRF) Award, supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia) for carrying out this research. Authors acknowledge “Shusrut: ezDI Research Lab on Health Informatics”, Department of Computer Science and Engineering, IIT Patna, India. References Eiji Aramaki, Yasuhide Miura, Masatsugu Tonoike, Tomoko Ohkuma, Hiroshi Masuichi, Kayo Waki, and Kazuhiko Ohe. 2010. Extraction of adverse 5243 drug effects from clinical records. Studies in health technology and informatics, 160 Pt 1:739–43. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Rachel E Behrman, Joshua S Benner, Jeffrey S Brown, Mark McClellan, Janet Woodcock, and Richard Platt. 2011. Developing the sentinel systema national resource for evidence development. New England Journal of Medicine, 364(6):498–499. Adrian Benton, Lyle Ungar, Shawndra Hill, Sean Hennessy, Jun Mao, Annie Chung, Charles E Leonard, and John H Holmes. 2011. Identifying potential adverse effects using the web: A new approach to medical hypothesis generation. Journal of biomedical informatics, 44(6):989–996. Shaika Chowdhury, Chenwei Zhang, and Philip S. Yu. 2018. Multi-task pharmacovigilance mining from social media posts. In Proceedings of the 2018 World Wide Web Conference, WWW ’18, pages 117–126, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Marta R. Costa-juss`a and Jos´e A. R. Fonollosa. 2016. Character-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 357–361, Berlin, Germany. Association for Computational Linguistics. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343, Beijing, China. Association for Computational Linguistics. Clark C Freifeld, John S Brownstein, Christopher M Menone, Wenjie Bao, Ross Filice, Taha Kass-Hout, and Nabarun Dasgupta. 2014. Digital drug safety surveillance: monitoring pharmaceutical products in twitter. Drug safety, 37(5):343–350. 
Carol Friedman. 2009. Discovering novel adverse drug events using natural language processing and mining of the electronic health record. In Conference on Artificial Intelligence in Medicine in Europe, pages 1–5. Springer. Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 611 July 2015, pages 1180–1189. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pages 6645–6649. IEEE. Harsha Gurulingappa, Juliane Fluck, Martin HofmannApitius, and Luca Toldo. 2011. Identification of adverse drug event assertive sentences in medical case reports. In First international workshop on knowledge discovery and health care management (KDHCM), European conference on machine learning and principles and practice of knowledge discovery in databases (ECML PKDD), pages 16–27. Harsha Gurulingappa, Abdul Mateen-Rajpu, and Luca Toldo. 2012a. Extraction of potential adverse drug events from medical case reports. Journal of biomedical semantics, 3(1):15. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012b. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of biomedical informatics, 45(5):885–892. Katja M Hakkarainen, Khadidja Hedna, Max Petzold, and Staffan H¨agg. 2012. Percentage of patients with preventable adverse drug reactions and preventability of adverse drug reactions–a meta-analysis. PloS one, 7(3):e33236. Rave Harpaz, William DuMouchel, Nigam H Shah, David Madigan, Patrick Ryan, and Carol Friedman. 2012a. Novel data-mining methodologies for adverse drug event discovery and analysis. Clinical Pharmacology & Therapeutics, 91(6):1010–1021. Rave Harpaz, Santiago Vilar, William DuMouchel, Hojjat Salmasian, Krystl Haerian, Nigam H Shah, Herbert S Chase, and Carol Friedman. 2012b. Combing signals from spontaneous reports and electronic health records for detection of adverse drug reactions. Journal of the American Medical Informatics Association, 20(3):413–419. Trung Huynh, Yulan He, Alistair Willis, and Stefan Rueger. 2016. Adverse drug reaction classification with deep neural networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 877–887, Osaka, Japan. The COLING 2016 Organizing Committee. Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73–81. 5244 Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 2741–2749. AAAI Press. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 
2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Robert Leaman, Laura Wojtulewicz, Ryan Sullivan, Annie Skariah, Jian Yang, and Graciela Gonzalez. 2010. Towards internet-age pharmacovigilance: extracting adverse drug reactions from user posts to health-related social networks. In Proceedings of the 2010 workshop on biomedical natural language processing, pages 117–125. Association for Computational Linguistics. Kathy Lee, Ashequl Qadir, Sadid A. Hasan, Vivek Datla, Aaditya Prakash, Joey Liu, and Oladimeji Farri. 2017. Adverse drug event detection in tweets with semi-supervised convolutional neural networks. In Proceedings of the 26th International Conference on World Wide Web, WWW ’17, pages 705–714, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Hui Li, Xiao-Jing Guo, Xiao-Fei Ye, Hong Jiang, WenMin Du, Jin-Fang Xu, Xin-Ji Zhang, and Jia He. 2014. Adverse drug reactions of spontaneous reports in shanghai pediatric population. PLoS One, 9(2):e89829. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–10, Vancouver, Canada. Association for Computational Linguistics. Azadeh Nikfarjam and Graciela H Gonzalez. 2011. Pattern mining for extraction of mentions of adverse drug reactions from user comments. In AMIA Annual Symposium Proceedings, volume 2011, page 1019. American Medical Informatics Association. Azadeh Nikfarjam, Abeed Sarker, Karen Oconnor, Rachel Ginn, and Graciela Gonzalez. 2015. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association, 22(3):671–681. Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek V. Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2923–2934. S Pyysalo, F Ginter, H Moen, T Salakoski, and S Ananiadou. 2013. Distributional semantics resources for biomedical text processing. In Proceedings of LBM 2013, pages 39–44. Abeed Sarker and Graciela Gonzalez. 2015. Portable automatic text classification for adverse drug reaction detection via multi-corpus training. Journal of biomedical informatics, 53:196–207. Abeed Sarker, Azadeh Nikfarjam, and Graciela Gonzalez. 2016. Social media mining shared task workshop. In Biocomputing 2016: Proceedings of the Pacific Symposium, pages 581–592. World Scientific. Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2377–2385. Curran Associates, Inc. Gabriel Stanovsky, Daniel Gruhl, and Pablo Mendes. 2017. Recognizing mentions of adverse drug reaction in social media using knowledge-infused recurrent models. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 142–151. Janet Sultana, Paola Cutroneo, and Gianluca Trifir`o. 2013. Clinical and economic burden of adverse drug reactions. Journal of pharmacology & pharmacotherapeutics, 4(Suppl1):S73. Elena Tutubalina and Sergey Nikolenko. 2017. Combination of deep recurrent neural networks and conditional random fields for extracting adverse drug reactions from user reviews. Journal of Healthcare Engineering, 2017. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xiaoyan Wang, George Hripcsak, Marianthi Markatou, and Carol Friedman. 2009. Active computerized pharmacovigilance using natural language processing, statistics, and electronic health records: a feasibility study. Journal of the American Medical Informatics Association, 16(3):328–337. 5245 Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Zhifei Zhang, JY Nie, and Xuyao Zhang. 2016. An ensemble method for binary classification of adverse drug reactions from social media. In Proceedings of the Social Media Mining Shared Task Workshop at the Pacific Symposium on Biocomputing.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5246–5251 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5246 Quantity Tagger: A Latent-Variable Sequence Labeling Approach to Solving Addition-Subtraction Word Problems Yanyan Zou and Wei Lu StatNLP Research Group Singapore University of Technology and Design yanyan [email protected], [email protected] Abstract An arithmetic word problem typically includes a textual description containing several constant quantities. The key to solving the problem is to reveal the underlying mathematical relations (such as addition and subtraction) among quantities, and then generate equations to find solutions. This work presents a novel approach, Quantity Tagger, that automatically discovers such hidden relations by tagging each quantity with a sign corresponding to one type of mathematical operation. For each quantity, we assume there exists a latent, variable-sized quantity span surrounding the quantity token in the text, which conveys information useful for determining its sign. Empirical results show that our method achieves 5 and 8 points of accuracy gains on two datasets respectively, compared to prior approaches. 1 Introduction Teaching machines to automatically solve arithmetic word problems, exemplified by two problems in Figure 1, is a long-standing Artificial Intelligence (AI) task (Bobrow, 1964; Mukherjee and Garain, 2008). Recent research (Hosseini et al., 2014; Kushman et al., 2014; Roy and Roth, 2015; Wang et al., 2017, 2018b,a) focused on designing algorithms to automatically solve arithmetic word problems. One line of prior works designed rules (Mukherjee and Garain, 2008; Hosseini et al., 2014) or templates (Kushman et al., 2014; Zhou et al., 2015; Mitra and Baral, 2016) to map problems to expressions, where rules or templates are collected from training data. However, it would be non-trivial and expensive to acquire a general set of rules or templates. Furthermore, such approaches typically require additional annotations. The addition-subtraction problems, which constitute the most fundamental class of arithmetic word problems, have been the focus Problem 1: A worker at a medical lab is studying blood samples. 2 samples contained a total of 7341 blood cells. The first sample contained 4221 blood cells. How many blood cells were in the second sample? Prediction: (0)×2+(+1)×7341+(−1)×4221+(−1)×x = 0 Equation: 7341 −4221 −x = 0 Solution: x = 3120 Problem 2: There are 22 walnut trees currently in the park. Park workers will plant walnut trees today. When the workers are finished there will be 55 walnut trees in the park. How many walnut trees did the workers plant today? Prediction: (+1)×22 + (−1)×55 + (+1)×x = 0 Equation: 22 −55 + x = 0 Solution: x = 33 Figure 1: Two examples of arithmetic word problems described in English with answers. for many previous works (Hosseini et al., 2014; Mitra and Baral, 2016). We also focus on this important task in this work. Our key observation is that essentially solving such a class of problems can be tackled from a sequence labeling perspective. This motivates us to build a novel sequence labeling approach, namely Quantity Tagger. The approach tags each quantity in the text with a label that indicates a specific mathematical operation. Taking Problem 1 from Figure 1 as an example, three constant quantities “2”,“7341” and “4221” sequentially appear in the problem text. 
We further introduce an unknown quantity x corresponding to the question sentence. From the problem description, one can form an equation "7341 − 4221 − x = 0", based on which we can obtain the solution to x. This equation is mathematically equivalent to "(0)×2 + (+1)×7341 + (−1)×4221 + (−1)×x = 0", where "0, +1, −1, −1" are the signs associated with the quantities "2, 7341, 4221, x". Solving an arithmetic word problem can thus be cast as a sequence labeling problem where we assign every quantity appearing in the problem text a sign (in the form of a tag) from the set {+1, 0, −1}. We further assume there exists a latent quantity span that needs to be learned, a sequence of words surrounding each quantity, based on which tagging decisions could be made.
Figure 2: Illustrations of assumptions made by QT, QT(S) and QT(R), with possible paths (selected nodes are highlighted) built for the token sequence t (J=3), consisting of words from the original problem text T.
We demonstrate through experiments on benchmark data that, despite the relatively simple assumptions involved, our novel sequence labeling approach is able to yield significantly better results than various state-of-the-art models. To the best of our knowledge, this is the first work that tackles the problem from a sequence labeling perspective. Our code is publicly available at https://github.com/zoezou2015/quantity_tagger.
2 Our Approach
2.1 A Tagging Problem
We define $Q = (q_1, q_2, \ldots, q_i, x, q_{i+1}, \ldots, q_m)$ ($0 < i < m$, $m \geq 2$ in arithmetic word problems) as an ordered quantity sequence for a problem text T, where $q_i \in Q$ represents a constant quantity appearing in T, and x stands for the unknown quantity assigned to the question sentence. Q maintains the same order as the quantities appearing in T. The goal is to construct a valid math equation E. This research investigates such a problem by sequentially tagging each quantity $q \in Q$ with the most likely sign from the set $S = \{+1, 0, -1\}$, where "+1" ("−1") means a quantity is positively (negatively) related to the question, i.e., the sign of the quantity should be + (−) when it forms part of the equation, and "0" means a quantity is irrelevant to the question and should be ignored. Given a specific prediction of the signs of the quantities, we can form an equation as follows:
$\sum_{q_i \in Q \setminus \{x\}} s_i q_i + s_x x = 0$ (1)
where $s_i \in \{+1, 0, -1\}$ is the sign for the i-th constant quantity $q_i$, and $s_x \in \{+1, -1\}$ is the sign for x. The solution can be easily obtained.
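To make Eq. 1 concrete, the following small Python example (a hypothetical helper, not part of the released code) forms and solves the equation for the two problems in Figure 1, given the predicted signs.

```python
# Worked example of Eq. 1: with signs for the constant quantities and for x,
# the equation is linear in x and can be solved directly.
def solve(quantities, signs, sign_x):
    # sum_i s_i * q_i + s_x * x = 0  =>  x = -(sum_i s_i * q_i) / s_x
    constant = sum(s * q for s, q in zip(signs, quantities))
    return -constant / sign_x

# Problem 1 (Figure 1): quantities (2, 7341, 4221) tagged (0, +1, -1), x tagged -1.
print(solve([2, 7341, 4221], [0, +1, -1], sign_x=-1))   # 3120.0
# Problem 2 (Figure 1): quantities (22, 55) tagged (+1, -1), x tagged +1.
print(solve([22, 55], [+1, -1], sign_x=+1))             # 33.0
```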
2.2 Quantity Tagger
Our primary assumption is that, for each quantity, there exists an implicit quantity span that resides in the problem text and conveys relevant information useful for determining the signs of the quantities. The quantity span of a quantity is essentially a contiguous token sequence from the problem text that consists of the quantity itself and some surrounding word tokens. Formally, our model needs to learn how to sequentially assign each quantity $q \in Q$ its optimal sign $s \in S$. This is a sequence labeling problem (Lample et al., 2016; Zou and Lu, 2019). Common sequence labeling tasks, such as NER and POS tagging, mainly consider one sentence at a time and tag each token in the sentence. However, our tagging problem typically involves multiple sentences, where relatively unimportant information may be included. For instance, the second sentence of Problem 2 in Figure 1, "Park workers will plant walnut trees today", describes background knowledge of the problem, but such information may not be useful for solving the problem, and may even be obstructive.
For each quantity $q \in Q$, we first consider a token window consisting of q and J − 1 surrounding tokens located immediately to the left and right of q. This gives us a window of word tokens of size 2J − 1. Next, such token windows for all quantities in Q are merged to form a new token sequence, denoted as t. Note that t is formed by concatenating token subsequences taken from T and has length n (1 ≤ n ≤ N, where N is the length of T). We assume the quantity spans are defined over such a token sequence t (rather than T), which we believe conveys the most relevant information for determining the signs of the quantities. Exemplified by Problem 2 in Figure 1, we show an example token sequence t with J = 3 in Figure 2; a small sketch of this windowing step is given at the end of this section.
To capture quantity span information, we design 9 different labels with different semantics: H = {L+, L0, L−; N+, N0, N−; R+, R0, R−}.
• The N nodes are used to indicate that the current token is a quantity.
• The L (R) nodes are used to indicate that the current token appears within a quantity span of a given quantity but to the left (right) of the quantity.
The subscripts "+", "0" and "−" denote the sign (+1, 0 and −1, respectively) associated with the quantities (and quantity spans). All quantities are explicitly given in the problem text. Therefore, an N node is used to tag a word token if and only if the token represents a quantity; otherwise, L and R nodes are considered. Furthermore, the unknown quantity is always relevant to the problem. We thus tag it with either N+ or N−, while all three types of N nodes are available for the constant quantities. As illustrated in Figure 2, only one node from H will be selected at each position. Sequentially connecting all such nodes forms a single path that reveals information about the quantity spans selected for all quantities.
Following CRF (Lafferty et al., 2001), we formulate our method as a log-linear model with latent variables. Formally, given the problem text T, let $t = (t_1, t_2, \ldots, t_n)$ be a token sequence as defined above, y be the corresponding label sequence, and h be a latent variable that provides specific quantity span information for the (t, y) tuple. We define:
$p(y|t) = \frac{\sum_{h} \exp(w^T f(t, y, h))}{\sum_{y', h'} \exp(w^T f(t, y', h'))}$ (2)
where w is the feature weight vector, i.e., the model parameters, and f is the feature vector defined over the triple (t, y, h); f(t, y, h) returns a list of discrete features (refer to supplementary materials).
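The sketch below illustrates the windowing step referenced above: it merges (2J − 1)-token windows centered on each quantity into the token sequence t. The helper name and the exact handling of overlapping windows are assumptions made for illustration, not the authors' exact preprocessing.

```python
# Illustrative construction of the token sequence t from (2J-1)-token windows
# centred on each quantity; overlap handling is an assumption of this sketch.
def build_token_sequence(tokens, quantity_positions, J=3):
    keep = set()
    for p in quantity_positions:
        lo, hi = max(0, p - (J - 1)), min(len(tokens), p + J)
        keep.update(range(lo, hi))          # J-1 tokens on each side of the quantity
    return [tokens[i] for i in sorted(keep)]

text = ("There are 22 walnut trees currently in the park . "
        "When the workers are finished there will be 55 walnut trees in the park .").split()
print(build_token_sequence(text, quantity_positions=[2, 18], J=3))
# ['There', 'are', '22', 'walnut', 'trees', 'will', 'be', '55', 'walnut', 'trees']
```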
During training, we would like to minimize the negative log-likelihood of the training set:

L(w) = \sum_i \log \sum_{y', h'} \exp(w^\top f(t^{(i)}, y', h')) - \sum_i \log \sum_{h} \exp(w^\top f(t^{(i)}, y^{(i)}, h))    (3)

where (t^{(i)}, y^{(i)}) is the i-th training instance. Standard gradient-based methods, such as L-BFGS (Liu and Nocedal, 1989), can be used to optimize the above objective. The gradient of the objective is given by:

\frac{\partial L(w)}{\partial w_k} = \sum_i E_{p(y', h | t^{(i)})}[f_k(t^{(i)}, y', h)] - \sum_i E_{p(h | t^{(i)}, y^{(i)})}[f_k(t^{(i)}, y^{(i)}, h)]    (4)

where E_p[·] is the expectation under distribution p. We can construct a lattice representation on top of the nodes shown in Figure 2. The representation compactly encodes exponentially many paths, where each path corresponds to one possible label sequence. Note that there exists a topological ordering among all nodes. This allows us to apply a generalized forward-backward algorithm to perform exact marginal inference, so that both the objective and the expectation values can be calculated efficiently (Li and Lu, 2017; Zou and Lu, 2018). MAP inference can be performed analogously and is invoked at decoding time.

2.3 Model Variants

We further consider two variants of our model.

Semi-Markov Variant: Our first variant, QT(S), employs the semi-Markov assumption (Sarawagi and Cohen, 2005), where the N nodes are removed. Different from QT, which makes the first-order Markov assumption, QT(S) uses the L and R nodes to indicate the left and right boundaries of a quantity span, respectively. The model thus constructs edges (on which non-Markovian features can be defined) by directly connecting the first L node and the last R node of a span.

Relaxed Variant: One assumption made by QT is that each word in t strictly belongs to a certain quantity span. The variant QT(R) relaxes this constraint: some tokens in t may not belong to any quantity span. In the example shown in Figure 2, the token "There" in t may not belong to any span.

3 Experiments

We conduct experiments on two datasets: AddSub (Hosseini et al., 2014), consisting of 395 addition-subtraction problems in English, and AS CN, consisting of 1,049 addition-subtraction problems in Chinese (Wang et al., 2017). For all of our experiments, we use the L-BFGS algorithm (Liu and Nocedal, 1989) to learn model parameters, with an ℓ2 regularization coefficient of 0.01. To tune the hyperparameter J, we randomly select 80% of the training instances for training and the remaining 20% for development, and tune J on the development set.

Table 1: Accuracy (%) on AddSub and AS CN. -EF: without external features.
Model | AddSub | AS CN
Hosseini et al. (2014) | 77.70 | -
Kushman et al. (2014) | 64.00 | -
Koncel-Kedziorski et al. (2015) | 77.00 | -
Roy and Roth (2015) | 78.00 | 47.57
Zhou et al. (2015) | 53.14 | 51.48
Mitra and Baral (2016) | 86.07 | -
Roy and Roth (2017) | 60.99 | 47.71
Wang et al. (2017) | - | 20.64
Wang et al. (2018b) | 78.50 | -
QT(FIX) | 87.73 | 53.19
QT | 90.79 | 58.72
QT(S) | 87.30 | 54.81
QT(R) | 88.69 | 59.10
QT(-EF) | 60.44 | 56.53
QT(S-EF) | 63.49 | 52.62
QT(R-EF) | 67.52 | 57.48

3.1 Analysis

Following the standard evaluation procedures used in previous works (Hosseini et al., 2014; Mitra and Baral, 2016), we conduct 3-fold cross validation on AddSub and AS CN, and report accuracies in Table 1. We make comparisons with a list of recent works (results on AS CN are obtained by running their publicly released systems) and two baselines; one of these baselines, QT(FIX), fixes the quantity span of each quantity to a fixed-size token window.
Table 2: Accuracies on two types of problems and F1 scores for the three types of signs. AS.S.: accuracy on single-step problems (%); AM.S.: accuracy on multi-step problems (%); F+ (F0, F−): F1 score for sign "+1" ("0", "−1") (%).
Model | AddSub AS.S. | AM.S. | F+ | F0 | F− | AS CN AS.S. | AM.S. | F+ | F0 | F−
QT | 89.5 | 97.3 | 96.0 | 86.4 | 96.5 | 56.9 | 60.3 | 85.5 | 62.2 | 85.0
QT(S) | 86.5 | 91.2 | 95.0 | 82.8 | 95.6 | 53.6 | 56.3 | 85.3 | 62.9 | 84.3
QT(R) | 87.5 | 92.6 | 95.4 | 82.5 | 96.0 | 57.03 | 60.9 | 86.5 | 62.9 | 85.6

Figure 3: Effects of J on the three models (QT, QT(S) and QT(R)) evaluated on AddSub and AS CN (accuracy plotted against J).

All of our proposed models consistently outperform previous research efforts. These figures confirm the capability of our approach to provide more promising solutions to addition-subtraction problems. We do not require any additional annotations, which can be expensive to obtain, whereas annotations such as variable-word alignments and formulas are necessary for the approaches of Kushman et al. (2014) and Mitra and Baral (2016).

To investigate the power of features extracted with external tools, such as ConceptNet (Liu and Singh, 2004) and the Stanford CoreNLP toolkit (Manning et al., 2014), we conduct additional experiments on the aforementioned datasets in which such features, which we call external features (see supplementary material), are removed; the resulting variants are indicated with "-EF". As expected, performance drops, because such features are necessary for capturing evidence across sentences; the effect is especially pronounced on the AddSub dataset. As discussed in previous work (Hosseini et al., 2014; Mitra and Baral, 2016), AddSub contains a lot of irrelevant information as well as information gaps. We can thus infer that the external features help our approach bridge information gaps and recognize irrelevant information when solving arithmetic problems. The relatively low accuracies on AS CN show that solving such problems in Chinese remains challenging.

Which of our variants works the best? We observe that the models with variable-sized quantity spans, namely QT, QT(S) and QT(R), generally perform better than QT(FIX), where the quantity spans are fixed token windows. This shows the effectiveness of introducing the quantity span as a latent variable. QT obtains the highest average accuracy on AddSub, and QT(R) outperforms the other two variants on AS CN.

How does our approach perform on different types of problems? We divide problems into two categories: single-step and multi-step problems. The equation of a single-step problem contains at most two constant quantities tagged with either "+1" or "−1", while the equation of a multi-step problem has more than two constant quantities with signs "+1" or "−1". We report accuracy and F1 scores in Table 2. According to the empirical results in Table 2, our approach gives more accurate answers to multi-step problems, while the accuracy on single-step problems is lower. On the other hand, the three models show similar patterns in terms of performance for the three types of signs: the F1 scores for the signs "+1" and "−1" are higher than the scores for "0". After examining the outputs, we found that the problem texts of single-step problems often contain more than two constant quantities, among which only two are supposed to be labeled "+1" or "−1" while the rest should be tagged "0"; incorrectly labeling an irrelevant quantity with "+1" or "−1" leads to wrong solutions to single-step problems.
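The per-sign scores reported in Table 2 above (and in Table 3 below) are ordinary precision, recall and F1 computed over the predicted sign tags; a minimal, purely illustrative sketch of that computation is shown here.

```python
from collections import Counter

def per_sign_prf(gold, pred, signs=(+1, 0, -1)):
    """Precision/recall/F1 per sign; gold and pred are lists of sign tags,
    one per quantity, with all problems concatenated."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    scores = {}
    for s in signs:
        prec = tp[s] / (tp[s] + fp[s]) if tp[s] + fp[s] else 0.0
        rec = tp[s] / (tp[s] + fn[s]) if tp[s] + fn[s] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[s] = (prec, rec, f1)
    return scores

# Toy check: two problems with three quantities each.
gold = [+1, -1, 0, +1, 0, -1]
pred = [+1, -1, -1, +1, 0, -1]
print(per_sign_prf(gold, pred))
```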
The analysis above also reveals that one main challenge in automatically solving arithmetic word problems is to recognize the irrelevant quantities. Failures in identifying irrelevant information may be due to information that is only implicit in the problem text, or to issues with the external tools.

Does J really matter? We further investigate the effects of J on the three proposed models. Figure 3 plots how performance varies with J (J ∈ {1, 2, 3, 4, 5, 6, N}; when J = N, all tokens in the problem text are considered as the selected token window for each quantity) on AddSub (above) and AS CN (below). On AddSub, the three models show a similar pattern: performance tends to become worse with a larger J. On the AS CN dataset, the three models achieve relatively higher accuracies with J ∈ {2, 3, 6} compared to other settings. Interestingly, QT and QT(R) seem to perform better than the semi-Markov variant QT(S). We tracked the outputs of the three models and found that QT(S) made more mistakes in its predictions for the unknown quantity. The fact that models with J = N do not perform well confirms our assumption that considering token windows rather than the whole text is reasonable and effective.

Table 3: Precision, recall and F1 for the three types of signs predicted by the three models on AddSub and AS CN. P.: precision (%); R.: recall (%); F.: F1 score (%).
Model | Sign | AddSub P. | R. | F. | AS CN P. | R. | F.
QT | + | 95.21 | 96.70 | 95.95 | 81.86 | 89.54 | 85.53
QT | 0 | 88.88 | 83.96 | 86.35 | 75.83 | 52.74 | 62.21
QT | − | 96.65 | 96.38 | 96.51 | 88.17 | 82.04 | 84.99
QT(S) | + | 93.97 | 96.01 | 94.98 | 80.74 | 90.30 | 85.26
QT(S) | 0 | 81.06 | 84.50 | 82.75 | 75.00 | 54.22 | 62.94
QT(S) | − | 96.97 | 94.18 | 95.55 | 88.66 | 80.25 | 84.25
QT(R) | + | 94.37 | 96.42 | 95.38 | 83.79 | 89.39 | 86.50
QT(R) | 0 | 80.55 | 84.50 | 82.48 | 78.02 | 52.72 | 62.92
QT(R) | − | 97.48 | 94.65 | 96.04 | 86.67 | 84.61 | 85.63

Evaluation on different types of signs: We investigate the capability of the proposed approach to predict the three types of signs ({+1, 0, −1}), as illustrated in Table 3. The three models show similar patterns on the two datasets: predictions of "+1" and "−1" are more reliable than predictions of "0". This again shows that one main challenge for automatically solving arithmetic word problems is to recognize the irrelevant information that should be labeled with "0". As discussed above, failures in detecting irrelevant information can result from errors inevitably introduced by external resources and from crucial information that is missing from the problem text.

Error Analysis: The leading sources of errors can be categorized into three types: 1) the description of the problem is incomplete or implicit, which is challenging for a machine to understand; 2) failures in recognizing relevant quantities, which cause quantities to be missed or irrelevant information to be introduced; 3) incomplete information or errors from external tools, such as ConceptNet (Liu and Singh, 2004) and the Stanford CoreNLP toolkit (Manning et al., 2014), which are inevitable and lead to wrong predictions.

4 Conclusion and Future Work

This work proposes the Quantity Tagger, which regards solving addition-subtraction problems as a sequence labeling task by introducing a quantity span for each quantity. Despite its simplicity, it yields better performance than previous approaches. In the future, we would like to investigate models that are capable of addressing general arithmetic word problems involving addition, subtraction, multiplication and division.
5251 Acknowledgments We would like to thank the three anonymous reviewers for their thoughtful and constructive comments. We would also like to thank Yan Wang for his help on this work. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T21-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01. References Daniel G Bobrow. 1964. A question-answering system for high school algebra word problems. In Proc. of AFIPS. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proc. of EMNLP. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. TACL, 3:585–597. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proc. of ACL. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proc. of NAACL. Hao Li and Wei Lu. 2017. Learning latent sentiment scopes for entity-level sentiment analysis. In Proc. of AAAI. Dong C Liu and Jorge Nocedal. 1989. On the limited memory BFGs method for large scale optimization. Mathematical programming, 45. Hugo Liu and Push Singh. 2004. ConceptNet-a practical commonsense reasoning tool-kit. BT technology journal, 22. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proc. of ACL. Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In Proc. of ACL. Anirban Mukherjee and Utpal Garain. 2008. A review of methods for automatic understanding of natural language mathematical problems. Artificial Intelligence Review, 29(2). Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proc. of EMNLP. Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In Proc. of AAAI. Sunita Sarawagi and William W Cohen. 2005. SemiMarkov conditional random fields for information extraction. In Proc. of NIPS. Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang, and Xiaojiang Liu. 2018a. Translating a math word problem to an expression tree. In Proc. of EMNLP. Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018b. MathDQN: Solving arithmetic word problems via deep reinforcement learning. In Proc. of AAAI. Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proc. of EMNLP. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proc. of EMNLP. Yanyan Zou and Wei Lu. 2018. Learning cross-lingual distributed logical representations for semantic parsing. In Proc. of ACL. Yanyan Zou and Wei Lu. 2019. Joint detection and location of english puns. In Proc. of NAACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5252–5258, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

A Deep Reinforced Sequence-to-Set Model for Multi-Label Classification

Pengcheng Yang1,2, Fuli Luo2, Shuming Ma2, Junyang Lin2, Xu Sun1,2
1Deep Learning Lab, Beijing Institute of Big Data Research, Peking University
2MOE Key Lab of Computational Linguistics, School of EECS, Peking University
{yang pc, luofuli, shumingma, linjunyang, xusun}@pku.edu.cn

Abstract

Multi-label classification (MLC) aims to predict a set of labels for a given instance. Based on a pre-defined label order, the sequence-to-sequence (Seq2Seq) model trained via the maximum likelihood estimation method has been successfully applied to the MLC task and shows a powerful ability to capture high-order correlations between labels. However, the output labels are essentially an unordered set rather than an ordered sequence. This inconsistency tends to result in some intractable problems, e.g., sensitivity to the label order. To remedy this, we propose a simple but effective sequence-to-set model. The proposed model is trained via reinforcement learning, where the reward feedback is designed to be independent of the label order. In this way, we can reduce the dependence of the model on the label order while still capturing high-order correlations between labels. Extensive experiments show that our approach can substantially outperform competitive baselines, as well as effectively reduce the sensitivity to the label order. Our code is available at https://github.com/lancopku/Seq2Set.

1 Introduction

Multi-label classification (MLC) aims to assign multiple labels to each sample. It can be applied in many real-world scenarios, such as text categorization (Schapire and Singer, 2000) and information retrieval (Gopal and Yang, 2010). Due to the complex dependencies between labels, a key challenge for the MLC task is how to effectively capture high-order correlations between labels (Zhang and Zhou, 2014).

When it comes to capturing high-order correlations between labels, one line of research focuses on exploring the hierarchical structure of the label space (Prabhu and Varma, 2014; Jernite et al., 2017; Peng et al., 2018; Singh et al., 2018), while another line strives to extend specific learning algorithms (Zhang and Zhou, 2006; Baker and Korhonen, 2017; Liu et al., 2017). However, most of these works tend to result in intractable computational costs (Chen et al., 2017).

Recently, based on a pre-defined label order, Nam et al. (2017) and Yang et al. (2018) succeeded in applying the sequence-to-sequence (Seq2Seq) model to the MLC task, showing its powerful ability to capture high-order label correlations and achieving excellent performance. However, the Seq2Seq model suffers from some thorny flaws on the MLC task. The output labels are essentially an unordered set with swapping-invariance (swapping any two elements of the set makes no difference), rather than an ordered sequence. This inconsistency usually leads to some intractable problems, e.g., sensitivity to the label order. Previous work (Vinyals et al., 2016) has shown that the order has a great impact on the performance of the Seq2Seq model, so the performance of the classifier is sensitive to the pre-defined label order. Besides, even if the model accurately predicts all true labels, it may still incur an unreasonable training loss if the predicted order is inconsistent with the pre-defined label sequence (for example, for the pre-defined label sequence [A, B, C], the training loss is large if the model generates [C, A, B]).
Therefore, in this work, we propose a simple but effective sequence-to-set model, which aims at alleviating the dependence of the model on the label order. Instead of maximizing the log-likelihood of pre-defined label sequences, we apply reinforcement learning (RL) (Sutton et al., 1999) to guide the model training. The designed reward not only comprehensively evaluates the quality of the output labels, but also satisfies the swapping-invariance of the set, which leads to a reduction in the dependence of the model on the label order.

The main contributions of this paper are summarized as follows:
• We propose a simple but effective sequence-to-set (Seq2Set) model based on reinforcement learning, which not only captures the correlations between labels, but also alleviates the dependence on the label order.
• Experimental results show that our Seq2Set model can outperform baselines by a large margin. Further analysis demonstrates that our approach can effectively reduce the sensitivity of the model to the label order.

2 Methodology

2.1 Overview

Here we define some necessary notation and describe the MLC task. Given a text sequence x containing m words, the MLC task aims to assign to x a subset y containing n labels from the total label set Y. From the perspective of sequence learning, once the order of the output labels is pre-defined, the MLC task can be regarded as the generation of a target label sequence y conditioned on the source text sequence x.

2.2 Neural Sequence-to-Set Model

Our proposed Seq2Set model consists of an encoder E and a set decoder D, which are introduced in detail as follows.

Encoder E: We implement the encoder E as a bidirectional LSTM. Given the input text x = (x1, ..., xm), the encoder computes the hidden states of each word as follows:

\overrightarrow{h}_i = \overrightarrow{\text{LSTM}}(\overrightarrow{h}_{i-1}, e(x_i))    (1)
\overleftarrow{h}_i = \overleftarrow{\text{LSTM}}(\overleftarrow{h}_{i+1}, e(x_i))    (2)

where e(x_i) is the embedding of x_i. The final representation of the i-th word is h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i], where the semicolon denotes vector concatenation.

Set decoder D: Due to the powerful ability of the LSTM to model sequence dependencies, we also implement D as an LSTM so as to capture high-order correlations between labels. In particular, the hidden state s_t of the set decoder D at time-step t is computed as:

s_t = \text{LSTM}(s_{t-1}, [e(y_{t-1}); c_t])    (3)

where [e(y_{t-1}); c_t] denotes the concatenation of the vectors e(y_{t-1}) and c_t, e(y_{t-1}) is the embedding of the label y_{t-1} generated at the last time-step, and c_t is the context vector obtained by the attention mechanism. Readers can refer to Bahdanau et al. (2015) for more details. Finally, the set decoder D samples a label y_t from the output probability distribution, which is computed as follows:

o_t = W_2 f(W_1 s_t + U c_t)    (4)
y_t \sim \text{softmax}(o_t + I_t)    (5)

where W_1, W_2, and U are trainable parameters, f is a nonlinear activation function, and I_t ∈ R^{|Y|} is the mask vector that prevents D from generating repeated labels:

(I_t)_i = \begin{cases} -\infty & \text{if the } i\text{-th label has been predicted} \\ 0 & \text{otherwise} \end{cases}
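As a concrete illustration of Eqs. (4)–(5), the following numpy sketch performs one masked prediction step. The decoder state s_t and the context vector c_t are taken as given, and all dimensions and weight matrices are toy placeholders rather than values from the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_labels, d_s, d_c = 6, 8, 8          # |Y|, decoder state size, context size (toy sizes)
W1 = rng.normal(size=(d_s, d_s)); U = rng.normal(size=(d_s, d_c))
W2 = rng.normal(size=(num_labels, d_s))

def softmax(z):
    z = z - z[np.isfinite(z)].max()
    e = np.exp(z)
    return e / e.sum()

def predict_next_label(s_t, c_t, predicted):
    """One decoding step: o_t = W2 f(W1 s_t + U c_t), mask already-predicted
    labels with -inf, then sample from the softmax (Eqs. 4-5)."""
    o_t = W2 @ np.tanh(W1 @ s_t + U @ c_t)        # f is tanh here
    mask = np.zeros(num_labels)
    mask[list(predicted)] = -np.inf               # I_t: forbid repeated labels
    probs = softmax(o_t + mask)
    return rng.choice(num_labels, p=probs), probs

s_t, c_t = rng.normal(size=d_s), rng.normal(size=d_c)
label, probs = predict_next_label(s_t, c_t, predicted={2, 4})
print(label, probs.round(3))   # labels 2 and 4 receive probability 0
```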
2.3 Model Training

MLC as an RL Problem: In order to alleviate the dependence of the model on the label order, we model the MLC task as an RL problem. Our set decoder D can be viewed as an agent whose state at time-step t is the sequence of labels generated so far, (y_1, ..., y_{t-1}). A stochastic policy defined by the parameters of D decides the action, which is the prediction of the next label. Once a complete label sequence y has been generated, the agent D observes a reward r. The training objective is to minimize the negative expected reward:

L(\theta) = -E_{y \sim p_\theta}[r(y)]    (6)

where θ refers to the model parameters. In our model, we use the self-critical policy gradient algorithm (Rennie et al., 2017). For each training sample in the minibatch, the gradient of Eq. (6) can be approximated as:

\nabla_\theta L(\theta) \approx -[r(y^s) - r(y^g)] \nabla_\theta \log p_\theta(y^s)    (7)

where y^s is the label sequence sampled from the probability distribution p_\theta and y^g is the label sequence generated with the greedy search algorithm. The term r(y^g) in Eq. (7) is the baseline, which aims to reduce the variance of the gradient estimate and to make training more consistent with testing, alleviating exposure bias (Ranzato et al., 2016).

Reward Design: The ideal reward should be a good measure of the quality of the generated labels. Besides, in order to free the model from the strict restriction of the label order, it should also satisfy the swapping-invariance of the output label set. Motivated by this, we design the reward r as the F1 score obtained by comparing the generated labels y with the ground-truth labels y* (when calculating the F1 score, we convert y and y* into |Y|-dimensional sparse vectors):

r(y) = \text{F1}(y, y^*)    (8)

We also tried other reward designs, such as hamming accuracy; the reward based on the F1 score gives the best overall performance.
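The following sketch illustrates the self-critical update of Eq. (7) with the F1 reward of Eq. (8). It treats the policy's log-probability as a given number and is only meant to show how the reward difference scales the surrogate loss; it is not the released training code. Note that the reward depends only on the label set, so it is invariant to the order in which labels are generated.

```python
def f1_reward(pred_labels, gold_labels):
    """Eq. (8): F1 between a predicted label set and the ground-truth label set."""
    pred, gold = set(pred_labels), set(gold_labels)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def self_critical_loss(logp_sampled, sampled, greedy, gold):
    """Surrogate loss whose gradient matches Eq. (7):
    -(r(y^s) - r(y^g)) * log p_theta(y^s).
    In a real model logp_sampled is a differentiable quantity from the decoder;
    here it is just a number for illustration."""
    advantage = f1_reward(sampled, gold) - f1_reward(greedy, gold)
    return -advantage * logp_sampled

gold = {"economics", "markets", "politics"}
sampled, greedy = {"economics", "markets"}, {"economics", "sports"}
print(f1_reward(sampled, gold), f1_reward(greedy, gold))          # 0.8, 0.4
print(self_critical_loss(-2.3, sampled, greedy, gold))            # -0.4 * -2.3 = 0.92
```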
3 Experiments

3.1 Datasets

We conduct experiments on the RCV1-V2 corpus (Lewis et al., 2004), which consists of a large number of manually categorized newswire stories. The total number of labels is 103. We adopt the same data split as Yang et al. (2018).

3.2 Settings

We tune hyper-parameters on the validation set based on the micro-F1 score. The vocabulary size is 50,000 and the batch size is 64. We set the embedding size to 512. Both the encoder and the set decoder are 2-layer LSTMs with hidden size 512, with the encoder being bidirectional. We pre-train the model for 20 epochs via the MLE method. The optimizer is Adam (Kingma and Ba, 2015), with a learning rate of 10^-3 for pre-training and 10^-5 for RL. Besides, we use dropout (Srivastava et al., 2014) to avoid overfitting and clip the gradients (Pascanu et al., 2013) to a maximum norm of 8.

3.3 Baselines

We compare our approach with the following competitive baselines:
• BR-LR (Boutell et al., 2004) amounts to independently training one binary classifier (logistic regression) for each label.
• PCC-LR (Read et al., 2011) transforms the MLC task into a chain of binary classification (logistic regression) problems.
• FastXML (Prabhu and Varma, 2014) learns a hierarchy of training instances and optimizes the objective at each node of the hierarchy.
• XML-CNN (Liu et al., 2017) uses a dynamic max pooling scheme and a hidden bottleneck layer for better representations of documents.
• CNN-RNN (Chen et al., 2017) presents an ensemble approach of CNN and RNN to capture both global and local textual semantics.
• Seq2Seq (Nam et al., 2017; Yang et al., 2018) adapts the Seq2Seq model to perform multi-label classification.

3.4 Evaluation Metrics

The evaluation metrics include: subset zero-one loss, which calculates the fraction of misclassifications; hamming loss, which denotes the fraction of wrongly predicted labels among all labels; and micro-F1, the weighted average of the per-class F1 scores. Micro-precision and micro-recall are also reported for reference.

Table 1: Performance of different systems. "HL", "0/1 Loss", "F1", "Precision", and "Recall" denote hamming loss, subset zero-one loss, micro-F1, micro-precision, and micro-recall, respectively. "+" indicates higher is better and "-" the opposite.
Models | HL (-) | 0/1 Loss (-) | F1 (+) | Precision (+) | Recall (+)
BR-LR (Boutell et al., 2004) | 0.0083 | 0.393 | 0.858 | 0.919 | 0.804
PCC-LR (Read et al., 2011) | 0.0079 | 0.325 | 0.864 | 0.901 | 0.827
FastXML (Prabhu and Varma, 2014) | 0.0078 | 0.358 | 0.863 | 0.956 | 0.786
XML-CNN (Liu et al., 2017) | 0.0086 | 0.390 | 0.853 | 0.914 | 0.799
CNN-RNN (Chen et al., 2017) | 0.0085 | 0.378 | 0.856 | 0.889 | 0.825
Seq2Seq (Yang et al., 2018) | 0.0076 | 0.332 | 0.871 | 0.906 | 0.838
Seq2Set (Ours) | 0.0073 | 0.314 | 0.879 | 0.900 | 0.858

4 Results and Discussion

Here we conduct an in-depth analysis of the model and the experimental results. For simplicity, we use BR to denote the baseline BR-LR.

4.1 Experimental Results

The comparison between our approach and all baselines is presented in Table 1, showing that the proposed Seq2Set model outperforms all baselines by a large margin on all evaluation metrics. Compared to BR, which completely ignores label correlations, our Seq2Set model achieves a 12.05% reduction in hamming loss, which shows that modeling high-order label correlations can largely improve results. Compared to Seq2Seq, which imposes strict requirements on the label order, our Seq2Set model achieves a 3.95% reduction in hamming loss on the RCV1-V2 dataset. This indicates that our approach achieves substantial improvements by reducing the dependence of the model on the label order.

Table 2: Comparison on the label-shuffled RCV1-V2 dataset. "↓" indicates the relative degradation compared with the original label order.
Models | HL (-) | 0/1 Loss (-) | F1 (+)
BR | 0.0083 (↓0.0%) | 0.393 (↓0.0%) | 0.858 (↓0.0%)
Seq2Seq | 0.0083 (↓9.2%) | 0.363 (↓9.3%) | 0.859 (↓1.4%)
Seq2Set | 0.0075 (↓2.7%) | 0.318 (↓1.2%) | 0.876 (↓0.3%)

4.2 Reducing Sensitivity to Label Order

To verify that our approach can reduce the sensitivity to the label order, we randomly shuffle the order of the label sequences. Table 2 presents the performance of the various models on the label-shuffled RCV1-V2 dataset. Results show that with the shuffled label order, BR is not affected, but the performance of Seq2Seq declines drastically. The reason is that the decoder of Seq2Seq is essentially a conditional language model: it relies heavily on a reasonable label order to model the intrinsic associations between labels, while the labels in this case are in an unordered state. In contrast, our model's performance on subset zero-one loss drops by only 1.2% (this weak decline can be attributed to the influence of the label order on the pre-training), while Seq2Seq drops by 9.3%. This shows that our Seq2Set model is more robust and can resist disturbances in the label order. Our model is trained via reinforcement learning and the reward feedback is independent of the label order, which reduces the sensitivity to the label order.

Figure 1: Left: Performance of different models. Right: The gap in performance between different models.

4.3 Improving Model Universality

The labels in the RCV1-V2 dataset exhibit a long-tail distribution. However, in real-world scenarios there are other common label distributions, e.g., the uniform distribution (Lin et al., 2018a). Therefore, here we analyze the universality of the Seq2Set
model, which means that it can achieve stable improvements in performance under different label distributions. Specifically, we remove the k most frequent labels in turn from the RCV1-V2 dataset and perform the evaluation on the remaining labels; the larger k is, the more uniform the label distribution becomes. Figure 1 shows the changes in the performance of the different systems. First, as the number of removed high-frequency labels increases, the performance of all methods deteriorates. This is reasonable because predicting low-frequency labels is relatively difficult. However, compared to the other methods, the performance of the Seq2Seq model degrades much more. We suspect this is because it is difficult to define a reasonable order for uniformly distributed labels, while Seq2Seq imposes strict requirements on the label order; this conflict may damage performance. Moreover, as shown in Figure 1, as more labels are removed, the advantage of Seq2Set over Seq2Seq continues to grow. This illustrates that our Seq2Set model has excellent universality and works for different label distributions. Our approach not only retains the ability of Seq2Seq to capture label correlations, but also alleviates, via reinforcement learning, Seq2Seq's strict requirements on the label order. This avoids the difficulty of pre-defining a reasonable label order under the uniform distribution, leading to excellent universality.

4.4 Error Analysis

We find that all methods perform poorly when predicting low-frequency (LF) labels compared to high-frequency (HF) labels. This is reasonable because samples assigned LF labels are sparse, making it hard for the model to learn an effective pattern for prediction. Figure 2 shows the results of different methods on HF labels and LF labels.

Figure 2: Performance of different systems on the HF labels and LF labels. "Impv-BR" and "Impv-Seq2Seq" denote the improvement of our model compared to BR-LR and Seq2Seq, respectively.

Compared to the other systems, our proposed Seq2Set model achieves better performance on both LF labels and HF labels. Besides, the relative improvements achieved by our approach are greater on LF labels than on HF labels. In fact, the distribution of LF labels is relatively more uniform. As analyzed in Section 4.3, the label order problem is more serious under a uniform distribution. Our Seq2Set model can reduce the dependence on the label order via reinforcement learning, leading to larger improvements in performance on the LF labels.

5 Related Work

Multi-label classification (MLC) aims to assign multiple labels to each sample in the dataset. Early work on the MLC task focused on machine learning algorithms, mainly including problem transformation methods and algorithm adaptation methods. Problem transformation methods, such as BR (Boutell et al., 2004), LP (Tsoumakas and Katakis, 2006) and CC (Read et al., 2011), aim at mapping the MLC task into multiple single-label learning tasks. Algorithm adaptation methods strive to extend specific learning algorithms to handle multi-label data directly; representative work includes ML-DT (Clare and King, 2001), Rank-SVM (Elisseeff and Weston, 2001), and ML-KNN (Zhang and Zhou, 2007). In addition, some other methods, including ensemble methods (Tsoumakas et al., 2011) and joint training (Li et al., 2015), can also be used for the MLC task.
However, they can only be used to capture the first or second order label correlations (Chen et al., 2017), or are computationally intractable when high-order label correlations are considered. 6By frequency, the top 10% of labels are regarded as HF labels, and the last 10% of labels are regarded as LF labels. Recent years, some neural network models have also been successfully used for the MLC task. For instance, the BP-MLL proposed by Zhang and Zhou (2006) applies a fully-connected network and the pairwise ranking loss to perform classification. Nam et al. (2013) further replace the pairwise ranking loss with cross-entropy loss function. Kurata et al. (2016) present an initialization method to model label correlations by leveraging neurons. Chen et al. (2017) present an ensemble approach of CNN and RNN so as to capture both global and local semantic information. Liu et al. (2017) use a dynamic max pooling scheme and a hidden bottleneck layer for better representations of documents. Graph convolution operations are employed by Peng et al. (2018) to capture nonconsecutive and long-distance semantics. The two milestones are Nam et al. (2017) and Yang et al. (2018), both of which utilize the Seq2Seq model to capture the label correlations. Going a step further, Lin et al. (2018b) propose a semantic-unitbased dilated convolution model and Zhao et al. (2018) present a label-graph based neural network equipped with a soft training mechanism to capture label correlations. Most recently, Qin et al. (2019) present new training objectives propose based on set probability to effectively model the mathematical characteristics of the set. 6 Conclusion In this work, we present a simple but effective sequence-to-set model based on reinforcement learning, which aims to reduce the stringent requirements of the sequence-to-sequence model for label order. The proposed model not only captures high-order correlations between labels, but also reduces the dependence on the order of output labels. Experimental results show that our Seq2Set model can outperform competitive baselines by a large margin. Further analysis demonstrates that our approach can effectively reduce the sensitivity to the label order. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. We also would like to thank Lei Li, Yi Zhang, and Xuancheng Ren for their insightful suggestions. Xu Sun is the contact author of this paper. 5257 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, Conference Track Proceedings. Simon Baker and Anna Korhonen. 2017. Initializing neural networks for hierarchical multi-label text classification. pages 307–315. Matthew R. Boutell, Jiebo Luo, Xipeng Shen, and Christopher M. Brown. 2004. Learning multilabel scene classification. Pattern Recognition, 37(9):1757–1771. Guibin Chen, Deheng Ye, Zhenchang Xing, Jieshan Chen, and Erik Cambria. 2017. Ensemble application of convolutional and recurrent neural networks for multi-label text categorization. In 2017 International Joint Conference on Neural Networks, pages 2377–2383. Amanda Clare and Ross D King. 2001. Knowledge discovery in multi-label phenotype data. In European Conference on Principles of Data Mining and Knowledge Discovery, pages 42–53. Springer. Andr´e Elisseeff and Jason Weston. 2001. A kernel method for multi-labelled classification. 
In Advances in Neural Information Processing Systems 14 [Neural Information Processing Systems: Natural and Synthetic], pages 681–687. Siddharth Gopal and Yiming Yang. 2010. Multilabel classification with meta-level features. In Proceeding of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 315–322. Yacine Jernite, Anna Choromanska, and David Sontag. 2017. Simultaneous learning of trees and representations for extreme classification and density estimation. In Proceedings of the 34th International Conference on Machine Learning, pages 1665–1674. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, Conference Track Proceedings. Gakuto Kurata, Bing Xiang, and Bowen Zhou. 2016. Improved neural network-based multi-label classification with better initialization leveraging label cooccurrence. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 521–526. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397. Li Li, Houfeng Wang, Xu Sun, Baobao Chang, Shi Zhao, and Lei Sha. 2015. Multi-label text categorization with joint learning predictions-as-features method. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 835–839. Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, and Xu Sun. 2018a. Semantic-unit-based dilated convolution for multi-label text classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,, pages 4554–4564. Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, and Xu Sun. 2018b. Semantic-unit-based dilated convolution for multi-label text classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4554–4564. Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval,, pages 115–124. Jinseok Nam, Jungi Kim, Iryna Gurevych, and Johannes F¨urnkranz. 2013. Large-scale multi-label text classification - revisiting neural networks. arXiv preprint arXiv:1312.5419. Jinseok Nam, Eneldo Loza Menc´ıa, Hyunwoo J Kim, and Johannes F¨urnkranz. 2017. Maximizing subset accuracy with recurrent neural networks in multilabel classification. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pages 5419–5429. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning,, pages 1310–1318. Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Proceedings of the 2018 World Wide Web Conference on World Wide Web,, pages 1063–1072. Yashoteja Prabhu and Manik Varma. 2014. Fastxml: A fast, accurate and stable tree-classifier for extreme multi-label learning. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,, pages 263–272. 
Kechen Qin, Cheng Li, Virgil Pavlu, and Javed A Aslam. 2019. Adapting rnn sequence prediction model to multi-label set prediction. arXiv preprint arXiv:1904.05829. 5258 Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, Conference Track Proceedings. Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. 2011. Classifier chains for multi-label classification. Machine learning, 85(3):333. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition,, pages 1179–1195. Robert E Schapire and Yoram Singer. 2000. Boostexter: A boosting-based system for text categorization. Machine learning, 39(2-3):135–168. Gaurav Singh, James Thomas, Iain James Marshall, John Shawe-Taylor, and Byron C. Wallace. 2018. Structured multi-label biomedical text tagging via attentive neural tree decoding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2837–2842. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, [NIPS Conference], pages 1057–1063. Grigorios Tsoumakas and Ioannis Katakis. 2006. Multi-label classification: An overview. International Journal of Data Warehousing and Mining, 3(3). Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2011. Random k-labelsets for multilabel classification. IEEE Transactions on Knowledge and Data Engineering, 23(7):1079–1089. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. Order matters: Sequence to sequence for sets. In 4th International Conference on Learning Representations, Conference Track Proceedings. Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: Sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3915–3926. Min-Ling Zhang and Zhi-Hua Zhou. 2006. Multilabel neural networks with applications to functional genomics and text categorization. IEEE Transactions on Knowledge and Data Engineering, 18(10):1338– 1351. Min-Ling Zhang and Zhi-Hua Zhou. 2007. ML-KNN: A lazy learning approach to multi-label learning. Pattern recognition, 40(7):2038–2048. Min-Ling Zhang and Zhi-Hua Zhou. 2014. A review on multi-label learning algorithms. IEEE transactions on knowledge and data engineering, 26(8):1819–1837. Guangxiang Zhao, Jingjing Xu, Qi Zeng, and Xuancheng Ren. 2018. Review-driven multi-label music style classification by exploiting style correlations. arXiv preprint arXiv:1808.07604.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5259–5267, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Joint Slot Filling and Intent Detection via Capsule Neural Networks

Chenwei Zhang†, Yaliang Li§, Nan Du‡, Wei Fan‡, Philip S. Yu†¶
†University of Illinois at Chicago, Chicago, IL 60607 USA
§Alibaba Group, Bellevue, WA 98004 USA
‡Tencent Medical AI Lab, Palo Alto, CA 94301 USA
¶Institute for Data Science, Tsinghua University, Beijing, China
{czhang99,psyu}@uic.edu, [email protected], [email protected], [email protected]

Abstract

Being able to recognize words as slots and to detect the intent of an utterance has been a key issue in natural language understanding. Existing works either treat slot filling and intent detection separately in a pipeline manner, or adopt joint models that sequentially label slots while summarizing the utterance-level intent without explicitly preserving the hierarchical relationship among words, slots, and intents. To exploit this semantic hierarchy for effective modeling, we propose a capsule-based neural network model that accomplishes slot filling and intent detection via a dynamic routing-by-agreement schema. A re-routing schema is proposed to further synergize the slot filling performance using the inferred intent representation. Experiments on two real-world datasets show the effectiveness of our model when compared with other alternative model architectures, as well as with existing natural language understanding services.

1 Introduction

With the ever-increasing accuracy in speech recognition and complexity in user-generated utterances, it becomes a critical issue for mobile phones or smart speaker devices to understand natural language in order to give informative responses. Slot filling and intent detection play important roles in Natural Language Understanding (NLU) systems. For example, given an utterance from the user, slot filling annotates the utterance on a word level, indicating the slot type mentioned by a certain word, such as the slot artist mentioned by the word Sungmin, while intent detection works on the utterance level to give categorical intent label(s) to the whole utterance. Figure 1 illustrates this idea.

Figure 1: An example of an utterance with BIO-format annotation for slot filling, indicating the slots artist, playlist owner, and playlist name in an utterance with the intent AddToPlaylist. (Word: "Put Sungmin into my summer playlist"; Slot: O B-artist O B-playlist_owner B-playlist O; Intent: AddToPlaylist.)

To deal with diversely expressed utterances without additional feature engineering, deep neural network based user intent detection models (Hu et al., 2009; Xu and Sarikaya, 2013; Zhang et al., 2016; Liu and Lane, 2016; Zhang et al., 2017; Chen et al., 2016; Xia et al., 2018) have been proposed to classify user intents given their utterances in natural language.

Currently, slot filling is usually treated as a sequential labeling task. A neural network such as a recurrent neural network (RNN) or a convolutional neural network (CNN) is used to learn context-aware word representations, together with sequence tagging methods such as conditional random fields (CRF) (Lafferty et al., 2001) that infer the slot type for each word in the utterance.

Word-level slot filling and utterance-level intent detection can be conducted simultaneously to achieve a synergistic effect.
The recognized slots, which carry word-level signals, may give clues to the utterance-level intent of an utterance. For example, with the word Sungmin recognized as the slot artist, the utterance is more likely to have the intent AddToPlaylist than intents such as GetWeather or BookRestaurant. Some existing works learn to fill slots while detecting the intent of the utterance (Xu and Sarikaya, 2013; Hakkani-Tür et al., 2016; Liu and Lane, 2016; Goo et al., 2018): a convolution layer or a recurrent layer is adopted to sequentially label words with their slot types, and the last hidden state of the recurrent neural network, or an attention-weighted sum of all convolution outputs, is used to train an utterance-level classification module for intent detection. Such approaches achieve decent performance but do not explicitly consider the hierarchical relationship between words, slots, and intents: intents are sequentially summarized from the word sequence. As the sequence becomes longer, it is risky to simply rely on the gate function of the RNN to compress all context information into a single vector (Cheng et al., 2016).

Figure 2: Illustration of the proposed CAPSULE-NLU model for joint slot filling and intent detection. The model does slot filling by learning to assign each word in the WordCaps to the most appropriate slot in SlotCaps via dynamic routing. The weights learned via dynamic routing indicate how strongly each word in WordCaps belongs to a certain slot type in SlotCaps. The dynamic routing also learns slot representations using WordCaps and the learned weights. The learned slot representations in SlotCaps are further aggregated to predict the utterance-level intent of the utterance. Once the intent label of the utterance is determined, a novel re-routing process helps improve word-level slot filling with the inferred utterance-level intent label. Solid lines indicate the dynamic-routing process and dashed lines indicate the re-routing process.

In this work, we make the very first attempt to bridge the gap between word-level slot modeling and utterance-level intent modeling via a hierarchical capsule neural network structure (Hinton et al., 2011; Sabour et al., 2017). A capsule houses a vector representation of a group of neurons. The capsule model learns a hierarchy of feature detectors via a routing-by-agreement mechanism: capsules for detecting low-level features send their outputs to high-level capsules only when there is a strong agreement of their predictions with the high-level capsules.

The aforementioned properties of capsule models are appealing for natural language understanding from a hierarchical perspective: words such as Sungmin are routed to concept-level slots such as artist by learning how well each word matches the slot representation. Concept-level slot features such as artist, playlist owner, and playlist collectively contribute to an utterance-level intent AddToPlaylist. The dynamic routing-by-agreement assigns a larger weight from a lower-level capsule to a higher-level one when the low-level feature is more predictive of that high-level feature than of other high-level features. Figure 2 illustrates this idea.

The inferred utterance-level intent is also helpful in refining the slot filling result.
For example, once an AddToPlaylist intent representation is learned in IntentCaps, the slot filling may capitalize on the inferred intent representation and recognize slots that were previously neglected. To achieve this, we propose a re-routing schema for capsule neural networks, which allows high-level features to be actively engaged in the dynamic routing between WordCaps and SlotCaps, improving the slot filling performance.

To summarize, the contributions of this work are as follows:
• Encapsulating the hierarchical relationship among word, slot, and intent in an utterance via a hierarchical capsule neural network structure.
• Proposing a dynamic routing schema with re-routing that achieves synergistic effects for joint slot filling and intent detection.
• Showing the effectiveness of our model on two real-world datasets and comparing it with existing models as well as commercial NLU services.

2 Approach

We propose to model the hierarchical relationship among each word, the slot it belongs to, and the intent label of the whole utterance by a hierarchical capsule neural network structure called CAPSULE-NLU. The proposed architecture consists of three types of capsules: 1) WordCaps, which learn context-aware word representations; 2) SlotCaps, which categorize words by their slot types via dynamic routing and construct a representation for each type of slot by aggregating the words that belong to the slot; and 3) IntentCaps, which determine the intent label of the utterance based on the slot representations as well as the utterance context. Once the intent label has been determined by IntentCaps, the inferred utterance-level intent helps re-recognize slots from the utterance via a re-routing schema.

2.1 WordCaps

Given an input utterance x = (w_1, w_2, ..., w_T) of T words, each word is initially represented by a vector of dimension D_W. Here we simply train word representations from scratch. Various neural network structures can be used to learn context-aware word representations. For example, a recurrent neural network such as a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) can be applied to learn a representation of each word in the utterance:

\overrightarrow{h}_t = \text{LSTM}_{fw}(w_t, \overrightarrow{h}_{t-1}), \quad \overleftarrow{h}_t = \text{LSTM}_{bw}(w_t, \overleftarrow{h}_{t+1})    (1)

For each word w_t, we concatenate the forward hidden state \overrightarrow{h}_t obtained from the forward LSTM_{fw} with the backward hidden state \overleftarrow{h}_t from LSTM_{bw} to obtain the hidden state h_t. The whole hidden state matrix is H = (h_1, h_2, ..., h_T) ∈ R^{T×2D_H}, where D_H is the number of hidden units in each LSTM. In this work, the parameters of the WordCaps are trained with the whole model, while sophisticated pre-trained models such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018) could also be integrated.

2.2 SlotCaps

Traditionally, the learned hidden state h_t of each word w_t is used as the logit to predict its slot tag. Once H has been learned for all words in the utterance, sequential tagging methods such as the linear-chain CRF model the tag dependencies by assigning a transition score to each transition pattern between adjacent tags, so as to find the best tag sequence for the utterance among all possible tag sequences.

Instead of doing slot filling via sequential labeling, which does not directly consider the dependencies among words, the SlotCaps learn to recognize slots via dynamic routing. The routing-by-agreement explicitly models the hierarchical relationship between capsules. For example, the routing-by-agreement mechanism sends a low-level feature, e.g., a word representation in WordCaps, to high-level capsules, e.g., SlotCaps, only when the word representation has a strong agreement with a slot representation.
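Before moving to the SlotCaps computation, here is a minimal PyTorch sketch of the WordCaps encoder of Section 2.1; the class name is ours, and the sizes simply mirror the hyperparameters reported later (D_W = 1024, D_H = 512), not the released implementation.

```python
import torch
import torch.nn as nn

class WordCaps(nn.Module):
    """Context-aware word representations h_t = [forward h_t ; backward h_t]."""
    def __init__(self, vocab_size, d_w=1024, d_h=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_w)   # embeddings trained from scratch
        self.bilstm = nn.LSTM(d_w, d_h, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                    # token_ids: (batch, T)
        h, _ = self.bilstm(self.embed(token_ids))
        return h                                     # (batch, T, 2 * d_h)

encoder = WordCaps(vocab_size=10000)
hidden = encoder(torch.randint(0, 10000, (2, 7)))    # 2 utterances of 7 tokens
print(hidden.shape)                                  # torch.Size([2, 7, 1024])
```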
The agreement value on a word may vary when it is recognized as different slots. For example, the word three may be recognized as a party size number slot or a time slot.

The SlotCaps first convert the word representation obtained in WordCaps with respect to each slot type. We denote by p_{k|t} the resulting prediction vector of the t-th word when being recognized as the k-th slot:

p_{k|t} = \sigma(W_k h_t^\top + b_k)    (2)

where k ∈ {1, 2, ..., K} denotes the slot type and t ∈ {1, 2, ..., T}. σ is an activation function such as tanh. W_k ∈ R^{D_P × 2D_H} and b_k ∈ R^{D_P × 1} are the weight and bias matrices for the k-th capsule in SlotCaps, and D_P is the dimension of the prediction vector.

Slot Filling by Dynamic Routing-by-agreement: We propose to determine the slot type of each word by dynamically routing the prediction vectors of each word from WordCaps to SlotCaps. The dynamic routing-by-agreement learns an agreement value c_{kt} that determines how likely the t-th word agrees to be routed to the k-th slot capsule. c_{kt} is calculated by the dynamic routing-by-agreement algorithm (Sabour et al., 2017), which is briefly recalled in Algorithm 1.

Algorithm 1 Dynamic routing-by-agreement
1: procedure DYNAMIC ROUTING(p_{k|t}, iter)
2:   for each WordCaps t and SlotCaps k: b_{kt} ← 0
3:   for iter iterations do
4:     for all WordCaps t: c_t ← softmax(b_t)
5:     for all SlotCaps k: s_k ← Σ_t c_{kt} p_{k|t}
6:     for all SlotCaps k: v_k ← squash(s_k)
7:     for all WordCaps t and SlotCaps k: b_{kt} ← b_{kt} + p_{k|t} · v_k
8:   end for
9:   return v_k
10: end procedure

The above algorithm determines the agreement value c_{kt} between WordCaps and SlotCaps while learning the slot representations v_k in an unsupervised, iterative fashion. c_t is the vector consisting of all c_{kt} for k ∈ {1, ..., K}. b_{kt} is the logit (initialized as zero) representing the log prior probability that the t-th word in WordCaps agrees to be routed to the k-th slot capsule in SlotCaps (Line 2). During each iteration (Line 3), each slot representation v_k is calculated by aggregating all the prediction vectors for that slot type, {p_{k|t} | t ∈ T}, weighted by the agreement values c_{kt} obtained from b_{kt} (Lines 5-6):

s_k = \sum_{t}^{T} c_{kt} p_{k|t}    (3)
v_k = \text{squash}(s_k) = \frac{\|s_k\|^2}{1 + \|s_k\|^2} \frac{s_k}{\|s_k\|}    (4)

where the squashing function squash(·) is applied to the weighted sum s_k to obtain v_k for each slot type. Once the slot representation v_k has been updated in the current iteration, the logit b_{kt} becomes larger when the dot product p_{k|t} · v_k is large. That is, when a prediction vector p_{k|t} is more similar to a slot representation v_k, the dot product is larger, indicating that it is more likely to route this word to the k-th slot type (Line 7). An updated, larger b_{kt} leads to a larger agreement value c_{kt} between the t-th word and the k-th slot in the next iteration. Conversely, a low c_{kt} is assigned when there is inconsistency between p_{k|t} and v_k. The agreement values learned via this unsupervised, iterative algorithm ensure that the outputs of the WordCaps are sent to the appropriate subsequent SlotCaps after iter_slot iterations.

Cross Entropy Loss for Slot Filling: For the t-th word in an utterance, its slot type is determined as:

\hat{y}_t = \arg\max_{k \in K} c_{kt}    (5)

The slot filling loss is defined over the utterance as the following cross-entropy function:

L_{slot} = -\sum_t \sum_k y_t^k \log(\hat{y}_t^k)    (6)

where y_t^k indicates the ground-truth slot type for the t-th word: y_t^k = 1 when the t-th word belongs to the k-th slot type.
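A compact numpy version of Algorithm 1 and Eqs. (3)-(4) is sketched below for a single utterance; the prediction vectors p_{k|t} are random toy tensors, so this is purely illustrative.

```python
import numpy as np

def squash(s):
    norm_sq = np.sum(s * s, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + 1e-9)

def dynamic_routing(p, iterations=2):
    """p: prediction vectors of shape (K, T, D) -- slot k, word t.
    Returns slot representations v (K, D) and agreement values c (K, T)."""
    K, T, _ = p.shape
    b = np.zeros((K, T))                                   # routing logits, init 0
    for _ in range(iterations):
        exp_b = np.exp(b - b.max(axis=0, keepdims=True))
        c = exp_b / exp_b.sum(axis=0, keepdims=True)       # softmax over slots per word
        s = np.einsum("kt,ktd->kd", c, p)                  # Eq. (3): weighted sum over words
        v = squash(s)                                      # Eq. (4)
        b = b + np.einsum("ktd,kd->kt", p, v)              # agreement update (Line 7)
    return v, c

rng = np.random.default_rng(1)
p = rng.normal(size=(5, 6, 8))        # 5 slot types, 6 words, 8-dim prediction vectors
v, c = dynamic_routing(p, iterations=2)
print(v.shape, c.shape, c.sum(axis=0))   # each word's agreement values sum to 1
```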
2.3 IntentCaps

The IntentCaps take as input the output v_k of each slot k ∈ {1, 2, ..., K} in SlotCaps and determine the utterance-level intent of the whole utterance. The IntentCaps also convert each slot representation in SlotCaps with respect to each intent type:

q_{l|k} = \sigma(W_l v_k^\top + b_l)    (7)

where l ∈ {1, 2, ..., L} and L is the number of intents. W_l ∈ R^{D_L × D_P} and b_l ∈ R^{D_L × 1} are the weight and bias matrices for the l-th capsule in IntentCaps.

IntentCaps adopt the same dynamic routing-by-agreement algorithm:

u_l = \text{DYNAMIC ROUTING}(q_{l|k}, iter_{intent})    (8)

Max-margin Loss for Intent Detection: Following capsule theory, the orientation of the activation vector u_l represents intent properties, while its length indicates the activation probability. The loss function is a max-margin loss on each labeled utterance:

L_{intent} = \sum_{l=1}^{L} \{ [[z = z_l]] \cdot \max(0, m^+ - \|u_l\|)^2 + \lambda [[z \neq z_l]] \cdot \max(0, \|u_l\| - m^-)^2 \}    (9)

where \|u_l\| is the norm of u_l, [[·]] is an indicator function, and z is the ground-truth intent label of the utterance x. λ is a weighting coefficient, and m^+ and m^- are margins. The intent of the utterance can then be determined by choosing the activation vector with the largest norm: \hat{z} = \arg\max_{l \in \{1,2,...,L\}} \|u_l\|.

2.4 Re-Routing

The IntentCaps not only determine the intent of the utterance by the length of the activation vector, but also learn discriminative intent representations of the utterance through the orientations of the activation vectors. The dynamic routing-by-agreement described above shows how low-level features such as slots help construct high-level concepts such as intents; the high-level features can in turn serve as a guide that helps learn low-level features. For example, the AddToPlaylist intent activation vector in IntentCaps also helps strengthen existing slots, such as the artist name assigned to the word Sungmin, during slot filling in SlotCaps.

Thus we propose a re-routing schema for SlotCaps, in which the dynamic routing-by-agreement is realized by the following update, which replaces Line 7 in Algorithm 1:

b_{kt} \leftarrow b_{kt} + p_{k|t} \cdot v_k + \alpha \cdot p_{k|t}^\top W_{RR} \hat{u}_{\hat{z}}^\top    (10)

where \hat{u}_{\hat{z}} is the intent activation vector with the largest norm, W_{RR} ∈ R^{D_P × D_L} is a bilinear weight matrix, and α is a coefficient. The routing information for each word is updated toward a direction in which the prediction vector not only coincides with representative slots, but also agrees with the most likely intent of the utterance. As a result, the re-routing gives the SlotCaps updated routing information as well as updated slot representations.
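The following numpy sketch illustrates the max-margin intent loss of Eq. (9) and the extra re-routing term of Eq. (10). The values of m+, m−, λ and α follow the implementation details reported later (0.8, 0.2, 0.5 and 0.1); all tensors are toy placeholders and the function names are ours.

```python
import numpy as np

def intent_loss(u, gold, m_plus=0.8, m_minus=0.2, lam=0.5):
    """Eq. (9): max-margin loss over intent activation vectors u of shape (L, D_L)."""
    norms = np.linalg.norm(u, axis=-1)
    positive = np.maximum(0.0, m_plus - norms) ** 2      # encourages a long gold vector
    negative = np.maximum(0.0, norms - m_minus) ** 2     # penalizes long non-gold vectors
    is_gold = np.zeros(len(norms)); is_gold[gold] = 1.0
    return np.sum(is_gold * positive + lam * (1.0 - is_gold) * negative)

def rerouting_logit(b_kt, p_kt, v_k, u_hat, W_RR, alpha=0.1):
    """Eq. (10): logit update with the extra term alpha * p_{k|t}^T W_RR u_z."""
    return b_kt + p_kt @ v_k + alpha * p_kt @ (W_RR @ u_hat)

rng = np.random.default_rng(2)
u = rng.normal(scale=0.3, size=(7, 16))                  # 7 intents, 16-dim activations
print(intent_loss(u, gold=3))
print(np.linalg.norm(u, axis=-1).argmax())               # predicted intent: largest norm

D_P, D_L = 8, 16
p_kt, v_k, u_hat = rng.normal(size=D_P), rng.normal(size=D_P), rng.normal(size=D_L)
W_RR = rng.normal(size=(D_P, D_L))
print(rerouting_logit(0.0, p_kt, v_k, u_hat, W_RR))
```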
ATIS is a widely used dataset in spoken language understanding, where audio recordings of people making flight reservations are collected. Baselines We compare the proposed capsulebased model CAPSULE-NLU with other alternatives: 1) CNN TriCRF (Xu and Sarikaya, 2013) 1https://github.com/snipsco/ nlu-benchmark/ introduces a Convolution Neural Network (CNN) based sequential labeling model for slot filling. The hidden states for each word are summed up to predict the utterance intent. We adopt the performance with lexical features. 2) Joint Seq. (Hakkani-T¨ur et al., 2016) adopts a Recurrent Neural Network (RNN) for slot filling and the last hidden state of the RNN is used to predict the utterance intent. 3) Attention BiRNN (Liu and Lane, 2016) further introduces a RNN based encoderdecoder model for joint slot filling and intent detection. An attention weighted sum of all encoded hidden states is used to predict the utterance intent. 4) Slot-gated Full Atten. (Goo et al., 2018) utilizes a slot-gated mechanism as a special gate function in Long Short-term Memory Network (LSTM) to improve slot filling by the learned intent context vector. The intent context vector is used for intent detection. 5) DR-AGG (Gong et al., 2018) aggregates word-level information for text classification via dynamic routing. The high-level capsules after routing are concatenated, followed by a multilayer perceptron layer that predicts the utterance label. We used this capsule-based text classification model for intent detection only. 6) IntentCapsNet (Xia et al., 2018) adopts a multi-head selfattention to extract intermediate semantic features from the utterances, and uses dynamic routing to aggregate semantic features into intent representations for intent detection. We use this capsulebased model for intent detection only. We also compare our proposed model CAPSULE-NLU with existing commercial natural language understanding services, including api.ai (Now called DialogFlow)2, Waston Assistant3, Luis4, wit.ai5, snips.ai6, recast.ai7, and Amazon Lex8. Implementation Details The hyperparameters used for experiments are shown in Table 2. Dataset DW DH DP DL iterslot iterintent SNIPS-NLU 1024 512 512 128 2 2 ATIS 1024 512 512 256 3 3 Table 2: Hyperparameter settings. 2https://dialogflow.com/ 3https://www.ibm.com/cloud/ watson-assistant/ 4https://www.luis.ai/ 5https://wit.ai/ 6https://snips.ai/ 7https://recast.ai/ 8https://aws.amazon.com/lex/ 5264 Model SNIPS-NLU ATIS Slot (F1) Intent (Acc) Overall (Acc) Slot (F1) Intent (Acc) Overall (Acc) CNN TriCRF (Xu and Sarikaya, 2013) 0.944 Joint Seq. (Hakkani-T¨ur et al., 2016) 0.873 0.969 0.732 0.942 0.926 0.807 Attention BiRNN (Liu and Lane, 2016) 0.878 0.967 0.741 0.942 0.911 0.789 Slot-Gated Full Atten. (Goo et al., 2018) 0.888 0.970 0.755 0.948 0.936 0.822 DR-AGG (Gong et al., 2018) 0.966 0.914 IntentCapsNet (Xia et al., 2018) 0.974 0.948 CAPSULE-NLU 0.918 0.973 0.809 0.952 0.950 0.834 CAPSULE-NLU w/o Intent Detection 0.902 0.948 CAPSULE-NLU w/o Joint Training 0.902 0.977 0.804 0.948 0.847 0.743 Table 3: Slot filling and intention detection results using CAPSULE-NLU on two datasets. AddToPlaylist BookRestaurant GetWheather PlayMusic RateBook SearchCreativeWork SearchScreeningEvent 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.00 F1 api.ai ibm.watson microsoft.luis wit.ai snips.ai recast.ai amazon.lex Capsule-NLU Figure 3: Stratified 5-fold cross validation for benchmarking with existing NLU services on SNIPS-NLU dataset. Black bars indicate the standard deviation. 
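For concreteness, the two training objectives defined in Sections 2.2 and 2.3 — the slot-level cross-entropy of Eq. (6) and the utterance-level max-margin loss of Eq. (9) — can be written down in a few lines. The sketch below is a minimal NumPy rendering rather than the authors' implementation; in particular, using the routing coefficients ckt as the per-word slot distribution in Eq. (6) is our reading of Eq. (5)–(6), and the margin and down-weighting values simply mirror those reported in the implementation details that follow.

```python
import numpy as np

def slot_cross_entropy(c, y_slot, eps=1e-12):
    """Slot filling loss, Eq. (6).
    c: (T, K) per-word distribution over the K slot types (here: the routing
       coefficients c_kt, softmax-normalized over slots).
    y_slot: (T, K) one-hot gold slot labels."""
    return -np.sum(y_slot * np.log(c + eps))

def intent_max_margin(u, z, m_plus=0.8, m_minus=0.2, lam=0.5):
    """Intent detection loss, Eq. (9).
    u: (L, D_L) activation vectors of the L intent capsules; their lengths
       act as activation probabilities.
    z: index of the gold intent."""
    lengths = np.linalg.norm(u, axis=-1)            # ||u_l||
    pos = np.maximum(0.0, m_plus - lengths) ** 2    # term for the gold intent
    neg = np.maximum(0.0, lengths - m_minus) ** 2   # term for the other intents
    gold = np.eye(len(lengths))[z]                  # indicator [[z = z_l]]
    return np.sum(gold * pos + lam * (1.0 - gold) * neg)
```

During joint training the two losses are minimized together; the ablation variants in Table 3 drop or stage one of them.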
We use the validation data to choose hyperparameters. For both datasets, we randomly initialize word embeddings with the Xavier initializer and let them train with the model. In the loss function, the down-weighting coefficient λ is 0.5, and the margins m+ and m− are set to 0.8 and 0.2 for all existing intents. α is set to 0.1. The RMSProp optimizer (Tieleman and Hinton, 2012) is used to minimize the loss. To alleviate over-fitting, we apply dropout to the LSTM layer with a dropout rate of 0.2.

4 Results

Quantitative Evaluation The intent detection results on the two datasets are reported in Table 3, where the proposed capsule-based model performs consistently better than current learning schemes for joint slot filling and intent detection, as well as capsule-based neural network models that only focus on intent detection. These results demonstrate the effectiveness of the proposed capsule-based model CAPSULE-NLU in jointly modeling the hierarchical relationships among words, slots and intents via the dynamic routing between capsules. We also benchmark the intent detection performance of the proposed model against existing natural language understanding services9 in Figure 3. 9https://www.slideshare.net/KonstantinSavenkov/nluintent-detection-benchmark-by-intento-august-2017 Since the original data split is not available, we report results with stratified 5-fold cross validation. From Figure 3 we can see that the proposed model CAPSULE-NLU is highly competitive with off-the-shelf systems. Note that our model achieves this performance without using pre-trained word representations: the word embeddings are simply trained from scratch.

Ablation Study To investigate the effectiveness of CAPSULE-NLU in joint slot filling and intent detection, we also report ablation test results in Table 3. “w/o Intent Detection” is the model without intent detection: only dynamic routing is performed between WordCaps and SlotCaps for the slot filling task, and we minimize Lslot during training; “w/o Joint Training” adopts a two-stage training in which the model is first trained for slot filling by minimizing Lslot, and then uses the fixed slot representations to train for the intent detection task by minimizing Lintent. From the lower part of Table 3 we can see that, by using a capsule-based hierarchical modeling between words and slots, the model CAPSULE-NLU w/o Intent Detection already outperforms current alternatives on slot filling that adopt a sequential labeling schema. The joint training of slot filling and intent detection gives each subtask further improvements when the model parameters are updated jointly.

Visualizing Agreement Values between Capsule Layers Thanks to the dynamic routing-by-agreement schema, the dynamically learned agreement values between different capsule layers naturally reflect how low-level features are collectively aggregated into high-level ones for each input utterance. In this section, we harness the interpretability of the proposed capsule-based model via hierarchical modeling and provide case studies and visualizations.

Between WordCaps and SlotCaps First we study the agreement value ckt between the t-th word in WordCaps and the k-th slot capsule in SlotCaps. As shown in Figure 4, we observe that the dynamic routing-by-agreement converges to an agreement quickly after the first iteration (shown in blue bars), and it assigns confident routing probabilities close to 0 or 1.
After the second iteration (shown in orange bars), the model is more certain about its routing decisions: the probabilities lean more towards 0 or 1 as the model becomes confident about routing a word in WordCaps to its most appropriate slot in SlotCaps.

Figure 4: The distribution of all agreement values between WordCaps and SlotCaps on the test split of the SNIPS-NLU dataset. Blue: the distribution of values after the first iteration. Yellow: the distribution after the second iteration.

However, we find that when unseen slot values such as new object names emerge in utterances like show me the movie operetta for the theatre organ with an intent of SearchCreativeWork, the iterative dynamic routing process becomes even more appealing. Figure 5 shows the agreement values learned by dynamic routing-by-agreement. Since dynamic routing-by-agreement is an iterative process controlled by the variable iterslot, we show the agreement values after the first iteration in the left part of Figure 5, and the values after the second iteration in the right part.

Figure 5: The learned agreement values between WordCaps (x-axis) and SlotCaps (y-axis). A sample from the test split of the SNIPS-NLU dataset is shown (Left: after the first routing iteration. Right: after the second iteration). Due to space limitations, only part of the slots (7/72) are shown on the y-axis.

From the left part of Figure 5, we can see that after the first iteration, the model considers that the word operetta alone is likely to be an object name, probably because the following word for is usually a context word annotated as O. Thus it tends to route the word for to both the slot O and the slot I-object_name. However, from the right part of Figure 5 we can see that after the second iteration, the dynamic routing has found an agreement and is more certain to treat operetta for the theatre organ as a whole for the slots B-object_name and I-object_name.

Between SlotCaps and IntentCaps Similarly, we visualize the agreement values between each slot capsule in SlotCaps and each intent capsule in IntentCaps.

Figure 6: The learned agreement values between SlotCaps (y-axis) and IntentCaps (x-axis). Left: after the first iteration. Right: after the second iteration. The same sample utterance used in Figure 5 is used here.

The left part of Figure 6 shows that after the first iteration, since the model is not yet able to correctly recognize operetta for the theatre organ as a whole, only the context slot O (corresponding to the words show me the) and B-object_name (corresponding to the word operetta) contribute significantly to the final intent capsule. From the right part of Figure 6, we find that with the phrase operetta for the theatre organ being recognized in the lower capsule layer, the slots I-object_name and B-object_type contribute more to the correct intent capsule SearchCreativeWork, compared with the routing alternatives to other intent capsules.
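The agreement values visualized above all come out of the routing loop of Algorithm 1. As a reference point, a compact NumPy sketch of that loop is given below; it follows the algorithm as recalled from Sabour et al. (2017) and is an illustration with assumed tensor shapes, not the authors' code.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Non-linear squashing of Eq. (4): keeps direction, maps length into [0, 1)."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def dynamic_routing(p, n_iter=2):
    """Dynamic routing-by-agreement (Algorithm 1).
    p: prediction vectors of shape (T, K, D_P) -- word t routed to slot k.
    Returns slot representations v of shape (K, D_P) and the agreement
    values c of shape (T, K) used in Figures 4-6."""
    T, K, _ = p.shape
    b = np.zeros((T, K))                                   # log priors, Line 2
    for _ in range(n_iter):                                # Line 3
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)               # softmax over slots, Line 4
        s = np.einsum('tk,tkd->kd', c, p)                  # weighted sum, Eq. (3) / Line 5
        v = squash(s)                                      # Eq. (4) / Line 6
        b = b + np.einsum('tkd,kd->tk', p, v)              # agreement update, Line 7
    return v, c
```

With n_iter set to iterslot, the coefficients c correspond to the agreement values whose distribution is plotted in Figure 4.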
5266 5 Related Works Intent Detection With recent developments in deep neural networks, user intent detection models (Hu et al., 2009; Xu and Sarikaya, 2013; Zhang et al., 2016; Liu and Lane, 2016; Zhang et al., 2017; Chen et al., 2016; Xia et al., 2018) are proposed to classify user intents given their diversely expressed utterances in the natural language. As a text classification task, the decent performance on utterance-level intent detection usually relies on hidden representations that are learned in the intermediate layers via multiple non-linear transformations. Recently, various capsule based text classification models are proposed that aggregate wordlevel features for utterance-level classification via dynamic routing-by-agreement (Gong et al., 2018; Zhao et al., 2018; Xia et al., 2018). Among them, Xia et al. (2018) adopts self-attention to extract intermediate semantic features and uses a capsulebased neural network for intent detection. However, existing works do not study word-level supervisions for the slot filling task. In this work, we explicitly model the hierarchical relationship between words and slots on the word-level, as well as intents on the utterance-level via dynamic routingby-agreement. Slot Filling Slot filling annotates the utterance with finer granularity: it associates certain parts of the utterance, usually named entities, with predefined slot tags. Currently, the slot filling is usually treated as a sequential labeling task. A recurrent neural network such as Gated Recurrent Unit (GRU) or Long Short-term Memory Network (LSTM) is used to learn context-aware word representations, and Conditional Random Fields (CRF) are used to annotate each word based on its slot type. Recently, Shen et al. (2017); Tan et al. (2017) introduce the self-attention mechanism for CRFfree sequential labeling. Joint Modeling via Sequence Labeling To overcome the error propagation in the word-level slot filling task and the utterance-level intent detection task in a pipeline, joint models are proposed to solve two tasks simultaneously in a unified framework. Xu and Sarikaya (2013) propose a Convolution Neural Network (CNN) based sequential labeling model for slot filling. The hidden states corresponding to each word are summed up in a classification module to predict the utterance intent. A Conditional Random Field module ensures the best slot tag sequence of the utterance from all possible tag sequences. Hakkani-T¨ur et al. (2016) adopt a Recurrent Neural Network (RNN) for slot filling and the last hidden state of the RNN is used to predict the utterance intent. Liu and Lane (2016) further introduce an RNN based encoderdecoder model for joint slot filling and intent detection. An attention weighted sum of all encoded hidden states is used to predict the utterance intent. Some specific mechanisms are designed for RNNs to explicitly encode the slot from the utterance. For example, Goo et al. (2018) utilize a slot-gated mechanism as a special gate function in Long Short-term Memory Network (LSTM) to improve slot filling by the learned intent context vector. However, as the sequence becomes longer, it is risky to simply rely on the gate function to sequentially summarize and compress all slots and context information in a single vector (Cheng et al., 2016). In this paper, we harness the capsule neural network to learn a hierarchy of feature detectors and explicitly model the hierarchical relationships among word-level slots and utterance-level intent. 
Also, instead of doing sequence labeling for slot filling, we use a dynamic routing-by-agreement schema between capsule layers to route each word in the utterance to its most appropriate slot type. And we further route slot representations, which are learned dynamically from words, to the most appropriate intent capsule for intent detection. 6 Conclusions In this paper, a capsule-based model, namely CAPSULE-NLU, is introduced to harness the hierarchical relationships among words, slots, and intents in the utterance for joint slot filling and intent detection. Unlike treating slot filling as a sequential prediction problem, the proposed model assigns each word to its most appropriate slots in SlotCaps by a dynamic routing-by-agreement schema. The learned word-level slot representations are futher aggregated to get the utterancelevel intent representations via dynamic routingby-agreement. A re-routing schema is proposed to further synergize the slot filling performance using the inferred intent representation. Experiments on two real-world datasets show the effectiveness of the proposed models when compared with other alternatives as well as existing NLU services. 5267 7 Acknowledgments We thank the reviewers for their valuable comments. This work is supported in part by NSF through grants IIS-1526499, IIS-1763325, and CNS-1626432. References Yun-Nung Chen, Dilek Hakkani-T¨ur, G¨okhan T¨ur, Jianfeng Gao, and Li Deng. 2016. End-to-end memory networks with knowledge carryover for multiturn spoken language understanding. In INTERSPEECH, pages 3245–3249. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 551–561. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jingjing Gong, Xipeng Qiu, Shaojing Wang, and Xuanjing Huang. 2018. Information aggregation via dynamic routing for sequence encoding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2742–2752. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 753–757. Dilek Hakkani-T¨ur, G¨okhan T¨ur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715–719. Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. 2011. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pages 44–51. Springer. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Jian Hu, Gang Wang, Fred Lochovsky, Jian-tao Sun, and Zheng Chen. 2009. Understanding user’s query intent with wikipedia. In WWW, pages 471–480. ACM. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, ICML 2001, pages 282–289. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. 
Interspeech, pages 685–689. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In NIPS, pages 3859–3869. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnnfree language understanding. arXiv preprint arXiv:1709.04696. Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2017. Deep semantic role labeling with self-attention. arXiv preprint arXiv:1712.01586. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31. Gokhan Tur, Dilek Hakkani-T¨ur, and Larry Heck. 2010. What is left to be understood in atis? In Spoken Language Technology Workshop (SLT), 2010 IEEE, pages 19–24. IEEE. Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip S Yu. 2018. Zero-shot user intent detection via capsule neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3090–3099. Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In ASRU, pages 78–83. IEEE. Chenwei Zhang, Nan Du, Wei Fan, Yaliang Li, ChunTa Lu, and Philip S Yu. 2017. Bringing semantic structures to user intent detection in online medical queries. In IEEE Big Data, pages 1019–1026. Chenwei Zhang, Wei Fan, Nan Du, and Philip S Yu. 2016. Mining user intentions from medical queries: A neural network based heterogeneous jointly modeling approach. In WWW, pages 1373–1384. Wei Zhao, Jianbo Ye, Min Yang, Zeyang Lei, Suofei Zhang, and Zhou Zhao. 2018. Investigating capsule networks with dynamic routing for text classification. arXiv preprint arXiv:1804.00538.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 547–556 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 547 Transfer Capsule Network for Aspect Level Sentiment Classification Zhuang Chen, Tieyun Qian∗ School of Computer Science, Wuhan University, China {zhchen18, qty}@whu.edu.cn Abstract Aspect-level sentiment classification aims to determine the sentiment polarity of a sentence towards an aspect. Due to the high cost in annotation, the lack of aspect-level labeled data becomes a major obstacle in this area. On the other hand, document-level labeled data like reviews are easily accessible from online websites. These reviews encode sentiment knowledge in abundant contexts. In this paper, we propose a Transfer Capsule Network (TransCap) model for transferring document-level knowledge to aspect-level sentiment classification. To this end, we first develop an aspect routing approach to encapsulate the sentence-level semantic representations into semantic capsules from both aspect-level and document-level data. We then extend the dynamic routing approach to adaptively couple the semantic capsules with the class capsules under the transfer learning framework. Experiments on SemEval datasets demonstrate the effectiveness of TransCap. 1 Introduction Aspect-level sentiment classification (ASC) is a fine-grained subtask in sentiment analysis. Given a sentence and an aspect occurring in the sentence, ASC aims to determine the sentiment polarity of the aspect. Traditional methods mostly use machine learning models with handcrafted features to build sentiment classifiers for ASC tasks (Jiang et al., 2011; Mohammad et al., 2013). Such methods need either laborious feature engineering or massive linguistic resources. With the development of deep learning technique, a number of neural models have been proposed (Wang et al., 2016b; Tang et al., 2016; Chen et al., 2017) for ASC tasks. All these models train classifiers in a supervised manner and require sufficient num*Corresponding author. ber of labeled data to get promising results. However, the annotation of opinion targets in ASC is extremely expensive. The lack of labeled data is a major obstacle in this field. Publicly available datasets for ASC often contain limited number of training samples. On the other hand, document-level labeled data like reviews are easily accessible from online websites such as Yelp and Amazon. Since each review has an accompanying rating score indicating user’s overall satisfaction towards an item, such a score can naturally serve as the label of sentiment polarity of the review document. Intuitively, the document-level data contain useful sentiment knowledge for analysis on aspectlevel data since they may share many linguistic and semantic patterns. Unfortunately, for ASC tasks, only one study (He et al., 2018) has taken the utilization of document-level data into account. The PRET+MULT framework proposed in (He et al., 2018) is a successful attempt by adopting pre-training and multi-task learning approaches. However, their model only shares shallow embedding and LSTM layers between ASC and DSC (document-level sentiment classification) tasks. In other words, the document-level knowledge is merely used for improving the word representations in ASC. Consequently, it is unable for PRET+MULT to handle complicated patterns like euphemism and irony which require highlevel semantic knowledge from the entire sentence. 
For example, given a sentence “The staff should be a bit more friendly”, PRET+MULT will make a wrong prediction (the detail will be given in the analysis part). In this paper, we propose a novel Transfer Capsule Network (TransCap) model to transfer sentence-level semantic knowledge from DSC to ASC. Our work is inspired by the capsule network (Hinton et al., 2011; Sabour et al., 2017) 548 which uses capsule vectors and the dynamic routing approach to store and cluster features, but we move one step further in that we develop an aspect routing approach which can generate sentence-level semantic features shared by ASC and DSC. Moreover, we extend the dynamic routing approach by adapting it to the transfer learning framework. We conduct extensive experiments on two SemEval datasets. Results demonstrate that our TransCap model consistently outperforms the state-of-the-art methods. 2 Related Work Aspect-level Sentiment Classification Traditional methods for sentiment classification (Nakagawa et al., 2010; Jiang et al., 2011; Taboada et al., 2011; Mohammad et al., 2013) mostly use machine learning algorithms to build sentiment classifiers with carefully extracted features, which take massive time and resources to collect. Early studies focus on document-level sentiment classification (DSC) tasks. In recent years, a number of deep learning methods have been proposed for aspect-level sentiment classification (ASC) tasks (Dong et al., 2014; Vo and Zhang, 2015; Tang et al., 2016; Wang et al., 2016a; Ma et al., 2017; Li et al., 2018; Ma et al., 2018; Wang et al., 2018a). In general, there are three types of neural networks for ASC tasks: LSTM based (Wang et al., 2016b; Ma et al., 2017; Tay et al., 2018), memory based (Tang et al., 2016; Chen et al., 2017; Zhu and Qian, 2018), and hybrid methods (Xue and Li, 2018). For example, Wang et al. (2016b) use attention mechanism to model the inter-dependence between LSTM hidden units and aspects. Tang et al. (2016) utilize memory network to store context words and conduct multi-hop attention to get the sentiment representation towards aspects. Chen et al. (2017) apply recurrent attention to multi-layer memory. Xue and Li (2018) employ CNN and gating mechanism to extract aspectspecific information from contexts. Although various types of approaches have been proposed, the inherent obstacle, i.e., the lack of labeled data, is still a big challenge for all ASC tasks. Without sufficient labeled data, training procedures in these approaches are likely to converge in a sub-optimal state. We differentiate our work from aforementioned models in that we aim to utilize the abundant labeled DSC data to alleviate the scarcity of labeled data in ASC tasks. Transfer Learning Transfer learning aims to extract knowledge from one or more source tasks and then apply the knowledge to a target task. It can be categorized into three types based on different situations in the source and target domains/tasks (Pan and Yang, 2010). Our work belongs to “inductive transfer learning (ITL)” type since ASC (target) and DSC (source) in our framework are different but related tasks. In this case, ITL is similar to multi-task learning (MTL) with a slight difference: ITL only aims at achieving high performance in the target task while MTL tries to improve both simultaneously. Several recent attempts have taken ITL or MTL methods for sentiment classification tasks. Dong and de Melo (2018) present a transfer learning framework by utilizing trained models. Xiao et al. 
(2018) employ capsule network for multitask learning. Both these methods are designed for document-level text/sentiment classification tasks, and are inappropriate for the fine-grained ASC task in this work. He et al. (2018) propose a multitask framework to combine ASC with DSC tasks together. This is the closest work to ours. However, the method in (He et al., 2018) is based on an existing AT-LSTM model (Wang et al., 2016b), whereas our framework is a totally new one which employs capsule network with carefully designed strategies for ASC tasks. 3 Our Proposed TransCap Model In this section, we introduce our Transfer Capsule Network (TransCap) model. TransCap is proposed to conduct aspect-level sentiment classification with the auxiliary knowledge transferred from document-level data. We first present the problem definitions and preliminary. We then illustrate the architecture of TransCap in detail. 3.1 Definitions and Preliminary Definition 1 (TransCap) Given a source document-level corpus CD and the learning task TD, a target aspect-level corpus CA and the learning task TA, TransCap aims to help improve the learning of the target predictive function fA(·) in TA using the knowledge transferred from TD. Definition 2 (TA and TD) Given a sentence S = {w1, ..., wa, ..., wL} ∈CA and an aspect wa occurring in S, an aspect-level sentiment classification task TA aims to determine the sentiment polarity 549 of S towards wa. Note there might be multiple aspects in one sentence. Given an opinion sentence (or document) D ∈CD, a document-level sentiment classification task TD aims at assigning an overall sentiment polarity for D. Note that TA is the main task and TD is only for providing auxiliary knowledge in our TransCap model. Preliminary (CapsNet) Capsule network is first proposed for image classification in computer vision (Hinton et al., 2011; Sabour et al., 2017). Compared with CNN, it replaces the scalar-output feature detectors with vector-output capsules and has the ability to preserve additional information such as position and thickness. The vanilla CapsNet consists of two capsule layers. The primary layer stores low-level image feature maps and the class layer generates the classification probability with each capsule corresponding to one class. Recently, CapsNet has been applied to several NLP tasks like text classification and relation extraction (Yang et al., 2018b; Gong et al., 2018; Xiao et al., 2018; Zhang et al., 2018; Wang et al., 2018b). CapsNet is able to adaptively decide the information transferred between layers by using dynamic routing. Furthermore, each class in CapsNet has distinctive parameters to aggregate features and an independent probability to be existed. Therefore, CapsNet meets our needs in the transfer learning scenario which includes multiple polarities and tasks. Our TransCap model is the first attempt to exploit the power of CapsNet under the transfer learning framework for ASC tasks. 3.2 An Overview of Architecture The architecture of TransCap is shown in Figure 1. It consists of four layers: 1) Input layer converts words in a sentence into low-dimensional realvalued vectors, 2) FeatCap layer extracts N-gram features from word vectors and transforms them into feature capsules, 3) SemanCap layer aggregates feature capsules into a set of aspect-related sentence-level semantic capsules, and 4) ClassCap layer generates class capsules which correspond to sentiment polarities in TA and TD, respectively. 
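A structural skeleton of this four-layer pipeline may help fix the data flow before the layer-by-layer description. All class and method names in the sketch below are hypothetical placeholders for the operations of Sections 3.3–3.6, not the released model; the point is only that the first three layers are shared across TA and TD while the ClassCap heads are task-specific.

```python
import numpy as np

class TransCapSketch:
    """Hypothetical skeleton of TransCap's data flow (illustration only).
    embed, feat_caps and seman_caps stand in for the shared Input, FeatCap and
    SemanCap layers; class_caps holds one task-specific ClassCap head per task."""

    def __init__(self, embed, feat_caps, seman_caps, class_caps_A, class_caps_D):
        self.embed = embed                      # words (+ positions) -> X
        self.feat_caps = feat_caps              # n-gram convolutions -> R
        self.seman_caps = seman_caps            # aspect-aware aggregation -> U
        self.class_caps = {"A": class_caps_A,   # aspect-level head (3 polarities)
                           "D": class_caps_D}   # document-level head (3 polarities)

    def forward(self, words, aspect, task):
        X = self.embed(words, aspect)               # shared input layer
        R = self.feat_caps(X)                       # shared feature capsules
        U = self.seman_caps(R, X, aspect, task)     # shared semantic capsules
        v = self.class_caps[task](U)                # task-specific class capsules
        return np.linalg.norm(v, axis=-1)           # capsule lengths = class probabilities
```

This sharing pattern is what lets document-level supervision reach the shared layers while the two ClassCap heads keep separate polarity predictions.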
Note that TA and TD tasks share the first three layers, and they separate only in the last ClassCap layer. Since TA and TD are related tasks both aiming to identify the sentiment polarity, features useful for one task might be useful for the other. We expect the features produced by the shared layers can be improved in a mutual way. Figure 1: TransCap Architecture. 3.3 Input Layer The input layer consists of two lookup layers. Let Ew ∈Rdw×|V | be the pre-trained word embedding lookup table, where dw is the dimension of word vectors and |V | is the vocabulary size. The word lookup layer maps the word sequence in S(D) to a list of word vectors {e1, ..., ea, ...,eL} ∈Rdw×L. Following (Gu et al., 2018), we also use another position lookup layer. For TA, by calculating the absolute distance from every context word wi to aspect word wa, we can get an additional position sequence for S. For TD, the position sequence is a zero sequence since there is no aspect information. Let El ∈Rdl×|L| be the position embedding lookup table with random initialization, the position lookup layer maps the position sequence to a list of position vectors {l1, ..., la, ...,lL} ∈Rdl×L. The final representation of each word wi is calculated as xi = (ei ⊕li) ∈Rdh where ⊕denotes concatenation and dh = dw + dl. The sentence S(D) is transformed into a sentence embedding X = {x1, ..., xL} ∈Rdh×L. 3.4 Feature Capsule Layer This layer is used to extract n-gram features from sentence embedding X. N-gram features contain raw and local semantic meaning in a fixed window. We apply multiple convolution operations to the ith n-gram in X and get its feature vector ri: ri = Xi:i+K ∗F + b, (1) where F ∈ Rdp×(dh×K) is the kernel group, (dh × K) is the size of one convolutional kernel, K is the n-gram size and dp is the dimension of one feature capsule. After sliding F in X, we get a set of feature capsules r ∈Rdp×(L−K+1) encapsulating n-gram features extracted from the whole sentence S(D). Since one kernel group F corresponds to one category of semantic meaning, we repeat the above procedure C times with different kernel groups, and get multiple channels of feature capsules representing C categories of semantic meaning. The final output of feature capsule layer is arranged as R ∈RC×dp×(L−K+1): R = [r1, r2, ..., rC] (2) 550 3.5 Semantic Capsule Layer Aspect Routing Approach The sentence or document in two corpora CA and CD differs in whether an aspect term occurs in the sentence/document. The TD task does not contain aspects. Meanwhile, it is crucial for the TA task to determine the relation between contexts and aspects. Especially when a sentence contains two opposite sentiment polarities, different contexts must be separated for different aspects. For example, given a sentence “Great food but the service is dreadful !”, the context word “dreadful” should be strengthened for the aspect “service” and be weakened for the aspect “food”. To this end, we propose a novel aspect routing approach to compute the aspect weight for the context words of K-size window in TA. Formally, we apply a fusing convolution operation to the sentence embedding X with a kernel Fa ∈Rdh×K, and we get the aspect routing weight ai: ai = sigmoid(Xi:i+K ∗Fa + Taea + ba), (3) where ea is the aspect embedding (or average embedding in the case of multi-word aspect), Ta ∈ R1×dw is a transfer matrix to map ea to a scalar value, and ba is bias. The generated routing weight ai ∈[0, 1] fuses aspect information with respect to its context. 
It controls how much information in the current context can be transmitted to the next layer. If ai is zero, the feature capsule would be totally blocked. A minor challenge is that, for a TD task, there is actually no aspect in the document and we need to distinguish two types of sources from CA and CD. Hence we design a piecewise function gi for calculating the aspect routing weight gi for an arbitrary feature vector ri from X as: gi = ( ai X ∈CA 1.0 X ∈CD (4) After sliding in X, we can get g ∈R1×(L−K+1) for the whole sentence S(D). Since we have C channels of feature capsules, we repeat the above procedure C times to get the entire aspect routing weights G ∈RC×1×(L−K+1) as: G = [g1, g2, ..., gC], (5) Finally, the feature capsules are routed using these weights: P = R ⊙G, (6) where P ∈ RC×dp×(L−K+1) are the aspectcustomized feature capsules, and ⊙denotes element-wise multiplication (with broadcasting). Semantic Capsule Generation The above generated P are transformed from the n-gram feature capsules. Though encoding aspect-related information, P are still local features without a sentence-level view. Moreover, the large number of capsules in P may prevent the next layer from learning robust representations. Hence we adopt the element-wise maximum function (Lai et al., 2015) in P to aggregate all feature capsules in same channel horizontally. U = C×dp max t=1 Pt, (7) where U ∈RC×dp are the generated semantic capsules. Eq. 7 condenses all local features in each channel and thus we can obtain more precise and global semantic representations from subtle expressions, e.g., an euphemistic sentence. Finally, we want the length of each semantic capsule ui to represent the probability that ui’s semantic meaning is present in the current input, so we use a nonlinear “squash” function (Sabour et al., 2017) to limit its length in [0,1] as ui ← ∥ui∥2 1 + ∥ui∥2 ui ∥ui∥ (8) 3.6 Class Capsule Layer In the original capsule network, there is only one classification task and it uses class capsules to denote classes and their lengths as classification probabilities. However, there are two different tasks in our problem, and it is necessary to discern sentiment polarities (classes) in these tasks. To achieve this, we introduce two types of class capsules into TransCap, with six capsules in total. Such a structure makes it possible for our model to train TA and TD in a unified framework. Given input data from two tasks in turn, the first three layers share most parameters (except those in Eq. 3) to jointly train TD and TA, so that knowledge from document-level data can be successfully transferred into aspect-level task. In the last layer, each class capsule is used for calculating the classification probability of each class in TD and TA separately. Hence each class capsule should have its own routing weights to adaptively aggregate semantic capsules from the previous layer. Below we give the detail. A semantic capsule i generates a “prediction vector” ˆuj|i towards a class capsule j as: ˆuj|i = Wij ui, (9) where Wij∈Rdc×dp is a weight matrix, dp and 551 dc are the dimensions of semantic capsule i and class capsule j, ui is the vector representation of semantic capsule i. 
All “prediction vectors” generated by semantic capsules are summed up with weights cij to obtain the vector representation sj of class capsule j: sj = X i cij ˆuj|i, (10) where cij is a coupling coefficient defined by a “routing softmax”: cij = exp(bij) P k exp(bik) , (11) where each bij is the log prior probability that a semantic capsule i should pass to a class capsule j. It is computed using a dynamic routing approach which will be presented later. After that, we again apply the non-linear “squash” function (Sabour et al., 2017) to sj in Eq. 10 to get a final representation vj for class capsule j. vj = squash(sj), (12) where the length of vj is limited in [0,1] to represent the active probability of class capsule j. Dynamic Routing Approach The logit bij in Eq. 11 determines the intensity of the connection between the semantic capsule i and the class capsule j. It is initialized with 0 and is updated with an agreement coefficient aij. aij = ˆuj|i · vj (13) This agreement coefficient is added to the initial logit bij before computing the new values for all coupling coefficients cij linking semantic capsules to class capsules. bij ←bij + aij (14) The dynamic routing procedure can be summarized as (Eq. 11→10→12→13→14). The procedure can be repeated for r iterations. 3.7 Margin Loss The length of a class capsule is used to represent the probability of the sentiment polarity. The capsule length of the active class should be larger than others. Hence we adopt a separate margin loss Lj for each class capsule j in each task: Lj = Yjmax(0, m+ −∥vj∥)2 + λ(1 −Yj)max(0, ∥vj∥−m−)2, (15) where Yj=1 if the sentiment polarity is present in class capsule j, and we simply set m+=0.9, m−=0.1, λ=0.5 following those in (Sabour et al., 2017). The loss for a single task is LT = PJ j=1 Lj, where T is either A or D, denoting the loss LA and LD for task TA and TD, respectively. The final loss L for our TransCap model is the linear combination of two losses on single tasks. L = LA + γLD (16) where γ ∈[0,1] is a hyper-parameter controlling the weight of TD. When training converges, the class capsule with the largest active probability in a task is chosen as the prediction of sentiment polarities. 4 Experiments 4.1 Datasets and Settings Datasets for TA We evaluate TransCap on two aspect-level datasets from SemEval2014 Task 4 (Pontiki et al., 2014). The datasets contain reviews from Restaurant and Laptop domains respectively with 3-way sentiment polarity labels: positive, neutral and negative 1. Both datasets have a fixed training/test split. We further randomly sample 20% training data as the development set, and use the remaining 80% for training. Datasets for TD We use three document-level datasets to transfer knowledge: Yelp, Amazon and Twitter. All the documents (reviews) in Yelp Review (Zhang et al., 2015) and Amazon Electronics (McAuley et al., 2015) datasets have accompanying five-star ratings (1..5). We consider reviews with a score <3 as negative, =3 as neutral and >3 as positive. The Twitter dataset is collected from SemEval 2013 to 2017, where the original tweets are already labeled with 3-way polarities. Each dataset for TD contains 30,000 samples with balanced class labels. All samples in these datasets are used for auxiliary training. We do not report performance for the TD task since it is not our focus. Also note that the first two datasets in TD are of the same topics as those in TA, while the topics in Twitter are more general and less relevant to our main task TA. 
There are two combinations for TransCap: {Y,A} denotes {Res.+Yelp, Lap.+Amazon}, {T,T} denotes {Res.+Twitter, Lap.+Twitter}. By doing so, we wish to investigate how our proposed model performs on various types of auxiliary information. The statistics for these datasets are summarized in Table 1. 1We remove samples with conflict polarities following previous studies (Tang et al., 2016; Chen et al., 2017; He et al., 2018). 552 Task Dataset Type Pos. Neu. Neg. TA Restaurant train 2164 633 805 test 728 196 196 Laptop train 987 460 866 test 341 169 128 TD Yelp train 10k 10k 10k Amazon train 10k 10k 10k Twitter train 10k 10k 10k Table 1: The statistics for datasets. Settings We use Glove vectors with 840B tokens (Pennington et al., 2014) as the pre-trained word embeddings. r=3 following Sabour et al. (2017). The rest of hyperparameters are tuned on the development set. We set dw=300, dl=100, K=3, C=16, dp=16, dc=24. γ={0.7, 0.8, 0.8, 0.3} for the {R,Y}, {R,T}, {L,A}, {L,T} dataset combinations, respectively. We use Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001 and batch size 128. We train all models for 50 epochs with early-stopping, i.e., stop training if the performance on the development set does not improve among 5 epochs. The averaged accuracy (Acc.) and Macro-F1 (F1) scores are reported over 5 runs with random initialization on the same split of evaluation datasets 2. Compared Methods To demonstrate the superiority of our TransCap for ASC tasks, we compare it with followings baselines: ATAELSTM (Wang et al., 2016b), IAN (Ma et al., 2017), AF-LSTM(CONV) (Tay et al., 2018), AFLSTM(CORR) (Tay et al., 2018), PBAN (Gu et al., 2018), MemNN (Tang et al., 2016), RAM (Chen et al., 2017), CEA (Yang et al., 2018a), DAuM (Zhu and Qian, 2018), IARM (Majumder et al., 2018), PRET+MULT (He et al., 2018) and GCAE (Xue and Li, 2018). Most of them are the latest methods published in 2018. The rest are frequently-used classical models. 4.2 Main Results The comparison results for all models are shown in Table 2. For clarity, we classify the models into four categories: the first is the LSTM-based methods (from M1 to M5), the second is the memorybased ones (from M6 to M10), the third is the hybrid ones (M11 and M12), and the last three lines (M13 to M15) are the variants of our model, where TransCap{S} denotes the one with TA task only, TransCap{Y,A} and TransCap{T,T} utilize the knowledge from different sources in TD. 2Our code and data are available at https://github.com/ NLPWM-WHU/TransCap. Model Restaurant Laptop Acc. F1 Acc. F1 M1 ATAE-LSTM 78.38 66.36 69.12 63.24 M2 IAN 78.71 67.71 69.56 63.72 M3 AF-LSTM(CONV) 76.46 65.54 69.97 63.70 M4 AF-LSTM(CORR) 75.96 64.41 69.78 63.38 M5 PBAN 78.62 67.45 71.98 66.91 M6 MemNN 77.69 67.53 68.86 62.60 M7 RAM 78.41 68.52 72.16 66.97 M8 CEA 78.44 66.78 70.52 64.52 M9 DAuM 77.91 66.47 70.36 65.86 M10 IARM 77.73 66.66 68.63 63.30 M11 PRET+MULT 78.73 68.63 71.91 68.79 M12 GCAE 76.09 63.29 68.72 63.32 M13 TransCap{S} 78.84 69.70 72.65 68.77 M14 TransCap{Y,A} 79.55 71.41 73.51 69.81 M15 TransCap{T,T} 79.29 70.85 73.87 70.10 Table 2: Comparison of different methods. Best scores are in bold, and the second best ones (except those in our variants) are underlined. It is clear that our TransCap model consistently outperforms all baselines on both datasets. The hybrid model PRET+MULT, which is a multitask learning based model, also has the overall better performance than other baselines. 
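The analysis in Section 5 below varies the auxiliary corpus size and the balance factor γ. For reference, one way the joint objective L = LA + γ·LD of Eq. (16) can be optimized is sketched here, assuming a PyTorch-style model that exposes a hypothetical margin_loss method and batches drawn from the two corpora in turn; combining both losses in a single step is our assumption, as the paper only specifies the combined loss.

```python
def joint_training_step(model, batch_A, batch_D, optimizer, gamma=0.7):
    """One joint update on the combined loss of Eq. (16).
    model.margin_loss is assumed to return the per-task margin loss of Eq. (15);
    gamma=0.7 is the value reported for the {Restaurant + Yelp} combination."""
    loss_A = model.margin_loss(batch_A, task="A")   # aspect-level (main) task
    loss_D = model.margin_loss(batch_D, task="D")   # document-level (auxiliary) task
    loss = loss_A + gamma * loss_D
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```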
Both these demonstrate that the aspect-level sentiment classification task TA can benefit a lot by transferring knowledge from the auxiliary task TD. PRET+MULT is inferior to our model. The reason is that it only shares low-level features and transfers limited knowledge between tasks. We also find that two multi-task variants of our model, TransCap{Y,A} and TransCap{T,T}, achieve similar performance. {Y,A} provides knowledge from relevant domains, but their labels are not very accurate since they may contain a lot of noises. Though the knowledge in {T,T} are from tweets of mixed and less relevant topics, their labels are manually-annotated and thus are quite reliable. Overall, given the sufficient number of training samples in the auxiliary task TD, the performance of TA tasks can be significantly enhanced over its single task counterpart TransCap{S}. Among LSTM-based models, PBAN and IAN achieve higher performance than others since they use the bi-directional attention mechanism. RAM is better than other memory-based models because it utilizes a non-linear combination for attention results in different hops. GCAE performs the worst among all baselines, as its simple CNNbased model can not capture the long-term dependencies between context words. 553 5 Analysis 5.1 Ablation Study To investigate the effects of different components in our model, we conduct the following ablation study on TransCap. (i)“- A”: We remove the aspect routing approach, and set same weights 1.0 for all feature capsules. (ii)“- S”: We remove semantic capsules, and pass weighted feature capsules directly to class capsules. (iii)“- D”: We remove the dynamic routing approach, i.e., a semantic capsule would be coupled to all class capsules with equal probabilities. Results for the ablation study are shown in Table 3, where “Ori.” denotes results for the original TransCap model, and “-*” for those removing the corresponding components. Restaurant Laptop {Y,A} {T,T} {Y,A} {T,T} Acc. F1 Acc. F1 Acc. F1 Acc. F1 Ori. 79.55 71.41 79.29 70.85 73.51 69.81 73.87 70.10 - A. 3.75↓6.49↓2.63↓3.95↓2.98↓5.34↓3.34↓3.80↓ - S. 4.01↓5.14↓1.45↓2.08↓2.35↓3.64↓2.40↓2.15↓ - D. 2.80↓4.06↓0.54↓1.01↓3.29↓6.03↓1.14↓1.75↓ Table 3: Ablation study for TransCap. ↓denotes the drop of performance. The worst scores are in bold. As expected, results for the simplified models all drop a lot. This clearly demonstrates the effectiveness of these components. Specifically, TransCap-A performs the worst, since it cannot generate aspect-related feature capsules after removing aspect routing from TransCap. Dynamic routing is critical as it helps TransCap to reduce the interference between TA and TD. The drop of performance of TransCap-S also shows that semantic capsules are important for building robust and precise connections between features and polarities. 5.2 Parameter Analysis Influence of Auxiliary Corpus Size To show the influence of DSC task on our major ASC task, we vary the size of auxiliary document-level corpus CD and observe the performance changes in TA. We use a percentage ∈[0, 1] to control the ratio of CD and present results in Figure 2. As can be seen, all curves in Figure 2 tend to rise with the increasing amount of document-level knowledge. This shows the effectiveness of our model by transferring knowledge from documentlevel data. At the initial stages where only 20% Figure 2: Influence of CD size. or 40% of CD are introduced, we find small decreases of performance. 
The reason may be that when the auxiliary document-level corpus CD is small, the model in TD has not been well trained. Hence it provides limited transferable knowledge to train the shared input, feature capsule and semantic capsule layers. Consequently, ASC task TA gets misleading information from these layers and then performs worse. After getting sufficient document-level data, TD becomes robust and stable, and TA also improves its performance. Effects of Balance Factor γ The balance factor γ determines how important the DSC task TD is in the model. To evaluate its effects, we vary γ in range [0,1] and present results in Figure 3. Figure 3: Effects of γ. The key observation from Figure 3 is that there are Turning Points (denoted as TP) for both two datasets: TP≈0.7 for Restaurant and TP≈0.3 for Laptop. The curves have an overall upward trend when γ < TP, but become flat or downward once γ > TP. This phenomenon can be explained with multi-task learning mechanism. In upward part, lots of useful sentiment knowledge is transferred from document-level data to aspect-level data, thus the performance of TA gets improved. Once the weight for TD exceeds TP, TD begins to dominate the whole TransCap model while TA gradually loses the mastership and performs worse. 5.3 Case Study To have a close look, we further select three samples from different datasets for a case study. Part 1 We first illustrate what kind of knowledge TransCap will transfer. Below is an example from 554 Laptop where the target is enclosed in [] with a subscript denoting its true polarity: 1.“It has so much more speed and the [screen]pos is very sharp.” Humans can easily identify the positive polarity towards aspect [screen]. However, the single-task variant TransCap{S} and most baselines give a false negative prediction. This is because “sharp” is a multi-polarity word in the training set as the following two examples show: 2.“Once open, the [leading edge]neg is razor sharp.” 3.“[Graphics]pos are clean and sharp, internet interfaces are seamless.” The training set in Laptop contains only 8 samples including “sharp” with 5 of them are labeled as negative. It is hard for single-task models to learn a correct meaning for “sharp” with several contradictory samples. Hence they simply consider it as a negative token due to the superiority of this polarity and make false predictions. However, for TransCap{Y,A}, the auxiliary Amazon dataset contains 294 samples where “sharp” cooccurs with lots of different contexts. With the help of sufficient training samples, three shared layers have learned to recognize the true polarity of “sharp” with respect to its contexts, thus the class capsule layer in TransCap{Y,A} finally makes a correct prediction. Part 2 This part aims to visualize the decisionmaking process of TransCap with an example from Restaurant dataset: 4.“Great [food]pos but the [service]neg is dreadful !”. The coupling coefficients cij ∈[0,1] for this example are visualized in Figure 4, which presents the cij between each pair of (semantic capsule, class capsule) after dynamic routing with respect to different aspects. Note that the sum of cij in every column (not row as that in the attention mechanism) is 1.0. When the input aspect is [service] (the upper part in Figure 4), the detailed decision-making process is as follow. Firstly, several semantic capsules such as 4 and 8 have already captured corresponding sentence-level semantic meaning from the review’s content. 
Secondly, by calculating the coupling coefficient cij after dynamic routing, these semantic capsules are highly coupled with the negative class capsule, and thus this negative capsule gets a higher active probability than other class capsules. As a result, TransCap makes the negative prediction for the aspect [service]. Similarly, when the input aspect is [food] (the lower part in Figure 4), the positive class capsule gets a high active probability and TransCap then makes a correct prediction for this aspect. Figure 4: Visualization of coupling coefficients cij after dynamic routing. Part 3 In last part, we present an example from Restaurant to show the advantage of TransCap over PRET+MULT (He et al., 2018): 5.“The [staff]neg should be a bit more friendly.” This is an euphemistic negative review towards the aspect [staff] though each word in the sentence itself does not convey a negative sentiment. PRET+MULT generates features and transfers knowledge only at the word level. Although embedding for each word is enhanced by the auxiliary document-level data, PRET+MULT can not recognize the overall negative sentiment behind each word and makes a false positive prediction due to the word “friendly”. In contrast, TransCap generates sentence-level semantic capsules containing overall semantic meanings of the sentence, and shares these sentence-level features between ASC and DSC tasks. Both these help TransCap make a correct decision. 6 Conclusion In this paper, we present a novel transfer capsule network (TransCap) model for aspect-level sentiment classification. In order to solve the problem of lacking aspect-level labeled data, we wish to utilize the abundant document-level labeled data. We develop a transfer learning framework to transfer knowledge from the document-level task to the aspect-level task. We implement it with a carefully designed capsule network, which mainly consists of the aspect routing and dynamic routing approaches. Experiments on two SemEval datasets demonstrate that TransCap outperforms the stateof-the-art baselines by a large margin. 555 Acknowledgments The work described in this paper is supported by the NSFC projects (61572376, 91646206), and the 111 project (B07037). References Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Annual Meeting of the Association for Computational Linguistics (ACL 2014). Xin Dong and Gerard de Melo. 2018. A helping hand: Transfer learning for deep sentiment analysis. In Annual Meeting of the Association for Computational Linguistics (ACL 2018). Jingjing Gong, Xipeng Qiu, Shaojing Wang, and Xuanjing Huang. 2018. Information aggregation via dynamic routing for sequence encoding. In Conference on Computational Linguistics (COLING 2018). Shuqin Gu, Lipeng Zhang, Yuexian Hou, and Yin Song. 2018. A position-aware bidirectional attention network for aspect-level sentiment analysis. In Conference on Computational Linguistics (COLING 2018). Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Exploiting document knowledge for aspect-level sentiment classification. In Annual Meeting of the Association for Computational Linguistics (ACL 2018). Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. 2011. 
Transforming auto-encoders. In International Conference on Artificial Neural Networks (ICANN 2011). Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Annual Meeting of the Association for Computational Linguistics (ACL 2011). Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Computer Science. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI Conference on Artificial Intelligence (AAAI 2015). Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Annual Meeting of the Association for Computational Linguistics (ACL 2018). Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In International Joint Conference on Artificial Intelligence (IJCAI 2017). Yukun Ma, Haiyun Peng, and Erik Cambria. 2018. Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM. In AAAI Conference on Artificial Intelligence (AAAI 2018). Navonil Majumder, Soujanya Poria, Alexander F. Gelbukh, Md. Shad Akhtar, Erik Cambria, and Asif Ekbal. 2018. IARM: Inter-aspect relation modeling with memory networks in aspect-based sentiment analysis. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Conference on Research and Development in Information Retrieval (SIGIR 2015). Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the stateof-the-art in sentiment analysis of tweets. In International Workshop on Semantic Evaluation, (SemEval@NAACL-HLT 2013). Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using CRFs with hidden variables. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2010). Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. 2017. Dynamic routing between capsules. In Conference on Neural Information Processing Systems (NIPS 2017). Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics. 556 Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In AAAI Conference on Artificial Intelligence (AAAI 2018). Duy-Tin Vo and Yue Zhang. 2015. 
Target dependent twitter sentiment classification with rich automatic features. In International Joint Conferences on Artificial Intelligence (IJCAI 2015). Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018a. Target-sensitive memory networks for aspect sentiment classification. In Annual Meeting of the Association for Computational Linguistics (ACL 2018). Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive neural conditional random fields for aspect-based sentiment analysis. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016b. Attention-based LSTM for aspectlevel sentiment classification. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). Yequan Wang, Aixin Sun, Jialong Han, Ying Liu, and Xiaoyan Zhu. 2018b. Sentiment analysis by capsules. In Conference on World Wide Web (WWW 2018). Liqiang Xiao, Honglun Zhang, Wenqing Chen, Yongkun Wang, and Yaohui Jin. 2018. MCapsNet: Capsule network for text with multi-task learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Annual Meeting of the Association for Computational Linguistics (ACL 2018). Jun Yang, Runqi Yang, Chongjun Wang, and Junyuan Xie. 2018a. Multi-entity aspect-based sentiment analysis with context, entity and aspect memory. In AAAI Conference on Artificial Intelligence (AAAI 2018). Min Yang, Wei Zhao, Jianbo Ye, Zeyang Lei, Zhou Zhao, and Soufei Zhang. 2018b. Investigating capsule networks with dynamic routing for text classification. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). Ningyu Zhang, Shumin Deng, Zhanlin Sun, Xi Chen, Wei Zhang, and Huajun Chen. 2018. Attentionbased capsule networks with dynamic routing for relation extraction. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Conference on Neural Information Processing Systems (NIPS 2015). Peisong Zhu and Tieyun Qian. 2018. Enhanced aspect level sentiment classification with auxiliary memory. In Conference on Computational Linguistics (COLING 2018).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5268–5277 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5268 Neural Aspect and Opinion Term Extraction with Mined Rules as Weak Supervision Hongliang Dai Department of CSE, HKUST [email protected] Yangqiu Song Department of CSE, HKUST [email protected] Abstract Lack of labeled training data is a major bottleneck for neural network based aspect and opinion term extraction on product reviews. To alleviate this problem, we first propose an algorithm to automatically mine extraction rules from existing training examples based on dependency parsing results. The mined rules are then applied to label a large amount of auxiliary data. Finally, we study training procedures to train a neural model which can learn from both the data automatically labeled by the rules and a small amount of data accurately annotated by human. Experimental results show that although the mined rules themselves do not perform well due to their limited flexibility, the combination of human annotated data and rule labeled auxiliary data can improve the neural model and allow it to achieve performance better than or comparable with the current state-of-the-art. 1 Introduction There are two types of words or phrases in product reviews (or reviews for services, restaurants, etc., we use “product reviews” throughout the paper for convenience) that are of particular importance for opinion mining: those that describe a product’s properties or attributes; and those that correspond to the reviewer’s sentiments towards the product or an aspect of the product (Hu and Liu, 2004; Liu, 2012; Qiu et al., 2011; Vivekanandan and Aravindan, 2014). The former are called aspect terms, and the latter are called opinion terms. For example, in the sentence “The speed of this laptop is incredible,” “speed” is an aspect term, and “incredible” is an opinion term. The task of aspect and opinion term extraction is to extract the above two types of terms from product reviews. Rule based approaches (Qiu et al., 2011; Liu et al., 2016) and learning based approaches (Jakob and Gurevych, 2010; Wang et al., 2016) are two major approaches to this task. Rule based approaches usually use manually designed rules based on the result of dependency parsing to extract the terms. An advantage of these approaches is that the aspect or opinion terms whose usage in a sentence follows some certain patterns can always be extracted. However, it is labor-intensive to design rules manually. It is also hard for them to achieve high performance due to the variability and ambiguity of natural language. Learning based approaches model aspect and opinion term extraction as a sequence labeling problem. While they are able to obtain better performance, they also suffer from the problem that significant amounts of labeled data must be used to train such models to reach their full potential, especially when the input features are not manually designed. Otherwise, they may even fail in very simple test cases (see Section 4.5 for examples). In this paper, to address above problems, we first use a rule based approach to extract aspect and opinion terms from an auxiliary set of product reviews, which can be considered as inaccurate annotation. These rules are automatically mined from the labeled data based on dependency parsing results. 
Then, we propose a BiLSTM-CRF (Bi-directional LSTM-Conditional Random Field) based neural model for aspect and opinion term extraction. This neural model is trained with both the human annotated data as ground truth supervision and the rule annotated data as weak supervision. We name our approach RINANTE (Rule Incorporated Neural Aspect and Opinion Term Extraction). We conduct experiments on three SemEval datasets that are frequently used in existing aspect and opinion term extraction studies. The results show that the performance of the neural model can 5269 be significantly improved by training with both the human annotated data and the rule annotated data. Our contributions are summarized as follows. • We propose to improve the effectiveness of a neural aspect and opinion term extraction model by training it with not only the human labeled data but also the data automatically labeled by rules. • We propose an algorithm to automatically mine rules based on dependency parsing and POS tagging results for aspect and opinion term extraction. • We conduct comprehensive experiments to verify the effectiveness of the proposed approach. Our code is available at https://github. com/HKUST-KnowComp/RINANTE. 2 Related Work There are mainly three types of approaches for aspect and opinion term extraction: rule based approaches, topic modeling based approaches, and learning based approaches. A commonly used rule based approach is to extract aspect and opinion terms based on dependency parsing results (Zhuang et al., 2006; Qiu et al., 2011). A rule in these approaches usually involves only up to three words in a sentence (Qiu et al., 2011), which limits its flexibility. It is also labor-intensive to design the rules manually. Liu et al. (2015b) propose an algorithm to select some rules from a set of previously designed rules, so that the selected subset of rules can perform extraction more accurately. However, different from the rule mining algorithm used in our approach, it is unable to discover rules automatically. Topic modeling approaches (Lin and He, 2009; Brody and Elhadad, 2010; Mukherjee and Liu, 2012) are able to get coarse-grained aspects such as food, ambiance, service for restaurants, and provide related words. However, they cannot extract the exact aspect terms from review sentences. Learning based approaches extract aspect and opinion terms by labeling each word in a sentence with BIO (Begin, Inside, Outside) tagging scheme (Ratinov and Roth, 2009). Typically, they first obtain features for each word in a sentence, then use them as the input of a CRF to get better sequence labeling results (Jakob and Gurevych, 2010; Wang et al., 2016). Word embeddings are commonly used features, hand-crafted features such as POS tag classes and chunk information can also be combined to yield better performance (Liu et al., 2015a; Yin et al., 2016). For example, Wang et al. (2016) construct a recursive neural network based on the dependency parsing tree of a sentence with word embeddings as input. The output of the neural network is then fed into a CRF. Xu et al. (2018) use a CNN model to extract aspect terms. They find that using both general-purpose and domainspecific word embeddings improves the performance. Our approach exploits unlabeled extra data to improve the performance of the model. This is related to semi-supervised learning and transfer learning. Some methods allow unlabeled data to be used in sequence labeling. For example, Jiao et al. (2006) propose semi-supervised CRF, Zhang et al. 
(2017) propose neural CRF autoencoder. Unlike our approach, these methods do not incorporate knowledge about the task while using the unlabeled data. Yang et al. (2017) propose three different transfer learning architectures that allow neural sequence tagging models to learn from both the target task and a different but related task. Different from them, we improve performance by utilizing the output of a rule based approach for the same problem, instead of another related task. Our approach is also related to the use of weakly labeled data (Craven and Kumlien, 1999), and is similar to the distant supervision approach used in relation extraction (Mintz et al., 2009). 3 RINANTE In this section, we introduce our approach RINANTE in detail. Suppose we have a human annotated dataset Dl and an auxiliary dataset Da. Dl contains a set of product reviews, each with all the aspect and opinion terms in it labeled. Da only contains a set of unlabeled product reviews. The reviews in Dl and Da are all for a same type or several similar types of products. Usually, the size of Da is much larger than Dl. Then, RINANTE consists of the following steps. 1. Use Dl to mine a set of aspect extraction rules Ra and a set of opinion extraction rules Ro with a rule mining algorithm. 2. Use the mined rules Ra and Ro to extract terms for all the reviews in Da, which can 5270 The horrible system . nsubj cop is det Figure 1: The dependency relations between the words in sentence “The system is horrible.” Each edge is a relation from the governor to the dependent. then be considered a weakly labeled dataset D′ a. 3. Train a neural model with Dl and D′ a. The trained model can be used on unseen data. Next, we introduce the rule mining algorithm used in Step 1 and the neural model in Step 3. 3.1 Rule Mining Algorithm We mine aspect and opinion term extraction rules that are mainly based on the dependency relations between words, since their effectiveness has been validated by existing rule based approaches (Zhuang et al., 2006; Qiu et al., 2011). We use (rel, wg, wd) to denote that the dependency relation rel exists between the word wg and the word wd, where wg is the governor and wd is the dependent. An example of the dependency relations between different words in a sentence is given in Figure 1. In this example, “system” is an aspect term, and “horrible” is an opinion term. A commonly used rule to extract aspect terms is (nsubj, O, noun∗), where we use O to represent a pattern that matches any word that belongs to a predefined opinion word vocabulary; noun∗ matches any noun word and the ∗means that the matched word is output as the aspect word. With this rule, the aspect term “system” in the example sentence can be extracted if the opinion term “horrible” can be matched by O. The above rule involves two words. In our rule mining algorithm, we only mine rules that involve no more than three words, because rules that involve many words may contribute very little to recall but are computationally expensive to mine. Moreover, determining their effectiveness requires a lot more labeled data since such patterns do not occur frequently. Since the aspect term extraction rule mining algorithm and the opinion term extraction rule mining algorithm are similar, we only introduce the former in detail. 
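To make the pattern notation concrete, the following Python sketch shows how a single-relation rule such as (nsubj, O, noun*) could be matched against the dependency relations of a parsed sentence. This is only an illustration of the notation, not the released RINANTE code: the opinion vocabulary, the POS-to-word-type mapping, and the function names are simplifying assumptions, and the symmetric case where the governor (rather than the dependent) is the aspect word is omitted.

    OPINION_WORDS = {"horrible", "great", "incredible"}  # stand-in for the predefined opinion word vocabulary

    def ps(pos_tag):
        """Map a fine-grained POS tag to a coarse word type (noun, verb, adj, other)."""
        if pos_tag.startswith("NN"):
            return "noun"
        if pos_tag.startswith("VB"):
            return "verb"
        if pos_tag.startswith("JJ"):
            return "adj"
        return "other"

    def match_s1_rule(rule, deps, pos_tags):
        """Apply a rule of the form (rel, governor_pattern, word_type + '*').

        deps is a list of (rel, governor, dependent) triples from dependency parsing,
        pos_tags maps each word to its POS tag.  The '*' side marks the extracted aspect word.
        """
        rel_r, gov_r, dep_r = rule
        extracted = []
        for rel, wg, wd in deps:
            if rel != rel_r:
                continue
            # governor side: 'O' matches any opinion word, otherwise require the literal word
            if gov_r == "O" and wg not in OPINION_WORDS:
                continue
            if gov_r != "O" and wg != gov_r:
                continue
            # dependent side: e.g. 'noun*' requires a noun and outputs it as the aspect word
            if dep_r.endswith("*") and ps(pos_tags[wd]) == dep_r[:-1]:
                extracted.append(wd)
        return extracted

    # "The system is horrible."
    deps = [("nsubj", "horrible", "system"), ("cop", "horrible", "is"), ("det", "system", "The")]
    pos = {"The": "DT", "system": "NN", "is": "VBZ", "horrible": "JJ"}
    print(match_s1_rule(("nsubj", "O", "noun*"), deps, pos))  # ['system']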
The algorithm contains two main parts: 1) Generating rule candidates based on a training set; 2) Filtering the rule Algorithm 1 Aspect term extraction rule candidate generation Input: A set of sentences St with all aspect terms extracted; integer T. Output: RC 1: Initialize list1, list2 as empty lists 2: for si ∈St do 3: for ai ∈si.aspect terms do 4: D1 = RelatedS1Deps(ai, si.deps) 5: D2 = RelatedS2Deps(ai, si.deps) 6: list1 += PatternsFromS1Deps(D1) 7: list2 += PatternsFromS2Deps(D2) 8: end for 9: end for 10: RC1 = FrequentPatterns(list1, T) 11: RC2 = FrequentPatterns(list2, T) 12: RC = RC1 + RC2 candidates based on their effectiveness on a validation set. The pseudocode for generating aspect term extraction rule candidates is in Algorithm 1. In Algorithm 1, si.aspect terms is a list of the manually annotated aspect terms in sentence si, si.deps is the list of the dependency relations obtained after performing dependency parsing. list1 and list2 contain the possible term extraction patterns obtained from each sentence that involve two and three words, respectively. The function RelatedS1Deps on Line 4 returns a list of dependency relations. Either the governor or the dependent of each dependency relation in this list has to be a word in the aspect term. The function PatternsFromS1Deps is then used to get aspect term extraction patterns that can be obtained from the dependency relations in this list. Let POS(wd) be the POS tag of wd; ps(w) be a function that returns the word type of w based on its POS tag, e.g., noun, verb, etc. Then for each (rel, wg, wd), if wd is a word in the aspect term, PatternsFromS1Deps may generate the following patterns: (rel, wg, ps(wd)∗), (rel, POS(wg), ps(wd)∗) and (rel, O, ps(wd)∗). For example, for (nsubj, “horrible”, “system”), it generates three patterns: (nsubj, “horrible”, noun∗), (rel, JJ, noun∗) and (rel, O, noun∗). Note that (rel, O, ps(wd)∗) is only generated when wg belongs to a predefined opinion word vocabulary. Also, we only consider two types of words while extracting aspect terms: nouns and verbs, i.e., we 5271 only generate the above patterns when ps(wg) returns noun or verb. The patterns generated when wg is the word in the aspect term are similar. The function RelatedS2Deps on Line 5 returns a list that contains pairs of dependency relations. The two dependency relations in each pair must have one word in common, and one of them is obtained with RelatedS1Deps. Afterwards, PatternsFromS2Deps generates patterns based on the dependency relation pairs. For example, the pair {(nsubj, “like”, “I”), (dobj, “like”, “screen”)} can be in the list returned by RelatedS2Deps, because “like” is the shared word, and (dobj, “like”, “screen”) can be obtained with RelatedS1Deps since “screen” is an aspect term. A pattern generated based on this relation pair can be, e.g., {(nsubj, “like”, “I”), (dobj, “like”, noun∗)}. The operations of PatternsFromS2Deps is similar with PatternsFromS1Deps except patterns are generated based on two dependency relations. Finally, the algorithm obtains the rule candidates with the function FrequentPatterns, which counts the occurrences of the patterns and only return those that occur more than T times. T is a predefined parameter that can be determined based on the total number of sentences in S. RC1 and RC2 thus contains candidate patterns based on single dependency relations and dependency relation pairs, respectively. They are merged to get the final rule candidates list RC. 
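As an illustration of the candidate generation step, the sketch below mirrors PatternsFromS1Deps and FrequentPatterns for a single dependency relation whose dependent lies inside an aspect term. It is a simplified reading of Algorithm 1 rather than the authors' implementation; the function signatures and the way opinion-word membership is passed in are assumptions.

    from collections import Counter

    def patterns_from_s1_dep(rel, wg, wd_type, wg_pos, wg_is_opinion_word):
        """Candidate patterns from one relation (rel, wg, wd) whose dependent wd is in an aspect term.

        Keeps the literal governor, generalizes it to its POS tag, and, only when the governor
        is an opinion word, generalizes it to the opinion-word placeholder O.
        """
        patterns = [(rel, wg, wd_type + "*"), (rel, wg_pos, wd_type + "*")]
        if wg_is_opinion_word:
            patterns.append((rel, "O", wd_type + "*"))
        return patterns

    def frequent_patterns(pattern_list, T):
        """Keep candidate patterns occurring more than T times over the training set."""
        counts = Counter(pattern_list)
        return [pattern for pattern, count in counts.items() if count > T]

    # (nsubj, "horrible", "system") with "system" inside an aspect term yields
    # (nsubj, "horrible", noun*), (nsubj, JJ, noun*) and (nsubj, O, noun*)
    print(patterns_from_s1_dep("nsubj", "horrible", "noun", "JJ", True))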
Algorithm 2 Aspect term extraction with mined rules Input: Sentence s; rule pattern r; a set of phrases unlikely to be aspect terms Vfil. Output: A 1: Initialize A as en empty list. 2: for (rel, wg, wd) ∈s.deps do 3: if (rel, wg, wd) does not matches r then 4: continue 5: end if 6: if the governor of r is the aspect word then 7: term = TermFrom(wg) 8: else 9: term = TermFrom(wd) 10: end if 11: if term /∈Vfil then 12: A.add(term) 13: end if 14: end for We still do not know the precision of the rule candidates obtained with Algorithm 1. Thus in the second part of our rule mining algorithm, for each rule candidate, we use it to extract aspect terms from another annotated set of review sentences (a validation set) and use the result to estimate its precision. Then we filter those whose precisions are less than a threshold p. The rest of the rules are the final mined rules. The algorithm for extracting aspect terms from a sentence s with a rule pattern r that contains one dependency relation is shown in Algorithm 2. Since a rule pattern can only match one word in the aspect term, the function TermFrom in Algorithm 2 tries to obtain the whole term based on this matched seed word. Specifically, it simply returns the word ws when it is a verb. But when ws is a noun, it returns a noun phrase formed by the consecutive sequence of noun words that includes ws. Vfil is a set of phrases that are unlikely to be aspect terms. It includes the terms extracted with the candidate rules from the training set that are always incorrect. The algorithm for extracting aspect terms with a rule pattern that contains a dependency relation pair is similar. In practice, we also construct a dictionary that includes the frequently used aspect terms in the training set. This dictionary is used to extract aspect terms through direct matching. The opinion term extraction rule mining algorithm is similar. But rule patterns related to an opinion word vocabulary are not generated. When extracting opinion terms based on rules, three types of words are considered as possible opinion terms: adjectives, nouns and verbs. Time Complexity Let L be the maximum number of words in an aspect/opinion term, M be the maximum number of words in a sentence, N be the total number of aspect terms in the training set. Then, the time complexity of the rule candidate generation part is O(LNM2). There can be at most LNM2/T candidate rules, so the time complexity of the rule filtering part of the algorithm is O(LNM4/T). In practice, the algorithm is fast since the actual number of rule candidates obtained is much less than LNM2/T. 3.2 Neural Model After the rules are mined, they are applied to a large set of product reviews Da to obtain the aspect and opinion terms in each sentence. The results are then transformed into BIO tag sequences 5272 Word Embedding BiLSTM CRF-RA CRF-RO CRF-M (a) Shared BiLSTM Model. Word Embedding CRF-RA CRF-RO CRF-M BiLSTM-A BiLSTM-O (b) Double BiLSTM Model. Figure 2: The structures of two neural aspect and opinion term extraction models. in order to be used by a neural model. Since the mined rules are inaccurate, there can be conflicts in the results, i.e., a word may be extracted as both an aspect term and an opinion term. Thus, we need two tag sequences for each sentence in Da to represent the result, one for the aspect terms and the other for the opinion terms. Our neural model should be able to learn from the above two tag sequences and a set of manually labeled data. 
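For instance, the weak labels produced by the rules might be serialized as two BIO sequences per sentence along the following lines; the helper function and the example sentence are hypothetical, type suffixes (e.g., B-ASP) and token-offset bookkeeping are omitted for brevity.

    def terms_to_bio(tokens, terms):
        """Convert extracted term phrases into a BIO tag sequence over the tokens.

        Each term is matched as a consecutive token span and labeled B/I; all other tokens are O.
        """
        tags = ["O"] * len(tokens)
        for term in terms:
            words = term.split()
            for i in range(len(tokens) - len(words) + 1):
                if tokens[i:i + len(words)] == words:
                    tags[i] = "B"
                    tags[i + 1:i + len(words)] = ["I"] * (len(words) - 1)
        return tags

    tokens = "The screen is great but the speakers are tinny .".split()
    aspect_tags = terms_to_bio(tokens, ["screen", "speakers"])   # from aspect extraction rules
    opinion_tags = terms_to_bio(tokens, ["great", "tinny"])      # from opinion extraction rules
    # Two separate sequences, so a word may in principle be tagged in both.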
Thus there are three tasks: predicting the terms extracted by the aspect term extraction rules; predicting the terms extracted by the opinion term extraction rules; predicting the manual labeling results. We denote these three tasks as ta, to, and tm, respectively. Note that the review sentences in the manually labeled data only need one tag sequence to indicate both aspect terms and opinion terms, since no words in the accurately labeled data can be both an aspect term and an opinion term. Then we can train a neural network model with both ground truth supervision and weak supervision. We propose two BiLSTMCRF (Huang et al., 2015) based models that can be trained based on these three tasks. Their structures are shown in Figure 2. We call the model in Figure 2a Shared BiLSTM Model and the model in Figure 2b Double BiLSTM Model. Both models use pre-trained embeddings of the words in a sentence as input, then a BiLSTM-CRF structure is used to predict the labels of each word. They both use three linearchain CRF layers for the three different prediction tasks: CRF-RA is for task ta; CRF-RO is for task to; CRF-M is for task tm. In Shared BiLSTM Model, the embedding of each word is fed into a BiLSTM layer that is share by the three CRF layers. Double BiLSTM Model has two BiLSTM layers: BiLSTM-A is used for ta and tm; BiLSTM-O is used for to and tm. When they are used for tm, the concatenation of the output vectors of BiLSTM-A and BiLSTM-O for each word in the sequence are used as the input of CRF-M. Training It is not straightforward how to train these two models. We use two different methods: 1) train on the three tasks ta, to and tm alternately; 2) pre-train on ta and to, then train on tm. In the first method, at each iteration, each of the three tasks is used to update the model parameters for one time. In the second method, the model is first pre-trained with ta and to, with these two tasks trained alternately. The resultant model is then trained with tm. We perform early stopping for training. While training with the first method or training on tm with the second method, early stopping is performed based on the performance (the sum of the F1 scores for aspect term extraction and opinion term extraction) of tm on a validation set. In the pre-training part of the second method, it is based on the sum of the F1 scores of ta and to. We also add dropout layers (Srivastava et al., 2014) right after the BiLSTM layers and the word embedding layers. 4 Experiments This section introduces the main experimental results. We also conducted some experiments related to BERT (Devlin et al., 2018), which are included in the appendix. 4.1 Datasets We use three datasets to evaluate the effectiveness of our aspect and opinion term extraction approach: SemEval-2014 Restaurants, SemEval2014 Laptops, and SemEval-2015 Restaurants. They are originally used in the SemEval semantic analysis challenges in 2014 and 2015. Since the original datasets used in SemEval do not have the annotation of the opinion terms in each sentence, we use the opinion term annotations provided by (Wang et al., 2016) and (Wang et al., 2017). Table 1 lists the statistics of these datasets, where we use SE14-R, SE14-L, and SE15-R to represent SemEval-2014 Restaurants, SemEval-2014 Laptops, and SemEval-2015 Restaurants, respectively. 
Besides the above datasets, we also use a Yelp 5273 Dataset #Sentences #AT #OT SE14-R (Train) 3,044 3,699 3,528 SE14-R (Test) 800 1,134 1,021 SE14-L (Train) 3,048 2,373 2,520 SE14-L (Test) 800 654 678 SE15-R (Train) 1,315 1,279 1,216 SE15-R (Test) 685 597 517 Table 1: Dataset statistics. AT: aspect terms; OT: opinion terms. dataset1 and an Amazon Electronics dataset (He and McAuley, 2016)2 as auxiliary data to be annotated with the mined rules. They are also used to train word embeddings. The Yelp dataset is used for the restaurant datasets SE14-R and SE15-R. It includes 4,153,150 reviews that are for 144,072 different businesses. Most of the businesses are restaurants. The Amazon Electronics dataset is used for the laptop dataset SE14-L. It includes 1,689,188 reviews for 63,001 products such as laptops, TV, cell phones, etc. 4.2 Experimental Setting For each of the SemEval datasets, we split the training set and use 20% as a validation set. For SE14-L, we apply the mined rules on all the laptop reviews of the Amazon dataset to obtain the automatically annotated auxiliary data, which includes 156,014 review sentences. For SE14-R and SE15-R, we randomly sample 4% of the restaurant review sentences from the Yelp dataset to apply the mined rules on, which includes 913,443 sentences. For both automatically annotated datasets, 2,000 review sentences are used to form a validation set, the rest are used to form the training set. They are used while training the neural models of RINANTE. We use Stanford CoreNLP (Manning et al., 2014) to perform dependency parsing and POS tagging. The frequency threshold integer T in the rule candidate generation part of the rule mining algorithm is set to 10 for all three datasets. The precision threshold p is set to 0.6. We use the same opinion word vocabulary used in (Hu and Liu, 2004) for aspect term extraction rules. We train two sets of 100 dimension word embeddings with word2vec (Mikolov et al., 2013) on all the reviews of the Yelp dataset and the Amazon dataset, respectively. The hidden layer sizes of the BiL1https://www.yelp.com/dataset/challenge 2http://jmcauley.ucsd.edu/data/amazon/ STMs are all set to 100. The dropout rate is set to 0.5 for the neural models. 4.3 Performance Comparison To verify the effectiveness of our approach, we compare it with several existing approaches. • DP (Double Propagation) (Qiu et al., 2011): A rule based approach that uses eight manually designed rules to extract aspect and opinion terms. It only considers noun aspect terms and adjective opinion terms. • IHS RD, DLIREC, and Elixa: IHS RD (Chernyshevich, 2014) and DLIREC (Toh and Wang, 2014) are the best performing systems at SemEval 2014 on SE14-L and SE14R, respectively. Elixa (Vicente et al., 2017) is the best performing system at SemEval 2015 on SE15-R. All these three systems use rich sets of manually designed features. • WDEmb and WDEmb*: WDEmb (Yin et al., 2016) first learns word and dependency path embeddings without supervision. The learned embeddings are then used as the input features of a CRF model. WDEmb* adds manually designed features to WDEmb. • RNCRF: RNCRF (Wang et al., 2016) uses a recursive neural network model based the dependency parsing tree of a sentence to obtain the input features for a CRF model. • CMLA: CMLA (Wang et al., 2017) uses an attention based model to get the features for aspect and opinion term extraction. It intends to capture the direct and indirect dependency relations among aspect and opinion terms through attentions. 
Our experimental setting about word embeddings and the splitting of the training sets mainly follows (Yin et al., 2016), which is different from the setting used in (Wang et al., 2016) for RNCRF and (Wang et al., 2017) for CMLA. For fair comparison, we also run RNCRF and CMLA with the code released by the authors under our setting. • NCRF-AE (Zhang et al., 2017): It is a neural autoencoder model that uses CRF. It is able to perform semi-supervised learning for sequence labeling. The Amazon laptop reviews 5274 SE14-R SE14-L SE15-R Approach Aspect Opinion Aspect Opinion Aspect Opinion DP (Qiu et al., 2011) 38.72 65.94 19.19 55.29 27.32 46.31 IHS RD (Chernyshevich, 2014) 79.62 74.55 DLIREC (Toh and Wang, 2014) 84.01 73.78 Elixa (Vicente et al., 2017) 70.04 WDEmb (Yin et al., 2016) 84.31 74.68 69.12 WDEmb* (Yin et al., 2016) 84.97 75.16 69.73 RNCRF (Wang et al., 2016) 82.23 83.93 75.28 77.03 65.39 63.75 CMLA (Wang et al., 2017) 82.46 84.67 73.63 79.16 68.22 70.50 NCRF-AE (Zhang et al., 2017) 83.28 85.23 74.32 75.44 65.33 70.16 HAST (Li et al., 2018) 85.61 79.52 69.77 DE-CNN (Xu et al., 2018) 85.20 81.59 68.28 Mined Rules 70.82 79.60 67.67 76.10 57.67 64.29 RINANTE (No Rule) 84.06 84.59 73.47 75.41 66.17 68.16 RINANTE-Shared-Alt 86.76 86.05 77.92 79.20 67.47 71.41 RINANTE-Shared-Pre 85.09 85.63 79.16 79.03 68.15 70.44 RINANTE-Double-Alt 85.80 86.34 78.59 78.94 67.42 70.53 RINANTE-Double-Pre 86.45 85.67 80.16 81.96 69.90 72.09 Table 2: Aspect and opinion term extraction performance of different approaches. F1 score is reported. IHS RD, DLIREC, Elixa and WDEmb* use manually designed features. For different versions of RINANTE, “Shared” and “Double” means shared BiLSTM model and double BiLSTM model, respectively; “Alt” and “Pre” means the first and the second training method, respectively. and the Yelp restaurant reviews are also used as unlabeled data for this approach. • HAST (Li et al., 2018): It proposes to use Truncated History-Attention and Selective Transformation Network to improve aspect extraction. • DE-CNN (Xu et al., 2018): DE-CNN feeds both general-purpose embeddings and domain-specific embeddings to a Convolutional Neural Network model. We also compare with two simplified versions of RINANTE: directly using the mined rules to extract terms; only using human annotated data to train the corresponding neural model. Specifically, the second simplified version uses a BiLSTMCRF structured model with the embeddings of each word in a sentence as input. This structure is also studied in (Liu et al., 2015a). We name this approach RINANTE (no rule). The experimental results are shown in Table 2. From the results, we can see that the mined rules alone do not perform well. However, by learning from the data automatically labeled by these rules, all four versions of RINANTE achieves better performances than RINANTE (no rule). This verifies that we can indeed use the results of the mined rules to improve the performance of neural models. Moreover, the improvement over RINANTE (no rule) can be especially significant on SE14-L and SE15-R. We think this is because SE14-L is relatively more difficult and SE15-R has much less manually labeled training data. Among the four versions of RINANTE, RINANTE-Double-Pre yields the best performance on SE14-L and SE15-R, while RINANTEShared-Alt is slightly better on SE14-R. Thus we think that for exploiting the results of the mined rules, using two separated BiLSTM layers for aspect terms and opinion terms works more stably than using a shared BiLSTM layer. 
Also, for both models, it is possible to get good performance with both of the training methods we introduce. In general, RINANTE-Double-Pre performs more stable than the other three versions, and thus is suggested to be used in practice. We can also see from Table 2 that the rules mined with our rule mining algorithm performs much better than Double Propagation. This is because our algorithm is able to mine hundreds of effective rules, while Double Propagation only has eight manually designed rules. 5275 Dataset #ATER #OTER #EAT #EOT SE14-R 431 618 1,453 1,205 SE14-L 157 264 670 665 SE15-R 133 193 818 578 Table 3: Number of mined rules on each dataset. ATER means aspect term extraction rules; OTER means opinion term extraction rules; EAT and EOT mean the extracted aspect terms and the extracted opinion terms on the corresponding test set, respectively. Rule Pattern Matched Example (nsubj, O, noun∗) The OS is great. (amod, noun∗, O) Long battery life. (dobj, “has”, noun∗) It has enough memory to run my business. {(nsubj, V BN, noun∗), (case, noun∗, with)} I am fully satisfied with the performance. Table 4: Mined aspect extraction rule examples. Shared words in dependency relation pairs are underlined. Aspect terms are in boldface. O matches predefined opinion words; V BN is a POS tag. noun∗means the corresponding noun phrase that includes this word should be extracted. Compared with the other approaches, RINANTE only fails to deliver the best performance on the aspect term extraction part of SE14-L and SE15-R. On SE14-L, DE-CNN performs better. However, our approach extracts both aspect terms and opinion terms, while DE-CNN and HAST only focus on aspect terms. On SE15-R, the best performing system for aspect term extraction is Elixa, which relies on handcrafted features 4.4 Mined Rule Results The numbers of rules extracted by our rule mining algorithm and the number of aspect and opinion terms extracted by them on the test sets are listed in Table 3. It takes less than 10 seconds to mine these rules on each dataset on a computer with Intel i7-7700HQ 2.8GHz CPU. The least amount of rules are mined on SE15-R, since this dataset contains the least amount of training samples. This also causes the mined rules to have inferior performance on this dataset. We also show some example aspect extraction rules mined from SE14-L in Table 4, along with the example sentences they can match and extract terms from. The “intentions” of the first, second, and third rules are easy to guess by simply looking at the patterns. As a matter of fact, the first rule and the second rule are commonly used in rule based aspect term extraction approaches (Zhuang et al., 2006; Qiu et al., 2011). However, we looked through all the mined rules and find that actually most of them are like the fourth rule in Table 4, which is hard to design manually through inspecting the data. This also shows the limitation of designing such rules by human beings. 4.5 Case Study To help understand how our approach works and gain some insights about how we can further improve it, we show in Table 5 some example sentences from SE14-L, alone with the aspect terms extracted by RINANTE (no rule), the mined rules, RINANTE (RINANTE-Double-Pre), and DE-CNN. In the first row, the aspect term “SuperDrive” can be easily extracted by a rule based approach. However, without enough training data, RINANTE (no rule) still fails to recognize it. In the second row, we see that the mined rules can also help to avoid extracting incorrect terms. 
The third row is also interesting: while the mined rules only extract “microphones”, RINANTE is still able to obtain the correct phrase “external microphones” instead of blindly following the mined rules. The sentence in the last row also has an aspect term that can be easily extracted with a rule. The result of RINANTE is also correct. But both RINANTE (no rule) and DE-CNN fails to extract it. 5 Conclusion and Future Work In this paper, we present an approach to improve the performance of neural aspect and opinion term extraction models with automatically mined rules. We propose an algorithm to mine aspect and opinion term extraction rules that are based on the dependency relations of words in a sentence. The mined rules are used to annotate a large unlabeled dataset, which is then used together with a small set of human annotated data to train better neural models. The effectiveness of this approach is verified through our experiments. For future work, we plan to apply the main idea of our approach to other tasks. Acknowledgments This paper was supported by WeChat-HKUST WHAT Lab and the Early Career Scheme (ECS, No. 26206717) from Research Grants Council in 5276 Sentence RINANTE (no rule) Mined Rules RINANTE DE-CNN The SuperDrive is quiet. SuperDrive SuperDrive SuperDrive My life has been enriched since I have been using Apple products. life It would seem that its Mac OS 10.9 does not handle external microphones properly. Mac OS 10.9 Mac OS 10.9; microphones Mac OS 10.9; external microphones Mac OS 10.9; external microphones I love the form factor. form factor form factor Table 5: Example sentences and the aspect terms extracted by different approaches. The correct aspect terms are in boldface in the sentences. “-” means no aspect terms are extracted. Hong Kong. We also thank Intel Corporation for supporting our deep learning related research. References Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of NAACL-HLT, pages 804–812. Association for Computational Linguistics. Maryna Chernyshevich. 2014. Ihs r&d belarus: Crossdomain extraction of product features using crf. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 309– 313. Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB, volume 1999, pages 77–86. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of WWW, pages 507–517. International World Wide Web Conferences Steering Committee. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the KDD, pages 168–177. ACM. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single-and cross-domain setting with conditional random fields. In Proceedings of EMNLP, pages 1035–1045. Association for Computational Linguistics. Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semisupervised conditional random fields for improved sequence segmentation and labeling. In Proceedings of ACL, pages 209–216. 
Association for Computational Linguistics. Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. In Proceedings of IJCAI, pages 4194–4200. AAAI Press. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of CIKM, pages 375–384. ACM. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1–167. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015a. Fine-grained opinion mining with recurrent neural networks and word embeddings. In Proceedings of EMNLP, pages 1433–1443. Qian Liu, Zhiqiang Gao, Bing Liu, and Yuanlin Zhang. 2015b. Automated rule selection for aspect extraction in opinion mining. In Proceedings of IJCAI, volume 15, pages 1291–1297. Qian Liu, Bing Liu, Yuanlin Zhang, Doo Soon Kim, and Zhiqiang Gao. 2016. Improving opinion aspect extraction using semantic similarity and aspect associations. In AAAI, pages 2986–2992. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of ACL: system demonstrations, pages 55–60. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in NIPS, pages 3111–3119. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACL, pages 1003–1011. Association for Computational Linguistics. Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of ACL, pages 339–348. Association for Computational Linguistics. 5277 Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, 37(1):9–27. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of CoNLL, pages 147–155. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Zhiqiang Toh and Wenting Wang. 2014. Dlirec: Aspect term extraction and term polarity classification system. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 235–240. I˜naki San Vicente, Xabier Saralegi, and Rodrigo Agerri. 2017. Elixa: A modular and flexible absa platform. arXiv preprint arXiv:1702.01944. K Vivekanandan and J Soonu Aravindan. 2014. Aspect-based opinion mining: A survey. International Journal of Computer Applications, 106(3). Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of EMNLP, pages 616–626. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proceedings of AAAI, pages 3316–3322. Hu Xu, Bing Liu, Lei Shu, and S Yu Philip. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. In Proceedings of ACL, volume 2, pages 592–598. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. 
Transfer learning for sequence tagging with hierarchical recurrent networks. In Proceedings of ICLR. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In Proceedings of IJCAI, pages 2979– 2985. AAAI Press. Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, and Dan Goldwasser. 2017. Semi-supervised structured prediction with neural crf autoencoder. In Proceedings of EMNLP, pages 1701–1711. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of CIKM, pages 43–50. ACM.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5278–5283 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5278 Cost-sensitive Regularization for Label Confusion-aware Event Detection Hongyu Lin1,3, Yaojie Lu1,3, Xianpei Han1,2,∗, Le Sun1,2 1Chinese Information Processing Laboratory 2State Key Laboratory of Computer Science Institute of Software, Chinese Academy of Sciences, Beijing, China 3University of Chinese Academy of Sciences, Beijing, China {hongyu2016,yaojie2017,xianpei,sunle}@iscas.ac.cn Abstract In supervised event detection, most of the mislabeling occurs between a small number of confusing type pairs, including trigger-NIL pairs and sibling sub-types of the same coarse type. To address this label confusion problem, this paper proposes cost-sensitive regularization, which can force the training procedure to concentrate more on optimizing confusing type pairs. Specifically, we introduce a costweighted term into the training loss, which penalizes more on mislabeling between confusing label pairs. Furthermore, we also propose two estimators which can effectively measure such label confusion based on instance-level or population-level statistics. Experiments on TAC-KBP 2017 datasets demonstrate that the proposed method can significantly improve the performances of different models in both English and Chinese event detection. 1 Introduction Automatic event extraction is a fundamental task in information extraction. Event detection, aiming to identify trigger words of specific types of events, is a vital step of event extraction. For example, from sentence “Mary was injured, and then she died”, an event detection system is required to detect a Life:Injure event triggered by “injured” and a Life:Die event triggered by “died”. Recently, neural network-based supervised models have achieved promising progress in event detection (Nguyen and Grishman, 2015; Chen et al., 2015; Ghaeini et al., 2016). Commonly, these methods regard event detection as a wordwise classification task with one NIL class for tokens do not trigger any event. Specifically, a neural network automatically extracts high-level features and then feed them into a classifier to categorize words into their corresponding event sub∗Corresponding author. BC CT CR MT NIL CC BC 41.3 14.4 2.1 1.6 39.0 1.7 CT 8.5 42.7 4.7 2.6 40.6 0.9 CR 5.7 7.3 50.0 1.1 32.3 2.9 MT 3.0 7.7 6.1 28.7 51.3 3.2 Table 1: Prediction percentage heatmap of triggers with Contact coarse type. Row labels are the golden label and the column labels indicate the prediction. BC: Broadcast; CT: Conctact(sub-type); CR: Correspondence; MT: Meet; CC: Other cross coarse-type errors. types (or NIL). Optimization criteria of such models often involves in minimizing cross-entropy loss, which equals to maximize the likelihood of making correct predictions on the training data. However, we find that in supervised event detection, most of the mislabeling occurs between a small number of confusing type pairs. We refer to this phenomenon as label confusion. Specifically, there are mainly two types of label confusion in event detection: 1) trigger/NIL confusion; 2) sibling sub-types confusion. For example, both Transaction:Transfer-money and Transaction:Transfer-ownership events are frequently triggered by word “give”. Besides, in many cases “give” does not serve as a trigger word. 
Table 1 shows the classification results of a state-of-the-art event detection model (Chen et al., 2015) on all event triggers with coarse type of Contact on TAC-KBP 2017 English Event Detection dataset. We can see that the model severely suffers from two types of label confusion mentioned above: more than 50% mislabeling happens between trigger/NIL decision due to the ambiguity of natural language. Furthermore, the majority of remaining errors are between sibling sub-types of the same coarse type because of their semantic relatedness (Liu et al., 2017b). Similar results are also observed in other event detection datasets such as ACE2005 (Liu et al., 2018a). Therefore, 5279 it is critical to enhance the supervised event detection models by taking such label confusion problem into consideration. In this paper, inspired by cost-sensitive learning (Ling and Sheng, 2011), we introduce costsensitive regularization to model and exploit the label confusion during model optimization, which can make the training procedure more sensitive to confusing type pairs. Specifically, the proposed regularizer reshapes the loss function of model training by penalizing the likelihood of making wrong predictions with a cost-weighted term. If instances of class i are more frequently misclassified into class j, we assign a higher cost to this type pair to make the model intensively learn to distinguish between them. Consequently, the training procedure of models not only considers the probability of making correct prediction, but also tries to separate confusing type pairs with a larger margin. Furthermore, in order to estimate such cost automatically, this paper proposes two estimators based on population-level or instancelevel statistics. We conducted experiments on TAC-KBP 2017 Event Nugget Detection datasets. Experiments show that our method can significantly reduce the errors between confusing type pairs, and therefore leads to better performance of different models in both English and Chinese event detection. To the best of our knowledge, this is the first work which tackles with the label confusion problem of event detection and tries to address it in a cost-sensitive regularization paradigm. 2 Cost-sensitive Regularization for Neural Event Detection 2.1 Neural Network Based Event Detection The state-of-the-art neural network models commonly transform event detection into a wordwise classification task. Formally, let D = {(xi, yi)|i = 1, 2, ..., n} denote n training instances, P(y|x; θ) is the neural network model parameterized by θ, which takes representation (feature) x as input and outputs the probability that x is a trigger of event sub-type y (or NIL). Training procedure of such models commonly involves in minimizing following cross-entropy loss: LCE(θ) = − X (xi,yi)∈D log P(yi|xi; θ) (1) which corresponds to maximize the log-likelihood of the model making the correct prediction on all training instances and does not take the confusion between different type pairs into consideration. 2.2 Cost-sensitive Regularization As discussed above, the key to improve event detection performance is to solve the label confusion problem, i.e., to guide the training procedure to concentrate on distinguishing between more confusing type pairs such as trigger/NIL pairs and sibling sub-event pairs. To this end, we propose cost-sensitive regularization, which reshapes the training loss with a cost-weighted term of the loglikelihood of making wrong prediction. 
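Before the formal definition, a minimal PyTorch-style sketch may help fix ideas. The code below combines the word-wise cross-entropy of Equation (1) with a cost-weighted penalty on the log-likelihoods of wrong labels, instantiating the cost with the model's own mislabeling probability (the instance-level estimator introduced later in Section 3.2). The function name, the averaging over words (which only rescales the weight lam), and keeping the cost factor inside the computation graph are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def cost_sensitive_loss(logits, gold, lam=1.0):
        # logits: [num_words, num_classes] scores over event sub-types plus NIL
        # gold:   [num_words] gold class indices
        log_p = F.log_softmax(logits, dim=-1)   # log P(y | x; theta)
        p = log_p.exp()                         # P(y | x; theta)

        # Cross-entropy term, averaged over words instead of summed
        l_ce = F.nll_loss(log_p, gold)

        # Cost-weighted penalty on wrong labels, here with the instance-level cost
        # C(y_i, y_j; x_i) = P(y_j | x_i; theta): mask out the gold class and
        # accumulate C * log P over the remaining classes.
        wrong = torch.ones_like(p)
        wrong.scatter_(1, gold.unsqueeze(1), 0.0)
        l_cs = (wrong * p * log_p).sum(dim=-1).mean()

        return l_ce + lam * l_cs

    # usage: loss = cost_sensitive_loss(model(token_features), gold_labels, lam=1.0)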
Formally, the proposed regularizer is defined as: LCS(θ) = X (xi,yi)∈D X yj̸=yi C(yi, yj; xi) log P(yj|xi; θ) (2) where C(yi, yj; x) is a positive cost of mislabeling an instance x with golden label yi into label yj. A higher C(yi, yj; x) is assigned if yi and yj is a more confusing type pair (i.e., more easily mislabeled by the current model). Therefore, the costsensitive regularizer will make the training procedure pay more attention to distinguish between confusing type pairs because they have larger impact on the training loss. Finally, the entire optimization objective can be written as: L(θ) = LCE(θ) + λLCS(θ) (3) where λ is a hyper-parameter that controls the relative impact of our cost-sensitive regularizer. 3 Cost Estimation Obviously it is critical for the proposed costsensitive regularization to have an accurate estimation of the cost C(yi, yj; x). In this section, we propose two approaches for this issue based on population-level or instance-level statistics. 3.1 Population-level Estimator A straightforward approach for measuring such costs is to use the relative mislabeling risk on the dataset. Therefore our population-level cost estimator is defined as: CP OP (yi, yj; xi) = #(yi, yj) P j #(yi, yj) (4) where #(yi, yj) is the number of instances with golden label yi but being classified into class yj in the corpus. These statistics can be computed either on the training set or on the development set. This paper uses statistics on development set due 5280 to its compact size. And the estimators are updated every epoch during the training procedure. 3.2 Instance-level Estimator The population-level estimators requires large computation cost to predict on the entire dataset when updating the estimators. To handle this issue, we propose another estimation method based directly on instance-level statistics. Inspire by Lin et al. (2017), the probability P(yj|xi; θ) of classifying instance xi into the wrong class yj can be directly regarded as the mislabeling risk of that instance. Therefore our instance-level estimator is: CINS(yi, yj; xi) = P(yj|xi; θ) (5) Then cost-sensitive regularizer for each training instance can be written as: LINS(xi; θ) = X yj̸=yi P(yj|xi; θ) log P(yj|xi; θ) (6) Note that if the probability of making correct prediction (i.e., P(yi|xi; θ)) is fixed, LINS(xi; θ) achieves its minimum when the probabilities of mislabeling xi into all incorrect classes are equal. This is equivalent to maximize the margin between the probability of golden label and that of any other class. In this circumstance, the loss L(θ) can be regarded as a combination of maximizing both the likelihood of correct prediction and the margin between correct and incorrect classes. 4 Experiments 4.1 Experimental Settings We conducted experiments on both English and Chinese on TAC-KBP 2017 Event Nugget Detection Evaluation datasets (LDC2017E55). For English, previously released RichERE corpus, including LDC2015E29, LDC2015E68, LDC2016E31 and the English part of LDC2017E02, were used for training. For Chinese, LDC2015E105, LDC2015E112, LDC2015E78 and the Chinese part of LDC2017E02 were used. For both English and Chinese, we sampled 20 documents from LDC2017E02 as the development set. Finally, there were 866/20/167 documents and 506/20/167 documents in English and Chinese train/development/test set respectively. We conducted experiments on two state-of-theart neural network event detection models to verify the portability of our method. One is DMCNN model proposed by Chen et al. (2015). 
Another is Model English Chinese P R F1 P R F1 LSTM CE 73.46 34.23 46.70 70.35 35.43 47.13 Focal 69.20 38.71 49.64 68.10 35.76 46.90 Hinge 62.51 44.36 51.89 58.34 43.40 49.77 Sampling 58.57 48.26 52.92 57.61 44.54 50.24 CR-POP 62.35 46.98 53.58 53.18 49.55 51.30 CR-INS 58.64 49.55 53.71 49.19 55.83 52.30 DMCNN CE 75.15 34.16 47.00 73.50 35.81 48.16 Focal 70.68 37.63 49.11 69.04 38.87 49.74 Hinge 67.49 42.67 52.28 60.27 45.50 51.85 Sampling 64.05 45.08 52.91 54.85 50.35 52.50 CR-POP 64.82 45.73 53.63 55.89 50.81 53.23 CR-INS 64.74 46.14 53.88 54.91 51.93 53.38 Table 2: Overall results. CR-POP and CR-INS are our method with population-level and instance-level estimators. All F1 improvements made by CR-POP and CR-INS are statistically significant with p < 0.05. a LSTM model by Yang and Mitchell (2017). Due to page limitation, please refer to original papers for details. 4.2 Baselines1 Following baselines were compared: 1) Cross-entropy Loss (CE), the vanilla loss. 2) Focal Loss (Focal) (Lin et al., 2017), which is an instance-level method that rescales the loss with a factor proportional to the mislabeling probability to enhance the learning on hard instances. 3) Hinge Loss (Hinge), which tries to separate the correct and incorrect predictions with a margin larger than a constant and is widely used in many machine learning tasks. 4) Under-sampling (Sampling), a representative cost-sensitive learning approaches which samples instances balance the model learning and is widely used in event detection to deal with imbalance (Chen et al., 2015). We also compared our methods with the top systems in TAC-KBP 2017 Evaluation. We evaluated all systems with micro-averaged Precision(P), Recall(R) and F1 using the official toolkit2. 4.3 Overall Results Table 2 shows the overall performance on TACKBP 2017 datasets. We can see that: 1) Cost-sensitive regularization can significantly improve the event detection performance by taking mislabeling costs into consideration. The proposed CR-INS and the CR-POP 1Our source code and hyper-parameter configures are openly available at github.com/sanmusunrise/CSR. 2github.com/hunterhector/EvmEval 5281 56.19 50.37 50.14 56.40 55.90 0.64 0.67 49 51 53 55 57 TAC-KBP 2017 English CR_INS Baseline KBP2017 50.64 46.76 42.14 52.50 50.24 0.88 2.06 40 43 46 49 52 TAC-KBP 2017 Chinese CR_INS Baseline KBP2017 Figure 1: Comparison with the top systems in TACKBP 2017. CR is our CR-INS method. The srcb system in English used additional CRF based models to deal with multi-word triggers in English, which is not considered in our model and leads to a significant higher recall than other competitors. steadily outperform corresponding baselines. Besides, compared with population-level estimators, instance-level cost estimators are more effective. This may because instance-level estimators can be updated every batch while population-level estimators are updated every epoch, which leads to a more accurate estimation. 2) Cost-sensitive regularization is robust to different languages and models. We can see that cost-sensitive regularization achieves significant improvements on both English and Chinese datasets with both CNN and RNN models. This indicates that our method is robust and can be applied to different models and datasets. 3) Data imbalance is not the only reason behind label confusion. Even Focal and Sampling baselines deals with the data imbalance problem, they still cannot achieve comparable performance with CR-POP and CR-INS. 
This means that there are still other reasons which are not fully resolved by conventional methods for data imbalance. 4.4 Comparing with State-of-the-art Systems Figure 1 compares our models with the top systems in TAC-KBP 2017 Evaluation. To achieve a strong baseline3, we also incorporate ELMOs (Peters et al., 2018) to English system for better representations. We can see that CR-INS can further gain significant improvements over all strong baselines which have already achieved comparable performance with top systems. In both English and Chinese, CR-INS achieves the new SOTA performance, which demonstrates its effectiveness. 3Top systems in the evaluation are commonly ensembling models with additional resources, while reported in-house results are of single model. Error Rate (%) SP CR ∆ Total Error 42.97 38.84 -9.6% - Trigger/NIL 33.39 31.15 -6.7% - Sibling Sub-types 8.15 6.25 -23.3% - Other 1.43 1.44 +0.6% Table 3: Error rates (CNN) on trigger words on the Chinese test set with Sampling(SP) and CR-INS(CR). 4.5 Error Analysis To clearly show where the improvement of our method comes from, we compared the mislabeling made by Sampling and our CR-INS method. Table 3 shows the results. We can first see that trigger/NIL mislabeling and sibling sub-types mislabeling make up most of errors of CE baseline. This further verifies our motivation. Besides, costsensitive regularization significantly reduces these two kinds of errors without introducing more other types of mislabeling, which clearly demonstrates the effectiveness of our method. 5 Related Work Neural Network based Event Detection. Recently, neural network based methods have achieved promising progress in event detection, especially with CNNs (Chen et al., 2015; Nguyen and Grishman, 2015) and Bi-LSTMs (Zeng et al., 2016; Yang and Mitchell, 2017) based models as automatic feature extractors. Improvements have been made by incorporating arguments knowledge (Nguyen et al., 2016; Liu et al., 2017a; Nguyen and Grishman, 2018; Hong et al., 2018) or capturing larger scale of contexts with more complicated architectures (Feng et al., 2016; Nguyen and Grishman, 2016; Ghaeini et al., 2016; Lin et al., 2018a,b; Liu et al., 2018a,b; Sha et al., 2018; Chen et al., 2018). Cost-sensitive Learning. Cost-sensitive learning has long been studied in machine learning (Elkan, 2001; Zhou, 2011; Ling and Sheng, 2011). It can be applied both at algorithm-level (Anand et al., 1993; Domingos, 1999; Sun et al., 2007; Krawczyk et al., 2014; Kusner et al., 2014) or datalevel (Ting, 2002; Zadrozny et al., 2003; Mirza et al., 2013), which has achieved great success especially in learning with imbalanced data. 6 Conclusions In this paper, we propose cost-sensitive regularization for neural event detection, which introduces a cost-weighted term of mislabeling likelihood 5282 to enhance the training procedure to concentrate more on confusing type pairs. Experiments show that our methods significantly improve the performance of neural network event detection models. Acknowledgments We sincerely thank the reviewers for their insightful comments and valuable suggestions. Moreover, this work is supported by the National Natural Science Foundation of China under Grants no. 61433015, 61572477 and 61772505; the Projects of the Chinese Language Committee under Grants no. WT135-24; and the Young Elite Scientists Sponsorship Program no. YESS20160177. References Rangachari Anand, Kishan G Mehrotra, Chilukuri K Mohan, and Sanjay Ranka. 1993. 
An improved algorithm for neural network classification of imbalanced training sets. IEEE Transactions on Neural Networks, 4(6):962–969. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of ACL 2015. Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, and Yantao Jia. 2018. Collective event detection via a hierarchical and bias tagging networks with gated multilevel attention mechanisms. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1267–1276. Association for Computational Linguistics. Pedro M. Domingos. 1999. Metacost: A general method for making classifiers cost-sensitive. In KDD. Charles Elkan. 2001. The foundations of cost-sensitive learning. In IJCAI 2001, volume 17, pages 973–978. Lawrence Erlbaum Associates Ltd. Xiaocheng Feng, Lifu Huang, Duyu Tang, Bing Qin, Heng Ji, and Ting Liu. 2016. A languageindependent neural network for event detection. In Proceedings of ACL 2016. Reza Ghaeini, Xiaoli Z Fern, Liang Huang, and Prasad Tadepalli. 2016. Event nugget detection with forward-backward recurrent neural networks. In Proceedings of ACL 2016. Yu Hong, Wenxuan Zhou, Jingli Zhang, Qiaoming Zhu, and Guodong Zhou. 2018. Self-regulation: Employing a generative adversarial network to improve event detection. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 515–526. Association for Computational Linguistics. Bartosz Krawczyk, Michał Wo´zniak, and Gerald Schaefer. 2014. Cost-sensitive decision tree ensembles for effective imbalanced classification. Applied Soft Computing, 14:554–562. Matt J Kusner, Wenlin Chen, Quan Zhou, Zhixiang Eddie Xu, Kilian Q Weinberger, and Yixin Chen. 2014. Feature-cost sensitive learning with submodular trees of classifiers. In AAAI 2014. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2018a. Adaptive scaling for sparse detection in information extraction. arXiv preprint arXiv:1805.00250. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2018b. Nugget proposal networks for chinese event detection. arXiv preprint arXiv:1805.00249. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002. Charles X Ling and Victor S Sheng. 2011. Costsensitive learning. In Encyclopedia of machine learning, pages 231–235. Springer. Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2018a. Event detection via gated multilingual attention mechanism. In Proceedings of AAAI2018. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017a. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of ACL2017. Shulin Liu, Yubo Chen, Kang Liu, Jun Zhao, Zhunchen Luo, and Wei Luo. 2017b. Improving event detection via information sharing among related event types. In CCL 2017, pages 122–134. Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018b. Jointly multiple events extraction via attentionbased graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247–1256. Association for Computational Linguistics. Bilal Mirza, Zhiping Lin, and Kar-Ann Toh. 2013. Weighted online sequential extreme learning machine for class imbalance learning. Neural processing letters. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of NAACL-HLT 2016. 
Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of ACL 2015. Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of EMNLP 2016. 5283 Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In Proceedings of AAAI2018. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Lei Sha, Feng Qian, Baobao Chang, and Zhifang Sui. 2018. Jointly extracting event triggers and arguments by dependency-bridge rnn and tensor-based argument interaction. In Proceedings of AAAI2018. Yanmin Sun, Mohamed S Kamel, Andrew KC Wong, and Yang Wang. 2007. Cost-sensitive boosting for classification of imbalanced data. Pattern Recognition, 40(12):3358–3378. Kai Ming Ting. 2002. An instance-weighting method to induce cost-sensitive trees. IEEE Transactions on Knowledge and Data Engineering, 14(3):659–665. Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of ACL2017. Bianca Zadrozny, John Langford, and Naoki Abe. 2003. Cost-sensitive learning by cost-proportionate example weighting. In ICDM 2003, pages 435–442. Ying Zeng, Honghui Yang, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2016. A convolution bilstm neural network model for chinese event extraction. In Proceedings of NLPCC-ICCPOL 2016. Zhi-Hua Zhou. 2011. Cost-sensitive learning. In International Conference on Modeling Decisions for Artificial Intelligence, pages 17–18. Springer.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5284–5294 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5284 Exploring Pre-trained Language Models for Event Extraction and Generation Sen Yang†, Dawei Feng†, Linbo Qiao, Zhigang Kan, Dongsheng Li‡ National University of Defense Technology, Changsha, China {sen yang,linbo.qiao,kanzhigang13}@nudt.edu.cn [email protected], [email protected] Abstract Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is often laborious to create and limited in size. Therefore, in addition to the difficulty of event extraction itself, insufficient training data hinders the learning process as well. To promote event extraction, we first propose an event extraction model to overcome the roles overlap problem by separating the argument prediction in terms of roles. Moreover, to address the problem of insufficient training data, we propose a method to automatically generate labeled data by editing prototypes and screen out generated samples by ranking the quality. Experiments on the ACE2005 dataset demonstrate that our extraction model can surpass most existing extraction methods. Besides, incorporating our generation method exhibits further significant improvement. It obtains new state-of-the-art results on the event extraction task, including pushing the F1 score of trigger classification to 81.1%, and the F1 score of argument classification to 58.9%. 1 Introduction Event extraction is a key and challenging task for many NLP applications. It targets to detect event trigger and arguments. Figure 1 illustrates a sentence containing an event of type Meet triggered by ”meeting”, with two arguments: ”President Bush” and ”several Arab leaders”, both of which play the role ”Entity”. There are two interesting issues in event extraction that require more efforts. On the one hand, roles in an event vary greatly in frequency (Figure 2), and they can overlap on some words, †These two authors contributed equally. ‡Corresponding Author. [Trigger] Event type: Meet Sentence : President Bush is going to be meeting with several Arab leaders [Entity] [Entity] Figure 1: An event of type Meet is highlighted in the sentence, including one trigger and two arguments. even sharing the same argument (the roles overlap problem). For example, in sentence ”The explosion killed the bomber and three shoppers”, ”killed” triggers an Attack event, while argument ”the bomber” plays the role ”Attacker” as well as the role ”Victim” at the same time. There are about 10% events in the ACE2005 dataset (Doddington et al., 2004) having the roles overlap problem. However, despite the evidence of the roles overlap problem, few attentions have been paid to it. On the contrary, it is often simplified in evaluation settings of many approaches. For example, in most previous works, if an argument plays multiple roles in an event simultaneously, the model classifies correctly as long as the prediction hits any one of them, which is obviously far from accurate to apply to the real world. Therefore, we design an effective mechanism to solve this problem and adopt more rigorous evaluation criteria in experiments. On the other hand, so far most deep learning based methods for event extraction follow the supervised-learning paradigm, which requires lots of labeled data for training. However, annotating accurately large amounts of data is a very laborious task. 
To alleviate the suffering of existing methods from the deficiency of predefined event data, event generation approaches are often used to produce additional events for training (Yang et al., 2018; Zeng et al., 2018; Chen et al., 2017). And distant supervision (Mintz et al., 2009) is a commonly used technique to this end for labeling external corpus. But the quality and quantity 5285 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Victim Place Agent Instrument Time Frequency Figure 2: Frequency of roles that appear in events of type Injure in the ACE2005 dataset. of events generated with distant supervision are highly dependent on the source data. In fact, external corpus can also be exploited by pre-trained language models to generate sentences. Therefore, we turn to pre-trained language models, attempting to leverage their knowledge learned from the large-scale corpus for event generation. Specifically, this paper proposes a framework based on pre-trained language models, which includes an event extraction model as our baseline and a labeled event generation method. Our proposed event extraction model is constituted of a trigger extractor and an argument extractor which refers result of the former for inference. In addition, we improve the performance of the argument extractor by re-weighting the loss function based on the importance of roles. Pre-trained language models have also been applied to generating labeled data. Inspired by the work of Guu et al. (2018), we take the existing samples as prototypes for event generation, which contains two key steps: argument replacement and adjunct token rewriting. Through scoring the quality of generated samples, we can pick out those of high quality. Incorporating them with existing data can further improve the performance of our event extractor. 2 Related work Event Extraction In terms of analysis granularity, there are document-level event extraction (Yang et al., 2018) and sentence-level event extraction (Zeng et al., 2018). We focus on the statistical methods of the latter in this paper. These methods can be further divided into two detailed categories: the feature based ones (Liao and Grishman, 2010; Liu et al., 2010; Miwa et al., 2009; Liu et al., 2016; Hong et al., 2011; Li et al., 2013b) which track designed features for extraction, and the neural based ones that take advantage of neural networks to learn features automatically (Chen et al., 2015; Nguyen and Grishman, 2015; Feng et al., 2016). Event Generation External resources such as Freebase, Frame-Net and WordNet are commonly employed to generate event and enrich the training data. Several previous event generation approaches (Chen et al., 2017; Zeng et al., 2018) base a strong assumption in distant supervision1 to label events in unsupervised corpus. But in fact, co-occurring entities could have none expected relationship. In addition, Huang et al. (2016) incorporates abstract meaning representation and distribution semantics to extract events. While Liu et al. (2016, 2017) manages to mine additional events from the frames in FrameNet. Pre-trained Language Model Pre-trained language models are capable of capturing the meaning of words dynamically in consideration of their context. McCann et al. (2017) exploits language model pre-trained on supervised translation corpus in the target task. 
ELMO (Embeddings from Language Models) (Peters et al., 2018) gets context sensitive embeddings by encoding characters with stacked bidirectional LSTM (Long Short Term Memory) and residual structure (He et al., 2016). Howard and Ruder (2018) obtains comparable result on text classification. GPT (Generative PreTraining) (Radford et al., 2018) improves the state of the art in 9 of 12 tasks. BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) breaks records of 11 NLP task and received a lot of attention. 3 Extraction Model This section describes our approach to extract events that occur in plain text. We consider event extraction as a two-stage task, which includes trigger extraction and argument extraction, and propose a Pre-trained Language Model based Event Extractor (PLMEE). Figure 3 illustrates the architecture of PLMEE. It consists of a trigger extractor and an argument extractor, both of which rely on the feature representation of BERT. 3.1 Trigger Extractor Trigger extractor targets to predict whether a token triggers an event. So we formulate trigger extraction as a token-level classification task with labels 1If two entities have a relationship in a knowledge base, then all sentences that mention these two entities will express that relationship. 5286 killed explosion the bomber and three shoppers BERT Embedding Classifier Conflict.Attack Trigger The explosion killed the bomber and three shoppers BERT Embedding Attacker Victim Place Cstart Cend Cstart Cend Cstart Cend Cstart Cend ... 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 Argument Loss Classifier Set Attacker Victim Place ... Role Importance killed explosion the bomber and three shoppers For inference The explosion killed the bomber and three shoppers WordPiece Segment Position WordPiece Segment Position The The Figure 3: Illustration of the PLMEE architecture, including a trigger extractor and an argument extractor. The processing procedure of an event instance triggered by the word ”killed” is also shown. being event types, and just add a multi-classifier on BERT to build the trigger extractor. The input of the trigger extractor follows the BERT, i.e. the sum of three types of embeddings, including WordPiece embedding (Wu et al., 2016), position embedding and segment embedding. Since the input contains only one sentence, all its segment ids are set to zero. In addition, token [CLS] and [SEP]2 are placed at the start and end of the sentence. In many cases, the trigger is a phrase. Therefore, we treat consecutive tokens which share the same predicted label as a whole trigger. As general, we adopt cross entropy as the loss function for fine-tuning. 3.2 Argument Extractor Given the trigger, argument extractor aims to extract related arguments and all roles they play. Compared with trigger extraction, argument extraction is more complicated because of three issues: the dependency of arguments on the trigger, most arguments being long noun phrases, and the roles overlap problem. We take exactly a series of actions to deal with these obstacles. In common with trigger extractor, argument extractor requires three kinds of embeddings as well. However, it needs to know which tokens comprise the trigger. Therefore, we feed argument extractor with the segment ids of trigger tokens being one. 2[CLS], [SEP] and [MASK] are special tokens of BERT. To overcome the latter two issues in argument extraction, we add multiple sets of binary classifiers on the BERT. 
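As a rough, non-authoritative illustration of this design, the sketch below attaches one start classifier and one end classifier per role on top of BERT token embeddings; the role names, the hidden size, and the stand-in tensor used in place of real BERT output are assumptions made only so the example runs on its own.

```python
# Hypothetical sketch of per-role start/end classifiers over BERT token embeddings.
# Role names, hidden size, and the fake BERT output are illustrative assumptions.
import torch
import torch.nn as nn

class PerRoleSpanHead(nn.Module):
    def __init__(self, hidden_size, roles):
        super().__init__()
        self.roles = roles
        # One binary start classifier and one binary end classifier for each role,
        # so predictions for different roles are made independently.
        self.start = nn.ModuleDict({r: nn.Linear(hidden_size, 2) for r in roles})
        self.end = nn.ModuleDict({r: nn.Linear(hidden_size, 2) for r in roles})

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, hidden) BERT output, with the trigger
        # marked through the segment ids as described above.
        return {
            r: {
                "start": torch.softmax(self.start[r](token_embeddings), dim=-1),
                "end": torch.softmax(self.end[r](token_embeddings), dim=-1),
            }
            for r in self.roles
        }

if __name__ == "__main__":
    roles = ["Attacker", "Victim", "Place"]     # illustrative subset of ACE roles
    head = PerRoleSpanHead(hidden_size=768, roles=roles)
    fake_bert_output = torch.randn(1, 12, 768)  # stand-in for real BERT embeddings
    probs = head(fake_bert_output)
    print(probs["Victim"]["start"].shape)       # torch.Size([1, 12, 2])
```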
Each set of classifiers sever for a role to determine the spans (each span includes a start and an end) of all arguments that play it. This approach is similar to the question answering task on the SQuAD (Rajpurkar et al., 2016) in which there is only one answer, while multiple arguments playing the same role can appear simultaneously in an event. Since the prediction is separated with roles, an argument can play multiple roles, and a token can belong to different arguments. Thus, the roles overlap problem can also be solved. 3.3 Argument Span Determination In PLMEE, a token t is predicted as the start of an argument that plays role r with probability: P r s (t) = Softmax (W r s · B (t)) , while as the end with probability: P r e (t) = Softmax (W r e · B (t)) , in which we use subscript ”s” to represent ”start” and subscript ”e” to represent ”end”. W r s is the weight of binary classifier that aims to detect starts of arguments playing role r, while W r e is the weight of another binary classifier that aims to detect ends. B is the BERT embedding. For each role r, we can get two lists Br s and Br e of 0 and 1 according to P r s and P r e . They indicate respectively whether a token in the sentence is the 5287 start or end of an argument that plays role r3. Algorithm 1 is used to detect each token sequentially to determine spans of all arguments that play the role r. Algorithm 1 Argument span determination In: P r s and P r e , Br s and Br e, sentence length l. Out: Span list L of the arguments that play role r Initiate: as ←-1, ae ←-1 1: for i ←0 to l do 2: if In State 1 & the ith token is a start then 3: as ←i and change to State 2 4: end if 5: if In State 2 then 6: if the ith token is a new start then 7: as ←i if P r s [i] > P r s [as] 8: end if 9: if the ith token is an end then 10: ae ←i and change to State 3 11: end if 12: end if 13: if In State 3 then 14: if the ith token is a new end then 15: ae ←i if P r e [i] > P r e [ae] 16: end if 17: if the ith token is a new start then 18: Append [as, ae] to L 19: ae ←-1, as ←i and change to State 2 20: end if 21: end if 22: end for Algorithm 1 contains a finite state machine, which changes from one state to another in response to Br s and Br e. There are three states totally: 1) Neither start nor end has been detected; 2) Only a start has been detected; 3) A start as well as an end have been detected. Specially, the state changes according to the following rules: State 1 changes to State 2 when the current token is a start; State 2 changes to State 3 when the current token is an end; State 3 changes to State 2 when the current token is a new start. Notably, if there has been a start and another start arises, we will choose the one with higher probability, and the same for end. 3.4 Loss Re-weighting We initially define Ls as the loss function of all binary classifiers that are responsible for detecting starts of arguments. It is the average of cross 3The ith token is a start if Br s[i]=1 or an end if Br e[i]=1. entropy between the output probabilities and the golden label y: Ls = 1 |R| × |S| X r∈R CE (P r s , yr s) , in which CE is cross entropy, R is the set of roles, S is the input sentence, and |S| is the number of tokens in S. Similarly, we define Le as the loss function of all binary classifiers that detect ends: Le = 1 |R| × |S| X r∈R CE (P r e , yr e) . We finally average Ls and Le as the loss L of argument extractor. As Figure 2 shows, there exists a big gap in frequency between roles. 
This implies that roles have different levels of ”importance” in an event. The ”importance” here means the ability of a role to indicate events of a specific type. For example, the role ”Victim” is more likely to indicate a Die event than the role ”Time”. Inspired by this, we re-weight Ls and Le according to the importance of roles, and propose to measure the importance with the following definitions: Role Frequency (RF) We define RF as the frequency of role r appearing in events of type v: RF(r, v) = Nr v P k∈R Nkv , where Nr v is the count of the role r that appear in the events of type v. Inverse Event Frequency (IEF) As the measure of the universal importance of a role, we define IEF as the logarithmically scaled inverse fraction of the event types that contain the role r: IEF(r) = log |V| |{v ∈V : r ∈v}|, where V is tht set of event types. Finally we take RF-IEF as the product of RF and IEF: RF-IEF(r, v) = RF(r, v) × IEF(r). With RF-IEF, we can measure the importance of a role r in events of type v: I(r, v) = expRF-IEF(r,v) P r′∈R expRF-IEF(r′,v) . We choose three event types and list the two most important roles of each type in Table 1. It shows that although there could be multiple roles 5288 Event Type Top 2 Roles Sum Transport(15) Artifact, Origin 0.76 Attack(14) Attacker, Target 0.85 Die(12) Victim, Agent 0.90 Table 1: Top two roles and their sum importance for each event type. The number in brackets behind event type is the count of roles that have appeared in it. in events of someone type, only a few of them is indispensable. Give the event type v of input, we re-weight Ls and Le based on each role’s importance in v: Ls = X r∈R I(r, v) |S| CE (P r s , yr s) Le = X r∈R I(r, v) |S| CE (P r e , yr e) . The loss of argument extractor L is still the average of Ls and Le. 4 Training Data Generation In addition to PLMEE, we also propose a pretrained language model based method for event generation as illustrated in Figure 4. By editing prototypes, this method can generate a controllable number of labeled samples as the extra training corpus. It consists of three stages: preprocessing, event generation and scoring. To facilitate the generation method, we define adjunct tokens as the tokens in sentences except triggers and arguments, including not only words and numbers, but also punctuation. Taking sentence in Figure 1 as an example, ”is” and ”going” are adjunct tokens. It is evident that adjunct tokens can adjust the smooth and diversity of expression. Therefore, we try to rewrite them to expand the diversity of the generation results, while keeping the trigger and arguments unchanged. 4.1 Pre-processing With the golden labels, we first collect arguments in the ACE2005 dataset as well as the roles they play. However, those arguments overlap with others are excluded. Because such arguments are often long compound phrases that contain too much unexpected information, and incorporating them in argument replacement could bring more unnecessary errors. We also adopt BERT as the target model to rewrite adjunct tokens in the following stage, and fine-tune it on the ACE2005 dataset with the masked language model task (Devlin et al., 2018) to bias its prediction towards the dataset distribution. In common with the pre-training procedure of BERT, each time we sample a batch of sentences and mask 15% of tokens. Its goal is still to predict the correct token without supervision. 4.2 Event generation To generate events, we conduct two steps on a prototype. 
We first replace the arguments in the prototype with those similar that have played the same role. Next, we rewrite adjunct tokens with the finetuned BERT. Through these two steps, we can obtain a new sentence with annotations. Argument Replacement The first step is to replace arguments in the event. Both the argument to be replaced and the new one should have played ever the same role. While the roles are inherited after replacement, so we can still use origin labels for the generated samples. In order not to change the meaning drastically, we employ similarity as the criteria for selecting new arguments. It is based on the following two considerations: one is that two arguments that play the same role may diverge significantly in semantics; another is that the role an argument plays is largely dependent on its context. Therefore, we should choose arguments that are semantically similar and coherent with the context. We use cosine similarity between embeddings to measure the similarity of two arguments. And due to ELMO’s ability to handle the OOV problem, we employ it to embed arguments: E(a) = 1 |a| X t∈a E(t), where a is the argument, E is ELMO embedding. We choose the top 10 percent most similar arguments as candidates, and use softmax operation on their similarity to allocate probability. An argument is replaced with probability 80% while keeping constant with probability 20% to bias the representation towards the actual event (Devlin et al., 2018). Note that the triggers remain unchanged to avoid undesirable deviation of dependency relation. Adjunct Token Rewriting The results of argument replacement can already be considered as the generated data, but the constant context may increase the risk of overfitting. Therefore, to smooth 5289 Dataset BERT Argument Collection Fine-tuning BERT Argument Replacement Adjunct Token Rewriting Scorer Out: Prime minister Blair is reported to the meeting with the leaders Stage 1: Pre-processing Stage 2: Event generation Stage 3: Scoring Quality: 0.5 In: President Bush is going to be meeting with several Arab leaders Entity 1. President 2. Prime minister Blair 3. the prime minister 4. the Arab leaders 5. an Arab counterpart 6. the Palestinians 7. the leaders 8. ... Prime minister Blair is going to be meeting with the leaders Figure 4: Flow chart of the generation approach. the generated data and expand their diversity, we manage to rewrite adjunct tokens with the finetuned BERT. The rewriting is to replace some adjunct tokens in the prototype with the new ones that are more matchable with the current context. We take it as a Cloze task (Taylor, 1953), where some adjunct tokens are randomly masked and the BERT fine tuned in the first stage is used to predict vocabulary ids of suitable tokens based on the context. We use a parameter m to denote the proportion of adjunct tokens that need to be rewritten. Adjunct token rewriting is a step-by-step process. Each time we mask 15% of adjunct tokens (with the token [MASK]). Then the sentence is fed into BERT to produce new adjunct tokens. The adjunct tokens that have not yet been rewritten will temporarily remain in the sentence. To further illustrate the above two steps, we give an instance in Figure 4. In this instance, we set m to 1.0, which means all the adjunct tokens will be rewritten. The final output is ”Prime minister Blair is reported to the meeting with the leaders”, which shares the labels with the original event in the prototype. 
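The argument-replacement step just described can be sketched roughly as below; the pre-computed argument embeddings, the candidate pool, and the helper names are hypothetical stand-ins (in the method above the embeddings come from ELMo and the candidates are arguments that have played the same role).

```python
# Hypothetical sketch of similarity-based argument replacement. The toy vectors
# and candidate names are made up; embeddings would come from ELMo in practice.
import torch
import torch.nn.functional as F

def sample_replacement(arg_emb, candidate_embs, candidate_strings, keep_prob=0.2):
    # arg_emb: (d,) mean of the token embeddings of the argument to be replaced.
    # candidate_embs: (n, d) embeddings of arguments that played the same role.
    sims = F.cosine_similarity(arg_emb.unsqueeze(0), candidate_embs, dim=-1)  # (n,)
    k = max(1, int(0.1 * len(candidate_strings)))     # top 10% most similar
    top_sims, top_idx = sims.topk(k)
    probs = torch.softmax(top_sims, dim=0)            # similarity -> sampling probability
    if torch.rand(1).item() < keep_prob:              # keep the original 20% of the time
        return None                                   # None means leave the argument as is
    choice = torch.multinomial(probs, 1).item()
    return candidate_strings[top_idx[choice].item()]

if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 16
    argument = torch.randn(dim)                   # embedding of the argument to replace
    candidates = torch.randn(30, dim)             # other arguments that played the same role
    names = [f"candidate_{i}" for i in range(30)] # placeholder argument strings
    print(sample_replacement(argument, candidates, names))
```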
It is evident that some adjunct tokens are preserved despite m is 1.0. 4.3 Scoring Theoretically, infinite number of events can be generated with our generation method. However, not all of them are valuable for the extractor and some may even degrade its performance. Therefore, we add an extra stage to quantify the quality of each generated sample to pick out those valuable. Our key insight for evaluating the quality lies that it is tightly related to two factors, which are the perplexity and the distance to the original dataset. The former reflects the rationality of generation, and the latter reflects the differences between the data. Perplexity (PPL) Different with the masked perplexity (Devlin et al., 2018) of logarithmic version, we take the average probability of those adjunct tokens that have been rewritten as the perplexity of generated sentence S′: PPL(S′) = 1 |A(S′)| X t∈A(S′) P(t), where A is the set of adjunct tokens in S′ that have been rewritten. Distance (DIS) We measure the distance between S′ and the dataset D with cosine similarity: DIS(S′, D) = 1 −1 |D| X S∈D B(S′) · B(S) |B(S′)| × |B(S)|. Different with embedding arguments by ELMO, we utilize BERT to embed sentence and take the embedding of the first token [CLS] as the sentence embedding. Both the PPL and the DIS are limited in [0,1]. We consider that generated samples of high quality should have both low PPL and DIS. Therefore, we define the quality function as: Q(S′) = 1 − λPPL S′ + (1 −λ) DIS S′, D  , where λ ∈[0, 1] is the balancing parameter. This function is used to select generated samples of high quality in experiments. 5 Experiments In this section, we first evaluate our event extractor PLMEE on the ACE2005 dataset. Then we give a case study of generated samples and conduct automatic evaluations by adding them into the training set. Finally, we illustrate the limitations of the generation method. 5290 Model Phase Trigger Trigger Argument Argument Identification(%) Calssfication(%) Identification(%) Calssfication(%) P R F P R F P R F P R F Cross Event N/A 68.7 68.9 68.8 50.9 49.7 50.3 45.1 44.1 44.6 Cross Entity N/A 72.9 64.3 68.3 53.4 52.9 53.1 51.6 45.5 48.3 Max Entropy 76.9 65.0 70.4 73.7 62.3 67.5 69.8 47.9 56.8 64.7 44.4 52.7 DMCNN 80.4 67.7 73.5 75.6 63.6 69.1 68.8 51.9 59.1 62.2 46.9 53.5 JRNN 68.5 75.7 71.9 66.0 73.0 69.3 61.4 64.2 62.8 54.2 56.7 55.4 DMCNN-DS 79.7 69.6 74.3 75.7 66.0 70.5 71.4 56.9 63.3 62.8 50.1 55.7 ANN-FN N/A 79.5 60.7 68.8 N/A N/A ANN-AugATT N/A 78.0 66.3 71.7 N/A N/A PLMEE(-) 84.8 83.7 84.2 81.0 80.4 80.7 71.5 59.2 64.7 61.7 53.9 57.5 PLMEE 71.4 60.1 65.3 62.3 54.2 58.0 Table 2: Performance of all methods. Bold denotes the best result. As previous works (Li et al., 2013b; Chen et al., 2015; Hong et al., 2011), we take the test set with 40 newswire documents, while 30 other documents as the validation set, and the remaining 529 documents to be the training set. However, different with previous works, we take the following criteria to evaluate the correctness of each predicted event mention: 1. A trigger prediction is correct only if its span and type match with the golden labels. 2. An argument prediction is correct only if its span and all roles it plays match with the golden labels. It is worth noting that all the predicted roles for an argument are required to match with the golden labels, instead of just one of them. We adopt Precision (P), Recall (R) and F measure (F1) as the evaluation metrics. 
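Before turning to the results, the scoring step above translates almost directly into code. The sketch below assumes the per-sample PPL and DIS values (both in [0, 1]) have already been computed; the sample names and the keep ratio, which mirrors the n = 1.0 setting used later, are placeholders.

```python
# Direct, simplified transcription of the quality function Q; PPL and DIS are
# assumed to be pre-computed and scaled to [0, 1]. Sample names are placeholders.
def quality(ppl, dis, lam=0.5):
    # Q(S') = 1 - (lambda * PPL(S') + (1 - lambda) * DIS(S', D)):
    # lambda balances rewrite fluency against distance from the original dataset.
    return 1.0 - (lam * ppl + (1.0 - lam) * dis)

def select_top(samples, scores, keep_ratio=0.25):
    # Keep the highest-quality fraction of the generated pool (e.g. one quarter,
    # corresponding to n = 1.0 out of four times the dataset size).
    ranked = sorted(zip(scores, samples), key=lambda x: x[0], reverse=True)
    k = max(1, int(keep_ratio * len(samples)))
    return [sample for _, sample in ranked[:k]]

if __name__ == "__main__":
    samples = ["gen_1", "gen_2", "gen_3", "gen_4"]
    ppl_dis = [(0.2, 0.1), (0.7, 0.6), (0.3, 0.4), (0.9, 0.9)]
    scores = [quality(p, d) for p, d in ppl_dis]
    print(select_top(samples, scores))   # keeps the sample with the highest Q
```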
5.1 Results of Event Extraction We take several previous classic works for comparison, and divide them into three categories: Feature based methods Document-level information is utilized in Cross event (Liao and Grishman, 2010) to assist event extraction. While Cross entity (Hong et al., 2011) uses cross-entity inference in extraction. Max Extropy (Li et al., 2013a) extracts triggers as well as arguments together based on structured prediction. Neural based methods DMCNN (Chen et al., 2015) adopts firstly dynamic multi-pooling CNN to extract sentence-level features automatically. JRNN (Nguyen et al., 2016) proposes a joint framework based on bidirectional RNN for event extraction. External resource based methods DMCNNDS (Chen et al., 2017) uses FreeBase to label potential events in unsupervised corpus by distance supervision. ANN-FN (Liu et al., 2016) improves extraction with additionally events automatically detected from FrameNet, while ANNAugATT (Liu et al., 2017) exploits argument information via the supervised attention mechanisms to improve the performance further. In order to verify the effectiveness of loss reweighting, two groups of experiments are conducted for comparison. Namely, the group where the loss function is simply averaged on all classifiers’ output (indicated as PLMEE(-)) and the group where the loss is re-weighted based on role importance (indicated as PLMEE). Table 2 compares the results of the aforementioned models with PLMEE on the test set. As is shown, in both the trigger extraction task and the argument extraction task, PLMEE(-) has achieved the best results among all the compared methods. The improvement on the trigger extraction is quite significant, seeing a sharp increase of near 10% on the F1 score. While the improvement in argument extraction is not so obvious, achieving about 2%. This is probably due to the more rigorous evaluation metric we have taken and the difficulty of argument extraction task as well. Moreover, compared with feature based methods, neural based methods can achieve better performance. And the same observation appears when comparing external resource based methods with neural based methods. It demonstrates that external re5291 Prototype m Generated Event President Bush is going to be meeting with several Arab leaders 0.2 Russian President Putin is going to the meeting with the Arab leaders 0.4 The president is reported to be meeting with an Arab counterpart 0.6 Mr. Bush is summoned to a meeting with some Shiite Muslim groups 0.8 The president is attending to the meeting with the Palestinians 1.0 Prime minister Blair is reported to the meeting with the leaders Table 3: Example samples generated with different proportion of rewritten adjunct tokens. Italic indicates argument and bold indicates trigger. sources are useful to improve event extraction. In addition, the PLMEE model can achieve better results on the argument extraction task - with improvement of 0.6% on F1 score for identification and 0.5% for classification - than the PLMEE(-) model, which means that re-weighting the loss can effectively improve the performance. 5.2 Case Study Table 3 illustrates a prototype and its generation with parameter m ranging from 0.2 to 1.0. We can observe that the arguments after replacement can match the context in prototype relatively well, which indicates that they are resembling with the original ones in semantic. On the other hand, rewriting the adjunct tokens can smooth the generated data and expand their diversity. 
However, since there is no explicit guide, this step can also introduce unpredictable noise, making the generation not fluent as expected. 5.3 Automatic Evaluation of Generation So far, there are mainly three aspects of the generation method that could have significant impacts on the performance of the extraction model, including the amount of generated samples (represented by n, which indicates times the generation size is the number of dataset size), the proportion of rewritten adjunct tokens m, and the quality of the generated samples. The former two factors are controllable in the generation process. Specially, we can reuse a prototype and get a variety of combinations of arguments via similarity based replacement, which will bring different contexts for rewriting adjunct tokens. Moreover, the proportion of rewritten adjunct tokens can be adjusted, making a further variation. Although the quality of generation cannot be controlled arbitrarily, it can be quantified by the score function Q so that those samples of higher quality can be picked out and added into the training set. With λ in Q changing, different selection strategies can be used to screen out the generated samples. We first tuned the former two parameters on the development set through grid search. Specially, we set m ranging from 0.2 to 1.0 with an interval of 0.2, and set n to be 0.5, 1.0 and 2.0, while keeping other parameters unchanged in the generation process. We conduct experiments with these parameters. By analyzing the results, we find that the best performance of PLMEE on both trigger extraction and argument extraction can be achieved with m = 0.4 and n = 1.0. It suggests that neither too few generated samples nor too much is a better choice for extraction. Too few has limited influence, while too much could bring more noise that disturbs the distribution of the dataset. For the better extraction performance, we use such parameter settings in the following experiments. We also investigate the effectiveness of the sample selection approach, a comparison is conducted between three groups with different selection strategies. We obtain a total of four times the size of the ACE2005 dataset using our generation method with m = 0.4, and pick out one quarter of them (n = 1.0) with λ being 0, 0.5 and 1.0 respectively. When λ is 0 or 1.0, it is either perplexity or distance that determines the quality exclusively. We find that the selection method with λ = 0.5 in quality function is able to pick out samples that are more advantageous to promote the extraction performance. Model Trigger(%) Argument(%) PLMEE 80.7 58.0 PLMEE(+) 81.1 58.9 Table 4: F1 score of trigger classification and argument classification on the test set. Finally, we incorporate the above generated data with the ACE2005 dataset and investigate the effectiveness of our generation method on the test 5292 set. In Table 4, we use PLMEE(+) denotes the PLMEE model trained with extra generated samples. The results illustrate that with our event generation method, the PLMEE model can achieve the state of the art result of event extraction. 5.4 Limitation By comparing the annotations in generated samples and manually labeled samples, we find that one issue of our generation method is that the roles may deviate, because the semantics could change a lot with only a few adjunct tokens been rewritten. Taking Figure 5 as an example. The roles played by argument ”Pittsburgh” and ”Boston” should be ”Destination” and ”Origin”, rather not the opposite as in the prototype. 
This is because the token ”from” has been replaced with the token ”for”, while token ”drive to” been replaced with ”return from”. Trigger leave Event type Movement.Transport Arguments Niagara Falls Toronto Roles Origin Destination Trigger leave Event type Movement.Transport Arguments Pittsburgh Boston Roles Origin Destination Prototype: Leave from Niagara Falls and drive to Toronto, on 85 miles Generation: Leave for Pittsburgh and return from Boston in 200 miles x x ✓ Figure 5: One of the generated samples with wrong annotations. 6 Conclusion and Discussion In this paper, we present a framework to promote event extraction by using a combination of an extraction model and a generation method, both of which are based on pre-trained language models. To solve the roles overlap problem, our extraction approach tries to separate the argument predictions in terms of roles. Then it exploits the importance of roles to re-weight the loss function. To perform event generation, we present a novel method that takes the existing events as prototypes. This event generation method can produce controllably labeled samples through argument replacement and adjunct tokens rewriting. It also benefits from the scoring mechanism which is able to quantify the quality of generated samples. Experimental results show that the quality of generated data is competitive and incorporating them with existing corpus can make our proposed event extractor to be superior to several state of the art approaches. On the other hand, there are still limitations in our work. Events of the same type often share similarity. And co-occurring roles tend to hold a tight relation. Such features are ignored in our model, but they deserve more investigation for improving the extraction model. In addition, although our generation method can control the number of generated samples and filter with quality, it still suffers the deviation of roles alike with distant supervision. Therefore, for the future work, we will incorporate relation between events and relation between arguments into pre-trained language models, and take effective measures to overcome the deviation problem of roles in the generation. Acknowledgments The work was sponsored by the National Key Research and Development Program of China under Grant No.2018YFB0204300, and National Natural Science Foundation of China under Grant No.61872376 and No.61806216. References Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data generation for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 409–419. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 167–176. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC, volume 2, page 1. Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. 
A languageindependent neural network for event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 66–71. 5293 Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association of Computational Linguistics, 6:437–450. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1127– 1136. Association for Computational Linguistics. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 258–268. Peifeng Li, Qiaoming Zhu, and Guodong Zhou. 2013a. Joint modeling of argument identification and role determination in chinese event extraction with discourse-level information. In Twenty-Third International Joint Conference on Artificial Intelligence. Qi Li, Heng Ji, and Liang Huang. 2013b. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 73–82. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797. Association for Computational Linguistics. Bing Liu, Longhua Qian, Hongling Wang, and Guodong Zhou. 2010. Dependency-driven featurebased learning for extracting protein-protein interactions from biomedical text. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 757–765. Association for Computational Linguistics. Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging framenet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2134–2143. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1789–1798. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun’ichi Tsujii. 2009. A rich feature vector for protein-protein interaction extraction from multiple corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 121–130. Association for Computational Linguistics. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 365–371. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. 5294 Wilson L Taylor. 1953. “cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Hang Yang, Yubo Chen, Kang Liu, Yang Xiao, and Jun Zhao. 2018. Dcfee: A document-level chinese financial event extraction system based on automatically labeled training data. Proceedings of ACL 2018, System Demonstrations, pages 50–55. Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, and Dongyan Zhao. 2018. Scale up event extraction learning via automatic training data generation. In Thirty-Second AAAI Conference on Artificial Intelligence.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5295–5300 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5295 Improving Open Information Extraction via Iterative Rank-Aware Learning Zhengbao Jiang, Pengcheng Yin, Graham Neubig Language Technologies Institute Carnegie Mellon University {zhengbaj, pcyin, gneubig}@cs.cmu.edu Abstract Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method.1 1 Introduction Open information extraction (IE, Sekine (2006); Banko et al. (2007)) aims to extract open-domain assertions represented in the form of n-tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rulebased (Fader et al., 2011) and syntax-driven systems (Mausam et al., 2012; Corro and Gemulla, 2013), and recently has used neural networks for supervised learning (Stanovsky et al., 2018; Cui et al., 2018; Sun et al., 2018; Duh et al., 2017; Jia et al., 2018). A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on tradeoffs between the precision and recall of extracted 1Code and data are available at https://github. com/jzbjyb/oie_rank generate model t extractions extractions up to t merge minimize binary classification loss model t+1 extractions up to t+1 Figure 1: Iterative rank-aware learning. assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model (Stanovsky et al., 2018; Sun et al., 2018). However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions. To calibrate open IE confidences and make them more globally comparable across different sentences, we propose an iterative rank-aware learning approach, as outlined in Fig. 1. 
Given extractions generated by the model as training samples, we use a binary classification loss to explicitly increase the confidences of correct extractions and decrease those of incorrect ones. Without adding additional model components, this training paradigm naturally leads to a better open IE model, whose extractions can be further included as training samples. We further propose an iterative learning procedure that gradually improves the model by incrementally adding extractions to the training data. Experiments on the OIE2016 dataset (Stanovsky and Dagan, 2016) indicate that our method significantly outperforms both neural and non-neural models.

2 Neural Models for Open IE

We briefly revisit the formulation of open IE and the neural network model used in our paper.

2.1 Problem Formulation

Given a sentence $s = (w_1, w_2, ..., w_n)$, the goal of open IE is to extract assertions in the form of tuples $r = (p, a_1, a_2, ..., a_m)$, composed of a single predicate and $m$ arguments. Generally, these components of $r$ need not be contiguous, but to simplify the problem we assume they are contiguous spans of words from $s$ and that there is no overlap between them. Methods to solve this problem have recently been formulated as sequence-to-sequence generation (Cui et al., 2018; Sun et al., 2018; Duh et al., 2017) or sequence labeling (Stanovsky et al., 2018; Jia et al., 2018). We adopt the second formulation because it is simple and can take advantage of the fact that assertions only consist of words from the sentence. Within this framework, an assertion $r$ can be mapped to a unique BIO (Stanovsky et al., 2018) label sequence $y$ by assigning O to the words not contained in $r$, $B_p$/$I_p$ to the words in $p$, and $B_{a_i}$/$I_{a_i}$ to the words in $a_i$ respectively, depending on whether the word is at the beginning or inside of the span. The label prediction $\hat{y}$ is made by the model given a sentence associated with a predicate of interest $(s, v)$. At test time, we first identify verbs in the sentence as candidate predicates. Each sentence/predicate pair is fed to the model and extractions are generated from the label sequence.

2.2 Model Architecture and Decoding

Our training method in § 3 could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE (Stanovsky et al., 2018; He et al., 2017), a stacked BiLSTM with highway connections (Zhang et al., 2016; Srivastava et al., 2015) and recurrent dropout (Gal and Ghahramani, 2016). The input of the model is the concatenation of a word embedding and another embedding indicating whether the word is the predicate: $x_t = [W_{emb}(w_t), W_{mask}(w_t = v)]$. The probability of the label at each position is calculated independently using a softmax function: $P(y_t \mid s, v) \propto \exp(W_{label} h_t + b_{label})$, where $h_t$ is the hidden state of the last layer. At decoding time, we use the Viterbi algorithm to reject invalid label transitions (He et al., 2017), such as $B_{a_2}$ followed by $I_{a_1}$.2 We use the average log probability of the label sequence (Sun et al., 2018) as its confidence:3

$c(s, v, \hat{y}) = \frac{\sum_{t=1}^{|s|} \log P(\hat{y}_t \mid s, v)}{|s|}$.   (1)

The probabilities are trained with maximum likelihood estimation (MLE) on the gold extractions. This formulation lacks an explicit concept of cross-sentence comparison, and thus incorrect extractions from one sentence could have higher confidence than correct extractions from another sentence.

2This formulation cannot easily handle coordination, where multiple instances of an argument are extracted for a single predicate, so we use a heuristic of keeping only the first instance of an argument.
3The log probability is normalized by the length of the sentence to avoid bias towards short sentences. The original confidence score in RnnOIE is slightly different from ours. Empirically, we found them to perform similarly.
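A minimal sketch of the confidence in Eq. (1) is given below, assuming the per-token log-probabilities are already available from the tagger; the label-set size and the toy inputs are illustrative, not the actual RnnOIE outputs.

```python
# Sketch of Eq. (1): length-normalized log-likelihood of the predicted BIO sequence.
# The toy distributions stand in for the BiLSTM tagger's per-token softmax outputs.
import torch

def sequence_confidence(log_probs, predicted_labels):
    # log_probs: (seq_len, num_labels) with log P(label | sentence, predicate).
    # predicted_labels: (seq_len,) BIO label indices chosen at decoding time.
    picked = log_probs.gather(1, predicted_labels.unsqueeze(1)).squeeze(1)  # (seq_len,)
    return picked.mean().item()   # average log-probability over the sentence

if __name__ == "__main__":
    torch.manual_seed(0)
    seq_len, num_labels = 6, 9    # e.g. O plus B/I tags for p, a1, a2, ... (assumed)
    log_probs = torch.log_softmax(torch.randn(seq_len, num_labels), dim=-1)
    prediction = log_probs.argmax(dim=-1)   # Viterbi decoding in the real model
    print(sequence_confidence(log_probs, prediction))
```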
3 Iterative Rank-Aware Learning

In this section, we describe our proposed binary classification loss and iterative learning procedure.

3.1 Binary Classification Loss

To alleviate the problem of incomparable confidences across sentences, we propose a simple binary classification loss to calibrate confidences so that they become globally comparable. Given a model $\theta'$ trained with MLE, beam search is performed to generate the assertions with the highest probabilities for each predicate. Assertions are annotated as either positive or negative with respect to the gold standard, and are used as training samples to minimize the hinge loss:

$\hat{\theta} = \arg\min_{\theta} \; \mathbb{E}_{s \in D,\; v, \hat{y} \in g_{\theta'}(s)} \; \max\left(0,\, 1 - t \cdot c_{\theta}(s, v, \hat{y})\right)$,   (2)

where $D$ is the training sentence collection, $g_{\theta'}$ represents the candidate generation process, and $t \in \{1, -1\}$ is the binary annotation. $c_{\theta}(s, v, \hat{y})$ is the confidence score calculated as the average log probability of the label sequence. The binary classification loss distinguishes positive extractions from negative ones generated across different sentences, potentially leading to a more reliable confidence measure and better ranking performance.

Algorithm 1: Iterative learning.
Input: training data D, initial model θ(0)
Output: model after convergence θ
t ← 0  # iteration
E ← ∅  # generated extractions
while not converged do
    E ← E ∪ {(s, v, ŷ) | v, ŷ ∈ g_θ(t)(s), ∀s ∈ D}
    θ(t+1) ← arg min_θ E_{(s,v,ŷ)∈E} max(0, 1 − t · c_θ(s, v, ŷ))
    t ← t + 1
end

3.2 Iterative Learning

Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one round of training becomes better not only at confidence modeling but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (Alg. 1) incrementally includes extractions generated by the current model as training samples, optimizes the binary classification loss to obtain a better model, and continues this procedure until convergence.

4 Experiments

4.1 Experimental Settings

Dataset We use the OIE2016 dataset (Stanovsky and Dagan, 2016) to evaluate our method, which only contains verbal predicates. OIE2016 is automatically generated from the QA-SRL dataset (He et al., 2015), and to remove noise, we remove extractions without predicates, with fewer than two arguments, and with multiple instances of an argument. The statistics of the resulting dataset are summarized in Tab. 1.

              Train   Dev.   Test
# sentence    1 688    560    641
# extraction  3 040    971  1 729

Table 1: Dataset statistics.

Evaluation Metrics We follow the evaluation metrics described by Stanovsky and Dagan (2016): area under the precision-recall curve (AUC) and F1 score.
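Before the results, the rank-aware objective of Eq. (2) can be written down in a few lines; the confidences and ±1 labels below are made-up numbers, with +1 marking a generated extraction that matches the gold standard and -1 one that does not.

```python
# Hedged sketch of the binary-classification (hinge) loss in Eq. (2). Confidences
# are average log-probabilities as in Eq. (1); the toy values are made up.
import torch

def rank_aware_hinge_loss(confidences, labels):
    # max(0, 1 - t * c), averaged over generated extractions: correct extractions
    # are pushed above the margin, incorrect ones below it, across sentences.
    return torch.clamp(1.0 - labels * confidences, min=0.0).mean()

if __name__ == "__main__":
    # Two correct (+1) and two incorrect (-1) extractions with toy confidences.
    confidences = torch.tensor([-0.2, -1.5, -0.3, -2.0], requires_grad=True)
    labels = torch.tensor([1.0, 1.0, -1.0, -1.0])
    loss = rank_aware_hinge_loss(confidences, labels)
    loss.backward()
    print(loss.item())        # extractions on the wrong side of the margin are penalized
    print(confidences.grad)   # gradient descent raises correct, lowers incorrect confidences
```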
An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts.4 Baselines We compare our method with both competitive neural and non-neural models, including RnnOIE (Stanovsky et al., 2018), OpenIE4,5 ClausIE (Corro and Gemulla, 2013), and PropS (Stanovsky et al., 2016). Implementation Details Our implementation is based on AllenNLP (Gardner et al., 2018) by adding binary classification loss function on the implementation of RnnOIE.6 The network consists of 4 BiLSTM layers (2 forward and 2 backward) with 64-dimensional hidden units. ELMo (Peters et al., 2018) is used to map words into contextualized embeddings, which are concatenated with a 100-dimensional predicate indicator embedding. The recurrent dropout probability is set to 0.1. Adadelta (Zeiler, 2012) with ϵ = 10−6 and ρ = 0.95 and mini-batches of size 80 are used to optimize the parameters. Beam search size is 5. 4.2 Evaluation Results Tab. 4 lists the evaluation results. Our base model (RnnOIE, § 2) performs better than non-neural systems, confirming the advantage of supervised training under the sequence labeling setting. To test if the binary classification loss (E.q. 2, § 3) could yield better-calibrated confidence, we perform one round of fine-tuning of the base model with the hinge loss (+Binary loss in Tab. 4). We show both the results of using the confidence (E.q. 1) of the fine-tuned model to rerank the extractions of the base model (Rerank Only), and the end-to-end performance of the fine-tuned model in assertion generation (Generate). We 4The absolute performance reported in our paper is much lower than the original paper because the authors use a more lenient lexical overlap metric in their released code: https://github.com/gabrielStanovsky/ oie-benchmark. 5https://github.com/dair-iitd/ OpenIE-standalone 6https://allennlp.org/models# open-information-extraction 5298 sentence old new label rank rank A CEN forms an important but small part of a Local Strategic Partnership . 3 1  An animal that cares for its young but shows no other sociality traits is said to be “ subsocial” . 2 2  A casting director at the time told Scott that he had wished that he’d met him a week before ; he was casting for the “G.I. Joe” cartoon. 1 3  Table 2: Case study of reranking effectiveness. Red for predicate and blue for arguments. sentence label A Democrat , he became the youngest mayor in Pittsburgh’s history in September 2006 at the age of 26 .  A motorcycle speedway long-track meeting , one of the few held in the UK, was staged at Ammanford.  Table 3: Case study of generation effectiveness. Red for predicate and blue for arguments. 0.06 0.08 0.1 0.12 0.14 0.16 1 2 3 4 5 6 7 8 9 10 AUC rerank generate pos rerank pos generate 0.24 0.27 0.3 0.33 0.36 1 2 3 4 5 6 7 8 9 10 F1 rerank generate pos rerank pos generate Figure 2: AUC and F1 at different iterations. found both settings lead to improved performance compared to the base model, which demonstrates that calibrating confidence using binary classification loss can improve the performance of both reranking and assertion generation. Finally, our proposed iterative learning approach (Alg. 1, § 3) significantly outperforms non-iterative settings. We also investigate the performance of our iterative learning algorithm with respect to the number of iterations in Fig. 2. The model obtained at each iteration is used to both rerank the extractions generated by the previous model and generate new extractions. 
We also report results of using only positive samples for optimization. We observe the AUC and F1 of both reranking and generation increases simultaneously for the first 6 iterations and converges after that, which demonstrates the effecSystem AUC F1 Non-neural Systems PropS .006 .065 ClausIE .026 .144 OpenIE4 .034 .164 Neural Systems Base Model (RnnOIE) .050 .204 +Binary loss (§ 3.1), Rerank Only .091 .225 +Binary loss (§ 3.1), Generate .092 .260 +Iterative Learning (§ 3.2) .125 .315 Table 4: AUC and F1 on OIE2016. tiveness of iterative training. The best performing iteration achieves AUC of 0.125 and F1 of 0.315, outperforming all the baselines by a large margin. Meanwhile, using both positive and negative samples consistently outperforms only using positive samples, which indicates the necessity of exposure to the errors made by the system. Case Study Tab. 2 compares extractions from RnnOIE before and after reranking. We can see the order is consistent with the annotation after reranking, showing the additional loss function’s efficacy in calibrating the confidences; this is particularly common in extractions with long arguments. Tab. 3 shows a positive extraction discovered after iterative training (first example), and a wrong extraction that disappears (second example), which shows that the model also becomes better at assertion generation. 5299 overgenerated wrong missing predicate argument argument 41% 38% 21% Table 5: Proportions of three errors. Error Analysis Why is the performance still relatively low? We randomly sample 50 extractions generated at the best performing iteration and conduct an error analysis to answer this question. To count as a correct extraction, the number and order of the arguments should be exactly the same as the ground truth and syntactic heads must be included, which is challenging considering that the OIE2016 dataset has complex syntactic structures and multiple arguments per predicate. We classify the errors into three categories and summarize their proportions in Tab. 5. “Overgenerated predicate” is where predicates not included in ground truth are overgenerated, because all the verbs are used as candidate predicates. An effective mechanism should be designed to reject useless candidates. “Wrong argument” is where extracted arguments do not coincide with ground truth, which is mainly caused by merging multiple arguments in ground truth into one. “Missing argument” is where the model fails to recognize arguments. These two errors usually happen when the structure of the sentence is complicated and coreference is involved. More linguistic information should be introduced to solve these problems. 5 Conclusion We propose a binary classification loss function to calibrate confidences in open IE. Iteratively optimizing the loss function enables the model to incrementally learn from trial and error, yielding substantial improvement. An error analysis is performed to shed light on possible future directions. Acknowledgements This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute. References Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2670–2676. Luciano Del Corro and Rainer Gemulla. 2013. Clausie: clause-based open information extraction. In 22nd International World Wide Web Conference, pages 355–366. Lei Cui, Furu Wei, and Ming Zhou. 
2018. Neural open information extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 407–413. Kevin Duh, Benjamin Van Durme, and Sheng Zhang. 2017. MT/IE: cross-lingual open information extraction with neural sequence-to-sequence models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 64–70. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, pages 1019–1027. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. CoRR, abs/1803.07640. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 473–483. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 643–653. Shengbin Jia, Yang Xiang, and Xiaojun Chen. 2018. Supervised neural models revitalize the open relation extraction. CoRR, abs/1809.09408. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237. 5300 Satoshi Sekine. 2006. On-demand information extraction. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, pages 2377–2385. Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2300–2305. Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting more out of syntax with props. CoRR, abs/1603.01648. Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 885–895. 
Mingming Sun, Xu Li, Xin Wang, Miao Fan, Yue Feng, and Ping Li. 2018. Logician: A unified end-to-end neural approach for open-domain information extraction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 556–564. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James R. Glass. 2016. Highway long short-term memory RNNS for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 5755–5759.
2019
523
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5301–5307 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5301 Towards Improving Neural Named Entity Recognition with Gazetteers Tianyu Liu∗ Peking University [email protected] Jin-Ge Yao Chin-Yew Lin Microsoft Research Asia {jinge.yao,cyl}@microsoft.com Abstract Most of the recently proposed neural models for named entity recognition have been purely data-driven, with a strong emphasis on getting rid of the efforts for collecting external resources or designing hand-crafted features. This could increase the chance of overfitting since the models cannot access any supervision signal beyond the small amount of annotated data, limiting their power to generalize beyond the annotated entities. In this work, we show that properly utilizing external gazetteers could benefit segmental neural NER models. We add a simple module on the recently proposed hybrid semi-Markov CRF architecture and observe some promising results. 1 Introduction In the past few years, neural models have become dominant in research on named entity recognition (NER) (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016, inter alia), as they effectively utilize distributed representations learned from large-scale unlabeled texts (Pennington et al., 2014; Peters et al., 2018; Devlin et al., 2018, inter alia), while avoiding the huge efforts required for designing hand-crafted features or gathering external lexicons. Results from modern neural NER models have achieved new state-of-the-art performance over standard benchmarks such as the popular CoNLL 2003 shared task dataset (Tjong Kim Sang and De Meulder, 2003). An end-to-end model with the property of letting the data speak for itself seems to be appealing at first sight. However, given that the amount of labeled training data for NER is relatively small when compared with other tasks with millions of training examples, the annotated entities could only achieve a rather limited coverage for a theoretically infinite space of variant entity names. ∗Work during internship at Microsoft Research Asia Moreover, current neural architectures heavily rely on the word form due to the use of word embeddings and character embeddings, which could lead to a high chance of overfitting. 1 For instance, all the appearances of the single token Clinton in the CoNLL 2003 dataset are person names, while in practice it is also possible to refer to locations.2 Data-driven end-to-end models trained on that dataset could implicitly bias towards predicting PERSON for most occurrences of Clinton even under some contexts when it refers to a location. On the other hand, for frequently studied languages such as English, people have already collected dictionaries or lexicons consisting of long lists of entity names, known as gazetteers. Gazetteers could be treated as an external source of knowledge that could guide models towards wider coverage beyond the annotated entities in NER datasets. In traditional log-linear named entity taggers (Ratinov and Roth, 2009; Luo et al., 2015), gazetteers are commonly used as discrete features in the form of whether the current token or current span is appearing in the gazetter or not. There does not seem to be any reason for a neural model not to utilize the off-the-shelf gazetters. In this paper, we make a simple attempt in utilizing gazetteers in neural NER. 
Building on a recently proposed architecture called hybrid semiMarkov conditional random fields (HSCRFs) where span-level scores are derived from tokenlabel scores, we introduce a simple additional module that scores a candidate entity span by the degree it softly matches the gazetteer. Experimental studies over CoNLL 2003 and OntoNotes show the utility of gazetteers for neural NER models. 1In fact, traditional feature-based models also suffer from similar overfitting issues when trained on limited data, but in practice they could be easily spotted and fixed due to the transparency of linear feature weights. 2See e.g., https://en.wikipedia.org/wiki/ Clinton_(disambiguation) 5302 2 Framework 2.1 Hybrid semi-Markov CRFs Our approach is by nature based on the hybrid semi-Markov conditional random fields (HSCRFs) proposed by Ye and Ling (2018), which connect traditional CRFs (Lafferty et al., 2001) and semi-Markov CRFs (Sarawagi and Cohen, 2005) by simultaneously leveraging token-level and segment-level scoring information. Let s = ⟨s1, . . . , sp⟩denote a segmentation of input sequence x = ⟨x1, . . . , xn⟩, where a segment sj = ⟨tj, uj, yj⟩represents a span with a start position tj, an end position uj, and a label yj ∈Y . We assume that all segments have positive lengths and the start position of the first segment is always 1, then the segmentation s satisfies t1 = 1, up = n, uj −tj ≥0, and tj+1 = uj + 1 for 1 ≤j < p. Let l = ⟨l1, . . . , ln⟩be the corresponding token-level labels of x. A traditional semi-CRF (Sarawagi and Cohen, 2005) gives a segmentation of an input sequence and assign labels to each segment in it. For named entity recognition tasks, a correct segmentation of the sentence Scottish Labour Party narrowly backs referendum should be s = ⟨(1, 3, ORG), (4, 4, O), (5, 5, O), (6, 6, O)⟩, and the token-level label sequence under a BILOU tagging scheme 3 should become l = ⟨B−ORG, I−ORG, L−ORG, O, O, O⟩. HSCRFs inherit the definition of segmentation probability from traditional semi-CRFs. Given a sequence x = ⟨x1, . . . , xn⟩, the probability of segmentation s = ⟨s1, . . . , sp⟩is defined as Pr(s | x) = score(s, x) Z(x) , (1) where score(s, x) = Qp j=1 ψ(yj, yj+1, x, tj, uj), and Z(x) = P s′ score(s′, x) is the normalization term. Note that yp+1 is defined as a special ⟨END⟩. The Viterbi algorithm could be used for decoding, i.e., getting the most likely segmentation for a query sentence. HSCRFs employ a specific method to calculate the segment score using token-level labels, with the score potential function ψ(·) defined as ψ(yj, yj+1, x, tj, uj) = exp (φj + byj,yj+1), 3In the BILOU scheme, a model should learn to identify the Beginning, the Inside and the Last tokens of multi-token chunks as well as Outside tokens and Unit-length chunks. where φj = uj X i=tj ϕHSCRF token (li, v′ i) = uj X i=tj a⊺ liv′ i, (2) and byj,yj+1 is the segment label transition score from yj to yj+1, ϕtoken(li, wi) calculates the score of the i-th token being classified into token-level label li , v′ i is the feature representation vector of the i-th token xi, and ali is the weight parameter vector for token label li. In HSCRFs, v′ i is the concatenation of (1) BiLSTM encoded representation vi, (2) vuj −vtj, and (3) emb(i −tj + 1), the position embedding in the segment. 
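The way HSCRF ties segment scores to token-level scores can be illustrated with a short sketch: a segment (t_j, u_j, y_j) is expanded into BILOU token labels and scored as the sum of the corresponding token-label scores, as in Eq. 2; the label transition term b_{y_j, y_{j+1}} is omitted here. Positions are 0-based and the score matrix is random, purely for illustration.

```python
# Sketch of the HSCRF segment score: expand a segment into BILOU token labels
# and sum the token-level label scores a_{l_i}^T v'_i over the segment.
import numpy as np

def expand_bilou(start, end, label):
    """Token-level labels for one segment under the BILOU scheme."""
    if label == "O":
        return ["O"] * (end - start + 1)
    if start == end:
        return ["U-" + label]
    return ["B-" + label] + ["I-" + label] * (end - start - 1) + ["L-" + label]

def segment_score(token_scores, start, end, label, label_index):
    """phi_j = sum_i a_{l_i}^T v'_i, read from a precomputed (tokens x labels) score matrix."""
    tags = expand_bilou(start, end, label)
    return sum(token_scores[i, label_index[t]] for i, t in zip(range(start, end + 1), tags))

# "Scottish Labour Party narrowly backs referendum", segment (0, 2, ORG)
label_index = {t: i for i, t in enumerate(["O", "B-ORG", "I-ORG", "L-ORG", "U-ORG"])}
token_scores = np.random.randn(6, len(label_index))   # stands in for a_l^T v'_i per token/label
print(expand_bilou(0, 2, "ORG"), segment_score(token_scores, 0, 2, "ORG", label_index))
```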
2.2 Gazetteer-enhanced sub-tagger The most na¨ıve attempt could be treating each gazetteer entity as an additional labeled training sentence, but we found consistently decreased performance in our initial experiments, as this would introduce a shift of label distribution given that the amount of gazetteer entity entries are typically large. Therefore, it seems more natural to utilize gazetteers in a separate module rather than na¨ıvely using them as augmented data. The structure of HSCRFs makes it straightforward to introduce a scoring scheme for candidate spans based on gazetteers. Following the scoring scheme of HSCRFs, we train a span classifier in the form of a sub-tagger and extract token-level features at the same time. Let z = ⟨z1, . . . , zk⟩ be an entity in the gazetteer with a corresponding label m. This span-level label can be expanded into token-level labels m1, . . . , mk. For example, the entity Scottish Labour Party is labeled as ⟨B−ORG, I−ORG, L−ORG⟩and Berlin is labeled as ⟨U−LOC⟩under the BILOU scheme. Similar to Equation 2, the scoring function of our sub-tagger is defined as φ(m, z) = k X i=1 ϕsubtagger token (mi, zi) = k X i=1 w⊺ miv′ i (3) where v′ i is defined in Section 2.1 and wmi is the weight parameter vector for token label mi. We calculate sigmoid φ(m, z)  as the probability of category m and minimize the cross-entropy loss for training this sub-tagger. The token-level BILOU scores derived from the sub-tagger are larger at scale. We rescale the scores with the tanh activation function 5303 Scottish Labour Party narrowly backs Soft Dictionary Lookup BiLSTM Layer GloVe CNN Char-enc ELMo Token-level Representation Segment Score (1,3,ORG) (4,4,O) (5,5,O) ... ... Semi-CRF Figure 1: Overall architecture and concatenate them with the corresponding token representation v′ i (defined in Section 2.1). Thus, an additional soft dictionary feature vector ηi = L m∈M tanh ϕsubtagger token (m, zi)  is derived for each token in a segment, where L is the concatenation operation and M is the set of all BILOU scheme token-level labels. The final φj for soft dictionary enhanced HSCRF is: φj = uj X i=tj ϕsoftdict token (li, µi) = uj X i=tj b⊺ liµi, (4) where µi = ηi L v′ i and b⊺ li is the new weight parameter for token label li. The HSCRF model and the sub-tagger derived from it are linear in the way they calculate the span scores. Unlike other semi-CRF models (Zhuo et al., 2016; Zhai et al., 2017; Sato et al., 2017) which utilize neural approaches to derive span scores from word-level representations, HSCRF calculates span score by summing up word-level scores inside a span along BILOU paths constrained by tag mi’s. This sub-tagger could be analogously treated as playing the role of soft dictionary look-ups, as opposed to the traditional way that activates a discrete feature only for hard token/span matches. 3 Experiments 3.1 Gazetteers We use the gazetteers contained in the publicly available UIUC NER system (Khashabi et al., 2018). The gazetteers were originally collected from the web and Wikipedia, consisting of around 1.5 million entities grouped into 79 fine-grained categories. We trimmed and mapped these groups into CoNLL-formatted NER tags (see Appendix for details) with about 1.3 million entities kept. 3.2 Dataset Evaluation is performed on the CoNLL-2003 English NER shared task dataset (Tjong Kim Sang and De Meulder, 2003) and the OntoNotes 5.0 dataset (Pradhan et al., 2013). 
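Returning briefly to the sub-tagger of Section 2.2, a minimal sketch of the soft-dictionary feature of Eq. 4: token-level BILOU scores from the gazetteer sub-tagger are squashed with tanh and concatenated onto each token representation v'_i before HSCRF scoring. The linear layer stands in for the trained sub-tagger weights w_m; the 17 labels (B/I/L/U for the four CoNLL types plus O) and the 300-dimensional token vectors are assumed sizes, not the authors' configuration.

```python
# Sketch of the soft-dictionary feature: eta_i = tanh of the sub-tagger's
# token-level BILOU scores, concatenated with v'_i to give mu_i (Eq. 4).
import torch
import torch.nn as nn

N_BILOU, FEAT = 17, 300
subtagger = nn.Linear(FEAT, N_BILOU, bias=False)       # w_m^T v'_i for every token label m

def soft_dictionary_concat(token_reprs: torch.Tensor) -> torch.Tensor:
    """(n_tokens, FEAT) -> (n_tokens, N_BILOU + FEAT): mu_i = eta_i (+) v'_i."""
    eta = torch.tanh(subtagger(token_reprs))            # rescaled sub-tagger scores eta_i
    return torch.cat([eta, token_reprs], dim=-1)

mu = soft_dictionary_concat(torch.randn(6, FEAT))       # six tokens
print(mu.shape)                                         # torch.Size([6, 317])
```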
We follow the standard train/development/test split described in the original papers along with previous evaluation settings (Chiu and Nichols, 2016). 3.3 Training Due to the space limit, we leave hyperparameter details to the supplementary materials. 4 Word representation The representation for a word consists of three parts: pretrained 50dimensional GloVe word embedding (Pennington et al., 2014), contextualized ELMo embedding (Peters et al., 2018), along with a convolutional character encoder trained from randomly initialized character embeddings, following previous work (Ye and Ling, 2018). Gazetteer-enhanced sub-tagger We randomly split the gazetteer entities for training (80%) and validation (20%), and sampled 1 million nonentity n-grams (the maximal n is 7) from the CoNLL 2003 training set excluding named entities as negative samples (O labels). We applied early stopping on validation loss when training the sub-tagger. 3.4 Alternative baselines with gazetteers Many previous NER systems (Ratinov and Roth, 2009; Passos et al., 2014; Chiu and Nichols, 2016) make use of discrete gazetteer features by directly concatenating them with word-level representations. Apart from simple discrete feature concatenation, we also compare our framework with another baseline that utilizes gazetteer embedding as 4Our implementation is available at: https:// github.com/lyutyuh/acl19_subtagger 5304 an additional feature. We add a single embedding layer for discrete gazetteer features. To be more specific, if a text span corresponds to multiple tags in the gazetteer, we sum all the embedded vector as the final gazetteer tag representation. Otherwise, if a text span has no corresponding tags in the gazetteer, a zero vector of the same dimension will be chosen. Then, the gazetteer tag representation is concatenated with each word-level representation inside a span. 3.5 Results Table 1 shows the results on the CoNLL 2003 dataset and OntoNotes 5.0 dataset respectively. HSCRFs using gazetteer-enhanced sub-tagger outperform the baselines, achieving comparable results with those of more complex or larger models on CoNLL 2003 and new state-of-the-art results on OntoNotes 5.0. We also attached some out-of-domain analysis in the Appendix. Model Test Set F1-score(±std) CoNLL OntoNotes Ma and Hovy (2016) 91.21 Lample et al. (2016) 90.94 Liu et al. (2018) 91.24±0.12 Devlin et al. (2018) 92.8 Chiu and Nichols (2016) 5 91.62±0.33 86.28±0.26 Ghaddar and Langlais ’18 91.73±0.10 87.95±0.13 Peters et al. (2018) 92.22±0.10 89.04±0.27 Clark et al. (2018) 92.6 ±0.1 88.8±0.1 Akbik et al. (2018) 93.09±0.12 89.71 HSCRF 92.54±0.11 89.38±0.11 HSCRF + concat 92.52±0.09 89.73±0.19 HSCRF + gazemb 92.63±0.08 89.77±0.20 HSCRF + softdict 92.75±0.18 89.94±0.16 Table 1: Results on CoNLL 2003 and OntoNotes 5.0 To better attribute the improments of our model, we split the test sets into four non-overlapped subsets according to whether an entity appears in the train set and gazetteer or not, and collect results respectively. We evaluate the performance of our systems on these subsets. Details of the evaluation of each system are shown in Table 2 and Table 3. We observe that our current approach of subtagger soft-dictionary matching consistently improves over baseline approaches on most subsets, while direct concatenating discrete gazetteer features or using gazetteer embedding have sometimes decrease the performance. However, the re5This work also introduced discrete gazetteer features. 
We tried their scheme on our gazetteer but we only found consistently decreased performance over the baseline HSCRF. sults on CoNLL and OntoNotes reveal slightly different patterns for the feature concatenation baseline and the gazetteer embedding baseline, making it difficult to analyze the underlying reasons. We leave more systematic experimental studies over the baselines to future work. We also evaluate the gazetteer sub-tagger on the held-out data of the gazetteer to analyze the potential impact of this module. For predictions, we choose the labels with the highest possibility. If none of the label receives a probability greater than 50%, the sample will be labeled as not being an entity. The results are reported in Table 4. We can see that while the sub-tagger module could help a lot in identifying person names (PER) and organization names (ORG), currently the worst-performing category is the miscellaneous type (MISC), which is possibly a result of the diversity in this category. Improving the prediction of such entities might further provide performance gains for named entity recognition in general. 4 Discussion Experimental results demonstrate the usefulness of gazetteer knowledge and show some promising results from our initial attempt to make use of gazetteer information. The sub-tagger has an advantage over hard matching with the capability of recognizing entity names not appearing in but being similar to those contained in the gazetteer. Table 5 lists some examples that the baselines failed to recognize as a complete entity name, while the sub-tagger enhanced system managed to do it. We checked a few cases for which only the sub-tagger enhanced model got correct predictions, and found terms with similar patterns from the gazetteer while not in training data as in Table 6. The gazetteer possesses an abundance of similar terms that enables generalization to out-ofgazetteer items. In summary, we show that gazetteer-enhanced modules could be useful for neural NER models. Future directions will include trying similarly enhanced modules on other different types of segmental models (Kong et al., 2016; Liu et al., 2016; Zhuo et al., 2016; Zhai et al., 2017; Sato et al., 2017), along with richer representations for further gain. Also, we would like to further explore the possibility to use domain-specific gazetteers or dictionaries to boost the performance of NER in 5305 Model Subset (number of entities with proportions) neither gazetteer only training set only both 2042 (36.5%) 655 (11.7%) 1765 (31.5%) 1131 (20.2%) HSCRF 84.41 81.37 82.86 97.26 98.38 97.82 96.72 99.09 97.89 96.58 99.84 98.18 HSCRF+gazemb 85.07 81.72 83.36 96.21 98.57 97.38 96.85 99.06 97.94 96.42 99.85 98.11 HSCRF+concat 85.29 81.34 83.27 96.11 98.68 97.38 96.90 99.35 98.11 96.37 99.91 98.11 HSCRF+softdict 84.93 82.16 83.52 97.40 98.53 97.96 97.07 99.31 98.18 96.54 99.91 98.19 Table 2: Detailed test set performance (Precision, Recall, F1) on CoNLL. Model Subset (number of entities with proportions) neither gazetteer only training set only both 2765 (36.5%) 720 (9.5%) 3601 (47.6%) 470 (6.2%) HSCRF 80.15 70.42 74.97 95.31 96.48 95.89 92.55 98.91 95.62 95.46 99.66 97.52 HSCRF+gazemb 80.41 71.41 75.64 94.70 96.53 95.60 92.38 98.91 95.53 95.15 99.48 97.27 HSCRF+concat 80.29 72.13 75.99 95.82 96.71 96.26 93.16 98.95 95.97 95.13 99.52 97.27 HSCRF+softdict 80.58 73.36 76.80 96.38 96.46 96.42 93.25 98.96 96.01 95.80 99.62 97.67 Table 3: Detailed test set performance (Precision, Recall, F1) on OntoNotes. 
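The four-way split behind Tables 2 and 3 can be reproduced with a few lines: each test entity is bucketed by whether its surface form occurs in the training set and/or the gazetteer. Matching on the lowercased surface string is an assumption made for illustration; the paper does not spell out its exact matching rule.

```python
# Sketch of the four-way bucketing of test entities used in Tables 2 and 3.
def bucket(entity, train_entities, gazetteer_entities):
    in_train = entity.lower() in train_entities
    in_gaz = entity.lower() in gazetteer_entities
    if in_train and in_gaz:
        return "both"
    if in_train:
        return "training set only"
    if in_gaz:
        return "gazetteer only"
    return "neither"

train = {"germany", "clinton"}                      # toy sets of lowercased entity strings
gazetteer = {"germany", "pittsburgh"}
for e in ["Germany", "Clinton", "Pittsburgh", "Ammanford"]:
    print(e, "->", bucket(e, train, gazetteer))
```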
Tag Type Precision Recall F1 PER 96.73 97.08 96.91 LOC 83.98 86.20 85.08 ORG 94.99 87.09 90.87 MISC 87.11 72.02 78.85 Overall 94.39 92.65 93.51 Table 4: Sub-tagger evaluation by category. We report the overall recall, precision, and F1 scores of the CoNLL tag set sub-tagger. HSCRF+softdict U.N. Interim Force in Lebanon { ORG HSCRF+gazemb U.N. Interim Force { ORG in Lebanon { LOC HSCRF U.N. Interim Force { ORG in Lebanon { LOC HSCRF+softdict Hector “Macho” Camacho { PER HSCRF+gazemb Hector “Macho { PER ” Camacho { PER HSCRF Hector { PER “ Macho { PER ” Camacho { PER HSCRF+softdict Bodman, Longely & Dahling { ORG HSCRF+gazemb Bodman { PER , Longely & Dahling { ORG HSCRF Bodman { PER , Longely & Dahling { ORG Table 5: Examples from CoNLL 2003 dev set that the soft-dictionary enhanced model classified correctly while other baselines failed. U.N. Interim Force in Lebanon Special Security Force Bangladesh Islamic Army in Iraq Grand Army of the Republic Hector “Macho” Camacho Charles “Charlie” White Carlos “Carl˜ao” Santos Orlando “Cachaito” L´opez Bodman, Longely & Dahling Ransomes, Sims & Jefferies Cravath, Swaine & Moore Drinker, Biddle & Reath Table 6: Terms similar to CoNLL 2003 dev set entities appearing in the gazetteer. various domains (Shang et al., 2018), beyond the standard corpora. Acknowledgement We thank all the anonymous reviewers for helpful suggestions, especially Reviewer #1 for the thorough comments containing more than 1,500 words in total, from which many points proved to be valuable for improving our initial draft. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Association for Computational Linguistics. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914– 1925. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Abbas Ghaddar and Phillippe Langlais. 2018. Robust lexical features for improved neural network namedentity recognition. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1896–1907. Association for Computational Linguistics. 5306 Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nickolas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazhagan, Qiang Ning, Shaoshi Ling, and Dan Roth. 2018. CogCompNLP: Your Swiss Army Knife for NLP. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Segmental recurrent neural networks. In International Conference on Learning Representations. John D Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In AAAI Conference on Artificial Intelligence. Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2880–2886. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 879–888, Lisbon, Portugal. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 78–86, Ann Arbor, Michigan. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL ’09, pages 147–155, Stroudsburg, PA, USA. Association for Computational Linguistics. Sunita Sarawagi and William W Cohen. 2005. Semimarkov conditional random fields for information extraction. In Advances in neural information processing systems, pages 1185–1192. Motoki Sato, Hiroyuki Shindo, Ikuya Yamada, and Yuji Matsumoto. 2017. Segment-level neural conditional random fields for named entity recognition. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 97–102, Taipei, Taiwan. Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. 
Learning named entity tagger using domain-specific dictionary. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2054–2064, Brussels, Belgium. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Zhixiu Ye and Zhen-Hua Ling. 2018. Hybrid semimarkov crf for neural sequence labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 235–240. Association for Computational Linguistics. 5307 Feifei Zhai, Saloni Potdar, Bing Xiang, and Bowen Zhou. 2017. Neural models for sequence chunking. In Thirty-First AAAI Conference on Artificial Intelligence. Jingwei Zhuo, Yong Cao, Jun Zhu, Bo Zhang, and Zaiqing Nie. 2016. Segment-level sequence modeling using gated recursive semi-Markov conditional random fields. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1413–1423, Berlin, Germany. Association for Computational Linguistics.
2019
524
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5308–5314 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5308 Span-Level Model for Relation Extraction Kalpit Dixit Amazon AWS AI, USA [email protected] Yaser Al-Onaizan Amazon AWS AI, USA [email protected] Abstract Relation Extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions. Recent approaches for this spanlevel task have been token-level models which have inherent limitations. They cannot easily define and implement span-level features, cannot model overlapping entity mentions and have cascading errors due to the use of sequential decoding. To address these concerns, we present a model which directly models all possible spans and performs joint entity mention detection and relation extraction. We report a new state-of-the-art performance of 62.83 F1 (prev best was 60.49) on the ACE2005 dataset. 1 Introduction Many NLP tasks follow the pattern of taking raw text as input and then: detecting relevant spans and classifying the relations between those spans. Examples of this include Relation Extraction (Li and Ji, 2014), Coreference Resolution (Ng, 2010) and Semantic Role Labeling (Gildea and Jurafsky, 2002). This class of NLP problems are inherently span-level tasks. This paper focuses on Relation Extraction (RE), which is the task of entity mention detection and classifying the relations between each pair of those mentions. We report a new state-of-the-art performance of 62.83 F1 (prev best was 60.49) on the ACE2005 dataset. Here is a simple example of Relation Extraction for the sentence, ”Washington, D.C. is the capital of the USA”. Step 1, Entity Mention Detection will detect the spans ”Washington, D.C.” and ”USA” as LOCATIONS. Step 2, Relation Extraction will classify all directed pairs of detected entity mentions. It will classify the directed pair (”Washington, D.C.”, ”USA”) as having the relation IS CAPITAL OF. But the directed pair (”USA”, ”Washington, D.C.”) will be classified as having no relation (NONE). In more complex cases, each entity could participate in multiple different relations. Since (Li and Ji, 2014), work on RE has revolved around end-to-end systems: single models which first perform entity mention detection and then relation extraction. These recent works (Bekoulis et al., 2018; Katiyar and Cardie, 2017; Miwa and Bansal, 2016; Li and Ji, 2014) have used sequential token-level methods for both the steps. Token-level models are primarily constrained by the fact that each token has a single fixed representation while each token is a part of many different spans. To model and extract spans, these token-level models have to resort to approximate span-level features which are increasingly indirect and expensive: Tree-LSTMs (Miwa and Bansal, 2016), CRFs (Bekoulis et al., 2018), Beam Search (Li and Ji, 2014) and Pointer Networks (Katiyar and Cardie, 2017). Their usage of the BILOU (Ratinov and Roth, 2009; Florian et al., 2006) token-tagging scheme makes modelling overlapping entities impossible. In general, these tokenlevel models are sequential in nature and hence have cascading errors. Another end-to-end approach for RE is to use a simple span-level model. 
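The introduction's example can be written down as plain data structures, which makes the two steps explicit: entity mention detection yields labelled token spans, and relation extraction assigns a type (possibly NONE) to every ordered pair of detected spans. The indices and label names below are illustrative.

```python
# The worked example as data: labelled spans from Step 1 and typed ordered
# span pairs from Step 2.  Token indices are inclusive.
tokens = "Washington , D.C. is the capital of the USA".split()

# Step 1: entity mention detection -> (start, end, entity_type)
mentions = [(0, 2, "LOCATION"), (8, 8, "LOCATION")]

# Step 2: relation extraction over ordered pairs of detected mentions
relations = {
    ((0, 2), (8, 8)): "IS_CAPITAL_OF",   # ("Washington , D.C.", "USA")
    ((8, 8), (0, 2)): "NONE",            # the reversed pair carries no relation
}
print(mentions, relations)
```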
A model which creates explicit representations for all possible spans, uses them for the entity mention detection step and then explicitly compares ordered pairs of spans for the relation extraction step. Such a model is not constrained like the token-level models because it can define direct span-specific features for each span inexpensively. Since each possible span is separately considered, selecting overlapping entity mentions is possible. Predicting one span as an entity no longer blocks another span from being predicted as an entity. This approach models each possible span independently and in parallel i.e. it is not sequential and does not suffer from cascad5309 ing errors. Such models have recently found success in similar NLP tasks like Coreference Resolution (Lee et al., 2017) and Semantic Role Labeling (Ouchi et al., 2018). In this paper, we present such a span-level model for Relation Extraction. We propose a simple bi-LSTM based model which generates span representations for each possible span. The span representations are used to perform entity mention detection on all spans in parallel. The same span representations are then used to perform relation extraction on all pairs of detected entity mentions. We evaluated the performance of our model on the ACE2005 dataset (Doddington et al., 2004) and report a new startof-the-art F1 score of 62.83 for Relation Extraction. 2 Related Work Given text input, Relation Extraction involves two steps: span detection and classification of the relation between pairs of detected spans. In the RE literature, these are more commonly called Entity Mention Detection and Relation Extraction respectively. An earlier line of research has focused on only the second step, assuming that the arguments of the relations are given by some other system/oracle (Bunescu and Mooney, 2005; Socher et al., 2012; dos Santos et al., 2015). The more interesting problem is joint Entity Mention Detection and Relation Extraction. More interesting because it simultaneously addresses both steps, enriches embeddings from losses related to both sub-tasks and only requires using a single model during test. Past approaches include Integer Linear Programming (Yang and Cardie, 2013) and Probabilistic Graphical Models (Singh et al., 2013). Li and Ji (2014) modeled this joint task as a Structured Prediction problem and since then most work on RE has revolved around endto-end systems which do the joint task (Miwa and Bansal, 2016; Katiyar and Cardie, 2017; Bekoulis et al., 2018). A common theme in current end-to-end models is the use of token-level models. For the entity mention detection step, recent works (Miwa and Bansal, 2016; Katiyar and Cardie, 2017; Bekoulis et al., 2018) have used the BILOU (Ratinov and Roth, 2009; Florian et al., 2006) tokentagging scheme. For the relation extraction step there have been a variety of methods tried like Tree-LSTMs (Miwa and Bansal, 2016), sequence labeling (Katiyar and Cardie, 2017) and multihead selection (Bekoulis et al., 2018). Li and Ji (2014) used semi-Markov chains and the Viterbi algorithm, which is also a sequential token-level approach. This token-level modeling approach has several limitations as highlighted in Section 1. Recent work using span-level end-to-end models have seen success in NLP tasks following the same pattern as RE (Coreference Resolution (Lee et al., 2017) and Semantic Role Labeling (Ouchi et al., 2018)). In this paper, we adapt (Lee et al., 2017) to create a span-level end-to-end model for RE. 
3 Model Our model consists of three steps which we explain in detail in the next subsections: 1. Span Representation Generation Use task-agnostic raw token embeddings to create task-specific token embeddings for each token. The task-specific token embeddings are used to generate span embeddings for each possible span. 2. Entity Mention Detection (EMD) The span embeddings are used to obtain a vector of entity type scores for each span. Each span is assigned the entity type corresponding to its highest entity type score. The spans that are assigned an entity type other than NONE are selected for Step 3. 3. Relation Extraction (RE) For each ordered span-pair (i, j), we obtain a representation by concatenating the respective span embeddings. This representation is defined in an order-sensitive way in Section 3.3 i.e. the span-pair representation of spans (i, j) is different from that of spans (j, i). For each ordered span-pair, its representation is used to obtain a vector of relation type scores. Each ordered span-pair is assigned the relation type of its highest relation type score. 3.1 Step 1: Span Representation Generation The architecture we use to generate span representations closely follows (Lee et al., 2017). Given a document D with T tokens, there are N = T(T+1) 2 possible spans. span i is defined by all the tokens from START(i) to END(i) inclusive, for 1 ≤i ≤N. The aim is to obtain a span representation gi for each span i. 5310 Raw Token Embeddings We use xt to represent the raw token embeddings of token t with 1 ≤t ≤ T. xt is a concatenation of the following: 1. Fixed Contextual Word Embeddings 2. Fixed Word Embeddings 3. Trained from scratch Character Embeddings We use fixed ELMo (Peters et al., 2018) for Contextual Word Embeddings, fixed Senna (Collobert et al., 2011) for Word Embeddings and train Character Embeddings from scratch. The Contextual Word Embeddings for each sentence were computed separately. In terms of number of free parameters; Contextual Word Embeddings use the most (100’s of millions), followed by Word Embeddings (10’s of millions) and finally Character Embeddings use by far the least (10’s of thousands). The decision to train only the Character Embeddings was based on overfitting concerns given our relatively small dataset. Bi-LSTM Layers The pretrained Contextual Embeddings we use in xt above are obtained by unsupervised task-agnostic training. To obtain task-specific contextualization we use stacked bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) on the raw token embeddings xt to obtain x∗ t, x∗ t = [−→ ht, ←− ht] where −→ ht and ←− ht are the hidden states of the last layer of the forward and backward LSTMs respectively. x∗ t is the concatenation of −→ ht and ←− ht. The bi-LSTMs were run separately on each sentence as that gave better performance. Span Representation Syntactic heads obtained from general syntactic parsers are used in many NLP systems. Here we don’t use general syntactic parsers but instead use attention (Bahdanau et al., 2015) to create a task-specific span-head feature. This feature vector is computed for each span: αt = MLPα(x∗ t) βi,t = exp(αt) END(i) P k=START(i) exp(αk) ˆxi = END(i) X k=START(i) βi,txt where MLPα is a Multi Layer Perceptron (aka Feed Forward Network). ˆxi is a weighted sum of fixed word vectors for the tokens in span i. 
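A minimal sketch of the attention-based span-head feature: a per-token score α_t is computed from the task-specific token states, normalised with a softmax over the tokens inside the span, and used to take a weighted sum of the fixed word embeddings of that span, giving x̂_i. The layer sizes are illustrative (one hidden layer instead of the two used in the paper), and the 50-dimensional word embedding is an assumption.

```python
# Sketch of the span-head attention: alpha_t from the biLSTM states, softmax
# within the span, weighted sum of fixed word embeddings -> x_hat_i.
import torch
import torch.nn as nn

HID, WORD = 400, 50
alpha_mlp = nn.Sequential(nn.Linear(HID, 500), nn.ReLU(), nn.Linear(500, 1))

def span_head(x_star: torch.Tensor, word_emb: torch.Tensor, start: int, end: int) -> torch.Tensor:
    """x_star: (T, HID) biLSTM states; word_emb: (T, WORD) fixed word embeddings."""
    alpha = alpha_mlp(x_star[start:end + 1]).squeeze(-1)   # alpha_t for the span's tokens
    beta = torch.softmax(alpha, dim=0)                     # beta_{i,t}
    return beta @ word_emb[start:end + 1]                  # x_hat_i, a (WORD,)-dim vector

print(span_head(torch.randn(7, HID), torch.randn(7, WORD), 2, 5).shape)
```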
We did experiment with using the weighted sum of the biLSTM output (x∗ t) or of the ELMo (Peters et al., 2018) fixed contextual word embeddings but got better results with using fixed word embeddings. For each span i, its span representation gi was defined as: gi = [x∗ START(i), x∗ END(i), ˆxi, φ(i)] where φ(i) encodes the size of span i in number of tokens. Each component of gi is a span-specific feature that would be difficult to define and use in token-level models. 3.2 Step 2: Entity Mention Detection (EMD) In this step, we predict the entity type for each span. This prediction is done identically and parallelly for each span. For each span we compute a vector of entity type scores. The number of entity type scores computed is the number of entity types (including the NONE entity type). For each span, the softmax function is applied to its entity type scores to get a distribution over the entity types. For span i, scorener i = MLPner(gi) pner i = softmax(scorener i ) (1) The output size of MLPner and hence the size of pner i is equal to the number of NER classes. The predicted entity type for each span i is the entity type corresponding to span i’s highest entity type score i.e. max (scorener i ). Only spans whose predicted entity type is not NONE are selected for Step 3. Unlike token-level models, overlapping spans can be selected here as each span’s selection decision is independent of other spans. 3.3 Step 3: Relation Extraction (RE) In this paper, we only consider ordered binary relations, the most common setting of RE i.e. only relations between exactly two arguments and where the two pairs (span i, span j) and (span j, span i) are considered different. We consider every ordered pair of selected spans (from Step 2) such that both spans are from the same sentence. For each such pair (span i, span j), we first compute an ordered pair embedding r(i,j): ri,j = [gi, gj, gi ◦gj] 5311 where gi and gj are the span embeddings of the 1st and 2nd arguments respectively (from Step 1). gi ◦gj refers to their element-wise product. We use the ordered pair embedding ri,j to compute a vector of relation type scores. The number of relation type scores is the number of relation types (including the NONE relation type). For each ordered pair of spans, the softmax function is applied to its relation type scores to get a distribution over the relation types. For pair (span i, span j), scorere i,j = MLPre(ri,j) pre i,j = softmax(scorere i,j) (2) The output size of MLPre and hence the size of pre i,j is equal to the number of RE classes. 3.4 Loss Two learning signals are provided to train the model: entity type information per span and relation type information per ordered (selected) span pair. Both are provided via CrossEntropy Loss on Equations 1 and 2 respectively. We use ˆyner i to represent the correct entity type for span i and ˆyre i,j to represent the correct relation type for the ordered pair of spans, (span i, span j). S represents the set of all spans and S′ represents the set of all selected spans (Section 3.2). Then the final training loss is, loss = X i∈S pner i (ˆyner i ) + X i∈S′ X j∈S′,j̸=i pre i,j(ˆyre i,j) where the first term is a sum over all spans of the entity mention detection loss (eqn 1) and the second term is a sum over all ordered pairs of selected spans of the relation extraction loss (eqn 2). 4 Experiments Dataset We use the ACE2005 dataset (Doddington et al., 2004). It has 351 documents for train, 80 for validation and 80 for test. 
There are seven span-level entity types and six ordered span relation types. Character Embeddings The learned character embeddings are of size 8. 1-dimensional convolutions of window size 3,4,5 are applied per-token with 50 filters of each window size. This is followed by ReLU activation (Nair and Hinton, 2010) and max-pooling over each filter. Model Size Our stacked bi-LSTMs (Section 3.1) has 3 layers with 200-dimensional hidden states and highway connections. All Multi Layer Perceptrons (MLP) has two hidden layers with 500 dimensions, each followed by ReLU activation. Feature Encoding Each span gets a span width feature which is a learned 20-dimensional vector representing the number of tokens in that span. Span Pruning A high number of spans under consideration can lead to memory and speed issues. We only consider spans that are entirely within a sentence and limit spans to a max length of L = 10. This choice was based on our Train Set, see Section 5) for a discussion about it. Performance is not affected significantly as very few entity mentions have more than 10 tokens. Regularization Dropout (Srivastava et al., 2014) is applied with dropout rate 0.2 to all hidden layers of all MLPs and feature encodings, with dropout rate 0.5 to all word and character embeddings and with dropout rate 0.4 to all LSTM layer outputs. Learning Learning is done with Adam (Kingma and Ba, 2015) with default parameters. The learning rate is annealed by 1% every 100 iterations. Minibatch Size is 1. Early Stopping of 20 evaluations on the dev set is used. 5 Model Complexity Section 3.1 describes our span generation process and Section 4 describes our algorithmic span pruning process. The algorithmic span pruning process limits our model spans which are entirely within a single sentence and have a max length of L = 10 tokens. While our model creates representations for spans (instead of just tokens), it achieves the dual goals of being memory efficient and capturing most (more than 99.95%) entities and relations in the space of the spans considered. Table 2 shows the model complexity and entity/relation coverage for different policies of span generation on the Train Set of ACE2005. It shows numbers for policies ranging from one which considers all spans across the doc, to a policy that considers only single token spans. It shows that our chosen span generation policy (in bold) is far more memory efficient than a naive search over all possible spans in the input document. Yet our policy still considers more than 99.95% of all entities and relations. Our policy is linear in the document’s (sentence) length, not quadratic; because we limit our model to spans that are wholly in a single sentence and have a max length of L = 10 tokens. 5312 Entity Mention Detection Relation Extraction System P R F1 P R F1 (Li and Ji, 2014) 85.2 76.9 80.8 68.9 41.9 52.1 (Miwa and Bansal, 2016) 82.9 83.9 83.4 57 54.0 55.6 (Katiyar and Cardie, 2017) 84.0 81.3 82.6 57.9 54.0 55.9 (Sanh et al., 2018) EMD + RE 86.54 85.49 86.02 68.66 54.05 60.49 (Sanh et al., 2018) multi-task * 85.68 85.69 85.69 68.53 54.48 61.30 our model 85.85 86.10 85.98 68.02 58.38 62.83 Table 1: EMD and RE results on the ACE2005 Test dataset. Our model reports a new state-of-the-art RE performance. Sanh et al. (2018) present several results in their multi-task paper. Results marked with (*) are not fair comparisons here because they use additional signals beyond EMD and RE. Included here for completeness. 
Permitted Spans # Spans % Entities Covered % Relations Covered all spans across doc 45,836,252 100.00 100.00 only spans within single sentence 1,894,256 100.00 100.00 + max length L = 10 1,079,150 99.99 99.96 + max length L = 5 632,477 99.92 99.60 + max length L = 2 279,515 98.02 94.92 + max length L = 1 144,783 89.13 78.87 Table 2: Numbers are for the Train Set (351 docs) of ACE2005, where each Relation is between exactly two Entities. Dev and Test Sets follow the same trends. Each row is a different policy for span generation and our chosen policy is bolded. ”# Spans” is the number of spans considered by the policy. ”% Entities Covered” is the percentage of entities in the dataset that are considered by that policy. ”% Relations Covered” is the same thing for Relations (i.e. a Relation is covered if both entities of the Relation are covered). Note how our chosen policy is more than 40x more memory efficient than a policy which considers all spans in the doc. And yet, our method covers 99.99% and 99.96% of all Entities and Relations respectively in the Train Set of ACE2005. 6 Results Table 1 shows the results for RE. For the joint task, we compare entity mention detection performance and relation extraction performance. Our proposed model achieves a new SOTA on RE with a F1 of 62.83, more than 2.3 F1 above the previous SOTA. Our proposed model also beats a multitask model Sanh et al. (2018) which uses signals from additional tasks by more than 1.5 F1 points. For both tasks, our model’s Precision is close to and Recall is significantly higher than previous works. The Recall gains for RE (4.3 absolute points) are much higher than for EMD (0.6 absolute points). The gains in EMD Recall highlights the effectiveness of our span representations (Section 3.1). The disproportionate gains in RE Recall cannot be fully explained by the relatively lower gains in EMD Recall. Thus, our large gains in RE Recall (and F1) showcase the effectiveness of our simple modeling of ordered span pairs for relation extraction (Section 3.3). 7 Conclusions We present a neural span-level end-to-end model for joint entity mention detection and relation extraction. In contrast with existing token-level models: our model is able to use span-specific features, allows for overlapping entity mentions and does not use sequential decoding. Our proposed model achieves a new state-of-the-art RE performance on the ACE2005 dataset. The gains are driven by improvements in Recall for both tasks. Acknowledgements We would like to thank He He for her guidance in writing this paper and comments on multiple manuscripts. We would also like to thank the anonymous reviewers of ACL 2019, their comments and suggestions have helped us focus more on the important aspects of this paper. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by 5313 jointly learning to align and translate. CoRR, abs/1409.0473. Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Syst. Appl., 114:34–45. Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 724–731, Stroudsburg, PA, USA. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. 
Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC, volume 2, page 1. Radu Florian, Hongyan Jing, Nanda Kambhatla, and Imed Zitouni. 2006. Factorizing complex models: A case study in mention detection. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Comput. Linguist., 28(3):245–288. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917–928. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197. Association for Computational Linguistics. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402–412. Association for Computational Linguistics. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105–1116. Association for Computational Linguistics. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, pages 807–814, USA. Omnipress. Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 1396–1411, Stroudsburg, PA, USA. Association for Computational Linguistics. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1630–1642. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL ’09, pages 147–155, Stroudsburg, PA, USA. Association for Computational Linguistics. V. Sanh, T. 
Wolf, and S. Ruder. 2018. A hierarchical multi-task approach for learning embeddings from semantic tasks. Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 626–634. Association for Computational Linguistics. Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC ’13, pages 1– 6, New York, NY, USA. ACM. 5314 Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1201–1211, Stroudsburg, PA, USA. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929– 1958. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640–1649. Association for Computational Linguistics.
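To make the span-generation policy summarized in Table 2 above concrete (candidate spans are restricted to lie within a single sentence and are capped at a maximum length L), a minimal sketch is given below. The function name and the list-of-token-lists input format are illustrative assumptions, not taken from the paper.

```python
from typing import List, Tuple

def enumerate_spans(sentences: List[List[str]], max_len: int = 10) -> List[Tuple[int, int]]:
    """Enumerate candidate spans as (start, end) token offsets into the document.

    Only spans lying entirely within a single sentence and containing at most
    `max_len` tokens are generated, mirroring the bolded policy in Table 2.
    """
    spans = []
    offset = 0  # document-level index of the first token of the current sentence
    for sent in sentences:
        n = len(sent)
        for start in range(n):
            # `end` is exclusive, so the span length is end - start <= max_len
            for end in range(start + 1, min(start + max_len, n) + 1):
                spans.append((offset + start, offset + end))
        offset += n
    return spans

# Example: two short sentences, spans capped at length 2.
doc = [["John", "met", "Mary"], ["They", "talked"]]
print(enumerate_spans(doc, max_len=2))
```

Counting the output of such a routine over the training documents corresponds to the "# Spans" column of Table 2 for each policy.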
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5315–5325 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5315 Enhancing Unsupervised Generative Dependency Parser with Contextual Information Wenjuan Han, Yong Jiang and Kewei Tu∗ {hanwj,jiangyong,tukw}@shanghaitech.edu.cn School of Information Science and Technology ShanghaiTech University, Shanghai, China Abstract Most of the unsupervised dependency parsers are based on probabilistic generative models that learn the joint distribution of the given sentence and its parse. Probabilistic generative models usually explicit decompose the desired dependency tree into factorized grammar rules, which lack the global features of the entire sentence. In this paper, we propose a novel probabilistic model called discriminative neural dependency model with valence (D-NDMV) that generates a sentence and its parse from a continuous latent representation, which encodes global contextual information of the generated sentence. We propose two approaches to model the latent representation: the first deterministically summarizes the representation from the sentence and the second probabilistically models the representation conditioned on the sentence. Our approach can be regarded as a new type of autoencoder model to unsupervised dependency parsing that combines the benefits of both generative and discriminative techniques. In particular, our approach breaks the context-free independence assumption in previous generative approaches and therefore becomes more expressive. Our extensive experimental results on seventeen datasets from various sources show that our approach achieves competitive accuracy compared with both generative and discriminative state-of-the-art unsupervised dependency parsers. 1 Introduction Dependency parsing is a very important task in natural language processing. The dependency relations identified by dependency parsing convey syntactic information useful in subsequent applications such as semantic parsing, information extraction, and question answering. In this paper, we ∗Corresponding author focus on unsupervised dependency parsing, which aims to induce a dependency parser from training sentences without gold parse annotation. Most previous approaches to unsupervised dependency parsing are based on probabilistic generative models, for example, the Dependency Model with Valence (DMV) (Klein and Manning, 2004) and its extensions (Cohen and Smith, 2009; Headden III et al., 2009; Cohen and Smith, 2010; BergKirkpatrick et al., 2010; Gillenwater et al., 2010; Jiang et al., 2016). A disadvantage of such approaches comes from the context-freeness of dependency grammars, a strong independence assumption that limits the information available in determining how likely a dependency is between two words in a sentence. In DMV, the probability of a dependency is computed from only the head and child tokens, the dependency direction, and the number of dependencies already connected from the head token. Additional information used for computing dependency probabilities in later work is also limited to local morpho-syntactic features such as word forms, lemmas and categories (Berg-Kirkpatrick et al., 2010), which does not break the context-free assumption. 
More recently, researchers have started to utilize discriminative methods in unsupervised dependency parsing based on the idea of discriminative clustering (Grave and Elhadad, 2015), the CRFAE framework (Cai et al., 2017) or the neural variational transition-based parser (Li et al., 2019). By conditioning dependency prediction on the whole input sentence, discriminative methods are capable of utilizing not only local information, but also global and contextual information of a dependency in determining its strength. Specifically, both Grave and Elhadad (2015) and Cai et al. (2017) include in the feature set of a dependency the information of the tokens around the head or child token of the dependency. In this way, 5316 they break the context-free independence assumption because the same dependency would have different strength in different contexts. Besides, Li et al. (2019) propose a variational autoencoder approach based on Recurrent Neural Network Grammars. In this paper, we propose a novel approach to unsupervised dependency parsing in the middle between generative and discriminative approaches. Our approach is based on neural DMV (Jiang et al., 2016), an extension of DMV that employs a neural network to predict dependency probabilities. Unlike neural DMV, however, when computing the probability of a dependency, we rely on not only local information as in DMV, but also global and contextual information from a compressed representation of the input sentence produced by neural networks. In other words, instead of modeling the joint probability of the input sentence and its dependency parse as in a generative model, we model the conditional probability of the sentence and parse given global information of the sentence. Therefore, our approach breaks the context-free assumption in a similar way to discriminative approaches, while it is still able to utilize many previous techniques (e.g., initialization and regularization techniques) of generative approaches. Our approach can be seen as an autoencoder. The decoder is a conditional generative neural DMV that generates the sentence as well as its parse from a continuous representation that captures the global features of the sentence. To model such global information, we propose two types of encoders, one deterministically summarizes the sentence with a continuous vector while the other probabilistically models the continuous vector conditioned on the sentence. Since the neural DMV can act as a fully-fledged unsupervised dependency parser, the encoder can be seen as a supplementary module that injects contextual information into the neural DMV for contextspecific prediction of dependency probabilities. This is very different from the previous unsupervised parsing approach based on the autoencoder framework (Cai et al., 2017; Li et al., 2019), in which the encoder is a discriminative parser and the decoder is a generative model, both of which are required for performing unsupervised parsing. Our experiments verify that our approach achieves a comparable result with recent state-ofthe-art approaches on extensive datasets from various sources. 2 Related Work 2.1 Dependency Model with Valence The Dependency Model with Valence (DMV) (Klein and Manning, 2004) is an extension of an earlier dependency model (Carroll and Charniak, 1992) for grammar induction. Different from the earlier model, there are three types of probabilistic grammar rules in DMV, namely ROOT, CHILD and CHILD rules. 
To generate a token sequence and its corresponding dependency parse tree, the DMV model first generates a token c from the ROOT distribution p(c|root). Then the generation continues in a recursive procedure. At each generation step, it makes a decision as to whether a new token needs to be generated from the current head token h in the dir direction by sampling a STOP or CONTINUE symbol dec from the CHILD distribution p(dec|h, dir, val) where val is an indicator representing whether token h has already generated a token before. If dec is CONTINUE, a new token is generated from the CHILD distribution p(c|h, dir, val). If dec is STOP, then the generation process switches to a new direction or a new head token. DMV can be trained from an unannotated corpus using the expectation-maximization algorithm. 2.2 Neural DMV The DMV model is very effective in inducing syntactic dependency relations between tokens in a sentence. One limitation of DMV is that correlation between similar tokens (such as different verb POS tags) is not taken into account during learning and hence rules involving similar tokens have to be learned independently. Berg-Kirkpatrick et al. (2010) proposed a feature-based DMV model in which the grammar rule probabilities are computed by a log-linear model with manually designed features that reflect token similarity. Jiang et al. (2016) proposed the neural DMV model which learns token embeddings to better capture correlations between tokens and utilizes a neural network to calculate grammar rule probabilities from the embeddings. Both approaches significantly outperform the original DMV. However, because of the strong independence assumption in such generative models, they can only utilize local information of a grammar rule (e.g., the head and 5317 child tokens, direction, and valence) when computing its probability. 3 Discriminative Neural DMV We extend the neural DMV such that when predicting the probability of a grammar rule in parsing a sentence, the model incorporates not only local information of the rule but also global information of the sentence. Specifically, we model each grammar rule probability conditioned on a continuous vector. We therefore call our model the discriminative neural DMV (D-NDMV). In this way, the probability of a dependency rule becomes sensitive to the input sentence, which breaks the context-free assumption in the neural DMV. Here, we provide two approaches to model this global continuous vector. 3.1 Deterministic Variant for D-NDMV Model Suppose we have a sentence (i.e., a word sequence) w, the corresponding POS tag sequence x, and the dependency parse z which is hidden in unsupervised parsing. DMV and its variants model the joint probability of the POS tag sequence and the parse P(x, z) and, because of the context-free assumption, factorize the probability based on the grammar rules used in the parse. In contrast, to the global features of the sentence, we model the conditional probability of the POS tag sequence and the parse given the sequence w: P(x, z|w). We assume conditional contextfreeness and factorize the conditional probability based on the grammar rules. PΘ(x, z|w) = Y r∈(x,z) p(r|w) (1) where r ranges over all the grammar rules used in the parse z of tag sequence x, Θ is the set of parameters to compute parameters of the distribution. 
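The generative story just described lends itself to a compact ancestral-sampling sketch. The three conditional distributions — ROOT, the STOP/CONTINUE decision, and CHILD — are assumed to be given as dictionaries keyed by the conditioning context; the dict-based interface, the token budget, and the depth-first traversal order are illustrative choices rather than details taken from the paper.

```python
import random

def sample_from(dist):
    """Draw a key from a dict mapping outcomes to probabilities."""
    outcomes, probs = zip(*dist.items())
    return random.choices(outcomes, weights=probs, k=1)[0]

def sample_dmv(p_root, p_continue, p_child, max_tokens=20):
    """Ancestral sampling from a DMV-style generative story (a sketch).

    p_root[c]                 : probability that token c is generated by ROOT
    p_continue[(h, dir, val)] : probability of CONTINUE (i.e. 1 - P(STOP))
    p_child[(h, dir, val)]    : dict over child tokens given head h, direction, valence
    `val` is 0 if head h has not yet generated a dependent in direction `dir`, else 1.
    """
    arcs = []
    budget = max_tokens  # guards against non-terminating parameter settings

    def expand(head):
        nonlocal budget
        for direction in ("left", "right"):
            val = 0
            while budget > 0 and random.random() < p_continue[(head, direction, val)]:
                child = sample_from(p_child[(head, direction, val)])
                arcs.append((head, child, direction))
                budget -= 1
                expand(child)
                val = 1

    root = sample_from(p_root)
    expand(root)
    return root, arcs
```

Replacing the fixed probability tables with the outputs of a neural network, as in the neural DMV, leaves this sampling procedure unchanged; only the way the rule probabilities are computed differs.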
Since one can reliably predict the POS tags x from the words w without considering the parse z (as most POS taggers do), to avoid degeneration of the model, we compute p(r|w) based on global information of w produced by a long short-term memory network (LSTM). Figure 1 shows the neural network structure for parametering p(chd|head, dir, val, w), the probabilities of CHILD rules given the input sentence w. The structure is similar to the one used in neural DMV except for using LSTM sentence encoder Inputs: Softmax Layer: Valence Head Tag … Outputs Wdir Wchd Hidden Layer: g = ReLU(Wdir[vval; vh; vw]) word 1 word 2 word 3 LSTM LSTM LSTM Embeddings: Sentence Representation: Concatenation: [vval; vh; vw] Softmax(Wchdg) vval vh Embeddings: Sequence:w vw Figure 1: The neural network structure for computing the probabilities of CHILD rules. to get the representation s from the sentence w. The embeddings of the head POS tag and valence are represented by vh and vval. The concatenation [vval; vh; s] is fed into a fully-connected layer with a direction-specific weight matrix Wdir and the ReLU activation function to produce the hidden layer g. All possible child POS tags are represented by the matrix Wchd. The i-th row of Wchd represents the output embedding of the i-th POS tag. We take the product of the hidden layer g and the child matrix Wchd and apply a softmax function to obtain the CHILD rule probabilities. ROOT and CHILD rule probabilities are computed in a similar way. Since the mapping from w to s is deterministic, we call it the deterministic variant of D-NDMV. To make the notations consistent with subsequent sections, we add an auxiliary random variable s to represent the global information of sentence w. The probabilistic distribution of s is defined as, PΦ(s|w) = δ(s −vw) (2) where Φ is the set of parameters of the LSTM neural network. Figure 2 (left) shows the directed graphical representation of this model. If we diminish the capacity of s (e.g., by shrinking its dimension), then our model gradually reduces to neural DMV. Parsing Given a deterministic variant with fixed parameters Φ, Θ. we can parse a sentence represented by POS tag sequence x and word sequence w 5318 w s Φ Θ x, z s Φ Θ x, z N N x Figure 2: Left: the illustration of the deterministic variant of D-NDMV as a directed graph. The deterministic variant models an autoencoder with PΦ(s|w)) as the encoder and PΘ(x, z|s) as the decoder. Right: the illustration of the variational variant of D-NDMV as a directed graph. We use dashed lines to denote the variational approximation qΦ(s|x) to the intractable posterior PΦ(s|x), and the solid lines to denote the generative model P(s)PΘ(x, z|s). by searching for a dependency tree z∗which has the highest probability p(x, z|w) among the set of valid parse trees Z(x). z∗= arg max z∈Z(x) PΘ,Φ(x, z|w) (3) Note that once we compute all the grammar rule probabilities based on w, our model becomes a standard DMV and therefore dynamic programming can be used to parse each sentence efficiently (Klein and Manning, 2004). Unsupervised Learning Objective Function: In a typical unsupervised dependency parsing setting, we are given a set of training sentences with POS tagging but without parse annotations. The objective function of learning deterministic variant is as follows. 
J(Θ, Φ) = 1 N N X i=1 log PΘ,Φ(x(i)|w(i)) (4) The log conditional likelihood is defined as: log PΘ,Φ(x|w) = log X z∈Z(x) PΘ,Φ(x, z|w) (5) We may replace summation with maximization so that it becomes the conditional Viterbi likelihood. Learning Algorithm: We optimize our objective function using the expectation-maximization (EM) algorithm. Specifically, the EM algorithm alternates between E-steps and M-steps to maximize a lower-bound of the objective function. For each training sentence, the lower bound is defined as: Q(q, Θ, Φ) = log PΘ,Φ(x|w) −KL(q(z)∥PΘ,Φ(z|x, w)) (6) where q(z) is an auxiliary distribution over the latent parse z. In the E-step, we fix Θ, Φ and maximize Q(q, Θ, Φ) with respect to q. The maximum is reached when the Kullback-Leibler divergence is zero, i.e., q(z) = PΘ,Φ(z|x, w) (7) Based on the optimal q, we compute the expected counts Eq(z)c(r, x, z) using the insideoutside algorithm, where c(r, x, z) is the number of times rule r is used in producing parse z of tag sequence x. In the M-step, we fix q and maximize Q(q, Θ, Φ) with respect to Θ, Φ. The lower bound now takes the following form: Q(Θ, Φ) = X r log p(r|w)Eq(z)c(r, x, z) −Constant (8) where r ranges over all the grammar rules and Constant is a constant value. The probabilities p(r|w, Θ, Φ) are computed by the neural networks and we can back-propagate the objective Q(Θ, Φ) into the parameters of the neural networks. We initialize the model either heuristically (Klein and Manning, 2004) or using a pre-trained unsupervised parser (Jiang et al., 2016); then we alternate between E-steps and M-steps until convergence. Note that if we require q(z) to be a delta function, then the algorithm becomes hard-EM, which computes the best parse of each training sentence in the E-step and set the expected count to 1 if the rule is used in the parse and 0 otherwise. It has been found that hard-EM outperforms EM in unsupervised dependency parsing (Spitkovsky et al., 2010; Tu and Honavar, 2012), so we use hard-EM in our experiments. 3.2 Variational Variant for D-NDMV Motivated by (Bowman et al., 2016), we propose to model the global representation s as drawing from a prior distribution, generally a standard 5319 Gaussian distribution. We also propose a variational posterior distribution qΦ(s|x) to approximate this prior distribution. In this way, we formalize it into a variational inference framework. We call this model variational variant and illustrate its graphical model in Figure 2 (right). It can be seen from Figure 2 (right) that the variational variant shares the same formulation of the encoder part with the variational autoencoder (VAE). Different from the vanilla VAE model with a simple multilayered feedforward neural network as the decoder, our decoder is a generative latent variable model with the structured hidden variable z. For the learning of the variational variant, we use the log likelihood as the objective function and optimize its lower bound. We show the derivation as followings: log PΦ,Θ(x) ≥−KL(qΦ(s|x)||p(s)) + EqΦ(s|x) log PΘ(x|s) (9) By performing the Monte Carlo method to estimate the expectation w.r.t. qΦ(s|x) and set the number of samples L to 1, we rewrite the second term as: EqΦ(s|x) log pΘ(x|s) ≃1 L L X l=1 log X z∈Z(x) pΘ(x, z|s(l)) = log X z∈Z(x) pΘ(x, z|s(1)) (10) where s(l) is estimated by the reparameterization trick (Kingma and Welling, 2014), which enables low gradient variances and stabilizes training. Because this formula is similar to Eq. 
5, we can follow the subsequent derivation of deterministic variant and learn the variational variant using EM. It is worth noting that different from deterministic variant, in M-step an additional KL divergence term in Eq. 9 should be optimized by back-propagation. 4 Experiments We tested our methods on seventeen treebanks from various sources. For each dataset, we compared with current state-of-the-art approaches on the specific dataset. 4.1 Dataset and Setup English Penn Treebank We conducted experiments on the Wall Street Journal corpus (WSJ) with section 2-21 for training, section 22 for validation and section 23 for testing. We trained our model with training sentences of length ≤10, tuned the hyer-parameters on validation sentences of length ≤10 the and evaluated on testing sentences of length ≤10 (WSJ10) and all sentences (WSJ). We reported the directed dependency accuracy (DDA) of the learned grammars on the test sentences. Universal Dependency Treebank Following the setup of Jiang et al. (2017); Li et al. (2019), we conducted experiments on selected eight languages from the Universal Dependency Treebank 1.4 (Nivre et al., 2016). We trained our model on training sentences of length ≤15 and report the DDA on testing sentences of length ≤15 and ≤ 40. Datasets from PASCAL Challenge on Grammar Induction We conducted experiments on corpora of eight languages from the PASCAL Challenge on Grammar Induction (Gelling et al., 2012). We trained our model with training sentences of length ≤10 and evaluated on testing sentences of length ≤10 and all sentences. Note that on the UD Treebanks and PASCAL datasets, we used the same hyper-parameters as in the WSJ experiments without further tuning. Setup Following previous work, we conducted experiments under the unlexicalized setting where a sentence is represented as a sequence of gold part-of-speech tags with punctuations removed. The embedding length was set to 10 for the head and child tokens and the valence. The sentence embedding length was also set to 10. We trained the neural networks using stochastic gradient descent with batch size 10 and learning rate 0.01. We used the change of the loss on the validation set as the stop criteria. For our methods in the WSJ experiments, we followed Han et al. (2017) and initialized our model using the pre-trained model of Naseem et al. (2010), which significantly increased the accuracy and decreased the variance. For the other experiments, we used a pre-trained NDMV model to initialize our method. We ran our model for 5 times and report the average DDA. 5320 METHODS WSJ10 WSJ Systems in Basic Setup DMV (Klein and Manning, 2004) 58.3 39.4 LN (Cohen et al., 2008) 59.4 40.5 Convex-MST (Grave and Elhadad, 2015) 60.8 48.6 Shared LN (Cohen and Smith, 2009) 61.3 41.4 Feature DMV (Berg-Kirkpatrick et al., 2010) 63.0 PR-S (Gillenwater et al., 2010) 64.3 53.3 E-DMV (Headden III et al., 2009) 65.0 TSG-DMV (Blunsom and Cohn, 2010) 65.9 53.1 UR-A E-DMV (Tu and Honavar, 2012) 71.4 57.0 CRFAE (Cai et al., 2017) 71.7 55.7 Neural E-DMV(Jiang et al., 2016) 72.5 57.6 HDP-DEP (Naseem et al., 2010) 73.8 NVTP (Li et al., 2019) 54.7 37.8 L-EVG* (Headden III et al., 2009) 68.8 LexTSG-DMV* (Blunsom and Cohn, 2010) 67.7 55.7 L-NDMV* (Han et al., 2017) 75.1 59.5 variational variant D-NDMV 75.5 60.4 deterministic variant D-NDMV 75.6 61.4 Systems with Additional Training Data (for reference) CS (Spitkovsky et al., 2013) 72.0 64.4 MaxEnc* (Le and Zuidema, 2015) 73.2 65.8 Table 1: Comparison on WSJ. 
∗: approaches with lexicalized information. 4.2 Results on English Penn Treebank In Table 1, we compared our method with a large number of previous approaches to unsupervised dependency parsing. Both variational variant and deterministic variant outperform recent approaches in the basic setup, which demonstrates the benefit of utilizing contextual information in dependency strength prediction. Deterministic variant has a slightly better parsing accuracy than variational variant but variational variant is more stable. The standard derivations of deterministic variant and variational variant are 0.530 and 0.402 respectively for 5 runs. 4.3 Results on Universal Dependency Treebank We compare our model with several state-of-theart models on the UD Treebanks and report the results in Table 2. We first compare our model with two generative models: NDMV and left corner DMV (LC-DMV) (Noji et al., 2016). The LC-DMV is the recent state-of-the-art generative approach on Universal Dependency Treebank. Our variational variant DNDMV outperforms the LC-DMV and the NDMV on average. Furthermore, we compare our model with current state-of-the-art discriminative models, the neural variational transition-based parser (NVTP) (Li et al., 2019) and Convex-MST (Grave and ElNO UP + UP NDMV LD DV VV NVTP CM Length ≤15 Basque 48.3 47.9 40.6 42.7 52.9 52.5 Dutch 44.1 35.5 42.1 43.0 39.6 43.4 French 59.5 52.1 59.0 61.7 59.9 61.6 German 56.2 51.9 56.4 58.5 57.5 54.4 Italian 72.7 73.1 59.6 63.5 59.7 73.2 Polish 72.7 66.2 70.5 75.8 57.1 66.7 Portuguese 34.4 70.5 68.8 69.1 52.7 60.7 Spanish 38.1 65.5 63.8 66.6 55.6 61.6 Average 53.3 57.8 57.6 60.1 54.4 59.3 Length ≤40 Basque 47.8 45.4 39.9 42.4 48.9 50.0 Dutch 35.6 34.1 42.4 43.7 42.5 45.3 French 38.1 48.6 57.2 58.5 55.4 62.0 German 50.4 50.5 54.5 52.9 54.2 51.4 Italian 63.6 71.1 60.2 61.3 55.7 69.1 Polish 62.8 63.7 66.7 73.0 51.7 63.4 Portuguese 49.0 67.2 64.7 65.7 45.3 57.1 Spanish 58.0 61.9 64.3 64.4 52.4 61.9 Average 50.7 55.3 56.2 57.7 50.8 57.5 Table 2: Comparison on Universal Dependency Treebank. No UP: Systems without universal linguistic prior. +UP: Systems with universal linguistic prior. LD: LC-DMV (Noji et al., 2016). DV: deterministic variant of D-NDMV. VV: variational variant of D-NDMV. NVTP: neural variational transition-based parser (Li et al., 2019). CM: Convex-MST. hadad, 2015). Note that current discriminative approaches usually rely on strong universal linguistic prior 1 to get better performance. So the comparisons may not be fair for our model. Despite this, we find that our model can achieve competitive accuracies compared with these approaches. 4.4 Results on Datasets from PASCAL Challenge We also perform experiments on the datasets from PASCAL Challenge (Gelling et al., 2012), which contains eight languages: Arabic, Basque, Czech, Danish, Dutch, Portuguese, Slovene and Swedish. We compare our approaches with NDMV (Jiang et al., 2016), Convex-MST (Grave and Elhadad, 2015) and CRFAE (Cai et al., 2017). NDMV and CRFAE are two state-of-the-art approaches on the PASCAL Challenge datasets. We show the directed dependency accuracy on the testing sentences no longer than 10 (Table 3) and on all the testing sentences (Table 4). It can be seen that on average our models outperform other state-of-the1Universal linguistic prior (UP) is a set of syntactic dependencies that are common in many languages (Naseem et al., 2010). 
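As a concrete reference point for the architecture in Figure 1 (Section 3.1), the CHILD-rule probability computation can be sketched as below. The sketch follows the textual description — an LSTM summary s of the word sequence w, the concatenation [v_val; v_h; s], a direction-specific ReLU layer W_dir, and a softmax over the output tag embeddings W_chd — but the class and parameter names, the hidden-layer size, and the use of the final LSTM state as s are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChildRuleScorer(nn.Module):
    """Sketch of the Figure 1 computation: p(child tag | head tag, dir, val, w)."""
    def __init__(self, n_words, n_tags, emb_dim=10, sent_dim=10, hidden_dim=32):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, emb_dim)
        self.tag_emb = nn.Embedding(n_tags, emb_dim)   # head tag embedding v_h
        self.val_emb = nn.Embedding(2, emb_dim)        # valence embedding v_val
        self.encoder = nn.LSTM(emb_dim, sent_dim, batch_first=True)
        self.W_dir = nn.ModuleDict({                   # direction-specific layer
            d: nn.Linear(2 * emb_dim + sent_dim, hidden_dim) for d in ("left", "right")
        })
        self.W_chd = nn.Linear(hidden_dim, n_tags, bias=False)  # output tag embeddings

    def forward(self, words, head_tag, valence, direction):
        # Sentence representation s taken from the final LSTM state over the words.
        _, (h_n, _) = self.encoder(self.word_emb(words))
        s = h_n[-1]
        x = torch.cat([self.val_emb(valence), self.tag_emb(head_tag), s], dim=-1)
        g = torch.relu(self.W_dir[direction](x))       # hidden layer g
        return torch.softmax(self.W_chd(g), dim=-1)    # CHILD rule probabilities

# Toy usage: a 4-word sentence, head tag id 2, valence 0, attaching to the right.
scorer = ChildRuleScorer(n_words=50, n_tags=5)
probs = scorer(torch.tensor([[7, 3, 21, 9]]), torch.tensor([2]), torch.tensor([0]), "right")
print(probs.shape)  # torch.Size([1, 5])
```

The remaining rule types are computed analogously, as noted in Section 3.1.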
5321 Arabic Basque Czech Danish Dutch Portuguese Slovene Swedish Average Approaches Without Using Universal Linguistic Prior E-DMV 38.4 41.5 45.5 52.4 37.0 40.9 35.2 52.6 42.9 Neural DMV 60.0 44.1 46.2 63.3 33.2 36.9 31.6 48.3 45.4 Convex-MST 55.2 29.4 36.5 49.3 35.5 43.2 27.5 30.2 38.3 CRF-AE 42.4 45.8 24.4 23.9 28.8 33.0 33.4 45.6 34.6 deterministic variant 54.4 44.8 55.2 58.9 37.2 40.1 35.2 50.3 47.0 variational variant 60.0 45.4 59.1 63.6 34.6 42.7 28.3 45.9 47.5 Approaches Using Universal Linguistic Prior Convex-MST 39.0 27.8 43.8 48.1 35.9 55.6 62.6 49.6 45.3 CRF-AE 39.2 33.9 45.1 44.5 42.2 61.9 41.9 66.0 46.8 Table 3: DDA on testing sentences no longer than 10 on eight additional languages from PASCAL Challenge. Arabic Basque Czech Danish Dutch Portuguese Slovene Swedish Average Approaches Without Using Universal Linguistic Prior E-DMV 27.4 33.8 37.4 44.9 24.7 34.8 23.2 40.2 33.3 Neural DMV 30.9 37.7 38.1 53.3 22.9 30.7 19.9 33.9 33.4 Convex-MST 47.7 30.5 33.4 44.2 28.3 35.9 18.1 29.2 33.4 CRF-AE 29.9 39.1 20.3 18.6 17.8 32.6 28.0 37.0 27.9 deterministic variant 38.2 38.8 47.3 47.3 24.7 34.1 23.2 40.1 36.7 variational variant 33.9 41.2 48.4 54.7 25.3 35.8 28.1 40.5 38.5 Approaches Using Universal Linguistic Prior Convex-MST 34.2 24.9 39.0 36.3 35.2 46.0 51.7 39.6 38.3 CRF-AE 37.2 30.3 36.4 33.2 38.3 52.4 29.2 47.1 38.2 Table 4: DDA on all the testing sentences on eight additional languages from PASCAL Challenge. art approaches including those utilizing the universal linguistic prior. 5 Analysis In this section, we studies what information is captured in the sentence embeddings and the some configurations that are sensitive to our model. Here we use deterministic variant of D-NDMV to conduct the following analysis. deterministic variant of D-NDMV performs similar to deterministic variant of D-NDMV. 5.1 Rule Probabilities in Different Sentences The motivation behind D-NDMV is to break the independence assumption and utilize global information in predicting grammar rule probabilities. Here we conduct a few case studies of what information is captured in the sentence embedding and how it influences grammar rule probabilities. We train a D-NDMV on WSJ and extract all the embeddings of the training sentences. We then focus on the following two sentences: “What ’s next” and “He has n’t been able to replace the M’Bow cabal”. We now examine the dependency rule probability of VBZ generating JJ to the right with valence 0 in these two sentences (illustrated in Figure 3). In the first sentence, this rule is used in the gold parse (“’s” is the head of “next”); but VBZ … has n’t been able … What ’s next JJ RB VBZ JJ VBN WP 0.0798 0.0699 Figure 3: Rule probabilities predicted by D-NDMV given the two example sentences in the second sentence, this rule is not used in the gold parse (the head of “able” is “been” instead of “has”). We observe that the rule probability redicted by D-NDMV given the first sentence is indeed significantly larger than that given the second sentence, which demonstrates the positive impact of conditioning rule probability prediction on the sentence embedding. To obtain a more holistic view of how rule probabilities change in different sentences, we collect the probabilities of a particular rule (“IN” generating “CD” to the right with valence 1) predicted by our model for all the sentences of WSJ. Figure 4 shows two distributions over the rule probability when the rule is used in the gold parse vs. 
when the rule is applicable to parsing the sentence but is not used in the gold parse. It can be seen that when 5322 Frequency 0 0.1 0.2 0.3 0.4 Rule Probability 0.1 - 0.15 0.15 - 0.2 0.2 - 0.25 0.25 - 0.3 0.3 - 0.35 0.35 - 0.4 0.4 - 0.45 0.45 - 0.5 0.5 - 0.55 0.55 - 0.6 0.6 - 0.65 0.65 - 0.7 0.7 - 0.75 0.75 - 0.8 Not In Gold Parse In Gold Parse Figure 4: Comparison of the distributions over the rule probability when the rule appears vs. does not appear in the gold parse. AVERAGE PROBABILITY D-NDMV E-DMV All 0.107 0.094 In gold parse 0.253 0.219 Not in gold parse 0.097 0.085 Table 5: Comparison of the average probabilities in DNDMV and E-DMV when the rule is used and not used in the gold parse. the rule appears in the gold parse, its probability is clearly boosted in our model. Finally, for every sentence of WSJ, we collect the probabilities predicted by our model for all the rules that are applicable to parsing the sentence. We then calculate the average probability 1) when the rule is used in the gold parse, 2) when the rule is not used in the gold parse, and 3) regardless of whether the rule is used in the gold parse or not. We use the E-DMV model as the baseline in which rule probabilities do not change with sentences. The results are shown in Table 5. We observe that compared with the E-DMV baseline, the rule probabilities predicted by our model are increased by 14.0% on average, probably because our model assigns higher probabilities to rules applicable to parsing the input sentence than to rules not applicable (e.g., if the head or child of the rule does not appear in the sentence). The increase of the average probability when the rule is used in the gold parse (15.7%) is higher than when the rule is not used in the gold parse (13.7%), which again demonstrates the advantage of our model. 5.2 Choice of Sentence Encoder Besides LSTM, there are a few other methods of producing the sentence representation. Table 6 compares the experimental results of these methods. The bag-of-tags method simply computes the average of all the POS tag embeddings and has the lowest accuracy, showing that the word order is inSENTENCE ENCODER DDA Bag-of-Tags Method 74.1 Anchored Words Method 75.1 LSTM 75.9 Attention-Based LSTM 75.5 Bi-LSTM 74.2 Table 6: Comparison of different sentence encoders in D-NDMV. formative for sentence encoding in D-NDMV. The anchored words method replaces the POS tag embddings used in the neural network of the neural DMV with the corresponding hidden vectors produced by a LSTM on top of the input sentence, which leads to better accuracy than bag-of-tags but is still worse than LSTM. Replacing LSTM with Bi-LSTM or attention-based LSTM also does not lead to better performance, probably because these models are more powerful and hence more likely to result in degeneration and overfitting. 5.3 Impact of Genres All the sentences in WSJ come from newswire, which conform to very similar syntactic styles. Here we study whether our method can capture different syntactic styles by learning our method from Chinese Treebank 9.0 (2005) which contains sentences of two different genres: the informal genre and the formal genre. The experimental setup is the same as that in section 4. We pick the rule of “CD” (number) generating “AD” (adverb) to the left with valence 0 and collect the rule probability in sentences from the two genres.In informal sentences our model assigns smaller probabilities to the rule than in formal sentences. 
This may reflect the fact that formal texts are more precise than informal text when presenting numbers, which is captured by our model2. 5.4 Impact of Sentence Embedding Dimension The dimension of sentence embeddings in our model is an important hyper-parameter. If the dimension is too large, the sentence embedding may capture too much information of the sentence and hence the model is very likely to degenerate or overfit as discussed in section 3.1. If the dimension is too small, the model loses the benefit of sentence information and becomes similar to neural DMV. As Figure 5 illustrates, dimension 10 leads to the best parsing accuracy, while dimen2More details can be found in the supplementary materials. 5323 -23 -21 -19 -17 -15 74 74.5 75 75.5 76 Sentence Embedding Dimension 5 10 20 DDA Training Loss Validation Loss Figure 5: Impact of the sentence embedding dimension on both the testing set parsing accuracy and the average conditional log Viterbi likelihood (w.r.t. loss) of the training set and the validation set. sion 20 produces lower parsing accuracy probably because of a combination of degeneration and overfitting. The conditional log Viterbi likelihood curves on the training set and the validation set in Figure 5 confirm that overfitting indeed occur with dimension 20. 6 Conclusion We propose D-NDMV, a novel unsupervised parser with characteristics from both generative and discriminative approaches to unsupervised parsing. D-NDMV extends neural DMV by parsing a sentence using grammar rule probabilities that are computed based on global information of the sentence. In this way, D-NDMV breaks the context-free independence assumption in generative dependency grammars and is therefore more expressive. Our extensive experimental results show that our approach achieves competitive accuracy compared with state-of-the-art parsers. Acknowledgments This work was supported by the Major Program of Science and Technology Commission Shanghai Municipal (17JC1404102). References Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 582–590. Association for Computational Linguistics. Phil Blunsom and Trevor Cohn. 2010. Unsupervised induction of tree substitution grammars for dependency parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1204–1213. Association for Computational Linguistics. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. CoNLL 2016, page 10. Jiong Cai, Yong Jiang, and Kewei Tu. 2017. Crf autoencoder for unsupervised dependency parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1638–1643. Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. Department of Computer Science, Univ. Shay B Cohen, Kevin Gimpel, and Noah A Smith. 2008. Logistic normal priors for unsupervised probabilistic grammar induction. In Advances in Neural Information Processing Systems, pages 321–328. Shay B Cohen and Noah A Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. 
In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 74–82. Association for Computational Linguistics. Shay B Cohen and Noah A Smith. 2010. Covariance in unsupervised learning of probabilistic grammars. The Journal of Machine Learning Research, 11:3017–3051. Douwe Gelling, Trevor Cohn, Phil Blunsom, and Joao Grac¸a. 2012. The pascal challenge on grammar induction. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 64–80. Association for Computational Linguistics. Jennifer Gillenwater, Kuzman Ganchev, Joao Grac¸a, Fernando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of the ACL 2010 Conference Short Papers, pages 194– 199. Association for Computational Linguistics. Edouard Grave and No´emie Elhadad. 2015. A convex and feature-rich discriminative approach to dependency grammar induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1375–1384. Wenjuan Han, Yong Jiang, and Kewei Tu. 2017. Dependency grammar induction with neural lexicalization and big training data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1683–1688. 5324 William P Headden III, Mark Johnson, and David McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 101–109. Association for Computational Linguistics. Yong Jiang, Wenjuan Han, and Kewei Tu. 2016. Unsupervised neural dependency parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 763–771, Austin, Texas. Association for Computational Linguistics. Yong Jiang, Wenjuan Han, and Kewei Tu. 2017. Combining generative and discriminative approaches to unsupervised dependency parsing via dual decomposition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1689–1694, Copenhagen, Denmark. Association for Computational Linguistics. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR. Dan Klein and Christopher D. Manning. 2004. Corpusbased induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. Phong Le and Willem Zuidema. 2015. Unsupervised dependency parsing: Let’s use supervised parsers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 651–661. Bowen Li, Jianpeng Cheng, Yang Liu, and Frank Keller. 2019. Dependency grammar induction with a neural variational transition-based parser. In AAAI. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(2579-2605):85. Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1234–1244. 
Association for Computational Linguistics. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan T McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In LREC. Hiroshi Noji, Yusuke Miyao, and Mark Johnson. 2016. Using left-corner parsing to encode universal structural constraints in grammar induction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 33–43. Valentin I Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking out of local optima with count transforms and model recombination: A study in grammar induction. In EMNLP, pages 1983– 1995. Valentin I Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D Manning. 2010. Viterbi training improves unsupervised dependency parsing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 9–17. Association for Computational Linguistics. Kewei Tu and Vasant Honavar. 2012. Unambiguity regularization for unsupervised learning of probabilistic grammars. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1324–1334. Association for Computational Linguistics. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. A Impact of Genres All the sentences in WSJ come from newswire, which conform to very similar syntactic styles. Here we study whether our method can capture different syntactic styles by learning our method from Chinese Treebank 9.0 (2005) which contains sentences of two different genres: the informal genre (chat messages and transcribed conversational telephone speech) and the formal genre (newswire, broadcast and so on). The experimental setup is the same as that in section 4. We extract the embeddings of the training sentences from the learned model and map them onto a 3D space via the t-SNE algorithm (Van der Maaten and Hinton, 2008) (Figure 6). It can be seen that although the two types of sentences are mixed together overall, many regions are clearly dominated by one type or the other. This verifies that sentence embeddings learned by our approach can capture some genre information. We pick the rule of “CD” (number) generating “AD” (adverb) to the left with valence 0 and illustrate the distributions of the rule probability in sentences from the two genres in Figure 7. It can be seen that in informal sentences our model assigns 5325 What ’s next He has n’t been able to replace the M’Bow cabal WP VBZ JJ PRP VBZ RB VBN JJ TO VB DT NNP NN The government is nervous. I was shaking the whole time. DT NN VBZ JJ. PRP VBD VBG DT JJ NN. Both were right. But says Mr. Bock It was a close call. DT VBD JJ. CC VBZ NNP NNP PRP VBD DT JJ NN. That is n’t easy. Then there ’ll be another swing. DT VBZ RB JJ. RB EX MD VB DT NN. The IRA portion of the Packwood-Roth plan is irresponsible. He ’s totally geared to a punitive position. DT NNP NN IN DT NNP NN VBZ JJ. PRP VBZ RB VBN TO DT JJ NN. These figures are n’t seasonally adjusted. Her sister Cynthia wishes Toni had a different job. DT NNS VBP RB RB JJ. PRP$ NN NNP VBZ NNP VBD DT JJ NN. Table 7: Sentences closest to the two example sentences in terms of the L2 distance between their learned embeddings. 
Both the word sequence and the POS tag sequence are shown for each sentence. Figure 6: 3D visualization of the learned sentence embeddings from CTB. Orange dots denote informal sentences and blue dots denote formal sentences. smaller probabilities to the rule than in formal sentences. This may reflect the fact that formal texts are more precise than informal text when presenting numbers, which is captured by our model. B Nearby Sentences in Embedding Space We train a our method on WSJ and extract all the embeddings of the training sentences. We then focus on the following two sentences: “What ’s next” and “He has n’t been able to replace the M’Bow cabal”. Table 7 shows the two sentences as well as a few other sentences closest to them measured by the L2 distance between their embeddings. It can be seen that most sentences close to the first sentence contain a copula followed by a predicative adjective, while most sentences close to the second sentence end with a noun phrase where the noun has a preceding modifier. These two examples show that the sentence embeddings learned Frequency 0 0.175 0.35 0.525 0.7 Rule Probability 0.0 - 0.1 0.1 - 0.2 0.2 - 0.3 0.3 - 0.4 0.4 - 0.5 0.5 - 0.6 0.6 - 0.7 0.7 - 0.8 0.8 - 0.9 0.9 - 1.0 Informal Genre Formal Genre Figure 7: Comparison of the distributions over the rule probability in sentences from the two genres. by our approach encode syntactic information that can be useful in parsing.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5326 Neural Architectures for Nested NER through Linearization Jana Strakov´a and Milan Straka and Jan Hajiˇc Charles University Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics {strakova,straka,hajic}@ufal.mff.cuni.cz Abstract We propose two neural network architectures for nested named entity recognition (NER), a setting in which named entities may overlap and also be labeled with more than one label. We encode the nested labels using a linearized scheme. In our first proposed approach, the nested labels are modeled as multilabels corresponding to the Cartesian product of the nested labels in a standard LSTM-CRF architecture. In the second one, the nested NER is viewed as a sequence-to-sequence problem, in which the input sequence consists of the tokens and output sequence of the labels, using hard attention on the word whose label is being predicted. The proposed methods outperform the nested NER state of the art on four corpora: ACE-2004, ACE-2005, GENIA and Czech CNEC. We also enrich our architectures with the recently published contextual embeddings: ELMo, BERT and Flair, reaching further improvements for the four nested entity corpora. In addition, we report flat NER stateof-the-art results for CoNLL-2002 Dutch and Spanish and for CoNLL-2003 English. 1 Introduction In nested named entity recognition, entities can be overlapping and labeled with more than one label such as in the example “The Florida Supreme Court” containing two overlapping named entities “The Florida Supreme Court” and “Florida”.1 Recent publications on nested named entity recognition involve stacked LSTM-CRF NE recognizer (Ju et al., 2018), or a construction of a special structure that explicitly captures the nested entities, such as a constituency graph (Finkel and Manning, 2009) or various modifications of a directed hypergraph (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018). 1Example from ACE-2004 (Doddington et al., 2004), https://catalog.ldc.upenn.edu/LDC2005T09. We propose two completely neural network architectures for nested nested named entity recognition which do not explicitly build or model any structure and infer the relationships between nested NEs implicitly: • In the first model, we concatenate the nested entity multiple labels into one multilabel, which is then predicted with a standard LSTM-CRF (Lample et al., 2016) model. The advantages of this model are simplicity and effectiveness, because an already existing NE pipeline can be reused to model the nested entities. The obvious disadvantage is a large growth of NE classes. • In the second model, the nested entities are encoded in a sequence and then the task can be viewed as a sequence-to-sequence (seq2seq) task, in which the input sequence are the tokens (forms) and the output sequence are the labels. The decoder predicts labels for each token, until a special label "<eow>" (end of word) is predicted and the decoder moves to the next token. The expressiveness of the models depends on a non-ambiguous encoding of the nested entity structure. We use an enhanced BILOU scheme described in Section 4.1. The proposed models surpass the current nested NER state of the art on four nested entity corpora: ACE-2004, ACE-2005, GENIA and Czech CNEC. 
When the recently introduced contextual embeddings – ELMo (Peters et al., 2018), BERT (Devlin et al., 2018) and Flair (Akbik et al., 2018) – are added to the architecture, we reach further improvements for the above mentioned nested entity corpora and also exceed current state of the art for CoNLL-2002 Dutch and Spanish and for CoNLL-2003 English. 5327 2 Related Work Finkel and Manning (2009) explicitly model the nested structure as a syntactic constituency tree. Ju et al. (2018) run a stacked LSTM-CRF NE recognizer as long as at least one nested entity is predicted, from innermost to outermost entities. Wang and Lu (2018) build a hypergraph to capture all possible entity mentions in a sentence. Katiyar and Cardie (2018) model nested entities as a directed hypergraph similar to Lu and Roth (2015), using RNNs to model the edge probabilities. Our proposed architectures are different from these works because they do not explicitly build any structure to model the nested entities. The nested entity structure is instead encoded as a sequence of labels, and the artificial neural network is supposed to model the structural relationships between the named entities implicitly. A sequence-to-sequence architecture similar to one of our approaches is used by (Liu and Zhang, 2017) to predict the hierarchy of constituents in order to extract lookahead features for a shift-reduce constituency parser. 3 Datasets We evaluate our results on four nested NE corpora: • English ACE-2004, (Doddington et al., 2004)2. We reuse the train/dev/test split used by most previous authors (Lu and Roth, 2015; Muis and Lu, 2017; Wang and Lu, 2018). • English ACE-20053. Again, we use the train/dev/test split by Lu and Roth (2015); Muis and Lu (2017); Wang and Lu (2018). • English GENIA (Kim et al., 2003). We use the 90%/10% train/test split used by previous authors (Finkel and Manning, 2009; Lu and Roth, 2015; Muis and Lu, 2017; Wang and Lu, 2018). • Czech CNEC – Czech Named Entity Corpus 1.0. As previous authors (Strakov´a et al., 2016), we predict the 42 fine-grained NE types and 4 containers from the first annotation round. We evaluate flat NER on these four languages: CoNLL-2003 English and German 2https://catalog.ldc.upenn.edu/ LDC2005T09 3https://catalog.ldc.upenn.edu/ LDC2006T06 (Tjong Kim Sang and De Meulder, 2003) and CoNLL-2002 Dutch and Spanish (Tjong Kim Sang, 2002). In all cases, we use the train portion of the data for training and the development portion for hyperparameter tuning, and we report our final results on models trained on concatenated train+dev portions and evaluated on the test portion, following e.g. (Ratinov and Roth, 2009; Lample et al., 2016). Our evaluation is a strict one: each entity mention is considered correct only when both the span and class are correct. 4 Methods 4.1 Nested NE BILOU Encoding Our goal is to encode the nested entity structure into a CoNLL-like, per-token BILOU encoding,4 as in the following example for sentence “in the US Federal District Court of New Mexico .”: in O the B-ORG US I-ORG|U-GPE Federal I-ORG District I-ORG|U-GPE Court I-ORG of I-ORG New I-ORG|B-GPE Mexico L-ORG|L-GPE . O The mapping from tokens to multilabels is defined by the two following rules: (1) entity mentions starting earlier have priority over entities starting later, and (2) for mentions with the same beginning, longer entity mentions have priority over shorter ones. A multilabel for a word is then a concatenation of all intersecting entity mentions, from the highest priority to the lowest. 
Another, more formalized look at the BILOU encoding is that it is a BILOU encoding of an unfolded directed hypergraph similar to Katiyar and Cardie (2018), in which the shared entity labels are not collapsed and the O is used only for tokens outside any entity mention. We use a trivial heuristic during decoding, matching labels of consecutive words by order only. Therefore, an I- or L- label is merged with a preceding B- or I- if they appear on the same position in neighboring multilabels and have the same type. 4B- (beginning), I- (inside), U- (unit-length entity), L(last) or O (outside) labels (Ratinov and Roth, 2009). 5328 4.2 Neural Models for Nested NER Both our models are encoder-decoder architectures: LSTM-CRF: The encoder is a bi-directional LSTM and the decoder is a CRF (Lample et al., 2016), modeling multilabels from Section 4.1. Sequence-to-sequence (seq2seq): The encoder is a bi-directional LSTM and the decoder is a LSTM. The tokens are viewed as the input sequence, and the encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use a hard attention on the word whose label(s) is being predicted, and predict labels for a word from highest to lowest priority as defined in Section 4.1. We train the network using the lazy variant of the Adam optimizer (Kingma and Ba, 2014), which only updates accumulators for variables that appear in the current batch,5 with parameters β1 = 0.9 and β2 = 0.98. We use mini-batches of size 8. As a regularization, we apply dropout with rate 0.5 and the word dropout replaces 20% of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search. In our baseline versions, we use the following word- and character-level word embeddings: • pretrained word embeddings: For English, we train our own word embeddings of dimension 300 with word2vec6 on the English Gigaword Fifth Edition.7 For other languages (German, Dutch, Spanish and Czech) we use the FastText word embeddings (Bojanowski et al., 2017).8 • end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot). • character-level word embeddings: We use bidirectional GRUs (Cho et al., 2014; Graves and Schmidhuber, 2005) of dimension 128 in line with Ling et al. (2015): we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs 5tf.contrib.opt.lazyadamoptimizer from www.tensorflow.org 6Skip-gram, for tokens with at least 10 occurrences, window = 5, dimension = 300, negative sampling = 5. 7https://catalog.ldc.upenn.edu/ LDC2011T07 8https://fasttext.cc/docs/en/ crawl-vectors.html for forward and reversed word characters. We further add contextual word embeddings to our baselines: • +ELMo (Peters et al., 2018): pretrained contextual word embeddings of dimension 512 for English. • +BERT (Devlin et al., 2018): pretrained contextual word embeddings of dimension 1024 for English9 and 768 for other languages10. For each token, we generate the contextual word embedding by averaging all BERT subword embeddings in the last four layers (Devlin et al., 2018) without finetuning. • +Flair (Akbik et al., 2018): pretrained contextual word embeddings of dimension 4096 for all languages except Spanish.11 We use the implementation provided by Akbik et al. (2018) to generate the Flair and ELMo word embeddings.12 We do not use any hand-crafted classification features in any of our models. 
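As a concrete companion to the encoding rules of Section 4.1, the sketch below converts a set of possibly nested mention spans into per-token multilabels and reproduces the running example; the function and variable names are illustrative, not the authors' code.

```python
def encode_multilabels(n_tokens, mentions):
    """Encode possibly nested entity mentions as per-token multilabels.

    `mentions` is a list of (start, end, type) with `end` exclusive.  Mentions
    are ordered by the two priority rules of Section 4.1 (earlier start first,
    then longer first); each token's multilabel concatenates the BILOU tags of
    all mentions covering it, highest priority first, joined by "|".
    """
    ordered = sorted(mentions, key=lambda m: (m[0], -(m[1] - m[0])))
    labels = [[] for _ in range(n_tokens)]
    for start, end, etype in ordered:
        if end - start == 1:
            labels[start].append(f"U-{etype}")
            continue
        labels[start].append(f"B-{etype}")
        for i in range(start + 1, end - 1):
            labels[i].append(f"I-{etype}")
        labels[end - 1].append(f"L-{etype}")
    return ["|".join(l) if l else "O" for l in labels]

# Mention offsets taken from the running example in Section 4.1.
tokens = ["in", "the", "US", "Federal", "District", "Court", "of", "New", "Mexico", "."]
mentions = [(1, 9, "ORG"), (2, 3, "GPE"), (4, 5, "GPE"), (7, 9, "GPE")]
print(list(zip(tokens, encode_multilabels(len(tokens), mentions))))
```

Decoding reverses this mapping with the order-matching heuristic described above: an I- or L- label is merged with a preceding B- or I- label at the same multilabel position when the types agree.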
5 Results Table 1 shows the F1 score for the nested NER and Table 2 shows the F1 score for the flat NER. When comparing the results for the nested NER in the baseline models (without the contextual word embeddings) to the previous results in literature, we see that LSTM-CRF reaches comparable, but suboptimal results in three out of four nested NE corpora, while seq2seq clearly outperforms all the known methods by a wide margin. We hypothesize that seq2seq, although more complex (the system must predict multiple labels per token, including the special label "<eow>"), is more suitable for more complex corpora. The gain is most visible in ACE-2004 and ACE-2005, which contain extremely long named entities and the level of “nestedness” is greater than in the other nested corpora. According to Wang and Lu (2018), 39% of train sentences contain overlapping mentions in ACE-2004, as opposed to 22% of train sentences with overlapping mentions in GENIA. With shorter and less overlapping entities, such as in GENIA, and ultimately in flat 9BERT-Large Uncased from https://github.com/ google-research/bert 10BERT-Base Multilingual Uncased from https:// github.com/google-research/bert 11Not yet available in December 2018. 12https://github.com/zalandoresearch/ flair 5329 model ACE-2004 ACE-2005 GENIA CNEC 1.0 (Finkel and Manning, 2009)** – – 70.3 – (Lu and Roth, 2015)** 62.8 62.5 70.3 – (Muis and Lu, 2017)** 64.5 63.1 70.8 – (Katiyar and Cardie, 2018) 72.70 70.5 73.6 – (Ju et al., 2018)* – 72.2 74.7 – (Wang and Lu, 2018) 75.1 74.5 75.1 – (Strakov´a et al., 2016) – – – 81.20 LSTM-CRF 72.26 71.62 76.23 80.28 LSTM-CRF+ELMo 78.72 78.36 75.94 – LSTM-CRF+BERT 81.48 79.95 77.80 85.67 LSTM-CRF+Flair 77.65 77.25 76.65 81.74 LSTM-CRF+BERT+ELMo 80.07 80.04 76.29 – LSTM-CRF+BERT+Flair 81.22 80.82 77.91 85.70 LSTM-CRF+ELMo+BERT+Flair 80.19 79.85 76.56 – seq2seq 77.08 75.36 76.44 82.96 seq2seq+ELMo 81.94 81.95 77.33 – seq2seq+BERT 84.33 83.42 78.20 86.73 seq2seq+Flair 81.38 79.83 76.63 83.55 seq2seq+BERT+ELMo 84.32 82.15 77.77 – seq2seq+BERT+Flair 84.40 84.33 78.31 86.88 seq2seq+ELMo+BERT+Flair 84.07 83.41 78.01 – Table 1: Nested NER results (F1) for ACE-2004, ACE-2005, GENIA and CNEC 1.0 (Czech) corpora. Bold indicates the best result, italics results above SoTA and gray background indicates the main contribution. * uses different data split in ACE-2005. ** non-neural model model English German Dutch Spanish (Gillick et al., 2016) 86.50 76.22 82.84 82.95 (Lample et al., 2016) 90.94 78.76 81.74 85.75 ELMo (Peters et al., 2018) 92.22 – – – Flair (Akbik et al., 2018) 93.09 88.32 – – BERT (Devlin et al., 2018) 92.80 – – – LSTM-CRF 90.72 79.89 87.42 86.34 LSTM-CRF+ELMo 92.58 – – – LSTM-CRF+BERT 92.94 84.53 92.48 88.77 LSTM-CRF+Flair 92.25 82.35 88.31 – LSTM-CRF+BERT+ELMo 92.93 – – – LSTM-CRF+BERT+Flair 93.22 84.44 92.69 – LSTM-CRF+ELMo+BERT+Flair 93.38 – – – seq2seq 90.77 79.09 87.59 86.04 seq2seq+ELMo 92.43 – – – seq2seq+BERT 92.98 84.19 92.46 88.81 seq2seq+Flair 91.87 82.68 88.67 – seq2seq+BERT+ELMo 92.99 – – – seq2seq+BERT+Flair 93.00 85.10 92.34 – seq2seq+ELMo+BERT+Flair 93.07 – – – Table 2: Flat NER results (F1) for CoNLL-2002 and CoNLL-2003. Bold indicates best result, italics results above SoTA. corpora, the simplicity of LSTM-CRF wins over seq2seq. 
We also report a substantial increase in the F1 score when recently published contextual embeddings (ELMo, BERT, Flair) are added as pretrained word embeddings on input (Peters et al., 2018; Devlin et al., 2018; Akbik et al., 2018) in all languages and corpora, although in the case of CoNLL-2003 German, our results stay behind those of Akbik et al. (2018). 6 Conclusions We presented two neural architectures for nested named entities and a simple encoding algorithm to allow the modeling of multiple NE labels in an enhanced BILOU scheme. The LSTM-CRF modeling of NE multilabels is better suited for putatively less-nested and flat corpora, while the sequence-to-sequence architecture captures more complex relationships between nested and complicated named entities and surpasses the current state of the art in nested NER on four nested NE 5330 corpora. We also report surpassing state-of-theart results with the recently published contextual word embeddings on both nested and flat NE corpora. Acknowledgements The work described herein has been supported by OP VVV VI LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project CZ.02.1.01/0.0/0.0/16 013/0001781) and it has been supported and has been using language resources developed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071). We would like to thank the reviewers for their insightful comments. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual String Embeddings for Sequence Labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics, 5:135–146. KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. CoRR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The Automatic Content Extraction (ACE) program-tasks, data, and evaluation. Proceedings of LREC, 2. Jenny Rose Finkel and Christopher D. Manning. 2009. Nested Named Entity Recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, EMNLP ’09, pages 141–150. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296–1306. Association for Computational Linguistics. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, pages 5–6. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459. Association for Computational Linguistics. Arzoo Katiyar and Claire Cardie. 2018. 
Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871. Association for Computational Linguistics. Jing-Dong Kim, Tomoto Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2003. GENIA corpus—A semantically annotated corpus for bio-textmining. Bioinformatics (Oxford, England), 19 Suppl 1:180–182. Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Wang Ling, Tiago Lu´ıs, Lu´ıs Marujo, Ram´on Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. CoRR. Jiangming Liu and Yue Zhang. 2017. Shift-Reduce Constituent Parsing with Neural Lookahead Features. Transactions of the Association for Computational Linguistics, 5:45–58. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867. Association for Computational Linguistics. Aldrian Obaja Muis and Wei Lu. 2017. Labeling Gaps Between Words: Recognizing Overlapping Mentions with Mention Separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618. Association for Computational Linguistics. 5331 Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design Challenges and Misconceptions in Named Entity Recognition. In CoNLL ’09: Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147–155. Association for Computational Linguistics. Jana Strakov´a, Milan Straka, and Jan Hajiˇc. 2016. Neural Networks for Featureless Named Entity Recognition in Czech. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, Brno , Czech Republic, September 12-16, 2016, Proceedings, pages 173–181. Springer International Publishing. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-independent Named Entity Recognition. In Proceedings of the 6th Conference on Natural Language Learning Volume 20, COLING-02, pages 1–4, Stroudsburg, PA, USA. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of CoNLL-2003, pages 142– 147. Edmonton, Canada. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–214. 
Association for Computational Linguistics.
2019
527
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5332–5337 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5332 Online Infix Probability Computation for Probabilistic Finite Automata Marco Cognetta, Yo-Sub Han, and Soon Chan Kwon∗ Department of Computer Science Yonsei University, Seoul, Republic of Korea [email protected], {emmous, soon--chan}@yonsei.ac.kr Abstract Probabilistic finite automata (PFAs) are common statistical language model in natural language and speech processing. A typical task for PFAs is to compute the probability of all strings that match a query pattern. An important special case of this problem is computing the probability of a string appearing as a prefix, suffix, or infix. These problems find use in many natural language processing tasks such word prediction and text error correction. Recently, we gave the first incremental algorithm to efficiently compute the infix probabilities of each prefix of a string (Cognetta et al., 2018). We develop an asymptotic improvement of that algorithm and solve the open problem of computing the infix probabilities of PFAs from streaming data, which is crucial when processing queries online and is the ultimate goal of the incremental approach. 1 Introduction Weighted automata are a popular weighted language model in natural language processing. They have found use across the discipline both alone (Mohri et al., 2002) and in conjunction with more complicated language models (Ghazvininejad et al., 2016; Velikovich et al., 2018). As such, finding efficient algorithms for weighted automata has become an intensely studied topic (Allauzen and Mohri, 2009; Argueta and Chiang, 2018). An important subclass of weighted automata are PFAs. Given a PFA, one important task is to calculate the probability of a phrase or pattern. Efficient algorithms exist for this problem when given a PFA or a probabilistic context-free grammar (PCFG) and a pattern that forms a regular language (Vidal et al., 2005a; Nederhof and Satta, 2011). One important special case of this problem ∗Now at Google Korea. is to compute the probability of all strings containing a given infix, which was first studied by Corazza et al. (1991). The problem was motivated by applications to phrase prediction and error correction. Several partial results were established with various restrictions on the statistical model or infix (Corazza et al., 1991; Fred, 2000; Nederhof and Satta, 2011). Later, Nederhof and Satta (2011) gave a general solution for PCFGs and proposed the problem of computing the infix probabilities of each prefix of a string incrementally—using the infix probability of w1w2 . . . wk to speed up the calculation for w1w2 . . . wkwk+1. Recently, we gave an algorithm for this problem when the language model is a PFA, and suggested an open problem of online incremental infix probability calculation—where one is given a stream of characters instead of knowing the entire input string ahead of time (Cognetta et al., 2018). The online problem is of special practical importance as it is a more realistic setting than the offline problem. Not only do many speech processing tasks need to be performed “on the fly”, but also many parsing algorithms can be improved by utilizing an online algorithm. 
For example, suppose one has calculated the infix probability of all prefixes of the phrase “...be or...”, and later wishes to extend that phrase to “...be or not to be...” and retrieve all of the new infix probabilities. Instead of restarting the computation from the beginning, which would lead to redundant computation, an online method can be used to simply start from where the initial algorithm left off. As another example, suppose we have the phrase “...United States of...”, and wish to extend it by a word while maximizing the resulting infix probability. An online algorithm can be used to try all extensions in the vocabulary before settling on “America”, whereas naively applying an offline algorithm would require repeatedly computing already known values. 5333 0.2 | 0.3 0.8 | 0.2 0.0 | 0.3 a, 0.4 | b, 0.2 b, 0.1 a, 0.7 b, 0.8 Figure 1: An example PFA. Each state has an initial and final probability, and each transition has a label and transition probability. We first revisit our original incremental infix probability algorithm from (Cognetta et al., 2018) and improve the algorithm based on a careful reanalysis of the dynamic programming recurrence. Then, we develop an algorithm for the online incremental infix problem and demonstrate the practical effectiveness of the two new algorithms on series of benchmark PFAs. 2 Preliminaries We assume that the reader is familiar with the definition and basic properties of automata theory. For a thorough overview of PFAs, we suggest (Vidal et al., 2005a,b). A PFA is specified by a tuple P = (Q, Σ, {M(c)}c∈Σ, I, F), where Q is a set of states and Σ is an alphabet. The set {M(c)}c∈Σ is a set of labeled |Q| × |Q| transition matrices—the element M(c)i,j is the probability of transitioning from state qi to qj reading character c. Likewise, I is a 1 × |Q| initial probability vector and F is a |Q| × 1 final probability vector. PFAs have some conditions on their structure. Specifically, P|Q| i=1 Ii = 1 and for each state qi, Fi + P c∈Σ, j∈[1,|Q|] M(c)i,j = 1. Finally, each state must be accessible and co-accessible. When these are met, a PFA describes a probability distribution over Σ∗. The probability of a string is given as P(w) = I Q|w| i=1 M(wi)  F. Let M(Σ) = P c∈Σ M(c). Then, we can find the infinite sum P∞ i=0 M(Σ)i = (1 −M(Σ))−1, where 1 is the identity matrix. We denote this matrix M(Σ∗) and note that IM(Σ∗)F = 1. b b a a a b a b 1 2 3 4 5 Figure 2: The KMP DFA for w = aabb. The KMP automaton of w is a DFA with |w|+1 states that accepts the language of strings ending with the first occurrence of w, and can be built in O(|w|) time (Knuth et al., 1977). By convention, the states of a KMP DFA are labeled from q1 to q|w|+1, with the transition between qi and qi+1 corresponding to wi. Figure 2 gives an example. 3 Incremental Infix Algorithm We now review the method described in (Cognetta et al., 2018). The algorithm is based on state elimination for DFAs (Book et al., 1971). Given a DFA, we add two new states q0 and qn+1, where q0 is connected by λ-transitions (λ is the empty string) to all initial states and all final states are connected to qn+1 by λ-transitions. We then perform a dynamic state elimination procedure to produce regular expressions αk i,j that describe the set of strings that, when read starting at state i, end at state j and never pass through a state with label higher than k. We use the recurrence αk i,j = αk−1 i,j +αk−1 i,k (αk−1 k,k )∗αk−1 k,j , with the base case α0 i,j being the transitions from qi to qj. 
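Before turning to the elimination procedure in detail, the quantities from Section 2 are straightforward to compute directly. The following NumPy sketch uses a small two-state PFA with invented numbers (not the one from Figure 1) and, besides P(w) and the sanity check I M(Σ*) F = 1, also shows the prefix probability P(wΣ*) = I M(w1)···M(wn) M(Σ*) F as an extra derived example:

import numpy as np

# Illustrative 2-state PFA over {a, b}; each row satisfies F_i + sum_{c,j} M(c)_{i,j} = 1.
I = np.array([1.0, 0.0])                          # initial probabilities
F = np.array([0.2, 0.5])                          # final probabilities
M = {"a": np.array([[0.3, 0.3], [0.1, 0.1]]),
     "b": np.array([[0.1, 0.1], [0.2, 0.1]])}

M_sigma = sum(M.values())                         # M(Σ)
M_star = np.linalg.inv(np.eye(2) - M_sigma)       # M(Σ*) = (1 − M(Σ))^{-1}

def string_prob(w):                               # P(w) = I M(w_1)···M(w_n) F
    v = I
    for c in w:
        v = v @ M[c]
    return v @ F

def prefix_prob(w):                               # P(wΣ*) = I M(w_1)···M(w_n) M(Σ*) F
    v = I
    for c in w:
        v = v @ M[c]
    return v @ M_star @ F

print(I @ M_star @ F)                             # ≈ 1.0: total mass over Σ*
print(string_prob("ab"), prefix_prob("ab"))

We now return to the state-elimination recurrence above.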
This method forms a regular expression stored in αn 0,n+1 that describes the same language as the input DFA. Furthermore, this regular expression is unambiguous in that there is at most one way to match a string in the language to the regular expression (Book et al., 1971). We then described a mapping from regular expressions to expressions of transition matrices of a PFA (Table 1) and proved that evaluating the matrix formed by the mapping gives the probability of all strings matching the regular expression (Cognetta et al., 2018). Regex Matrix Regex Matrix ∅ 0 R + S M(R) + M(S) λ 1 RS M(R)M(S) c M(c) R∗ (1 −M(R))−1 Table 1: A mapping from regular expressions to expressions of transition matrices. The basic idea behind the incremental algorithm is the following: the KMP DFA describes the infix language of the input string w. When performing the state elimination procedure, the term ak 0,k+1 is the regular expression for the infix language of w1w2 . . . wk. Further, the term ak+1 0,k+2 = αk 0,k+1(αk k+1,k+1)∗αk k+1,k+2 includes the term αk 0,k+1 and so the result from each iteration can be used in the next. The algorithm then performs state elimination while interpret5334 Algorithm 1 Incremental Infix 1: procedure INFIX(w = w1 . . . wn, PFA P) 2: D ←KMP DFA for w 3: T ←(n + 3) × (n + 3) table 4: T0,1, Tn+1,n+2 ←1 5: for (qi, c) ∈δ do 6: Ti,δ(qi,c) ←Ti,δ(qi,c) + M(c) 7: X ←1 ▷X holds αk 0,k+1. 8: for k ∈[1, n + 1] do 9: X ←X(1 −Tk,k)−1Tk,k+1 10: yield IXM(Σ∗)F ▷P(Σ∗w1 . . . wkΣ∗) 11: T ′ ←(n + 3) × (n + 3) table 12: for i ∈[0, n + 2]; j ∈[0, n + 2] do 13: T ′ i,j ←Ti,j + Ti,k(1 −Tk,k)−1Tk,j 14: T ←T ′ ing the terms αk i,j as matrices and outputs αk 0,k+1 at each step to retrieve the infix probability of w1w2 . . . wk. The algorithm based on this idea is given in Algorithm 1 and has a runtime of O(|w|3|QP|m). We note that this analysis is considering the alphabet to be constant sized. For the remainder of the paper, we deal with variable sized (but finite) alphabet sizes. Accounting for this, the true runtime is O(|Σ||w||QP|2+|w|3|QP|m)†, with the O(|Σ||w||QP|2) term coming from the initial table setup in Lines 5 to 6. 4 Asymptotic Speedup We now describe an asymptotic speedup for Algorithm 1 based on the following two lemmas. Lemma 1. Computing αn 0,n+1 only requires knowledge of the terms of the form αk i,j, where i, j ≥k + 1, or of the form ak 0,k+1. In other words, only the term αk 0,k+1 and the terms in the bottom right k × k sub-table of αk need to be considered at step k + 1. Lemma 2. Consider αk i,j where k + 1 ≤i < j. Then αk i,j = αk−1 i,j . Lemmas 1 and 2 imply that only O(|w| −k) = O(|w|) matrix multiplications/inversions need to be performed per iteration of Algorithm 1, leading to Theorem 3. Theorem 3. Algorithm 1 can be made to run in O(|Σ||w||QP|2 + |w|(|w||Q|m)) = O(|Σ||w||QP|2 + |w|2|Q|m) time when accounting for the preprocessing step. The new algorithm is faster than the previous known runtime of O(|Σ||w||QP|2+|w|3|Q|m). To †The constant m is such that n × n matrices can be multiplied or inverted in O(nm) time. In practice, m is often ≈2.807 (Strassen, 1969). implement this speed-up, we change the iteration range in Line 11 to of Algorithm 1 to be for i ∈ [k + 1, n + 2]; j ∈[k + 1, n + 2] and set T ′ i,j = Ti,j when j ≥k + 2. For the remaining O(k) values, we compute the term T ′ i,j = Ti,j +Ti,k(1− Tk,k)−1Tk,j as normal. 5 Online Incremental Infix Calculation We now consider the problem of determining the infix probabilities of strings given as a stream of characters. 
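The mapping in Table 1 can also be written as a small recursive evaluator over a regular-expression syntax tree. The sketch below is illustrative only (the tuple-based AST is our own convention, not part of the original implementation), and, as noted above, the evaluated matrix yields correct probabilities only when the regular expression is unambiguous:

import numpy as np

def to_matrix(regex, M, n):
    """Evaluate a regex AST into an n x n matrix via the Table 1 mapping.
    M maps each character to its |Q| x |Q| transition matrix, n = |Q|."""
    kind = regex[0]
    if kind == "empty":                                   # ∅ -> 0
        return np.zeros((n, n))
    if kind == "eps":                                     # λ -> identity
        return np.eye(n)
    if kind == "sym":                                     # c -> M(c)
        return M[regex[1]]
    if kind == "union":                                   # R + S -> M(R) + M(S)
        return to_matrix(regex[1], M, n) + to_matrix(regex[2], M, n)
    if kind == "concat":                                  # RS -> M(R) M(S)
        return to_matrix(regex[1], M, n) @ to_matrix(regex[2], M, n)
    if kind == "star":                                    # R* -> (1 − M(R))^{-1}
        return np.linalg.inv(np.eye(n) - to_matrix(regex[1], M, n))
    raise ValueError("unknown node: %r" % (kind,))

# e.g., with I, F and M as in the previous sketch,
# I @ to_matrix(("star", ("union", ("sym", "a"), ("sym", "b"))), M, 2) @ F  ≈ 1.

Sandwiching the evaluated matrix of α^k_{0,k+1} between I and F, with a trailing M(Σ*) factor for the arbitrary suffix, is exactly how line 10 of Algorithm 1 produces P(Σ*w1 . . . wkΣ*). We now return to the online setting, in which the characters of w arrive one at a time.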
This is in contrast to the setting from Algorithm 1 and (Cognetta et al., 2018) in which the entire string was known ahead of time. In this setting, we build the KMP automaton step by step (instead of all at once at the beginning), and then eliminate the most recent state to maintain our dynamic programming table. The key difficulty in this method is that when adding a new state, |Σ| −1 back transitions (and 1 forward transition) are added to the DFA. The label and destination of each back transition cannot be predicted until a new character is added, the back transitions can go to any state up to the current one, and different configurations can arise depending on the newly added character. Together, these make correctly accounting for the paths that are generated at each step non-trivial. Lemma 4. The term αk k+1,k+1 can be computed as P c∈Σ−wk c(αk−1 δ(qk+1,c),k+1 + αk−1 δ(qk+1,c),k(αk−1 k,k )∗αk−1 k,k+1). The basic intuition of Lemma 4 is to concatenate the character from the backwards transition to the front of every string that brings state δ(qi, c) to state qk+1. When finding αk i,k+1 where i ≤k, the term can be computed as normal and evaluating αk k+1,k+1 takes O(|Σ||QP|m) time. Lemma 5. In the online setting, at each iteration k, only the k + 1th column of table T ′ needs to be evaluated. In contrast to Lemma 1 in the offline setting, where only the elements in the k + 1-th column below index k need to be computed, all elements of the k + 1-th column need to be evaluated in the online setting. This is due to the sum in Lemma 4 being dependent on the terms αk−1 δ(qk+1,c),k because δ(qk+1, c) can take on any value in [1, k]. Nevertheless, this leads to the following result. Theorem 6. Given a stream of characters w = w1w2 . . . , the infix probability of each prefix 5335 |Q|, |Σ| 500, 26 500, 100 1500, 26 1500, 100 |w| Alg Alg 1 Faster Online Alg 1 Faster Online Alg 1 Faster Online Alg 1 Faster Online 1 0.917 0.103 0.104 0.912 0.107 0.198 13.396 1.780 1.201 13.371 1.720 1.605 2 0.904 0.106 0.098 0.903 0.106 0.205 13.196 1.649 1.320 13.382 1.570 1.750 3 0.909 0.089 0.110 0.926 0.085 0.214 13.154 1.446 1.459 13.290 1.447 1.849 4 0.933 0.075 0.125 0.966 0.074 0.225 13.333 1.295 1.609 13.342 1.273 1.986 5 0.891 0.068 0.133 0.930 0.067 0.238 13.378 1.161 1.763 13.319 1.143 2.135 6 0.917 0.060 0.145 0.931 0.055 0.241 14.352 1.002 1.898 13.282 0.994 2.254 7 0.964 0.051 0.156 0.942 0.053 0.251 14.287 0.869 2.056 13.571 0.832 2.368 8 0.929 0.042 0.192 0.950 0.044 0.259 14.330 0.735 2.189 13.614 0.702 2.479 9 0.912 0.035 0.207 0.954 0.035 0.269 14.673 0.591 2.367 13.661 0.568 2.679 10 0.917 0.026 0.094 0.925 0.027 0.203 13.847 0.447 1.596 13.627 0.445 1.507 Total 9.194 0.656 1.365 9.341 0.663 2.307 137.947 10.976 17.459 134.462 10.694 20.615 Table 2: Timings from the experimental analysis of each algorithm. Alg 1 refers to Algorithm 1. “Faster” refers to the speedup described in Theorem 3. Online refers to Algorithm 2. All results are in seconds. Algorithm 2 Online Incremental Infix 1: procedure INFIX(Stream w = w1w2 . . . , PFA P) 2: D ←KMP DFA for w1 3: T ←re-sizable table 4: T0,1 ←1 5: for i ∈[1, 3]; j ∈[1, 3]; c ∈Σ do 6: if δ(qi, c) = qj then 7: Ti,j ←Ti,j + M(c) 8: X ←1, k ←1 ▷X holds αk 0,k+1. 9: while w is not exhausted do 10: Extend D with new character 11: X ←X(1 −Tk,k)−1Tk,k+1 12: yield IXM(Σ∗)F ▷P(Σ∗w1 . . . 
wkΣ∗) 13: T ′ ←re-sizable table 14: for i ∈[0, k + 1] do 15: j ←k + 1 16: if i ≤k then 17: T ′ i,j ←Ti,j + Ti,k(1 −Tk,k)−1Tk,j 18: else if i = k + 1 then 19: T ′ i,j = P c∈Σ−{wk} M(c)Tδ(qi,c),j 20: T ←T ′, k ←k + 1 of w can be computed online in O(|w|(|w| + |Σ|)|QP|m) time. 6 Experimental Results We now demonstrate the practical effectiveness of the improved and online algorithms. We generate a series of PFAs with varying state space and alphabet size. Because we store transition matrices as dense matrices and the algorithms depend only on |Q| and |Σ| (but not the number of transitions), the underlying structure of the PFA is unimportant. Thus, we can artificially generate the PFAs to control |Q| and |Σ| exactly. We consider PFAs with |Σ| ∈{26, 100} and |Q| ∈{500, 1500}. For each test, we use a random string of 10 characters and measure the time to perform each iteration of Algorithm 1, the asymptotic speedup described in Section 4, and Algorithm 2. We list the median of 10 trials for each iteration. The tests were implemented using Python 3.5 and NumPy and run on an Intel i7-6700 processor with 16gb of RAM. Table 2 contains the experimental results. Note that the asymptotic speedup and online algorithm outperform Algorithm 1 in every setting, which is in line with our theoretical analysis. Across all trials, each iteration of the improved algorithm speeds up while the online version slows down. These observations are not unexpected. The improved version only recomputes a k × k sub-table at iteration k and only requires O(|w|−k) multiplications. On the other hand, the online algorithm must perform O(k + |Σ|) multiplications at iteration k so we expect the runtime to slowly increase. Unlike the online version, the number of operations per iteration of Algorithm 1 and the improved version do not depend on |Σ|, so their runtimes do not differ as |Σ| grows. Consider the second use case for the online algorithm from Section 1, where we have a 500-state PFA with |Σ| = 26 and an input string of length 9, which we wish to extend while maximizing the resulting infix probability. We extrapolate from the timings in Table 2 and anticipate that finding the appropriate extension would take 26∗0.656 ≈ 17.056 seconds using the faster offline algorithm. On the other hand, we expect the online method to only take 1.271 + 26 ∗0.094 ≈3.715 seconds. 7 Conclusion Building off of our previous work, we have considered the problem of incrementally computing the infix probabilities of each prefix of a given string. We provide an improved analysis of our incremental algorithm that leads to an asymptotic speedup. Furthermore, we solve the open problem of computing the infix probabilities of each prefix of a stream of characters. The problem of adapting 5336 this approach to higher order statistical language models (such as PCFGs) remains open. Acknowledgments This work was supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (2018-0-00247). References Cyril Allauzen and Mehryar Mohri. 2009. N-way composition of weighted finite-state transducers. International Journal of Foundations of Computer Science, 20(4):613–627. Arturo Argueta and David Chiang. 2018. Composing finite state transducers on GPUs. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2697–2705. Ronald Book, Shimon Even, Sheila Greiback, and Gene Ott. 1971. Ambiguity in graphs and expressions. 
IEEE Transactions on Computers, 20:149– 153. Marco Cognetta, Yo-Sub Han, and Soon Chan Kwon. 2018. Incremental computation of infix probabilities for probabilistic finite automata. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2732–2741. Anna Corazza, Renato De Mori, Roberto Gretter, and Giorgio Satta. 1991. Computation of probabilities for an island-driven parser. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):936–950. Ana L. N. Fred. 2000. Computation of substring probabilities in stochastic grammars. In Grammatical Inference: Algorithms and Applications, pages 103– 114. Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1183–1191. Donald E. Knuth, Jr. James H. Morris, and Vaughan R. Pratt. 1977. Fast pattern matching in strings. SIAM Journal on Computing, 6:323–350. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69–88. Mark-Jan Nederhof and Giorgio Satta. 2011. Computation of infix probabilities for probabilistic contextfree grammars. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1213–1221. Volker Strassen. 1969. Gaussian elimination is not optimal. Numer. Math., 13:354–356. Leonid Velikovich, Ian Williams, Justin Scheiner, Petar S. Aleksic, Pedro J. Moreno, and Michael Riley. 2018. Semantic lattice processing in contextual automatic speech recognition for google assistant. In Interspeech, pages 2222–2226. Enrique Vidal, Franck Thollard, Colin de la Higuera, Francisco Casacuberta, and Rafael C. Carrasco. 2005a. Probabilistic finite-state machines–part I. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:1013–1025. Enrique Vidal, Franck Thollard, Colin de la Higuera, Francisco Casacuberta, and Rafael C. Carrasco. 2005b. Probabilistic finite-state machines–part II. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:1026–1039. A Proofs Lemma 1. Computing αn 0,n+1 only requires knowledge of the terms of the form αk i,j, where i, j ≥k + 1, or of the form ak 0,k+1. Proof. This can be seen by expanding the term αn 0,n+1. As αn 0,n+1 = αn−1 0,n+1 + αn−1 0,n (αn−1 n,n )∗αn−1 n,n+1. The term αn−1 0,n+1 is always the empty set as there is no path from state n −1 to n + 1 that does not go through state n in the KMP DFA. Recursively applying this expansion to αn−1 n,n and αn−1 n,n+1 proves the claim. Lemma 2. Consider αk i,j where k + 1 ≤i < j. Then αk i,j = αk−1 i,j . Proof. Let i = k + 1 + x and j = k + 1 + y where x ≥0 and y > 0. Consider the expansion of the term αk k+1+x,k+1+j = αk−1 k+1+x,k+1+j+ αk−1 k+1+x,k(αk−1 k,k )∗αk−1 k,k+1+j. In the KMP DFA, state qi has exactly one transition to state qi+1 and |Σ| −1 transitions to lower (or equal) states. In other words, there is no path from a state of label i to a state with label at least i + 2 that does not go through state i + 1. Thus, αk−1 k,k+1+y = ∅. Then, αk−1 k+1+x,k(αk−1 k,k )∗αk−1 k,k+1+y = ∅, so αk k+1+x,k+1+j = αk−1 k+1+x,k+1+j. Theorem 3. In Algorithm 1, the k-th iteration requires only O(|w|) matrix inversions and multiplications to update the dynamic programming table. Proof. We use Lemmas 1 and 2. 
At iteration k of Algorithm 1, Lemma 1 states that we only need to update the lower right k × k table as that is all 5337 that is required to complete the k + 1-th iteration. Lemma 2 tells us that all of the terms in the lower right k × k table except for the terms in the k-th column are the same as in the previous iteration. Thus, those terms can simply be copied and the O(|w|) terms in the k-th column will be updated normally, with only. Lemma 4. The term αk k+1,k+1 can be computed as P c∈Σ−wk c(αk−1 δ(qk+1,c),k+1 + αk−1 δ(qk+1,c),k(αk−1 k,k )∗αk−1 k,k+1). Proof. For simplicity, we assume there are no self loops in the KMP DFA except on the initial state. The case where there are can be handled similarly. Note that there can only be at most one self loop not on the initial state of a KMP DFA. Such a self loop will be on the state corresponding to the last state where wk = wk−1 = . . . w1. First, we expand the term αk k+1,k+1 = αk−1 k+1,k+1 + αk−1 k+1,k(αk−1 k,k )∗αk−1 k,k+1. Since we assume there are no self loops on states k or k + 1, we can simplify the expression to be αk k+1,k+1 = αk−1 k+1,kαk−1 k,k+1. The term αk−1 k,k+1 is whatever character is on the transition from state k to k + 1. On the other hand, αk−1 k+1,k is the set of paths that take state k+1 to state k without passing through states higher than k. Lemma 5. In the online setting, at each iteration k, only the k + 1th column of table T ′ needs to be evaluated. Proof. First, we know that αk k+1,k+1 requires knowledge of each term in the kth column of αk−1. Further, expanding the term αk i,k+1 shows that only terms on the k-th and k + 1-th column of αk−1 are required for any of them. Elements on the k + 1th column of αk−1 are equal to the transitions between state qi and qk+1 per Lemma 2. We then proceed by induction on k and the claim follows. Theorem 6. Given a stream of characters w = w1w2 . . . , the infix probability of each prefix of w can be computed online in O(|w|(|w| + |Σ|)|QP|m) time. At iteration k, we need only recompute the k-th column in the table. All but the k-th element in the column are computed using the normal recurrence which each require O(1) multiplications. Computing the k-th element requires O(|Σ|) multiplications and inversions, so in total each iteration requires O(k + |Σ|) matrix multiplications. Since O(k) = O(|w|) and we perform O(|w|) iterations, we find the runtime is O(|w|(|w| + |Σ|)|QP|m).
2019
528
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5338–5343 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5338 How to best use Syntax in Semantic Role Labelling Yufei Wang1 and Mark Johnson1 and Stephen Wan2 and Yifang Sun3 and Wei Wang3 Macquarie University, Sydney, Australia1 CSIRO Data61, Sydney, Australia2 The University of New South Wales, Sydney, Australia3 [email protected], [email protected] [email protected] {yifangs,weiw}@cse.unsw.edu.au Abstract There are many different ways in which external information might be used in an NLP task. This paper investigates how external syntactic information can be used most effectively in the Semantic Role Labeling (SRL) task. We evaluate three different ways of encoding syntactic parses and three different ways of injecting them into a state-of-the-art neural ELMo-based SRL sequence labelling model. We show that using a constituency representation as input features improves performance the most, achieving a new state-of-the-art for non-ensemble SRL models on the in-domain CoNLL’05 and CoNLL’12 benchmarks.1 1 Introduction Properly integrating external information into neural networks has received increasing attention recently (Wu et al., 2018; Li et al., 2017; Strubell et al., 2018). Previous research on this topic can be roughly categorized into three classes: i) Input: The external information are presented as additional input features (i.e., dense real-valued vectors) to the neural network (Collobert et al., 2011). ii) Output: The neural network is trained to predict the main task and the external information in a multi-task approach (Changpinyo et al., 2018). iii) Auto-encoder: This approach, recently proposed by Wu et al. (2018), simultaneously combines the Input and Output during neural models training. The simplicity of these methods allow them to apply to many NLP sequence tasks and various neural model architectures. However, previous studies often focus on integrating word-level shallow features such as POS or chunk tags into the sequence labelling tasks. Syntactic information, which encodes the longrange dependencies and global sentence structure, has not been studied as carefully. This paper fills 1Our model source code is available in https:// github.com/GaryYufei/bestParseSRL this gap by integrating syntactic information to the sequence labelling task. We address three questions: 1) How should syntactic information be encoded as word-level features? 2) What is the best way of integrating syntactic information? and 3) What effect does the choice of syntactic representation have on the performance? We study these questions in the context of Semantic Role Labelling (SRL). A SRL system extracts the predicate-argument structure of a sentence.2 Syntax was an essential component of early SRL systems (Xue and Palmer, 2004; Punyakanok et al., 2008). The state-of-the-art neural SRL systems use a neural sequence labelling model without any syntax knowledge (He et al., 2018, 2017; Tan et al., 2018). We show below that injecting external syntactic knowledge into a neural SRL sequence labelling model can improve the performance, and our best model sets a new stateof-the-art for a non-ensemble SRL system. In this paper we express the external syntactic information as vectors of discrete features, because this enables us to explore different ways of injecting the syntactic information into the neural SRL model. 
Specifically, we propose three different syntax encoding methods: a) a full constituency tree representation (Full-C); b) an SRLspecific span representation (SRL-C); and c) a dependency tree representation (Dep). For (a) we adapt the constituency parsing representation from (G´omez-Rodr´ıguez and Vilares, 2018) and encode the tree structure as a set of features for word pairs. For (b), we use a categorical representation of the constituency spans that are most relevant to SRL tasks based on (Xue and Palmer, 2004). Finally, (c) we propose a discrete vector representation that encodes the head-modifier relationships in the dependency trees. We evaluate the effectiveness of these encodings using three different integration methods on 2who did what to whom, where and when 5339 the SRL CoNLL’05 and CoNLL’12 benchmarks. We show that using either of the constituency representations in either the Input or the AutoEncoder configurations produces the best performance. These results are noticeably better than a strong baseline and set a new state-of-the-art for non-ensemble SRL systems. 2 Related Work Semantic Role Labeling (SRL) generally refers to the PropBank style of annotation (Palmer et al., 2005). Broadly speaking, prior work on SRL makes use of syntactic information in two different ways. Carreras and M`arquez (2005); Pradhan et al. (2013) incorporate constituent-structure span-based information, while Hajiˇc et al. (2009) incorporate dependency-structure information. This information can be incorporated into an SRL system in several different ways. Swayamdipta et al. (2018) use span information from constituency parse trees as an additional training target in a multi-task learning approach, similar to one of the approaches we evaluate here. Roth and Lapata (2016) use an LSTM model to represent the dependency paths between predicates and arguments and feed the output as the input features to their SRL system. Marcheggiani and Titov (2017) use Graph Convolutional Network (Niepert et al., 2016) to encode the dependency parsing trees into their LSTM-based SRL system. Xia et al. (2019) represent dependency parses using position-based categorical features of tree structures in a neural model. Strubell et al. (2018) use dependency trees as a supervision signal to train one of attention heads in a self-attentive neural model. 3 Syntactic Representation This section introduces our representations of constituency and dependency syntax trees. 3.1 Full-C: Full Constituency Representation G´omez-Rodr´ıguez and Vilares (2018) propose a full representation of constituency parsing trees where the string position between wi and wi+1 is associated with the pair (n(wi) −n(wi−1), l(wi)) where n(wi) is the number of common ancestors between (wi, wi+1) and l(wi) is the non-terminal label at the lowest common ancestor3. For sim3The full constituency trees can be reconstructed from this representation, details refer to (G´omez-Rodr´ıguez and Figure 1: Examples of Full-C (n(w), r(w) and l(w)) and SRL-C (SRL-Cons). reported is the predicate word. The blue non-terminals are candidate constituents in the SRL-C. The circled number is the extraction order. plicity, we define r(wi) = n(wi) −n(wi−1) throughout this paper. 4 This encoding method transforms the whole constituency parsing tree into n−1 (r(wi), l(wi)) feature pairs for a length-n sentence. We assign (r(wi), l(wi)) to the wi (0 < i ≤n−1) and leave a padding symbol N to the wn. We treat r(wi) and l(wi) as two separate categorical features for each word. 
We refer this representation as the Full-C (Figure 1). 3.2 SRL-C: SRL Span Representation Xue and Palmer (2004) show only a small fraction of the constituents in the parse trees are useful for the SRL task given the predicate word. That means encoding the full constituency parsing tree may introduce redundant information. Therefore, we preserve the constituent spans that are most likely to be useful for the predicate word in the trees. We re-use the pruning algorithm in (Xue and Palmer, 2004). Their algorithm collects the potential argument constituents by walking up the tree to the root node recursively, which filters out many irrelevant constituents from the syntax trees with 99.3% of the ground truth arguments preserved. We encode the output of this rule-based pruning algorithm using a standard BIO (Begin-InsideOutside) annotation scheme. The words that are Vilares, 2018) 4In (G´omez-Rodr´ıguez and Vilares, 2018), both r(wi) and n(wi) is applicable for this encoding method. Our pilot experiments show that r(wi) works much better than the absolute representation n(wi). 5340 outside any candidate constituent receive the tag O. The words that are beginning of a candidate constituent receive the tag B, and the words that are inside a candidate constituent receive the tag I. We use the tag A to label words in prepositional phrases. We refer this representation as the SRLC (Figure 1). 3.3 Dep: Dependency Tree Representation The seeds already are in the script Left 0 1 0 2 0 0 1 Right 0 0 0 1 1 0 0 RG 1 2 1 4 -1 1 -2 Edge L L N R R L R DL det nsubj dep root prep det pobj root det nsubj dep prep det pobj Figure 2: Features from Dependency Tree. The dependency tree representation encodes key aspects of the head-modifier relationships within the sentence. We also consider encoding constituent edge information. The following word-level features have been proposed: a) #left/right Dependents (Left / Right). The number of dependents a word has on the left and right side. b) Right/Left-most Dependent (Edge). Whether the word is the Right/Left/None-most dependent of its governor. c) Relative Distance to Governor (RG). The relative distance between the word and its governor. d) Dependency Label (DL). The label describing the relationship between each pair of dependent and governor. We refer this representation as the Dep (Figure 25). 4 Injecting External Information In this section, we introduce three different methods for integrating external syntactic information into the neural SRL system (Figure 3): 5In this example, we assume the “root” is the first word of the sentence from the left. Figure 3: Model Architecture. Blue indicates the baseline model; Red indicates the multi-task output component; Green indicates the external feature component. Baseline Our baseline system is a stacked biLSTM architecture (He et al., 2017). We use ELMo (Peters et al., 2018) as word embeddings and a CRF output decoder on the top of LSTM, as shown in Figure 3. Input This approach represents the external categorical features as trainable, high dimensional dense vector token embeddings, which are concatenated with the representation vectors of ELMo in the baseline model. The syntactic parse trees that are used as the input features are produced by Kitaev and Klein (2018) (for constituency parsing). The dependency trees are produced by transforming the constituency trees using Stanford CoreNLP toolkit. 
This ensures that the constituency and dependency parses have a similar error distribution, helping to control for parsing quality. Our constituency and dependency parses score a state-of-the-art 95.4 F1 and 96.4% UAS on the WSJ test set respectively. We used a 20-fold cross-validation procedure to produce the data for the external syntactic input. Output In this approach, our model predicts both SRL sequence tags and syntactic features (encoded as the word-level features above) simultaneously. We use a log loss for each categorical feature. The final training loss is the multi-task objective LSRL−Pm f=1 log pf(y⋆ f), where pf(yf) is the probability of generating yf as the fth feature (m features in total, m = 1, 2, 5 for SRL-C, Full-C and Dep respectively) and y⋆ f is the ground truth for the fth feature. Gold training data was used as the external syntactic information for the multitask output setting, as this external information is not required at test time. Auto-encoder Following Wu et al. (2018), we use external information as input features and as a multi-task training objective simultaneously, so the system is behaving somewhat like an autoencoder. This auto-encoder has to reproduce the syntactic information in its output that it is fed in 5341 its input, encouraging it to incorporate this information in its internal representations. The input and output representations are the same as above. 5 Experiments We evaluate 10 different models (the 3 ways of using external information by 3 different encodings of syntax and a baseline model) on CoNLL’05 (Carreras and M`arquez, 2005) and CoNLL’12 (Pradhan et al., 2013) benchmarks, under the evaluation setting where the gold predicate is given. The CoNLL’05 benchmark uses WSJ and Brown test as in-domain and out-domain evaluation respectively. 5.1 Main Results Table 1 shows the effect of using the three different kinds of external syntactic information in the three different ways just described. When used as input features, all three representations improve over our baseline system. This shows that syntactic representations provide additional useful information, which is beyond the dynamic context embeddings from ELMo, to SRL task. Syntax Representations Models using constituency representations are 0.3% - 0.6% better than the models using the dependency representations. This might be because constituents align more directly with SRL arguments and constituency information is easier to use. Inject. Model CoNLL’05 CoNLL’12 WSJ Brown Test Baseline 87.7 78.1 85.8 Input Full-C 88.1 78.9 86.4 SRL-C 88.2 79.3 86.4 Dep 87.9 78.4 86.1 Output Full-C 87.7 78.4 85.9 SRL-C 87.9 78.5 85.9 Dep 87.6 78.9 85.8 Auto Encoder Full-C 88.2 77.7 86.3 SRL-C 88.2 79.0 86.4 Dep 87.6 78.1 85.7 Table 1: Injecting External Syntax Information. Bold number is the best performance in each column, same below. The SRL-C is slightly better than the Full-C for in-domain evaluation. The advantages of the SRL-C approach are greater on the out-of-domain (Brown) evaluation, with a margin of 0.4%. This could be because Full-C is more sensitive to parsing errors than SRL-C. When we compare gold and automatic parser representations in Brown device data, 10.5% of the words get different Full-C features while this only 7.9% get different SRL-C features. External Information Injection Table 1 shows at least on this task, multi-task learning does not perform as well as adding external information as additional input features. Both the Input and Auto-Encoder methods work equally well. 
We conclude that the extra complexity of the autoencoder model is not justified. In particular, Dep with auto-encoder hurts SRL accuracy (0.6% behind the model with the constituency features). 5.2 Comparison with existing systems We compare our best system (SRL-C used as Input) with previous work in Table 2. We improve upon the state-of-the-art results for nonensemble SRL models on in-domain test by 0.6% and 0.2% on CoNLL’05 and CoNLL’12 respectively. Our model also achieves a competitive result on CoNLL’05 Brown Test. Comparing with the strong ensemble model in (Ouchi et al., 2018), our model is only 0.3% and 0.6% lower in two benchmarks respectively. Model CoNLL’05 CoNLL’12 WSJ Brown Test ELMo Baseline 87.7 78.1 85.8 Strubell et al. (2018) 86.0 76.5 Xia et al. (2019) 86.9 76.8 He et al. (2018) 87.4 80.4 85.5 Ouchi et al. (2018) 87.6 78.7 86.2 Our best model 88.2 79.3 86.4 Xia et al. (2019)§ 87.8 78.8 Ouchi et al. (2018)§ 88.5 79.6 87.0 Table 2: Comparison with existing systems. § indicates ensemble models. 5.3 Using Gold Parse Trees Finally, we conduct an oracle experiment where all syntactic features are derived from gold trees. Our model performance improves by around 3% - 4% F1 score (see Table 3). This bounds the improvement in SRL that one can expect with improved syntactic parses. 5342 Model CoNLL’05 CoNLL’12 WSJ Brown Test Our best model 88.2 79.3 86.4 Full-C 92.2 83.5 91.4 SRL-C 91.7 83.4 90.3 Dep 91.9 83.3 91.1 Table 3: SRL Performance with Gold Trees 6 Conclusion and Future Work This paper evaluated three different ways of representing external syntactic parses, and three different ways of injecting that information into a stateof-the-art SRL system. We showed that representing the external syntactic information as constituents was most effective. Using the external syntactic information as input features was far more effective than a multi-task learning approach, and just as effective as an auto-encoder approach. Our best system sets a new state-of-theart for non-ensemble SRL systems on in-domain data. In future work we will explore how external information is best used in ensembles of models for SRL and other tasks. For example, is it better for all the models in an ensemble to use the same external information, or is it more effective if they make use of different kinds of information? We will also investigate whether the choice of method for injecting external information has the same impact on other NLP tasks as it does on SRL. Acknowledgments This research was supported by the Australian Research Councils Discovery Projects funding scheme (project number DPs 160102156, 170103710, 180103411), D2DCRC (DC25002, DC25003), and in part by CSIRO Data61. References Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 152–164. Association for Computational Linguistics. Soravit Changpinyo, Hexiang Hu, and Fei Sha. 2018. Multi-task learning for sequence tagging: An empirical study. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2965–2977. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Carlos G´omez-Rodr´ıguez and David Vilares. 2018. Constituent parsing as sequence labeling. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314– 1324. Association for Computational Linguistics. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1–18, Boulder, Colorado. Association for Computational Linguistics. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369. Association for Computational Linguistics. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473–483. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686. Association for Computational Linguistics. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688–697. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515, Copenhagen, Denmark. Association for Computational Linguistics. 5343 Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. 2016. Learning convolutional neural networks for graphs. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2014–2023. JMLR.org. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1630–1642. Association for Computational Linguistics. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. 
In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152. Association for Computational Linguistics. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1192–1202, Berlin, Germany. Association for Computational Linguistics. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038. Association for Computational Linguistics. Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3772–3782. Association for Computational Linguistics. Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep semantic role labeling with self-attention. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4929–4936. Minghao Wu, Fei Liu, and Trevor Cohn. 2018. Evaluating the utility of hand-crafted features in sequence labelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2850–2856. Association for Computational Linguistics. Qingrong Xia, Zhenghua Li, Min Zhang, Meishan Zhang, Guohong Fu, Rui Wang, and Luo Si. 2019. Syntax-aware neural semantic role labeling. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, (AAAI-19), Honolulu, Hawaii, USA, Jan 27-Feb 1, 2019. Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 88–94.
2019
529
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 557–566 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 557 Progressive Self-Supervised Attention Learning for Aspect-Level Sentiment Analysis Jialong Tang1,2,3∗, Ziyao Lu1∗, Jinsong Su1†, Yubin Ge4, Linfeng Song5, Le Sun2, Jiebo Luo5 1Xiamen University, Xiamen, China 2Institute of Software, Chinese Academy of Sciences, Beijing, China 3University of Chinese Academy of Sciences, Beijing, China 4University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA 5Department of Computer Science, University of Rochester, Rochester NY 14627, USA [email protected], [email protected] [email protected] Abstract In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, for the sake of acquiring the importance of each context word on the given aspect. However, such a mechanism tends to excessively focus on a few frequent words with sentiment polarities, while ignoring infrequent ones. In this paper, we propose a progressive self-supervised attention learning approach for neural ASC models, which automatically mines useful attention supervision information from a training corpus to refine attention mechanisms. Specifically, we iteratively conduct sentiment predictions on all training instances. Particularly, at each iteration, the context word with the maximum attention weight is extracted as the one with active/misleading influence on the correct/incorrect prediction of every instance, and then the word itself is masked for subsequent iterations. Finally, we augment the conventional training objective with a regularization term, which enables ASC models to continue equally focusing on the extracted active context words while decreasing weights of those misleading ones. Experimental results on multiple datasets show that our proposed approach yields better attention mechanisms, leading to substantial improvements over the two stateof-the-art neural ASC models. Source code and trained models are available.1 1 Introduction Aspect-level sentiment classification (ASC), as an indispensable task in sentiment analysis, aims at inferring the sentiment polarity of an input sentence in a certain aspect. In this regard, pre∗Equal contribution †Corresponding author 1https://github.com/DeepLearnXMU/PSSAttention vious representative models are mostly discriminative classifiers based on manual feature engineering, such as Support Vector Machine (Kiritchenko et al., 2014; Wagner et al., 2014). Recently, with the rapid development of deep learning, dominant ASC models have evolved into neural network (NN) based models (Tang et al., 2016b; Wang et al., 2016; Tang et al., 2016a; Ma et al., 2017; Chen et al., 2017; Li et al., 2018; Wang et al., 2018), which are able to automatically learn the aspect-related semantic representation of an input sentence and thus exhibit better performance. Usually, these NN-based models are equipped with attention mechanisms to learn the importance of each context word towards a given aspect. It can not be denied that attention mechanisms play vital roles in neural ASC models. However, the existing attention mechanism in ASC suffers from a major drawback. Specifically, it is prone to overly focus on a few frequent words with sentiment polarities and little attention is laid upon low-frequency ones. As a result, the performance of attentional neural ASC models is still far from satisfaction. 
We speculate that this is because there exist widely “apparent patterns” and “inapparent patterns”. Here, “apparent patterns” are interpreted as high-frequency words with strong sentiment polarities and “inapparent patterns” are referred to as low-frequency ones in training data. As mentioned in (Li et al., 2018; Xu et al., 2018; Lin et al., 2017), NNs are easily affected by these two modes: “apparent patterns” tend to be overly learned while “inapparent patterns” often can not be fully learned. Here we use sentences in Table 1 to explain this defect. In the first three training sentences, given the fact that the context word “small” occurs frequently with negative sentiment, the atten558 Type Sentence Ans./Pred. Train The [place] is small and crowded but the service is quick . Neg / — Train The [place] is a bit too small for live music . Neg / — Train The service is decent even when this small [place] is packed . Neg / — Test At lunch time , the [place] is crowded . Neg / Pos Test A small area makes for quiet [place] to study alone . Pos / Neg Table 1: The example of attention visualization for five sentences, where the first three are training instances and the last two are test ones. The bracketed bolded words are target aspects. Ans./Pred. = ground-truth/predicted sentiment label. Words are highlighted with different degrees according to attention weights. tion mechanism pays more attention to it and directly relates the sentences containing it with negative sentiment. This inevitably causes another informative context word “crowded” to be partially neglected in spite of it also possesses negative sentiment. Consequently, a neural ASC model incorrectly predicts the sentiment of the last two test sentences: in the first test sentence, the neural ASC model fails to capture the negative sentiment implicated by ”crowded”; while, in the second test sentence, the attention mechanism directly focuses on “small” though it is not related to the given aspect. Therefore, we believe that the attention mechanism for ASC still leaves tremendous room for improvement. One potential solution to the above-mentioned issue is supervised attention, which, however, is supposed to be manually annotated, requiring labor-intense work. In this paper, we propose a novel progressive self-supervised attention learning approach for neural ASC models. Our method is able to automatically and incrementally mine attention supervision information from a training corpus, which can be exploited to guide the training of attention mechanisms in ASC models. The basic idea behind our approach roots in the following fact: the context word with the maximum attention weight has the greatest impact on the sentiment prediction of an input sentence. Thus, such a context word of a correctly predicted training instance should be taken into consideration during the model training. In contrast, the context word in an incorrectly predicted training instance ought to be ignored. To this end, we iteratively conduct sentiment predictions on all training instances. Particularly, at each iteration, we extract the context word with the maximum attention weight from each training instance to form attention supervision information, which can be used to guide the training of attention mechanism: in the case of correct prediction, we will remain this word to be considered; otherwise, the attention weight of this word is expected to be decreased. 
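This selection rule — keep focusing on the most-attended context word when the prediction is right, demote it when the prediction is wrong, and hide it before the next pass — can be written down in a few lines. The sketch below is purely illustrative and not taken from the released PSSAttention code; the function and set names are ours.

```python
def update_supervision_sets(tokens, attention, predicted, gold,
                            active, misleading):
    """One mining step for a single training instance.

    tokens     : context words (already '<mask>'-ed in earlier rounds)
    attention  : one attention weight per token
    predicted  : sentiment predicted for this (masked) sentence
    gold       : ground-truth sentiment
    active     : words to keep focusing on (from correct predictions)
    misleading : words whose attention should be decreased
    Returns the index of the extracted word so it can be masked next round.
    """
    m = max(range(len(tokens)), key=lambda i: attention[i])
    if predicted == gold:
        active.add(tokens[m])        # correct: remain focused on this word
    else:
        misleading.add(tokens[m])    # wrong: this word misled the model
    return m

# toy usage with made-up attention weights
active, misleading = set(), set()
tokens = ["the", "place", "is", "small", "and", "crowded"]
attention = [0.02, 0.05, 0.03, 0.70, 0.05, 0.15]
m = update_supervision_sets(tokens, attention, "Neg", "Neg", active, misleading)
tokens[m] = "<mask>"                 # shield it before the next iteration
print(active, misleading, tokens)
```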
Then, we mask all extracted context words of each training instance so far and then refollow the above process to discover more supervision information for attention mechanisms. Finally, we augment the standard training objective with a regularizer, which enforces attention distributions of these mined context words to be consistent with their expected distributions. Our main contributions are three-fold: (1) Through in-depth analysis, we point out the existing drawback of the attention mechanism for ASC. (2) We propose a novel incremental approach to automatically extract attention supervision information for neural ASC models. To the best of our knowledge, our work is the first attempt to explore automatic attention supervision information mining for ASC. (3) We apply our approach to two dominant neural ASC models: Memory Network (MN) (Tang et al., 2016b; Wang et al., 2018) and Transformation Network (TNet) (Li et al., 2018). Experimental results on several benchmark datasets demonstrate the effectiveness of our approach. 2 Background In this section, we give brief introductions to MN and TNet, which both achieve satisfying performance and thus are chosen as the foundations of our work. Here we introduce some notations to facilitate subsequent descriptions: x= (x1, x2, ..., xN) is the input sentence, t= (t1, t2, ..., tT ) is the given target aspect, y, yp∈{Positive, Negative, Neutral} denote the ground-truth and the predicted sentiment, respectively. 559 … "# "$ "% … ℎ# ℎ% … '# '% ((*) '$ Attention ℎ$ ,… CPT CPT CPT … ,"# "$ "% Bi-LSTM L× ℎ# (./#) ℎ$ (./#) ℎ# (.) ℎ$ (.) CNN/Attention … *# *$ *0 … … … … … … *# *$ *1 ℎ2 (./#) Bi-LSTM Attention ((*) Gating Fully-connected CPM ℎ2 (.) ℎ% (.) ℎ% (./#) … 3 3 Figure 1: The framework architecture of MN. … "# "$ "% … ℎ# ℎ% … '# '% ((*) '$ Attention ℎ$ ,… CPT CPT CPT … ,"# "$ "% Bi-LSTM ×L ℎ# (./#) ℎ$ (./#) ℎ# (.) ℎ$ (.) CNN/Attention … *# *$ *0 … … … … … … … *# *$ *1 ℎ2 (./#) Bi-LSTM Attention ((*) Gating Fully-connected CPM ℎ2 (.) ℎ% (./#) ℎ% (.) Figure 2: The framework architecture of TNet/TNetATT. Note that TNet-ATT is the variant of TNet replacing CNN with an attention mechanism. MN (Tang et al., 2016b; Wang et al., 2018). The framework illustration of MN is given in Figure 1. We first introduce an aspect embedding matrix converting each target aspect word tj into a vector representation, and then define the final vector representation v(t) of t as the averaged aspect embedding of its words. Meanwhile, another embedding matrix is used to project each context word xi to the continuous space stored in memory, denoted by mi. Then, an internal attention mechanism is applied to generate the aspectrelated semantic representation o of the sentence x: o =P isoftmax(vT t Mmi)hi, where M is an attention matrix and hi is the final semantic representation of xi, induced from a context word embedding matrix. Finally, we use a fully connected output layer to conduct classification based on o and v(t). TNet (Li et al., 2018). Figure 2 provides the framework illustrations of TNet, which mainly consists of three components: (1) The bottom layer is a Bi-LSTM that transforms the input x into the contextualized word representations h(0)(x)=(h(0) 1 , h(0) 2 , ..., h(0) N ) (i.e. hidden states of Bi-LSTM). (2) The middle part, as the core of the whole model, contains L layers of Context-Preserving Transformation (CPT), where word representations are updated as h(l+1)(x)=CPT(h(l)(x)). The key operation of CPT layers is Target-Specific Transformation. 
It contains another Bi-LSTM for generating v(t) via an attention mechanism, and then incorporates v(t) into the word representations. Besides, CPT layers are also equipped with a Context-Preserving Mechanism (CPM) to preserve the context information and learn more abstract word-level features. In the end, we obtain the word-level semantic representations h(x)=(h1,h2...,hN), with hi=h(L) i . (3) The topmost part is a CNN layer used to produce the aspect-related sentence representation o for the sentiment classification. In this work, we consider another alternative to the original TNet, which replaces its topmost CNN with an attention mechanism to produce the aspect-related sentence representation as o=Atten(h(x), v(t)). In Section 4, we will investigate the performance of the original TNet and its variant equipped with an attention mechanism, denoted by TNet-ATT. Training Objective. Both of the abovementioned models take the negative log-likelihood of the gold-truth sentiment tags as their training objectives: J(D; θ) = − X (x,t,y)∈D J(x, t, y; θ) = X (x,t,y)∈D d(y) · logd(x, t; θ), (1) where D is the training corpus, d(y) is the one-hot vector of y, d(x, t; θ) is the model-predicted sentiment distribution for the pair (x,t), and · denotes the dot product of two vectors. 3 Our Approach In this section, we first describe the basic intuition behind our approach and then provide its details. Finally, we elaborate how to incorporate the mined supervision information for attention mechanisms into neural ASC models. It is noteworthy that our method is only applied to the training optimization 560 of neural ASC models, without any impact on the model testing. 3.1 Basic Intuition The basic intuition of our approach stems from the following fact: in attentional ASC models, the importance of each context word on the given aspect mainly depends on its attention weight. Thus, the context word with the maximum attention weight has the most important impact on the sentiment prediction of the input sentence. Therefore, for a training sentence, if the prediction of ASC model is correct, we believe that it is reasonable to continue focusing on this context word. Conversely, the attention weight of this context word should be decreased. However, as previously mentioned, the context word with the maximum attention weight is often the one with strong sentiment polarity. It usually occurs frequently in the training corpus and thus tends to be overly considered during model training. This simultaneously leads to the insufficient learning of other context words, especially low-frequency ones with sentiment polarities. To address this problem, one intuitive and feasible method is to first shield the influence of this most important context word before reinvestigating effects of remaining context words of the training instance. In that case, other low-frequency context words with sentiment polarities can be discovered according to their attention weights. 3.2 Details of Our Approach Based on the above analysis, we propose a novel incremental approach to automatically mine influential context words from training instances, which can be then exploited as attention supervision information for neural ASC models. As shown in Algorithm 1, we first use the initial training corpus D to conduct model training, and then obtain the initial model parameters θ(0) (Line 1). Then, we continue training the model for K iterations, where influential context words of all training instances can be iteratively extracted (Lines 6-25). 
During this process, for each training instance (x, t, y), we introduce two word sets initialized as ∅(Lines 2-5) to record its extracted context words: (1) sa(x) consists of context words with active effects on the sentiment prediction of x. Each word of sa(x) will be encouraged to remain considered in the refined model training, and (2) sm(x) contains context words with misleading Algorithm 1 : Neural ASC Model Training with Automatically Mined Attention Supervision Information. Input: D: the initial training corpus; θinit: the initial model parameters; ϵα: the entropy threshold of attention weight distribution; K: the maximum number of training iterations; 1: θ(0) ←Train(D; θinit) 2: for (x, t, y) ∈D do 3: sa(x) ←∅ 4: sm(x) ←∅ 5: end for 6: for k = 1, 2..., K do 7: D(k) ←∅ 8: for (x, t, y) ∈D do 9: v(t) ←GenAspectRep(t, θ(k−1)) 10: x′ ←MaskWord(x, sa(x), sm(x)) 11: h(x′) ←GenWordRep(x′, v(t), θ(k−1)) 12: yp, α(x′) ←SentiPred(h(x′), v(t), θ(k−1)) 13: E(α(x′)) ←CalcEntropy(α(x′)) 14: if E(α(x′)) < ϵα then 15: m ←argmax1≤i≤N α(x′ i) 16: if yp == y then 17: sa(x) ←sa(x) ∪{x′ m} 18: else 19: sm(x) ←sm(x) ∪{x′ m} 20: end if 21: end if 22: D(k) ←D(k) ∪(x′, t, y) 23: end for 24: θ(k) ←Train(D(k); θ(k−1)) 25: end for 26: Ds ←∅ 27: for (x, t, y) ∈D do 28: Ds ←Ds ∪(x, t, y, sa(x), sm(x)) 29: end for 30: θ ←Train(Ds) Return: θ; effects, whose attention weights are expected to be decreased. Specifically, at the k-th training iteration, we adopt the following steps to deal with (x, t, y): In Step 1, we first apply the model parameters θ(k−1) of the previous iteration to generate the aspect representation v(t) (Line 9). Importantly, according to sa(x) and sm(x), we then mask all previously extracted context words of x to create a new sentence x′, where each masked word is replaced with a special token “⟨mask⟩” (Line 10). In this way, the effects of these context words will be shielded during the sentiment prediction of x′, and thus other context words can be potentially extracted from x′. Finally, we generate the word representations h(x′)={h(x′ i)}N i=1 (Line 11). In Step 2, on the basis of v(t) and h(x′), we 561 Iter Sentence Ans./Pred. E(α(x′)) x′ m 1 The [place] is small and crowded but the service is quick . Neg / Neg 2.38 small 2 The [place] is ⟨mask⟩and crowded but the service is quick . Neg / Neg 2.59 crowded 3 The [place] is ⟨mask⟩and ⟨mask⟩but the service is quick . Neg / Pos 2.66 quick 4 The [place] is ⟨mask⟩and ⟨mask⟩but the service is ⟨mask⟩. Neg / Neg 3.07 — Table 2: The example of mining influential context words from the first training sentence in Table 1. E(α(x′)) denotes the entropy of the attention weight distribution α(x′), ϵα is entropy threshold set as 3.0, and x′ m indicates the context word with the maximum attention weight. Note that all extracted words are replaced with “⟨mask⟩” and their background colors are labeled as white. leverage θ(k−1) to predict the sentiment of x′ as yp (Line 12), where the word-level attention weight distribution α(x′)={α(x′ 1), α(x′ 2), ..., α(x′ N)} subjecting to PN i=1 α(x′ i) = 1 is induced. In Step 3, we use the entropy E(α(x′)) to measure the variance of α(x′) (Line 13), which contributes to determine the existence of an influential context word for the sentiment prediction of x′, E(α(x′)) = − N X i=1 α(x′ i) log(α(x′ i)). (2) If E(α(x′)) is less than a threshold ϵα (Line 14), we believe that there exists at least one context word with great effect on the sentiment prediction of x′. 
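Steps 1–3 for a single instance can be sketched as follows. This is our own paraphrase of the pseudocode, not the released implementation; `model.predict` is a hypothetical hook returning a sentiment label and per-word attention weights for the masked sentence, and the bookkeeping of Lines 15–20 corresponds to the selection rule sketched in the introduction.

```python
import math

def entropy(alpha):
    """E(alpha) = -sum_i alpha_i * log(alpha_i)   (Equation 2)."""
    return -sum(a * math.log(a) for a in alpha if a > 0.0)

def probe_instance(model, sentence, aspect, sa, sm, eps_alpha=3.0):
    """Steps 1-3 (Lines 9-15) for one training instance (x, t, y).

    sa, sm: context words already extracted for x in earlier iterations.
    Returns the masked sentence x', the predicted label, and the index of
    the dominant context word (or None if attention is too flat)."""
    # Step 1: shield previously extracted words (Line 10)
    masked = [w if w not in sa | sm else "<mask>" for w in sentence]
    # Step 2: sentiment prediction and attention over x' (Lines 11-12)
    predicted, alpha = model.predict(masked, aspect)     # hypothetical API
    # Step 3: a low-entropy distribution signals one dominant word (Lines 13-15)
    if entropy(alpha) >= eps_alpha:
        return masked, predicted, None
    return masked, predicted, max(range(len(masked)), key=lambda i: alpha[i])

class ToyModel:                          # stand-in for a trained ASC model
    def predict(self, tokens, aspect):
        alpha = [2.0 if t == "small" else 0.5 for t in tokens]
        z = sum(alpha)
        return "Neg", [a / z for a in alpha]

x = ["the", "place", "is", "small", "and", "crowded"]
print(probe_instance(ToyModel(), x, "place", set(), set()))
```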
Hence, we extract the context word x′ m with the maximum attention weight (Line 15-20) that will be exploited as attention supervision information to refine the model training. Specifically, we adopt two strategies to deal with x′ m according to different prediction results on x′: if the prediction is correct, we wish to continue focusing on x′ m and add it into sa(x) (Lines 16-17); otherwise, we expect to decrease the attention weight of x′ m and thus include it into sm(x) (Lines 18-19). In Step 4, we combine x′, t and y as a triple, and merge it with the collected ones to form a new training corpus D(k) (Line 22). Then, we leverage D(k) to continue updating model parameters for the next iteration (Line 24). In doing so, we make our model adaptive to discover more influential context words. Through K iterations of the above steps, we manage to extract influential context words of all training instances. Table 2 illustrates the context word mining process of the first sentence shown in Table 1. In this example, we iteratively extract three context words in turn: “small”, “crowded” and “quick”. The former two words are included in sa(x), while the last one is contained in sm(x). Finally, the extracted context words of each training instance will be included into D, forming a final training corpus Ds with attention supervision information (Lines 26-29), which will be used to carry out the last model training (Line 30). The details will be provided in the next subsection. 3.3 Model Training with Attention Supervision Information To exploit the above extracted context words to refine the training of attention mechanisms for ASC models, we propose a soft attention regularizer △(α(sa(x)∪sm(x)), ˆα(sa(x)∪sm(x)); θ) to jointly minimize the standard training objective, where α(∗) and ˆα(∗) denotes the model-induced and expected attention weight distributions of words in sa(x)∪sm(x), respectively. More specifically, △(α(∗), ˆα(∗); θ) is an Euclidean Distance style loss that penalizes the disagreement between α(∗) and ˆα(∗). As previously analyzed, we expect to equally continue focusing on the context words of sa(x) during the final model training. To this end, we set their expected attention weights to the same value 1 |sa(x)|. By doing so, the weights of words extracted first will be reduced, and those of words extracted later will be increased, avoiding the over-fitting of high-frequency context words with sentiment polarities and the under-fitting of lowfrequency ones. On the other hand, for the words in sm(x) with misleading effects on the sentiment prediction of x, we want to reduce their effects and thus directly set their expected weights as 0. Back to the sentence shown in Table 2, both “small” and “crowded”∈sa(x) are assigned the same expected weight 0.5, and the expected weight of “quick”∈sm(x) is 0. Finally, our objective function on the training corpus Ds with attention supervision information 562 Domain Dataset #Pos #Neg #Neu LAPTOP Train 980 858 454 Test 340 128 171 REST Train 2159 800 632 Test 730 195 196 TWITTER Train 1567 1563 3127 Test 174 174 346 Table 3: Datasets in our experiments. #Pos, #Neg and #Neu denotes the number of instances with Positive, Negative and Neutral sentiment, respectively. 
becomes Js(Ds; θ) = − X (x,t,y)∈Ds {J(x, t, y; θ)+ (3) γ△(α(sa(x) ∪sm(x)), ˆα(sa(x) ∪sm(x)); θ)}, where J(x, t, y; θ) is the conventional training objective defined in Equation 1, and γ>0 is a hyperparameter that balances the preference between the conventional loss function and the regularization term. In addition to the utilization of attention supervision information, our method has a further advantage: it is easier to address the vanishing gradient problem by adding such information into the intermediate layers of the entire network (Szegedy et al., 2015), because the supervision of ˆα(∗) is closer to α(∗) than y. 4 Experiments Datasets. We applied the proposed approach into MN (Tang et al., 2016b; Wang et al., 2018) and TNet-ATT (Li et al., 2018) (see Section 2), and conducted experiments on three benchmark datasets: LAPTOP, REST (Pontiki et al., 2014) and TWITTER (Dong et al., 2014). In our datasets, the target aspect of each sentence has been provided. Besides, we removed a few instances with conflict sentiment labels as implemented in (Chen et al., 2017). The statistics of the final datasets are listed in Table 3. Contrast Models. We referred to our two enhanced ASC models as MN(+AS) and TNetATT(+AS), and compared them with MN, TNet, and TNet-ATT. Note our models require additional K+1-iteration training, therefore, we also compared them with the above models with additional K+1-iteration training, which are denoted as MN(+KT), TNet(+KT) and TNet-ATT(+KT). Moreover, to investigate effects of different kinds of attention supervision information, we 0.66 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 !" 0.59 0.6 0.61 0.62 0.63 0.64 0.65 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 F1-score LAPTOP REST TWITTER !" Figure 3: Effects of ϵα on the validation sets using MN(+AS). 0.66 0.68 0.7 0.72 0.74 0.76 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 F1-score LAPTOP REST TWITTER !" 0.59 0.6 0.61 0.62 0.63 0.64 0.65 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 F1-score LAPTOP REST TWITTER !" Figure 4: Effects of ϵα on the validation sets using TNet-ATT(+AS). also listed the performance of MN(+ASa) and MN(+ASm), which only leverage context words of sa(x) and sm(x), respectively, and the same for TNet-ATT(+ASa) and TNet-ATT(+ASm). Training Details. We used pre-trained GloVe vectors (Pennington et al., 2014) to initialize the word embeddings with vector dimension 300. For out-of-vocabulary words, we randomly sampled their embeddings from the uniform distribution [0.25, 0.25], as implemented in (Kim, 2014). Besides, we initialized the other model parameters uniformly between [-0.01, 0.01]. To alleviate overfitting, we employed dropout strategy (Hinton et al., 2012) on the input word embeddings of the LSTM and the ultimate aspect-related sentence representation. Adam (Kingma and Ba, 2015) was adopted as the optimizer with the learning rate 0.001. When implementing our approach, we empirically set the maximum iteration number K as 5, γ in Equation 3 as 0.1 on LAPTOP data set, 0.5 on REST data set and 0.1 on TWITTER data set, respectively. All hyper-parameters were tuned on 20% randomly held-out training data. 
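Equation 3 above adds, for each instance, a distance penalty between the model-induced and expected attention weights of the mined words, scaled by γ (e.g. 0.1 or 0.5 as set above). The sketch below of that per-instance loss is our own code, not the authors'; the "Euclidean Distance style" disagreement is read here as a sum of squared differences, and all numbers are made up.

```python
import math

def instance_loss(pred_dist, gold_index, alpha, sa, sm, gamma=0.1):
    """J(x,t,y;theta) + gamma * Delta(alpha, hat-alpha) for one instance.

    pred_dist  : predicted sentiment distribution d(x, t; theta)
    gold_index : position of the gold label y in that distribution
    alpha      : model-induced attention weight per context word
    sa, sm     : mined active / misleading context words for this instance
    """
    nll = -math.log(pred_dist[gold_index])            # Equation 1
    # expected weights: 1/|sa(x)| for active words, 0 for misleading ones
    target = {w: 1.0 / len(sa) for w in sa}
    target.update({w: 0.0 for w in sm})
    # Euclidean-distance-style disagreement, read as squared differences
    delta = sum((alpha.get(w, 0.0) - t) ** 2 for w, t in target.items())
    return nll + gamma * delta

# the Table 2 example: sa = {small, crowded}, sm = {quick}, made-up weights
alpha = {"small": 0.45, "crowded": 0.35, "quick": 0.10, "service": 0.10}
print(instance_loss([0.1, 0.8, 0.1], 1, alpha,
                    {"small", "crowded"}, {"quick"}, gamma=0.5))
```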
Finally, we used F1-Macro and accuracy as our evaluation 563 Model LAPTOP REST TWITTER Macro-F1 Accuracy Macro-F1 Accuracy Macro-F1 Accuracy MN (Wang et al., 2018) 62.89 68.90 64.34 75.30 — — MN 63.28 68.97 65.88 77.32 66.17 67.71 MN(+KT) 63.31 68.95 65.86 77.33 66.18 67.78 MN(+ASm) 64.37 69.69 68.40 78.13 67.20 68.90 MN(+ASa) 64.61 69.95 68.59 78.23 67.47 69.17 MN(+AS) 65.24∗∗ 70.53∗∗ 69.15∗∗ 78.75∗ 67.88∗∗ 69.64∗∗ TNet (Li et al., 2018) 71.75 76.54 71.27 80.69 73.60 74.97 TNet 71.82 76.12 71.70 80.35 76.82 77.60 TNet(+KT) 71.74 76.44 71.36 80.59 76.78 77.54 TNet-ATT 71.21 76.06 71.15 80.32 76.53 77.46 TNet-ATT(+KT) 71.44 76.06 71.01 80.50 76.58 77.46 TNet-ATT(+ASm) 72.39 76.89 72.04 80.96 77.42 78.08 TNet-ATT(+ASa) 73.30 77.34 72.67 81.33 77.63 78.47 TNet-ATT(+AS) 73.84∗∗ 77.62∗∗ 72.90∗∗ 81.53∗ 77.72∗∗ 78.61∗ Table 4: Experimental results on various datasets. We directly cited the best experimental results of MN and TNet reported in (Wang et al., 2018; Li et al., 2018). ∗∗and ∗means significant at p <0.01 and p <0.05 over the baselines (MN, TNet) on each test set, respectively. Here we conducted 1,000 bootstrap tests (Koehn, 2004) to measure the significance in metric score differences. measures. 4.1 Effects of ϵα ϵα is a very important hyper-parameter that controls the iteration number of mining attention supervision information (see Line 14 of Algorithm 1). Thus, in this group of experiments, we varied ϵα from 1.0 to 7.0 with an increment of 1 each time, so as to investigate its effects on the performance of our models on the validation sets. Figure 3 and 4 show the experimental results of different models. Specifically, MN(+AS) with ϵα=3.0 achieves the best performance, meanwhile, the optimal performance of TNet-ATT(+AS) is obtained when ϵα=4.0. We observe the increase of ϵα does not lead to further improvements, which may be due to more noisy extracted context words. Because of these results, we set ϵα for MN(+AS) and TNet-ATT(+AS) as 3.0 and 4.0 in the following experiments, respectively. 4.2 Overall Results Table 4 provides all the experimental results. To enhance the persuasiveness of our experimental results, we also displayed the previously reported scores of MN (Wang et al., 2018) and TNet (Li et al., 2018) on the same data set. According to the experimental results, we can come to the following conclusions: First, both of our reimplemented MN and TNet are comparable to their original models reported in (Wang et al., 2018; Li et al., 2018). These results show that our reimplemented baselines are competitive. When we replace the CNN of TNet with an attention mechanism, TNet-ATT is slightly inferior to TNet. Moreover, when we perform additional K+1-iteration of training on these models, their performance has not changed significantly, suggesting simply increasing training time is unable to enhance the performance of the neural ASC models. Second, when we apply the proposed approach into both MN and TNet-ATT, the context words in sa(x) are more effective than those in sm(x). This is because the proportion of correctly predicted training instances is larger than that of incorrectly ones. Besides, the performance gap between MN(+ASa) and MN(+ASm) is larger than that between two variants of TNet-ATT. One underlying reason is that the performance of TNetATT is better than MN, which enables TNet-ATT to produce more correctly predicted training instances. This in turn brings more attention supervision to TNet-ATT than MN. 
Finally, when we use both kinds of attention supervision information, no matter for which metric, MN(+AS) remarkably outperforms MN on all test sets. Although our TNet-ATT is slightly in564 Model Sentence Ans./Pred. TNet-ATT The [folding chair] i was seated at was uncomfortable . Neg / Neu TNet-ATT(+AS) The [folding chair] i was seated at was uncomfortable . Neg / Neg TNet-ATT The [food] did take a few extra minutes ... the cute waiters ... Neu / Pos TNet-ATT(+AS) The [food] did take a few extra minutes ... the cute waiters ... Neu / Neu Table 5: Two test cases predicted by TNet-ATT and TNet-ATT(+AS). ferior to TNet, TNet-ATT(+AS) still significantly surpasses both TNet and TNet-ATT. These results strongly demonstrate the effectiveness and generality of our approach. 4.3 Case Study In order to know how our method improves neural ASC models, we deeply analyze attention results of TNet-ATT and TNet-ATT(+AS). It has been found that our proposed approach can solve the above-mentioned two issues well. Table 5 provides two test cases. TNet-ATT incorrectly predicts the sentiment of the first test sentence as neutral. This is because the context word “uncomfortable” only appears in two training instances with negative polarities, which distracts attention from it. When using our approach, the average attention weight of “uncomfortable” is increased to 2.6 times than that of baseline in these two instances. Thus, TNet-ATT(+AS) is capable of assigning a greater attention weight (0.0056→0.2940) to this context word, leading to the correct prediction of the first test sentence. For the second test sentence, since the context word “cute” occurs in training instances mostly with positive polarity, TNet-ATT directly focuses on this word and then incorrectly predicts the sentence sentiment as positive. Adopting our method, attention weights of “cute” in training instances with neural or negative polarity are significantly decreased. Specifically, in these instances, the average weight of “cute” is reduced to 0.07 times of the original. Hence, TNet-ATT(+AS) assigns a smaller weight (0.1090→0.0062) to “cute” and achieves the correct sentiment prediction. 5 Related Work Recently, neural models have been shown to be successful on ASC. For example, due to its multiple advantages, such as being simpler and faster, MNs with attention mechanisms (Tang et al., 2016b; Wang et al., 2018) have been widely used. Another prevailing neural model is LSTM that also involves an attention mechanism to explicitly capture the importance of each context word (Wang et al., 2016). Overall, attention mechanisms play crucial roles in all these models. Following this trend, researchers have resorted to more sophisticated attention mechanisms to refine neural ASC models. Chen et al., (2017) proposed a multiple-attention mechanism to capture sentiment features separated by a long distance, so that it is more robust against irrelevant information. An interactive attention network has been designed by Ma et al., (2017) for ASC, where two attention networks were introduced to model the target and context interactively. Liu et al., (2017) proposed to leverage multiple attentions for ASC: one obtained from the left context and the other one acquired from the right context of a given aspect. Very recently, transformation-based model has also been explored for ASC (Li et al., 2018), and the attention mechanism is replaced by CNN. 
Different from these work, our work is in line with the studies of introducing attention supervision to refine the attention mechanism, which have become hot research topics in several NNbased NLP tasks, such as event detection (Liu et al., 2017), machine translation (Liu et al., 2016), and police killing detection (Nguyen and Nguyen, 2018). However, such supervised attention acquisition is labor-intense. Therefore, we mainly commits to automatic mining supervision information for attention mechanisms of neural ASC models. Theoretically, our approach is orthogonal to these models, and we leave the adaptation of our approach into these models as future work. Our work is inspired by two recent models: one is (Wei et al., 2017) proposed to progressively mine discriminative object regions using classification networks to address the weakly-supervised semantic segmentation problems, and the other one is (Xu et al., 2018) where a dropout method integrating with global information is presented to 565 encourage the model to mine inapparent features or patterns for text classification. To the best of our knowledge, our work is the first one to explore automatic mining of attention supervision information for ASC. 6 Conclusion and Future Work In this paper, we have explored how to automatically mine supervision information for attention mechanisms of neural ASC models. Through indepth analyses, we first point out the defect of the attention mechanism for ASC: a few frequent words with sentiment polarities are tend to be over-learned, while those with low frequency often lack sufficient learning. Then, we propose a novel approach to automatically and incrementally mine attention supervision information for neural ASC models. These mined information can be further used to refine the model training via a regularization term. To verify the effectiveness of our approach, we apply our approach into two dominant neural ASC models, where experimental results demonstrate our method significantly improves the performance of these two models. Our method is general for attention mechanisms. Thus, we plan to extend our approach to other neural NLP tasks with attention mechanisms, such as neural document classification (Yang et al., 2016) and neural machine translation (Zhang et al., 2018). Acknowledgments The authors were supported by National Natural Science Foundation of China (Nos. 61433015, 61672440), NSF Award (No. 1704337), Beijing Advanced Innovation Center for Language Resources, the Fundamental Research Funds for the Central Universities (Grant No. ZK1024), Scientific Research Project of National Language Committee of China (Grant No. YB135-49), and Project 2019X0653 supported by XMU Training Program of Innovation and Enterpreneurship for Undergraduates. We also thank the reviewers for their insightful comments. References Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In EMNLP. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. Computer Science. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. 
Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif Mohammad. 2014. Nrc-canada-2014: Detecting aspects and sentiment in customer reviews. In SemEval. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In ACL. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dolla r. 2017. Focal loss for dense object detection. In ICCV. Lemao Liu, Masao Utiyama, Andrew M. Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In COLING. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In ACL. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In IJCAI. Minh Nguyen and Thien Nguyen. 2018. Who is killed by police: Introducing supervised attention for hierarchical lstms. In COLING. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In SemEval. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In CVPR. 566 Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sentiment classification. In COLING. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. In EMNLP. Joachim Wagner, Piyush Arora, Santiago Cortes, Utsab Barman, Dasha Bogdanova, Jennifer Foster, and Lamia Tounsi. 2014. DCU: aspect-based polarity classification for semeval task 4. In SemEval. Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018. Target-sensitive memory networks for aspect sentiment classification. In ACL. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. In EMNLP. Yunchao Wei, Jiashi Feng, Xiaodan Liang, Ming-Ming Cheng, Yao Zhao, and Shuicheng Yan. 2017. Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In CVPR. Hengru Xu, Shen Li, Renfen Hu, Si Li, and Sheng Gao. 2018. From random to supervised: A novel dropout mechanism integrated with global information. In CONLL. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL. Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Neural machine translation with deep attention. IEEE Transactions on Pattern Analysis and Machine Intelligence. Yue Zhang and Jiangming Liu. 2017. Attention modeling for targeted sentiment. In EACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5344–5349 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5344 PTB Graph Parsing with Tree Approximation Yoshihide Kato and Shigeki Matsubara Information & Communications, Nagoya University Furo-cho, Chikusa-ku, Nagoya, 464-8601 Japan [email protected] Abstract The Penn Treebank (PTB) represents syntactic structures as graphs due to nonlocal dependencies. This paper proposes a method that approximates PTB graph-structured representations by trees. By our approximation method, we can reduce nonlocal dependency identification and constituency parsing into single treebased parsing. An experimental result demonstrates that our approximation method with an off-the-shelf tree-based constituency parser significantly outperforms the previous methods in nonlocal dependency identification. 1 Introduction In the Penn Treebank (PTB) (Marcus et al., 1993), syntactic structures are represented as graphs due to nonlocal dependencies, which capture syntactic discontinuities. This paper proposes a method that approximates PTB graph-structured representations by trees. By our approximation method, we can reduce nonlocal dependency identification and constituency parsing into single tree-based parsing. The information loss of our approximation method is slight, and we can easily recover original PTB graphs from the output of a parser trained using the approximated ones. An experimental result demonstrates that our approximation method with an off-the-shelf tree-based parser significantly outperforms the previous nonlocal dependency identification methods. 2 Nonlocal Dependency Identification This section explains nonlocal dependencies in the PTB, and summarizes previous work on nonlocal dependency identification. 2.1 Nonlocal dependency in PTB In the PTB, a nonlocal dependency is represented as an edge. One node is called an empty element, which is a covert element in the syntactic representation. The other is called a filler. PTB’s syntactic representations are graph-structured, while its constituency structures are represented by trees. Below, a syntactic representation in the PTB is called a PTB graph. The left graph in Figure 1 is an example of PTB graph. The empty elements are labelled with -NONE-. The terminal symbols such as 0 and ∗T∗designate their types of the empty elements. 0 and ∗T∗represent a zero relative pronoun and a trace of wh-movement, respectively. If a terminal symbol of an empty element is indexed with a number, its corresponding filler exists in the PTB graph, and is indexed with the same number. For example, the empty element of type ∗T∗is indexed with 1 and it has the corresponding filler WHNP-1. For more details about PTB nonlocal dependencies, we refer readers to (Bies et al., 1995). 2.2 Previous Work Most PTB-based parsers deal with the trees obtained by removing nonlocal dependencies and empty elements (we call such trees PTB trees). While such parsers are simple, efficient and accurate, they cannot handle nonlocal dependencies. To fill this gap, several methods have been proposed so far. 
They can be classified into the following two categories: the methods that introduce special operations handling nonlocal dependencies or empty elements into parsing algorithm (Dienes and Dubey, 2003; Schmid, 2006; Cai et al., 2011; Evang and Kallmeyer, 2011; Maier, 2015; Kato and Matsubara, 2016; Hayashi and Nagata, 2016; Kummerfeld and Klein, 2017), and the ones that recover PTB graphs from PTB trees generated by a parser (Johnson, 2002; Campbell, 2004; Levy and Manning, 2004). The former approach is required to design a parsing model that is suitable for the algorithm. In the latter post-processing approach, the pre-processing parser cannot reflect 5345                      !"# $%% "& ! $%%' (& !       )            )   !"# $%'% "& ! $%'%' (& !                !"# )*+)+ ,,*++),, .  "!/ Figure 1: PTB graph and PTB augmented tree. the information about nonlocal dependencies. 3 Tree Approximation of PTB Graphs This section proposes a new approach of nonlocal dependency identification. We reduce nonlocal dependency identification and constituency parsing into single tree-based parsing. In our approach, a PTB graph is converted to a tree which approximately represents the PTB graph. The conversion consists of the following two steps: Removing nonlocal dependency removes the edges between the empty elements and their fillers, and augments the labels of them. Augmented labels are used in order to recover the removed edges. Removing empty element removes the empty elements and inserts new inner nodes that encode the empty elements. We call the trees obtained by this conversion PTB augmented trees. Figure 1 shows an example of the conversion. Below, we explain each step in detail. 3.1 Removing nonlocal dependency By removing the nonlocal dependency edges, a PTB graph becomes a tree. In order to approximately represent the edges in the resulting tree, we augment node labels in the annotation scheme identical to that proposed by Kato and Matsubara (2016). In this scheme, the labels of empty elements and their fillers are augmented with special tags. We first describe the annotation scheme, and then how to recover removed edges using augmented labels. 3.1.1 Annotation approximately representing nonlocal dependency Algorithm 1 is the annotation algorithm of Kato and Matsubara (2016). Here, posi(x, y) is the relative position of x for y and defined as follows: posi(x, y) =      A (x is an ancestor of y) L (x occurs to the left of y) R (x occurs to the right of y) The tag OBJCTRL enables us to distinguish between subject and object control. Algorithm 1 Removing nonlocal dependency type(e) is the type of an empty element e. cat(x) is the category of x. par(x) is the parent of x. SBJ(x) means the label of x has the tag SBJ Input: an empty element e and e’s co-indexed filler f remove the edge (e, f) assign posi(f, e) to e if type(e) = ∗∧¬SBJ(f) then assign OBJCTRL to e end if if type(e) ∈{∗EXP∗, ∗ICH∗, ∗RNR∗, ∗T∗} then assign type(e), cat(par(e)) and posi(f, e) to f end if For example, the left PTB graph in Figure 1 is converted to the middle tree. The boxes designate the augmented empty element and filler. 3.1.2 nonlocal dependency recovery This section proposes a method of recovering nonlocal dependencies using the annotation described in the previous section. This method is based on heuristic rules, which are similar to, but simpler than those of Kato and Matsubara (2016).1 1 Kato and Matsubara (2016) defined their recovery rules for intermediate results in parsing. 
This makes their rules somewhat complex. 5346 node pattern constraint imposed on the corresponding node x e = (-NONE-L ∗) posi(x, e) = L ∧c-cmd(x, e) ∧SBJ(x) e = (-NONE-R ∗) posi(x, e) = R ∧c-cmd(x, e) ∧SBJ(x) e = (-NONE-L-OBJCTRL ∗) posi(x, e) = L ∧c-cmd(x, e) ∧cat(x) ∈{NP, PP} ∧cat par(x)  = VP e = (-NONE-L ∗T∗) posi(x, e) = L ∧c-cmd(x, e) ∧match(x, e) e = (-NONE-A ∗T∗) ∃y posi(x, y) = A ∧posi(y, e) = A ∧cat(y) = PRN  ∧cat par(e)  = cat(x) e = (-NONE-R ∗RNR∗) posi(x, e) = R ∧c-cmd(x, e) ∧match(x, e) e = (-NONE-L ∗ICH∗) posi(x, e) = L ∧match(x, e) e = (-NONE-R ∗ICH∗) posi(x, e) = R ∧match(x, e) f = (X-∗EXP∗-R · · · ) posi(f, x) = R ∧c-cmd(x, f) ∧x = (NP (PRP it)) match(f, e) means the type, the category and the position tag of a filler f are identical to those of an empty element e. Table 1: The rules for nonlocal dependency recovery A rule consists of a node pattern and a constraint. When there is a node that matches the pattern, we select the nearest node satisfying the constraint as its co-indexed node. Table 1 summarizes the rules.2 Here, c-cmd(x, y) is the syntactic relation called c-command3 and holds iff the following condition (1) is satisfied: ∃z. (z is a sibling of x) ∧posi(z, y) = A  (1) For example, the nonlocal dependency in Figure 1 can be recovered by the fourth rule in Table 1. 3.2 Removing empty elements While the first step in the conversion can remove nonlocal dependency edges, the empty elements still remain. The second step removes empty elements and encodes them as inner nodes. By this conversion, parsing algorithm require no special operations handling empty elements. 3.2.1 Encoding empty elements Algorithm 2 removes and encodes empty elements. For example, the middle tree in Figure 1 is converted to the right one. The dotted boxes designate the inner nodes encoding the empty elements. Here, note that [(NP (-NONE-L ∗T∗))] is no more than a part of the label in the PTB augmented tree. Kummerfeld and Klein (2017) represent empty elements in a similar way, but important difference exists. Our method keeps empty element positions (L and R) and no nonlocal dependencies, while they do not keep empty element positions and reserves nonlocal dependencies. Furthermore, while they require a specially-designed head rule 2 In the third rule, if cat(x) = PP, e is co-indexed with not x but x’s child NP. 3 Kato and Matsubara (2016) follow Chomsky’s GBtheory (Chomsky, 1981) to use this relation, because it holds between co-indexed nodes in most cases. We also use this relation. Algorithm 2 Encoding Empty element null(x) means all the leaves of x are empty elements. node(l, C) creates a node with a label l and children C. encode(x) converts the subtree rooted at x to a string. label(x) is the label of x. Input: a node x ⟨c1, . . . , cn⟩←children(x) i ←the leftmost position such that ¬null(ci) C ←⟨ci⟩ for j from i + 1 to n do if ¬null(cj) then C ←C · ⟨cj⟩ else C ←⟨node(cat(x) + ”R” + encode(cj), C)⟩ end if end for for j from i −1 down to 1 do C ←⟨node(cat(x) + ”L” + encode(cj), C)⟩ end for return node(label(x), C) to avoid constructing cyclic graphs in parsing, our method does not need head rules in the first place. 3.2.2 Recovering empty elements Algorithm 2 is lossless and Algorithm 3 can recover the empty elements from the inner nodes inserted in Algorithm 2. 4 Experiment To evaluate the performance of our proposed method, we conducted an experiment using the PTB. We used the Kitaev and Klein (henceforth K&K) parser (Kitaev and Klein, 2018a)4. 
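Algorithm 2 above can be read as a recursive procedure: walk the children of a node left to right, keep the non-empty ones, and whenever a child's yield is entirely empty, fold that child into the label of a freshly inserted inner node wrapping everything collected so far (tagged R, or L for empties to the left of the first overt child). The following is our own rendering of that pseudocode, not the released ptb2cf code: it uses a bare Node class, takes the full node label as its category, and does not reproduce the exact label string format of Figure 1.

```python
class Node:
    """Minimal constituency tree node; leaves carry a word."""
    def __init__(self, label, children=None, word=None):
        self.label, self.children, self.word = label, children or [], word

def is_null(x):
    """True if every leaf under x is an empty element (-NONE-...)."""
    if x.word is not None:
        return x.label.startswith("-NONE-")
    return all(is_null(c) for c in x.children)

def encode_subtree(x):
    """Flatten an empty-element subtree into a bracketed string."""
    if x.word is not None:
        return "(%s %s)" % (x.label, x.word)
    return "(%s %s)" % (x.label, " ".join(encode_subtree(c) for c in x.children))

def remove_empty(x):
    """Algorithm 2, roughly: drop all-empty children and record each of them
    inside the label of a freshly inserted inner node (tagged L or R)."""
    if x.word is not None:
        return x
    kids = x.children
    i = next(j for j, c in enumerate(kids) if not is_null(c))
    C = [remove_empty(kids[i])]
    for c in kids[i + 1:]:
        if not is_null(c):
            C.append(remove_empty(c))
        else:                       # empty subtree to the right of C
            C = [Node(x.label + "R" + encode_subtree(c), C)]
    for c in reversed(kids[:i]):    # empty subtrees to the left
        C = [Node(x.label + "L" + encode_subtree(c), C)]
    return Node(x.label, C)

# toy tree (VP (V saw) (NP (-NONE- *T*))): the trace NP is folded into
# the label of a new inner node carrying the side tag R.
vp = Node("VP", [Node("V", word="saw"),
                 Node("NP", [Node("-NONE-", word="*T*")])])
out = remove_empty(vp)
print(out.label, [c.label for c in out.children])
# VP ['VPR(NP (-NONE- *T*))']
```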
The K&K parser is a state-of-the-art tree-based parser, which can use ELMo (Peters et al., 2018) or BERT5 (Devlin et al., 2018) as external data. PTB graphs in the training (sections 02–21) and development (section 22) data were converted into PTB augmented trees by our tree approximation 4https://github.com/nikitakit/ self-attentive-parser 5The experiment using BERT is reported in (Kitaev and Klein, 2018b). 5347 Empty element Nonlocal dependency Nonlocal dependency detection identification identification (Fillers are ignored.) (Unindexed empty elements are excluded.) pre. rec. F1 pre. rec. F1 pre. rec. F1 (Johnson, 2002) 85 74 79 73 63 68 – – – (Dienes and Dubey, 2003) – – – 81.5 68.7 74.6 – – – (Campbell, 2004) 85.2 81.7 83.4 78.3 75.1 76.7 – – – (Schmid, 2006) 86.0 82.3 84.1 – – – 81.7 73.5 77.4 (Cai et al., 2011) 90.1 79.5 84.5 – – – – – – (Hayashi and Nagata, 2016) 90.3 81.7 85.8 – – – – – – (Kato and Matsubara, 2016) 88.5 82.1 85.2 81.4 75.5 78.4 79.8 73.8 76.7 (Kummerfeld and Klein, 2017) 89.5 81.6 85.4 74.3 67.3 70.6 – – – post-processing (using gold PTB trees) (Johnson, 2002) 93 83 88 80 70 75 – – – (Campbell, 2004) 94.9 91.1 93.0 90.1 86.6 88.4 – – – ours 92.6 87.7 90.1 88.1 83.4 85.7 88.4 81.1 84.6 ours (with ELMo) 94.2 90.3 92.3 89.9 86.2 88.0 90.4 84.1 87.2 ours (with BERT) 94.9 91.4 93.1 90.8 87.4 89.0 91.6 84.9 88.1 Table 2: Comparison for nonlocal dependency identification on the test data. Algorithm 3 Recovering empty element decode(x) creates a tree by decoding a string assigned by encode and returns its root. Input: a node x C ←children(x) C′ ←⟨⟩ while C ̸= ⟨⟩do pop the first element c from C if c is an inserted node and c has the tag L then C ←⟨decode(c)⟩· children(c) · C else if c is an inserted node and has the tag R then C ←children(c) · ⟨decode(c)⟩· C else C′ ←C′ · ⟨c⟩ end if end while return node(label(x), C′) method6, and a parsing model was trained using the PTB augmented trees. The hyperparameters for training were identical to those of Kitaev and Klein (2018a). We selected the model that maximizes the F1 score on the development data, where we treated the node labels of PTB augmented trees as constituent labels. For the test data (section 23), PTB graphs were recovered from the PTB augmented trees generated by the parser. The accuracy of the nonlocal dependency identification was evaluated by the metric proposed by Johnson (2002). First, we evaluated the performance of our approximation method. We recovered PTB graphs from not the parser output but the gold PTB augmented trees in the development data. We ob6The conversion code is available at https:// github.com/yosihide/ptb2cf. tained 99.5 F1 score in nonlocal dependency identification where unindexed empty elements were excluded. This result means that the information loss is slight in our approximation method. Table 2 summarizes the performances of our system and previous ones. These results demonstrate that our system significantly outperforms the previous methods in nonlocal dependency identification. Although the main reason for this is because of the performance of the K&K parser, the important point is that our proposed approximation method enables us to use the K&K parser for the nonlocal dependency identification task. The previous methods that introduce additional operations cannot adopt such parser directly. 
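Algorithm 3 is the inverse walk: whenever an inserted node is met, its label is split back into the side tag and the encoded empty subtree, and its children are unfolded in the order dictated by that tag. Below is our own sketch, paired with the encoding sketch above (same toy Node class and label format; real PTB labels would need more careful string handling).

```python
class Node:   # same minimal tree node as in the encoding sketch above
    def __init__(self, label, children=None, word=None):
        self.label, self.children, self.word = label, children or [], word

def parse_bracketed(s):
    """Tiny s-expression reader: '(NP (-NONE- *T*))' -> Node."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        label, i, children = tokens[i + 1], i + 2, []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
                children.append(child)
            else:
                children.append(tokens[i])   # a word leaf
                i += 1
        if children and isinstance(children[0], str):
            return Node(label, word=children[0]), i + 1
        return Node(label, children), i + 1
    return read(0)[0]

def restore_empty(x):
    """Algorithm 3, roughly: unfold inserted nodes back into empty subtrees."""
    if x.word is not None:
        return x
    out, queue = [], list(x.children)
    while queue:
        c = queue.pop(0)
        if "(" in c.label:                   # a node inserted by the encoding
            cut = c.label.index("(")
            tag = c.label[cut - 1]           # 'L' or 'R'
            empty = parse_bracketed(c.label[cut:])
            if tag == "L":
                queue = [empty] + c.children + queue
            else:
                queue = c.children + [empty] + queue
        else:
            out.append(restore_empty(c))
    return Node(x.label, out)

# decoding the toy output of the encoding sketch above
enc = Node("VP", [Node("VPR(NP (-NONE- *T*))", [Node("V", word="saw")])])
dec = restore_empty(enc)
print(dec.label, [(c.label, c.word) for c in dec.children])
# VP [('V', 'saw'), ('NP', None)]
```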
On the other hand, although post-processing approach can use any parser in pre-processing, our approach outperforms the post-processing approach, even if the pre-processing parser is assumed to always generate gold PTB trees. We converted PTB graphs into PTB trees to evaluate constituency parsing performance. Table 3 shows the F1 scores of our and the K&K parser. These results demonstrate that our tree approximation has little negative impact on the constituency parsing performance. 5 Conclusion This paper proposes a conversion of PTB graphs into PTB augmented trees, which enables us to reduce nonlocal dependency identification and constituency parsing into single parsing. Our proposed conversion method can be easily combined 5348 pre. rec. F1 K&K 93.90 93.20 93.55 K&K (ELMo) 95.40 94.85 95.13 K&K (BERT) 96.03 95.51 95.77 Ours 93.84 92.78 93.31 Ours (ELMo) 95.27 94.70 94.99 Ours (BERT) 96.04 95.36 95.70 Table 3: Comparison for constituency parsing performance on the test data. with other tree-based parsers. We can expect that the evolution of tree-based parsing technology makes our approach improve the accuracy of nonlocal dependency identification. Acknowledgements This research was partially supported by the Grant-in-Aid for Scientific Research (C) (17K00303) of JSPS. References Ann Bies, Mark Ferguson, Karen Katz, and Robert MacIntyre. 1995. Bracketing guidelines for Treebank II style Penn Treebank project. University of Pennsylvania. Shu Cai, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty elements. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 212– 216, Portland, Oregon, USA. Richard Campbell. 2004. Using linguistic principles to recover empty categories. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 645–652, Barcelona, Spain. Noam Chomsky. 1981. Lectures on government and binding: The Pisa lectures. Walter de Gruyter. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. P´eter Dienes and Amit Dubey. 2003. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 33–40. Kilian Evang and Laura Kallmeyer. 2011. PLCFRS parsing of English discontinuous constituents. In Proceedings of the 12th International Conference on Parsing Technologies, pages 104–116, Dublin, Ireland. Katsuhiko Hayashi and Masaaki Nagata. 2016. Empty element recovery by spinal parser operations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 95–100, Berlin, Germany. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 136–143, Philadelphia, Pennsylvania, USA. Yoshihide Kato and Shigeki Matsubara. 2016. Transition-based left-corner parsing for identifying PTB-style nonlocal dependencies. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 930–940, Berlin, Germany. Nikita Kitaev and Dan Klein. 2018a. Constituency parsing with a self-attentive encoder. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686. Nikita Kitaev and Dan Klein. 2018b. Multilingual constituency parsing with self-attention and pretraining. CoRR, abs/1812.11760. Jonathan K. Kummerfeld and Dan Klein. 2017. Parsing with traces: An O(n4) algorithm and a structural representation. Transactions of the Association for Computational Linguistics, 5:441–454. Roger Levy and Christopher Manning. 2004. Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 327–334, Barcelona, Spain. Wolfgang Maier. 2015. Discontinuous incremental shift-reduce parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1202–1212, Beijing, China. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):310–330. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Helmut Schmid. 2006. Trace prediction and recovery with unlexicalized PCFGs and slash features. In Proceedings of the 21st International Conference on 5349 Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 177–184, Sydney, Australia.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5350–5357 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5350 Sequence Labeling Parsing by Learning Across Representations Michalina Strzyz David Vilares Carlos G´omez-Rodr´ıguez Universidade da Coru˜na, CITIC FASTPARSE Lab, LyS Research Group, Departamento de Computaci´on Campus de Elvi˜na, s/n, 15071 A Coru˜na, Spain {michalina.strzyz,david.vilares,carlos.gomez}@udc.es Abstract We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm. Secondly, we explore an MTL sequence labeling model that parses both representations, at almost no cost in terms of performance and speed. The results across the board show that on average MTL models with auxiliary losses for constituency parsing outperform singletask ones by 1.05 F1 points, and for dependency parsing by 0.62 UAS points. 1 Introduction Constituency (Chomsky, 1956) and dependency grammars (Mel’cuk, 1988; K¨ubler et al., 2009) are the two main abstractions for representing the syntactic structure of a given sentence, and each of them has its own particularities (Kahane and Mazziotta, 2015). While in constituency parsing the structure of sentences is abstracted as a phrasestructure tree (see Figure 1a), in dependency parsing the tree encodes binary syntactic relations between pairs of words (see Figure 1b). When it comes to developing natural language processing (NLP) parsers, these two tasks are usually considered as disjoint tasks, and their improvements therefore have been obtained separately (Charniak, 2000; Nivre, 2003; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Ma et al., 2018; Kitaev and Klein, 2018). Despite the potential benefits of learning across representations, there have been few attempts in the literature to do this. Klein and Manning (2003) considered a factored model that provides separate methods for phrase-structure and lexical dependency trees and combined them to obtain optimal parses. With a similar aim, Ren et al. (2013) first compute the n best constituency trees using a probabilistic context-free grammar, convert those into dependency trees using a dependency model, compute a probability score for each of them, and finally rerank the most plausible trees based on both scores. However, these methods are complex and intended for statistical parsers. Instead, we propose a extremely simple framework to learn across constituency and dependency representations. Contribution (i) We use sequence labeling for constituency (G´omez-Rodr´ıguez and Vilares, 2018) and dependency parsing (Strzyz et al., 2019) combined with multi-task learning (MTL) (Caruana, 1997) to learn across syntactic representations. To do so, we take a parsing paradigm (constituency or dependency parsing) as an auxiliary task to help train a model for the other parsing representation, a simple technique that translates into consistent improvements across the board. (ii) We also show that a single MTL model following this strategy can robustly produce both constituency and dependency trees, obtaining a performance and speed comparable with previous sequence labeling models for (either) constituency or dependency parsing. 
The source code is available at https://github.com/ mstrise/seq2label-crossrep 2 Parsing as Sequence Labeling Notation We use w = [wi, ..., w|w|] to denote an input sentence. We use bold style lower-cased and math style upper-cased characters to refer to vectors and matrices (e.g. x and W). Sequence labeling is a structured prediction task where each token in the input sentence is mapped to a label (Rei and Søgaard, 2018). Many NLP tasks suit this setup, including part-of-speech tag5351 ging, named-entity recognition or chunking (Sang and Buchholz, 2000; Toutanova and Manning, 2000; Tjong Kim Sang and De Meulder, 2003). More recently, syntactic tasks such as constituency parsing and dependency parsing have been successfully reduced to sequence labeling (Spoustov´a and Spousta, 2010; Li et al., 2018; G´omezRodr´ıguez and Vilares, 2018; Strzyz et al., 2019). Such models compute a tree representation of an input sentence using |w| tagging actions. We will also cast parsing as sequence labeling, to then learn across representations using multitask learning. Two are the main advantages of this approach: (i) it does not require an explicit parsing algorithm nor explicit parsing structures, and (ii) it massively simplifies joint syntactic modeling. We now describe parsing as sequence labeling and the architecture used in this work. Constituency parsing as tagging G´omezRodr´ıguez and Vilares (2018) define a linearization method Φ|w| : Tc,|w| →L|w| c to transform a phrase-structure tree into a discrete sequence of labels of the same length as the input sentence. Each label li ∈Lc is a three tuple (ni, ci, ui) where: ni is an integer that encodes the number of ancestors in the tree shared between a word wi and its next one wi+1 (computed as relative variation with respect to ni−1), ci is the non-terminal symbol shared at the lowest level in common between said pair of words, and ui (optional) is a leaf unary chain that connects ci to wi. Figure 1a illustrates the encoding with an example.1 Dependency parsing as tagging Strzyz et al. (2019) also propose a linearization method Π|w| : Td,|w| →L|w| d to transform a dependency tree into a discrete sequence of labels. Each label ri ∈Ld is also represented as a three tuple (oi, pi, di). If oi > 0, wi’s head is the oith closest word with PoS tag pi to the right of wi. If oi < 0, the head is the −oith closest word to the left of wi that has as a PoS tag pi. The element di represents the syntactic relation between the head and the dependent terms. Figure 1b depictures it with an example. Tagging with LSTMs We use bidirectional LSTMs (BILSTMs) to train our models (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997). Briefly, let LSTM→(x) be an abstrac1In this work we do not use the dual encoding by Vilares et al. (2019), which combines the relative encoding with a top-down absolute scale to represent certain relations. S . .P (∅) VP NP N control (-2,S, ∅) J good (1,NP, ∅) V has (1,VP, ∅) NP N He (1,S,NP) (a) A constituency tree <ROOT> He has good control . N V J N . 1 2 3 4 5 (+1,V,nsubj) (-1,ROOT,root) (+1,N,amod) (-1,V,dobj) (-1,V,punct) nsubj dobj amod root punct (b) A dependency tree Figure 1: An example of constituency and dependency trees with their encodings. tion of a LSTM that processes the input from left to right, and let LSTM←(x) be another LSTM processing the input in the opposite direction, the output hi of a BILSTM at a timestep i is computed as: BILSTM(x, i) = LSTM→(x0:i) ◦LSTM←(xi:|w|). 
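As a concrete reading of the dependency encoding Π described above, each label (oi, pi, di) can be computed from the gold heads and PoS tags by counting, between the dependent and its head, the words that share the head's PoS tag. The sketch below is our own code (not the authors'); it reproduces the labels of Figure 1b, and the (-1, ROOT, root) label for the root word follows the convention shown in that figure.

```python
def encode_dependencies(tags, heads, rels):
    """One (o_i, p_i, d_i) label per word.
    tags/heads/rels are per-word lists; heads are 1-based, 0 means ROOT."""
    labels = []
    for i, (head, rel) in enumerate(zip(heads, rels), start=1):
        if head == 0:
            labels.append((-1, "ROOT", rel))      # convention of Figure 1b
            continue
        p = tags[head - 1]
        if head > i:   # head is the o-th closest word with PoS p to the right
            o = sum(1 for j in range(i + 1, head + 1) if tags[j - 1] == p)
        else:          # head is the (-o)-th closest word with PoS p to the left
            o = -sum(1 for j in range(head, i) if tags[j - 1] == p)
        labels.append((o, p, rel))
    return labels

# the example of Figure 1b: "He has good control ."
print(encode_dependencies(
    ["N", "V", "J", "N", "."],
    [2, 0, 4, 2, 2],
    ["nsubj", "root", "amod", "dobj", "punct"]))
# [(1,'V','nsubj'), (-1,'ROOT','root'), (1,'N','amod'),
#  (-1,'V','dobj'), (-1,'V','punct')]
```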
Then, hi is further processed by a feed-forward layer to compute the output label, i.e. P(y|hi) = softmax(W ∗hi +b). To optimize the model, we minimize the categorical cross-entropy loss, i.e. L = −P log(P(y|hi)). In Appendix A we detail additional hyperpameters of the network. In this work we use NCRFpp (Yang and Zhang, 2018) as our sequence labeling framework. 3 Learning across representations To learn across representations we cast the problem as multi-task learning. MTL enables learning many tasks jointly, encapsulating them in a single model and leveraging their shared representation (Caruana, 1997; Ruder, 2017). In particular, we will use a hard-sharing architecture: the sentence is first processed by stacked BILSTMs shared across all tasks, with a task-dependent feed-forward network on the top of it, to compute each task’s outputs. In particular, to benefit from a specific parsing abstraction we will be using the concept of auxiliary tasks (Plank et al., 2016; Bingel and Søgaard, 2017; Coavoux and Crabb´e, 2017), where tasks are learned together with the main task in the MTL setup even if they are not of actual interest by themselves, as they might help to find out hidden patterns in the data and lead to 5352 better generalization of the model.2 For instance, Hershcovich et al. (2018) have shown that semantic parsing benefits from that approach. The input is the same for both types of parsing and the same number of timesteps are required to compute a tree (equal to the length of the sentence), which simplifies the joint modeling. In this work, we focus on parallel data (we train on the same sentences labeled for both constituency and dependency abstractions). In the future, we plan to explore the idea of exploiting joint training over disjoint treebanks (Barrett et al., 2018). 3.1 Baselines and models We test different sequence labeling parsers to determine whether there are any benefits in learning across representations. We compare: (i) a single-task model for constituency parsing and another one for dependency parsing, (ii) a multi-task model for constituency parsing (and another for dependency parsing) where each element of the 3tuple is predicted as a partial label in a separate subtask instead of as a whole, (iii) different MTL models where the partial labels from a specific parsing abstraction are used as auxiliary tasks for the other one, and (iv) an MTL model that learns to produce both abstractions as main tasks. Single-paradigm, single-task models (S-S) For constituency parsing, we use the single-task model by G´omez-Rodr´ıguez and Vilares (2018). The input is the raw sentence and the output for each token a single label of the form li=(ni, ci, ui). For dependency parsing we use the model by Strzyz et al. (2019) to predict a single dependency label of the form ri=(oi, pi, di) for each token. Single-paradigm, multi-task models (S-MTL) For constituency parsing, instead of predicting a single label output of the form (ni, ci, ui), we generate three partial and separate labels ni, ci and ui through three task-dependent feed-forward networks on the top of the stacked BILSTMs. This is similar to Vilares et al. (2019). For dependency parsing, we propose in this work a MTL version too. We observed in preliminary experiments, as shown in Table 1, that casting the problem as 3task learning led to worse results. Instead, we cast it as a 2-task learning problem, where the first task consists in predicting the head of a word wi, i.e. 
2Auxiliary losses are usually given less importance during the training process. Model UAS LAS S-S 93.81 91.59 S-MTL(2) 94.03 91.78 S-MTL(3) 93.66 91.47 Table 1: Comparison of the single-paradigm models for dependency parsing evaluated on the PTB dev set where each label is learned as single, 2- or 3-tasks. Figure 2: Architecture of our double-paradigm, MTL model with 3-task learning for constituency parsing and 2-task learning for dependency parsing. predicting the tuple (oi, pi), and the second task predicts the type of the relation (di). The loss is here computed as L=P t Lt, where Lt is the partial loss coming from the subtask t. Double-paradigm, multi-task models with auxiliary losses (D-MTL-AUX) We predict the partial labels from one of the parsing abstractions as main tasks. The partial labels from the other parsing paradigm are used as auxiliary tasks. The loss is computed as L=P t Lt + P a βaLa, where La is an auxiliary loss and βa its specific weighting factor. Figure 2 shows the architecture used in this and the following multi-paradigm model. Double paradigm, multi-task models (D-MTL) All tasks are learned as main tasks instead. 4 Experiments 4.1 Data In the following experiments we use two parallel datasets that provide syntactic analyses for both dependency and constituency parsing. 5353 Model Dependency Parsing Constituency Parsing UAS LAS F1 English (PTB) S-S 93.60 91.74 90.14 S-MTL 93.84 91.83 90.32 D-MTL-AUX 94.05 92.01 90.39 D-MTL 93.96 91.90 89.81 Basque S-S 86.20 81.70 89.54 S-MTL 87.42 81.71 90.86 D-MTL-AUX 87.19 81.73 91.12 D-MTL 87.09 81.77 90.76 French S-S 89.13 85.03 80.68 S-MTL 89.54 84.89 81.34 D-MTL-AUX 89.52 84.97 81.33 D-MTL 89.45 85.07 81.19 German S-S 91.24 88.76 84.19 S-MTL 91.54 88.75 84.46 D-MTL-AUX 91.58 88.80 84.38 D-MTL 91.45 88.67 84.28 Hebrew S-S 82.74 75.08 88.85 S-MTL 83.42 74.91 91.91 D-MTL-AUX 83.90 75.89 91.83 D-MTL 82.60 73.73 91.10 Hungarian S-S 88.24 84.54 90.42 S-MTL 88.69 84.54 90.76 D-MTL-AUX 88.99 84.95 90.69 D-MTL 88.89 84.89 90.93 Korean S-S 86.47 84.12 83.33 S-MTL 86.78 84.39 83.51 D-MTL-AUX 87.00 84.60 83.39 D-MTL 86.64 84.34 83.08 Polish S-S 91.17 85.64 92.59 S-MTL 91.58 85.04 93.17 D-MTL-AUX 91.37 85.20 93.36 D-MTL 92.00 85.92 93.52 Swedish S-S 86.49 80.60 83.81 S-MTL 87.22 80.61 86.23 D-MTL-AUX 87.24 80.34 86.53 D-MTL 87.15 80.71 86.44 average S-S 88.36 84.13 87.06 S-MTL 88.89 84.07 88.06 D-MTL-AUX 88.98 84.28 88.11 D-MTL 88.80 84.11 87.90 Table 2: Results on the PTB and SPMRL test sets. Model Dependency parsing Constituency Parsing UAS LAS F1 Chen and Manning (2014) 91.80 89.60 — Kiperwasser and Goldberg (2016) 93.90 91.90 — Dozat and Manning (2017) 95.74 94.08 — Ma et al. (2018) 95.87 94.19 — Fern´andez-G and G´omez-R (2019) 96.04 94.43 — Vinyals et al. (2015) — — 88.30 Zhu et al. (2013) — — 90.40 Vilares et al. (2019) — — 90.60 Dyer et al. (2016) — — 91.20 Kitaev and Klein (2018) — — 95.13 D-MTL-AUX 94.05 92.01 90.39 Table 3: Comparison of existing models against the DMTL-AUX model on the PTB test set. PTB For the evaluation on English language we use the English Penn Treebank (Marcus et al., 1993), transformed into Stanford dependencies (De Marneffe et al., 2006) with the predicted PoS tags as in Dyer et al. (2016). SPMRL We also use the SPMRL datasets, a collection of parallel dependency and constituency treebanks for morphologically rich languages (Seddah et al., 2014). In this case, we use the predicted PoS tags provided by the organizers. 
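Each of these treebanks is linearized into the label encodings of Section 2 before training. As a concrete illustration of the dependency encoding, the following short sketch — our own illustrative Python, not the released seq2label code; the function name and input format are assumptions — derives the (oi, pi, di) labels of Figure 1b from a head-annotated sentence.

# Illustrative sketch (ours) of the dependency-label encoding of Section 2:
# each word w_i gets a label (o_i, p_i, d_i), where o_i > 0 means its head is
# the o_i-th word with PoS tag p_i to its right, o_i < 0 the |o_i|-th such
# word to its left, and d_i is the dependency relation.
def encode_dependency_labels(pos_tags, heads, rels):
    # pos_tags[i]: PoS tag of word i; heads[i]: 1-based head index (0 = ROOT);
    # rels[i]: dependency relation of word i. All lists are aligned, 0-indexed.
    labels = []
    for i, (head, rel) in enumerate(zip(heads, rels)):
        if head == 0:                              # the syntactic root
            labels.append((-1, "ROOT", rel))
            continue
        h = head - 1                               # 0-based head position
        p = pos_tags[h]
        if h > i:                                  # head lies to the right of w_i
            o = sum(1 for j in range(i + 1, h + 1) if pos_tags[j] == p)
        else:                                      # head lies to the left of w_i
            o = -sum(1 for j in range(h, i) if pos_tags[j] == p)
        labels.append((o, p, rel))
    return labels

# The example of Figure 1b: "He has good control ."
pos   = ["N", "V", "J", "N", "."]
heads = [2, 0, 4, 2, 2]
rels  = ["nsubj", "root", "amod", "dobj", "punct"]
print(encode_dependency_labels(pos, heads, rels))
# [(1, 'V', 'nsubj'), (-1, 'ROOT', 'root'), (1, 'N', 'amod'),
#  (-1, 'V', 'dobj'), (-1, 'V', 'punct')]

Running the snippet on the sentence of Figure 1b reproduces exactly the labels shown in that figure.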
We observed some differences between the constituency and dependency predicted input features provided with the corpora. For experiments where dependency parsing is the main task, we use the input from the dependency file, and the converse for constituency, for comparability with other work. D-MTL models were trained twice (one for each input), and dependency and constituent scores are reported on the model trained on the corresponding input. Metrics We use bracketing F-score from the original EVALB and EVAL SPMRL official scripts to evaluate constituency trees. For dependency parsing, we rely on LAS and UAS scores where punctuation is excluded in order to provide a homogeneous setup for PTB and SPMRL. 4.2 Results Table 2 compares single-paradigm models against their double-paradigm MTL versions. On average, MTL models with auxiliary losses achieve the best performance for both parsing abstractions. They gain 1.05 F1 points on average in comparison with the single model for constituency parsing, and 0.62 UAS and 0.15 LAS points for dependency parsing. In comparison to the single-paradigm MTL models, the average gain is smaller: 0.05 F1 points for constituency parsing, and 0.09 UAS and 0.21 LAS points for dependency parsing. MTL models that use auxiliary tasks (D-MTLAUX) consistently outperform the single-task models (S-S) in all datasets, both for constituency parsing and for dependency parsing in terms of UAS. However, this does not extend to LAS. This different behavior between UAS and LAS seems to be originated by the fact that 2-task dependency parsing models, which are the basis for the corresponding auxiliary task and MTL models, improve UAS but not LAS with respect to single-task dependency parsing models. The reason might be that 5354 Model Basque French German Hebrew Hungarian Korean Polish Swedish average Nivre et al. (2007) 70.11 77.98 77.81 69.97 70.15 82.06 75.63 73.21 74.62 Ballesteros (2013) 78.58 79.00 82.75 73.01 79.63 82.65 79.89 75.82 78.92 Ballesteros et al. (2015) (char+POS) 78.61 81.08 84.49 72.26 76.34 86.21 78.24 74.47 78.96 De La Clergerie (2013) 77.55 82.06 84.80 73.63 75.58 81.02 82.56 77.54 79.34 Bj¨orkelund et al. (2013) (ensemble) 85.14 85.24 89.65 80.89 86.13 86.62 87.07 82.13 85.36 D-MTL-AUX 84.02 83.85 88.18 74.94 80.26 85.93 85.86 79.77 82.85 Table 4: Dependency parsing: existing models evaluated with LAS scores on the SPMRL test set. Model Basque French German Hebrew Hungarian Korean Polish Swedish average Fern´andez-Gonz´alez and Martins (2015) 85.90 78.75 78.66 88.97 88.16 79.28 91.20 82.80 84.22 Coavoux and Crabb´e (2016) 86.24 79.91 80.15 88.69 90.51 85.10 92.96 81.74 85.66 Bj¨orkelund et al. (2013) (ensemble) 87.86 81.83 81.27 89.46 91.85 84.27 87.55 83.99 86.01 Coavoux and Crabb´e (2017) 88.81 82.49 85.34 89.87 92.34 86.04 93.64 84.00 87.82 Vilares et al. (2019) 91.18 81.37 84.88 92.03 90.65 84.01 93.93 86.71 88.10 Kitaev and Klein (2018) 89.71 84.06 87.69 90.35 92.69 86.59 93.69 84.35 88.64 D-MTL-AUX 91.12 81.33 84.38 91.83 90.69 83.39 93.36 86.53 87.83 Table 5: Constituency parsing: existing models evaluated with F1 score on the SPMRL test set. Model Dependency parsing Constituency parsing S-S 102±6 117±6 S-MTL 128±11 133±1 D-MTL-AUX 128±11 133±1 D-MTL 124±1 124±1 Table 6: Sentences/second on the PTB test set. 
the single-task setup excludes unlikely combinations of dependency labels with PoS tags or dependency directions that are not found in the training set, while in the 2-task setup, both components are treated separately, which may be having a negative influence on dependency labeling accuracy. In general, one can observe different range of gains of the models across languages. In terms of UAS, the differences between single-task and MTL models span between 1.22 (Basque) and −0.14 (Hebrew); for LAS, 0.81 and −1.35 (both for Hebrew); and for F1, 3.06 (Hebrew) and −0.25 (Korean). Since the sequence labeling encoding used for dependency parsing heavily relies on PoS tags, the result for a given language can be dependent on the degree of the granularity of its PoS tags. In addition, Table 3 provides a comparison of the D-MTL-AUX models for dependency and constituency parsing against existing models on the PTB test set. Tables 4 and 5 shows the results for various existing models on the SPMRL test sets.3 3Note that we provide these SPMRL results for merely informative purposes. While they are the best existing results to our knowledge in these datasets, not all are directly comparable to ours (due to not all of them using the same kinds of information, e.g. some models do not use morphological Table 6 shows the speeds (sentences/second) on a single core of a CPU4. The D-MTL setup comes at almost no added computational cost, so the very good speed-accuracy tradeoff already provided by the single-task models is improved. 5 Conclusion We have described a framework to leverage the complementary nature of constituency and dependency parsing. It combines multi-task learning, auxiliary tasks, and sequence labeling parsing, so that constituency and dependency parsing can benefit each other through learning across their representations. We have shown that MTL models with auxiliary losses outperform single-task models, and MTL models that treat both constituency and dependency parsing as main tasks obtain strong results, coming almost at no cost in terms of speed. Source code will be released upon acceptance. Acknowlegments This work has received funding from the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). features). Also, there are not many recent results for dependency parsing on the SPMRL datasets, probably due to the popularity of UD corpora. For comparison, we have included punctuation for this evaluation. 4Intel Core i7-7700 CPU 4.2 GHz. 5355 References Miguel Ballesteros. 2013. Effective morphological feature selection with maltoptimizer at the spmrl 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 63–70. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 349–359, Lisbon, Portugal. Association for Computational Linguistics. Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders Søgaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302–312. Joachim Bingel and Anders Søgaard. 2017. 
Identifying beneficial task relations for multi-task learning in deep neural networks. CoRR, abs/1702.08303. Anders Bj¨orkelund, Ozlem Cetinoglu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (re) ranking meets morphosyntax: State-of-the-art results from the spmrl 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 135–145. Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 132–139. Association for Computational Linguistics. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, Doha, Qatar. Association for Computational Linguistics. Noam Chomsky. 1956. Three models for the description of language. IRE Transactions on information theory, 2(3):113–124. Maximin Coavoux and Benoit Crabb´e. 2016. Neural greedy constituent parsing with dynamic oracles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 172–182. Maximin Coavoux and Benoˆıt Crabb´e. 2017. Multilingual lexicalized constituency parsing with wordlevel auxiliary tasks. In 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), volume 2, pages 331–336. Association for Computational Linguistics. Eric De La Clergerie. 2013. Exploring beam-based shift-reduce dependency parsing with dyalog: Results from the spmrl 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 53–62. Marie-Catherine De Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generating typed dependency parses from phrase structure parses. In Lrec, volume 6, pages 449–454. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209. Association for Computational Linguistics. Daniel Fern´andez-Gonz´alez and Carlos G´omezRodr´ıguez. 2019. Left-to-right dependency parsing with pointer networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), page to appear, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Daniel Fern´andez-Gonz´alez and Andr´e F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1523–1533, Beijing, China. Association for Computational Linguistics. Carlos G´omez-Rodr´ıguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314– 1324. 
Association for Computational Linguistics. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018. Multitask parsing across semantic representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 373–385. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Sylvain Kahane and Nicolas Mazziotta. 2015. Syntactic polygraphs. a formalism extending both constituency and dependency. In Mathematics of Language. 5356 Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2676–2686. Dan Klein and Christopher D Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in neural information processing systems, pages 3–10. Sandra K¨ubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis Lectures on Human Language Technologies, 1(1):1–127. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203–3214, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stackpointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1403–1414. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Igor Aleksandrovic Mel’cuk. 1988. Dependency syntax: theory and practice. SUNY press. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Workshop on Parsing Technologies (IWPT, pages 149–160, Nancy, France. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨uls¸en Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. Maltparser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95–135. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In The 54th Annual Meeting of the Association for Computational Linguistics, page 412. Marek Rei and Anders Søgaard. 2018. Zero-shot sequence labeling: Transferring knowledge from sentences to tokens. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 293–302. Xiaona Ren, Xiao Chen, and Chunyu Kit. 2013. Combine constituent and dependency parsing via reranking. In Twenty-Third International Joint Conference on Artificial Intelligence. Sebastian Ruder. 2017. An overview of multitask learning in deep neural networks. CoRR, abs/1706.05098. Erik F Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task chunking. 
In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Djam´e Seddah, Sandra K¨ubler, and Reut Tsarfaty. 2014. Introducing the spmrl 2014 shared task on parsing morphologically-rich languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 103–109. Drahom´ıra Spoustov´a and Miroslav Spousta. 2010. Dependency parsing as a sequence labeling task. The Prague Bulletin of Mathematical Linguistics, 94(1):7–14. Michalina Strzyz, David Vilares, and Carlos G´omezRodr´ıguez. 2019. Viable dependency parsing as sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), page to appear, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Kristina Toutanova and Christopher D Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics-Volume 13, pages 63–70. Association for Computational Linguistics. David Vilares, Mostafa Abdou, and Anders Søgaard. 2019. Better, faster, stronger sequence tagging constituent parsers. In Proceedings of the 2019 Conference of the North American Chapter of the Associ5357 ation for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), page to appear, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in neural information processing systems, pages 2773–2781. Jie Yang and Yue Zhang. 2018. Ncrf++: An opensource neural sequence labeling toolkit. Proceedings of ACL 2018, System Demonstrations, pages 74–79. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 434–443. A Model parameters The models were trained up to 150 iterations and optimized with Stochastic Gradient Descent (SGD) with a batch size of 8. The best model for constituency parsing was chosen with the highest achieved F1 score on the development set during the training and for dependency parsing with the highest LAS score. The best double paradigm, multi-task model was chosen based on the highest harmonic mean among LAS and F1 scores. Table 7 shows model hyperparameters. 
Initial learning rate             0.02
Time-based learning rate decay    0.05
Momentum                          0.9
Dropout                           0.5

Dimension
  Word embedding                  100
  Char embedding                  30
  Self-defined features           20 (footnote 5)
  Word hidden vector              800
  Character hidden vector         50

Type of MTL model                 Weighting factor for each task
  2-task D                        1
  3-task C                        1
  D with auxiliary task C         D: 1 and C: 0.2
  C with auxiliary task D         C: 1 and D: 0.1
  Multi-task C and D              1

Table 7: Model hyperparameters. D indicates dependency parsing and C constituency parsing.

5 Models trained on the PTB treebank used a PoS tag embedding size of 25 in order to assure the same setup for comparison with the previously reported results.
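The weighting factors in Table 7 enter the training objective of Section 3.1, L = sum_t Lt + sum_a βa La. The following is a minimal, hedged sketch of how such a weighted multi-task loss over a shared BiLSTM encoder could be implemented; it is our own illustration (the reported experiments use NCRF++), and the class name, task names, and dimensions are assumptions.

# Minimal sketch of hard parameter sharing with weighted (auxiliary) losses:
# a BiLSTM shared across tasks, one feed-forward head per (sub)task, and a
# total loss L = sum_t L_t + sum_a beta_a * L_a.
import torch.nn as nn

class SharedTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # One head per partial label of Section 3.1, e.g.
        # {"dep_op": ..., "dep_rel": ..., "const_n": ..., "const_c": ..., "const_u": ...}
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, word_ids):
        h, _ = self.bilstm(self.embed(word_ids))      # (batch, seq, 2*hidden)
        return {task: head(h) for task, head in self.heads.items()}

def total_loss(logits, gold, task_weights):
    # gold[task]: (batch, seq) gold label ids; task_weights[task] is 1 for
    # main-task heads and a small beta (e.g. 0.1 or 0.2) for auxiliary heads.
    ce = nn.CrossEntropyLoss()
    loss = 0.0
    for task, scores in logits.items():
        loss = loss + task_weights[task] * ce(
            scores.reshape(-1, scores.size(-1)), gold[task].reshape(-1))
    return loss

Under this reading, a D-MTL-AUX dependency model simply weights the dependency subtasks with 1 and the constituency subtasks with the auxiliary factor of 0.2 listed in Table 7.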
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5358–5362 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5358 A Prism Module for Semantic Disentanglement in Name Entity Recognition Kun Liu1,*, † Shen Li2,* Daqi Zheng2 Zhengdong Lu2 Sheng Gao1 Si Li1 {liukun, gaosheng, lisi}@bupt.edu.cn {shen, da, luz}@deeplycurious.ai 1 Beijing University of Posts and Telecommunications 2 Deeplycurious.ai Abstract Natural Language Processing has been perplexed for many years by the problem that multiple semantics are mixed inside a word, even with the help of context. To solve this problem, we propose a prism module to disentangle the semantic aspects of words and reduce noise at the input layer of a model. In the prism module, some words are selectively replaced with task-related semantic aspects, then these denoised word representations can be fed into downstream tasks to make them easier. Besides, we also introduce a structure to train this module jointly with the downstream model without additional data. This module can be easily integrated into the downstream model and significantly improve the performance of baselines on named entity recognition (NER) task. The ablation analysis demonstrates the rationality of the method. As a side effect, the proposed method also provides a way to visualize the contribution of each word. 1 1 Introduction In Nature Language Processing (NLP), words contribute differently to different tasks. Therefore, attention-based models pay more attention on important words than unimportant words. Since the information that is unrelated to the task can be regarded as noise, unimportant words contain more noise than important words do. From this perspective, attention is a noise reduction mechanism. Hard attention and soft attention are two main types of attention mechanisms which are proposed in (Xu et al., 2015). Hard attention mechanism selects some important tokens from input sequence ∗Kun Liu and Shen Li contributed equally to this work. †Work performed when Kun Liu worked as an intern in Deeplycurious.ai. 1Our code is available at https://github.com/ liukun95/Prism-Module and ignore others. This will lead to the loss of necessary information which exists in the ignored tokens. By contrast, in soft attention mechanism, a probability distribution which reflects the importance of tokens is calculated over each token of the input sequence. However, since there is more useless information than useful information in unimportant words, it should be noted that noise could be kept more, when those words are assigned with non-zero probabilities. Overall, both two attention mechanisms have drawbacks in noise reduction. Attention mechanism is firstly applied in Computer Vision (CV) (Mnih et al., 2014) where pixels are the basic units. However, in NLP, the minimum unit is not word but sense. Therefore, NLP tasks need a noise reduction method at a finer granularity than attention mechanism. Normally, various aspects of semantics are entangled in word embeddings (Bengio et al., 2003; Mikolov et al., 2013). However, only some of the aspects are needed in specific tasks and other redundant aspects can be regarded as noise. To reduce the noise, entangled word embeddings can be replaced with distributed representations of disentangled semantic aspects. Considering that it could be hard to find the corresponding semantics for each aspect, we call them abstract aspects. 
In this paper, we propose a prism module to generate parallel denoised sentences from multiple aspects. Different from attention mechanism, the module reduces noise in semantic aspect level rather than word level. Specifically, we selectively replace some words in the sentence with abstract aspects. These denoised sentences are expected to keep sufficient information to make predictions in the downstream tasks, like the low-noise version of original sentence. Compared with attention mechanism, the proposed method not only reduces the noise, but also reduces the loss of necessary information. Furthermore, this method also allows 5359 to reduce noise from different aspects. As a side effect, the interpretability of models is improved since different abstract aspects could represent different semantics. We introduce a method to train this module jointly with downstream model without extra training data. During training, the prism module learns to find the proper words to be replaced for each abstract aspect and also learns the embeddings of abstract aspects which can represent the task-related semantics of words. Furthermore, we introduce a novel trick to reduce the high variance in training brought by REINFORCE method. The prism module can be easily integrated into downstream model to reduce noise and improve performance. We evaluate our method on NER task. Results show that our model outperforms the baseline by a substantial margin. 2 Related Work Attention-based models achieve the state of the art performance in a broad range of NLP tasks. Although soft attention is more popular, hard attention is found to be more effective with good training (Xu et al., 2015). Hard attention has been successfully applied in computer vision (Ba et al., 2014; Mnih et al., 2014) but the application is limited in NLP. Lei et al. (2016) proposed a novel type of hard attention and apply it to improve the interpretability of models. However, the accuracy is not improved. Inspired by this, our proposed method can also be understood as hard-attention based but improves the accuracy successfully. In addition to improving accuracy, attentionbased models also improve the interpretability by showing the inner working of neural networks (Rush et al., 2015; Rockt¨aschel et al., 2015; Lei et al., 2016). Disentangling provides another way to improve the interpretability by extracting information from different aspects of the input. Lin et al. (2017) propose a multi-aspect self-attention to disentangle the latent semantic information of the input sentence. Jain et al. (2018) propose a model to learn disentangled representations of texts for 4 given biomedical aspects. Our proposed method can be regarded as the combination of the above two types of methods to improve the interpretability of the model. 3 Model 3.1 Prism Module The target of this module is to get the sentences with less noise by replacing some of the words with abstract aspects. In a sentence, since each word has different semantics and contributes differently to the task, the key is to calculate the probability distribution over possible replacements. Given a sentence X, which have n words X = (w1, w2, w3, · · · wn) (1) where wi is the embedding of the i-th word in the sentence. We also have m different abstract aspects which represent m aspects of semantics A = (a1, a2, a3, · · · am) (2) where ai is the embedding of the i-th abstract aspect. We apply bidirectional LSTM to the input sentence, which could capture some dependency between words. 
−→ h t = −−−−→ LSTM (wt, −→ h t−1) (3) ←− h t = ←−−−− LSTM (wt, ←− h t+1) (4) where −→ h t and ←− h t denote the hidden states. We use ht, the concatenation of −→ h t and ←− h t as the annotation of words. All n hidden states are annotated as the matrix H = (h1, h2, h3, · · · hn) (5) We define binary variable si,j ∈0, 1 which indicates whether j-th word wj is replaced by i-th abstract aspect ai or not. Then, the probabilities P with shape of m-by-n can be computed, where each element pi,j is the probability of si,j = 1. P is calculated as: P = sigmoid(WHT + b) (6) pi,j = p(si,j = 1|X) (7) Here, W is the weight with the size of m-by-2h and b is the bias. si,j is the random variable with multinoulli distribution parametrized by pi,j. To get the replaced sentences, we sample S′ according to the probability distribution pi,j S′ =   s′ 1,1 · · · s′ 1,n ... ... ... s′ m,1 . . . s′ m,n   (8) 5360 where i-th row of the matrix indicates which words in a sentence are replaced with i-th abstract aspect. After replacing the words with the guide of S′, we obtain m replaced sentences (X′ 1, X′ 2, X′ 3 · · · X′ m) where each one is denoised from different aspect. Then, these parallel sentences including m denoised sentences and the original sentence are used as the input of the downstream model. 3.2 Model Training The prism module is trained jointly with downstream model. The parameters in the model can be divided into two parts, θo for downstream model and θa for prism module. The objective for optimizing θo is to improve the prediction accuracy of the model. Since the input of the model includes both the word embeddings and abstract aspect embeddings, the loss function for parameters θo is L (θo) = L (θo, X, y) + L θo, X, S′, y  (9) The objective for optimizing θa is to replace proper words with proper abstract aspects. Because of the discrete variable si,j, the loss function is non-differentiable for the parameters θa. We use the policy gradient/REINFORCE (Williams, 1992) to optimize θa. Since we expect that not only the downstream model is well trained, but also the replaced sentences can achieve favorable performance in downstream task, the loss function L (θo) is used as reward R. The objective function for θa is: L (θa) = Es∼p (R log (p (s|X))) (10) Besides, we also introduce a penalization term Ω(A) proposed by Lin et al. (2017) to diversify the abstract aspects which are expected to represent different disentangled aspects. Ω(A) = b A b AT −I 2 F (11) where ∥∥F denotes the Frobenius norm of a matrix, I stands for the identity matrix and bA is calculated by normalizing each ai of A. Considering that we sample the S′ according to the probability distribution to simplify the expectation, for all parameters, the loss function L is: L = L (θo) + L (θa) + Ω(A) = L (θo, X, y) + L θo, X, S′, y  + L (θo) log p S′|X  + Ω(A) (12) 3.3 Normalization of Reward High variance is one of the disadvantages of REINFORCE method, which makes models difficult to converge. No exception, our model also suffers from the same problem. We propose a novel method to reduce the variance and stabilize the training process. We normalize the rewards by making them have the mean of 0 and variance of 1. µ ←1 m m X i=1 Ri (13) σ2 ←1 m m X i=1 (Ri −µ)2 (14) c Ri ←Ri −µ √ σ2 (15) where mean µ and variance σ are calculated over each mini-batch. bRi denotes the normalized reward. 
The loss L becomes L = L (θo, X, y) + L θo, X, S′, y  + c Ri log p S′|X  + Ω(A) (16) 4 Experiments We evaluate the effectiveness of our noise reduction method on NER task. Dataset: CoNLL 2003 (Sang and De Meulder, 2003) is used as our dataset. Baseline: Yang et al. (2018) compare the performance of twelve neural sequence labeling models in NER task and the architecture CNNBiLSTM (Bi-directional LSTM)-CRF (Ma and Hovy, 2016) achieves the best result (F1). Therefore, we use this model as our baseline. Figure 1 shows our model where the prism module is integrated into CNN-BiLSTM-CRF architecture. The sentence is fed into the prism module and the output of this module is m(e.g., 3) sentences which are denoised from different aspect. These m + 1 parallel sentences including the m denoised sentences and the original sentence are fed into BiSTM+CRF network to predict the labels. Besides, only the original sentence is used in testing. 4.1 Model Configuration In the prism module, the hidden size of BiLSTM is the same as in CNN-BiLSTM-CRF architecture. The number of abstract aspects is set as 8. Except the hyper parameters in prism module, other hyper parameters are all set as (Ma and Hovy, 2016). 5361 a3 w2 a3 a3 w5 w6 p31 p32 p33 p34 p35 p36 w1 BiLSTM w2 w3 w4 w5 w6 p21 p22 p23 p24 p25 p26 Feed Forward BiLSTM+CRF p11 p12 p13 p14 p15 p16 w1 a2 w3 w4 w5 a2 Sampling w1 w2 a1 w4 w5 w6 Testing Training Noise Reduction Module Figure 1: CNN-BiLSTM-CRF architecture with prism module. w1, w2... denote the concatenation of original word embeddding and character-level representation which is computed by CNN. Model F1 Baseline (Ma and Hovy, 2016) 91.2 Multi-aspect hard attention 91.5 Random replacement 91.5 Single aspect 91.3 Our method 91.8 Table 1: NER F1 score of baseline, three ablation experiments and our model on test data of CoNLL-2003. 4.2 Result and Analysis The experimental results are shown in Table 1. Our model outperforms the baseline by a clear margin. To prove the effectiveness of our prism module, we design three ablation experiments: Multi-aspect hard attention: Instead of replacing the words with abstract aspects, we replace the embeddings of selected words with zero vectors. This method can be regarded as a type of multi-aspect hard attention where some of the words are ignored. Random replacement: Instead of learning to select the words to be replaced guided by the downstream task, we select the words to be replaced randomly for each abstract aspect. It is a kind of data noising technique which is similar to the method proposed in (Xie et al., 2017) with Figure 2: Heat map for S′ multiple aspects. Single aspect: In our model, one word could be replaced with different abstract aspects in different denoised sentences. In this experiment, there is only one denoised sentence where each word could only be replaced with the abstract aspect of the maximum probability. Our model has better performance than three ablation experiments as shown in Table 1. The results indicate that (1) The trainable embeddings of each abstract aspect can capture the information which is valuable for the task. (2) Our model can learn to replace words properly guided by the downstream task (e.g., NER). (3) For each word, more than one aspect of semantics are task-related. Additionally, considering that the first two ablation experiments improve F1 by 0.3% but the last one only improves 0.1%, multi-aspect denoising is important for the prism module. 
4.3 Visualization We visualize the matrix S′ by drawing the heat map of each row vector as shown in Figure 2. In this example, japan and china are location entities. Each row corresponds to one abstract aspect and each element indicates whether this word is replaced. The heat map shows that each abstract aspect replaces some of words to keep certain task-related semantics and filter out other information. Since the abstract aspects represent different meanings respectively, the selections of words vary between rows which indicates noise is reduced from different aspects. From the heat map, we can also learn that a word can be replaced with multiple abstract aspects and this process is the disentanglement of semantics. 5 Conclusion In this paper, we propose a prism module to reduce the noise of word embeddings by selectively replacing some words with task-related semantic aspects. We also introduce a structure to train this 5362 prism module jointly with existing model and no extra data is needed. Considering REINFORCE method is used in training, a novel method is introduced to reduce the variance of rewards. As a result, our model outperforms the baseline by a clear margin and the ablation analysis proves the effectiveness of our method. As a side effect, this module also improves the interpretability of models. Since our prism module can be easily integrated into existing models, it can be applied in a wide range of neural architectures. References Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2014. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Sarthak Jain, Edward Banner, Jan-Willem van de Meent, Iain J Marshall, and Byron C Wallace. 2018. Learning disentangled representations of texts with application to biomedical abstracts. arXiv preprint arXiv:1804.07212. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. arXiv preprint arXiv:1606.04155. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. 2014. Recurrent models of visual attention. In Advances in neural information processing systems, pages 2204–2212. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. arXiv preprint cs/0306050. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Ziang Xie, Sida I Wang, Jiwei Li, Daniel L´evy, Aiming Nie, Dan Jurafsky, and Andrew Y Ng. 2017. 
Data noising as smoothing in neural network language models. arXiv preprint arXiv:1703.02573. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. arXiv preprint arXiv:1806.04470.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5363–5369 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5363 Label-Agnostic Sequence Labeling by Copying Nearest Neighbors Sam Wiseman Karl Stratos Toyota Technological Institute at Chicago Chicago, IL, USA {swiseman,stratos}@ttic.edu Abstract Retrieve-and-edit based approaches to structured prediction, where structures associated with retrieved neighbors are edited to form new structures, have recently attracted increased interest. However, much recent work merely conditions on retrieved structures (e.g., in a sequence-to-sequence framework), rather than explicitly manipulating them. We show we can perform accurate sequence labeling by explicitly (and only) copying labels from retrieved neighbors. Moreover, because this copying is label-agnostic, we can achieve impressive performance in zero-shot sequencelabeling tasks. We additionally consider a dynamic programming approach to sequence labeling in the presence of retrieved neighbors, which allows for controlling the number of distinct (copied) segments used to form a prediction, and leads to both more interpretable and accurate predictions. 1 Introduction Retrieve-and-edit style structured prediction, where a model retrieves a set of labeled nearest neighbors from the training data and conditions on them to generate the target structure, is a promising approach that has recently received renewed interest (Hashimoto et al., 2018; Guu et al., 2018; Gu et al., 2018; Weston et al., 2018). This approach captures the intuition that while generating a highly complex structure from scratch may be difficult, editing a sufficiently similar structure or set of structures may be easier. Recent work in this area primarily uses the nearest neighbors and their labels simply as an additional context for a sequence-to-sequence style model to condition on. While effective, these models may not explicitly capture the discrete operations (like copying) that allow for the neighbors to be edited into the target structure, making interpreting the behavior of the model difficult. Moreover, since many retrieve-and-edit style models condition on dataset-specific labels directly, they may not easily allow for transfer learning and in particular to porting a trained model to a new task with different labels. We address these limitations in the context of sequence labeling by developing a simple labelagnostic model that explicitly models copying token-level labels from retrieved neighbors. Since the model is not a function of the labels themselves but only of a learned notion of similarity between an input and retrieved neighbor inputs, it can be effortlessly ported (zero shot) to a task with different labels, without any retraining. Such a model can also take advantage of recent advances in representation learning, such as BERT (Devlin et al., 2018), in defining this similarity. We evaluate the proposed approach on standard sequence labeling tasks, and show it is competitive with label-dependent approaches when trained on the same data, but substantially outperforms strong baselines when it comes to zero-shot transfer applications, such as when training with coarse labels and testing with fine-grained labels. 
Finally, we propose a dynamic programming based approach to sequence labeling in the presence of retrieved neighbors, which allows for trading off token-level prediction confidence with trying to minimize the number of distinct segments in the overall prediction that are taken from neighbors. We find that such an approach allows us to both increase the interpretability of our predictions as well as their accuracy. 2 Related Work Nearest neighbor based structured prediction (also referred to as instance- or memory-based learning) has a long history in machine learning and NLP, 5364 Ms. Haag plays Elianti The index fell 109.9 Monday DT NN VBD CD NNP SEC Mr. Lane NNP NNP NNP ... ... Mr. Ridley ‘s decision fires NNP NNP POS NNP VBZ NNP NNP VBZ NNP ... The DT ‘s POS ... Figure 1: A visualization of POS tagging an input sentence x by copying token-labels from the label sequences y′(m) of M = 3 retrieved sentences x′(m). with early successes dating back at least to the taggers of Daelemans (Daelemans, 1993; Daelemans et al., 1996) and the syntactic disambiguation system of Cardie (1994). Similarly motivated approaches remain popular for computer vision tasks, especially when it is impractical to learn a parametric labeling function (Shakhnarovich et al., 2006; Schroff et al., 2015). More recently, there has been renewed interest in explicitly conditioning structured predictions on retrieved neighbors, especially in the context of language generation (Hashimoto et al., 2018; Guu et al., 2018; Gu et al., 2018; Weston et al., 2018), although much of this work uses neighbors as extra conditioning information within a sequenceto-sequence framework (Sutskever et al., 2014), rather than making discrete edits to neighbors in forming new predictions. Retrieval-based approaches to structured prediction appear particularly compelling now with the recent successes in contextualized word embedding (McCann et al., 2017; Peters et al., 2018; Radford et al.; Devlin et al., 2018), which should allow for expressive representations of sentences and phrases, which in turn allow for better retrieval of neighbors for structured prediction. Finally, we note that there is a long history of transfer-learning based approaches to sequence labeling (Ando and Zhang, 2005; Daume III, 2007; Schnabel and Sch¨utze, 2014; Zirikly and Hagiwara, 2015; Peng and Dredze, 2016; Yang et al., 2017; Rodriguez et al., 2018, inter alia), though it is generally not zero-shot. There has, however, been recent work in zero-shot transfer for sequence labeling problems with binary tokenlabels (Rei and Søgaard, 2018). 3 Nearest Neighbor Based Labeling While nearest-neighbor style approaches are compelling for many structured prediction problems, we will limit ourselves here to sequence-labeling problems, such as part-of-speech (POS) tagging or named-entity recognition (NER), where we are given a T-length sequence x = x1:T (which we will assume to be a sentence), and we must predict a corresponding T-length sequence of labels ˆy = ˆy1:T for x. We will assume that for any given task there are Z distinct labels, and denote x’s true but unknown labeling as y = y1:T ∈{1, . . . , Z}T . Sequence-labeling is particularly convenient for nearest-neighbor based approaches, since a prediction ˆy can be formed by simply concatenating labels extracted from the label-sequences associated with neighbors. 
In particular, we will assume we have access to a database D = {x′(m), y′(m)}M m=1 of M retrieved sentences x′(m) and their corresponding true label-sequences y′(m). We will predict a labeling ˆy for x by considering each token xt, selecting a labeled token x′(m) k from D, and then setting ˆyt = y′(m) k .1 3.1 A Token-Level Model We consider a very simple token-level model for this label-agnostic copying, where the probability that x’s t’th label yt is equal to y′(m) k — the k’th label token of sequence x′(m) — simply depends on the similarity between xt and x′(m) k , and is independent of the surrounding labels, conditioned on x and D.2 In particular, we define p(yt = y′(m) k | x, D) ∝exp(xT t x′(m) k ), (1) where the above probability is normalized over all label tokens of all label-sequences in D. Above, xt and x′(m) k (both in RD) represent the contextual word embeddings of the t’th token in x and the k’th token in x′(m), respectively, as obtained by running a deep sequence-model over x and over x′(m). In all experiments we use BERT (Devlin et al., 2018), a model based on the Transformer 1More precisely, we will set ˆyt to be an instance of the label type of which y′(m) k is a label token; this distinction between label types and tokens can make the exposition unnecessarily obscure, and so we avoid it when possible. 2While recent sequence labeling models (Ma and Hovy, 2016; Lample et al., 2016), often model inter-label dependence with a first-order CRF (Lafferty et al., 2001), Devlin et al. (2018) have recently shown that excellent performance can be obtained by modeling labels as conditionally independent given a sufficiently expressive representation of x. 5365 architecture (Vaswani et al., 2017), to obtain contextual word embeddings. We fine-tune these contextual word embeddings by maximizing a latent-variable style probabilistic objective T X t=1 ln M X m=1 X k: y′(m) k = yt p(yt = y′(m) k | x, D), (2) where we sum over all individual label tokens in D that match yt. At test time, we predict ˆyt to be the label type with maximal marginal probability. That is, we set ˆyt to be arg maxz PM m=1 P k: y′(m) k =z p(yt = y′(m) k | x, D), where z ranges over the label types (e.g., POS or named entity tags) present in D. As noted in the introduction, predicting labels in this way allows for the prediction of any label type present in the database D used at test time, and so we can easily predict label types unseen at training time without any additional retraining. 4 Data and Methods Our main experiments seek to determine both whether the label-agnostic copy-based approach introduced above results in competitive sequencelabeling performance on standard metrics, as well as whether this approach gives rise to better zeroshot transfer. Accordingly, our first set of experiments consider several standard sequence-labeling tasks and datasets, namely, POS tagging the Penn Treebank (Marcus et al., 1993) with both the standard Penn Treebank POS tags and Universal POS tags (Petrov et al., 2012; Nivre et al., 2016), and the CoNLL 2003 NER task (Sang and Buchholz, 2000; Sang and De Meulder, 2003). We compare with the sequence-labeling performance of BERT (Devlin et al., 2018), which we take to be near state of the art. We use the standard datasetsplits and evaluations for all tasks, and BIO encoding for all segment-level tagging tasks. We evaluate zero-shot transfer performance by training on one dataset and evaluating on another, without any retraining. 
In particular, we consider three zero-shot transfer scenarios: training with Universal POS Tags on the Penn Treebank and then predicting the standard, fine-grained POS tags, training on the CoNLL 2003 NER data and predicting on the fine-grained OntoNotes NER data (Hovy et al., 2006) using the setup of Strubell et al. (2017), and finally training on the CoNLL 2003 chunking data and predicting on the CoNLL 2003 NER data. We again compare with a BERT baseline, where labels from the original task are deterministically mapped to the most frequent label on the new task with which they coincide.3 Our nearest-neighbor based models were finetuned by retrieving the 50 nearest neighbors of each sentence in a mini-batch of either size 16 or 20, and maximizing the objective (2) above. For training, nearest neighbors were determined based on cosine-similarity between the averaged toplevel (non-fine-tuned) BERT token embeddings of each sentence. In order to make training more efficient, gradients were calculated only with respect to the input sentence embeddings (i.e., the xt in (1)) and not the embeddings x′(m) k of the tokens in D. At test time, 100 nearest neighbors were retrieved for each sentence to be labeled using the fine-tuned embeddings. The baseline BERT models were fine-tuned using the publicly available huggingface BERT implementation,4 and the “base” weights made available by the BERT authors (Devlin et al., 2018). We made word-level predictions based on the embedding of the first tokenized word-piece associated with a word (as Devlin et al. (2018) do), and ADAM (Kingma and Ba, 2014) was used to fine-tune all models. Hyperparameters were chosen using a random search over learning rate, batch size, and number of epochs. Code for duplicating all models and experiments is available at https://github.com/swiseman/ neighbor-tagging. 5 Main Results The results of our experiments on standard sequence labeling tasks are in Table 1. We first note that all results are quite good, and are competitive with the state of the art. The label-agnostic model tends to underperform the standard finetuned BERT model only very slightly, though consistently, and is typically within several tenths of a point in performance. The results of our zero-shot transfer experiments are in Table 2. We see that in all cases the label-agnostic model outperforms standard fine3For the Chunk →NER task, this results in mapping all tags to ‘O’, so we instead use the more favorable mapping of NPs to PERSON tags. 4https://github.com/huggingface/ pytorch-pretrained-BERT 5366 NER Dev. F1 Test F1 BERT 95.14 90.76 NN 94.48 89.94 POS Dev. Acc. Test Acc. BERT 97.56 97.91 NN 97.33 97.64 U-POS Dev. Acc. Test Acc. BERT 98.34 98.62 NN 98.08 98.36 Table 1: Performance of fine-tuned BERT and nearestneighbor based labeling (NN) on NER, POS tagging, and universal POS tagging; see text. BERT numbers are from fine-tuning the huggingface implementation, and differ slightly from Devlin et al. (2018). tuned BERT, often significantly. In particular, we note that when going from universal POS tags to standard POS tags, the fine-tuned label-agnostic model manages to outperform the standard mostfrequent-tag-per-word baseline, which itself obtains slightly less than 92% accuracy. 
The most dramatic increase in performance, of course, occurs on the Chunking to NER task, where the label-agnostic model is successfully able to use chunking-based training information in copying labels, whereas the parametric fine-tuned BERT model can at best attempt to map NP-chunks to PERSON labels (the most frequent named entity in the dataset). In order to check that the increase in performance is not due only to the BERT representations themselves, Table 2 also shows the results of nearest neighbor based prediction without fine-tuning (“NN (no FT)” in the table) on any task. In all cases, this leads to a decrease in performance. 6 Encouraging Contiguous Copies Although we model token-level label copying, at test time each ˆyt is predicted by selecting the label type with highest marginal probability, without any attempt to ensure that the resulting sequence ˆy resembles one or a few of the labeled neighbors y′(m). In this section we therefore consider a decoding approach that allows for controlling the trade-off between prediction confidence and minimizing the number of distinct segments in ˆy that represent direct (segment-level) copies from some neighbor, in the hope that having fewer CoNLL →Onto NER Dev. F1 Test F1 BERT 58.41 58.05 NN 62.17 62.33 NN (no FT) 54.29 55.35 U-POS →POS Dev. Acc. Test Acc. BERT 61.78 59.86 NN 96.70 96.98 NN (no FT) 87.44 87.13 Chunk →NER Dev. F1 Test F1 BERT 9.55 8.03 NN 78.05 71.74 NN (no FT) 75.21 67.19 Table 2: Zero-shot performance of models trained on CoNLL NER and applied to fine-grained OntoNotes NER, with universal POS tags and applied to standard POS tagging, and on CoNLL chunking and applied to CoNLL NER. “NN (no FT)” indicates BERT was not fine tuned even on the original task. distinct copied segments in our predictions might make them more interpretable or accurate. We emphasize that the following decoding approach is in fact applicable even to standard sequence labeling models (i.e., non-nearest-neighbor based models), as long as neighbors can be retrieved at test time. To begin with a simple case, suppose we already know the true labels y for a sequence x, and are simply interested in being able to reconstruct y by concatenating as few segments y′ i:j that appear in some y′(m) ∈D as possible. More precisely, define the set ZD to contain all the unique label type sequences appearing as a subsequence of some sequence y′(m) ∈D. Then, if we’re willing to tolerate some errors in reconstructing y, we can use a dynamic program to minimize the number of mislabelings in our now “prediction” ˆy, plus the number of distinct segments used in forming ˆy multiplied by a constant c, as follows: J(t) = min 1≤k≤t z∈ZD:|z|=k J(t−k) + c + k X j=1 1[yt−k+j ̸= zj], where J(0) = 0 is the base case and |z| is the length of sequence z. Note that greedily selecting sequences that minimize mislabelings may result in using more segments, and thus a higher J. In the case where we do not already know y, but wish to predict it, we might consider a modification of the above, which tries to minimize c times 5367 NATO’s top military men – General George Joulwan ... 1986 – Bishop Desmond Tutu was enthroned as Archbishop of Cape Town , South Africa . O O O B-PER I-PER O O O O O B-LOC I-LOC O B-LOC I-LOC O Phan’s accidental journey started last week in Prince Rupert , British Columbia , ... … tampering and off-spinner Muttiah Muralitharan was called for throwing ... 1986 – Bishop Desmond Tutu was enthroned as Archbishop of Cape Town , South Africa . 
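The copy mechanism behind these results is simple enough to sketch directly. The following illustrative Python — ours, not the released code at the repository above; function names and the flattened-database layout are assumptions — implements the token-level copy distribution of Equation 1, the marginal label-type prediction used at test time, and the per-token term of the training objective in Equation 2.

# Illustrative sketch of label-agnostic copying: the probability of copying
# the label of neighbor token x'_k is a softmax over dot products of
# contextual embeddings (Equation 1); prediction takes the label type with
# the largest marginal probability.
import torch

def copy_distribution(x_t, neighbor_embs):
    # x_t: (d,) contextual embedding of the input token.
    # neighbor_embs: (N, d) embeddings of every label token in the database D.
    scores = neighbor_embs @ x_t              # x_t^T x'_k for all k
    return torch.softmax(scores, dim=0)       # p(y_t = y'_k | x, D)

def predict_label(x_t, neighbor_embs, neighbor_labels):
    # neighbor_labels: list of N label strings aligned with neighbor_embs.
    p = copy_distribution(x_t, neighbor_embs)
    marginals = {}
    for k, z in enumerate(neighbor_labels):
        marginals[z] = marginals.get(z, 0.0) + p[k].item()
    return max(marginals, key=marginals.get)  # argmax_z of the marginal

def token_nll(x_t, neighbor_embs, neighbor_labels, gold_label):
    # Per-token term of the latent-variable objective (Equation 2): negative
    # log of the probability mass on neighbor tokens whose label matches gold.
    p = copy_distribution(x_t, neighbor_embs)
    match = torch.tensor([float(z == gold_label) for z in neighbor_labels])
    return -torch.log((p * match).sum() + 1e-12)

In practice the contextual embeddings come from (fine-tuned) BERT, and since the model only compares embeddings, predicting a token's label reduces to a softmax over dot products with all label tokens in the retrieved database D.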
Figure 2: A CoNLL NER development example ("1986 – Bishop Desmond Tutu was enthroned as Archbishop of Cape Town , South Africa ."), which can be labeled with only two distinct segments. We show those used by a model trained on the NER data (top), and on chunking data and transferred zero-shot (bottom).

In Figure 3 we plot both the F1 score and the average number of distinct segments used in predicting each ŷ against the c parameter from the dynamic program above, for the CoNLL 2003 NER development data in both the standard and zero-shot settings. First we note that we are able to obtain excellent performance with only about 1.5 distinct segments per prediction, on average; see Figure 2 for examples. Interestingly, we also find that using a higher c (leading to fewer distinct segments) can in fact improve performance. Indeed, taking the best values of c from Figure 3 (0.4 in the standard setting and 0.5 in the zero-shot setting), we are able to improve our performance on the test set from 89.94 to 90.20 in the standard setting and from 71.74 to 73.61 in the zero-shot setting, respectively; see Tables 1 and 2.

Figure 3: F1 performance and the average number of distinct segments per predicted labeling on the CoNLL NER development data as c is varied, when training either (top) on the standard training set or (bottom) on the CoNLL chunking data (i.e., zero-shot performance).

7 Conclusion

We have proposed a simple label-agnostic sequence-labeling model, which performs nearly as well as a standard sequence labeler, but improves on zero-shot transfer tasks. We have also proposed an approach to sequence label prediction in the presence of retrieved neighbors, which allows for discouraging the use of many distinct segments in a labeling. Future work will consider problems where more challenging forms of neighbor manipulation are necessary for prediction.

References

Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817–1853. Claire Cardie. 1994. Domain-specific knowledge acquisition for conceptual sentence analysis. Computer Science Department Faculty Publication Series, page 60. Walter Daelemans. 1993. Memory-based lexical acquisition and processing. In Workshop on Machine Translation and Lexicon, pages 85–98. Springer. Walter Daelemans, Jakob Zavrel, Peter Berck, and Steven Gillis. 1996. Mbt: A memory-based part of speech tagger-generator. In Fourth Workshop on Very Large Corpora. Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2018. Search engine guided neural machine translation. In Thirty-Second AAAI Conference on Artificial Intelligence. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association of Computational Linguistics, 6:437–450. Tatsunori B Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. In Advances in Neural Information Processing Systems, pages 10073–10083. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90\% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1064–1074. Mitchell P Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2). Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan T McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In LREC. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In The 54th Annual Meeting of the Association for Computational Linguistics, page 149. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Marek Rei and Anders Søgaard. 2018. Zero-shot sequence labeling: Transferring knowledge from sentences to tokens. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 293–302. 
Juan Diego Rodriguez, Adam Caldwell, and Alexander Liu. 2018. Transfer learning for entity recognition of novel classes. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1974–1985. Erik F Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop. 5369 Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Tobias Schnabel and Hinrich Sch¨utze. 2014. Flors: Fast and simple domain adaptation for part-ofspeech tagging. Transactions of the Association for Computational Linguistics, 2:15–26. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815–823. Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk. 2006. Nearest-neighbor methods in learning and vision: theory and practice (neural information processing). The MIT press. Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2670–2680. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Jason Weston, Emily Dinan, and Alexander H Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. arXiv preprint arXiv:1808.04776. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR. Ayah Zirikly and Masato Hagiwara. 2015. Crosslingual transfer of named entity recognizers without parallel corpora. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 390–396.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5370 Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset Hannah Rashkin1⋆, Eric Michael Smith2, Margaret Li2, Y-Lan Boureau2 1 Paul G. Allen School of Computer Science & Engineering, University of Washington 2 Facebook AI Research [email protected], {ems,margaretli,ylan}@fb.com Abstract One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill. While it is straightforward for humans to recognize and acknowledge others’ feelings in a conversation, this is a significant challenge for AI systems due to the paucity of suitable publicly-available datasets for training and evaluation. This work proposes a new benchmark for empathetic dialogue generation and EMPATHETICDIALOGUES, a novel dataset of 25k conversations grounded in emotional situations. Our experiments indicate that dialogue models that use our dataset are perceived to be more empathetic by human evaluators, compared to models merely trained on large-scale Internet conversation data. We also present empirical comparisons of dialogue model adaptations for empathetic responding, leveraging existing models or datasets without requiring lengthy retraining of the full model. 1 Introduction A desirable trait in a human-facing dialogue agent is to appropriately respond to a conversation partner that is describing personal experiences, by understanding and acknowledging any implied feelings — a skill we refer to as empathetic responding. For instance, while the crossed-out response in Figure 1 is topically relevant, “Congrats! That’s great!” may be more satisfying because it acknowledges the underlying feelings of accomplishment in an empathetic way. In this work, we investigate empathetic response generation from current dialogue systems, and propose experiments using a new resource, EMPATHETICDIALOGUES, as a benchmark to evaluate this skill set. ⋆This work was done while first author was intern at Facebook AI Research (FAIR). I finally got promoted today at work. Why would anyone promote you? Congrats! That’s great! feels proud 😀 EMPATHETICDIALOGUES dataset example Listener Speaker Figure 1: Example where acknowledging an inferred feeling is appropriate Empathetic responding is clearly relevant to dialogue systems that are geared towards general conversation or chit-chat. Indeed, ordinary communication is frequently prompted by people sharing their feelings or circumstances. But researchers analyzing goal-directed conversations have also observed the frequent intrusion of ordinary conversation in those interactions as well, either as a “warm-up” introduction or as a detour (Levinson et al., 2000; Heritage, 2005). Engaging in social talk, reacting to emotional cues and displaying a caring attitude have, in fact, been associated with better task outcomes in many domains (Wentzel, 1997; Levinson et al., 2000; Bickmore and Cassell, 2001; Kim et al., 2004; Fraser et al., 2018). While many of those studies deal with human-human interactions, humans have been shown to often interact with machines in a natural and social way (Reeves and Nass, 1996; Lee et al., 2010), so it is reasonable to expect that dialogue agents would also benefit from empathetic responding. 
Most recent powerful language architectures are trained on vast amounts of barely curated text scrapes, social media conversations, or independent books (Ritter et al., 2010; Zhang et al., 2018; Mazare et al., 2018; Devlin et al., 2018; Liu et al., 2019; Radford et al., 2019). It might be the case 5371 Label: Afraid Situation: Speaker felt this when... “I’ve been hearing noises around the house at night” Conversation: Speaker: I’ve been hearing some strange noises around the house at night. Listener: oh no! That’s scary! What do you think it is? Speaker: I don’t know, that’s what’s making me anxious. Listener: I’m sorry to hear that. I wish I could help you figure it out Label: Proud Situation: Speaker felt this when... “I finally got that promotion at work! I have tried so hard for so long to get it!” Conversation: Speaker: I finally got promoted today at work! Listener: Congrats! That’s great! Speaker: Thank you! I’ve been trying to get it for a while now! Listener: That is quite an accomplishment and you should be proud! Figure 2: Two examples from EMPATHETICDIALOGUES training set. The first worker (the speaker) is given an emotion label and writes their own description of a situation when they’ve felt that way. Then, the speaker tells their story in a conversation with a second worker (the listener). that models trained on this type of data could exhibit some of the aggressive and callous responses that have been observed in spontaneous internet conversations (Anderson, 2015). Unfortunately, while chitchat dialogue benchmarks have been proposed (e.g., Dinan et al., 2019), to the best of our knowledge there are currently no benchmarks gauging whether dialogue agents can converse with empathy. This work aims to facilitate evaluating models’ ability to produce empathetic responses. We introduce a new task for dialogue systems to respond to people discussing situations that cover a wide range of emotions, and EMPATHETICDIALOGUES (ED), a novel dataset with about 25k personal dialogues. Each dialogue is grounded in a specific situation where a speaker was feeling a given emotion, with a listener responding (Figure 2). The new resource consists of crowdsourced one-onone conversations, and covers a large set of emotions in a balanced way. This dataset is larger and contains a more extensive set of emotions than many similar emotion prediction datasets from other text domains such as Scherer and Wallbott (1994), Strapparava and Mihalcea (2007), Mohammad et al. (2018), and Gupta et al. (2017). Our experiments show that large-capacity conversation models trained on spontaneous internet conversation data are not rated as very empathetic. We propose two simple ways to leverage our dataset to improve those models: use utterances from our training data as candidate responses in a retrieval model at inference time, and fine-tune the model on our task. Finally, we explore whether different ways of combining information from related tasks can lead to more empathetic responses. The contributions of this work are thus: 1) we release a novel empathetic dialogue dataset as a new benchmark; 2) we show that training over this dataset can improve the performance of an end-toend dialogue system on empathetic dialogue. 2 Related Work Emotion data Crafting our dataset requires deciding what set of emotions the models should be capable of reacting to. 
Multiple schemas have attempted to organize the spectrum of emotions, from a handful of basic emotions derived from biological responses (Ekman, 1992; Plutchik, 1984) to larger sets of subtle emotions inferred from contextual situations (Skerry and Saxe, 2015). We incorporate emotions from multiple annotation schemas, noting that emotions merely inferred from a situation are important in dialogue scenarios. There is a wide breadth of research in distributional representation approaches for many emotion classification tasks (Duppada et al., 2018; Park et al., 2018; Xu et al., 2018; Mohammad et al., 2018) that build on deep networks pretrained on large-scale weakly-labelled data such as emojis (Felbo et al., 2017) or hashtags (Mohammad, 2012), gathered from public social media content published on Twitter. The SEMEVAL2019 EmoContext challenge also uses conversation data for detection of three basic emotions (‘happy’, ‘sad’, and ‘angry’) over two turns of context from Twitter exchanges (Gupta et al., 2017). We focus on personal conversations rather than using social media data to be closer to a context of a one-onone conversation. Public social media content occurs in front of large “peripheral audiences” (Goffman, 1981) where uncertainty as to how wide that audience is and the need for curated selfpresentation (Goffman, 1959) have been shown to lead to different choices of subject matters compared to private messaging, with people sharing 5372 more intense and negative emotions through private channels (Bazarova et al., 2015; Litt et al., 2014). In this work, we generate a more balanced coverage of emotions than would appear in public social media content, using a domain that is closer to our ultimate goal of training a model for conversation that can respond to any emotion. Controllable language generation Several other works have focused on controlling the emotional content of a text response either through a manually specified target (Zhou and Wang, 2018; Zhou et al., 2018; Wang and Wan, 2018; Hu et al., 2017; Huang et al., 2018) or through a general term to encourage higher levels of affect (Asghar et al., 2018), with evaluations focused on matching a predetermined desired emotion rather than empathetic responding. Niu and Bansal (2018) generate responses conditioned on a specified politeness setting (polite, rude or neutral). Huber et al. (2018) investigate how to respond to emotions detected from an image. Our work focuses on empathetic responses that are appropriate to signals inferred purely from text rather than conveying a pre-specified emotion. Related chit-chat data Several works have attempted to make chit-chat dialogue models more engaging by grounding them in personal contexts (Li et al., 2016b; Zhang et al., 2018; Mazare et al., 2018), focusing on personal facts (“I am from New York”). Another interesting resource is the DAILYDIALOG (DD) dataset (Li et al., 2017), which comprises about 13k dialogues obtained by crawling educational websites intended for learners of English and also has emotion label annotations. Many of the dialogues are focused on topics for ESL learners (ordering from a restaurant, asking for directions, introductions, etc), but only ≈5% of the utterances have a label other than “none” or “happy”. Our task focuses explicitly on conversations about emotionally grounded personal situations, and considers a richer, evenly distributed set of emotions. 
We also introduce an explicit single listener in the conversation who is reacting to the situation being described in an empathetic way, to make the setting as close as possible to our desired goal of a one-on-one empathetic conversation.

3 Talking about Personal Situations

We consider an open-domain one-on-one conversational setting where two people are discussing a situation that happened to one of them, related to a given feeling. We collect around 25k conversations using the following format.

Emotional situation grounding Each conversation is grounded in a situation, which one participant writes about in association with a given emotion label. We consider 32 emotion labels, listed in Figure 3, which we chose by aggregating labels from several emotion prediction datasets (Scherer and Wallbott, 1994; Strapparava and Mihalcea, 2007; Skerry and Saxe, 2015; Li et al., 2017; Mohammad, 2012). These emotion labels cover a broad range of positive and negative emotions. Our goal in providing a single emotion label is to have a situation strongly related to (at least) one particular emotional experience, though we note that some emotions may be very closely related[1] and additional related emotions may be invoked in a given conversation.

[1] Researchers could merge similar emotions, like "afraid" and "terrified", to get coarser labels, if desired.

Figure 3: Distribution of conversation labels within EMPATHETICDIALOGUES training set and top 3 content words used by speaker/listener per category. The 32 labels are: Surprised, Excited, Angry, Proud, Sad, Annoyed, Grateful, Lonely, Afraid, Terrified, Guilty, Impressed, Disgusted, Hopeful, Confident, Furious, Anxious, Anticipating, Joyful, Nostalgic, Disappointed, Prepared, Jealous, Content, Devastated, Embarrassed, Caring, Sentimental, Trusting, Ashamed, Apprehensive, Faithful.

Speaker and listener The person who wrote the situation description (Speaker) initiates a conversation to talk about it. The other conversation participant (Listener) becomes aware of the underlying situation through what the Speaker says and responds. Speaker and Listener then exchange up to 6 more turns. We include two example conversations from the training data in Figure 2 and ten more in Table 5 in the Appendix.
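For concreteness, one grounded conversation of the kind just described (taken from the "Proud" example in Figure 2) can be pictured as the following record; the field names and nesting are illustrative assumptions rather than the released file format.

```python
# Illustrative record for one EMPATHETICDIALOGUES conversation (field names are
# hypothetical; the released data may organize the same information differently).
example = {
    "emotion": "proud",
    "situation": "I finally got that promotion at work! I have tried so hard "
                 "for so long to get it!",
    "turns": [
        ("Speaker",  "I finally got promoted today at work!"),
        ("Listener", "Congrats! That's great!"),
        ("Speaker",  "Thank you! I've been trying to get it for a while now!"),
        ("Listener", "That is quite an accomplishment and you should be proud!"),
    ],
}

# A model playing the Listener only sees the preceding turns, not the emotion
# label or the situation description.
listener_context = [utt for _, utt in example["turns"][:1]]
```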
The models discussed below are tested in the role of Listener responding to the Speaker. Neither the situation description written by the Speaker nor the emotion label is given to the models (just as they were not given to the Listener during dialogue collection). Our data could also be used to generate conversations for the Speaker conditioned on the situation description though we leave this for future work. Collection details We collected crowdsourced dialogues using the ParlAI platform (Miller et al., 2017) to interact with Amazon Mechanical Turk (MTurk), hiring 810 US workers. A pair of workers are asked to (i) select an emotion word each and describe a situation when they felt that way, and to (ii) have a conversation about each of the situations, as outlined below. Each worker had to contribute at least one situation description and one pair of conversations: one as Speaker about the situation they contributed, and one as Listener about the situation contributed by another worker. They were allowed to participate in as many hits as they wanted for the first ∼10k conversations, then we limited the more “frequently active” workers to a maximum of 100 conversations. The median number of conversations per worker was 8, while the average was 61 (some workers were more active contributors than others). To ensure quality, we manually checked random subsets of conversations by our most-frequent workers. Task set-up In the first stage of the task, workers are asked to describe in a few sentences a situation based on a feeling label. We ask the workers to try to keep these descriptions between 1-3 sentences. The average response is 19.8 words. In the second stage, two workers are paired and asked to have two short chats with each other. In each chat, one worker (speaker) starts a conversation about the situation they previously described, and the other worker (listener) responds. Neither can see what the other worker was given as emotion label or the situation description they submitted, so they must respond to each others’ stories based solely on cues within the conversation. Each conversation is allowed to be 4-8 utterances long (the average is 4.31 utterances per conversation). The average utterance length was 15.2 words long. Ensuring balanced emotion coverage After the first few initial rounds of data collection, we forced workers to select an emotion that among three emotion labels that had been the least chosen overall so far if it was their first time working on the task. If they had already performed the task, the offered emotion labels were among those that they had chosen the least often before. Given that a conversation model trained for empathetic responding needs to be able to handle emotions even if they are less frequent, we opted for this balancing procedure to make training for these categories easier, while still allowing for some measure of choice for workers. As shown in Figure 3, the distribution of emotion label prompts is close to evenly distributed, with a few that are selected slightly more/less often. EMPATHETICDIALOGUES dataset statistics The resulting dataset comprises 24,850 conversations about a situation description, gathered from 810 different participants, which are publicly available through the ParlAI framework2. We split the conversations into approximately 80% train, 10% validation, and 10% test partitions. 
To prevent overlap of discussed situations between partitions, we split the data so that all sets of conversations with the same speaker providing the initial situation description would be in the same partition. The final train/val/test split was 19533 / 2770 / 2547 conversations, respectively. We include ten examples from our training set in Appendix Section A. 4 Empathetic Response Generation This section shows how ED can be used as a benchmark to gauge the ability of a model to respond in an empathetic way, and as a training resource to make generic chitchat models more empathetic. We also examine different ways existing models can be combined to produce more empathetic responses. We use ED dialogues to train and evaluate models in the task of generating conversation responses in the Listener role. To emulate a normal conversation, the model has access to previous utterances in the dialogue, but not to the emotion word prompt (e.g., “proud”), nor to 2https://parl.ai/ 5374 y* = argmax hx ⋅hy x1 x2 . . . x1 x2 . . . y1 y2 . . . </s> y1 y2 . . . p(¯y|x) hy hx Context Encoder Context Encoder Candidate Encoder Transformer Decoder Generative Architecture Retrieval Architecture Figure 4: Dialogue generation architectures used in our experiments. The context of concatenated previous utterances is tokenized into x1, x2, · · · , and encoded into vector hx by the context encoder. Left: In the retrieval set-up, each candidate y is tokenized into y1, y2, · · · and encoded into vector hy by the candidate encoder. The system outputs the candidate y∗that maximizes dot product hx · hy. Right: In the generative set-up, the encoded context hx is used as input to the decoder to generate start symbol </s> and tokens y1, y2, · · · . The model is trained to minimize the negative loglikelihood of target sequence ¯y conditioned on context. the situation description generated by the Speaker. Given a dialogue context x of n previous conversation utterances concatenated and tokenized as x1, · · · , xm, followed by a target response ¯y, our models are trained to maximize the likelihood p(¯y|x) of producing the target response. We investigate both generative and retrieval-based settings (Lowe et al., 2016) as described in Figure 4. 4.1 Base Architecture We base our models on Transformer networks (Vaswani et al., 2017), which have proven successful in machine translation and dialogue generation tasks (Zhang et al., 2018; Mazare et al., 2018). Retrieval-based In the retrieval-based set-up, the model is given a large set Y of candidate responses and picks the “best” one, y∗. We first experiment with the retrieval Transformer-based architecture from Yang et al. (2018): two Transformer encoders separately embedding the context, x, and candidates, y ∈Y , as hx and hy, respectively. We also experiment with BERT (Devlin et al., 2018) as base architecture to encode candidates and contexts, using the final hidden vector from BERT as the hx or hy encodings. The model chooses a candidate utterance according to a softmax on the dot product: hx·hy. We minimize the negative log-likelihood of selecting the correct candidate. At training time, we use all of the utterances from the batch as candidates, with a large batch size of 512 to give the model more negative examples (except for BERT for which a batch size of 256 was used). 
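The retrieval training objective just described can be sketched with in-batch negatives: each context encoding h_x is scored against every candidate encoding h_y in the batch by dot product, and the negative log-likelihood of the matching candidate is minimized. This is a minimal PyTorch illustration, not the paper's code; the random tensors stand in for the Transformer or BERT encoder outputs, and the small batch and dimension are assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_retrieval_loss(h_x, h_y):
    """h_x, h_y: (batch, dim) context and candidate encodings; row i of h_y is
    the gold response for row i of h_x, and all other rows act as negatives."""
    scores = h_x @ h_y.t()                         # (batch, batch) dot products
    targets = torch.arange(h_x.size(0), device=h_x.device)
    return F.cross_entropy(scores, targets)        # softmax over in-batch candidates

# Toy usage with random stand-ins for encoder outputs (batch size 512 in the paper).
h_x = torch.randn(8, 300, requires_grad=True)
h_y = torch.randn(8, 300, requires_grad=True)
loss = in_batch_retrieval_loss(h_x, h_y)
loss.backward()
```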
At inference time, we experiment with three sets of candidate utterances for the model to choose from: all of the response utterances in the ED training set (Y ED), all the utterances in the DailyDialog (Li et al., 2017) training set (Y DD), and a million utterances from a dump of 1.7 billion Reddit (R) conversations (Y R). Generative In the generative set-up, we use the full Transformer architecture (Vaswani et al., 2017), consisting of an encoder and a decoder. The Transformer decoder uses the encoder output to predict a sequence of words y, and is trained to minimize the negative log-likelihood of the target sequence ¯y. At inference time, we use diverse beam search from Vijayakumar et al. (2016). Training details Models are pretrained on predicting replies from a dump of 1.7 billion Reddit conversations, starting either from scratch for the Transformer architectures, or from the BERTbase model released by Devlin et al. (2018) for the BERT-based architectures.3 Pretrained models without any fine-tuning on ED will be referred to as “Pretrained” hereafter. We limit the maximum number of word tokens in the context and response to be 100 each. The Transformer networks used in most experiments have the same base architecture (four layers and six transformer heads) and are trained the same way as in Mazare et al. (2018). We also experiment with a larger architecture of five layers (denoted as ”Large”), and BERT retrieval models, that are allowed to train for much longer (see training times in Table 3).4 For all models, we keep the version that has the lowest loss on the validation set. We use 300-d word embeddings pretrained on common-crawl data using fastText (Grave et al., 2018). More training details are provided in Appendix D.1. 4.2 Leveraging the Training Data from ED A retrieval-based model relies on candidates. ED data was explicitly collected with instructions to be empathetic, in a one-on-one setting, which is 3We experimented with directly fine-tuning BERT on ED without first training on Reddit conversations, but this did not perform as well. 4While the models had not fully converged when we stopped training, we trained the Pretrained models for a few iterations more than the corresponding Fine-Tuned models, to ensure that any observed improvement was due to the data used for fine-tuning and not the extra training time. 5375 max hw Concat+Linear hc he hw Pre-trained Emotion Classifier Pre-trained Emotion Classifier d embarrassed I slipped and… I slipped and fell on my face hw d and fell on my face Setup Prepend-k Ensemble Encoder Encoder Encoder Pre-trained Transformer Encoder I slipped and fell on my face hw Figure 5: Incorporating additional supervised information, here from an emotion classification task. An input sequence (either a dialogue context or a candidate) is run through a pre-trained classifier, and the top k output labels are prepended to the sequence, which is then run through the corresponding (context or candidate) encoder to output a hidden representation hw (either hx or hy) as in the base setting. not the case of the Reddit conversation data used for pretraining, and these domain candidates may be better suited to empathetic responding than generic conversation utterances. Thus, we experiment with incorporating ED training candidates into the pool used at inference time by pretrained retrieval-based models, with no fine-tuning on ED. 
For retrieval-based and generative models, we also experiment with fine-tuning pretrained models to predict the next utterance over ED with a context window of four previous utterances, which is the average length of a conversation in our dataset. These models are referred to as “FineTuned” models. This fine-tuning is conducted until convergence for all architectures except those referred to as “Pretrained”. 4.3 Adding Information from External Predictors Many existing models have been pretrained on supervised tasks that may be relevant to empathetic responding. Combining these models with the representations from our base architecture may reap benefits from previous training time and external training data without having to redo the work or requiring access to that data, which may matter to practitioners. Note that this may considerably augment the effective capacity of the resulting models, as well as the total amount of training data used overall, but our goal here is to get an empirical sense of how robust performance improvement is to variations in architecture set-up or supervision domain. We experiment with adding supervised information from two prediction tasks: emotion detection, which is more closely relevant to our task, and topic detection, which may also be useful in crafting relevant replies.5 Prepending Top-k Predicted Labels This setup (Fig. 5), PREPEND-1, is a very simple way to add supervised information to data, requires no architecture modification, and can be used with black-box classifiers. The top predicted label6 from the supervised classifier is merely prepended to the beginning of the token sequence as encoder input, as below: Original:“I finally got promoted!” Prepend-1:“proud I finally got promoted!” Similar methods have been used for controlling the style of generated text (e.g. Niu and Bansal, 2018). Here, we use a fastText model (Joulin et al., 2017) as prediction architecture. Both the context and the candidates are run through the classifier and receive prepended labels. Fine-tuning is conducted similarly as before, but using these modified inputs. We use two external sources of information. To provide emotion signal, we train a classifier to predict the emotion label from the description of the situation written by the Speaker before the dialogue for the training set dialogues of ED (EMOPREPEND-1).7 To gauge whether supervision from a more distant task would still be helpful, we also experiment with a classifier trained on the 20-Newsgroup dataset (Joachims, 1996), for topic classification (TOPICPREPEND-1). 5 Experimental Evaluation We evaluate the models on their ability to reproduce the Listener’s portion of the conversation (i.e. the ability to react to someone else’s story). We use both automated metrics and human evaluation to score each model’s retrievals/generations. Human evaluation is important, as automated metrics don’t always correlate with human judgments of dialogue quality (Liu et al., 2016), but we provide automated metrics to give a sense of how well they align with human judgment on this task. 5We considered multitask or feature concatenation setups, but they did not provide consistent improvements. These experiments are included in Appendix D.2. 6We only discuss prepending the top predicted label here, but also experimented with top-3 and top-5 models, with similar result patterns, shown in Appendix D.3. 7We also experimented with training the classifier on the utterances themselves, with similar results. 
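As a concrete reading of the PREPEND-1 setup from Section 4.3, the sketch below prepends a classifier's top predicted label to the input text as an ordinary token before encoding. The toy classifier and its label strings are stand-ins (assumptions) for the fastText emotion or topic models; only the prepending mechanism itself follows the description above.

```python
def prepend_top_label(text, classifier):
    """Prepend the classifier's top predicted label as a plain token,
    e.g. "I finally got promoted!" -> "proud I finally got promoted!"."""
    label = classifier(text)
    return f"{label} {text}"

# Stub classifier standing in for the pre-trained fastText emotion model.
def toy_emotion_classifier(text):
    return "proud" if "promoted" in text else "neutral"   # hypothetical labels

context = "I finally got promoted!"
print(prepend_top_label(context, toy_emotion_classifier))
# proud I finally got promoted!
```

Both dialogue contexts and candidate responses would be passed through the same transformation before fine-tuning.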
5376 Retrieval Retrieval w/ BERT Generative Model Candidate Source P@1,100 AVG BLEU P@1,100 AVG BLEU PPL AVG BLEU Pretrained R 4.10 4.26 27.96 5.01 ED 43.25 5.51 49.94 5.97 Fine-Tuned ED 56.90 5.88 65.92 6.21 21.24 6.27 ED+DD 5.61 ED+DD+R 4.74 EmoPrepend-1 ED 56.31 5.93 66.04 6.20 24.30 4.36 TopicPrepend-1 ED 56.38 6.00 65.96 6.18 25.40 4.17 Table 1: Automatic evaluation metrics on the test set. Pretrained: model pretrained on a dump of 1.7 billion REDDIT conversations (4-layer Transformer architecture, except when specified BERT). Fine-Tuned: model fine-tuned over the EMPATHETICDIALOGUES training data (Sec. 4.2). EmoPrepend-1, Topic-Prepend1: model incorporating supervised information from an external classifiers, as described in Sec. 4.3. Candidates come from REDDIT (R), EMPATHETICDIALOGUES (ED), or DAILYDIALOG (DD). P@1,100: precision retrieving the correct test candidate out of 100 test candidates. AVG BLEU: average of BLEU-1,-2,-3,-4. PPL: perplexity. All automatic metrics clearly improve with in-domain training on utterances (Fine-Tuned vs. Pretrained), other metrics are inconsistent. Bold: best performance for that architecture. Automated metrics (Table 1) For both retrieval and generative systems, we compute BLEU scores (Papineni et al., 2002) for the model response, comparing against the gold label (the actual response), following the practice of earlier work in dialogue generation (Wen et al., 2015; Li et al., 2016a,b). For the generative systems, we additionally report perplexity of the actual gold response. For the retrieval-based systems, we further compute p@1,100, the accuracy of the model at choosing the correct response out of a hundred randomly selected examples in the test set. When we compute p@1,100, the actual response is included in the candidates, unlike inference from the retrieval systems for all other metrics, which only uses training utterances as candidates. Human ratings (Table 2) We ran crowdsourcing tasks on MTurk (further details in Appendix B). Participants were given a model’s output for a randomly selected test set example and asked to score different aspects of the model. The rating task provides a means of comparing aspects of responses, and we ask raters specifically about whether the response is acknowledging the conversation partner’s feelings. We collected at least 100 ratings per model and asked about three aspects of performance, all rated on a Likert scale (1: not at all, 3: somewhat, 5: very much): Empathy/Sympathy: did the responses show understanding of the feelings of the person talking about their experience? Relevance: did the responses seem appropriate to the conversation? Were they on-topic? Fluency: could you understand the responses? Did the language seem accurate? 5.1 Results Pretrained models baseline Pretrained conversation models are rated poorly by humans for empathy when the candidates are retrieved from Reddit utterances or when a generative model is used (Table 2). Higher ratings with models based on BERT or larger Transformer models show that increasing the capacity makes the models seem more empathetic, but still remain far from human performance, while being considerably more onerous to train (Table 3).8 Using EMPATHETICDIALOGUES for candidate selection Table 1 shows that merely using the pool of candidates from the training set of ED improves the BLEU scores of retrieval models. 
Using candidates from our dataset also substantially improves the performance of pre-trained retrieval models on all human metrics, particularly the Empathy subscore of most interest to us (Table 2). 8Results on larger retrieval-based Transformer models in Table 9 of the Appendix show the same pattern. 5377 Model Candidate Empathy Relevance Fluency Retrieval Pre-trained R 2.82 ± 0.12 3.03 ± 0.13 4.14 ± 0.10 R+ED 3.16 ± 0.14 3.35 ± 0.13 4.16 ± 0.11 ED 3.45 ± 0.12 3.55 ± 0.13 4.47 ± 0.08 Fine-tuned ED 3.76 ± 0.11 3.76 ± 0.12 4.37 ± 0.09 EmoPrepend-1 ED 3.44 ± 0.11 3.70 ± 0.11 4.40 ± 0.08 TopicPrepend-1 ED 3.72 ± 0.12 3.91 ± 0.11 4.57 ± 0.07 Retrieval w/ BERT Pre-trained R 3.06 ± 0.13 3.29 ± 0.13 4.20 ± 0.10 R+ED 3.49 ± 0.12 3.62 ± 0.12 4.41 ± 0.09 ED 3.43 ± 0.13 3.49 ± 0.14 4.37 ± 0.10 Fine-tuned ED 3.71 ± 0.12 3.76 ± 0.12 4.58 ± 0.06 EmoPrepend-1 ED 3.93 ± 0.12 3.96 ± 0.13 4.54 ± 0.09 TopicPrepend-1 ED 4.03 ± 0.10 3.98 ± 0.11 4.65 ± 0.07 Generative Pre-trained – 2.31 ± 0.12 2.21 ± 0.11 3.89 ± 0.12 Fine-Tuned – 3.25 ± 0.12 3.33 ± 0.12 4.30 ± 0.09 EmoPrepend-1 – 3.16 ± 0.12 3.19 ± 0.13 4.36 ± 0.09 TopicPrepend-1 – 3.09 ± 0.13 3.12 ± 0.13 4.41 ± 0.08 Gold Response – – 4.19 ± 0.10 4.55 ± 0.07 4.68 ± 0.06 Table 2: Human ratings. Fine-tuning on ED and using ED candidates generally improves scores, especially on Empathy, with minimal retraining. Additional external supervision (Prepend) improves the Empathy and Relevance scores for BERT-based models. Bold: best score for that group. Italics: reference model for the group. Using EMPATHETICDIALOGUES for finetuning Additionally, fine-tuning to predict conversation responses on our data improves all automated metrics (Table 1). While fine-tuning on ED data improves performance on predicting the next ED utterance, this may come at the expense of performance when predicting next utterance in other corpora. To measure this, we compared automated metrics on next utterance prediction with pre-trained models and models fine-tuned using ED data (for our base and larger retrieval-based Transformer models) when predicting on DAILYDIALOG and REDDIT (drawing both context and candidates from the same corpus). Compared to the 12-14% P@1,100 increase measured with ED (see Tables 1 and 7), fine-tuning on ED leads to a 5-7% increase on DD, and a 2-3% decrease on R.9 For all three datasets, fine-tuning increases AVG BLEU by 0.2 to 0.5. The slight decrease of performance on R is not surprising because the pre-trained model was trained directly on Reddit predictions. But, the improvement on DD is an encouraging sign that improvements from fine-tuning on ED may generalize to other conversation datasets. Fine-tuning on the ED data 9Numbers for these datasets are included in Table 6 of the appendix. also generally improves human metrics on the ED task, in both retrieval and generative set-ups (Table 2). Augmenting conversation models with external pretrained classifiers Automated and human evaluations suggest that prepending emotion or topic predictions may boost perfomance of high-capacity models based on BERT (but not the smaller models), with Empathy ratings close to approaching human performance. More extensive experiments with large models would be required to confirm that larger capacity makes additional external supervision effective for this task. 
Resources and capacity Table 3 quantifies resource and parameter usage for several models and set-ups, including a larger Transformer generative model (5 layers instead of 4) and BERTbased architectures with substantially more parameters that require longer training. Using ED candidates in pretrained retrieval models, or finetuning pretrained conversation models on ED data makes smaller models perform better than larger ones with minimal increase in resource usage. 5378 Model Params, resources, train examples Emp Rel Fluent Retrieval Pretrained-R 84.3M, 2.5 days, 8GPUs, 1.7B 2.8 3.0 4.1 Pretrained-ED same , same, same 3.5 3.6 4.5 Fine-Tuned same , + 0.5 hour, 1 GPU, +22.3k 3.8 3.8 4.4 Pretrained-Bert-R 217M, 13.5 days, 8GPUs , 1.7B 3.1 3.3 4.2 Pretrained-Bert-ED same, same, same 3.4 3.5 4.4 Fine-Tuned-Bert same, +1hour, 8GPUs, +22.3k 3.7 3.8 4.6 Generative Pretrained 85.1M, 2 days, 32 GPUs, 1.7B 2.3 2.2 3.9 Fine-Tuned same , +1 hour, 1 GPU, +22.3k 3.3 3.3 4.3 Pretrained-Large 86.2M, 2.5 days, 32 GPUs, 1.7B 2.8 3.0 4.0 Fine-Tuned-Large same , +0.5 hour, 1 GPU, +22.3k 3.6 3.6 4.5 Table 3: Training resources for different models, with human ratings for empathy (Emp), relevance (Rel) and fluency (Fluent). Retrieval-based models use reply candidates from the ED training set (ED) or from Reddit (R). Resource comparisons are relative to the first row of each group. Fine-tuning on ED improves all scores (except for Fluency in one case) while requiring minimal additional training resources. SEM is approximately 0.1 Context Setup Model Cand Generated Response Someone came to my door with a gun the other day. Gen Pretrained I’m not sure if you’re joking or not. Gen Fine-Tuned Oh no! What did you do? Gen EmoPrepend Oh no! Did you have a weapon? Gen TopicPrepend I’m glad to hear that. My son failed his exams! Can you believe it! I don’t know what to do! Ret-BERT Pretrained R Hopefully his mistakes can help you know what not to do on the tests. Ret-BERT Pretrained ED Oh no! I ’ m so sorry! I hope you could pull your grade up! Ret-BERT Fine-Tuned ED Oh no! Did he not study? Ret-BERT EmoPrepend ED Im sorry about that! Did your son start school already? Ret-BERT TopicPrepend ED Oh no! Does he have the option of taking it again? Table 4: Examples of model responses. Shading is provided for better legibility. 6 Conclusion We introduce a new dataset of 25k dialogues grounded in situations prompted by specific emotion labels. Our experiments show that using this dataset to provide retrieval candidates or fine-tune conversation models leads to responses that are evaluated as more empathetic. Future work will investigate how to integrate empathetic responding into more general dialogue when, for example, the needs for empathy have to be balanced with staying on topic or providing information. We hope that our results and dataset will stimulate more research in the important direction of making dialog systems more empathetic. Acknowledgments We thank the anonymous reviewers for insightful feedback and suggestions. This material is based, in part, upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082. References Katie Elson Anderson. 2015. Ask me anything: what is reddit? Library Hi Tech News, 32(5):8–11. Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, and Lili Mou. 2018. Affective neural response generation. In European Conference on Information Retrieval, pages 154–166. Springer. 
5379 Natalya N Bazarova, Yoon Hyung Choi, Victoria Schwanda Sosik, Dan Cosley, and Janis Whitlock. 2015. Social sharing of emotions on facebook: Channel differences, satisfaction, and replies. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing, pages 154–164. ACM. Timothy Bickmore and Justine Cassell. 2001. Relational agents: a model and implementation of building user trust. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 396–403. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2019. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098. Venkatesh Duppada, Royal Jain, and Sushant Hiray. 2018. Seernet at semeval-2018 task 1: Domain adaptation for affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 18–23. Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169–200. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1615–1625. Jamie Fraser, Ioannis Papaioannou, and Oliver Lemon. 2018. Spoken conversational ai in video games: Emotional dialogue management increases user engagement. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, pages 179–184. ACM. Erving Goffman. 1959. The presentation of self in everyday life. Erving Goffman. 1981. Forms of talk. University of Pennsylvania Press. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). Umang Gupta, Ankush Chatterjee, Radhakrishnan Srikanth, and Puneet Agrawal. 2017. A sentimentand-semantics-based approach for emotion detection in textual conversations. arXiv preprint arXiv:1707.06996. John Heritage. 2005. Conversation analysis and institutional talk. Handbook of language and social interaction, 103:47. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine LearningVolume 70, pages 1587–1596. JMLR. org. Chenyang Huang, Osmar Zaiane, Amine Trabelsi, and Nouha Dziri. 2018. Automatic dialogue generation with expressed emotions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 49–54. Bernd Huber, Daniel McDuff, Chris Brockett, Michel Galley, and Bill Dolan. 2018. Emotional dialogue generation using image-grounded language models. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, page 277. ACM. Thorsten Joachims. 1996. A probabilistic analysis of the rocchio algorithm with tfidf for text categorization. 
Technical report, Carnegie-mellon univ pittsburgh pa dept of computer science. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 427–431. Sung Soo Kim, Stan Kaplowitz, and Mark V Johnston. 2004. The effects of physician empathy on patient satisfaction and compliance. Evaluation & the health professions, 27(3):237–251. Min Kyung Lee, Sara Kiesler, and Jodi Forlizzi. 2010. Receptionist or information kiosk: how do people talk with a robot? In Proceedings of the 2010 ACM conference on Computer supported cooperative work, pages 31–40. ACM. Wendy Levinson, Rita Gorawara-Bhat, and Jennifer Lamb. 2000. A study of patient clues and physician responses in primary care and surgical settings. Jama, 284(8):1021–1027. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. 5380 Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 994–1003. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 986–995. Eden Litt, Erin Spottswood, Jeremy Birnholtz, Jeff T Hancock, Madeline E Smith, and Lindsay Reynolds. 2014. Awkward encounters of an other kind: collective self-presentation and face threat on facebook. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing, pages 449–460. ACM. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Ryan Lowe, Iulian V Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. On the evaluation of dialogue systems with next utterance classification. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 264. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In Proceedings of ICLR. Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84. Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval2018 task 1: Affect in tweets. In SemEval@NAACLHLT. Saif M. Mohammad. 2012. #emotional tweets. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval ’12, pages 246–255, Stroudsburg, PA, USA. Association for Computational Linguistics. Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Association for Computational Linguistics, 6:373–389. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Ji Ho Park, Peng Xu, and Pascale Fung. 2018. Plusemo2vec at semeval-2018 task 1: Exploiting emotion knowledge from emoji and #hashtags. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 264–272. Robert Plutchik. 1984. Emotions: A general psychoevolutionary theory. Approaches to emotion, 1984:197–219. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Byron Reeves and Clifford Ivar Nass. 1996. The media equation: How people treat computers, television, and new media like real people and places. Cambridge university press. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180. Association for Computational Linguistics. Klaus R. Scherer and Harald G. Wallbott. 1994. Evidence for universality and cultural variation of differential emotion response patterning. Journal of personality and social psychology, 66 2:310–28. Amy Skerry and Rebecca Saxe. 2015. Neural representations of emotion are organized around abstract event features. Current Biology, 25:1945–1954. Carlo Strapparava and Rada Mihalcea. 2007. Semeval2007 task 14: Affective text. In SemEval@ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. 5381 Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. CoRR, abs/1610.02424. Ke Wang and Xiaojun Wan. 2018. Sentigan: generating sentimental texts via mixture adversarial networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4446–4452. AAAI Press. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Kathryn R Wentzel. 1997. Student motivation in middle school: The role of perceived pedagogical caring. 
Journal of educational psychology, 89(3):411. Peng Xu, Andrea Madotto, Chien-Sheng Wu, Ji Ho Park, and Pascale Fung. 2018. Emo2vec: Learning generalized emotion representation by multitask training. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 292–298. Yinfei Yang, Steve Yuan, Daniel Cer, Sheng yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversations. In Rep4NLP@ACL. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2204–2213. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Thirty-Second AAAI Conference on Artificial Intelligence. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1128–1137.
2019
534
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5382–5391 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5382 Know More about Each Other: Evolving Dialogue Strategy via Compound Assessment Siqi Bao, Huang He, Fan Wang, Rongzhong Lian and Hua Wu Baidu Inc., China {baosiqi, hehuang, wangfan04, lianrongzhong, wu hua}@baidu.com Abstract In this paper, a novel Generation-Evaluation framework is developed for multi-turn conversations with the objective of letting both participants know more about each other. For the sake of rational knowledge utilization and coherent conversation flow, a dialogue strategy which controls knowledge selection is instantiated and continuously adapted via reinforcement learning. Under the deployed strategy, knowledge grounded conversations are conducted with two dialogue agents. The generated dialogues are comprehensively evaluated on aspects like informativeness and coherence, which are aligned with our objective and human instinct. These assessments are integrated as a compound reward to guide the evolution of dialogue strategy via policy gradient. Comprehensive experiments have been carried out on the publicly available dataset, demonstrating that the proposed method outperforms the other state-of-the-art approaches significantly. 1 Introduction Intelligent dialogue systems have become popular in our daily life, such as the chit-chat XiaoIce and the task-oriented Echo. These systems serve as smart agents to facilitate more effective interaction with users in various situations, like ticket booking or recreation offering. Primary dialogue systems (Vinyals and Le, 2015; Shang et al., 2015) try to mimic human beings to generate fluent utterances, whereas paying little attention to the intrinsic factors of human conversations: exchanging information and enhancing interaction (Li et al., 2017). Therefore, they are prone to generate dull and generic responses. To address this problem, in recent years, several approaches have been developed to generate informative responses based on external knowledge. Recently, a knowledge grounded model is proposed in Ghazvininejad et al. (2018), where relevant factual texts are encoded into memory and replies are decoded via attention mechanism. Instead of using unstructured text knowledge, CCM (Zhou et al., 2018) relies on structured knowledge to generate rich-information response. However, all these approaches are designed for the singleround settings. While applied to the real-world scenarios (where dialogues are conducted for multiple rounds), the dialogue quality will be severely limited due to the lack of coordination among different rounds. As discussed above, one of the ultimate goals in human conversation is that information can be exchanged effectively through interaction. Particularly, we argue that successful multi-turn dialogues are determined by the joint experience of both participants in the conversation, i.e., both participants need to get aware of their counterparts and express themselves effectively. To this end, we propose the objective of letting both sides know more about each other. With this objective, a novel Generation-Evaluation framework is introduced for the multi-turn dialogues. As the name Generation-Evaluation indicates, there are two fundamental modules in our framework. 
In the module of dialogue generation, a two-stage generative model is employed, where the dialogue strategy determines which knowledge to use for the current turn and the decoder uses this knowledge to produce the response. In the module of evaluation, the generated dialogues are assessed from the following two aspects: informativeness, which measures the effectiveness of information exchange and coherence, which reflects the response’s suitableness. Both modules are assembled within a unified reinforcement learning pipeline. The generation module simulates knowledge grounded conversations with two dialogue agents and receives compound reward from the 5383 ! " Namaste. How are you today? I am doing great. How are you? Great, thanks. My children and I were just about to watch Game of Thrones. Nice. How old are you children? … Dialogue Generation Informativeness Coherence Strategy Evaluation Compound Reward Dialogue Reward Backgrounds I have four children I love watching Game of Thrones … … I like to ski I hate Mexican food Encourage informative & concise conversations to exchange information Generate coherent & proper responses to enhance interaction Coverage Duplication Relevance Consistency Figure 1: Framework overview. Left: dialogue generation. Right: strategy evaluation. evaluation module. By keeping adapted for higher evaluation rewards, the generation module will be continuously evolving for better dialogue quality. As suggested in Yarats and Lewis (2018), applying reinforcement learning on the decoder might bring in adverse impacts on the linguistic quality. As such, in the generation module, the decoder is pre-trained with supervised learning and the dialogue strategy keeps evolving with reinforcement learning. The contributions of this work are summarized as follows: • With the objective of letting both participants know more about each other, we propose a novel Generation-Evaluation framework, which facilitates the generation of informative and coherent dialogues. • To evaluate the effectiveness of dialogue strategy, two metrics are specially designed on informativeness and coherence, which are further integrated as a compound reward. Towards maximizing this reward, the strategy of knowledge selection is able to evolve via reinforcement learning. • Intensive and extensive experiments have been carried out on PersonaChat. As compared with other state-of-the-art approaches, our method obtains superior performances on both automatic and human evaluations. 2 Methodology 2.1 Framework Overview Our Generation-Evaluation framework is illustrated in Figure 1. Under the deployed strategy of knowledge selection, two dialogue agents introduce themselves alternately in accordance with corresponding backgrounds and make responses Utterance Encoder Context Encoder Knowledge Encoder !"#$ % &"#$ … MLP-ATT MLP-ATT + Decoder Response !" Sampling () () * !"#$ * &"#$ * %* +(%|&") Embedding Layer Figure 2: Architecture of dialogue generation. to their counterparts in a proper way. The generated dialogues together with the agents’ backgrounds are collected for strategy evaluation in terms of two essential aspects: informativeness and coherence. Then these assessments are integrated as a compound reward, acting as the reinforcing signal for the evolution of knowledge interaction strategy. In the following parts, we will first introduce the process of dialogue generation, present the metrics utilized in strategy evaluation and then describe the strategy evolution via compound assessment. 
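For concreteness, the overall Generation-Evaluation loop described above can be sketched as follows. This is a minimal Python sketch under stated assumptions: generate_dialogue, informativeness, coherence and update_strategy are hypothetical placeholder names for the components detailed in the following subsections, not part of the paper's released code.

    # Minimal sketch of one Generation-Evaluation iteration (helper functions
    # are placeholders for the modules described in Sections 2.2-2.4).
    def generation_evaluation_step(agents, backgrounds, strategy,
                                   generate_dialogue, informativeness,
                                   coherence, update_strategy, turns=8):
        # 1) Generation: two agents converse under the current knowledge strategy.
        dialogue = generate_dialogue(agents, backgrounds, strategy, turns)
        # 2) Evaluation: conversation-level informativeness plus per-utterance
        #    coherence are combined into a compound reward.
        reward = informativeness(dialogue, backgrounds)
        reward += sum(coherence(u, dialogue, backgrounds) for u in dialogue)
        # 3) Evolution: the knowledge-selection strategy is updated via
        #    policy gradient using the compound reward.
        update_strategy(strategy, dialogue, reward)
        return dialogue, reward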
2.2 Dialogue Generation

The detailed network architecture of dialogue generation is illustrated in Figure 2. With the context and background knowledge as input, our dialogue strategy selects one piece of appropriate knowledge to generate an informative and coherent response. The background Z = {z_1, z_2, ..., z_M} includes a set of knowledge, where a piece of knowledge z_i is presented by one sentence, such as "i like to ski". Utterance u_{t-1} is the last response from the other participant and the context c_t = concat(u_1, u_2, ..., u_{t-1}) is the current conversation history. It is worth noting that in our dialogue generation, the input context c_t is separated into two parts, with independent encoders employed for utterance u_{t-1} and context c_{t-1} respectively. The motivation to do so lies in two aspects: for the sake of coherence, the knowledge utilized in the t-th turn is supposed to be semantically related to the partner's last utterance u_{t-1}; to avoid repetition, the knowledge utilized in the t-th turn should be dissimilar with the former dialogue history c_{t-1}.

After passing through the embedding layer and the encoders of gated recurrent units (GRU) (Cho et al., 2014), the inputs obtain their corresponding feature representations: knowledge z_i^G, utterance u_{t-1}^G and context c_{t-1}^G. Z^G = {z_1^G, z_2^G, ..., z_M^G} is the set of knowledge representations. With discriminative representations u_{t-1}^G, c_{t-1}^G and Z^G obtained, the prior distribution over knowledge p(Z|c_t) can be estimated through MLP attention (MLP-ATT) (Bahdanau et al., 2015):

p(Z|c_t) = p(Z|u_{t-1}) * 0.5 + p(Z|c_{t-1}) * 0.5,
p(z_i|u_{t-1}) = softmax( MLP-ATT(u_{t-1}^G, z_i^G) ),
p(z_i|c_{t-1}) = softmax( MLP-ATT(c_{t-1}^G, z_i^G) ),   (1)

where softmax is defined as softmax(s_i) = e^{s_i} / \sum_j e^{s_j} (Sukhbaatar et al., 2015). The computation of MLP-ATT is given as follows:

MLP-ATT(x, y) = V_1^T tanh(x W_1 + y W_2),

where W_1, W_2 \in R^{d \times d} are weight matrices and V_1 \in R^d is a weight vector. p(Z|c_t) is the probability distribution for knowledge selection and \sum_{i=1}^{M} p(z_i|c_t) = 1. (If p(z_i|c_t) = 0.2, it means that the probability of selecting knowledge z_i is 0.2.) According to the estimated prior probability distribution p(Z|c_t), one piece of knowledge can be sampled z_i ~ p(Z|c_t) and sent to the decoder for response generation p(u_t|z_i, u_{t-1}).

It is obvious that the key component for informative and coherent conversation is the appropriate knowledge selection, shown as Blue areas in Figure 2. Nevertheless, a high-fidelity decoder p(u_t|z_i, u_{t-1}), which is able to express the given knowledge accurately, is also indispensable. To this end, the pre-training is carried out using those target responses associated with ground-truth knowledge via supervised learning. The training data is in the format of {u_{t-1}, z_i, u_t}, where u_{t-1} is the last utterance from the partner, u_t is the target response and z_i is the ground-truth knowledge used in u_t. Major steps in the pre-training are listed as follows: (1) the encoders convert the knowledge and utterance into z_i^G and u_{t-1}^G; (2) the decoder tries to generate the response u_t based on the ground-truth knowledge z_i and the last utterance u_{t-1}; (3) parameters in the encoders and decoder (Gray areas) are optimized via supervised learning, with the loss functions defined in Zhao et al. (2017). For the rest of the parameters, related to the knowledge selection strategy (Blue areas), they will keep evolving through Generation-Evaluation reinforcement learning, which will be discussed in detail.
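As a minimal illustration of Equation (1), the NumPy sketch below scores each encoded knowledge sentence against the utterance and context encodings with MLP attention and mixes the two resulting distributions; the toy dimensionality and random vectors are assumptions for illustration only, not the paper's actual configuration.

    import numpy as np

    def softmax(s):
        e = np.exp(s - s.max())
        return e / e.sum()

    def mlp_att(x, y, W1, W2, V1):
        # MLP-ATT(x, y) = V1^T tanh(x W1 + y W2)
        return V1 @ np.tanh(x @ W1 + y @ W2)

    def knowledge_prior(u_prev, c_prev, Z, W1, W2, V1):
        # p(Z | c_t) = 0.5 * p(Z | u_{t-1}) + 0.5 * p(Z | c_{t-1})
        s_u = np.array([mlp_att(u_prev, z, W1, W2, V1) for z in Z])
        s_c = np.array([mlp_att(c_prev, z, W1, W2, V1) for z in Z])
        return 0.5 * softmax(s_u) + 0.5 * softmax(s_c)

    # toy usage with random GRU-style encodings (d = 8 for illustration)
    rng = np.random.default_rng(0)
    d = 8
    W1, W2, V1 = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
    Z = [rng.normal(size=d) for _ in range(5)]      # encoded knowledge sentences
    p = knowledge_prior(rng.normal(size=d), rng.normal(size=d), Z, W1, W2, V1)
    z_idx = rng.choice(len(Z), p=p)                 # sample one piece of knowledge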
2.3 Strategy Evaluation

Multi-turn knowledge grounded conversations are generated by two dialogue agents. To evaluate the effectiveness of the deployed strategy, generated conversations and the agents' background knowledge are collected for evaluation, and two metrics are judiciously designed – informativeness and coherence.

2.3.1 Informativeness

Information is a crucial ingredient in generating meaningful conversations. Although many approaches have been introduced to boost the generation of informative utterances, due to a lack of thorough control on effective information utilization, they are prone to generating repetitive utterances in multi-turn conversations. In this paper, we design a novel informativeness metric to measure the effective exploitation of information at the conversation level, which encourages extensive coverage and avoids unnecessary repetition.

To illustrate the informativeness assessment, a toy example is given in Figure 3.

Figure 3: Toy example of informativeness assessment: activation a_t records whether a piece of knowledge is expressed in u_t, coverage v_t keeps track of expressed knowledge and repetition d_t detects reiteration.

Assume that there are five pieces of background knowledge z_i within the conversation participants. For each generated utterance u_t, it will be assessed whether z_i is expressed by u_t or not, which can be approximately inferred through keyword matching (in the form of a binary variable 0/1). Such estimation over the background knowledge is stored in the activation vector a_t. If relying on a_t as the informativeness metric, it is able to boost informative response generation on the utterance level. However, it inevitably produces repetitive responses due to the lack of information utilization control on the conversation level.

Inspired by the coverage mechanism in machine translation (Tu et al., 2016) and text summarization (See et al., 2017), we propose to maintain one coverage vector v_t to keep track of the activation on each piece of information during the conversation flow. From the toy example, it can be observed that the coverage vector v_t increases with the amount of expressed knowledge. In other words, a higher mean value of v_t indicates that the participants have expressed more background knowledge, which gives them a better chance to know more about each other.

Although the coverage mechanism stimulates extensive knowledge expression, it still lacks effective and explicit control on reiteration. For the sake of user experience, we also maintain one repetition vector d_t to detect information redundancy, whose estimation is carried out by jointly considering the current information activation and the last-step coverage status:

d_t = min(a_t, v_{t-1}),   (2)

where the function min(·) calculates the element-wise minimum value between two vectors. As shown in Figure 3, when utterance u_3 reiterates the same information as before, it does not increase knowledge coverage and leads to unnecessary repetition.

In summary, instead of focusing on the information activation of the single-round response, our informativeness metric considers the effective information utilization in the scope of the multi-turn conversation. For a conversation with T turns, its informativeness is estimated as follows:

r_I = mean(v_T) - \sum_{t=1}^{T} mean(d_t),   (3)

where the function mean(·) calculates the mean value of a vector. By maintaining information coverage and internal repetition simultaneously, the conversation-level informativeness is able to encourage informative and concise conversations.
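The informativeness reward of Equations (2)-(3) can be computed as in the sketch below, assuming (as the toy example suggests) that the coverage vector is the element-wise maximum of all activation vectors seen so far.

    import numpy as np

    def informativeness_reward(activations):
        """activations: one 0/1 vector a_t per generated utterance, each of
        length M (the number of background knowledge pieces)."""
        coverage = np.zeros(len(activations[0]))         # v_t
        repetition_penalty = 0.0
        for a_t in activations:
            a_t = np.asarray(a_t, dtype=float)
            d_t = np.minimum(a_t, coverage)              # Eq. (2): reiterated knowledge
            repetition_penalty += d_t.mean()
            coverage = np.maximum(coverage, a_t)         # update coverage v_t
        return coverage.mean() - repetition_penalty      # Eq. (3)

    # toy run in the spirit of Figure 3: the third utterance repeats z_3
    print(informativeness_reward([[0, 0, 1, 0, 0],
                                  [0, 1, 0, 0, 0],
                                  [0, 0, 1, 0, 0]]))     # 0.4 - 0.2 = 0.2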
2.3.2 Coherence

For the sake of natural interaction, coherence is another indispensable ingredient in strategy evaluation. In addition to relevance with the context, the coherence assessment also evaluates the conversation's consistency with the backgrounds. The motivation to enforce background consistency is to confine the massive and loose interactive responses into a reasonable space.

Figure 4: Illustration of coherence assessment, where H-GRU refers to hierarchical GRU and the symbol ⊕ denotes vector concatenation.

Considering that the essence of coherence is semantic relevance between two inputs, and many deep learning based approaches have demonstrated their superiority at capturing semantic relevance, such as DSSM (Huang et al., 2013), SMN (Wu et al., 2017) and BERT (Devlin et al., 2018), we use a symmetric neural network for the coherence assessment in this paper. As shown in Figure 4, for a generated utterance u_t, its coherence with the context c_t and the corresponding backgrounds Z can be estimated through this symmetric network. The utterance is fed into the embedding layer, followed by a gated recurrent unit (GRU) (Cho et al., 2014) and a multi-layer perceptron (MLP), to capture a discriminative representation. As for the context and backgrounds, they are fed into the embedding layer and hierarchical GRUs for better feature extraction (Sordoni et al., 2015), and the resulting representations are concatenated together to obtain a comprehensive representation. The final coherence is estimated as the inner product between the two vectors:

r_{C_t} = σ( MLP(u_t^G) · MLP([c_t^H, z^H]) ),
where MLP(x) = σ(x W_1 + b_1) W_2 + b_2.   (4)

σ(·) is the sigmoid activation, [·, ·] denotes vector concatenation, and MLP includes two linear transformations with a sigmoid activation in between. The above equation evaluates the coherence for each generated utterance u_t by considering the existing conversation history and the corresponding background; it is further summed up over all utterances as the conversation-level coherence assessment.
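A minimal PyTorch sketch of the symmetric scorer in Equation (4) is given below; the GRU and hierarchical-GRU encoders are abstracted away and replaced by pre-computed input vectors, and all dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CoherenceScorer(nn.Module):
        """r_C = sigmoid( MLP(u) . MLP([c, z]) ), with
        MLP(x) = sigmoid(x W1 + b1) W2 + b2 as in Eq. (4)."""
        def __init__(self, dim, hidden):
            super().__init__()
            self.mlp_u = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid(),
                                       nn.Linear(hidden, hidden))
            self.mlp_cz = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Sigmoid(),
                                        nn.Linear(hidden, hidden))

        def forward(self, u_vec, c_vec, z_vec):
            left = self.mlp_u(u_vec)                                 # utterance branch
            right = self.mlp_cz(torch.cat([c_vec, z_vec], dim=-1))   # context + backgrounds branch
            return torch.sigmoid((left * right).sum(dim=-1))         # inner product -> (0, 1)

    scorer = CoherenceScorer(dim=32, hidden=64)
    u, c, z = torch.randn(1, 32), torch.randn(1, 32), torch.randn(1, 32)
    coherence = scorer(u, c, z)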
2.3.3 Compound Assessment

To provide a unified reinforcement signal for strategy evolution, the informativeness and coherence assessments are further integrated as a compound reward. For a conversation τ with T turns, the compound assessment is defined as:

R(τ) = \sum_{t=1}^{T} r_{C_t} + r_I.   (5)

The two intrinsic factors in human conversations – exchanging information and enhancing interaction – are both included in our compound reward.

2.4 Strategy Evolution

From the perspective of reinforcement learning, the knowledge selection within a conversation can be regarded as sequential actions taken within a trajectory. As such, the objective of knowledge grounded dialogue generation can be written as:

max J(θ) = E_{τ ∼ p(τ; θ)}[R(τ)],   (6)

where θ refers to the network parameters of dialogue generation, τ ∼ p(τ; θ) is a multi-turn conversation generated under the deployed strategy and R(τ) is the compound assessment of strategy evaluation. The gradient of the above objective can be further derived as follows:

∇_θ J(θ) = \sum_{t=1}^{T} ∇_θ log [ p(z_i|c_t) p(u_t|z_i, u_{t-1}) ] (R(τ) − b)
         = \sum_{t=1}^{T} ∇_θ log p(z_i|c_t) (R(τ) − b) + \sum_{t=1}^{T} ∇_θ log p(u_t|z_i, u_{t-1}) (R(τ) − b),   (7)

where b is the reward baseline estimated with K Monte Carlo samples: b = \sum_k R(τ^{(k)}) / K.

In Equation (7), the first term is about the dialogue strategy of appropriate knowledge selection and the second term is about the decoding process with the selected knowledge. As suggested in (Lewis et al., 2017; Yarats and Lewis, 2018), applying reinforcement learning on the decoder might lead to poor linguistic quality. As such, in this paper, the focus is on the strategy evolution and the gradient update is further simplified:

∇_θ J(θ) = \sum_{t=1}^{T} ∇_θ log p(z_i|c_t) (R(τ) − b).   (8)

The physical meaning of the above equation is as follows: the strategies that lead to higher conversation rewards will be encouraged and those that result in lower conversation rewards will be suppressed. As demonstrated in Equation (8), the network parameters related to the dialogue strategy (Blue areas in Figure 2) will keep evolving via compound assessment. The rest of the parameters are pre-trained with supervised learning and kept fixed during strategy evolution.
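The simplified update in Equation (8) amounts to REINFORCE with a Monte Carlo baseline applied only to the knowledge-selection distribution. A minimal PyTorch sketch, with randomly simulated rollouts standing in for actual generated conversations, is shown below.

    import torch

    def strategy_loss(log_p_selected, rewards):
        """log_p_selected: [K, T] log p(z_i | c_t) of the knowledge chosen at each
        of T turns in K Monte Carlo rollouts; rewards: [K] compound rewards R(tau)."""
        baseline = rewards.mean()                     # b = sum_k R(tau_k) / K
        advantage = (rewards - baseline).detach()
        # minimising this loss performs gradient ascent on J(theta) in Eq. (8)
        return -(log_p_selected.sum(dim=1) * advantage).mean()

    # simulated example: K = 16 rollouts, T = 8 turns, M = 5 knowledge pieces
    logits = torch.randn(16, 8, 5, requires_grad=True)
    chosen = torch.randint(0, 5, (16, 8))
    log_p = torch.log_softmax(logits, dim=-1).gather(-1, chosen.unsqueeze(-1)).squeeze(-1)
    loss = strategy_loss(log_p, torch.rand(16))
    loss.backward()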
3 Experiments

3.1 Settings

All experiments have been carried out on the publicly available dataset – PersonaChat (Zhang et al., 2018), which provides both human-annotated conversations and the participants' background knowledge (persona profiles). PersonaChat has separate training and testing sets. In total, there are 8,939 dialogues (131,438 turns) in the training set and 968 dialogues (15,024 turns) in the testing set. Comprehensive comparisons have been made to the following methods:
• Sequence to sequence with attention (Seq2Seq) (Vinyals and Le, 2015) is the classic response generation approach, without using any extra knowledge.
• The knowledge grounded memory network (Mem-Net) (Ghazvininejad et al., 2018) encodes text knowledge into memory to boost the generation of informative responses.
• KG-Net (Lian et al., 2019) makes use of the posterior knowledge distribution in the training process for accurate informative response generation and achieves the state-of-the-art results on PersonaChat.
• Li et al. (2016b) first employed reinforcement learning for dialogue generation (RL-DG), where simple Seq2Seq was used as the generation model. In the experiments, to improve RL-DG's performance, KG-Net is utilized as the base model for informative generation.

In our strategic knowledge interaction, the parameters of the knowledge encoder, utterance encoder and decoder were pre-trained with supervised learning. For the learnable parameters (Blue areas in Figure 2), the context encoder was initialized with the utterance encoder and random initialization was employed for the rest of the layers.¹ The training process was carried out using the Adam optimizer, with a learning rate of 2e-4. The number of conversation turns T was set to 8, the batch size was set to 8 and the number of Monte Carlo samples K was set to 16.

¹Our code and model will be released at https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-SEEDS.

3.2 Experimental Results

The training curves of reinforcement learning are shown in Figure 5, which are the results averaged over 5 random seeds. The horizontal axis refers to the number of trained dialogues. The vertical axes stand for the compound episode reward, informativeness and coherence, respectively. These results demonstrate that all rewards increase stably during training and remarkable increments are achieved after convergence.

Figure 5: Training curves of reinforcement learning.

3.2.1 Automatic Evaluation

The experimental results with automatic measurements are summarized in Table 1, with the highest value written in bold. Distinct-1/2 (Li et al., 2016a) measures the diversity of generated conversations, defined as the number of distinct unigrams or bigrams divided by the total number of generated words. Knowledge-Recall/Precision/F1 (Dinan et al., 2019b) measures the informativeness of generated conversations with regard to the background knowledge, defined as:

Recall = |W_G ∩ W_K| / |W_K|,
Precision = |W_G ∩ W_K| / |W_G|,
F1 = 2 × Recall × Precision / (Recall + Precision),   (9)

where W_G and W_K refer to the sets of non-stop words in the generated conversations and the background knowledge.

Table 1: Experimental results with automatic measurements, with highest value written in bold.

Table 1 demonstrates that the proposed method obtains the best results. The distinct measurement indicates that more diverse words or phrases are produced by our method. The knowledge measurement verifies the effectiveness of our approach on knowledge utilization in multi-turn conversations. As compared with the state-of-the-art KG-Net, the knowledge F1 of our method is increased by 3.6%, which is a significant improvement.

3.2.2 Human Evaluation

Currently, most automatic metrics are not aligned well with human beings in dialogue evaluation (Liu et al., 2016), such as BLEU, ROUGE, etc. In our experiments, extensive evaluations have been carried out with crowd-sourced human beings. With the background knowledge (persona profiles of two participants) and the first start utterance in the testing set, simulated dialogues were generated using each method. There are 8 turns in the simulated conversations (1 start utterance followed by 7 successive generated responses). Our method is compared with the rest of the state-of-the-art approaches and each group contains 100 pairs of simulated dialogues, randomly selected from the testing set. For each pair of conversations, they share the same background knowledge and 3 crowd-sourced workers are asked to compare the two simulated conversations at the same time.

The human evaluations include the following aspects: (1) Overall refers to the general preference towards the two conversations, with a joint consideration of effective information exchange and coherent interaction. (2) Coverage measures the amount of knowledge expressed during conversations. (3) Concise considers the information repetition and utterance reiteration within conversations. (4) Coherence estimates the consistency and appropriateness within the interaction between participants. The final comparison results by crowd-sourced workers are determined through majority voting and are summarized in Table 2. These results demonstrate that our method is consistently and significantly better than the other state-of-the-art approaches.

Table 2: Experimental results with human evaluation, with highest value written in bold.

Table 3: Simulated dialogues with the same personas and start utterance.

Figure 6: Visualisation of knowledge utilization in conversations of our method (Upper) and KG-Net (Bottom). Horizontal: background knowledge in the first 12 simulated dialogues, separated by Purple lines. Vertical: knowledge selection probability of each response by one participant.
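For reference, the automatic metrics reported in §3.2.1 – distinct-n and the knowledge recall/precision/F1 of Equation (9) – can be computed roughly as in the sketch below; the stop-word handling is an assumption, since the paper does not specify a stop list.

    def distinct_n(utterances, n):
        # distinct n-grams divided by the total number of generated n-grams
        tokens = [tok for u in utterances for tok in u.split()]
        ngrams = list(zip(*[tokens[i:] for i in range(n)]))
        return len(set(ngrams)) / max(len(ngrams), 1)

    def knowledge_prf1(generated_words, knowledge_words, stopwords=frozenset()):
        W_G = set(generated_words) - stopwords
        W_K = set(knowledge_words) - stopwords
        overlap = len(W_G & W_K)
        recall = overlap / max(len(W_K), 1)
        precision = overlap / max(len(W_G), 1)
        f1 = 2 * recall * precision / max(recall + precision, 1e-12)
        return recall, precision, f1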
3.3 Discussions 3.3.1 Case Analysis Table 3 provides several detailed cases of the simulated dialogues generated by each method, under the same background knowledge (persona profiles) and the start utterance. It can be observed that Mem-Net tends to generate general and fluent responses, like “what about you”, while expresses limited background knowledge. Although informative utterances can be generated by KG-Net, due to a lack of control on information utilization, serious repetition has emerged in the simulated conversation. In addition to redundant responses, another problem with RL-DG is the poor linguistic quality, which might be caused by the decoder update via RL (Lewis et al., 2017; Yarats and Lewis, 2018). Our method is able to generate informative and coherent conversations because the decoder is fixed and only the knowledge selection strategy keeps evolving via compound assessment Visualization of knowledge utilization in conversations is displayed in Figure 6, where the first 12 simulated dialogues from the testing set are presented. The horizontal axis is the background knowledge in the dialogues, separated by Purple lines. The vertical axis shows the knowledge selection probability p(zi|ct) of each utterance, made by one participant in the simulated dialogues (in total 4 utterances). The upper part (our method) demonstrates extensive knowledge coverage, while the bottom part (KG-Net) exhibits repetitive knowledge utilization (highlighted with red circles). 3.3.2 Correlation Analysis The correlation statistics between automatic metrics (including the distinct-1/2, knowledge-R/P/F1 and our compound reward) and human annotations are provided in Table 4. The Pearson correlation coefficient (Benesty et al., 2009) is estimated using the annotated overall score of our method v.s. Table 4: Correlation between automatic metrics and human evaluations, with highest value written in bold. Table 5: Comparison with Lost in Conversation, with highest value written in bold. KG-Net. These results indicate our designed compound reward is aligned better with human beings than commonly used metrics. 3.3.3 Further Evaluation of the Dialogue Strategy The PersonaChat dataset is also employed by the ConvAI2 challenge (Dinan et al., 2019a), where the team Lost in Conversation obtained the best performance. The network of Lost in Conversation involves 12 transformer layers, which requires extra training data in addition to PersonaChat. For fair comparison, our dialogue strategy is also implemented with the same number of transformer layers and training settings used by Lost in Conversation. The comparison is summarized in Table 5, which verifies the superiority of our proposed method over the advanced transformer network. 4 Related Work Our work is related with knowledge grounded response generation and multi-turn conversation with reinforcement learning. As conventional Seq2Seq (Vinyals and Le, 2015) tends to generate general and dull re5390 sponses, some knowledge grounded approaches have been introduced to increase the informativeness with extra knowledge. MemNet (Ghazvininejad et al., 2018) encodes factual texts into memory and decodes via attention mechanism for informative generation. CCM (Zhou et al., 2018) relies on structured knowledge to generate rich-information response. In Lian et al. (2019), the posterior distribution is estimated and accurate knowledge is selected to boost informative generation. 
However, without thorough consideration and control on the knowledge utilization in multi-turn conversations, the above approaches are prone to produce repetitive and incoherent utterances. The technique of reinforcement learning has been applied to multi-turn dialogue systems in several scenarios. In RL-DG (Li et al., 2016b), three rewards are defined and combined together to boost diverse response generation. Due to a lack of effective control on knowledge utilization, RL-DG is unable to express extensive information during conversations. As RL-DG relies on the reinforcement signal to update all components in the dialogue system, including decoder, it suffers from poor linguistic quality. In Yao et al. (2018), reinforcement learning is employed to plan a cue word (topic) path for a dialogue, where the cue word at t-th turn will assist the corresponding response generation. Different from these chitchat approaches, our dialogue generation is conducted under the objective of facilitating effective information exchange and letting both participates know more about each. With judiciously design of evaluation metrics, our compound reward is aligned well with human beings and provides meaningful reinforcement signal to evolve the dialogue strategy. 5 Conclusion In this paper, a novel Generation-Evaluation framework is proposed for informative and coherent multi-turn dialogue generation. Knowledge grounded conversations are generated under the dialogue strategy, which is able to continuously evolve via reinforcement learning with the compound reward. Comprehensive experimental results demonstrate that the proposed method obtains superior performances than the other stateof-the-art methods on both automatic measurements and human evaluations. In the future, our work can be potentially improved by enriching the assessments with more fine-grained criteria, which can fully integrate turn-level cohesion and dialogue-level coherence. We will also explore to make full use of knowledge to guide the selection of policy strategies for multi-turn conversation. Acknowledgments We would like to thank the ACL reviewers for their constructive suggestions and Jinhua Peng, Chaotao Chen, Min Xie for the helpful discussions. This work was supported by the Natural Science Foundation of China (No.61533018). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. Jacob Benesty, Jingdong Chen, Yiteng Huang, and Israel Cohen. 2009. Pearson correlation coefficient. In Noise reduction in speech processing, pages 1–4. Springer. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019a. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019b. Wizard of wikipedia: Knowledge-powered conversational agents. 
International Conference on Learning Representations. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM 5391 international conference on Conference on information & knowledge management, pages 2333–2338. ACM. Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443–2453. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192– 1202. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, volume 1, pages 986– 995. Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. 2019. Learning to select knowledge for response generation in dialog systems. arXiv preprint arXiv:1902.04911. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1073–1083. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 1577–1586. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562. ACM. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 76–85. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. 
Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 496–505. Lili Yao, Ruijian Xu, Chao Li, Dongyan Zhao, and Rui Yan. 2018. Chat more if you like: Dynamic cue words planning to flow longer conversations. arXiv preprint arXiv:1811.07631. Denis Yarats and Mike Lewis. 2018. Hierarchical text generation and planning for strategic dialogue. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 5591–5599. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2204–2213. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 654–664. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4623–4629.
2019
535
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5392–5404 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5392 Training Neural Response Selection for Task-Oriented Dialogue Systems Matthew Henderson, Ivan Vuli´c, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkši´c, and Pei-Hao Su PolyAI Limited London, United Kingdom [email protected] [email protected] Abstract Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks. Inspired by the recent success of pretraining in language modelling, we propose an effective method for deploying response selection in task-oriented dialogue. To train response selection models for taskoriented dialogue tasks, we propose a novel method which: 1) pretrains the response selection model on large general-domain conversational corpora; and then 2) fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of the given dialogue domain. Our evaluation on six diverse application domains, ranging from e-commerce to banking, demonstrates the effectiveness of the proposed training method. 1 Introduction Retrieval-based dialogue systems conduct conversations by selecting the most appropriate system response given the dialogue history and the input user utterance (i.e., the full dialogue context). A typical retrieval-based approach to dialogue encodes the input and a large set of responses in a joint semantic space. When framed as an ad-hoc retrieval task (Deerwester et al., 1990; Ji et al., 2014; Kannan et al., 2016; Henderson et al., 2017), the system treats each input utterance as a query and retrieves the most relevant response from a large response collection by computing semantic similarity between the query representation and the encoding of each response in the collection. This task is referred to as response selection (Wang et al., 2013; Al-Rfou et al., 2016; Yang et al., 2018; Du and Black, 2018; Weston et al., 2018; Chaudhuri et al., 2018), as illustrated in Figure 1. Input Candidate Responses Is that place affordable? Absolutely, call me any time! There is no place like home. The restaurant serves Japanese food. I would say that the prices are reasonable. This was their second warning. It was so unfortunate to concede the goal. Figure 1: The conversational response selection task: given the input sentence, the goal is to identify the relevant response from a large collection of candidates. Formulating dialogue as a response selection task stands in contrast with other data-driven dialogue modeling paradigms such as modular and end-to-end task-based dialogue systems (Young, 2010; Wen et al., 2017b; Liu and Perez, 2017; Li et al., 2017; Bordes et al., 2017). Unlike standard task-based systems, response selection does not rely on explicit task-tailored semantics in the form of domain ontologies, which are hand-crafted for each task by domain experts (Henderson et al., 2014a,b; Mrkši´c et al., 2015). 
Response selection also differs from chatbot-style systems which generate new responses by generalising over training data, their main deficiency being the tendency towards generating universal but irrelevant responses such as “I don’t know” or “Thanks” (Vinyals and Le, 2015; Li et al., 2016; Serban et al., 2016; Song et al., 2018). Therefore, response selection removes the need to engineer structured domain ontologies, and to solve the difficult task of general language generation. Furthermore, it is also much easier to constrain or combine the output of response selection models. This design also bypasses the construction of dedicated decision-making policy modules. Although conceptually attractive, retrieval-based dialogue systems still suffer from data scarcity, as deployment to a new domain requires a sufficiently large in-domain dataset for training the response selection model. Procuring such data is expensive and labour-intensive, with annotated datasets for task-based dialogue still few and far between, as 5393 well as limited in size.1 Recent work on language modelling (LM) pretraining (Peters et al., 2018; Howard and Ruder, 2018) has shown that task-specific architectures are not necessary in a number of NLP tasks. The best results have been achieved by LM pretraining on large unannotated corpora, followed by supervised fine-tuning on the task at hand (Devlin et al., 2019). Given the compelling benefits of large-scale pretraining, our work poses a revamped question for response selection: can we pretrain a general response selection model and then adapt it to a variety of different dialogue domains? To tackle this problem, we propose a two-step training procedure which: 1) pretrains a response selection model on large conversational corpora (such as Reddit); and then 2) fine-tunes the pretrained model for the target dialogue domain. Throughout the evaluation, we aim to provide answers to the following two questions: 1. (Q1) How to pretrain? Which encoder structure can best model the Reddit data? 2. (Q2) How to fine-tune? Which method can efficiently adapt the pretrained model to a spectrum of target dialogue domains? Regarding the first question, the results support findings from prior work (Cer et al., 2018; Yang et al., 2018): the best scores are reported with simple transformer-style architectures (Vaswani et al., 2017) for input-response encodings. Most importantly, our results suggest that pretraining plus finetuning for response selection is useful across six different target domains. As for the second question, the most effective training schemes are lightweight: the model is pretrained only once on the large Reddit training corpus, and the target task adaptation does not require expensive retraining on Reddit. We also show that the proposed two-step response selection training regime is more effective than directly applying offthe-shelf state-of-the-art sentence encoders (Cer et al., 2018; Devlin et al., 2019). 1For instance, the recently published MultiWOZ dataset (Budzianowski et al., 2018) comprises a total of 115,424 dialogue turns scattered over 7 target domains. It is several times larger than other standard task-based dialogue datasets such as DSTC2 (Henderson et al., 2014b) with 23,354 turns, Frames (El Asri et al., 2017) with 19,986 turns, or M2M (Shah et al., 2018) with 14,796 turns. To illustrate the difference in magnitude, the Reddit corpus used in this work for response selection pretraining comprises 727M dialogue turns. 
We hope that this paper will inform future development of response-based taskoriented dialogue. Training and test datasets, described in more detail by Henderson et al. (2019), are available at: github.com/ PolyAI-LDN/conversational-datasets. 2 Methodology Why Pretrain and Fine-Tune? By simplifying the conversational learning task to a response selection task, we can relate target domain tasks to general-domain conversational data such as Reddit (Al-Rfou et al., 2016). This also means that parameters of response selection models in target domains with scarce training resources can be initialised by a general-domain pretrained model. The proposed two-step approach, described in §2.1 and §2.2, can be seen as a “lightweight” task adaptation strategy: the expensive Reddit model pretraining is run only once (i.e., training time is typically measured in days), and the model is then fine-tuned on N target tasks (i.e., fine-tuning time is in minutes). The alternatives are “heavyweight” data mixing strategies. First, in-domain and Reddit data can be fused into a single training set: besides expensive retraining for each task, the disbalance between in-domain and Reddit data sizes effectively erases the target task signal. An improved data mixing strategy keeps the identities of the origin datasets (Reddit vs. target) as features in training. While this now retains the target signal, our preliminary experiments indicated that the results again fall short of the proposed lightweight fine-tuning strategies. In addition, this strategy still relies on expensive Reddit retraining for each task. 2.1 Step 1: Response Selection Pretraining Reddit Data. Our pretraining method is based on the large Reddit dataset compiled and made publicly available recently by Henderson et al. (2019). This dataset is suitable for response selection pretraining due to multiple reasons as discussed by Al-Rfou et al. (2016). First, the dataset offers organic conversational structure and it is large at the same time: all Reddit data from January 2015 to December 2018, available as a BigQuery dataset, span almost 3.7B comments. After preprocessing the dataset to remove both uninformative and long comments2 and pairing all comments with their 2We retain only sentences containing more than 8 and less than 128 word tokens. 5394 self-attention self-attention self-attention self-attention positional emb positional emb positional emb positional emb H fully-connected hidden layers (H=3) 1024-dim + swish H fully-connected hidden layers (H=3) 1024-dim + swish input: x response: y dot product loss Figure 2: Schematic input-response encoder model structure. We show the best-performing architecture for brevity, while we evaluate a variety of other encoder architecture configurations later in §4.1. responses, we obtain more than 727M commentresponse pairs which are used for model pretraining. This Reddit dataset is substantially larger than the previous Reddit dataset of Al-Rfou et al. (2016), which spans around 2.1B comments and 133M conversational threads, and is not publicly available. Second, Reddit is extremely diverse topically (Schrading et al., 2015; Al-Rfou et al., 2016): there are more than 300,000 sub-forums (i.e., subreddits) covering diverse topics of discussion. Finally, compared to message-length-restricted Twitter conversations (Ritter et al., 2010), Reddit conversations tend to be more natural. In summary, all these favourable properties hold promise to support a large spectrum of diverse conversational domains. 
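The comment filtering and pairing step can be sketched as follows; this is an illustrative reading of the preprocessing described above (keep comments with more than 8 and fewer than 128 word tokens, then pair each kept comment with its reply), not the released data pipeline.

    def keep(text, min_tokens=9, max_tokens=127):
        # 'more than 8 and less than 128 word tokens'
        n = len(text.split())
        return min_tokens <= n <= max_tokens

    def build_pairs(comment_reply_stream):
        """comment_reply_stream: iterable of (comment, reply) strings extracted
        from Reddit threads; returns the (input, response) training pairs."""
        return [(c, r) for c, r in comment_reply_stream if keep(c) and keep(r)]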
Input and Response Representation. We now turn to describing the architecture of the main pretraining model. The actual description focuses on the best-performing architecture shown in Figure 2, but we also provide a comparative analysis of other architectural choices later in §4.1.

First, similar to Henderson et al. (2017), raw text is converted to unigrams and bigrams, that is, we extract n-gram features from each input x and its corresponding response y from (Reddit) training data. During training we obtain d-dimensional feature representations (d = 320, see Figure 2) shared between inputs and responses for each unigram and bigram, jointly with other neural net parameters. In addition, the model can deal with out-of-vocabulary unigrams and bigrams by assigning a random id from 0 to 50,000 to each, which is then used to look up their embedding. When fine-tuning, this allows the model to learn representations of words that otherwise would be out-of-vocabulary.

Sentence Encoders. The unigram and bigram embeddings then undergo a series of transformations on both the input and the response side, see Figure 2 again. Following the transformer architecture (Vaswani et al., 2017), positional embeddings and self-attention are applied to unigrams and bigrams separately. The representations are then combined as follows (i.e., this refers to the reduction layer in Figure 2): the unigram and bigram embeddings are each summed and divided by the square root of the word sequence length. The two vectors are then averaged to give a single 320-dimensional representation of the text (input or response). The averaged vector is then passed through a series of H fully connected h-dimensional feed-forward hidden layers (H = 3; h = 1,024) with swish as the non-linear activation, defined as: swish(x) = x · sigmoid(βx) (Ramachandran et al., 2017).³ The final layer is linear and maps the text into the final l-dimensional (l = 512) representation: h_x for the input text, and h_y for the accompanying response text. This provides a fast encoding of the text, with some sequential information preserved.⁴

Scaled Cosine Similarity Scoring. The relevance of each response to the given input is then quantified by the score S(x, y). It is computed as scaled cosine similarity: S(x, y) = C · cos(h_x, h_y), where C is a learned constant, constrained to lie between 0 and √l. We resort to scaled cosine similarity instead of a general dot product as the absolute values are meaningful for the former. In consequence, the scores can be thresholded, and retrained models can rely on the same thresholding. Training proceeds in batches of K (input, response) pairs (x_1, y_1), ..., (x_K, y_K). The objective tries to distinguish between the true relevant response and irrelevant/random responses for each input sentence x_i. The training objective for a single batch of K pairs is as follows:

J = \sum_{i=1}^{K} S(x_i, y_i) − \sum_{i=1}^{K} \log \sum_{j=1}^{K} e^{S(x_i, y_j)}.   (1)

³We fix β = 1 as suggested by Ramachandran et al. (2017). The use of swish is strictly empirically driven: it yielded slightly better results in our preliminary experiments than alternatives such as tanh or the family of LU/ReLU-related activations (He et al., 2015; Klambauer et al., 2017).
⁴Experiments with higher-order n-grams, recurrent, and convolutional structures have not provided any substantial gain, and slow down the encoder model considerably.
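A minimal PyTorch sketch of the scaled cosine scoring and the batch objective in Eq. (1), using all other in-batch responses as negatives, is shown below; the random encodings stand in for the transformer outputs h_x and h_y.

    import torch
    import torch.nn.functional as F

    def batch_objective(h_x, h_y, C):
        """h_x, h_y: [K, l] input/response encodings; C: learned scaling constant."""
        S = C * F.normalize(h_x, dim=-1) @ F.normalize(h_y, dim=-1).T   # scaled cosine, [K, K]
        # Eq. (1): reward each matched pair, normalise over all K in-batch responses
        J = S.diagonal().sum() - torch.logsumexp(S, dim=1).sum()
        return -J                                                       # loss to minimise

    h_x, h_y = torch.randn(500, 512), torch.randn(500, 512)             # K = 500, l = 512
    C = torch.tensor(5.0, requires_grad=True)
    loss = batch_objective(h_x, h_y, C)

Minimising −J is equivalent to a per-row softmax cross-entropy over the K candidate responses, which is why all other responses in the batch act as negative examples at no extra cost.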
5395 Reddit (Source) Train Train1 Test1 Train2 Test2 TrainN TestN Target Domain 1 Target Domain 2 Target Domain N (a) REDDIT-DIRECT Reddit (Source) Train Train1 Test1 Train2 Test2 TrainN TestN Target Domain 1 Target Domain 2 Target Domain N (b) FT-DIRECT Reddit (Source) Train Train1 Test1 Train2 Test2 TrainN TestN Target Domain 1 Target Domain 2 Target Domain N + Reddit + Reddit + Reddit (c) FT-MIXED Figure 3: High-level overview of baseline and fine-tuning strategies used in our evaluation. (a) REDDIT-DIRECT: a pretrained general-domain (Reddit) response selection model is directly applied on each target task, without any target domain fine-tuning; (b) FT-DIRECT: after pretraining the large response selection model on Reddit, the model is fine-tuned for each target task by directly continuing the training on (much smaller) target domain data; (c) FT-MIXED: similar to FT-DIRECT, but the crucial difference is in-batch mixing of Reddit input-response pairs with target domain pairs during the target fine-tuning procedure. Another baseline (TARGET-ONLY) trains a response selection model on each target task separately without leveraging general-domain Reddit data (not shown). Effectively, Eq. (1) maximises the score of pairs (xi, yi) that go together in training, while minimising the score of pairing each input xi with K′ negative examples, that is, responses that are not associated with the input xi. For simplicity, as in prior work (Henderson et al., 2017; Yang et al., 2018), for each input xi, we treat all other K−1 responses in the current batch yj ̸= yi as negative examples.5 As discussed by Henderson et al. (2017) in the context of e-mail reply applications, this design enables efficient response search as it allows for precomputing vectors of candidate responses independently of input queries, and searching for responses with high scaled cosine similarity scores in the precomputed set. It also allows for approximate nearest neighbour search (Malkov and Yashunin, 2016) which speeds up computations drastically at the modest decrease of retrieval performance.6 Finally, in this work we rely on a simple strategy based on random negative examples. In future work, we plan to experiment with alternative (nonrandom) negative sampling strategies. For instance, inspired by prior work on semantic specialisation (Mrkši´c et al., 2017b) and parallel corpora mining (Guo et al., 2018), difficult negative examples might comprise invalid responses that are semantically related to the correct response (measured by e.g. dot-product similarity). 5Note that the matrix S = C · [hy1, . . . , hyK] · [hx1, . . . , hxK]T is inexpensive to compute. 6E.g., experiments on Reddit test data reveal a 130× speed-up using the approximate search method of Malkov and Yashunin (2016) while retaining 95% top-30 recall. 2.2 Step 2: Target Domain Fine-Tuning The second step concerns the application of the pretrained general Reddit model on N target domains. We assume that we have the respective training and test sets of KN,tr and KN,te in-domain inputresponse pairs for each of the N domains, where KN,tr and KN,te are considerably smaller than the number of Reddit training pairs. We test two general fine-tuning strategies, illustrated in Figure 3. FT-DIRECT directly continues where the Reddit pretraining stopped: it fine-tunes the model parameters by feeding the KN,tr in-domain (input, response) pairs into the model and by following exactly the same training principle as described in §2.1. 
The fine-tuned model is then tested in the in-domain response selection task using KN,te test pairs, see Figure 3b. FT-MIXED attempts to prevent the “specialisation” of the Reddit model to a single target domain, that is, it aims to maintain stable performance on the general-domain Reddit data. This way, the model can support multiple target tasks simultaneously. Instead of relying only on in-domain training pairs, we now perform in-batch mixing of Reddit pairs with in-domain pairs: M% of the pairs in each batch during fine-tuning are Reddit pairs, while (100−M)% of the pairs are in-domain pairs, where M is a tunable hyper-parameter. With this fine-tuning strategy, outlined in Figure 3c, each dataset provides negative examples for the other one, enriching the learning signal. We compare FT-DIRECT and FT-MIXED against two straightforward and insightful baselines: the REDDIT-DIRECT model from Figure 3a directly ap5396 plies the pretrained Reddit model on the target task without any in-domain fine-tuning. Comparisons to this baseline reveal the importance of fine-tuning. On the other hand, the TARGET-ONLY baseline simply trains the response selection model from Figure 2 from scratch directly on the in-domain KN,tr pairs. Comparisons to this baseline reveal the importance of Reddit pretraining. For all TARGETONLY models in all target tasks, we tuned the word embedding sizes and embedding dropout rates on the corresponding training sets. 3 Experimental Setup Training Setup and Hyper-Parameters. All input text is lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens <S> and </S> are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 1M subset of the Reddit training set –this results in a total of 105K unigrams– plus the 200K most frequent bigrams in the same random subset. The following training setup refers to the final Reddit model, illustrated in Figure 2, and used in fine-tuning. The model is trained by SGD setting the initial learning rate to 0.03, and then decaying the learning rate by 0.3x every 1M training steps after the first 2.5M steps. Similar to learning rate scaling by the batch size used in prior work (Goyal et al., 2017; Codreanu et al., 2017), we scale the unigram and bigram embedding gradients by the batch size. The batch size is 500, and attention projection dimensionality is 64. We also apply the label smoothing technique (Szegedy et al., 2016), shown to reduce overfitting by preventing a network to assign full probability to the correct training example (Pereyra et al., 2017). Effectively, this reshapes Eq. (1): each positive training example in each batch gets assigned the probability of 0.8, while the remaining probability mass gets evenly redistributed across in-batch negative examples. Finally, we train the model on 13 GPU nodes with one Tesla K80 each for 18 hours: the model sees around 2B examples and it is sufficient for the model to reach convergence.7 Fine-tuning is run by relying on early stopping on 7Training is relatively cheap compared to other large models: e.g., BERT models (Devlin et al., 2019) were pre-trained for 4 days using 4 Cloud TPUs (BERT-SMALL) or 16 Cloud TPUs (BERT-LARGE). in-domain validation data. The ratio of Reddit and in-domain pairs with FT-MIXED is set to 3:1 (in favour of Reddit) in all experimental runs. Test Domains and Datasets. 
We conduct experiments on six target domains with different properties and varying corpora sizes. The diversity of evaluation probes the robustness of the proposed pretraining and fine-tuning regime. The summary of target domains and the corresponding data is provided in Table 1. All datasets are in the form of (input, response) pairs. For UBUNTU8, SEMEVAL159, and AMAZONQA10 we use standard data splits into training, dev, and test portions following the original work (Lowe et al., 2017; Nakov et al., 2015; Wan and McAuley, 2016). For the OpenSubtitles dataset (OPENSUB) (Lison and Tiedemann, 2016), we rely on the data splits introduced by Henderson et al. (2019). We evaluate pretrained Reddit models on the REDDIT held-out data: 50K randomly sampled (input, response) pairs are used for testing. We have also created a new FAQ-style dataset in the e-banking domain which includes questionanswer pairs divided into 77 unique categories with well-defined semantics (e.g., “card activation”, “closing account”, “refund request”). Such FAQ information can be found in various e-banking customer support pages, but the answers are highly hierarchical and often difficult to locate. Our goal is to test the fine-tuned encoder’s ability to select the relevant answers to the posed question. To this end, for each question we have collected 10 paraphrases that map to the same answer. All unique (question, answer) pairs are added to the final dataset, which is then divided into training (70%), validation (20%) and test portions (10%), see Table 1. Baseline Models. Besides the direct encoder model training on each target domain without pretraining (TARGET-ONLY), we also evaluate two standard IR baselines based on keyword matching: 1) a simple TF-IDF query-response scoring (Manning et al., 2008), and 2) Okapi BM25 (Robertson and Zaragoza, 2009). Furthermore, we also analyse how pretraining plus fine-tuning for response selection compares to a representative sample of publicly available neural network embedding models which embed inputs and responses into a vector space. We include the following embedding models, all of 8https://github.com/rkadlec/ 9http://alt.qcri.org/semeval2015/task3/ 10http://jmcauley.ucsd.edu/data/amazon/qa/ 5397 Dataset Reference Domain Training Size Test Size REDDIT (Henderson et al., 2019) discussions on various topics 654,396,778 72,616,937 OPENSUB (Lison and Tiedemann, 2016) movies, TV shows 283,651,561 33,240,156 AMAZONQA (Wan and McAuley, 2016) e-commerce, retail 3,316,905 373,007 UBUNTU (Lowe et al., 2017) computers, technical chats 3,954,134 72,763 BANKING New e-banking applications, banking FAQ 10,395 1,485 SEMEVAL15 (Nakov et al., 2015) lifestyle, tourist and residential info 9,680 1,158 Table 1: Summary of all target domains and data. Data sizes: a total number of unique (input, response) pairs. Note that some datasets contain many-to-one pairings (i.e., multiple inputs are followed by the same response; BANKING) and one-to-many pairings (i.e., one input generates more than one plausible response; SEMEVAL15). which are readily available online.11 (1) Universal Sentence Encoder of Cer et al. (2018) is trained using a transformer-style architecture (Vaswani et al., 2017) on a variety of web sources such as Wikipedia, web news, discussion forums as well as on the Reddit data. We experiment with the base USE model and its larger variant (USE-LARGE). 
(2) We run fixed mean-pooling of ELMO contextualised embeddings (Peters et al., 2018) pretrained on the bidirectional LM task using the LM 1B words benchmark (Chelba et al., 2013): ELMO. (3) We also compare to two variants of the bidirectional transformer model of Devlin et al. (2019) (BERT-SMALL and BERT-LARGE).12

We compare two model variants for each of the above vector-based baseline models. First, the SIM method ranks responses according to their cosine similarity with the context vector: it relies solely on pretrained models without any further fine-tuning or adaptation, that is, it does not use the training set at all. The MAP variant learns a linear mapping on top of the response vector. The final score of a response with vector hy for an input with vector hx is the cosine similarity cos(·, ·) of the context vector with the mapped response vector:

cos(hx, (W + αI) · hy).   (2)

W and α are parameters learned on a random sample of 10,000 examples from the training set using the same dot-product loss from Eq. (1), and I is the identity matrix. Vectors are ℓ2-normalised before being fed to the MAP method. For all baseline models, learning rate and regularization parameters are tuned using a held-out development set. The combination of the two model variants with the vector-based models results in a total of 10 baseline methods, as listed in Table 3.

11 https://www.tensorflow.org/hub
12 Note that encoder architectures similar to the ones used by USE can also be used in the Reddit pretraining phase in lieu of the architecture shown in Figure 2. However, the main goal is to establish the importance of target response selection fine-tuning by comparing it to direct application of state-of-the-art pretrained encoders, used to encode both inputs and responses in the target domain.

Full Reddit Model                                  61.3
- Wider hidden layers; h = 2,048, 24h training     61.1
- Narrower hidden layers; h = 750, 18h training    60.8
- Narrower hidden layers; h = 512                  59.8
- Batch size 50 (before 500)                       57.4
- H = 2 (before H = 3)                             56.9
- tanh activation (before swish)                   56.1
- no label smoothing                               55.3
- no self-attention                                48.7
- remove bigrams                                   35.5
Table 2: The results of different encoder configurations on the Reddit test data (R100@1 scores ×100%). Starting from the full model (top row), each subsequent row shows a configuration with one component removed or edited relative to the previous row.

Evaluation Protocol. We rely on a standard IR evaluation measure used in prior work on retrieval-based dialogue (Lowe et al., 2017; Zhou et al., 2018; Chaudhuri et al., 2018): Recall@k. Given a set of N responses to the given input/query, where only one response is relevant, it indicates whether the relevant response occurs in the top k ranked candidate responses. We refer to this evaluation measure as RN@k, and set N = 100 and k = 1: R100@1. This effectively means that for each query, we indicate whether the correct response is ranked first among 100 candidates. The final score is the average across all queries.

4 Results and Discussion

This section aims to provide answers to the two main questions posed in §1: which encoder architectures are more suitable for pretraining (Q1; §4.1), and how to adapt/fine-tune the pretrained model to target tasks (Q2; §4.2).

4.1 Reddit Pretraining

The full encoder model is described in §2.1 and visualised in Figure 2.
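All pretraining and fine-tuning scores that follow are R100@1 values. As a concrete reference for how this metric is computed, the sketch below scores each query against its 100 candidates and checks whether the single relevant response is ranked first; the random embeddings and the plain dot-product scorer are illustrative assumptions, not the encoder or evaluation code used in this work.

```python
import numpy as np

def recall_at_k(scores: np.ndarray, k: int = 1) -> float:
    """R_N@k for response selection.

    `scores` has shape (num_queries, N): scores[i, j] is the model score of
    candidate j for query i, and by convention candidate 0 is the single
    relevant response. Returns the fraction of queries for which the relevant
    response is ranked within the top k of the N candidates.
    """
    # Rank of the correct candidate = number of candidates scored strictly higher.
    rank_of_correct = (scores > scores[:, :1]).sum(axis=1)
    return float((rank_of_correct < k).mean())

# Illustrative usage with a dot-product scorer over (assumed) encoded vectors.
rng = np.random.default_rng(0)
num_queries, N, dim = 512, 100, 320
context_vecs = rng.normal(size=(num_queries, dim))        # encoded inputs
response_vecs = rng.normal(size=(num_queries, N, dim))    # 100 candidates per input
scores = np.einsum("qd,qnd->qn", context_vecs, response_vecs)
print("R100@1:", recall_at_k(scores, k=1))
```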
In what follows, we also anal5398 REDDIT OPENSUB AMAZONQA UBUNTU BANKING SEMEVAL15 TF-IDF 26.7 10.9 51.8 27.5 27.3 38.0 BM25 27.6 10.9 52.3 19.6 23.4 35.5 USE-SIM 36.6 13.6 47.6 11.5 18.2 36.0 USE-MAP 40.8 15.8 54.4 17.2 79.2 45.5 USE-LARGE-SIM 41.4 14.9 51.3 13.6 27.3 44.0 USE-LARGE-MAP 47.7 18.0 61.9 18.5 81.8 56.5 ELMO-SIM 12.5 9.5 16.0 3.5 6.5 19.5 ELMO-MAP 19.3 12.3 33.0 6.2 87.0 34.5 BERT-SMALL-SIM 17.1 13.8 27.8 4.1 13.0 13.0 BERT-SMALL-MAP 24.5 17.5 45.8 9.0 77.9 37.5 BERT-LARGE-SIM 14.8 12.2 25.9 3.6 10.4 10.0 BERT-LARGE-MAP 24.0 16.8 44.1 8.0 68.8 34.5 REDDIT-DIRECT 61.3 19.1 61.4 9.6 27.3 46.0 TARGET-ONLY 29.0 (18.2) 83.3 (11.6) 6.2 ( 2.3) 88.3 ( 1.2) 7.5 ( 1.1) FT-DIRECT 30.6 (40.0) 84.2 (30.8) 38.7 (51.9) 94.8 (55.3) 52.5 (55.2) FT-MIXED 25.5 (60.0) 77.0 (59.6) 38.1 (59.4) 90.9 (59.8) 56.5 (59.4) Table 3: Summary of the results (R100@1 scores ×100%) with fine-tuning on all six target domains. Datasets are ordered left to right based on their size. The scores in the parentheses in the TARGET-ONLY, FT-DIRECT and FT-MIXED rows give the performance on the general-domain REDDIT test data. The scores are computed with de-duplicated inputs for SEMEVAL15 (i.e., the initial dataset links more responses to the same input), and deduplicated answers for banking. yse performance of other encoder configurations, which can be seen as ablated or varied versions of the full model. The results on the REDDIT response selection task are summarised in Table 2. Results and Discussion. The scores suggest that the final model gets contribution from its multiple components: e.g., replacing tanh with the recently proposed swish activation (Ramachandran et al., 2017) is useful, and label smoothing also helps. Despite contradictory findings from prior work related to the batch size (e.g., compare (Smith et al., 2017) and (Masters and Luschi, 2018)), we obtain better results with larger batches. This is intuitive given the model design: increasing the batch size in fact means learning from a larger number of negative examples. The results also suggest that the model saturates when provided with a sufficient number of parameters, as wider hidden layers and longer training times did not yield any substantial gains. The scores also show the benefits of selfattention and positional embeddings instead of deep feed-forward averaging of the input unigram and bigram embeddings (Iyyer et al., 2015). This is in line with prior work on sentence encoders (Cer et al., 2018; Yang et al., 2018), which reports similar gains on several classification tasks. Finally, we observe a large gap with the unigram-only model variant, confirming the importance of implicitly representing underlying sequences with n-grams (Henderson et al., 2017; Mrkši´c et al., 2017a). Following the results, we fix the pretraining model in all follow-up experiments (top row in Table 2). 4.2 Target-Domain Fine-Tuning Results and Discussion. The main results on all target tasks after fine-tuning are summarised in Table 3. First, the benefits of Reddit pretraining and fine-tuning are observed in all tasks regardless of the in-domain data size. We report large gains over the TARGET-ONLY model (which trains a domain-specific response selection encoder from scratch) especially for tasks with smaller training datasets (e.g., BANKING, SEMEVAL15). The low scores of TARGET-ONLY with smaller training data suggest overfitting: the encoder architecture cannot see enough training examples to learn to generalise. 
The gains are also present even when TARGET-ONLY gets to see much more in-domain input-response training data: e.g., we see slight improvements on OPENSUB and AMAZONQA, and large gains on UBUNTU when relying on the FTDIRECT fine-tuning variant. What is more, a comparison to REDDIT-DIRECT further suggests that fine-tuning even with a small amount of in-domain data can lead to large improvements: e.g., the gains over REDDIT-DIRECT are +67.5% on BANKING, +32.5% on UBUNTU, +22.8% on AMAZONQA, and +11.5% on OPENSUB. These results lead to the following crucial conclusion: while in-domain data are insufficient to train response selection models from scratch for many target domains, such data are invaluable for adapting a pretrained general-domain model to the target domain. In other words, the results indicate that the synergy between the abundant response 5399 (a) ELMO-SIM (b) USE-MAP (c) REDDIT-DIRECT (no fine-tuning) (d) FT-MIXED (with fine-tuning) Figure 4: t-SNE plots (van der Maaten and Hinton, 2012) of encoded questions/inputs for a selection of 10 categories from the BANKING test set. The most coherent clusters for each category with well-defined semantics are observed with the FT-MIXED fine-tuning model applied on top of Reddit response selection pretraining. selection Reddit data and scarce in-domain data is effectively achieved through the proposed training regime, and both components are crucial for the final improved performance in each target domain. In simple words, this finding confirms the importance of fine-tuning for the response selection task. Comparison to Baselines. The results of TF-IDF and BM25 reveal that lexical evidence from the preceding input can partially help in the response selection task and it achieves reasonable performance across the target tasks. For instance, on some tasks (e.g., AMAZONQA, BANKING), such keyword matching baselines even outperform some of the vector-based baseline models, and are comparable to the REDDIT-DIRECT model variant. They are particularly strong for AMAZONQA and UBUNTU, possibly because rare and technical words (e.g., the product name) are very informative in these domains. However, these baselines are substantially outperformed by the proposed fine-tuning approach across the board. A comparison to other pretrained sentence encoders in Table 3 further stresses the importance of training for the response selection task in particular. Using off-the-shelf sentence encoders such as USE or BERT directly on in-domain sentences without distinguishing the input and the response space leads to degraded performance compared even to TF-IDF, or the REDDIT-DIRECT baseline without in-domain fine-tuning. The importance of learning the mapping from input to response versus simply relying on similarity is also exemplified by the comparison between the MAP method and the simple SIM method: regardless of the actual absolute performance, MAP leads to substantial gains over SIM for all vector-based baseline models. However, even the MAP method cannot match the performance of our two-step training regime: we report substantial gains with our FT-DIRECT and FT-MIXED fine-tuning on top of Reddit pretraining for all target domains but one (SEMEVAL15). Further Discussion. The comparison of two fine-tuning strategies suggests that the simpler FTDIRECT fine-tuning has an edge over FT-MIXED, and it seems that the gap between FT-DIRECT and FT-MIXED is larger on bigger datasets. 
However, as expected, FT-DIRECT adapts to the target task more aggressively: this leads to its degraded performance on the general-domain Reddit response selection task, see the scores in parentheses in Table 3. With more in-domain training data FT-DIRECT becomes worse on the REDDIT test set. On the other hand, FTMIXED manages to maintain its high performance on REDDIT due to the in-batch mixing used in the fine-tuning process.13 Qualitative Analysis. The effect of fine-tuning is also exemplified by t-SNE plots for the BANK13Varying the parameter M in FT-MIXED from the ratio 3:1 to 1:3 leads only to slight variations in the final results. 5400 ING domain shown in Figure 4.14 Recall that in our BANKING FAQ dataset several questions map to the same response, and ideally such questions should be clustered together in the semantic space. While we do not see such patterns at all with ELMOencoded questions without mapping (ELMO-SIM, Figure 4a), such clusters can already be noticed with USE-MAP (Figure 4b) and with the model pretrained on Reddit without fine-tuning (Figure 4c). However, fine-tuning yields the most coherent clusters by far: it attracts encodings of all similar questions related to the same category closer to each other in the semantic space. This is in line with the results reported in Table 3. 5 Related Work Retrieval-Based Dialogue Systems. Retrievalbased systems (Yan et al., 2016; Bartl and Spanakis, 2017; Wu et al., 2017; Song et al., 2018; Weston et al., 2018, inter alia) provide less variable output than generative dialogue systems (Wen et al., 2015, 2017a; Vinyals and Le, 2015), but they offer a crucial advantage of producing more informative, semantically relevant, controllable, and grammatically correct responses (Ji et al., 2014). Unlike modular and end-to-end task-oriented systems (Young, 2010; Wen et al., 2017b; Mrkši´c and Vuli´c, 2018; Li et al., 2018), they do not require expensive curated domain ontologies, and bypass the modelling of complex domain-specific decision-making policy modules (Gaši´c et al., 2015; Chen et al., 2017). Despite these desirable properties, their potential has not been fully exploited in task-oriented dialogue. Their fundamental building block is response selection (Banchs and Li, 2012; Wang et al., 2013; Al-Rfou et al., 2016; Baudis and Sedivý, 2016). We have witnessed a recent rise of interest in neural architectures for modelling response selection (Wu et al., 2017; Chaudhuri et al., 2018; Zhou et al., 2018; Tao et al., 2019), but the progress is still hindered by insufficient domain-specific training data (El Asri et al., 2017; Budzianowski et al., 2018). While previous work typically focused on a single domain (e.g., Ubuntu technical chats (Lowe et al., 2015, 2017)), in this work we show that much larger general-domain Reddit data can be leveraged to pretrain response selection models that support more specialised target dialogue domains. 14For clarity, we show the plots with 10 (out of 77) selected categories, while the full plots with all 77 categories are available in the supplemental material. To the best of our knowledge, the work of Henderson et al. (2017) and Yang et al. (2018) is closest to our response selection pretraining introduced in §2.1. However, Henderson et al. (2017) optimise their model for one single task: replying to e-mails with short messages (Kannan et al., 2016). 
They use a simpler feed-forward encoder architecture and do not consider wide portability of a single generaldomain response selection model to diverse target domains through fine-tuning. Yang et al. (2018) use Reddit conversational context to simply probe semantic similarity of sentences (Agirre et al., 2012, 2013; Nakov et al., 2016), but they also do not investigate response selection fine-tuning across diverse target domains. Pretraining and Fine-Tuning. Task-specific fine-tuning of language models (LMs) pretrained on large unsupervised corpora (Peters et al., 2018; Devlin et al., 2019; Howard and Ruder, 2018; Radford et al., 2018; Lample and Conneau, 2019; Liu et al., 2019) has taken NLP by storm. Such LMbased pretrained models support a variety of NLP tasks, ranging from syntactic parsing to natural language inference (Peters et al., 2018; Devlin et al., 2019), as well as machine reading comprehension (Nishida et al., 2018; Xu et al., 2019) and information retrieval tasks (Nogueira and Cho, 2019; Yang et al., 2019). In this work, instead of the LM-based pretraining, we put focus on the response selection pretraining in particular, and show that such models coupled with target task fine-tuning (Howard and Ruder, 2018) lead to improved modelling of conversational data in various domains. 6 Conclusion and Future Work We have presented a novel method for training neural response selection models for task-oriented dialogue systems. The proposed training procedure overcomes the low-data regime of task-oriented dialogue by pretraining the response selection model using general-domain conversational Reddit data and efficiently adapting this model to individual dialogue domains using in-domain data. Our evaluation demonstrates the compelling benefits of such pretraining, with the proposed training procedure achieving strong performance across each of the five different dialogue domains. In future work, we will port this approach to additional target domains, other languages, and investigate more sophisticated encoder architectures and fine-tuning strategies. 5401 References Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A pilot on semantic textual similarity. In Proceedings of *SEM, pages 385–393. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Proceedings of *SEM, pages 32–43. Rami Al-Rfou, Marc Pickett, Javier Snaider, YunHsuan Sung, Brian Strope, and Ray Kurzweil. 2016. Conversational contextual cues: The case of personalization and history for response ranking. CoRR, abs/1606.00372. Rafael E. Banchs and Haizhou Li. 2012. IRIS: A chatoriented dialogue system based on the vector space model. In Proceedings of ACL System Demos, pages 37–42. Alexander Bartl and Gerasimos Spanakis. 2017. A retrieval-based dialogue system utilizing utterance and context embeddings. CoRR, abs/1710.05780. Petr Baudis and Jan Sedivý. 2016. Sentence pair scoring: Towards unified framework for text comprehension. CoRR, abs/1603.06127. Antoine Bordes, Y.-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of ICLR. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaši´c. 2018. MultiWOZ - A large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of EMNLP, pages 5016–5026. 
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175. Debanjan Chaudhuri, Agustinus Kristiadi, Jens Lehmann, and Asja Fischer. 2018. Improving response selection in multi-turn dialogue systems by incorporating domain knowledge. In Proceedings of CoNLL, pages 497–507. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2013. One billion word benchmark for measuring progress in statistical language modeling. In Proceedings of INTERPSEECH, pages 2635–2639. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. SIGKDD Explorations Newsletter, 19(2):25–35. Valeriu Codreanu, Damian Podareanu, and Vikram A. Saletore. 2017. Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train. CoRR, abs/1711.04291. Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391–407. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Wenchao Du and Alan Black. 2018. Data augmentation for neural online chats response selection. In Proceedings of the 2nd International Workshop on Search-Oriented Conversational AI, pages 52–58. Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: A corpus for adding memory to goal-oriented dialogue systems. In Proceedings of SIGDIAL, pages 207– 219. Milica Gaši´c, Nikola Mrkši´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Policy committee for adaptation in multi-domain spoken dialogue systems. In Proceedings of ASRU, pages 806–812. Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch SGD: Training ImageNet in 1 hour. CoRR, abs/1706.02677. Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of WMT, pages 165–176. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of ICCV, pages 1026–1034. Matthew Henderson, Rami Al-Rfou, Brian Strope, YunHsuan Sung, László Lukács, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. 2017. Efficient natural language response suggestion for smart reply. CoRR, abs/1705.00652. Matthew Henderson, Pawel Budzianowski, Iñigo Casanueva, Sam Coope, Daniela Gerz, Girish Kumar, Nikola Mrkši´c, Georgios Spithourakis, Pei-Hao Su, Ivan Vuli´c, and Tsung-Hsien Wen. 2019. A repository of conversational datasets. In Proceedings of the 1st Workshop on Natural Language Processing for Conversational AI. 5402 Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014a. The Second Dialog State Tracking Challenge. In Proceedings of SIGDIAL, pages 263– 272. 
Matthew Henderson, Blaise Thomson, and Steve Young. 2014b. Word-based dialog state tracking with recurrent neural networks. In Proceedings of SIGDIAL, pages 292–299. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of ACL, pages 328–339. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of ACL, pages 1681–1691. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. CoRR, abs/1408.6988. Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, László Lukács, Marina Ganea, Peter Young, and Vivek Ramavajjala. 2016. Smart Reply: Automated response suggestion for email. In Proceedings of KDD, pages 955–964. Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. CoRR, abs/1706.02515. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT, pages 110–119. Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end taskcompletion neural dialogue systems. In Proceedings of IJCNLP, pages 733–743. Xiujun Li, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018. Microsoft dialogue challenge: Building endto-end task-completion dialogue systems. CoRR, abs/1807.11125. Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of LREC. Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proceedings of EACL, pages 1–10. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. CoRR, abs/1901.11504. Ryan Lowe, Nissan Pow, Iulian V. Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of SIGDIAL, pages 285–294. Ryan Thomas Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. 2017. Training end-to-end dialogue systems with the ubuntu dialogue corpus. Dialogue & Discourse, 8(1):31–65. Laurens van der Maaten and Geoffrey E. Hinton. 2012. Visualizing non-metric similarities in multiple maps. Machine Learning, 87(1):33–55. Yury A. Malkov and D. A. Yashunin. 2016. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs. CoRR, abs/1603.09320. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press. Dominic Masters and Carlo Luschi. 2018. Revisiting small batch training for deep neural networks. CoRR, abs/1804.07612. Nikola Mrkši´c and Ivan Vuli´c. 2018. Fully statistical neural belief tracking. In Proceedings of ACL, pages 108–113. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gaši´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. In Proceedings of ACL, pages 794–799. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Tsung-Hsien Wen, and Steve Young. 2017a. 
Neural Belief Tracker: Data-driven dialogue state tracking. In Proceedings of ACL, pages 1777–1788. Nikola Mrkši´c, Ivan Vuli´c, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gaši´c, Anna Korhonen, and Steve Young. 2017b. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, pages 314–325. Preslav Nakov, Lluís Màrquez, Walid Magdy, Alessandro Moschitti, Jim Glass, and Bilal Randeree. 2015. SemEval-2015 Task 3: Answer selection in community question answering. In Proceedings of SEMEVAL, pages 269–281. Preslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. SemEval2016 Task 3: Community question answering. In Proceedings of SEMEVAL, pages 525–545. Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2018. Retrieve-and-read: 5403 Multi-task learning of information retrieval and reading comprehension. In Proceedings of CIKM, pages 647–656. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. CoRR, abs/1901.04085. Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. CoRR, abs/1701.06548. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical Report, OpenAI. Prajit Ramachandran, Barret Zoph, and Quoc V. Le. 2017. Searching for activation functions. CoRR, abs/1710.05941. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Proceedings of NAACL-HLT, pages 172–180. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389. Nicolas Schrading, Cecilia Ovesdotter Alm, Ray Ptucha, and Christopher Homan. 2015. An analysis of domestic abuse discourse on Reddit. In Proceedings of EMNLP, pages 2577–2583. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of AAAI, pages 3776–3784. Pararth Shah, Dilek Hakkani-Tür, Bing Liu, and Gokhan Tür. 2018. Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning. In Proceedings of NAACL-HLT, pages 41–51. Samuel L. Smith, Pieter-Jan Kindermans, and Quoc V. Le. 2017. Don’t decay the learning rate, increase the batch size. In Proceedings of ICLR. Yiping Song, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, Dongyan Zhao, and Rui Yan. 2018. An ensemble of retrieval-based and generation-based humancomputer conversation systems. In Proceedings of IJCAI, pages 4382–4388. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of CVPR, pages 2818–2826. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. In Proceedings of WSDM, pages 267–275. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 6000– 6010. Oriol Vinyals and Quoc Le. 2015. A Neural Conversational Model. In Proceedings of ICML Deep Learning Workshop. Mengting Wan and Julian McAuley. 2016. Modeling ambiguity, subjectivity, and diverging viewpoints in opinion question answering systems. In Proceedings of ICDM, pages 489–498. Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In Proceedings of EMNLP, pages 935–945. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkši´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of EMNLP, pages 1711–1721. Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve J. Young. 2017a. Latent intention dialogue models. In Proceedings of ICML, pages 3732–3741. Tsung-Hsien Wen, David Vandyke, Nikola Mrkši´c, Milica Gaši´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017b. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of EACL, pages 438–449. Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87–92. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of ACL, pages 496–505. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. CoRR, abs/1904.02232. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In Proceedings of SIGIR, pages 55–64. Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Simple applications of BERT for ad hoc document retrieval. CoRR, abs/1903.10972. 5404 Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-Yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversations. In Proceedings of The 3rd Workshop on Representation Learning for NLP, pages 164–174. Steve Young. 2010. Still talking to machines (cognitively speaking). In Proceedings of INTERSPEECH, pages 1–10. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of ACL, pages 1118–1127.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405–5415 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5405 Collaborative Dialogue in Minecraft Anjali Narayan-Chen∗ Prashant Jayannavar∗ University of Illinois at Urbana-Champaign {nrynchn2, paj3, juliahmr}@illinois.edu Julia Hockenmaier Abstract We wish to develop interactive agents that can communicate with humans to collaboratively solve tasks in grounded scenarios. Since computer games allow us to simulate such tasks without the need for physical robots, we define a Minecraft-based collaborative building task in which one player (A, the Architect) is shown a target structure and needs to instruct the other player (B, the Builder) to build this structure. Both players interact via a chat interface. A can observe B but cannot place blocks. We present the Minecraft Dialogue Corpus, a collection of 509 conversations and game logs. As a first step towards our goal of developing fully interactive agents for this task, we consider the subtask of Architect utterance generation, and show how challenging it is. 1 Introduction Building interactive agents that can successfully communicate with humans about the physical world around them to collaboratively solve tasks in this environment is a long-sought goal of AI (e.g. Winograd, 1971). Such situated dialogue poses challenges that go beyond what is required for the slot-value filling tasks performed by standard dialogue systems (e.g. Kim et al., 2016, 2017; Budzianowski et al., 2018) or chatbots (e.g. Ritter et al., 2010; Schrading et al., 2015; Lowe et al., 2015), as well as for so-called visual dialogue where users talk about a static image (Das et al., 2017) or video-context dialogue where users interact in a chat room while viewing a live-streamed video (Pasunuru and Bansal, 2018). It requires the ability to refer to real-world objects and spatial relations that depend on the current position of the speakers as well as changes in the environment. Due to the expense of actual human-robot communication (e.g. Tellex et al., 2011; Thomason et al., ∗Both authors equally contributed to the paper. 2015; Misra et al., 2016; Chai et al., 2018), simulated environments that allow easier experimentation are commonly used (Koller et al., 2010; Chen and Mooney, 2011; Janarthanam et al., 2012). In this paper, we therefore introduce the Minecraft Collaborative Building Task, in which pairs of users control avatars in the Minecraft virtual environment and collaboratively build 3D structures in a Blocks World-like scenario while communicating solely via text chat (Section 3). We have built a data collection platform and have used it to collect the Minecraft Dialogue Corpus, consisting of 509 human-human written dialogues, screenshots and complete game logs for this task (Section 4). While our ultimate goal is to develop fully interactive agents that can collaborate with humans successfully on this task, we first consider the subtask of Architect utterance generation (Section 5) and describe a set of baseline models that encode both the dialogue history (Section 6) and the world state (Section 7). Section 8 describes our experiments. Our analysis (Section 9) highlights the challenges of this task. The corpus and platform as well as our models are available for download. 
1 2 Related Work Our work is partly inspired by the HCRC Map Task Corpus (Anderson et al., 1991), which consists of route-following dialogues between an Instruction Giver and a Follower who are given maps of an environment that differ in significant details. Our task also features asymmetric roles and levels of information between the two speakers, but operates in 3D space and focuses on the creation of structures rather than navigation around existing ones. Koller et al. (2010) design a challenge where systems with access to symbolic world rep1 http://juliahmr.cs.illinois.edu/Minecraft 5406 resentations and a route planner generate real-time instructions to guide users through a treasure hunt in a virtual 3D world. There is a resurgence of interest in Blocks World-like scenarios. Wang et al. (2017) let users define 3D voxel structures via a highly programmatic natural language. The interface learns to understand descriptions of increasing complexity, but does not engage in a back-and-forth dialogue with the user. Most closely related to our work are the corpora of Bisk et al. (2018, 2016a,b), which feature pairs of scenes involving simulated, uniquely labeled, 3D blocks annotated with single-shot instructions aimed at guiding an (imaginary) partner on how to transform an input scene into the target. In their scenario, the building area is always viewed from a fixed bird’s-eye perspective. Simpler versions of the data retain the grid-based assumption over blocks, and structures consist solely of numeric digits procedurally reconstructed along the horizontal plane. Later versions increase the task complexity significantly by incorporating human-generated, truly 3D structures and removed the grid assumption, as well as allowing for rotations of individual blocks. Their blocks behave like physical blocks, disallowing structures with floating blocks that are prevalent in our data. Our work differs considerably in a few other aspects: our corpus features two-way dialogue between an instructor and a real human partner; it also includes a wide range of perspectives as a result of using Minecraft avatars, rather than a fixed bird’s-eye perspective; and we utilize blocks of different colors, allowing for entire substructures to be identified (e.g., “the red pillar”). 3 Minecraft Collaborative Building Task Minecraft (https://minecraft.net/) is a popular multi-player game in which players control avatars to navigate in a 3D world and manipulate inherently block-like materials in order to build structures. Players can freely move, jump and fly, and they can choose between firstor third-person perspectives. Camera angles can be smoothly rotated by moving around or turning one’s avatar’s head up, down, and side-to-side, resulting in a wide range of possible viewpoints. Blocks World in Minecraft Minecraft provides an ideal setting to simulate Blocks World, although there are two key differences to physical toy blocks: Minecraft blocks can only be placed on a discrete 3D grid, and they do not need to obey gravity. That is, they do not need to be placed on the ground or on top of another block, but can be put anywhere as long as one of their sides touches another block. That neighboring block can later be removed, allowing the second block (and any structure supported by it) to “float”. Players need to identify when such supporting blocks need to be added or removed. Collaborative Building Task We define the Collaborative Building Task as a two-player game between an Architect (A) and a Builder (B). 
A is given a target structure (Target) and has to instruct B via a text chat interface to build a copy of Target on a given build region. A and B can communicate back and forth via chat throughout the game (e.g. to resolve confusions or to correct B’s mistakes). B is given access to an inventory of 120 blocks of six given colors that it can place and remove. A can observe B and move around in its world, allowing it to provide instructions from varying perspectives. But A cannot move blocks, and remains invisible to B. The task is complete when the structure built by B (Built) matches Target, invariant to translations within the horizontal plane and rotations about the vertical axis. Built also needs to lie completely within the boundaries of the predefined build region. Although human players were able to complete each structure successfully, this task is not trivial. Figure 1 shows the perspectives seen by each player in the Minecraft client. This example from our corpus shows some of the challenges of this task. A often provides instructions that they think are sufficient, but leave B still clearly confused, indicated either by B’s lack of initiative to start building or a confused response. Once a multistep instruction is understood, B also needs to plan a sequence of steps to follow that instruction; in many cases, B chooses clearly suboptimal solutions, resulting in large amounts of redundancy in block movements. A misinterpreted instruction may also lead to a whole sequence of blocks being misplaced by B (either due to miscommunication, or because B made an educated guess on how to proceed) until A decides to intervene (in the example, this can be seen with the built yellow 6). A could also misinterpret the target structure, giving B incorrect instructions that would later need to be rectified. This illustrates the challenges involved 5407 Figure 1: In the Minecraft Collaborative Building Task, the Architect (A) has to instruct a Builder (B) to build a target structure. A can observe B, but remains invisible to B. Both players communicate via a chat interface. (NB: We show B’s actions in the dialogue as a visual aid to the reader.) in designing an interactive agent for this task: the Architect needs to provide clear instructions; the Builder needs to identify when more information is required, and both agents may need to design efficient plans to construct complex structures. 4 The Minecraft Dialogue Corpus The Minecraft Dialogue Corpus consists of 509 human-human dialogues and game logs for the Collaborative Building Task. This section describes this corpus and our data collection process. Further details are in the supplementary materials. 4.1 Data Collection Procedure Data was collected over the course of 3 weeks (approx. 62 hours overall). 40 volunteers, both undergraduate and graduate students with varying levels of proficiency with Minecraft, participated in 1.5 hour sessions in which they were paired up and asked to build various predefined structures within a 11 × 11 × 9 sized build region. Builders began with an inventory of 6 colors of blocks and 20 blocks of each color. After a brief warm-up round to become familiar with the interface, participants were asked to successfully build as many structures as they could manage within this time frame. On average, each game took 8.55 minutes. Architects were encouraged not to overwhelm the Builder with instructions and to allow their partner a chance to respond or act before moving on. 
Builders were instructed not to place blocks outside the specified build region and to stay as faithful as possible to the Architect’s instructions. Both players were asked to communicate as naturally as possible while avoiding idle chit-chat. Participants were allowed to complete multiple sessions if desired; we ensured that an individual never saw the same target structure twice, and attempted as much as possible to pair them with a previously unseen partner. While some individuals indicated a preference towards either the Architect or Builder roles, roles were, for the most part, assigned in such a way that each individual who participated in repeat sessions played both roles equally often. Each participant is assigned a unique anonymous ID across sessions. 4.2 Data Structures and Collection Platform Microsoft’s Project Malmo (Johnson et al., 2016) is an AI research platform that provides an API for Minecraft agents and the ability to log, save, and load game states. We have extended Malmo into a data collection platform. We represent the progression of each game (involving the construction of a single target structure by an Architect and 5408 Builder pair) as a discrete sequence of game states. Although Malmo continuously monitors the game, we selectively discretize this data by only saving snapshots, or “observations,” of the game state at certain triggering moments (whenever B picks up or puts down a block or when either player sends a chat message). This allows us to reduce the amount of (redundant) data to be logged while preserving significant game state changes. Each observation is a JSON object that contains the following information: 1) a time stamp, 2) the chat history up until that point in time, 3) B’s position (a tuple of real-valued x, y, z coordinates as well as pitch and yaw angles, representing the orientation of their camera), 4) B’s block inventory, 5) the locations of the blocks in the build region, 6) screenshots taken from A’s and B’s perspectives. Whenever B manipulates a block, we also capture screenshots from four invisible “Fixed Viewer” clients hovering around the build region at fixed angles. 4.3 Data Statistics and Analysis Overall statistics The Minecraft Dialogue Corpus contains 509 human-human dialogues (15,926 utterances, 113,116 tokens) and game logs for 150 target structures of varying complexity (min. 6 blocks, max. 68 blocks, avg. 23.5 blocks). We collected a minimum of three dialogues per structure. The training, test and development sets consist of 85 structures (281 dialogues), 39 structures (137 dialogues), and 29 structures (101 dialogues) respectively. Dialogues for the same structure are fully contained within a single split; structures in training are thus guaranteed to be unseen in test. On average, dialogues contain 30.7 utterances: 22.5 Architect utterances (avg. length 7.9 tokens), 8.2 Builder utterances (avg. length 2.9 tokens), and 49.5 Builder block movements. Dialogue length varies greatly with the complexity of the target structure (not just the number of blocks, but whether it requires floating blocks or contains recognizable substructures). Floating blocks Blocks in Minecraft can be placed anywhere as long as they touch an existing block (or the ground). If such a supporting block is later removed, the remaining block (and any structure supported by it) will continue to “float” in place. This makes it possible to produce complex designs. 53.6% of our target structures contain such floating blocks. 
Instructions for these structures varied greatly, ranging from step-by-step instructions involving temporary supporting blocks to single-shot descriptions such as, simply, “build a floating yellow block” (sufficient for a veteran Minecraft player, but not necessarily for a novice). Referring expressions and ellipsis Architects made frequent use of implicit arguments and references, relying heavily on the Builder’s current perspective and their most recent actions for reference resolution. For instance, Architect instructions could include references such as “two more in the same direction,” “one up,” “two towards you,” and “one right from the last thing you built.” Recognizable shapes and sub-structures Some target structures were designed with commonplace objects in mind. Some Architects took advantage of this in their instructions, ranging from straightforward (‘L’-shapes, “staircases”) to more eccentric descriptions (“either a chicken or a gun turret,” “a heart that looks diseased,” “a silly multicolored worm”). To avoid slogging through block-by-block instructions, Architects frequently used such names to refer to sub-elements of the target structure. Some even defined new terms that get re-used across utterances: A: i will refer to this shape as r-windows from here on out... B: okay A: please place the first green block in the right open space of the blue r-window. Builder utterances Even though the Architect shouldered the large responsibility of describing the unseen structure, the Builder played an active role in continuing and clarifying the dialogue, especially for more complex structures. Builders regularly took initiative during the course of a dialogue in a variety of ways, including verification questions (“is this ok?”), clarification questions (“is it flat?” or “did I clean it up correctly?”), status updates (“i’m out of red blocks”), suggestions (“feel free to give more than one direction at a time if you’re comfortable,” “i’ll stay in a fixed position so it’s easier to give me directions with respect to what i’m looking at”), or extrapolation (“I think I know what you want. Let me try,” then continuing to build without explicit instruction). 5 Architect Utterance Generation Task Although the Minecraft Dialogue Corpus was motivated by our ultimate goal of building agents that can successfully play an entire collaborative building game as Architect or Builder, we first con5409 Figure 2: An overview of the full model combining global and local world representation variants. sider the task of Architect utterance generation: given access to the entire game state context leading up to a certain point in a human-human game at which the human Architect spoke next, we aim to generate a suitable Architect utterance. Architect utterance generation is a much simpler task than developing a fully interactive Architect or Builder, but it still captures some of the essential difficulties of the Architect’s role. Since Architects need to be able to give instructions, correct Builders’ mistakes and answer their questions, they need the ability to compare the built structure against the target structure, and to understand the preceding dialogue. We also believe that the models developed for this task could be leveraged to at least bootstrap a fully interactive Architect (which will also need to decide when to speak, as well as deal with potentially much noisier dialogue histories than those we are considering here). 
Although future work should consider the task of Builder utterance generation, the challenges in creating a fully interactive Builder lie more in the need to understand and execute complex instructions in a discourse and game context, to know when it is appropriate to ask clarification questions and to understand the Architect’s answers, than in the need to generate complex utterances. 6 Seq2Seq Architect Utterance Model We define a sequence of models for Architect utterance generation. Our most basic variant is a sequence-to-sequence model (Sutskever et al., 2014) that conditions the next utterance on the preFigure 3: A target structure (left) and corresponding built structure at a certain point in the game (right). ceding dialogue. Since Architects need to compare the current state of the build region against the target structure, we augment this model in the next section with world state information. Dialogue History Encoder We encode the entire dialogue history as a sequence of tokens in which each player’s utterances are contained within speaker-specific start and end tokens (<A>...</A> or <B>...</B>....). Each utterance corresponds to a single chat message, and may consist of multiple sentences. These tokens are fed through a word embedding layer and subsequently passed through a bidirectional RNN (Schuster and Paliwal, 1997) to produce an embedding of the entire dialogue history in the encoder RNN’s final hidden state. Output Utterance Decoder The output utterance is generated by a decoder RNN conditioned on the discourse context. In standard fashion, the final hidden state of the encoder RNN is used to initialize the hidden state of the decoder RNN. 7 World State Representations To be able to give accurate instructions, the Architect requires a mental model of how the target structure can be constructed successfully given the current state of the built structure. Since the Builder’s world is not explicitly aligned to the target structure (our space does not contain any markers that would indicate cardinal directions or other landmarks, and we consider any built structure a success as long as it matches the target structure and fits completely into the Builder’s build region), this model must consider all possible translational and rotational alignment variants, although we assume it can ignore any sub-optimal alignments. For any given alignment, we compute 5410 the Hamming distance between the built structure and the target (the total number of blocks of each color to be placed and removed), and only retain those alignments that have the smallest distance to the target. Once the game has progressed sufficiently far, there is often only one optimal alignment between built and target structures, but in the early stages, a number of different optimal alignments may be possible. Our world state representation captures this uncertainty. Figure 3 depicts a target structure (left) and a point in the game at which a single red block has been placed (right). We can identify three potential paths (left, up, and down) to continue the structure by extending it along the four cardinal directions. A permissibility check disqualifies the option of extending to the right, as blocks would end up placed outside the build region. These remaining paths, considered equally likely, indicate the colors and locations of blocks to be placed (or removed). A summary of this information forms the basis of the input to our model. 
Computing the distance between structures. Computing the Hamming distance between the built and target structure under a given alignment also tells us which blocks need to be placed or removed. A structure S is a set of blocks (c, x, y, z). Each block has a color c and occupies a location (x, y, z) in absolute coordinate space (i.e., the coordinate system defined by the Minecraft client). A structure's position and orientation can be mutated by an alignment A in which S undergoes a translation A_T (shift) followed by a rotation A_R, denoted A(S) = A_R(A_T(S)). We only consider rotations about the vertical axis in 90-degree intervals, but allow all possible translations along the horizontal plane. The symmetric difference between the target T and a built structure S w.r.t. an alignment A, diff(T, S, A), consists of the set of blocks to be placed, B_p = A(T) − S, and the set of blocks to be removed from S, B_r = S − A(T):

diff(T, S, A) = B_p ∪ B_r

The cardinality |diff(T, S, A)| is the Hamming distance between A(T) and S.

Feasible next placements. Architects' instructions often concern the immediate next blocks to be placed. Since new blocks can only be feasibly placed if one of their faces touches the ground or another block, we also wish to capture which blocks B_n can be placed in the immediate next action. B_n, the set of blocks that can be feasibly placed, is a subset of B_p.

Block counters. To obtain a summary representation of the optimal alignments (without detailed spatial information), we represent each of the sets B_p and B_r (as well as B_n) of an alignment A as sets of counters over block colors (indicating how many blocks of each color remain to be placed [next] and to be removed). We compute the set of expected block counters for each color c ∈ {red, blue, orange, purple, yellow, green} and action a ∈ {p, r, n} as the average over all k optimal alignments A* = argmin_A |diff(T, S, A)|:

E[count_{c,a}] = (1/k) Σ_{i=1}^{k} count^{i}_{c,a}

With six colors and three sets of blocks (all placements, next placements, removals), we obtain an 18-dimensional vector of expected block counts.

7.1 Block Counter Models

We augment our basic seq2seq model with two variants of block counters that capture the current state of the built structure:

Global block counters are 18-dimensional vectors (capturing expected overall placements, next placements, and removals for each of the six colors) that are computed over the whole build region.

Local block counters. Since many Builder actions involve locations immediately adjacent to their last action, we construct local block counters that focus on and encode spatial information of this concentrated region. Here, we consider a 3 × 3 × 3 cube of block locations: those directly surrounding the location of the last Builder action as well as the last action itself. We compute a separate set of block counters for each of these 27 locations. Using the Builder's position and gaze, we deterministically assign a relative direction to each location that indicates its position relative to the last action in the Builder's perspective, e.g., "left", "top", "back-right", etc. The 27 18-dimensional block counters (one per location) are concatenated, using a fixed canonical ordering of the assigned directions.
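To make the alignment-based world state representation above concrete, the following sketch enumerates candidate alignments, computes diff(T, S, A), and averages per-color counters over the optimal alignments. It is a minimal illustration under assumed data structures (structures as sets of (color, x, y, z) tuples and a bounded search over horizontal shifts); it omits the build-region permissibility check and the feasible-next-placement set B_n, and it is not the code released with the corpus.

```python
from collections import Counter
from itertools import product

COLORS = ["red", "blue", "orange", "purple", "yellow", "green"]  # assumed fixed order

def rotate90(block, times):
    """Rotate a (c, x, y, z) block about the vertical (y) axis by 90 degrees * times."""
    c, x, y, z = block
    for _ in range(times % 4):
        x, z = -z, x
    return (c, x, y, z)

def apply_alignment(structure, dx, dz, rot):
    """A(S) = A_R(A_T(S)): translate in the horizontal plane, then rotate."""
    shifted = {(c, x + dx, y, z + dz) for (c, x, y, z) in structure}
    return {rotate90(b, rot) for b in shifted}

def diff(target, built, dx, dz, rot):
    """Blocks to place (B_p) and blocks to remove (B_r) under one alignment."""
    aligned_target = apply_alignment(target, dx, dz, rot)
    return aligned_target - built, built - aligned_target

def expected_block_counters(target, built, max_shift=5):
    """Average per-color placement/removal counters over all optimal alignments."""
    candidates = []
    for dx, dz, rot in product(range(-max_shift, max_shift + 1),
                               range(-max_shift, max_shift + 1), range(4)):
        b_p, b_r = diff(target, built, dx, dz, rot)
        candidates.append((len(b_p) + len(b_r), b_p, b_r))  # Hamming distance first
    best = min(d for d, _, _ in candidates)
    optimal = [(b_p, b_r) for d, b_p, b_r in candidates if d == best]
    k = len(optimal)
    place, remove = Counter(), Counter()
    for b_p, b_r in optimal:
        place.update(c for (c, _, _, _) in b_p)
        remove.update(c for (c, _, _, _) in b_r)
    # This sketch returns a 12-dim vector (placements + removals); the full model
    # adds 6 more dimensions for feasible next placements, which are omitted here.
    return [place[c] / k for c in COLORS] + [remove[c] / k for c in COLORS]
```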
Adding block counters to the model To add block counters to out models, we found the best results by feeding the concatenated global and local 5411 counter vectors through a single fully-connected layer before concatenating them to the word embedding vector that is fed into the decoder at each time step (Figure 2). 8 Experimental Setup Data Our training, test and dev splits contain 6,548, 2,855, and 2,251 Architect utterances. Training We trained for a maximum of 40 epochs using the Adam optimizer (Kingma and Ba, 2015). During training, we minimize the sum of the cross entropy losses between each predicted and ground truth token. We stop training early when perplexity on the held-out validation set had increased monotonically for two epochs. All word embeddings were initialized with pretrained GloVe vectors (Pennington et al., 2014). We first performed grid search over model architecture hyperparameters (embedding layer sizes and RNN layer depths). Once the best-performing architecture was found, we then varied dropout parameters (Srivastava et al., 2014). More details can be found in the supplementary materials. Decoding We use beam search decoding to generate the utterance with the maximum loglikelihood score according to our model normalized by utterance length (beam size = 10). In order to promote diversity of generated utterances, we use a γ penalty (Li et al., 2016) of γ = 0.8. These parameters were found by a grid search on the validation set for our best model. 9 Results and Analysis We evaluate our models in three ways: we use automated metrics to assess how closely the generated utterances match the human utterances. For a random sample of 100 utterances per model, we use human evaluators to identify dialogue acts and to evaluate whether the generated utterances are correct in the given game context. Finally, we perform a qualitative analysis of our best model. 9.1 Automated Evaluation Metrics To evaluate how closely the generated utterances resemble the human utterances, we report standard BLEU scores (Papineni et al., 2002). We also compute (modified) precision and recall of a number of lists of domain-specific keywords that are instrumental to task success: colors, spatial relations, and other words that are highly indicative of dialogue acts (e.g., responding “yes” vs. “no”, instructing to “place” vs. “remove”, etc.). These lists also capture synonyms that are common in our data (e.g. “yes”/“yeah”), and were obtained by curating non-overlapping lists of words (with a frequency ≥10 across all data splits) that are appropriate to each category.2 We report precision and recall scores per category, and for an “all keywords” list consisting of the union of all category word lists. For each category, we reduce both human and generated utterances to those tokens that occur in the corresponding keyword list: “place another red left of the green” reduces to “red green” for color, to “left” for spatial relations and “place” for dialogue. For a given (reduced) generated sentence Sg and its associated (reduced) human utterance Sh, we calculate term-specific precision (and recall) as follows. Any token tg in Sg matches a token th in Sh if tg and th are identical or synonyms. Similar to BLEU’s modified unigram precision, once tg is matched to one token th, it cannot be used for further matches to other tokens within Sh. Counts are accumulated over the entire corpus to compute the ratio of matched to total tokens in Sg (or Sh). 
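The term-specific precision and recall described above can be sketched as follows. The keyword lists and synonym pairs shown here are tiny placeholders (the curated lists are in the paper's supplementary material), and the function names are ours.

```python
# Illustrative keyword lists; the curated lists (words with frequency >= 10
# across all splits) are given in the paper's supplementary material.
KEYWORDS = {
    "colors": {"red", "blue", "orange", "purple", "yellow", "green"},
    "spatial": {"left", "right", "top", "bottom", "front", "back"},
    "dialogue": {"place", "remove", "yes", "yeah", "no"},
}
SYNONYMS = {"yes": {"yeah"}, "yeah": {"yes"}}     # symmetric synonym pairs

def reduce_to_category(tokens, category):
    """Keep only the tokens that occur in the category's keyword list."""
    return [t for t in tokens if t in KEYWORDS[category]]

def matched_count(generated, reference):
    """Greedy one-to-one matching of generated tokens to reference tokens,
    allowing identical tokens or listed synonyms (cf. BLEU's modified precision)."""
    available = list(reference)
    matched = 0
    for tok in generated:
        for i, ref_tok in enumerate(available):
            if tok == ref_tok or ref_tok in SYNONYMS.get(tok, set()):
                matched += 1
                del available[i]
                break
    return matched

def keyword_precision_recall(pairs, category):
    """Corpus-level precision/recall over (generated, reference) token lists."""
    matches = total_gen = total_ref = 0
    for gen, ref in pairs:
        g, r = reduce_to_category(gen, category), reduce_to_category(ref, category)
        matches += matched_count(g, r)
        total_gen += len(g)
        total_ref += len(r)
    precision = matches / total_gen if total_gen else 0.0
    recall = matches / total_ref if total_ref else 0.0
    return precision, recall

gen = "place another red left of the green".split()
ref = "now place a red block to the left of the blue one".split()
print(keyword_precision_recall([(gen, ref)], "colors"))    # (0.5, 0.5)
```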
Ablation study Table 1 shows the results of an ablation study on the validation set. All model variants here share the same RNN parameters. While the individual addition of global and local block counters each see a slight boost in performance in precision and recall respectively, combining them as in our final model shows significant performance increase, especially on colors. Test set results We finetune our most basic and most complex model via a grid search over all architectural parameters and dropout values on the validation set. The best model’s results on the test set are shown in Table 2. Our full model shows noticeable improvements on each of our metrics over the baseline. Most promising is again the significant increase in performance on colors, indicating that the block counters capture necessary information about next Builder actions. 9.2 Human Evaluation In order to better evaluate the quality of generated utterances as well as benchmark human performance, we performed a small-scale human evaluation of Architect utterances. We asked 3 hu2 These word lists are in the supplementary materials. 5412 BLEU Precision / Recall Metric B-1 B-2 B-3 B-4 all keywords colors spatial dialogue seq2seq 14.9 6.9 3.8 2.1 12.0 / 10.3 8.4 / 12.1 9.9 / 9.1 16.5 / 19.1 + global only 16.1 7.7 4.1 2.4 12.9 / 11.6 14.4 / 15.5 8.8 / 7.0 19.1 / 18.8 + local only 16.0 7.9 4.5 2.6 13.5 / 13.8 13.3 / 23.5 9.5 / 11.3 19.3 / 22.0 + global & local 16.2 8.1 4.7 2.8 14.5 / 13.8 14.8 / 23.3 10.7 / 9.5 17.9 / 20.6 Table 1: BLEU score and term-specific precision and recall ablation study on the validation set. BLEU Precision / Recall Metric B-1 B-2 B-3 B-4 all keywords colors spatial dialogue seq2seq 15.3 7.8 4.5 2.8 11.8 / 11.1 8.1 / 17.0 9.3 / 8.6 17.9 / 19.3 + global & local 15.7 8.1 4.8 2.9 13.5 / 14.4 14.9 / 28.7 8.7 / 8.7 18.5 / 19.9 Table 2: BLEU and term-specific precision and recall scores of the seq2seq and the full model on the test set. man participants who had previously completed the Minecraft Collaborative Building Task to evaluate 100 randomly sampled scenarios from the test set. Each scenario was reenacted from an actual human-human game by simulating the context of dialogue and Builder actions in Minecraft. Then, we presented 3 candidate Architect utterances to follow that context (one each generated from the models in Table 2 as well as the original human utterance) to the evaluators in randomized order. Here, we analyze a subset of results on coarse annotation of dialogue acts and utterance correctness. More details on the full evaluation framework, including descriptions of evaluation criteria and inter-annotator agreement statistics, are included in the supplementary materials. Dialogue acts Given a list of six predefined coarse-grained dialogue acts (including Instruct B, Describe Target, etc.; see the supplementary material for full details), evaluators were asked to choose all dialogue acts that categorized a candidate utterance. An utterance could belong to any number of categories; e.g., “great! now place a red block” is both a confirmation as well as an instruction. Results can be found in Table 3. These results show a significantly higher diversity of utterance types generated by humans. Humans provided instructions only about half of the time, and devoted more energy to providing higher-level descriptions of the target, responding to the Builder’s actions and queries, and rectifying mistakes. 
On the other hand, even the improved model failed to capture this, mainly generating instructions even if it was inappropriate or unhelpful to do so. Utterance correctness Given a window of game context (consisting of at least the last seven Builder’s and Architect’s actions, but always including the previous Architect’s utterance) and access to the target structure to be built, evaluators were asked to rate the correctness of an utterance immediately following that context with respect to task completion. For an utterance to be fully correct, information contained within it must both be consistent with the current state of the world as well as not lead the Builder off-course from the target. Utterances could be considered partially correct if some described elements (e.g. colors) were accurate, but other incorrect elements precluded full correctness. Otherwise, utterances could be deemed incorrect (if wildly off-course) or N/A (if there was not enough information). Results can be found in Table 4. Unsurprisingly, without access to world state information, the baseline model performs poorly, conveying incorrect information about half of the time. With access to a simple world representation, our full model shows marked improvement on generating both fully and partially correct utterances. Finally, human performance sets a high bar; when not engaging in chitchat or correcting typos, humans consistently produce fully correct utterances constructive towards task completion. 9.3 Qualitative Analysis Here, we use examples to illustrate different aspects of our best model’s utterances. Identifying the game state In the course of a game, players progress through different states. In the human-human data, dialogue is peppered with context cues (greetings, questions, apologies, in5413 Describe Answer Confirm B’s Correct/ Model Instruct B Target question actions/plans clarify A/B Other seq2seq 76.0 12.0 7.0 9.0 3.0 4.0 + global & local 72.0 14.0 8.0 9.0 3.0 4.0 human 47.0 14.0 12.0 17.0 23.0 8.0 Table 3: Percentage of utterances categorized as a given dialogue act. Labels were determined per dialogue act by majority vote across three human evaluators. An utterance can belong to multiple dialogue acts. Model Full Partial None N/A seq2seq 14.0 28.0 48.0 10.0 + global & local 25.0 36.0 32.0 7.0 human 89.0 2.0 0.0 9.0 Table 4: Percentage of utterances deemed correct by human evaluators. structions to move or place blocks) that indicate the flow of a game. Our model is able to capture some of these aspects. It often begins games with an instruction like “we’ll start with blue”, and may end them with “ok we’re done!” (although it occasionally continues with further instructions, e.g “great! now we’ll do the same thing on the other side”.) It often says “perfect!” immediately followed by a new instruction which indicates the model’s ability to acknowledge a Builder’s previous actions before continuing. The model often describes the type of the next required action correctly (even if it makes mistakes in the specifics of that action): it generated “remove the bottom row” when the ground truth was “okay so now get rid of the inner most layer of purple in the square”. Predicting block colors and spatial relations Generated utterances often identify the correct color of blocks, e.g “then place a red block on top of that” in a context when the the next placements include a layer of red blocks (ground truth utterance: “the second level of the structure consists wholly of red blocks. 
start by putting a red block on each orange block”.) Less frequently, the model is also able to predict accurate spatial relations (“perfect! now place a red block to the left of that”) for referent blocks. Utterance diversity and repetition Generated utterances lack diversity: the pattern “a x b” (for a rectangle of size a × b) is almost exclusively used to describe squares (an extremely common shape in our data). Utterances are mostly fluent, but sometimes contain repeats: “okay, on top of the blue block, put a blue block on top of the blue” or “yes, now, purple, purple, purple, ...” 10 Conclusion and Future Work The Minecraft Collaborative Building Task provides interesting challenges for interactive agents: they must understand and generate spatially-aware dialogue, execute instructions, identify and recover from mistakes. As a first step towards the goal of developing fully interactive agents for this task, we considered the subtask of Architect utterance generation. To give accurate, high-level instructions, Architects need to align the Builder’s world state to the target structure and identify complex substructures. We show that models that capture some world state information improve over naive baselines. Richer models (e.g. CNNs over world states, attention mechanisms (Bahdanau et al., 2015), memory networks (Bordes et al., 2017)) and/or explicit semantic representations should be able to generate better utterances. Clearly, much work remains to be done to create actual agents that can play either role interactively against a human. The Minecraft Dialogue Corpus as well as the Malmo platform and our extension of it enable many such future directions. Our platform can also be extended to support fully interactive scenarios that may involve a human player, measure task completion, or support other training regimes (e.g. reinforcement learning). Acknowledgements We would like to thank the reviewers for their valuable comments. This work was supported by Contract W911NF-15-1-0461 with the US Defense Advanced Research Projects Agency (DARPA) Communicating with Computers Program and the Army Research Office (ARO). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. 5414 References Anne H Anderson, Miles Bader, Ellen Gurman Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, Stephen Isard, Jacqueline Kowtko, Jan McAllister, Jim Miller, et al. 1991. The HCRC map task corpus. Language and speech, 34(4):351–366. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Yonatan Bisk, Daniel Marcu, and William Wong. 2016a. Towards a dataset for human computer communication via grounded language acquisition. In AAAI Workshop: Symbiotic Cognitive Systems. Yonatan Bisk, Kevin Shih, Yejin Choi, and Daniel Marcu. 2018. Learning interpretable spatial operations in a rich 3D Blocks World. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5028–5036. Yonatan Bisk, Deniz Yuret, and Daniel Marcu. 2016b. Natural language communication with robots. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 751–761, San Diego, California. Association for Computational Linguistics. Antoine Bordes, Y.-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Joyce Y. Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, and Guangyue Xu. 2018. Language to action: Towards interactive task learning with physical agents. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pages 2–9. International Joint Conferences on Artificial Intelligence Organization. David Chen and Raymond Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the TwentyFifth AAAI Conference on Artificial Intelligence, pages 859–865. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 326– 335. Srinivasan Janarthanam, Oliver Lemon, and Xingkun Liu. 2012. A web-based evaluation framework for spatial instruction-giving systems. In Proceedings of the ACL 2012 System Demonstrations, pages 49– 54, Jeju Island, Korea. Association for Computational Linguistics. Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. 2016. The Malmo platform for artificial intelligence experimentation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pages 4246–4247. Seokhwan Kim, Luis Fernando D’Haro, Rafael E Banchs, Jason D Williams, and Matthew Henderson. 2017. The fourth dialog state tracking challenge. In Dialogues with Social Robots, pages 435–449. Springer. Seokhwan Kim, Luis Fernando D’Haro, Rafael E Banchs, Jason D Williams, Matthew Henderson, and Koichiro Yoshino. 2016. The fifth dialog state tracking challenge. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 511–517. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Alexander Koller, Kristina Striegnitz, Donna Byron, Justine Cassell, Robert Dale, Johanna Moore, and Jon Oberlander. 2010. The first challenge on generating instructions in virtual environments. In Empirical Methods in Natural Language Generation, pages 328–352, Berlin, Heidelberg. SpringerVerlag. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. 
In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. Dipendra K. Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2016. Tell me Dave: Contextsensitive grounding of natural language to manipulation instructions. The International Journal of Robotics Research, 35(1-3):281–300. 5415 Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ramakanth Pasunuru and Mohit Bansal. 2018. Gamebased video-context dialogue. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 125–136, Brussels, Belgium. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180, Los Angeles, California. Association for Computational Linguistics. Nicolas Schrading, Cecilia Ovesdotter Alm, Ray Ptucha, and Christopher Homan. 2015. An analysis of domestic abuse discourse on Reddit. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2577– 2583, Lisbon, Portugal. Association for Computational Linguistics. M. Schuster and K. K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew Walter, Ashis Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 1507–1514. Jesse Thomason, Shiqi Zhang, Raymond J Mooney, and Peter Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), pages 1923–1929. Sida I. Wang, Samuel Ginn, Percy Liang, and Christopher D. Manning. 2017. Naturalizing a programming language via interactive learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 929–938, Vancouver, Canada. Association for Computational Linguistics. Terry Winograd. 1971. Procedures as a representation for data in a computer program for understanding natural language. Technical report, MIT. Cent. Space Res.
2019
537
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5416–5426 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5416 Neural Response Generation with Meta-Words Can Xu1, Wei Wu1∗, Chongyang Tao2, Huang Hu1, Matt Schuerman3, and Ying Wang3 1Microsoft Corporation, Beijing, China 2Institute of Computer Science and Technology, Peking University, Beijing, China 3Microsoft Corporation, Redmond, Washington 1,3{caxu,wuwei,huahu,matthes,Ying.Wang}@microsoft.com [email protected] Abstract We present open domain response generation with meta-words. A meta-word is a structured record that describes various attributes of a response, and thus allows us to explicitly model the one-to-many relationship within open domain dialogues and perform response generation in an explainable and controllable manner. To incorporate meta-words into generation, we enhance the sequence-to-sequence architecture with a goal tracking memory network that formalizes meta-word expression as a goal and manages the generation process to achieve the goal with a state memory panel and a state controller. Experimental results on two large-scale datasets indicate that our model can significantly outperform several state-ofthe-art generation models in terms of response relevance, response diversity, accuracy of oneto-many modeling, accuracy of meta-word expression, and human evaluation. 1 Introduction Human-machine conversation is a fundamental problem in NLP. Traditional research focuses on building task-oriented dialog systems (Young et al., 2013) to achieve specific user goals such as restaurant reservation through limited turns of dialogues within specific domains. Recently, building a chatbot for open domain conversation (Vinyals and Le, 2015) has attracted increasing attention, not only owing to the availability of large amount of human-human conversation data on internet, but also because of the success of such systems in real products such as the social bot XiaoIce (Shum et al., 2018) from Microsoft. A common approach to implementing a chatbot is to learn a response generation model within an encoder-decoder framework (Vinyals and Le, ∗Corresponding author. Message: last week I have a nice trip to New York! Meta-word: Act: yes-no question | Len: 8 | Copy: true | Utts: false | Spe: medium Response 1: Is New York more expensive than California? Meta-word: Act: wh-question | Len: 17 | Copy: false | Utts: true | Spe: high Response 2: Cool, sounds great! What is the tallest building in this city, Chrysler building? Meta-word: Act: statement | Len: 13 | Copy: false | Utts: true | Spe: low Response 3: I don’t know what you are talking about. But it seems good. Table 1: An example of response generation with metawords. The underlined word means it is copied from the message, and the word in bold means it corresponds to high specificity. 2015; Shang et al., 2015). Although the architecture can naturally model the correspondence between a message and a response, and is easy to extend to handle conversation history (Serban et al., 2016; Xing et al., 2018) and various constraints (Li et al., 2016; Zhou et al., 2018), it is notorious for generating safe responses such as “I don’t know” and “me too” in practice. A plausible reason for the “safe response” issue is that there exists one-to-many relationship between messages and responses. One message could correspond to many valid responses and vice versa (Zhang et al., 2018a). 
The vanilla encoder-decoder architecture is prone to memorize high-frequency patterns in data, and thus tends to generate similar and trivial responses for different messages. A typical method for modeling the relationship between messages and responses is to introduce latent variables into the encoder-decoder framework (Serban et al., 2017; Zhao et al., 2017; Park et al., 2018). It is, however, difficult to explain what relationship a latent variable represents, nor one can control responses to generate by manipulating the latent variable. Although a recent study (Zhao et al., 2018) replaces continuous latent variables with discrete ones, it still needs a lot of post human effort to explain the meaning of the variables. In this work, we aim to model the one-to-many relationship in open domain dialogues in an explainable and controllable way. Instead of using 5417 latent variables, we consider explicitly representing the relationship between a message and a response with meta-words1. A meta-word is a structured record that characterizes the response to generate. The record consists of a group of variables with each an attribute of the response. Each variable is in a form of (key, type, value) where “key” defines the attribute, “value” specifies the attribute, and “type” ∈{r, c} indicates whether the variable is real-valued (r) or categorical (c). Given a message, a meta-word corresponds to one kind of relationship between the message and a response, and by manipulating the meta-word (e.g., values of variables or combination of variables), one can control responses in a broad way. Table 1 gives an example of response generation with various meta-words, where “Act”, “Len”, “Copy”, “Utts”, and “Spe” are variables of a meta-word and refer to dialogue act, response length (including punctuation marks), if copy from the message, if made up of multiple utterances, and specificity level (Zhang et al., 2018a) respectively2. Advantages of response generation with meta-words are three-folds: (1) the generation model is explainable as the meta-words inform the model, developers, and even end users what responses they will have before the responses are generated; (2) the generation process is controllable. The metaword system acts as an interface that allows developers to customize responses by tailoring the set of attributes; (3) the generation approach is general. By taking dialogue acts (Zhao et al., 2017), personas (Li et al., 2016), emotions (Zhou et al., 2018), and specificity (Zhang et al., 2018a) as attributes, our approach can address the problems in the existing literature in a unified form; and (4) generation-based open domain dialogue systems now become scalable, since the model supports feature engineering on meta-words. The challenge of response generation with meta-words lies in how to simultaneously ensure relevance of a response to the message and fidelity of the response to the meta-word. To tackle the challenge, we propose equipping the vanilla sequence-to-sequence architecture with a novel goal tracking memory network (GTMN) and crafting a new loss item for learning GTMN. GTMN 1We start from single messages. It is easy to extend the proposed approach to handle conversation history. 2For ease of understanding, we transformed “copy ratio” and “specificity” used in our experiments into categorical variables. sets meta-word expression as a goal of generation and dynamically monitors expression of each variable in the meta-word during the decoding process. 
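For a concrete picture of the record just described, a meta-word can be held in a small structure like the sketch below; the class and field names are ours, and the example values mirror the first response in Table 1 (where, for readability, copy ratio and specificity are shown in categorical form).

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class MetaWordVariable:
    key: str                                   # attribute name, e.g. "Act" or "Len"
    type: str                                  # "c" (categorical) or "r" (real-valued)
    value: Union[str, int, float, bool]

MetaWord = List[MetaWordVariable]

# A meta-word matching the first response of Table 1.
meta_word: MetaWord = [
    MetaWordVariable("Act", "c", "yes-no question"),   # dialogue act
    MetaWordVariable("Len", "c", 8),                   # response length
    MetaWordVariable("Copy", "c", True),               # copies words from the message
    MetaWordVariable("Utts", "c", False),              # a single utterance
    MetaWordVariable("Spe", "c", "medium"),            # specificity level
]
```

Controlling generation then amounts to editing these values, e.g., switching Act to a wh-question or raising Spe, before decoding.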
Specifically, GTMN consists of a state memory panel and a state controller where the former records status of meta-word expression and the latter manages information exchange between the state memory panel and the decoder. In decoding, the state controller updates the state memory panel according to the generated sequence, and reads out difference vectors that represent the residual of the meta-word. The next word from the decoder is predicted based on attention on the message representations, attention on the difference vectors, and the word predicted in the last step. In learning, besides the negative log likelihood, we further propose minimizing a state update loss that can directly supervise the learning of the memory network under the ground truth. We also propose a meta-word prediction method to make the proposed approach complete in practice. We test the proposed model on two large-scale open domain conversation datasets built from Twitter and Reddit, and compare the model with several state-of-the-art generation models in terms of response relevance, response diversity, accuracy of one-to-many modeling, accuracy of metaword expression, and human judgment. Evaluation results indicate that our model can significantly outperform the baseline models over most of the metrics on both datasets. Our contributions in this paper are three-folds: (1) proposal of explicitly modeling one-to-many relationship and explicitly controlling response generation in open domain dialogues with multiple variables (a.k.a., meta-word); (2) proposal of a goal tracking memory network that naturally allows a meta-word to guide response generation; and (3) empirical verification of the effectiveness of the proposed model on two large-scale datasets. 2 Related Work Neural response generation models are built upon the encoder-decoder framework (Sutskever et al., 2014). Starting from the basic sequence-tosequence with attention architecture (Vinyals and Le, 2015; Shang et al., 2015), extensions under the framework have been made to combat the “safe response” problem (Mou et al., 2016; Tao et al., 2018); to model the hierarchy of conversation history (Serban et al., 2016, 2017; Xing et al., 2018); 5418 to generate responses with specific personas or emotions (Li et al., 2016; Zhou et al., 2018); and to speed up response decoding (Wu et al., 2018). In this work, we also aim to tackle the “safe response” problem, but in an explainable, controllable, and general way. Rather than learning with a different objective (e.g., (Li et al., 2015)), generation from latent variables (e.g., (Zhao et al., 2017)), or introducing extra content (e.g., (Xing et al., 2017)), we explicitly describe relationship between message-response pairs by defining metawords and express the meta-words in responses through a goal tracking memory network. Our method allows developers to manipulate the generation process by playing with the meta-words and provides a general solution to response generation with specific attributes such as dialogue acts. Recently, controlling specific aspects in text generation is drawing increasing attention (Hu et al., 2017; Logeswaran et al., 2018). In the context of dialogue generation, Wang et al. (2017) propose steering response style and topic with human provided topic hints and fine-tuning on small scenting data; Zhang et al. (2018a) propose learning to control specificity of responses; and very recently, See et al. 
(2019) investigate how controllable attributes of responses affect human engagement with methods of conditional training and weighted decoding. Our work is different in that (1) rather than playing with a single variable like specificity or topics, our model simultaneously controls multiple variables and can take controlling with specificity or topics as special cases; and (2) we manage attribute expression in response generation with a principled approach rather than simple heuristics like in (See et al., 2019), and thus, our model can achieve better accuracy in terms of attribute expression in generated responses. 3 Problem Formalization Suppose that we have a dataset D = {(Xi, Mi, Yi)}N i=1, where Xi is a message, Yi is a response, and Mi = (mi,1, . . . , mi,l) is a meta-word with mi,j = (mi,j.k, mi,j.t, mi,j.v) the j-th variable and mi,j.k, mi,j.t, and mi,j.v the key, the type, and the value of the variable respectively. Our goal is to estimate a generation probability P(Y |X, M) from D, and thus given a new message X with a pre-defined meta-word M, one can generate responses for X according to P(Y |X, M). In this work, we assume that M is given as input for response generation. Later, we will describe how to obtain M with X. 4 Response Generation with Meta-Words In this section, we present our model for response generation with meta-words. We start from an overview of the model, and then dive into details of the goal tracking memory enhanced decoding. 4.1 Model Overview Figure 1 illustrates the architecture of our goal tracking memory enhanced sequence-tosequence model (GTMES2S). The model equips the encoder-decoder structure with a goal tracking memory network that comprises a state memory panel and a state controller. Before response decoding, the encoder represents an input message as a hidden sequence through a bi-directional recurrent neural network with gated recurrent units (biGRU) (Chung et al., 2014), and the goal tracking memory network is initialized by a meta-word. Then, during response decoding, the state memory panel tracks expression of the meta-word and gets updated by the state controller. The state controller manages the process of decoding at each step by reading out the status of meta-word expression from the state memory panel and informing the decoder of the difference between the status and the target of meta-word expression. Based on the message representation, the information provided by the state controller, and the generated word sequence, the decoder predicts the next word of the response. In the following section, we will elaborate the goal tracking memory enhanced decoding, which is the key to having a response that is relevant to the message and at the same time accurately reflects the meta-word. 4.2 Goal Tracking Memory Network The goal tracking memory network (GTMN) dynamically controls response generation according to the given meta-word via cooperation of the state memory panel and the state controller. It informs the decoder at the first time to what extend the meta-word has been expressed. For local attributes such as response length3, the dynamic control 3Local attributes refer to the attributes whose values are location sensitive during response generation. For example, 5419 KEY !" #$ !" 
length of the remaining sequence varies after each step of decoding. In contrast, some attributes, such as dialogue acts, are global attributes, as they are reflected by the entire response.

Figure 1: Architecture of goal tracking memory enhanced sequence-to-sequence model.

strategy is more reasonable than static strategies such as feeding the embedding of attributes to the decoder, as in the conditional training of (See et al., 2019). This is because if the goal is to generate a response with 5 words and 2 words have been decoded, then the decoder needs to know that there are 3 words left, rather than always memorizing that 5 words should be generated.

4.2.1 State Memory Panel

Suppose that the given meta-word M consists of l variables. The state memory panel M is then made up of l memory cells {M_i}_{i=1}^{l}, where ∀i ∈ {1, . . . , l}, M_i has the form (key, goal, value), denoted M_i.k, M_i.g, and M_i.v respectively. We define Rep(·) as a representation function, formulated as

$$\mathrm{Rep}(m_i.k) = B(m_i.k), \qquad
\mathrm{Rep}(m_i.v) =
\begin{cases}
\sigma(B(m_i.v)), & m_i.t = c \\
m_i.v \times \sigma(B(m_i.k)), & m_i.t = r,
\end{cases} \qquad (1)$$

where m_i is the i-th variable of M, σ(·) is a sigmoid function, and B(·) returns the bag-of-words representation of a piece of text. M_i is then initialized as

$$M_i.k = \mathrm{Rep}(m_i.k), \qquad M_i.g = \mathrm{Rep}(m_i.v), \qquad M_i.v_0 = \mathbf{0}. \qquad (2)$$

M_i.k ∈ R^d stores the key of m_i, and M_i.g ∈ R^d stores the goal for the expression of m_i in generation. Thus, these two items are frozen during decoding. M_i.v ∈ R^d corresponds to the gray part of the progress bar in Figure 1 and represents the progress of the expression of m_i in decoding. Hence, it is updated by the state controller after each step of decoding.

4.2.2 State Controller

As illustrated by Figure 1, the state controller sits between the encoder and the decoder, and manages the interaction between the state memory panel and the decoder. Let s_t be the hidden state of the decoder at step t. The state controller first updates M_i.v_{t-1} to M_i.v_t based on s_t with a state update operation. It then obtains the difference between M_i.g and M_i.v_t from the state memory panel via a difference reading operation, and feeds the difference to the decoder to predict the t-th word of the response.

State Update Operation. The operation includes SUB and ADD as two sub-operations. Intuitively, when the status of expression surpasses the goal, the state controller should execute the SUB operation (which stands for "subtract") to trim the status representation; when the status of expression is inadequate, the state controller should use the ADD operation to enhance the status representation. Technically, rather than comparing M_i.v_{t-1} with M_i.g and choosing an operation accordingly, we propose a soft way to update the state memory panel with SUB and ADD, since (1) it is difficult to identify over-expression or under-expression by comparing two distributed representations, and (2) the hard way would break the differentiability of the model. Specifically, we define g_t ∈ R^{d×l} as a gate that controls the use of SUB or ADD, where g_t(i) ∈ R^d is the i-th element of g_t.
Let Δ^{SUB}_t(i) ∈ R^d and Δ^{ADD}_t(i) ∈ R^d be the changes from the SUB operation and the ADD operation respectively; then M_i.v_{t-1} is updated as

$$\hat{V}_t(i) = M_i.v_{t-1} - g_t(i) \circ \Delta^{SUB}_t(i), \qquad
M_i.v_t = \hat{V}_t(i) + (1 - g_t(i)) \circ \Delta^{ADD}_t(i), \qquad (3)$$

where ∘ denotes element-wise multiplication, and g_t(i), Δ^{SUB}_t(i), and Δ^{ADD}_t(i) are defined as

$$g_t(i) = \sigma(W_g S_t(i) + b_g) \qquad (4)$$

and

$$\begin{bmatrix} \Delta^{SUB}_t(i) \\ \Delta^{ADD}_t(i) \end{bmatrix}
= \sigma\left( \begin{bmatrix} W^{SUB} \\ W^{ADD} \end{bmatrix} S_t(i)
+ \begin{bmatrix} b^{SUB} \\ b^{ADD} \end{bmatrix} \right) \qquad (5)$$

respectively, with W_g ∈ R^{d×d}, b_g ∈ R^d, W^{SUB}, W^{ADD} ∈ R^{d×3d}, and b^{SUB}, b^{ADD} ∈ R^d parameters, and S_t(i) = M_i.k ⊕ M_i.v_{t-1} ⊕ s_t, where ⊕ is a concatenation operator.

Difference Reading Operation. For each variable in the meta-word M, the operation represents the difference between the status of expression and the goal of expression as a vector, and then applies an attention mechanism to the vectors to indicate to the decoder the importance of each variable in generating the next word. Formally, suppose that d^t_i ∈ R^{2d} is the difference vector for m_i ∈ M at step t; then d^t_i is defined as

$$d^t_i = (M_i.g - M_i.v_t) \oplus (M_i.g \circ M_i.v_t). \qquad (6)$$

With (d^t_1, . . . , d^t_l) as a difference memory, the difference reading operation then takes s_t as a query vector and calculates attention over the memory as

$$o_t = \sum_{i=1}^{l} a^t_i \cdot (U d^t_i), \qquad
a^t_i = \mathrm{softmax}\big((s_t)^\top (U d^t_i)\big), \qquad (7)$$

where (a^t_1, . . . , a^t_l) are attention weights and U ∈ R^{d×d} is a parameter.

4.3 Response Decoding

In decoding, the hidden state s_t is calculated by GRU(s_{t-1}, [e(y_{t-1}) ⊕ C_t]), where e(y_{t-1}) ∈ R^d is the embedding of the word predicted at step t−1, and C_t is a context vector obtained from attention over the hidden states of the input message X given by the biGRU based encoder. Let H_X = (h_{X,1}, . . . , h_{X,T_x}) be the hidden states of X; then C_t is calculated via

$$C_t = \sum_{j=1}^{T_x} \alpha_{t,j} h_{X,j}, \qquad
\alpha_{t,j} = \frac{\exp(e_{t,j})}{\sum_{k=1}^{T_x} \exp(e_{t,k})}, \qquad
e_{t,j} = U_d^\top \tanh(W_s s_{t-1} + W_h h_{X,j} + b_d), \qquad (8)$$

where U_d, W_s, W_h, and b_d are parameters, and s_{t-1} is the hidden state of the decoder at step t−1. With the hidden state s_t and the distance vector o_t returned by the state controller, the probability distribution for predicting the t-th word of the response is given by

$$p(y_t) = \mathrm{softmax}(W_p[e(y_t) \oplus o_t \oplus s_t] + b_p), \qquad (9)$$

where y_t is the t-th word of the response, e(y_t) is its embedding, and W_p and b_p are parameters.

5 Learning Method

To perform online response generation with meta-words, we need to (1) estimate the parameters of GTMES2S by minimizing a loss function, and (2) learn a model to predict meta-words for online messages.

5.1 Loss for Model Learning

The first loss item is the negative log likelihood (NLL) of D, formulated as

$$\mathcal{L}_{NLL}(\Theta) = -\frac{1}{N} \sum_{i=1}^{N} \log P(Y_i \mid X_i, M_i), \qquad (10)$$

where Θ is the set of parameters of GTMES2S. By minimizing NLL alone, the supervision signals in D may not sufficiently flow to GTMN, as GTMN is nested within response decoding. Thus, besides NLL, we propose a state update loss that directly supervises the learning of GTMN with D. The idea is to minimize the distance between the ground truth status of meta-word expression and the status stored in the state memory panel. Suppose that y_{1:t} is the segment of the response Y generated up to step t. Then ∀m_i ∈ M, we consider two cases: (1) there exists an F_i(·) that maps y_{1:t} to the space of m_i.v; as an example, response length belongs to this case with F_i(y_{1:t}) = t. (2) It is hard to define an F_i(·) that can map y_{1:t} to the space of m_i.v; for instance, dialogue acts belong to this case, since it is often difficult to judge the dialogue act from part of a response.
For case (1), we define the state update loss as

$$\mathcal{L}^1_{SU}(m_i) = \sum_{t=1}^{T} \left\| M_i.v_t - \mathrm{Rep}(F_i(y_{1:t})) \right\|, \qquad (11)$$

where T is the length of Y and ∥·∥ refers to the L2 norm. For case (2), the loss is defined as

$$\mathcal{L}^2_{SU}(m_i) = \left\| M_i.v_T - \mathrm{Rep}(m_i.v) \right\|. \qquad (12)$$

The full state update loss L_SU(Θ) for D is then given by

$$\mathcal{L}_{SU}(\Theta) = \sum_{i=1}^{N} \sum_{j=1}^{l} \mathbb{I}[m_{i,j} \in C_1]\, \mathcal{L}^1_{SU}(m_{i,j}) + \mathbb{I}[m_{i,j} \in C_2]\, \mathcal{L}^2_{SU}(m_{i,j}), \qquad (13)$$

where C_1 and C_2 represent the sets of variables belonging to case (1) and case (2) respectively, and I(·) is an indicator function. The loss function for learning GTMES2S is finally defined by

$$\mathcal{L}(\Theta) = \mathcal{L}_{NLL}(\Theta) + \lambda \mathcal{L}_{SU}(\Theta), \qquad (14)$$

where λ acts as a trade-off between the two items.

5.2 Meta-word Prediction

We assume that values of meta-words are given beforehand. In training, the values can be extracted from the ground truth. In test, however, since only a message is available, we propose sampling values of a meta-word for the message from probability distributions estimated from {(X_i, M_i)}_{i=1}^{N} ⊂ D. The sampling approach not only provides meta-words to GTMNES2S, but also keeps meta-words diverse for similar messages. Formally, let h^p_X be the last hidden state of a message X processed by a biGRU. Then ∀m_i ∈ M, we assume that m_i.v obeys a multinomial distribution with probability vector p_i parameterized as softmax(W^{mul}_i h^p_X + b^{mul}_i) if m_i.t = c; otherwise, m_i.v obeys a normal distribution with μ_i and log(σ^2_i) parameterized as W^{μ}_i h^p_X + b^{μ}_i and W^{σ}_i h^p_X + b^{σ}_i respectively. In distribution estimation, we assume that the variables in a meta-word are independent, and jointly maximize the log likelihood of {(M_i | X_i)}_{i=1}^{N} and the entropy of the distributions as regularization.

6 Experiments

We test GTMNES2S on two large-scale datasets.

6.1 Datasets

We mine 10 million message-response pairs from Twitter FireHose, covering the 2-month period from June 2016 to July 2016, and sample 10 million pairs from the full Reddit data (https://redd.it/3bxlg7). As preprocessing, we remove duplicate pairs, pairs with a message or a response having more than 30 words, and messages that correspond to more than 20 responses, to prevent them from dominating learning. After that, there are 4,759,823 pairs left for Twitter and 4,246,789 pairs left for Reddit. On average, each message contains 10.78 words in the Twitter data and 12.96 words in the Reddit data. The average lengths of responses in the Twitter data and the Reddit data are 11.03 and 12.75 respectively. From the pairs after pre-processing, we randomly sample 10k pairs as a validation set and 10k pairs as a test set for each dataset, and make sure that there is no overlap between the two sets. After excluding pairs in the validation sets and the test sets, the remaining pairs are used for model training. The test sets are built for calculating automatic metrics. Besides, we randomly sample 1000 distinct messages from each of the two test sets and recruit human annotators to judge the quality of responses generated for these messages. For both the Twitter data and the Reddit data, the top 30,000 most frequent words in messages and responses in the training sets are kept as the message vocabulary and the response vocabulary. In the Twitter data, the message vocabulary and the response vocabulary cover 99.17% and 98.67% of the words appearing in messages and responses respectively. The two ratios are 99.52% and 98.8% respectively in the Reddit data. Other words are marked as "UNK".
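To make the meta-word prediction scheme of Section 5.2 concrete before turning to the specific attributes, the sketch below samples one meta-word from a message encoding, using a softmax head per categorical variable and a Gaussian head (mean and log-variance) per real-valued variable. The attribute keys anticipate the construction in Section 6.2; the dimensions, randomly initialized weights, and helper names are placeholders rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # encoder hidden size (illustrative)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical attribute space: categorical variables with their value sets
# (the paper uses the 42 DAMSL dialogue acts; abbreviated here) and real-valued ones.
CATEGORICAL = {
    "RL": list(range(1, 26)),
    "DA": ["statement", "yes-no question", "wh-question"],
    "MU": [False, True],
}
REAL = ["CR", "S"]

# Randomly initialized stand-ins for the learned prediction heads.
cat_heads = {k: (rng.normal(size=(len(v), d)), np.zeros(len(v)))
             for k, v in CATEGORICAL.items()}
mu_heads = {k: (rng.normal(size=d), 0.0) for k in REAL}
logvar_heads = {k: (rng.normal(size=d), 0.0) for k in REAL}

def sample_meta_word(h_x):
    """Sample one meta-word given h_x, the final biGRU state of the message."""
    meta = {}
    for key, values in CATEGORICAL.items():
        W, b = cat_heads[key]
        probs = softmax(W @ h_x + b)                       # multinomial parameters
        meta[key] = values[rng.choice(len(values), p=probs)]
    for key in REAL:
        W_mu, b_mu = mu_heads[key]
        W_lv, b_lv = logvar_heads[key]
        mu = W_mu @ h_x + b_mu
        std = np.exp(0.5 * (W_lv @ h_x + b_lv))
        meta[key] = float(rng.normal(mu, std))             # Gaussian head
    return meta

h_x = rng.normal(size=d)                # stand-in for a real message encoding
print(sample_meta_word(h_x))            # one sampled meta-word, e.g. {'RL': 12, ...}
```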
6.2 Meta-word Construction As a showcase of the framework of GTMNES2S, we consider the following variables as a metaword: (1) Response Length (RL): number of words and punctuation marks in a response. We restrict the range of the variable in {1, . . . , 25} (i.e., responses longer than 25 are normalized as 25), and treat it as a categorical variable. (2) Dialog Act (DA): we employ the 42 dialogue acts based on the DAMSL annotation scheme (Core and Allen, 1997). The dialogue act of a given response is obtained by the state-of-the-art dialogue act classifier in (Liu et al., 2017) learned from the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997). DA is a categorical variable. (3) Multiple Utterances (MU): if a response is made up of multiple utterances. We split a response as utterances according to “.”, “?” and “!”, and remove utterances that are less than 3 words. The variable is “true” if there are more than 1 utterance left, otherwise it is “false”. (4) Copy Ratio (CR): inspired by COPY-NET (Gu et al., 2016) which indicates that humans may repeat entity names or even long phrases in conversation, we incorporate a “copy mechanism” into our model by using copy ratio as a soft implementation of COPY-NET. We compute the ratio of unigrams shared by a message and its response (divided by the length of the response) with stop words and top 1000 most frequent words in training excluded. CR is a real-valued variable. (5) Specificity (S): 5422 following SC-Seq2Seq (Zhang et al., 2018b), we calculate normalized inverse word frequency as a specificity variable. The variable is real-valued. Among the five variables, RL, CR, and S correspond to the state update loss given by Equation (11), and others correspond to Equation (12). 6.3 Baselines We compare GTMNES2S with the following baseline models: (1) MMI-bidi: the sequenceto-sequence model with response re-ranking in (Li et al., 2015) learned by a maximum mutual information objective; (2) SC-Seq2Seq: the specificity controlled Seq2Seq model in (Zhang et al., 2018b); (3) kg-CVAE: the knowledgeguided conditional variational autoencoders in (Zhao et al., 2017); and (4) CT: the conditional training method in (See et al., 2019) that feeds the embedding of pre-defined response attributes to the decoder of a sequence-to-sequence model. Among the baselines, CT exploits the same attributes as GTMNES2S, SC-Seq2Seq utilizes specificity, and kg-CVAE leverages dialogue acts. All models are implemented with the recommended parameter configurations in the existing papers, where for kg-CVAE, we use the code shared at https://github.com/ snakeztc/NeuralDialog-CVAE, and for other models without officially published code, we code with TensorFlow. Besides the baselines, we also compare GTMNES2E learned from the full loss given by Equation (14) with a variant learned only from the NLL loss, in order to check the effect of the proposed state update loss. We denote the variant as GTMNES2S w/o SU. 6.4 Evaluation Metrics We conduct both automatic evaluation and human evaluation. In terms of automatic ways, we evaluate models from four aspects: relevance, diversity, accuracy of one-to-many modeling, and accuracy of meta-word expression. For relevance, besides BLEU (Papineni et al., 2002), we follow (Serban et al., 2017) and employ Embedding Average (Average), Embedding Extrema (Extrema), Embedding Greedy (Greedy) as metrics. 
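Among the embedding-based relevance metrics just listed, Embedding Average is the simplest to sketch: average the word vectors of the generated and the reference response and take the cosine similarity of the two sentence vectors. The toy random embedding table below stands in for real pretrained vectors; Greedy and Extrema differ only in how the word vectors are matched or pooled.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50
EMBEDDINGS = {}          # stand-in table; the real metric uses pretrained word vectors

def embed(token):
    """Look up (or lazily create) a word vector for a token."""
    if token not in EMBEDDINGS:
        EMBEDDINGS[token] = rng.normal(size=DIM)
    return EMBEDDINGS[token]

def sentence_average(tokens):
    """Mean of the word vectors of a tokenized sentence."""
    return np.mean([embed(t) for t in tokens], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def embedding_average(generated_tokens, reference_tokens):
    """Embedding Average: cosine similarity of the two averaged sentence vectors."""
    return cosine(sentence_average(generated_tokens),
                  sentence_average(reference_tokens))

print(embedding_average("i like this city a lot".split(),
                        "i love new york".split()))
```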
To evaluate diversity, we follow (Li et al., 2015) and use Distinct-1 (Dist1) and Distinct-2 (Dist2) as metrics which are calculated as the ratios of distinct unigrams and bigrams in the generated responses. For accuracy of one-to-many modeling, we utilize A-bow precision (A-prec), A-bow recall (A-rec), E-bow precision (E-prec), and E-bow recall (Erec) proposed in (Zhao et al., 2017) as metrics. For accuracy of meta-word expression, we measure accuracy for categorical variables and square deviation for real-valued variables. Metrics of relevance, diversity, and accuracy of meta-word expression are calculated on the 10k test data based on top 1 responses from beam search. To measure the accuracy of meta-word expression for a generated response, we extract values of the metaword of the response with the methods described in Section 6.2, and compare these values with the oracle ones sampled from distributions. Metrics of accuracy of one-to-many modeling require a test message to have multiple reference responses. Thus, we filter the test sets by picking out messages that have at least 2 responses, and form two subsets with 166 messages for Twitter and 135 messages for Reddit respectively. On average, each message corresponds to 2.8 responses in the Twitter data and 2.92 responses in the Reddit data. For each message, 10 responses from a model are used for evaluation. In kg-CVAE, we follow (Zhao et al., 2017) and sample 10 times from the latent variable; in SC-Seq2Seq, we vary the specificity in {0.1, 0.2, . . . , 1}; and in both CT and GTMNES2S, we sample 10 times from the distributions. Top 1 response from beam search under each sampling or specificity setting are collected as the set for evaluation. In terms of human evaluation, we recruit 3 native speakers to label top 1 responses of beam search from different models. Responses from all models for all the 1000 test messages in both data are pooled, randomly shuffled, and presented to each of the annotators. The annotators judge the quality of the responses according to the following criteria: +2: the response is not only relevant and natural, but also informative and interesting; +1: the response can be used as a reply, but might not be informative enough (e.g.,“Yes, I see” etc.); 0: the response makes no sense, is irrelevant, or is grammatically broken. Each response receives 3 labels. Agreements among the annotators are measured by Fleiss’ kappa (Fleiss and Cohen, 1973). 6.5 Implementation Details In test, we fix the specificity variable as 0.5 in SCSeq2Seq, since in (Zhang et al., 2018a), the authors conclude that the model achieves the best overall performance under the setting. 
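Returning to the accuracy of meta-word expression described above, the sketch below re-extracts a few attribute values from generated responses (length, multiple utterances, and copy ratio, following the rules of Section 6.2) and compares them with the oracle values: exact-match accuracy for categorical attributes and mean squared deviation for real-valued ones. The dialogue-act classifier and the specificity computation are omitted, the copy ratio here is a simple type-level approximation, and all names are ours.

```python
import re

def response_length(tokens):
    """RL: number of tokens, clipped at 25 as in Section 6.2."""
    return min(len(tokens), 25)

def multiple_utterances(response):
    """MU: split on . ? ! and keep utterances of at least 3 words."""
    utterances = [u for u in re.split(r"[.?!]", response) if len(u.split()) >= 3]
    return len(utterances) > 1

def copy_ratio(message_tokens, response_tokens, ignore=frozenset()):
    """CR: shared-unigram ratio; `ignore` stands in for the stop-word and
    frequent-word exclusion used in the paper."""
    shared = (set(message_tokens) & set(response_tokens)) - ignore
    return len(shared) / max(len(response_tokens), 1)

def expression_scores(examples):
    """Each example holds the message, the generated response, and the oracle
    meta-word values that were sampled before generation."""
    rl_hits = mu_hits = 0
    cr_sq = 0.0
    n = len(examples)
    for ex in examples:
        msg, resp = ex["message"].split(), ex["response"].split()
        rl_hits += int(response_length(resp) == ex["oracle"]["RL"])
        mu_hits += int(multiple_utterances(ex["response"]) == ex["oracle"]["MU"])
        cr_sq += (copy_ratio(msg, resp) - ex["oracle"]["CR"]) ** 2
    return {"RL_acc": rl_hits / n, "MU_acc": mu_hits / n, "CR_sq_dev": cr_sq / n}

example = {"message": "last week i have a nice trip to new york !",
           "response": "is new york more expensive than california ?",
           "oracle": {"RL": 8, "MU": False, "CR": 0.25}}
print(expression_scores([example]))   # {'RL_acc': 1.0, 'MU_acc': 1.0, 'CR_sq_dev': 0.0}
```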
For kg5423 Dataset Models Relevance Diversity One-to-Many BLEU Average Greedy Extreme Dist1 Dist2 A-prec A-rec E-prec E-rec Twitter MMI-bidi 2.92 0.787 0.181 0.394 6.35 20.6 0.853 0.810 0.601 0.554 kg-CVAE 1.83 0.766 0.175 0.373 8.65 29.7 0.862 0.822 0.597 0.545 SC-Seq2Seq 2.57 0.776 0.182 0.387 6.87 22.5 0.857 0.815 0.594 0.551 CT 3.32 0.792 0.181 0.402 8.04 26.9 0.859 0.813 0.596 0.550 GTMNES2S w/o SU 3.25 0.793 0.183 0.405 7.59 28.4 0.861 0.819 0.598 0.554 GTMNES2S 3.39 0.810 0.182 0.413 8.41 30.5 0.886 0.839 0.610 0.560 Reddit MMI-bidi 1.82 0.752 0.171 0.369 6.12 20.3 0.821 0.775 0.587 0.542 kg-CVAE 1.89 0.745 0.171 0.357 8.47 28.7 0.827 0.781 0.583 0.531 SC-Seq2Seq 1.95 0.752 0.176 0.362 5.94 19.2 0.823 0.778 0.581 0.536 CT 2.43 0.751 0.172 0.383 8.62 33.4 0.827 0.783 0.587 0.540 GTMNES2S w/o SU 2.75 0.757 0.174 0.382 8.47 32.6 0.832 0.791 0.594 0.548 GTMNES2S 2.95 0.760 0.172 0.386 10.35 36.3 0.841 0.795 0.602 0.554 Table 2: Results on relevance, diversity, and accuracy of one-to-many modeling. Numbers in bold mean that improvement over the best baseline is statistically significant (t-test, p-value < 0.01). Dataset Metaword Type SC-Seq2Seq kg-CVAE CT GTMNES2S w/o SU GTMNES2S Twitter RL c 97% 95.6% 98.6% DA c 58.2% 60.9% 61.2% 62.6% MU c 98.8% 99.5% 99.4% CR r 0.176 0.178 0.164 S r 0.195 0.130 0.158 0.103 Reddit RL c 94.5% 95.1% 96.7% DA c 55.7% 59.9% 55.9% 61.2% MU c 99.2% 98.7% 99.4% CR r 0.247 0.253 0.236 S r 0.143 0.118 0.112 0.084 Table 3: Results on accuracy of meta-word expression. Numbers in bold mean that improvement over the best baseline is statistically significant (t-test, p-value < 0.01). CVAE, we follow (Zhao et al., 2017) and predict a dialogue act for a message with an MLP. GTMNES2S and CT leverage the same set of attributes. Thus, for fair comparison, we let them exploit the same sampled values in generation. In GTMNES2S, the size of hidden units of the encoder and the decoder, and the size of the vectors in memory cells (i.e., d) are 512. Word embedding is randomly initialized with a size of 512. We adopt the Adadelta algorithm (Zeiler, 2012) in optimization with a batch size 200. Gradients are clipped when their norms exceed 5. We stop training when the perplexity of a model on the validation data does not drop in two consecutive epochs. Beam sizes are 200 in MMI-bidi (i.e., the size used in (Li et al., 2015)) and 5 in other models. 6.6 Evaluation Results Table 2 and Table 3 report evaluation results on automatic metrics. On most of the metrics, GTMNES2S outperforms all baseline methods, and the improvements are significant in a statistical sense (t-test, p-value < 0.01). The results demonstrate that with meta-words, our model can represent the relationship between messages and responses in a more effective and more accurate way, and thus can generate more diverse responses without sacrifice on relevance. Despite leveraging the same attributes for response generation, GTMNES2S achieves better accuracy than CT on both one-to-many modeling and meta-word expression, indicating the advantages of the dynamic control strategy over the static control strategy, as we have analyzed at the beginning of Section 4.2. Without the state update loss, there is significant performance drop for GTMNES2S. The results verified the effect of the proposed loss in learning. Table 4 summarizes human evaluation results. 
Compared with the baseline methods and the variant, the full GTMNES2S model can generate much more excellent responses (labeled as “2”) and much fewer inferior responses (labeled as “0”). Kappa values of all models exceed 0.6, indicating substantial agreement over all annotators. The results further demonstrate the value of the proposed model for real human-machine conversation. kg-CVAE gives more informative responses, and also more bad responses than MMIbidi and SC-Seq2Seq. Together with the contradiction on diversity and relevance in Table 2, the results indicate that latent variable is a doublebladed sword: the randomness may bring interesting content to responses and may also make responses out of control. On the other hand, there 5424 Dataset Models 2 1 0 Avg kappa Twitter MMI-bidi 16.6% 51.7% 31.7% 0.85 0.65 kg-CVAE 23.1% 40.9% 36% 0.87 0.78 SC-Seq2Seq 21.2% 48.5% 30.3% 0.91 0.61 CT 27.6% 38.4% 34% 0.94 0.71 GTMNES2S w/o SU 27% 39.1% 33.9% 0.93 0.64 GTMNES2S 33.2% 37.7% 29.1% 1.04 0.71 Reddit MMI-bidi 4.4% 58.1% 37.5% 0.67 0.79 kg-CVAE 13.7% 44.6% 41.7% 0.72 0.68 SC-Seq2Seq 9.9% 51.2% 38.9% 0.71 0.78 CT 16.5% 48.2% 35.3% 0.81 0.73 GTMNES2S w/o SU 15.7% 47.3% 37% 0.79 0.66 GTMNES2S 19.2% 47.5% 33.3% 0.86 0.76 Table 4: Results on the human evaluation. Ratios are calculated by combining labels from the three judges. Dataset Multiple Dialog Length Copy Specificity PPL utterances Act Ratio Twitter × × × × × 70.19 ✓ × × × × 67.23 ✓ ✓ × × × 62.13 ✓ ✓ ✓ × × 50.36 ✓ ✓ ✓ ✓ × 42.05 ✓ ✓ ✓ ✓ ✓ 38.57 Reddit × × × × × 72.43 ✓ × × × × 65.17 ✓ ✓ × × × 61.92 ✓ ✓ ✓ × × 49.67 ✓ ✓ ✓ ✓ × 41.78 ✓ ✓ ✓ ✓ ✓ 37.96 Table 5: Contribution of different attributes. are no random variables in our model, and thus, it can enjoy a well-trained language model. 6.7 Discussions In this section, we examine effect of different attributes by adding them one by one to the generation model. Besides, we also illustrate how GTMNES2S tracks attribute expression in response generation with test examples. Contribution of attributes. Table 5 shows perplexity (PPL) of GTMNES2S with different sets of attributes on the validation data. We can see that the more attributes are involved in learning, the lower PPL we can get. By leveraging all the 5 attributes, we can reduce almost 50% PPL from the vanilla encoder-decoder model (i.e., the one without any attributes). The results not only indicate the contribution of different attributes to model fitting, but also inspire us the potential of the proposed framework, since it allows further improvement with more well designed attributes involved. Case Study. Figure 2 illustrates how our model controls attributes of responses with the goal tracking mechanism, where distance between the value of a memory cell (i.e., Mi.vt) during generMessage: mm so should i just pull the ring out than ? kg-CVAE: where is the ring ? MMI-bidi: you don’t want to that SC-Seq2Seq: you should not do such things MU=False, DA=Statement-non-opinion, RL=8, CR=0.24, S=0.5 GTMNES2S: GTMNES2S w/o SU: i ‘ll just pull the ring out creepier i ’ll pull the ring on the ring Message: i will not give up until you take an actual guess kg-CVAE: open your mouth MMI-bidi: what you 're talking about ? ? ? SC-Seq2Seq: i 'm not sure about that . MU=True, DA=Wh-Question, RL=12, CR=0.08, S=0.4 GTMNES2S: GTMNES2S w/o SU: why are you so mean to me ? i ‘m pretty special what do you mean ? you ‘re not a normal person . Figure 2: Examples of response generation from the Twitter test data. 
Up: the heat map is defined by ∥Mi.vt−Mi.g∥normalized to [0, 1], where Mi refers to CR. Below: Mi in the heat map refers to MU. ation and the goal of the memory cell (i.e., Mi.g) is visualized via heat maps. In the first example, the full model gradually reduces the distance between the value and the goal of copy ratio expression with the generation process moving on. As a result, it just copies “pull the ring out” from the message, which makes the response informative and coherent. On the other hand, without the state update loss, GTMNES2S w/o SU makes a mistake by copying “ring” twice, and the distance between the value and the goal is out of control. In the second example, we visualize the expression of MU, a categorical attribute. Compared with realvalued attributes, categorical attributes are easier to express. Therefore, both the full model and GTMNES2S w/o SU successfully generate a response with multiple utterances, although the distance between the value and the goal of MU expression in GTMNES2S w/o SU is still in a mess. 7 Conclusions We present a goal-tracking memory enhanced sequence-to-sequence model for open domain response generation with meta-words which explicitly define characteristics of responses. Evaluation results on two datasets indicate that our model significantly outperforms several state-of-the-art generative architectures in terms of both response quality and accuracy of meta-word expression. 5425 References Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Mark G Core and James Allen. 1997. Coding dialogs with the damsl annotation scheme. In AAAI fall symposium on communicative action in humans and machines, volume 56. Boston, MA. Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613– 619. John J Godfrey and Edward Holliman. 1997. Switchboard-1 release 2. Linguistic Data Consortium, Philadelphia, 926:927. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning, pages 1587–1596. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 994– 1003. Yang Liu, Kun Han, Zhao Tan, and Yun Lei. 2017. Using context information for dialog act classification in dnn framework. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2170–2178. Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content preserving text generation with attribute controls. In Advances in Neural Information Processing Systems, pages 5108–5118. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. 
Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1792–1801. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. End-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776–3784. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL, pages 1577–1586. Heung-Yeung Shum, Xiaodong He, and Di Li. 2018. From eliza to xiaoice: Challenges and opportunities with social chatbots. Frontiers of Information Technology & Electronic Engineering, 19(1):10–26. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 3104–3112, Cambridge, MA, USA. MIT Press. Chongyang Tao, Shen Gao, Mingyue Shang, Wei Wu, Dongyan Zhao, and Rui Yan. 2018. Get the point of my utterance! learning towards effective responses with multi-head attention mechanism. In IJCAI, pages 4418–4424. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Nyberg. 2017. Steering output style and topic in neural response generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2140–2150. Yu Wu, Wei Wu, Dejian Yang, Can Xu, Zhoujun Li, and Ming Zhou. 2018. Neural response generation with dynamic vocabularies. In AAAI, pages 5594– 5601. 5426 Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, pages 3351– 3357. Chen Xing, Wei Wu, Yu Wu, Ming Zhou, Yalou Huang, and Wei-Ying Ma. 2018. Hierarchical recurrent attention network for response generation. In AAAI, pages 5610–5617. Stephanie Young, Milica Gasic, Blaise Thomson, and John D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018a. Learning to control the specificity in neural response generation. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1108–1117. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Systems, pages 1815–1825. Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. arXiv preprint arXiv:1804.08069. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In AAAI, pages 730– 738.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5427–5436 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5427 Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading Lianhui Qin†, Michel Galley‡, Chris Brockett‡, Xiaodong Liu‡, Xiang Gao‡, Bill Dolan‡, Yejin Choi† and Jianfeng Gao‡ † University of Washington, Seattle, WA, USA ‡ Microsoft Research, Redmond, WA, USA {lianhuiq,yejin}@cs.washington.edu {mgalley,Chris.Brockett,xiaodl,xiag,billdol,jfgao}@microsoft.com Abstract Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous. We present a new end-to-end approach to contentful neural conversation that jointly models response generation and on-demand machine reading. The key idea is to provide the conversation model with relevant long-form text on the fly as a source of external knowledge. The model performs QA-style reading comprehension on this text in response to each conversational turn, thereby allowing for more focused integration of external knowledge than has been possible in prior approaches. To support further research on knowledge-grounded conversation, we introduce a new large-scale conversation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Both human evaluation and automated metrics show that our approach results in more contentful responses compared to a variety of previous methods, improving both the informativeness and diversity of generated output. 1 Introduction While end-to-end neural conversation models (Shang et al., 2015; Sordoni et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Li et al., 2016a; Gao et al., 2019a, etc.) are effective in learning how to be fluent, their responses are often vacuous and uninformative. A primary challenge thus lies in modeling what to say to make the conversation contentful. Several recent approaches have attempted to address this difficulty by conditioning the language decoder on external information sources, such as knowledge bases (Agarwal et al., 2018; Liu et al., 2018a), review posts (Ghazvininejad et al., 2018; Moghe et al., 2018), and even images (Das et al., 2017; Mostafazadeh et al., 2017). …… …… She holds the Guinness world record for surviving the highest fall without a parachute: 10,160 metres (33,330 ft). A woman fell 30,000 feet from an airplane and survived. Well if she only fell a few hundred meters and survived then I 'm not impressed at all. The page states that a 2009 report found the plane only fell several hundred meters. Still pretty incredible , but quite a bit different that 10,000 meters. In 2005, Vulović‘s fall was recreated by the American television MythBusters. Four years later, […] two Praguebased journalists, claimed that Flight 367 had been mistaken for an enemy aircraft and shot down by the Czechoslovak Air Force at an altitude of 800 metres (2,600 ft). Figure 1: Users discussing a topic defined by a Wikipedia article. In this real-world example from our Reddit dataset, information needed to ground responses is distributed throughout the source document. 
However, empirical results suggest that conditioning the decoder on rich and complex contexts, while helpful, does not on its own provide sufficient inductive bias for these systems to learn how to achieve deep and accurate integration between external knowledge and response generation. We posit that this ongoing challenge demands a more effective mechanism to support on-demand knowledge integration. We draw inspiration from how humans converse about a topic, where people often search and acquire external information as needed to continue a meaningful and informative conversation. Figure 1 illustrates an example human discussion, where information scattered in separate paragraphs must be consolidated to com5428 pose grounded and appropriate responses. Thus, the challenge is to connect the dots across different pieces of information in much the same way that machine reading comprehension (MRC) systems tie together multiple text segments to provide a unified and factual answer (Seo et al., 2017, etc.). We introduce a new framework of end-toend conversation models that jointly learn response generation together with on-demand machine reading. We formulate the reading comprehension task as document-grounded response generation: given a long document that supplements the conversation topic, along with the conversation history, we aim to produce a response that is both conversationally appropriate and informed by the content of the document. The key idea is to project conventional QA-based reading comprehension onto conversation response generation by equating the conversation prompt with the question, the conversation response with the answer, and external knowledge with the context. The MRC framing allows for integration of long external documents that present notably richer and more complex information than relatively small collections of short, independent review posts such as those that have been used in prior work (Ghazvininejad et al., 2018; Moghe et al., 2018). We also introduce a large dataset to facilitate research on knowledge-grounded conversation (2.8M turns, 7.4M sentences of grounding) that is at least one order of magnitude larger than existing datasets (Dinan et al., 2019; Moghe et al., 2018). This dataset consists of real-world conversations extracted from Reddit, linked to web documents discussed in the conversations. Empirical results on our new dataset demonstrate that our full model improves over previous grounded response generation systems and various ungrounded baselines, suggesting that deep knowledge integration is an important research direction.1 2 Task We propose to use factoid- and entity-rich web documents, e.g., news stories and Wikipedia pages, as external knowledge sources for an openended conversational system to ground in. Formally, we are given a conversation history 1Code for reproducing our models and data is made publicly available at https://github.com/qkaren/ converse_reading_cmr. of turns X = (x1, . . . , xM) and a web document D = (s1, . . . , sN) as the knowledge source, where si is the ith sentence in the document. With the pair (X, D), the system needs to generate a natural language response y that is both conversationally appropriate and reflective of the contents of the web document. 3 Approach Our approach integrates conversation generation with on-demand MRC. 
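Before detailing the model, the task interface defined in Section 2 can be made concrete with a small data structure. The class, field, and function names below are hypothetical illustrations, not part of the released code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroundedConversationInstance:
    """One example for document-grounded response generation."""
    history: List[str]    # X = (x_1, ..., x_M), conversation turns, oldest first
    document: List[str]   # D = (s_1, ..., s_N), sentences of the grounding web page
    response: str         # y, the gold (or generated) free-form response

def respond(model, history: List[str], document: List[str]) -> str:
    """The system maps a pair (X, D) to a natural language response y."""
    return model.generate(history=history, document=document)  # placeholder API
```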
Specifically, we use an MRC model to effectively encode the conversation history by treating it as a question in a typical QA task (e.g., SQuAD (Rajpurkar et al., 2016)), and encode the web document as the context. We then replace the output component of the MRC model (which is usually an answer classification module) with an attentional sequence generator that generates a free-form response. We refer to our approach as CMR (Conversation with on-demand Machine Reading). In general, any off-the-shelf MRC model could be applied here for knowledge comprehension. We use Stochastic Answer Networks (SAN)2 (Liu et al., 2018b), a performant machine reading model that until very recently held state-of-the-art performance on the SQuAD benchmark. We also employ a simple but effective data weighting scheme to further encourage response grounding. 3.1 Document and Conversation Reading We adapt the SAN model to encode both the input document and conversation history and forward the digested information to a response generator. Figure 2 depicts the overall MRC architecture. Different blocks capture different concepts of representations in both the input conversation history and web document. The leftmost blocks represent the lexicon encoding that extracts information from X and D at the token level. Each token is first transformed into its corresponding word embedding vector, and then fed into a positionwise feed-forward network (FFN) (Vaswani et al., 2017) to obtain the final token-level representation. Separate FFNs are used for the conversation history and the web document. The next block is for contextual encoding. The aforementioned token vectors are concatenated with pre-trained 600-dimensional CoVe vectors (McCann et al., 2017), and then fed to a BiL2https://github.com/kevinduh/san_mrc 5429 Embedding FFN Bi-LSTM +CoVe Embedding FFN Bi-LSTM +CoVe Bi-LSTM Self-Attn ... fell several meters <EOS> ... Cross-Attn Document: […] claimed that Flight 367 had been mistaken for an enemy aircraft and shot down by the at an altitude of 800 metres (2,600 ft). Conversation History: A woman fell 30,000 feet [...] Generator Lexicon Encoding Contextual Encoding Memory ... Output: <BOS> ... Emb FFN BiLSTM +CoVe Emb FFN BiLSTM +CoVe BiLSTM SelfAttn ... CEO of Apple <EOS> Cross-Attn 1. Lexicon Encoding 2. Contextual Encoding 3. Memory <BOS> ... Lorem ipsum dolor sit amet, con So he’s the CEO of Apple. Steve Jobs was a mediocre programmer and one of the greatest designers […]. <title> Steve Jobs </title> <p> Steven Paul Jobs was an American entrepreneur, businessman, inventor, and industrial designer. He was the chairman, chief executive officer (CEO), and co-founder of Apple Inc.; [...] Generator Conversation history Document Model Output Figure 2: Model Architecture for Response Generation with on-demand Machine Reading: The first blocks of the MRC-based encoder serve as a lexicon encoding that maps words to their embeddings and transforms with position-wise FFN, independently for the conversation history and the document. The next block is for contextual encoding, where BiLSTMs are applied to the lexicon embeddings to model the context for both conversation history and document. The last block builds the final encoder memory, by sequentially applying cross-attention in order to integrate the two information sources, conversation history and document, self-attention for salient information retrieval, and a BiLSTM for final information rearrangement. 
The response generator then attends to the memory and generates a free-form response. STM that is shared for both conversation history and web document. The step-wise outputs of the BiLSTM carry the information of the tokens as well as their left and right context. The last block builds the memory that summarizes the salient information from both X and D. The block first applies cross-attention to integrate information from the conversation history X into the document representation. Each contextual vector of the document D is used to compute attention (similarity) distribution over the contextual vectors of X, which is concatenated with the weighted average vector of X by the resulting distribution. Second, a self-attention layer is applied to further ingest and capture the most salient information. The output memory, M ∈Rd×n, is obtained by applying another BiLSTM layer for final information rearrangement. Note that d is the hidden size of the memory and n is the length of the document. 3.2 Response Generation Having read and processed both the conversation history and the extra knowledge in the document, the model then produces a free-form response y = (y1, . . . , yT ) instead of generating a span or performing answer classification as in MRC tasks. We use an attentional recurrent neural network decoder (Luong et al., 2015) to generate response tokens while attending to the memory. At the beginning, the initial hidden state h0 is the weighted sum of the representation of the history X. For each decoding step t with a hidden state ht, we generate a token yt based on the distribution: p(yt) = softmax((W1ht + b)/τ), (1) where τ > 0 is the softmax temperature. The hidden state ht is defined as follows: ht = W2[zt ++fattention(zt, M)]. (2) Here, [·++·] indicates a concatenation of two vectors; fattention is a dot-product attention (Vaswani et al., 2017); and zt is a state generated by GRU(et−1, ht−1) with et−1 being the embedding of the word yt−1 generated at the previous (t −1) step. In practice, we use top-k sample decoding to draw yt from the above distribution p(yt). Section 5 provides more details about the experimental configuration. 3.3 Data Weighting Scheme We further propose a simple data weighting scheme to encourage the generation of grounded responses. The idea is to bias the model training to fit better to those training instances where the ground-truth response is more closely relevant to the document. More specifically, given a training instance (X, D, y), we measure the closeness score c ∈R between the document D and the gold response y (e.g., with the NIST (Doddington, 2002) or BLEU (Papineni et al., 2002) metrics). In each training data batch, we normalize the closeness scores of all the instances to have a sum of 1, and weight each of the instances with its corresponding normalized score when evaluating the 5430 Train Valid Test # dialogues 28.4k 1.2k 3.1k # utterances 2.36M 0.12M 0.34M # documents 28.4k 1.2k 3.1k # document sentences 15.18M 0.58M 1.68M Average length (# words): utterances 18.74 18.84 18.48 document sentences 13.72 14.17 14.15 Table 1: Our grounded conversational dataset. training loss. This training regime promotes instances with grounded responses and thus encourages the model to better encode and utilize the information in the document. 4 Dataset To create a grounded conversational dataset, we extract conversation threads from Reddit, a popular and large-scale online platform for news and discussion. 
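Before describing the dataset, the decoding step defined by Eqs. (1) and (2) can be made concrete. The sketch below is a minimal PyTorch rendering under our own assumptions (batch-first tensors, and a memory dimension equal to the decoder hidden size so that dot-product attention is well defined); it is illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStep(nn.Module):
    """One step of the attentional GRU decoder of Eqs. (1)-(2)."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, temperature=1.0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRUCell(emb_dim, hidden_dim)                    # z_t = GRU(e_{t-1}, h_{t-1})
        self.w2 = nn.Linear(2 * hidden_dim, hidden_dim, bias=False)   # h_t = W2 [z_t ; attn]
        self.w1 = nn.Linear(hidden_dim, vocab_size)                   # logits = W1 h_t + b
        self.temperature = temperature                                # tau in Eq. (1)

    def forward(self, y_prev, h_prev, memory):
        # y_prev: (batch,) previous token ids; h_prev: (batch, d); memory: (batch, n, d)
        z_t = self.gru(self.embed(y_prev), h_prev)
        scores = torch.bmm(memory, z_t.unsqueeze(-1)).squeeze(-1)     # dot-product attention
        context = torch.bmm(F.softmax(scores, dim=-1).unsqueeze(1), memory).squeeze(1)
        h_t = self.w2(torch.cat([z_t, context], dim=-1))              # Eq. (2)
        p_t = F.softmax(self.w1(h_t) / self.temperature, dim=-1)      # Eq. (1)
        return p_t, h_t
```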
In 2015 alone, Reddit hosted more than 73M conversations.3 On Reddit, user submissions are categorized by topics or “subreddits”, and a submission typically consists of a submission title associated with a URL pointing to a news or background article, which initiates a discussion about the contents of the article. This article provides framing for the conversation, and this can naturally be seen as a form of grounding. Another factor that makes Reddit conversations particularly well-suited for our conversationas-MRC setting is that a significant proportion of these URLs contain named anchors (i.e., ‘#’ in the URL) that point to the relevant passages in the document. This is conceptually quite similar to MRC data (Rajpurkar et al., 2016) where typically only short passages within a larger document are relevant in answering the question. We reduce spamming and offensive language by manually curating a list of 178 relatively “safe” subreddits and 226 web domains from which the web pages are extracted. To convert the web page of each conversation into a text document, we extracted the text of the page using an html-to-text converter,4 while retaining important tags such as <title>, <h1> to <h6>, and <p>. This means the entire text of the original web page is preserved, but these main tags retain some high-level struc3https://redditblog.com/2015/12/31/ reddit-in-2015/ 4https://www.crummy.com/software/ BeautifulSoup ture of the article. For web URLs with named anchors, we preserve that information by indicating the anchor text in the document with tags <anchor> and </anchor>. As the whole documents in the dataset tend to be lengthy, anchors offer important hints to the model about which parts of the documents should likely be focused on in order to produce a good response. We considered it sensible to keep them as they are also available to the human reader. After filtering short or redacted turns, or which quote earlier turns, we obtained 2.8M conversation instances respectively divided into train, validation, and test (Table 1). We used different date ranges for these different sets: years 2011-2016 for train, Jan-Mar 2017 for validation, and the rest of 2017 for test. For the test set, we select conversational turns for which 6 or more responses were available, in order to create a multi-reference test set. Given other filtering criteria such as turn length, this yields a 6-reference test set of size 2208. For each instance, we set aside one of the 6 human responses to assess human performance on this task, and the remaining 5 responses serve as ground truths for evaluating different systems.5 Table 1 provides statistics for our dataset, and Figure 1 presents an example from our dataset that also demonstrates the need to combine conversation history and background information from the document to produce an informative response. To enable reproducibility of our experiments, we crawled web pages using Common Crawl (http://commoncrawl.org), a service that crawls web pages and makes its historical crawls available to the public. We also release the code (URL redacted for anonymity) to recreate our dataset from both a popular Reddit dump6 and Common Crawl, and the latter service ensures that anyone reproducing our data extraction experiments would retrieve exactly the same web pages. We made a preliminary version of this dataset available for a shared task (Galley et al., 2019) at Dialog System Technology Challenges (DSTC) (Yoshino et al., 2019). 
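The page-to-document conversion described above can be approximated with the footnoted BeautifulSoup library. Since the exact extraction code is not given here, the helper below is only an illustrative approximation, and named-anchor handling is omitted.

```python
from bs4 import BeautifulSoup

KEEP_TAGS = ["title", "h1", "h2", "h3", "h4", "h5", "h6", "p"]

def page_to_document(html: str) -> str:
    """Flatten a crawled web page into text while keeping the structural tags
    mentioned above (<title>, <h1>-<h6>, <p>)."""
    soup = BeautifulSoup(html, "html.parser")
    lines = []
    for tag in soup.find_all(KEEP_TAGS):
        text = " ".join(tag.get_text(" ", strip=True).split())
        if text:
            lines.append(f"<{tag.name}> {text} </{tag.name}>")
    return "\n".join(lines)
```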
Back-and-forth with participants helped us iteratively refine the dataset. The code to recreate this dataset is included.7 5While this is already large for a grounded dataset, we could have easily created a much bigger one given how abundant Reddit data is. We focused instead on filtering out spamming and offensive language, in order to strike a good balance between data quality and size. 6http://files.pushshift.io/reddit/ 7We do not report on shared task systems here, as these 5431 5 Experiments 5.1 Systems We evaluate our systems and several competitive baselines: SEQ2SEQ (Sutskever et al., 2014) We use a standard LSTM SEQ2SEQ model that only exploit the conversation history for response generation, without any grounding. This is a competitive baseline initialized using pretrained embeddings. MEMNET: We use a Memory Network designed for grounded response generation (Ghazvininejad et al., 2018). An end-to-end memory network (Sukhbaatar et al., 2015) encodes conversation history and sentences in the web documents. Responses are generated with a sequence decoder. CMR-F : To directly measure the effect of incorporating web documents, we compare to a baseline which omits the document reading component of the full model (Figure 2). As with the SEQ2SEQ approach, the resulting model generates responses solely based on conversation history. CMR: To measure the effect of our data weighting scheme, we compare to a system that has identical architecture to the full model, but is trained without associating weights to training instances. CMR+W: As described in section 3, the full model reads and comprehends both the conversation history and document using an MRC component, and sequentially generates the response. The model is trained with the data weighting scheme to encourage grounded responses. Human: To get a better sense of the systems’ performance relative to an upper bound, we also evaluate human-written responses using different metrics. As described in Section 4, for each test instance, we set aside one of the 6 human references for evaluation, so the ‘human’ is evaluated against the other 5 references for automatic evaluation. To make these results comparable, all the systems are also automatically evaluated against the same 5 references. systems do not represent our work and some of these systems have no corresponding publications. Along with the data described here, we provided a standard SEQ2SEQ baseline to the shared task, which we improved for the purpose of this paper (improved BLEU, NIST and METEOR). Our new SEQ2SEQ baseline is described in Section 5. 6 Experiment Details For all the systems, we set word embedding dimension to 300 and used the pretrained GloVe8 for initialization. We set hidden dimensions to 512 and dropout rate to 0.4. GRU cells are used for SEQ2SEQ and MEMNET (we also tested LSTM cells and obtained similar results). We used the Adam optimizer for model training, with an initial learning rate of 0.0005. Batch size was set to 32. During training, all responses were truncated to have a maximum length of 30, and maximum query length and document length were set to 30, 500, respectively. we used regular teacher-forcing decoding during training. For inference, we found that top-k random sample decoding (Fan et al., 2018) provides the best results for all the systems. That is, at each decoding step, a token was drawn from the k most likely candidates according to the distribution over the vocabulary. 
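A minimal sketch of this per-step top-k sampling (the helper name is ours, not from the released code):

```python
import torch

def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
    """Draw one token id per batch element from the k most likely candidates,
    renormalizing their probabilities before sampling.
    probs: (batch, vocab) distribution produced by the decoder's softmax."""
    top_probs, top_ids = torch.topk(probs, k, dim=-1)            # k best candidates
    top_probs = top_probs / top_probs.sum(dim=-1, keepdim=True)
    choice = torch.multinomial(top_probs, num_samples=1)         # sample within the top k
    return top_ids.gather(-1, choice).squeeze(-1)                # map back to vocabulary ids
```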
Similar to recent work (Fan et al., 2018; Edunov et al., 2018), we set k = 20 (other common k values like 10 gave similar results). We selected key hyperparameter configurations on the validation set. 6.1 Evaluation Setup Table 2 shows automatic metrics for quantitative evaluation over three qualities of generated texts. We measure the overall relevance of the generated responses given the conversational history by using standard Machine Translation (MT) metrics, comparing generated outputs to ground-truth responses. These metrics include BLEU-4 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007). and NIST (Doddington, 2002). The latter metric is a variant of BLEU that weights n-gram matches by their information gain by effectively penalizing uninformative n-grams (such as “I don’t know”), which makes it a relevant metric for evaluating systems aiming diverse and informative responses. MT metrics may not be particularly adequate for our task (Liu et al., 2016), given its focus on the informativeness of responses, and for that reason we also use two other types of metrics to measure the level of grounding and diversity. As a diversity metric, we count all n-grams in the system output for the test set, and measure: (1) Entropy-n as the entropy of the n-gram count distribution, a metric proposed in (Zhang et al., 2018b); (2) Distinct-n as the ratio between the 8https://nlp.stanford.edu/projects/ glove/ 5432 Appropriateness Grounding Diversity NIST BLEU METEOR Precision Recall F1 Entropy-4 Distinct-1 Distinct-2 Len Human 2.650 3.13% 8.31% 2.89% 0.45% 0.78% 10.445 0.167 0.670 18.757 SEQ2SEQ 2.223 1.09% 7.34% 1.20% 0.05% 0.10% 9.745 0.023 0.174 15.942 MEMNET 2.185 1.10% 7.31% 1.25% 0.06% 0.12% 9.821 0.035 0.226 15.524 CMR-F 2.260 1.20% 7.37% 1.68% 0.08% 0.15% 9.778 0.035 0.219 15.471 CMR 2.213 1.43% 7.33% 2.44% 0.13% 0.25% 9.818 0.046 0.258 15.048 CMR+W 2.238 1.38% 7.46% 3.39% 0.20% 0.38% 9.887 0.052 0.283 15.249 Table 2: Automatic Evaluation results (higher is better for all metrics). Our best models (CMR+W and CMR) considerably increase the quantitative measures of Grounding, and also slightly improve Diversity. Automatic measures of Quality (e.g., BLEU-4) give mixed results, but this is reflective of the fact that we did not aim to improve response relevance with respect to the context, but instead its level of grounding. The human evaluation results in Table 3 indeed suggest that our best system (CMR+W) is better. number of n-gram types and the total number of n-grams, a metric introduced in (Li et al., 2016a). For the grounding metrics, we first compute ‘#match,’ the number of non-stopword tokens in the response that are present in the document but not present in the context of the conversation. Excluding words from the conversation history means that, in order to produce a word of the document, the response generation system is very likely to be effectively influenced by that document. We then compute both precision as ‘#match’ divided by the total number of non-stop tokens in the response, and recall as ‘#match’ divided by the total number of non-stop tokens in the document. We also compute the respective F1 score to combine both. Looking only at exact unigram matches between the document and response is a major simplifying assumption, but the combination of the three metrics offers a plausible proxy for how greatly the response is grounded in the document. 
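Concretely, the grounding metrics reduce to a few set operations over token lists. The sketch below assumes tokenization, lower-casing, and a stopword list are given, and counts '#match' over response token occurrences.

```python
def grounding_scores(response, document, context, stopwords):
    """Grounding precision/recall/F1 as described above.
    response, document, context: lists of tokens; stopwords: a set of tokens."""
    resp = [t for t in response if t not in stopwords]
    doc = [t for t in document if t not in stopwords]
    ctx = set(context) - set(stopwords)
    doc_vocab = set(doc)
    # '#match': response tokens found in the document but not in the conversation context
    match = sum(1 for t in resp if t in doc_vocab and t not in ctx)
    precision = match / len(resp) if resp else 0.0
    recall = match / len(doc) if doc else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```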
It seems further reasonable to assume that these can serve as a surrogate for less quantifiable forms of grounding such as paraphrase – e.g., US −→American – when the statistics are aggregated on a large test dataset. 6.2 Automatic Evaluation Table 2 shows automatic evaluation results for the different systems. In terms of appropriateness, the different variants of our models outperform the SEQ2SEQ and MEMNET baselines, but differences are relatively small and, in case of one of the metrics (NIST), the best system does not use grounding. Our goal, we would note, is not to specifically improve response appropriateness, as many responses that completely ignore the document (e.g., I don’t know) might be perHuman judges preferred: Our best system Neutral Comparator CMR+W *44.17% 26.27% 29.56% SEQ2SEQ CMR+W *40.93% 25.80% 33.27% MEMNET CMR+W 37.67% 27.53% 34.80% CMR CMR+W 30.37% 16.27% *53.37% Human Table 3: Human Evaluation results, showing preferences (%) for our model (CMR+W) vs. baseline and other comparison systems. Distributions are skewed towards CMR+W. The 5-point Likert scale has been collapsed to a 3-point scale. *Differences in mean preferences are statistically significant (p ≤0.0001). fectly appropriate. Our systems fare much better in terms of Grounding and Diversity: our best system (CMR+W) achieves an F1 score that is more than three times (0.38% vs. 0.12%) higher than the most competitive non-MRC system (MEMNET). 6.3 Human Evaluation We sampled 1000 conversations from the test set. Filters were applied to remove conversations containing ethnic slurs or other offensive content that might confound judgments. Outputs from systems to be compared were presented pairwise to judges from a crowdsourcing service. Four judges were asked to compare each pair of outputs on Relevance (the extent to which the content was related to and appropriate to the conversation) and Informativeness (the extent to which the output was interesting and informative). Judges were asked to agree or disagree with a statement that one of the pair was better than the other on the above two parameters, using a 5-point Likert scale.9 Pairs 9The choices presented to the judges were Strongly Agree, Agree, Neutral, Disagree, and Strongly Disagree. 5433 of system outputs were randomly presented to the judges in random order in the context of short snippets of the background text. These results are presented in summary form in Table 3, which shows the overall preferences for the two systems expressed as a percentage of all judgments made. Overall inter-rater agreement measured by Fliess’ Kappa was 0.32 (“fair"). Nevertheless, the differences between the paired model outputs are statistically significant (computed using 10,000 bootstrap replications). 6.4 Qualitative Study Table 4 illustrates how our best model (CMR+W) tends to produce more contentful and informative responses compared to the other systems. In the first example, our system refers to a particular episode mentioned in the article, and also uses terminology that is more consistent with the article (e.g., series). In the second example, humorous song seems to positively influence the response, which is helpful as the input doesn’t mention singing at all. In the third example, the CMR+W model clearly grounds its response to the article as it states the fact (Steve Jobs: CEO of Apple) retrieved from the article. The outputs by the other two baseline models are instead not relevant in the context. 
Figure 3 displays the attention map of the generated response and (part of) the document from our full model. The model successfully attends to the key words (e.g., 36th, episode) of the document. Note that the attention map is unlike what is typical in machine translation, where target words tend to attend to different portions of the input text. In our task, where alignments are much less oneto-one compared to machine translation, it is common for the generator to retain focus on the key information in the external document to produce semantically relevant responses. 7 Related Work Dialogue: Traditional dialogue systems (see (Jurafsky and Martin, 2009) for an historical perspective) are typically grounded, enabling these systems to be reflective of the user’s environment. The lack of grounding has been a stumbling block for the earliest end-to-end dialogue systems, as various researchers have noted that their outputs tend to be bland (Li et al., 2016a; Gao et al., 2019b), inconsistent (Zhang et al., 2018a; Li et al., Figure 3: Attention weights between words of the documents and words of the response. Dark (blue) cells represent probabilities closer to 1. 2016b; Zhang et al., 2019), and lacking in factual content (Ghazvininejad et al., 2018; Agarwal et al., 2018). Recently there has been growing interest in exploring different forms of grounding, including images, knowledge bases, and plain texts (Das et al., 2017; Mostafazadeh et al., 2017; Agarwal et al., 2018; Yang et al., 2019). A recent survey is included in Gao et al. (2019a). Prior work, e.g, (Ghazvininejad et al., 2018; Zhang et al., 2018a; Huang et al., 2019), uses grounding in the form of independent snippets of text: Foursquare tips and background information about a given speaker. Our notion of grounding is different, as our inputs are much richer, encompassing the full text of a web page and its underlying structure. Our setting also differs significantly from relatively recent work (Dinan et al., 2019; Moghe et al., 2018) exploiting crowdsourced conversations with detailed grounding labels: we use Reddit because of its very large scale and better characterization of real-world conversations. We also require the system to learn grounding directly from conversation and document pairs, instead of relying on additional grounding labels. Moghe et al. (2018) explored directly using a span-prediction QA model for conversation. Our framework differs in that we combine MRC models with a sequence generator to produce free-form responses. Machine Reading Comprehension: MRC models such as SQuAD-like models, aim to extract answer spans (starting and ending indices) 5434 Doc <title> Investigations </title> <p> "Investigations" is the 36th episode of the American science fiction television series Star Trek: Voyager which aired on the UPN network. It is the 36th episode of the second season. </p> Set in the 24th century, the series follows the adventures of the Federation starship Voyager during its journey home to Earth, having been stranded tens of thousands of light-years away. [...] Input [...] King Abdullah II of Jordon once cameo’d as an unnamed ensign on Star Trek Voyager MEMNET I think it should be noted that he is like a good guy who just killed her and her son. SEQ2SEQ well , i didn’t know , but i was just saying that it wasn’t a joke. CMR+W I am not sure if it was a “perfect” episode, but I think I saw the episode where he was the voice of the Star Trek series. 
Doc <title> “Weird Al” Yankovic </title> <p> Alfred Matthew “Weird Al” Yankovic is an American singer, songwriter, parodist, record producer, satirist, actor, voice actor, music video director, film producer, and author. He is known for his humorous songs that make light of popular culture and often parody specific songs by contemporary musical acts, original songs that are style pastiches of the work of other acts, and polka medleys of several popular songs, featuring his favored instrument, the accordion.</p> (Prince and West mentioned several times in the rest of the article.) [...] Input That’s fair, I believe Weird Al always got permission from artists before he parodied them which is nice. MEMNET Well then you don’t need to make a sequel to the front page of the movie. SEQ2SEQ If I remember correctly. I had no idea who had any good chance in history class. CMR+W No, because his song was a parody of the artist formerly known as "Kanye West". Doc <title> Steve Jobs </title> <p> Steven Paul Jobs was an American entrepreneur, businessman, inventor, and industrial designer. He was the chairman, chief executive officer (CEO), and co-founder of Apple Inc.; chairman and majority shareholder of Pixar; a member of The Walt Disney Company’s board of directors following its acquisition of Pixar; and the founder, chairman, and CEO of next. [...] </p> Input Steve Jobs was a mediocre programmer and one of the greatest designers and marketers in human history. But I prefer Bill Gates if we want to talk about hero worship myself. [...] MEMNET What if you don’t want to see this movie? SEQ2SEQ No ... the whole point is that this is a good way to make a difference. CMR+W So he’s the CEO of Apple. Table 4: Sample output comparing our best system (CMR+W) against Memory Networks and a SEQ2SEQ baseline. The source documents were manually shortened to fit in the table, without significantly affecting meaning. from a given document for a given question (Seo et al., 2017; Liu et al., 2018b; Yu et al., 2018). These models differ in how they fuse information between questions and documents. We chose SAN (Liu et al., 2018b) because of its representative architecture and competitive performance on existing MRC tasks. We note that other off-theshelf MRC models, such as BERT (Devlin et al., 2018), can also be plugged in. We leave the study of different MRC architectures for future work. Questions are treated as entirely independent in these “single-turn” MRC models, so recent work (e.g., CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018)) focuses on multi-turn MRC, modeling sequences of questions and answers in a conversation. While multi-turn MRC aims to answer complex questions, that body of work is restricted to factual questions, whereas our work—like much of the prior work in end-to-end dialogue—models free-form dialogue, which also encompasses chitchat and non-factual responses. 8 Conclusions We have demonstrated that the machine reading comprehension approach offers a promising step to generating, on the fly, contentful conversation exchanges that are grounded in extended text corpora. The functional combination of MRC and neural attention mechanisms offers visible gains over several strong baselines. We have also formally introduced a large dataset that opens up interesting challenges for future research. The CMR (Conversation with on-demand machine reading) model presented here will help connect the many dots across multiple data sources. 
One obvious future line of investigation will be to explore the effect of other off-the-shelf machine reading models such as BERT (Devlin et al., 2018) within the CMR framework. Acknowledgements We are grateful to the anonymous reviewers, as well as to Vighnesh Shiv, Yizhe Zhang, Chris Quirk, Shrimai Prabhumoye, and Ziyu Yao for helpful comments and suggestions on this work. This research was supported in part by NSF (IIS1524371), DARPA CwC through ARO (W911NF15-1-0543), and Samsung AI Research. 5435 References Shubham Agarwal, Ondrej Dusek, Ioannis Konstas, and Verena Rieser. 2018. A knowledge-grounded multimodal search-based conversational agent. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 59–66, Brussels, Belgium. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In CVPR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proc. of HLT. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proc. of EMNLP. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proc. of ACL. Michel Galley, Chris Brockett, Xiang Gao, Jianfeng Gao, and Bill Dolan. 2019. Grounded response generation task at DSTC7. In AAAI Dialog System Technology Challenges Workshop. Jianfeng Gao, Michel Galley, and Lihong Li. 2019a. Neural approaches to conversational ai. Foundations and Trends in Information Retrieval, 13(23):127–298. Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019b. Jointly optimizing diversity and relevance in neural response generation. In NAACL-HLT 2019. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proc. of AAAI. Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2019. Challenges in building intelligent open-domain dialog systems. arXiv preprint arXiv:1905.05709. Dan Jurafsky and James H Martin. 2009. Speech & language processing. Prentice Hall. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proc. of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 228–231. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL-HLT. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proc. of ACL. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. 
How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proc. of EMNLP. Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018a. Knowledge diffusion for neural dialogue generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1489–1498. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018b. Stochastic Answer Networks for machine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1694–1704, Melbourne, Australia. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6297–6308. Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proc. of EMNLP. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Imagegrounded conversations: Multimodal context for natural question and response generation. In Proc. of IJCNLP. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. 5436 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association of Computational Linguistics (TACL). Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. AAAI. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proc. of ACL-IJCNLP. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proc. of NAACL-HLT. Sainbayar Sukhbaatar, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Proc. of NIPS. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to sequence learning with neural networks. In Proc. of NIPS, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In ICML Deep Learning Workshop. 
Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, W Bruce Croft, Xiaodong Liu, Yelong Shen, and Jingjing Liu. 2019. A hybrid retrieval-generation neural conversation model. arXiv preprint arXiv:1904.09068. Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, R. Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2019. Dialog system technology challenge 7. In In NeurIPS Conversational AI Workshop. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In ICLR. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In Proc. of NeurIPS. Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Consistent dialogue generation with self-supervised feature learning. arXiv preprint arXiv:1903.05759.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 567–578 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 567 Classification and Clustering of Arguments with Contextualized Word Embeddings Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universit¨at Darmstadt www.ukp.tu-darmstadt.de Abstract We experiment with two recent contextualized word embedding methods (ELMo and BERT) in the context of open-domain argument search. For the first time, we show how to leverage the power of contextualized word embeddings to classify and cluster topic-dependent arguments, achieving impressive results on both tasks and across multiple datasets. For argument classification, we improve the state-of-the-art for the UKP Sentential Argument Mining Corpus by 20.8 percentage points and for the IBM Debater - Evidence Sentences dataset by 7.4 percentage points. For the understudied task of argument clustering, we propose a pre-training step which improves by 7.8 percentage points over strong baselines on a novel dataset, and by 12.3 percentage points for the Argument Facet Similarity (AFS) Corpus.1 1 Introduction Argument mining methods have been applied to different tasks such as identifying reasoning structures (Stab and Gurevych, 2014), assessing the quality of arguments (Wachsmuth et al., 2017), or linking arguments from different documents (Cabrio and Villata, 2012). Broadly speaking, existing methods either approach argument mining from the discourse-level perspective (aiming to analyze local argumentation structures), or from an information-seeking perspective (aiming to detect arguments relevant to a predefined topic). While discourse-level approaches mostly focus on the analysis of single documents or document collections (Eger et al., 2017), information-seeking approaches need to be capable of dealing with heterogeneous sources and topics (Shnarch et al., 2018) and also face the problem of redundancy, as 1Code and models available: https://github.com/UKPLab/ acl2019-BERT-argument-classification-and-clustering arguments might be repeated across sources. As a result, this perspective naturally calls for a subsequent clustering step, which is able to identify and aggregate similar arguments for the same topic. In this work, we focus on the latter perspective, referring to it as open-domain argument search, and show how contextualized word embeddings can be leveraged to overcome some of the challenges involved in topic-dependent argument classification and clustering. Identifying arguments for unseen topics is a challenging task for machine learning systems. The lexical appearance for two topics, e.g. “net neutrality” and “school uniforms”, is vastly different. Hence, in order to perform well, systems must develop a deep semantic understanding of both the topic as well as the sources to search for arguments. Even more so, clustering similar arguments is a demanding task, as fine-grained semantic nuances may determine whether two arguments (talking about the same topic) are similar. Figure 1 gives an example of arguments on the topic “net neutrality”. Both arguments center around the aspect of “equal access for every Internet user” but are differently phrased. A1 The ultimate goal is fast, affordable, open Internet access for everyone, everywhere. 
A2 If this does not happen, we will create an Internet where only users able to pay for privileged access enjoy the network’s full capabilities. Figure 1: Similar pro arguments for the topic “net neutrality”. Contextualized word embeddings, especially ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) could offer a viable solution to this problem. In contrast to traditional word embeddings like word2vec (Mikolov et al., 2013) or 568 GloVe (Pennington et al., 2014), these methods compute the embeddings for a sentence on the fly by taking the context of a target word into account. This yields word representations that better match the specific sense of the word in a sentence. In cross-topic scenarios, with which we are dealing in open-domain argument search, contextualized representations need to be able to adapt to new, unseen textual topics. We thus analyze ELMo and BERT in a cross-topic scenario for the tasks of argument classification and clustering on four different datasets. For argument classification, we use the UKP Sentential Argument Mining Corpus by Stab et al. (2018b) and the IBM Debater R⃝: Evidence Sentences corpus by Shnarch et al. (2018). For argument clustering, we introduce a novel corpus on aspect-based argument clustering and evaluate the proposed methods on this corpus as well as on the Argument Facet Similarity Corpus (Misra et al., 2016). The contributions in this publications are: (1) We frame the problem of open-domain argument search as a combination of topic-dependent argument classification and clustering and discuss how contextualized word embeddings can help to improve these tasks across four different datasets. (2) We show that our suggested methods improve the state-of-the-art for argument classification when fine-tuning the models, thus significantly reducing the gap to human performance. (3) We introduce a novel corpus on aspect-based argument similarity and demonstrate how contextualized word embeddings help to improve clustering similar arguments in a supervised fashion with little training data. We present the four different datasets used in this work in Section 3, before we discuss our experiments and results on argument classification and clustering in Sections 4 and 5. We conclude our findings for open-domain argument search in Section 6. 2 Related Work In the following, we concentrate on the fundamental tasks involved in open-domain argument search. First, we discuss work that experiments with sentence-level argument classification. Second, we review work that provides us with the necessary tools to cluster extracted arguments by their similarity. Third, we take a deeper look into contextualized word embeddings. Argument Classification, as viewed in this work, aims to identify topic-related, sentencelevel arguments from (heterogeneous) documents. Levy et al. (2014) identify context-dependent claims (CDCs) by splitting the problem into smaller sub-problems. Rinott et al. (2015) extend this work with a pipeline of feature-based models that find and rank supporting evidence from Wikipedia for the CDCs. However, neither of these approaches leverage the potential of word embeddings in capturing semantic relations between words. Shnarch et al. (2018) aim to identify topicdependent evidence sentences by blending large automatically generated training sets with manually annotated data as initialization step. They use a BiLSTM with GloVe embeddings and integrate the topic via attention. For topic-dependent argument detection, Stab et al. 
(2018b) deploy a modified LSTM-cell that is able to directly integrate topic information. They show the importance of topic information by outperforming a BiLSTM baseline by around 4.5pp. Yet, their best model only shows mediocre recall for arguments, while showing an even lower precision when compared to their baseline. As argument classification is the first logical step in open-domain argument search, a low performance would eventually propagate further down to the clustering of similar arguments. Hence, in this work, we aim to tackle this problem by leveraging superior contextualized language models to improve on precision and recall of argumentative sentences. Argument Clustering aims to identify similar arguments. Previous research in this area mainly used feature-based approaches in combination with traditional word embeddings like word2vec or GloVe. Boltuˇzi´c and ˇSnajder (2015) applied hierarchical clustering on semantic similarities between users’ posts from a two-side online debate forum using word2vec. Wachsmuth et al. (2018) experimented with different word embeddings techniques for (counter)argument similarity. Misra et al. (2016) presented a new corpus on argument similarity on three topics. They trained a Support Vector Regression model using different hand-engineered features including custom trained word2vec. Trabelsi and Za¨ıane (2015) used an augmented LDA to automatically extract coherent words and phrases describing arguing expressions and apply constrained cluster569 ing to group similar viewpoints of topics. In contrast to previous work, we apply argument clustering on a dataset containing both relevant and non-relevant arguments for a large number of different topics which is closer to a more realistic setup. Contextualized word embeddings compute a representation for a target word based on the specific context the word is used within a sentence. In contrast, traditional word embedding methods, like word2vec or GloVe, words are always mapped to the same vector. Contextualized word embeddings tackle the issue that words can have different senses based on the context. Two approaches that became especially popular are ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018). ELMo (Embeddings from Language Models) representations are derived from a bidirectional language model, that is trained on a large corpus. Peters et al. combine a character-based CNN with two bidirectional LSTM layers. The ELMo representation is then derived from all three layers. BERT (Bidirectional Encoder Representations from Transformers) uses a deep transformer network (Vaswani et al., 2017) with 12 or 24 layers to derive word representations. Devlin et al. presented two new pre-training objectives: the “masked language model” and the “next sentence prediction” objectives. They demonstrate that the pre-trained BERT models can be fine-tuned for various tasks, including sentence classification and sentence-pair classification. ELMo and BERT were primarily evaluated on datasets where the test and training sets have comparable distributions. In cross-topic setups, however, the distributions for training and testing are vastly different. It is unclear, whether ELMo and BERT will be able to adapt to this additional challenge for cross-topic argument mining. 3 Datasets No dataset is available that allows evaluating open-domain argument search end-to-end. Hence, we analyze and evaluate the involved steps (argument classification and clustering) independently. 
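As background for the methods evaluated on these datasets, the snippet below illustrates how contextualized token representations can be obtained in practice. It uses the Hugging Face transformers interface to a pre-trained BERT model and is an illustrative sketch only, not the authors' released code; the example sentences are invented.

    import torch
    from transformers import BertTokenizer, BertModel

    # Illustration: the same surface form "net" receives different vectors
    # depending on its sentence context.
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()

    sentences = [
        "Net neutrality guarantees equal access to the network.",
        "The fisherman repaired his net before sailing.",
    ]

    with torch.no_grad():
        for sent in sentences:
            inputs = tokenizer(sent, return_tensors="pt")
            tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
            hidden = model(**inputs).last_hidden_state[0]   # one vector per word piece
            print(sent, hidden[tokens.index("net")][:5])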
3.1 Argument Classification To our knowledge, to date there are only two suitable corpora for the task of topic-dependent argument classification. UKP Corpus. The UKP Sentential Argument Mining Corpus by Stab et al. (2018b) (henceforth: UKP corpus) annotated 400 documents with 25,492 sentences on eight controversial topics with the labels: pro/con/no argument. IBM Corpus. The IBM Debater R⃝: Evidence Sentences by Shnarch et al. (2018) (henceforth: IBM corpus) contains 118 topics drawn from different debate portals. For each topic, Shnarch et al. (2018) extracted sentences from Wikidata that were in turn annotated by crowd-workers (10 for each topic-sentence pair) with one of the two labels: evidence or no evidence in regard to the topic. 3.2 Argument Clustering Topic-dependent argument clustering is an understudied problem with few resources available. Arguments on controversial topics usually address a limited set of aspects, for example, many arguments on “nuclear energy” address safety concerns. Argument pairs addressing the same aspect should be assigned a high similarity score, and arguments on different aspects a low score. To date, the only available resource of that kind we are aware of, is the Argument Facet Similarity (AFS) Corpus (Misra et al., 2016). AFS Corpus. The AFS corpus annotates similarities of arguments pairwise. Misra et al. (2016) aimed to create automatic summaries for controversial topics. As an intermediate step, they extracted 6,000 sentential argument pairs from curated online debating platforms for three topics and annotated them on a scale from 0 (“different topic”) to 5 (“completely equivalent”). A drawback of this corpus is that the arguments are curated, i.e., the dataset does not include noise or non-relevant arguments. Furthermore, the corpus covers only three different topics. UKP ASPECT Corpus. To remedy these shortcomings, we create a new corpus with annotations on similar and dissimilar sentence-level arguments (Stab et al., 2018b), referred to as the Argument Aspect Similarity (UKP ASPECT) Corpus in the following.2 The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText system (Stab et al., 2018a). The ArgumenText system expects as input an arbitrary topic (query) and searches a large web crawl for relevant docu2The dataset is available at http://www.ukp.tudarmstadt.de/data 570 ments. Finally, it classifies all sentences contained in the most relevant documents for a given query into pro, con or non-arguments (with regard to the given topic). We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision approach. For each of our 28 topics, we applied a sampling strategy that picks randomly two pro or con argument sentences at random, calculates their similarity using the system by Misra et al. (2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity scale. This was repeated until we reached 3,595 arguments pairs, about 130 pairs for each topic. The argument pairs were annotated on a range of three degrees of similarity (no, some, and high similarity) with the help of crowd workers on the Amazon Mechanical Turk platform. 
To account for unrelated pairs due to the sampling process, crowd workers could choose a fourth option.3 We collected seven assignments per pair and used Multi-Annotator Competence Estimation (MACE) with a threshold of 1.0 (Hovy et al., 2013) to consolidate votes into a gold standard. About 48% of the gold standard pairs are labeled with no similarity, whereas about 23% resp. 13% are labeled with some resp. high similarity. Furthermore, 16% of the pairs were labeled as containing invalid argument(s) (e.g. irrelevant to the topic at hand). We asked six experts (graduate research staff familiar with argument mining) to annotate a random subset of 50 pairs from 10 topics. The resulting agreement among experts was Krippendorff’s α = 0.43 (binary distance) resp. 0.47 (weighted distance4), reflecting the high difficulty of the task. Krippendorff’s α agreement between experts and the gold standard from crowd workers was determined as 0.54 (binary) resp. 0.55 (weighted distance). 4 Argument Classification As a first task in our pipeline of open-domain argument search, we focus on topic-dependent, sentence-level argument classification. To prevent 3The exact layout of the Human Intelligence Task (HIT) guidelines, as well as agreement statistics can be seen in the appendix. 4Reduced distance of 0.5 between high and some similarity, otherwise 1. the propagation of errors to the subsequent task of argument clustering, it is paramount to reach a high performance in this step. 4.1 Experimental Setup For the UKP Corpus, we use the proposed evaluation scheme by Stab et al. (2018b): The models are trained on the train split (70% of the data) of seven topics, tuned on the dev split (10%) of these seven topics, and then evaluated on the test split (20%) of the eighth topic. A macro F1-score is computed for the 3-label classes and scores are averaged over all topics and over ten random seeds. For the IBM Corpus, we use the setup by Shnarch et al. (2018): Training on 83 topics (4,066 sentences) and testing on 35 topics (1,719 sentences). We train for five different random seeds and report the average accuracy over all runs. 4.2 Methods We experiment with a number of different models and distinguish between models which use topic information and ones that do not. bilstm. This model was presented as a baseline by Stab et al. (2018b). It trains a bi-directional LSTM network on the sentence, followed by a softmax classifier and has no information about the topic. As input, pre-trained word2vec embeddings (Google News dataset) were used. biclstm. Stab et al. (2018b) presented the contextualized LSTM (clstm), which adds topic information to the i- and c-cells of the LSTM. The topic information is represented by using pre-trained word2vec embeddings. IBM. Shnarch et al. (2018) blend large automatically generated training sets with manually annotated data in the initialization step. They use an LSTM with 300-d GloVe embeddings and integrate the topic via attention. We re-implemented their system, as no official code is available. We experiment with these three models by replacing the word2vec / GloVe embeddings with ELMo and BERT embeddings. The ELMo embeddings are obtained by averaging the output of the three layers from the pre-trained 5.5B ELMo model. For each token in a sentence, we generate a BERT embedding with the pre-trained BERTlarge-uncased model. Further, we evaluate fine-tuning the transformer network from BERT for our datasets: BERT. 
We add a softmax layer to the output of the first token from BERT and fine-tune the net571 work for three epochs with a batch size of 16 and a learning rate of 2e-5. We only present the sentence to the BERT model. BERTtopic. We add topic information to the BERT network by changing the input to the network. We concatenate the topic and the sentence (separated by a special [SEP]-token) and finetune the network as mentioned before. 4.3 Results and Analysis In the following, we present and analyze the results. UKP Corpus. Replacing traditional embeddings in the bilstm by contextualized word embeddings improves the model’s performance by around 6pp and 8pp in F1 for ELMo and BERT (see Table 1). The fine-tuned BERT-large improves by even 12pp over the baseline bilstm and by this also outperforms bilstmBERT by around 4pp. Hence, using an intermediary BiLSTM layer for the BERT model even hurts the performance. Using ELMo and BERT embeddings in the topic-integrating biclstm model significantly decreases the performance, as compared to their performance in the bilstm. The contextualized word embedding for a topic is different to the one of a topic appearing in a sentence and the biclstm fails to learn a connection between them. Including the topic into the fine-tuned BERT models increases the F1 score by approx. 14.5pp and 13pp for BERT-base and BERT-large. This is due to a vast increase in recall for both models; while changes in precision are mostly small, recall for positive and negative arguments increases by at least 21pp for both models. As such, BERTlargetopic also beats the biclstm by almost 21pp in F1 score and represents a new state-of-the-art on this dataset. While the gap to human performance remains at around 18pp in F1, our proposed approach decreases this gap significantly as compared to the previous state-of-the-art. Based on preliminary experimental results, we suspect that this gap can be further reduced by adding more topics to the training data. The results show that (1) the BERT-[base/large] models largely improve F1 and precision for arguments and (2) leveraging topic-information yields another strong improvement on the recall of argumentative sentences. The usefulness of topicinformation has already been shown by Stab et al. (2018b) through their biclstm and stems from a much higher recall of arguments while losing some of the precision when compared to their bilstm. Yet, their approach cannot hold to BERT’s superior architecture; the topic-integrating BERT models BERT-basetopic and BERT-largetopic not only compensate for the biclstm’s drop in precision, but also increase the recall for pro and con arguments by at least 18pp and 15pp. We account this performance increase to BERT’s multihead attention between all word pairs, where every word in a sentence has an attention value with the topic (words). IBM corpus. As a baseline for models that do not use any topic information, we train three simple BiLSTMs with ELMo, BERT, and 300-d GloVe embeddings and compare them to the finetuned base and large BERT models. As Table 1 shows, BERT and ELMo embeddings perform around 2.7 and 3.7pp better in accuracy than the GloVe embeddings. BERT-base yields even 7pp higher accuracy, while its difference to the large model is only +1pp. Both BERT-base and BERT-large outperform the baseline IBM set by Shnarch et al. (2018) already by more than 6pp in accuracy5. 
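For reference, the fine-tuning input construction of Section 4.2 (sentence-only BERT vs. BERTtopic with the topic and sentence separated by [SEP]) could be reproduced roughly as follows. This is a sketch with the Hugging Face transformers API, not the authors' code, and the example topic and sentence are invented.

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=3)        # pro / con / no argument

    topic = "net neutrality"
    sentence = "Equal access to the network benefits every user."

    # BERT (no topic information): the sentence alone.
    enc_plain = tokenizer(sentence, return_tensors="pt")

    # BERT_topic: topic and sentence as a pair, which the tokenizer joins
    # with the special [SEP] token.
    enc_topic = tokenizer(topic, sentence, return_tensors="pt")

    with torch.no_grad():
        logits = model(**enc_topic).logits
    # Fine-tuning would run for three epochs with batch size 16 and a
    # learning rate of 2e-5, as reported above.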
The topic integrating models IBMELMo and IBMBERT do not improve much over their BiLSTM counterparts, which do not use any topic information. Similar to the conclusion for the UKP corpus, we attribute this to the different embedding vectors we retrieve for a topic as compared to the vectors for a topic mention within a sentence. BERT-basetopic and BERT-largetopic show the largest improvement with 8pp over the baseline and represent a new state-of-the-art on this dataset. The fine-tuned BERT models show vast improvements over the baseline, which is on par with the findings for the UKP corpus. Yet, in contrast to the results on the UKP corpus, adding topic information to the fine-tuned BERT models has only a small effect on the score. This can be explained by the different composition of the two corpora: while sentences in the UKP corpus may only be implicitly connected to their related topic (only 20% of all sentences contain their related topic), sentences in IBM's corpus all contain their related topic and are thus explicitly connected to it (although topics are masked with a placeholder). Hence, in the IBM corpus, there is much less need for the additional topic information in order to recognize the relatedness to a sentence.

Footnote 5: Please note that we refer to our reproduced baseline. Also, the original baseline's performance by Shnarch et al. (2018) can only be guessed, since the numbers are drawn from a figure and do not appear in the text.

Table 1: Results of each model for sentence-level argument classification using cross-topic evaluation on the UKP Sentential Argument Mining Corpus and on the IBM Debater - Evidence Sentences dataset. Blank fields result from dataset-specific models. P: precision, R: recall, arg+: pro-arguments, arg-: con-arguments.

                                      UKP Corpus                                     IBM
Model                                 F1       P arg+   P arg-   R arg+   R arg-     Accuracy
Without topic information
bilstm (Stab et al., 2018b)           .3796    .3484    .4710    .0963    .2181      .7201
bilstmELMo                            .4382    .4639    .5088    .1840    .2778      .7574
bilstmBERT                            .4631    .5051    .5079    .2074    .3076      .7476
BERT-base                             .4680    .5521    .5397    .2352    .2800      .7928
BERT-large                            .5019    .5844    .5818    .2917    .3154      .8021
With topic information
outer-att (Stab et al., 2018b)        .3873    .3651    .4696    .1042    .2381
biclstm (Stab et al., 2018b)          .4242    .2675    .3887    .2817    .4028
biclstmELMo                           .3924    .2372    .4381    .0317    .3955
biclstmBERT                           .4243    .3431    .4397    .1060    .4275
IBM (Shnarch et al., 2018)                                                           ~.74
IBM (reproduced)                                                                     .7288
IBMELMo                                                                              .7651
IBMBERT                                                                              .7480
BERT-basetopic                        .6128    .5048    .5313    .4698    .5795      .8137
BERT-largetopic                       .6325    .5535    .5843    .5051    .5594      .8131
Human Performance                                                                    .8100

5 Argument Clustering

Having identified a large amount of argumentative text for a topic, we next aim at grouping the arguments talking about the same aspects. For any clustering algorithm, a meaningful similarity between argument pairs is crucial and needs to account for the challenges regarding argument aspects, e.g., different aspect granularities, context-dependency or aspect multiplicity. Another requirement is robustness to topic-dependent differences. Therefore, in this section, we study how sentence-level argument similarity and clustering can be improved by using contextualized word embeddings. We evaluate our methods on the UKP ASPECT and the AFS corpus (see Section 3.2).

5.1 Clustering Method

We use agglomerative hierarchical clustering (Day and Edelsbrunner, 1984) to cluster arguments. We use the average linkage criterion to compute the similarity between two clusters A and B, defined as $\frac{1}{|A||B|} \sum_{a \in A} \sum_{b \in B} d(a, b)$ for a given similarity metric d.
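A minimal sketch of this clustering step, assuming a precomputed pairwise similarity matrix in [0, 1], is shown below. It uses scikit-learn for illustration (depending on the library version, the keyword for precomputed distances is affinity or metric), and the threshold value is a placeholder that would be tuned on the training topics, as discussed next.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def cluster_arguments(similarity_matrix, threshold=0.5):
        """Average-linkage agglomerative clustering over pairwise similarities;
        the number of clusters is left open and controlled by a distance threshold."""
        distance = 1.0 - np.asarray(similarity_matrix)
        clustering = AgglomerativeClustering(
            n_clusters=None,
            distance_threshold=threshold,
            affinity="precomputed",     # "metric" in newer scikit-learn releases
            linkage="average",
        )
        return clustering.fit_predict(distance)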
As it is a priori unknown how many different aspects are discussed for a topic (the number of clusters), we apply a stopping threshold which is determined on the train set. We also tested the k-means and DBSCAN clustering algorithms, but we found that agglomerative clustering generally yielded better performances in preliminary experiments.

Agglomerative clustering uses a pairwise similarity metric d between arguments. We propose and evaluate various similarity metrics in two setups: (1) without performing a clustering, i.e. the quality of the metric is directly evaluated (without clustering setup), and (2) in combination with the described agglomerative clustering method (with clustering setup).

5.2 Experimental Setup

We differentiate between unsupervised and supervised methods. Our unsupervised methods include no pre-training, whereas the supervised methods use some data for fine-tuning the model. For the UKP ASPECT corpus, we binarize the four labels to only indicate similar and dissimilar argument pairs. Pairs labeled with some and high similarity were labeled as similar, pairs with no similarity and different topic as dissimilar. We evaluate methods in a 4-fold cross-validation setup: seven topics are used for testing and 21 topics are used for fine-tuning. Final evaluation results are the average over the four folds. In case of supervised clustering methods, we use 17 topics for training and four topics for tuning. In their experiments on the AFS corpus, Misra et al. (2016) only performed a within-topic evaluation by using 10-fold cross-validation. As we are primarily interested in cross-topic performances, we evaluate our methods also cross-topic: we train on two topics, and evaluate on the third.

Table 2: F1 scores on the UKP ASPECT Corpus.

                                             Without Clustering              With Clustering
Model                                        Fmean    Fsim     Fdissim       Fmean    Fsim     Fdissim
Human Performance                            .7834    .7474    .8194         .7070    .6188    .7951
Random predictions                           .4801    .3431    .6171         .4253    .3162    .5344
Unsupervised Methods
Tf-Idf                                       .6118    .5230    .7007         .5800    .4892    .6708
InferSent - fastText                         .6621    .5866    .7376         .6344    .5443    .7584
InferSent - GloVe                            .6494    .5472    .7517         .6149    .4587    .7711
GloVe Embeddings                             .6468    .5632    .7304         .5926    .4605    .7246
ELMo Embeddings                              .6447    .5355    .7538         .6366    .5347    .7384
BERT Embeddings                              .6539    .5232    .7848         .6070    .4818    .7323
Supervised Methods: Cross-Topic Evaluation
BERT-base                                    .7401    .6695    .8107         .7007    .6269    .7746
BERT-large                                   .7244    .6297    .8191         .7135    .6125    .8146

5.3 Evaluation

For the UKP ASPECT dataset we compute the macro-average Fmean of the F1-scores for the similar label (Fsim) and for the dissimilar label (Fdissim). In the without clustering setup, we compute the similarity metric d(a, b) for an argument pair directly, and assign the label similar if it exceeds a threshold, otherwise dissimilar. The threshold is determined on the train set of a fold for unsupervised methods. For supervised methods, we use a held-out dev set. In the with clustering setup, we use the similarity metric to perform agglomerative clustering. This assigns each argument exactly one cluster ID. Argument pairs in the same cluster are assigned the label similar, and argument pairs in different clusters are assigned the label dissimilar. We use these labels to compute Fsim and Fdissim given our gold label annotations. For the AFS dataset, Misra et al. (2016) computed the correlation between the predicted similarity and the annotated similarity score. They do not mention which correlation method they used.
In our evaluation, we show the Pearson correlation (r) and Spearman's rank correlation coefficient (ρ).

5.4 Similarity Metrics

We experiment with the following methods to compute the similarity between two arguments.

Tf-Idf. We compute the most common words (without stop-words) in our training corpus and compute the cosine similarity between the Tf-Idf vectors of the two sentences.

InferSent. We compute the cosine similarity between the sentence embeddings returned by InferSent (Conneau et al., 2017).

Average Word Embeddings. We compute the cosine similarity between the averaged word embeddings for GloVe, ELMo and BERT.

BERT. We fine-tune the BERT-uncased model to predict the similarity between two given arguments. We add a sigmoid layer to the special [CLS] token and train it on some of the topics. We fine-tune for three epochs, with a learning rate of 2e-5 and a batch size of 32.

Human Performance. We approximated the human upper bound on the UKP ASPECT corpus in the following way: we randomly split the seven pair-wise annotations into two groups, computed their corresponding MACE (Hovy et al., 2013) scores and calculated Fsim, Fdissim and Fmean. We repeated this process ten times and averaged over all runs (without clustering setup). For the with clustering setup, we applied agglomerative hierarchical clustering on the MACE scores of one of the two groups and computed the evaluation metrics using the other group as the gold label. For the AFS dataset, Misra et al. (2016) computed the correlation between the three human annotators.

5.5 Results and Analysis

Unsupervised Methods. Table 2 shows the performance on the novel UKP ASPECT Corpus. When evaluating the argument similarity metrics directly (without clustering setup), we notice no large differences between averaging GloVe, ELMo or BERT embeddings. These three setups perform worse than applying InferSent with fastText embeddings. Tf-Idf shows the worst performance.

In Table 3, we show the performances for the AFS corpus (detailed results in the appendix, Table 5). In contrast to the ASPECT Corpus, the Tf-Idf method achieves the best performance and InferSent - fastText embeddings achieve the worst performance. As for the ASPECT Corpus, ELMo and BERT embeddings do not lead to an improvement compared to averaged GloVe embeddings. Unsupervised methods compute some type of similarity between sentence pairs. However, as our experiments show, this similarity notion is not necessarily the notion needed for the task.

Table 3: Pearson correlation r and Spearman's rank correlation ρ on the AFS dataset (Misra et al., 2016), averaged over the three topics.

Model                                         r        ρ
Human Performance                             .6767
Unsupervised Methods
Tf-Idf                                        .4677    .4298
InferSent - fastText                          .2519    .2423
InferSent - GloVe                             .2708    .2663
GloVe Embeddings                              .3240    .3400
ELMo Embeddings                               .2827    .2675
BERT Embeddings                               .3539    .3507
Supervised Methods: Within-Topic Evaluation
SVR (Misra et al., 2016)                      .6333
BERT-base                                     .7475    .7318
BERT-large                                    .7256    .6959
Supervised Methods: Cross-Topic Evaluation
BERT-base                                     .5849    .5723
BERT-large                                    .6202    .6034

Supervised Methods. We fine-tune the BERT model on some of the topics and study the performance on unseen topics. For the ASPECT Corpus, we observe a performance increase of 7.8pp. Identifying dissimilar arguments (Fdissim) is on par with the human performance, and identifying similar arguments achieves an F-score of .67, compared to .75 for human annotators.
For the AFS dataset, we observe that fine-tuning the BERT model significantly improves the performance by 11pp compared to the previous state-of-the-art from Misra et al. (2016). In a cross-topic evaluation setup on the AFS dataset, we observe that the performance drops to .57 Spearman correlation. This is still significantly larger than the best unsupervised method.

We evaluated the effect of the training set size on the performance of the BERT model for the ASPECT Corpus. A certain number of topics were randomly sampled and the performance was evaluated on distinct topics. This process was repeated 10 times with different random seeds (Reimers and Gurevych, 2018). Table 4 shows the averaged results. By allowing fine-tuning on five topics, we are able to improve the Fmean-score to .71, compared to .65 when using BERT without fine-tuning (without clustering setup). Adding more topics then slowly increases the performance.

Table 4: F1 scores on the UKP ASPECT Corpus with increasing training set sizes (BERT model).

#Topics   Fmean (w/o Clustering)   Fmean (With Clustering)
1         0.6244                   0.5943
3         0.6817                   0.6322
5         0.7134                   0.6563
7         0.7164                   0.6703
9         0.7151                   0.6697
11        0.7305                   0.6988
13        0.7350                   0.6964
15        0.7370                   0.7010
17        0.7401                   0.7034

With Clustering. We studied how the performance changes on the ASPECT corpus if we combine the similarity metric with agglomerative clustering (Table 2). We notice that the performances drop by up to 7.64pp. Agglomerative clustering is a strict partitioning algorithm, i.e., each object belongs to exactly one cluster. However, an argument can address more than one aspect of a topic; therefore, arguments could belong to more than one cluster. Hence, strict partitioning clustering methods introduce a new source of errors. We can estimate this source of error by evaluating the transitivity in our dataset: for a strict partitioning setup, if arguments A ∼ B and B ∼ C are similar, then A ∼ C must be similar. This transitivity property is violated in 376 out of 1,714 (21.9%) cases, indicating that strict partitioning is a suboptimal setup for the ASPECT dataset. This also explains why the human performance in the with clustering setup is significantly lower than in the without clustering setup. As Table 2 shows, a better similarity metric does not necessarily lead to a better clustering performance with agglomerative clustering. Humans are better than the proposed BERT model at estimating the pairwise similarity of arguments. However, when combined with a clustering method, the performances are on par.

6 Conclusion

Open-domain argument search, i.e. identifying and aggregating arguments for unseen topics, is a challenging research problem. The first challenge is to identify suitable arguments. Previous methods achieved low F1-scores in a cross-topic scenario, e.g., Stab et al. (2018b) achieved an F1-score of .27 for identifying pro-arguments.
As the annotation showed, about 16% of the pairs were noisy and did not address the target topic. Unsupervised methods on argument similarity showed rather low performance scores, confirming that fine-grained semantic nuances and not the lexical overlap determines the similarity between arguments. We were able to train a supervised similarity function based on the BERT transformer network that, even with little training data, significantly improved over unsupervised methods. While these results are very encouraging and stress the feasibility of open-domain argument search, our work also points to some weaknesses of the current methods and datasets. A good argument similarity function is only the first step towards argument clustering. We evaluated the agglomerative clustering algorithm in combination with our similarity function and identified it as a new source of errors. Arguments can address multiple aspects and therefore belong to multiple clusters, something that is not possible to model using partitional algorithms. Future work should thus study the overlapping nature of argument clustering. Further, more realistic datasets, that allow end-to-end evaluation, are required. Acknowledgments The authors would like to sincerely thank Joy Mahapatra, who carried out the initial annotation study. This work has been supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1) and within the project “Open Argument Mining” (GU 798/25-1), associated with the Priority Program “Robust Argumentation Machines (RATIO)” (SPP-1999). It has been co-funded by the German Federal Ministry of Education and Research (BMBF) under the promotional references 01UG1816B (CEDIFOR) and 03VP02540 (ArgumenText). References Filip Boltuˇzi´c and Jan ˇSnajder. 2015. Identifying prominent arguments in online debates using semantic textual similarity. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 110– 115. Elena Cabrio and Serena Villata. 2012. Natural Language Arguments: A Combined Approach. In Proceedings of the 20th European Conference on Artificial Intelligence, pages 205–210. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. William H. E. Day and Herbert Edelsbrunner. 1984. Efficient algorithms for agglomerative hierarchical clustering methods. Journal of Classification, 1(1):7–24. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11–22. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130. Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection. 
In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1489– 1500. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781. 576 Amita Misra, Brian Ecker, and Marilyn A. Walker. 2016. Measuring the similarity of sentential arguments in dialogue. In Proceedings of the SIGDIAL 2016 Conference, The 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 13-15 September 2016, Los Angeles, CA, USA, pages 276–287. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Nils Reimers and Iryna Gurevych. 2018. Why comparing single performance scores does not allow to draw conclusions about machine learning approaches. arXiv preprint arXiv:1803.09578. Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence - an automatic method for context dependent evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 440–450. Eyal Shnarch, Carlos Alzate, Lena Dankin, Martin Gleize, Yufang Hou, Leshem Choshen, Ranit Aharonov, and Noam Slonim. 2018. Will it Blend? Blending Weak and Strong Labeled Data in a Neural Network for Argumentation Mining. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 599–605. Christian Stab, Johannes Daxenberger, Chris Stahlhut, Tristan Miller, Benjamin Schiller, Christopher Tauchmann, Steffen Eger, and Iryna Gurevych. 2018a. ArgumenText: Searching for Arguments in Heterogeneous Sources. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: System Demonstrations, pages 21–25. Christian Stab and Iryna Gurevych. 2014. Identifying Argumentative Discourse Structures in Persuasive Essays. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 46–56. Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018b. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, volume Long Papers, pages 3664–3674. Amine Trabelsi and Osmar R Za¨ıane. 2015. Extraction and clustering of arguing expressions in contentious text. Data & Knowledge Engineering, 100:226– 239. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Henning Wachsmuth, Nona Naderi, Ivan Habernal, Yufang Hou, Graeme Hirst, Iryna Gurevych, and Benno Stein. 2017. Argumentation quality assessment: Theory vs. practice. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 250–255. Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 241–251. 577 A Appendices A.1 UKP ASPECT Corpus: Amazon Mechanical Turk Guidelines and Inter-annotator Agreement The annotations required for the UKP ASPECT Corpus were acquired via crowdsourcing on the Amazon Mechanical Turk platform. Workers participating in the study had to be located in the US, with more than 100 HITs approved and an overall acceptance rate of 90% or higher. We paid them at the US federal minimum wage of $7.25/hour. Workers also had to qualify for the study by passing a qualification test consisting of twelve test questions with argument pairs. Figure 2 shows the instructions given to workers. A.2 AFS Corpus: Detailed Results Table 5 shows the full results of the (un)supervised methods for the argument similarity calculation on the AFS dataset (all topics). 578 Gun Control Gay Marriage Death Penalty Avg. r ρ r ρ r ρ r ρ Human Performance .6900 .6000 .7400 .6767 Unsupervised Methods Tf-Idf .6266 .5528 .4107 .3778 .3657 .3589 .4677 .4298 InferSent - fastText .3376 .3283 .1012 .1055 .3168 .2931 .2519 .2423 InferSent - GloVe .3757 .3707 .1413 .1435 .2953 .2847 .2708 .2663 GloVe Embeddings .4344 .4485 .2519 .2741 .2857 .2973 .3240 .3400 ELMo Embeddings .3747 .3654 .1753 .1709 .2982 .2663 .2827 .2675 BERT Embeddings .4575 .4460 .1960 .1999 .4082 .4072 .3539 .3507 Supervised Methods: Within-Topic Evaluation SVR (Misra et al., 2016) .7300 .5400 .6300 .6333 BERT-base .8323 .8076 .6255 .6122 .7847 .7768 .7475 .7318 BERT-large .7982 .7592 .6240 .6137 .7545 .7149 .7256 .6959 Supervised Methods: Cross-Topic Evaluation BERT-base .6892 .6689 .4307 .4236 .6339 .6245 .5849 .5723 BERT-large .6895 .6749 .5071 .4866 .6641 .6486 .6202 .6034 Table 5: Pearson correlation r and Spearman’s rank correlation ρ on the AFS dataset. Within-Topic Evaluation: 10-fold cross-validation. Cross-Topic Evaluation: System trained on two topics, evaluated on the third. Read each of the following sentence pairs and indicate whether they argue about the same aspect with respect to the given topic (given as “Topic Name” on top of the HIT). There are four options, of which one needs to be assigned to each pair of sentences (arguments). Please read the following for more details. • Different Topic/Can’t decide: Either one or both of the sentences belong to a topic different than the given one, or you can’t understand one or both sentences. If you choose this option, you need to very briefly explain, why you chose it (e.g. “The second sentence is not grammatical”, “The first sentence is from a different topic” etc.). For example, Argument A: “I do believe in the death penalty, tit for tat”. Argument B: “Marriage is already a civil right everyone has, so like anyone you have it too”. • No Similarity: The two arguments belong to the same topic, but they don’t show any similarity, i.e. they speak about completely different aspects of the topic. For example, Argument A: “If murder is wrong then so is the death penalty”. Argument B: “The death penalty is an inappropriate way to work against criminal activity”. 
• Some Similarity: The two arguments belong to the same topic, showing semantic similarity on a few aspects, but the central message is rather different, or one argument is way less specific than the other. For example, Argument A: “The death penalty should be applied only in very extreme cases, such as when someone commands genocide”. Argument B: “An eye for an eye: He who kills someone else should face capital punishment by the law”. • High Similarity: The two arguments belong to the same topic, and they speak about the same aspect, e.g. using different words. For example, Argument A: “An ideal judiciary system would not sentence innocent people”. Argument B: “The notion that guiltless people may be sentenced is indeed a judicial system problem”. Your rating should not be affected by whether the sentences attack (e.g. “Animal testing is cruel and inhumane” for the topic “Animal testing”) or support (e.g. “Animals do not have rights, therefore animal testing is fair” for the topic “Animal testing”) the topic, but only by the aspect they are using to support or attack the topic. Figure 2: Amazon Mechanical Turk HIT Guidelines used in the annotation study for the Argument Aspect Similarity Corpus.
2019
54
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5437–5447 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5437 Ordinal and Attribute Aware Response Generation in a Multimodal Dialogue System Hardik Chauhan1∗, Mauajama Firdaus2∗, Asif Ekbal2, Pushpak Bhattacharyya2 1 Department of Electrical Engineering, Indian Institute of Technology Roorkee, India 2 Department of Computer Science and Engineering, Indian Institute of Technology Patna, India ([email protected]),(mauajama.pcs16,asif,pb)@iitp.ac.in Abstract Multimodal dialogue systems have opened new frontiers in the traditional goal-oriented dialogue systems. The state-of-the-art dialogue systems are primarily based on unimodal sources, predominantly the text, and hence cannot capture the information present in the other sources such as videos, audios, images etc. With the availability of large scale multimodal dialogue dataset (MMD) (Saha et al., 2018) on the fashion domain, the visual appearance of the products is essential for understanding the intention of the user. Without capturing the information from both the text and image, the system will be incapable of generating correct and desirable responses. In this paper, we propose a novel position and attribute aware attention mechanism to learn enhanced image representation conditioned on the user utterance. Our evaluation shows that the proposed model can generate appropriate responses while preserving the position and attribute information. Experimental results also prove that our proposed approach attains superior performance compared to the baseline models, and outperforms the state-of-the-art approaches on text similarity based evaluation metrics. 1 Introduction With the advancement in Artificial Intelligence (AI), dialogue systems have become a prominent part in today’s virtual assistant, which helps users to converse naturally with the system for effective task completion. Dialogue systems focus on two broad categories - open domain conversations with casual chit chat and goal-oriented systems where the system is designed to solve a particular task for the user belonging to a specific domain. Response generation is a crucial component of every conversational agent. The task of “how to say” ∗First two authors are jointly the first authors the information to the user is the primary objective of every response generation module. One of the running goals of AI is to bring language and vision together in building robust dialogue systems. Advances in visual question answering (VQA) (Kim et al., 2016; Xiong et al., 2016; Ben-Younes et al., 2017), and image captioning (Anderson et al., 2018; Chen et al., 2018) have ensured interdisciplinary research in natural language processing (NLP) and computer vision. Recently, several works in dialogue systems incorporating both vision and language (Das et al., 2017a; Mostafazadeh et al., 2017) have shown promising research directions. Goal oriented dialogue systems are majorly based on textual data (unimodal source). With increasing demands in the domains like retail, travel, entertainment, conversational agents that can converse by combining different modalities is an essential requirement for building the robust systems. Knowledge from different modalities carries complementary information about the various aspects of a product, event or activity of interest. 
By combining information from different modalities to learn better representation is crucial for creating robust dialogue systems. In a multimodal setup, the provision of different modalities assists both the user and the agent in achieving the desired goal. Our work is established upon the recently proposed Multimodal Dialogue (MMD) dataset (Saha et al., 2018), consisting of ecommerce (fashion domain) related conversations. The work focused on generating textual responses conditioned on the conversational history consisting of both text and image. In the existing task-oriented dialogue systems, the inclusion of visually grounded dialogues- as in the case of MMD dataset- has provided exciting new challenges in the field of interactive dialogue systems. In contrast to VQA, multimodal 5438 dialogues have conversations with more extended contextual dependency, and a clear end-goal. As opposed to a static image in VQA, MMD deals with dynamic images making the task even more challenging. In comparison to the previous slotfilling dialogue systems on textual data (Young et al., 2013; Rieser and Lemon, 2011), MMD provides an additional visual modality to drive the conversation forward. In this work, we propose an entirely data-driven response generation model in a multi-modal setup by combining the modalities of text and images. In Figure 1, we present an example from the MMD dataset. It is a conversation between the user and the system in a multimodal setting on the fashion domain. From the example, it is understood that the position of images is essential for the system to fulfill the demands of the user. For example, in figure, the U3 utterance “Can you tell me the type of colour in the 1st image” needs position information of the particular image from the given set of images. To handle such situations, we incorporate position embeddings to capture ordered visual information. The underlying motivation was to capture the correct image information from the text; hence, we use position aware attention mechanism. From Figure 1, in utterance U5, we can see that the user is keen on different aspects of the image as well. In this case, user is interested in the “print as in the 2nd image”. To focus and capture the different attributes from the image representation being considered in the text, we apply attribute aware attention on the image representation. Hence in order to handle such situations present in the dataset, we apply both position and attribute aware attention mechanisms to capture intricate details from the image and textual features. For effective interaction among the modalities, we use Multimodal Factorized Bilinear (MFB) (Yu et al., 2017) pooling mechanism. Since multimodal feature distribution varies dramatically, hence the integrated image-text representations obtained by such linear models may not be sufficient in capturing the complex interactions between the visual and textual modalities. The information of the present utterance, image and the contextual history are essential for better response generation (Serban et al., 2015). The key contributions/highlights of our current work are as follows: • We employ a position-aware attention mechanism to incorporate the ordered visual information and attribute-aware attention mechanism to focus on image conditioned on the attributes discussed in the text. • We utilize Multi-modal Factorized Bilinear (MFB) model to fuse the contextual information along with image and utterance representation. 
• We achieve state-of-the-art performance for the textual response generation task on the MMD dataset. The rest of the paper is structured as follows: In section 2, we discuss the related works. In Section 3, we explain the proposed methodology followed by the dataset description in Section 4. Experimental details and evaluation metrics are reported in Section 5. Results along with necessary analysis are presented in Section 6. In Section 7, we conclude the paper along with future research direction. Figure 1: An example from the MMD dataset 2 Related Work Research on dialog systems have been a major attraction since a long time. In this section we briefly discuss some of the prominent research carried out on single and multi-modal dialog systems. 2.1 Unimodal Dialogue Systems Dialogue systems have mostly focused on single modal source such as text. Hence, there have been 5439 Text Image System Utterance Text Image User Utterance Multimodal Encoder Multimodal Encoder Decoder Decoded Output Context Encoder (a) Overall model architecture with Multimodal encoder followed by context encoder and the decoder module View from left to right VGG VGG Concatenation Linear Layer Concatenation (b) Multimodal encoder with simple concatenation of text and image representations Figure 2: Block Diagram of the MHRED model; Left image is the overall system architecture for text generation; Right image is the baseline encoder model Direction from left to right Position Attribute Concatenation PE1 PE2 VGG VGG Attribute aware image representation Position aware image representation Concatenation MFB Context Encoder Figure 3: Proposed Multimodal Encoder with Position and Attribute aware Attention with MFB fusion several works carried out on data-driven textual response generation. To help the users achieve their desired goals, response generation provides the medium through which a conversational agent can communicate with its user. In (Ritter et al., 2011), the authors used social media data for response generation following the machine translation approach. The effectiveness of deep learning has shown remarkable improvement in dialogue generation. Deep neural models have been quite beneficial for modelling conversations in (Vinyals and Le, 2015; Li et al., 2016a,b; Shang et al., 2015). A context-sensitive neural language model was proposed in (Sordoni et al., 2015), where the model chooses the most probable response given the textual conversational history. In (Serban et al., 2015, 2017), the authors have proposed a hierarchical encoder-decoder model for capturing the dependencies in the utterances of a dialogue. Conditional auto-encoders have been employed in (Zhao et al.; Shen et al., 2018) that generate diverse replies by capturing discourse-level information in the encoder. Our current work differentiates from these existing works in dialogue systems in a way that we generate the appropriate responses by capturing information from both the text and image, conditioned on the conversational history. 2.2 Multimodal Dialogue Systems With the recent shift in interdisciplinary research, dialogue systems combining different modalities (text, images, video) have been investigated for creating robust conversational agents. Dialogue generation combining information from text and images (Das et al., 2017a,b; Mostafazadeh et al., 2017; Gan et al., 2019; De Vries et al., 2017) has been successful in bridging the gap between vision and language. 
Our work differs from these in that the conversation in the Multimodal Dialogue (MMD) dataset (Saha et al., 2018) deals with multiple images, and the growth of the conversation depends on both image and text, as opposed to a conversation with a single image. Lately, with the release of the DSTC7 dataset, video and textual modalities have been explored in (Lin et al., 2019; Le et al., 2019). Prior works on the MMD dataset reported in (Agarwal et al., 2018b,a; Liao et al., 2018) have captured the information in the form of knowledge bases using a hierarchical encoder-decoder model. Our work is different from these existing works on the MMD dataset in the sense that we incorporate position- and attribute-aware attention mechanisms for capturing ordered information and minute details such as colour, style, etc. from the image representations for more accurate response generation. Our method, unlike the previous works, makes use of the MFB technique for better information fusion across different modalities. The approach that we propose to capture and integrate information from image and text is novel. We successfully demonstrate the effectiveness of our proposed model in generating responses through sufficient empirical analysis.

3 Methodology

In this section we first define the problem and then present the details of the proposed method.

3.1 Problem Definition

In this paper, we address the task of textual response generation conditioned on conversational history, as proposed in (Saha et al., 2018). The dialogue consists of text utterances along with multiple images, and given a context of k turns, the task is to generate the next text response. More precisely, given a user utterance U_k = (w_{k,1}, w_{k,2}, ..., w_{k,n}), a set of images I_k = (img_{k,1}, img_{k,2}, ..., img_{k,n'}) and a conversational history H_k = ((U_1, I_1), (U_2, I_2), ..., (U_{k-1}, I_{k-1})), the task is to generate the next textual response Y_k = (y_{k,1}, y_{k,2}, ..., y_{k,n''}).

3.2 Hierarchical Encoder Decoder

We construct a response generation model, as shown in Figure 2(a), which is an extension of the recently introduced Hierarchical Encoder Decoder (HRED) architecture (Serban et al., 2016, 2017). As opposed to the standard sequence-to-sequence models (Cho et al., 2014; Sutskever et al., 2014), the dialogue context is modelled by a separate context Recurrent Neural Network (RNN) over the encoder RNN, thus forming a hierarchical encoder. The multimodal HRED (MHRED) is built upon the HRED to include text and image modalities. The key components of MHRED are the utterance encoder, image encoder, context encoder and decoder.

Utterance Encoder: Given an utterance U_m, a bidirectional Gated Recurrent Unit (BiGRU) (Bahdanau et al., 2014) is employed to encode each word w_{m,i}, i ∈ (1, ..., n), represented by d-dimensional embeddings, into the hidden vectors h_{U,m,i}:

\overrightarrow{h}_{U,m,i} = \mathrm{GRU}_{u,f}(w_{m,i}, \overrightarrow{h}_{U,m,i-1})    (1)
\overleftarrow{h}_{U,m,i} = \mathrm{GRU}_{u,b}(w_{m,i}, \overleftarrow{h}_{U,m,i-1})    (2)
h_{U,m,i} = [\overrightarrow{h}_{U,m,i}, \overleftarrow{h}_{U,m,i}]    (3)

Image Encoder: A pre-trained VGG-19 model (Simonyan and Zisserman, 2014) is used to extract image features for all the images in a given dialogue turn. The concatenation of the single-image features is given as input to a single linear layer to obtain a global image context representation:

F_{m,i} = \mathrm{VGG}(img_{m,i})    (4)
F_m = \mathrm{Concat}(F_{m,1}, F_{m,2}, ..., F_{m,n'})    (5)
h_{I,m} = \mathrm{ReLU}(W_I F_m + b_I)    (6)

where W_I and b_I are the trainable weight matrix and bias, respectively. The number of images in a single turn is ≤ 5; hence, zero vectors are considered in the absence of images.
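A compact PyTorch sketch of these two encoders is given below. The dimensions, the use of precomputed VGG features and the module names are illustrative assumptions; the original implementation may differ in detail.

    import torch
    import torch.nn as nn

    class UtteranceEncoder(nn.Module):
        """Bidirectional GRU over word embeddings, as in Eqs. (1)-(3)."""
        def __init__(self, vocab_size, emb_dim=512, hidden=512):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim)
            self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)

        def forward(self, token_ids):                  # (batch, seq_len)
            states, _ = self.gru(self.embedding(token_ids))
            return states                              # forward and backward states concatenated

    class ImageEncoder(nn.Module):
        """Linear projection of concatenated VGG-19 features, as in Eqs. (4)-(6);
        up to 5 images per turn, with zero vectors for missing images."""
        def __init__(self, feat_dim=4096, max_images=5, hidden=512):
            super().__init__()
            self.proj = nn.Linear(feat_dim * max_images, hidden)

        def forward(self, vgg_features):               # (batch, max_images, feat_dim)
            flat = vgg_features.flatten(start_dim=1)   # concatenation over the image slots
            return torch.relu(self.proj(flat))         # global image context representation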
Context-level Encoder: The final hidden representations from both the image and the text encoders are concatenated for every turn and are fed as input to the context GRU, as shown in Figure 2(b). A hierarchical encoder is thus built on top of the image and text encoders to model the dialogue history. The final hidden state of the context GRU serves as the initial state of the decoder GRU:

h_{c,m} = \mathrm{GRU}_c([h_{I,m}; h_{U,m,n}], h_{c,m-1})    (7)

Decoder: In the decoding stage, the decoder is another GRU that generates words sequentially, conditioned on the final hidden state of the context GRU and the previously decoded words. An attention mechanism similar to (Luong et al., 2015) is incorporated to enhance the performance of the decoder GRU. The attention layer is applied to the hidden states of the context encoder using the decoder state as the query vector. The concatenation of the context vector and the decoder state is used to compute a final probability distribution over the output tokens:

h_{d,t} = \mathrm{GRU}_d(y_{k,t-1}, h_{d,t-1})    (8)
\alpha_{t,m} = \mathrm{softmax}(h_{c,m}^{T} W_h h_{d,t})    (9)
c_t = \sum_{m=1}^{k} \alpha_{t,m} h_{c,m}    (10)
\tilde{h}_t = \tanh(W_{\tilde{h}} [h_{d,t}; c_t])    (11)
P(y_t \mid y_{<t}) = \mathrm{softmax}(W_V \tilde{h}_t)    (12)

where W_h, W_V and W_{\tilde{h}} are trainable weight matrices.

3.3 Proposed Model

To improve the performance of the MHRED model, rather than just concatenating the representations of the text and image encoders, we apply an attention layer to mask out the irrelevant information. In our case, we apply attention to learn where to focus and what to focus upon, as described in the user utterance. To decouple these two tasks, we augment the encoder with position- and attribute-aware attention mechanisms.

Position-aware Attention: In the baseline MHRED model, we incorporate position information of the images to improve the performance of the system. For example, for the utterance “List more in colour as the 4th image and style as in the 1st image”, the ordered information of the images is essential for the correct textual response by the agent to satisfy the needs of the user. Hence, the knowledge of every image with respect to its position is necessary so that the agent can capture the information and fulfill the objective of the customer. The lack of position information of the images in the baseline MHRED model causes quite a few errors in focusing on the right image. To alleviate this issue, we fuse the position embedding of every image with the corresponding image features. The position of every image is represented by a position embedding PE_i, where PE = [PE_1, ..., PE_{n'}]. This information is concatenated to the corresponding image features. To compute self-attention (Wang et al., 2017), we represent the textual features as H_U = [h_{U,1}, ..., h_{U,n}]:

\alpha_p = \mathrm{softmax}(W_p^{T} H_U),  U_p = \alpha_p H_U^{T}    (13)

We use the self-attended text embedding as a query vector U_p to calculate the attention distribution over the position embeddings PE:

\beta_p = \mathrm{softmax}(U_p^{T} W_{p'} PE),  I_p = \beta_p PE^{T}    (14)

where W_p and W_{p'} are trainable parameters.

Attribute-aware Attention: To focus on the different attributes of the image mentioned in the text, we employ attribute-aware attention:

\alpha_a = \mathrm{softmax}(W_a^{T} H_U),  U_a = \alpha_a H_U^{T}    (15)

The self-attended text embedding is used as a query vector U_a to compute the attention distribution over the image features, represented by H_I = [h_{I,1}, ..., h_{I,n'}]:

\beta_a = \mathrm{softmax}(U_a^{T} W_{a'} H_I),  I_a = \beta_a H_I^{T}    (16)

where W_a and W_{a'} are trainable parameters. Finally, in our proposed model, as shown in Figure 3, we incorporate the position-aware and attribute-aware attention mechanisms to provide focused information conditioned on the text utterance.
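Both attention blocks follow the same pattern: a self-attended summary of the utterance acts as a query over a memory, which is either the position embeddings (Eqs. 13-14) or the image features (Eqs. 15-16). The PyTorch sketch below illustrates this shared pattern; dimensions and initialization are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QueryGuidedAttention(nn.Module):
        """Self-attention over text states yields a query vector, which then
        attends over a memory (position embeddings or image features)."""
        def __init__(self, text_dim, mem_dim):
            super().__init__()
            self.self_attn = nn.Linear(text_dim, 1, bias=False)          # alpha = softmax(w^T H_U)
            self.bilinear = nn.Parameter(0.01 * torch.randn(text_dim, mem_dim))

        def forward(self, text_states, memory):
            # text_states: (batch, n_words, text_dim); memory: (batch, n_items, mem_dim)
            alpha = F.softmax(self.self_attn(text_states).squeeze(-1), dim=-1)
            query = torch.bmm(alpha.unsqueeze(1), text_states)           # self-attended text summary
            scores = torch.bmm(query @ self.bilinear, memory.transpose(1, 2))
            beta = F.softmax(scores, dim=-1)                             # attention over positions / images
            attended = torch.bmm(beta, memory).squeeze(1)                # I_p or I_a
            return query.squeeze(1), attended

One instance of this block would be applied over the position embeddings and another over the image features; the two query vectors would form U_f and the two attended vectors would form I_f, as described next.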
We concatenate the U_a and U_p vectors to form the final utterance representation U_f, and the I_a and I_p vectors to form the final image representation I_f. The output of the context encoder h_c, along with I_f and U_f, serves as input to the MFB module. Here, we compute the MFB between I_f and U_f:

z = \mathrm{SumPooling}(W_m U_f^{T} \circ W_{m'} I_f^{T}, k')    (17)
z = \mathrm{sign}(z) |z|^{0.5},  z = z^{T} / \lVert z \rVert    (18)

where W_m and W_{m'} are trainable parameters, and the SumPooling function is the same as described in (Gan et al., 2019). Similarly, we take the pairwise combinations of I_f, U_f and h_c as the final output of our multimodal fusion module. Hence, the final multimodal fusion can be represented by h_d = [MFB(U_f, I_f), MFB(U_f, h_c), MFB(I_f, h_c)], where h_d is used to initialize the decoder (an illustrative sketch of this fusion is given at the end of this section).

3.4 Training and Inference

We employ the commonly used teacher forcing algorithm (Williams and Zipser, 1989) at every decoding step to minimize the negative log-likelihood under the model distribution. We define y^* = {y_1^*, y_2^*, ..., y_m^*} as the ground-truth output sequence for a given input:

L_{ml} = - \sum_{t=1}^{m} \log p(y_t^* \mid y_1^*, ..., y_{t-1}^*)    (19)

We apply uniform label smoothing (Szegedy et al., 2016) to alleviate the common issue of low diversity in dialogue systems, as suggested in (Jiang and de Rijke, 2018).

3.5 Baseline Models

For our experiments, we develop the following models:

Model 1 (MHRED): The first model is the baseline MHRED model described in Section 3.2.

Model 2 (MHRED + A): In this model, we apply attention (A) on the text and image features rather than merely concatenating the features.

Model 3 (MHRED + A + PE): In this model, the position embedding (PE) of every image is concatenated with the respective image features to provide ordered visual information of the images.

Model 4 (MHRED + PA): Self-attention on the text representations with respect to position information is computed to generate a query vector. This query vector is used to learn the attention distribution over the position embeddings to focus on the image discussed in the user utterance.

Model 5 (MHRED + AA): To learn the different attributes discussed in the text, we apply self-attention on the text representation and compute a query vector that attends over the image representation in accordance with the attributes in the text.

Model 6 (MHRED + PA + AA): In this model, the final text and image representations, denoted as U_f and I_f, respectively, and obtained after applying the position- and attribute-aware attention, are concatenated and fed as input to the context encoder.

Model 7 (MHRED + MFB(I,T)): The MFB module is employed to learn the complex association between the textual and visual features. The final text representation (T) U_f and the final image representation (I) I_f are fed as input to the MFB module.

Model 8 (MHRED + MFB(I,T,C)): In this model, we concatenate the pairwise outputs of the MFB module over the contextual information (C), i.e. the output of the context encoder h_c, along with the text and image representations.
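The MFB fusion of Eqs. (17)-(18), referenced above, can likewise be sketched in a few lines of PyTorch; the joint dimension and pooling factor k' below are illustrative choices, not necessarily the values used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MFBFusion(nn.Module):
        """Multimodal Factorized Bilinear pooling: project both inputs to a large
        joint space, combine element-wise, sum-pool over groups of k factors,
        then apply power and l2 normalisation (Eqs. 17-18)."""
        def __init__(self, dim_a, dim_b, out_dim=512, k=5):
            super().__init__()
            self.k = k
            self.proj_a = nn.Linear(dim_a, out_dim * k)
            self.proj_b = nn.Linear(dim_b, out_dim * k)

        def forward(self, a, b):
            joint = self.proj_a(a) * self.proj_b(b)                           # Hadamard product
            joint = joint.view(joint.size(0), -1, self.k).sum(dim=2)          # SumPooling with window k
            joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-10)  # power normalisation
            return F.normalize(joint, dim=-1)                                 # l2 normalisation

The decoder initialization h_d would then be the concatenation of this module applied pairwise to (U_f, I_f), (U_f, h_c) and (I_f, h_c).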
4 Datasets
Our work is built upon the Multimodal Dialogue (MMD) dataset (Saha et al., 2018). The MMD dataset comprises 150k chat sessions between customers and sales agents; Table 1 lists detailed statistics of the dataset. Domain-specific knowledge in the fashion domain was captured during the series of customer-agent interactions. The dialogues incorporate text and image information seamlessly in a conversation, bringing together multiple modalities for creating advanced dialogue systems.

The dataset poses new challenges for multimodal, goal-oriented dialogue containing complex user utterances. For example, "Can you show me the 5th image in different orientations within my budget?" requires quantitative inference such as filtering, counting and sorting. Bringing the textual and image modalities together, multimodal inference makes the task of generation even more challenging, for example, "See the second stilettos, I want to see more like it but in a different colour". In our work, we use a different version of the dataset, as described in (Agarwal et al., 2018a,b), to capture the multiple images of a turn as one concatenated context vector for every turn in a given dialogue.

Table 1: Dataset statistics of MMD
Dataset Statistics                     | Train   | Valid  | Test
Number of dialogues                    | 105,439 | 22,595 | 22,595
Avg. turns per dialogue                | 40      | 40     | 40
No. of utterances with image response  | 904K    | 194K   | 193K
No. of utterances with text response   | 1.54M   | 331K   | 330K
Avg. words in text response            | 14      | 14     | 14

5 Experiments
In this section we present the implementation details and the evaluation metrics (automatic and human) that we use for measuring model performance.

5.1 Implementation Details
All the implementations are done using the PyTorch (https://pytorch.org/) framework. We use 512-dimensional word embeddings and 10-dimensional position embeddings, as described in (Vaswani et al., 2017). We use dropout (Srivastava et al., 2014) with probability 0.45. During decoding, we use beam search with beam size 10. We initialize the model parameters randomly from a Gaussian distribution with the Xavier scheme (Glorot and Bengio, 2010). The hidden size for all the layers is 512. We employ AMSGrad (Reddi et al., 2019) as the optimizer for model training to mitigate slow convergence issues. We use uniform label smoothing with ϵ = 0.1 and perform gradient clipping when the gradient norm exceeds 5. For image representation, the FC6-layer (4096-dimensional) representation of VGG-19 (Simonyan and Zisserman, 2014), pretrained on ImageNet, is used.

5.2 Automatic Evaluation
For evaluating the model we report standard metrics such as BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Lavie and Agarwal, 2007), employing the evaluation scripts made available by (Sharma et al., 2017).

5.3 Human Evaluation
To understand the quality of the responses, we adopt human evaluation to compare the performance of the different models. We randomly sample 700 responses from the test set for human evaluation. Given an utterance, the image and the conversation history were presented to three human annotators with a post-graduate level of exposure. They were asked to measure the correctness and relevance of the responses generated by the different models with respect to the following metrics:
1. Fluency (F): The generated response is grammatically correct and is free of any errors.
2. Relevance (R): The generated response is in accordance with the aspect being discussed (style, colour, material, etc.), and contains the information relevant to the conversational history. Also, there is no loss of attributes/information in the generated response.
We follow the scoring scheme for fluency and relevance as: 0: incorrect or incomplete, 1: moderately correct, and 2: correct. We compute Fleiss' kappa (Fleiss, 1971) for the above metrics to measure inter-rater consistency. The kappa score for fluency is 0.75 and for relevance 0.77, indicating "substantial agreement".
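For reference, Fleiss' kappa as reported above can be computed from a matrix of per-item category counts. The following NumPy sketch uses a randomly generated rating matrix purely for illustration; it is not the actual annotation data.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a (n_items, n_categories) matrix whose entry [i, c]
    counts how many annotators assigned category c to item i.
    Assumes the same number of raters for every item."""
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    p_cat = ratings.sum(axis=0) / (n_items * n_raters)            # category proportions
    p_item = ((ratings ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_item.mean(), (p_cat ** 2).sum()
    return (p_bar - p_e) / (1.0 - p_e)

# Hypothetical example: 700 responses, 3 annotators, scores in {0, 1, 2}.
# Because the counts are random, the resulting kappa is only illustrative.
counts = np.random.multinomial(3, [0.2, 0.4, 0.4], size=700)
print(fleiss_kappa(counts))
```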
6 Results and Analysis
In this section we present the detailed experimental results using both automatic and human evaluation metrics. In addition, we also report the errors that our current model encounters.

6.1 Automatic Evaluation Results
The results of the different models are presented in Table 2. The proposed model performs better than the other baselines for all the evaluation metrics, and we find this improvement to be statistically significant.[2] The results are reported for context size 5 due to its superior performance in comparison to context size 2, as shown in (Agarwal et al., 2018a,b).

Table 2: Results of different models on the MMD dataset. Here, A: Attention, PE: Positional embeddings, PA: Position-aware attention, AA: Attribute-aware attention, MFB(I,T): MFB fusion on image (I) and text (T) representations, MFB(I,T,C): MFB fusion on I, T and context (C).
Description        | Model                                 | BLEU-4 | METEOR | ROUGE-L
State-of-the-art   | MHRED-attn (Agarwal et al., 2018a)    | 0.4451 | 0.3371 | 0.6799
State-of-the-art   | MHRED-attn-kb (Agarwal et al., 2018b) | 0.4634 | 0.3480 | 0.6923
Baseline Models    | MHRED                                 | 0.4454 | 0.3367 | 0.6725
                   | MHRED + A                             | 0.4512 | 0.3452 | 0.6754
                   | MHRED + A + PE                        | 0.4548 | 0.3476 | 0.6783
                   | MHRED + PA                            | 0.4781 | 0.3521 | 0.7055
                   | MHRED + AA                            | 0.4763 | 0.3511 | 0.7063
                   | MHRED + PA + AA                       | 0.4810 | 0.3569 | 0.7123
                   | MHRED + MFB(I,T)                      | 0.4791 | 0.3523 | 0.7115
                   | MHRED + MFB(I,T,C)                    | 0.4836 | 0.3575 | 0.7167
Our Proposed Model | MHRED + PA + AA + MFB(I,T)            | 0.4928 | 0.3689 | 0.7211
                   | MHRED + PA + AA + MFB(I,T,C)          | 0.4957 | 0.3714 | 0.7254

[2] We perform a statistical significance t-test (Welch, 1947), conducted at the 5% (0.05) significance level.

The MHRED model is a decent baseline with good scores (0.6725 ROUGE-L, 0.4454 BLEU). The application of attention over the text and image representations, as opposed to concatenation, provides an absolute improvement of +0.85% in METEOR as well as in the other metrics. To provide ordered visual information in Model 3, we incorporate positional embeddings for the images, which boosts the performance of text generation by +0.94% in BLEU score and +0.58% in ROUGE-L. The improved performance shows the effectiveness of position embeddings for the images in a multimodal dialogue setting. The efficiency of the position-aware and attribute-aware attention mechanisms (Model 6) can be seen in the increased performance of the model with respect to Model 4 and Model 5, with improvements of 0.68% and 0.6% in the ROUGE-L metric, respectively. The MFB-based fusion technique helps to improve the performance of the generation model (Model 8), with an improvement of 3.82% in BLEU score with respect to the baseline model, whereas it shows a 0.26% improvement in BLEU score in comparison to Model 6. The final proposed model (MHRED + PA + AA + MFB(I,T,C)), after incorporating the position- and attribute-aware attention mechanisms along with MFB fusion, attains state-of-the-art performance with improvements of 3.23% in BLEU score, 3.31% in ROUGE-L and 2.34% in METEOR in comparison to the existing approaches (Agarwal et al., 2018b).

[Figure 4: Position- and Attribute-aware Attention Visualization]

In Figure 4, we show the attention visualization to demonstrate the effectiveness of our proposed position- and attribute-aware attention mechanisms. Example 1 in the figure shows that the model can focus on the correct image (in this case, the 3rd image) with the help of the position-aware attention mechanism, as the focus is given to the word 3rd in the utterance.
Example 2 shows the effect of both the position- and attribute-aware attention mechanisms, which help in more accurate response generation. The positional word 2nd along with the attribute rubber obtains the maximum focus in the given example. In Example 3, we can see the effect of the attribute-aware attention mechanism, with maximum attention given to keywords such as dark, red and frame in the utterance.

6.2 Human Evaluation Results
In Table 3, we present the human evaluation results. In the case of fluency, the baseline MHRED model and the proposed model show quite similar performance, while for the relevance metric our proposed model shows better performance, with an improvement of 7.47% in generating correct responses. This may be because our proposed model focuses on the relevant information in the text as well as the image, and generates more accurate and informative responses. All the results are statistically significant, as we perform Welch's t-test (Welch, 1947), conducted at the 5% (0.05) significance level.

Table 3: Human evaluation results for Fluency and Relevance (all values are in percentages).
Description | Model                        | Fluency (0 / 1 / 2)    | Relevance (0 / 1 / 2)
Baseline    | MHRED                        | 18.64 / 39.66 / 41.70  | 13.41 / 39.83 / 46.76
Proposed    | MHRED + PA + AA + MFB(I,T,C) | 15.54 / 42.71 / 41.75  | 7.36 / 38.14 / 54.23

6.3 Error Analysis
We analyse the outputs generated by our proposed model to perform a detailed qualitative analysis of the responses. In Figure 5, we present a few examples of the responses generated by the different models given the image and utterance as input.

[Figure 5: Examples of Responses Generated by the Different Models]

Some commonly occurring errors include:
1. Unknown tokens: As the baseline MHRED model uses the basic sequence-to-sequence framework, the number of unknown tokens predicted is the highest in this case. The model also often predicts the 'end of sequence' token just after an 'out of vocabulary' token, thus leaving sequences incomplete. Gold: ..the type of the chinos is cargo in the 1st and 2nd image; Predicted: .. the type
2. Extra information: The proposed model sometimes generates more informative sentences than the ground-truth response due to multiple occurrences of these attributes together in the data: Gold: the jackets in the 1st, 2nd and 5th images will suit well for dry clean; Predicted: the jackets in the 1st, 2nd and 5th images will suit well for dry clean, regular, cold, hand clean.
3. Repetition: The baseline, as well as the proposed model in a few cases, goes on repeating the information present in a given utterance: Gold: it can go well with cropped type navy sweater; Predicted: it can go well with navy style, navy neck, navy style, navy neck sweater and with.
4. Incorrect products: The model generates incorrect products in the predicted utterance compared to the ones present in the original utterance, as different products have similar attributes: Gold: it can go well with unique branded, black colouring, chic type hand bag; Predicted: it can go well with black frame colour sunglasses.
5. Wrong choice of images: The model focuses on incorrect images with respect to the conversational history due to the discussion over multiple images in the history. Gold: the upper material in the 2nd image is rubber lace; Predicted: the upper material in the 4th image is leather.

7 Conclusion
In this paper, we have proposed an ordinal and attribute aware attention mechanism for natural language generation exploiting images and texts.
In a multimodal setting, the information sharing between the modalities is significant for proper response generation, thereby leading to customer satisfaction. We incorporate the MFB fusing technique along with position and attribute aware attention mechanism for effective knowledge integration from the textual and visual modalities. On the recently released MMD dataset, the incorporation of our proposed techniques has shown improved performance for the task of textual response generation. In qualitative and quantitative analyses of the generated responses, we have observed contextually correct and informative responses, along with minor inaccuracies as discussed in the error analysis section. Overall the performance of our model shows the variations and more accurate responses in comparison to the other models keeping the attribute and position information of the generated responses intact. In future, along with the opportunity of extending the architectural design and training methodologies to enhance the performance of our systems, we look forward to designing a specific component to enhance the natural language generation component of an end-to-end chatbot, by including image generation and retrieval systems for the completion of a multimodal dialogue system. Acknowledgement Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia). References Shubham Agarwal, Ondrej Dusek, Ioannis Konstas, and Verena Rieser. 2018a. Improving context modelling in multimodal dialogue generation. arXiv preprint arXiv:1810.11955. Shubham Agarwal, Ondrej Dusek, Ioannis Konstas, and Verena Rieser. 2018b. A knowledgegrounded multimodal search-based conversational agent. arXiv preprint arXiv:1810.11954. Peter Anderson, Stephen Gould, and Mark Johnson. 2018. Partially-supervised image captioning. In Advances in Neural Information Processing Systems, pages 1879–1890. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Hedi Ben-Younes, R´emi Cadene, Matthieu Cord, and Nicolas Thome. 2017. Mutan: Multimodal tucker fusion for visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2612–2620. Chen Chen, Shuai Mu, Wanpeng Xiao, Zexiong Ye, Liesi Wu, Fuming Ma, and Qi Ju. 2018. Improving image captioning with conditional generative adversarial nets. arXiv preprint arXiv:1805.07112. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326–335. Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2951–2960. 
Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5503–5512. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. Zhe Gan, Yu Cheng, Ahmed EI Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao. 2019. Multi-step reasoning via recurrent dual attention for visual dialog. arXiv preprint arXiv:1902.00579. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on Artificial Intelligence and Statistics, pages 249–256. 5446 Shaojie Jiang and Maarten de Rijke. 2018. Why are sequence-to-sequence models so dull? understanding the low-diversity problem of chatbots. arXiv preprint arXiv:1809.01941. Jin-Hwa Kim, Sang-Woo Lee, Donghyun Kwak, MinOh Heo, Jeonghee Kim, Jung-Woo Ha, and ByoungTak Zhang. 2016. Multimodal residual learning for visual qa. In Advances in Neural Information Processing Systems, pages 361–369. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Association for Computational Linguistics. Hung Le, S Hoi, Doyen Sahoo, and N Chen. 2019. End-to-end multimodal dialog systems with hierarchical multimodal attention on video features. In DSTC7 at AAAI2019 workshop. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT, pages 110–119. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 994–1003. Lizi Liao, Yunshan Ma, Xiangnan He, Richang Hong, and Tat-seng Chua. 2018. Knowledge-aware multimodal dialogue systems. In 2018 ACM Multimedia Conference on Multimedia Conference, pages 801– 809. ACM. Chin-Yew Lin. 2004. Rouge: a package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out, Post-Conference Workshop of ACL 2004, Barcelona, Spain. Kuan-Yen Lin, Chao-Chun Hsu, Yun-Nung Chen, and Lun-Wei Ku. 2019. Entropy-enhanced multimodal attention model for scene-aware dialogue generation. In DSTC7 at AAAI2019 workshop. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Imagegrounded conversations: Multimodal context for natural question and response generation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 462–472. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. 2019. 
On the convergence of adam and beyond. arXiv preprint arXiv:1904.09237. Verena Rieser and Oliver Lemon. 2011. Reinforcement learning for adaptive dialogue systems: a datadriven methodology for dialogue management and natural language generation. Springer Science & Business Media. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the conference on Empirical Methods in Natural Language Processing, pages 583– 593. Association for Computational Linguistics. Amrita Saha, Mitesh M Khapra, and Karthik Sankaranarayanan. 2018. Towards building large scale multimodal domain-aware conversation systems. In Thirty-Second AAAI Conference on Artificial Intelligence,pages 696-704. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808, 7(8). Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 3776–3783. AAAI Press. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1577–1586. Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. arXiv preprint arXiv:1706.09799. Xiaoyu Shen, Hui Su, Shuzi Niu, and Vera Demberg. 2018. Improving variational encoder-decoders in dialogue generation. In Thirty-Second AAAI Conference on Artificial Intelligence, pages 5456–5463. AAAI. 5447 Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 196205, Denver, Colorado. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2818–2826. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of ICML Deep Learning Workshop. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198. Bernard L Welch. 1947. The generalization ofstudent’s’ problem when several different population variances are involved. Biometrika, 34(1/2):28–35. Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270– 280. Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In International Conference on Machine Learning, pages 2397–2406. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 1821–1830. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5448–5453 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics 5448

Memory Consolidation for Contextual Spoken Language Understanding with Dialogue Logistic Inference
He Bai1,2, Yu Zhou1,2, Jiajun Zhang1,2 and Chengqing Zong1,2,3
1 National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
2 University of Chinese Academy of Sciences, Beijing, China
3 CAS Center for Excellence in Brain Science and Intelligence Technology
{he.bai, yzhou, jjzhang, cqzong}@nlpr.ia.ac.cn

Abstract
Dialogue contexts have proven helpful in spoken language understanding (SLU) systems and are typically encoded with explicit memory representations. However, most previous models learn the context memory with only a single objective of maximizing the SLU performance, leaving the context memory under-exploited. In this paper, we propose a new dialogue logistic inference (DLI) task to consolidate the context memory jointly with SLU in a multi-task framework. DLI is defined as sorting a shuffled dialogue session into its original logical order and shares the same memory encoder and retrieval mechanism as the SLU model. Our experimental results show that various popular contextual SLU models can benefit from our approach, and the improvements are quite impressive, especially in slot filling.

1 Introduction
Spoken language understanding (SLU) is a key technique in today's conversational systems such as Apple Siri, Amazon Alexa, and Microsoft Cortana. A typical SLU pipeline includes domain classification, intent detection, and slot filling (Tur and De Mori, 2011), to parse user utterances into semantic frames. Example semantic frames (Chen et al., 2018) are shown in Figure 1 for a restaurant reservation.

[Figure 1: Example semantic frames of utterances u1 and u2 with domain (D), intent (I) and semantic slots in IOB format (S1, S2).]

Traditionally, domain classification and intent detection are treated as classification tasks with popular classifiers such as support vector machines and deep neural networks (Haffner et al., 2003; Sarikaya et al., 2011). They can also be combined into one task if there are not many intents for each domain (Bai et al., 2018). Slot filling is usually treated as a sequence labeling task. Popular approaches for slot filling include conditional random fields (CRF) and recurrent neural networks (RNN) (Raymond and Riccardi, 2007; Yao et al., 2014). Considering that pipeline approaches usually suffer from error propagation, joint models for slot filling and intent detection have been proposed to improve sentence-level semantics via mutual enhancement between the two tasks (Xu and Sarikaya, 2013; Hakkani-Tür et al., 2016; Zhang and Wang, 2016; Goo et al., 2018), which is a direction we follow.

To create a more effective SLU system, contextual information has been shown to be useful (Bhargava et al., 2013; Xu and Sarikaya, 2014), as natural language utterances are often ambiguous. For example, the number 6 in utterance u2 of Figure 1 may refer to either B-time or B-people without considering the context.
Popular contextual SLU models (Chen et al., 2016; Bapna et al., 2017) exploit the dialogue history with a memory network (Weston et al., 2014), which covers all three main stages of the memory process: encoding (write), storage (save) and retrieval (read) (Baddeley, 1976). With such a memory mechanism, an SLU model can retrieve context knowledge to reduce the ambiguity of the current utterance, contributing to a stronger SLU model. However, memory consolidation, a well-recognized operation for maintaining and updating memory in cognitive psychology (Sternberg and Sternberg, 2016), is underestimated in previous models. They update memory with only a single objective of maximizing the SLU performance, leaving the context memory under-exploited.

[Figure 2: Architecture of our proposed contextual SLU with memory consolidation.]

In this paper, we propose a multi-task learning approach for multi-turn SLU by consolidating context memory with an additional task: dialogue logistic inference (DLI), defined as sorting a shuffled dialogue session into its original logical order. DLI can be trained jointly with contextual SLU if utterances are sorted one by one: selecting the right utterance from the remaining candidates based on the previously sorted context. In other words, given a response and its context, the DLI task requires our model to infer whether the response is the right one that matches the dialogue context, similar to the next sentence prediction task (Logeswaran and Lee, 2018).

We conduct our experiments on the public multi-turn dialogue dataset KVRET (Eric and Manning, 2017), with two popular memory-based contextual SLU models. According to our experimental results, noticeable improvements are observed, especially on slot filling.

2 Model Architecture
This section first explains the memory mechanism for contextual SLU, including memory encoding and memory retrieval. Then we introduce the SLU tagger with context knowledge, the definition of DLI, and how to optimize SLU and DLI jointly. The overall model architecture is illustrated in Figure 2.

Memory Encoding: To represent and store the dialogue history {x_1, x_2, ..., x_k}, we first encode it into memory embeddings M = {m_1, m_2, ..., m_k} with a BiGRU (Chung et al., 2014) layer and then encode the current utterance x_{k+1} into a sentence embedding c with another BiGRU:

m_i = \mathrm{BiGRU}_m(x_i),  c = \mathrm{BiGRU}_c(x_{k+1})    (1)

Memory Retrieval: Memory retrieval refers to formulating the contextual knowledge of the user's current utterance x_{k+1} by recalling the dialogue history. There are two popular memory retrieval methods.

The attention-based method (Chen et al., 2016) first calculates the attention distribution of c over the memories M by taking the inner product followed by a softmax function. The context can then be represented as a weighted sum over M by the attention distribution:

p_i = \mathrm{softmax}(c^{T} m_i),  m_{ws} = \sum_i p_i m_i    (2)

where p_i is the attention weight of m_i. In Chen et al. (2016), m_{ws} is summed with the utterance embedding c and then multiplied with a weight matrix W_o to generate an output knowledge encoding vector h:

h = W_o(c + m_{ws})    (3)

The sequential-encoder-based method (Bapna et al., 2017) shows another way to calculate h:

g_i = \mathrm{sigmoid}(\mathrm{FF}([c; m_i]))    (4)
h = \mathrm{BiGRU}_g([g_1, g_2, ..., g_k])    (5)

where the function FF() is a fully connected forward layer.
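As an illustration, the attention-based retrieval of Eqs. (2)-(3) can be written in a few lines of PyTorch. The batched tensor shapes are our own assumption, and the closing comment only sketches how the sequential-encoder variant of Eqs. (4)-(5) differs; neither is the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMemoryRetrieval(nn.Module):
    """Sketch of Eqs. (2)-(3): attend over memory embeddings m_i with the
    current-utterance embedding c, then project the sum c + m_ws into h."""

    def __init__(self, d_hidden):
        super().__init__()
        self.w_out = nn.Linear(d_hidden, d_hidden, bias=False)   # W_o in Eq. (3)

    def forward(self, c, memory):
        # c: (batch, d), memory M: (batch, k, d)
        scores = torch.bmm(memory, c.unsqueeze(-1)).squeeze(-1)   # c^T m_i
        p = F.softmax(scores, dim=-1)                             # attention weights p_i
        m_ws = torch.bmm(p.unsqueeze(1), memory).squeeze(1)       # weighted sum over M
        return self.w_out(c + m_ws)                               # knowledge vector h

# The sequential-encoder variant (Eqs. 4-5) would instead compute a gate
# g_i = sigmoid(FF([c; m_i])) for every memory slot and feed [g_1, ..., g_k]
# to another BiGRU whose output is used as h.
```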
Contextual SLU: Following Bapna et al. (2017), our SLU model is a stacked BiRNN: a BiGRU layer followed by a BiLSTM layer. However, Bapna et al. (2017) only initialize the BiLSTM layer's hidden state with h, resulting in low participation of the context knowledge. In this work, we feed h to the second layer at every time step:

O_1 = \mathrm{BiGRU}_1(x_{k+1})    (6)
O_2 = \mathrm{BiLSTM}_2([O_1; h])    (7)

where O_1 = {o_1^1, ..., o_1^m} is the first layer's output and m is the length of x_{k+1}. The second layer encodes {[o_1^1; h], ..., [o_1^m; h]} into the final state s_2 = [\overrightarrow{s_2}; \overleftarrow{s_2}] and outputs O_2 = {o_2^1, ..., o_2^m}, which are used in the following intent detection layer and slot tagger layer, respectively:

P^{i} = \mathrm{softmax}(U s_2),  P^{s}_{t} = \mathrm{softmax}(V o_2^t)    (8)

where U and V are the weight matrices of the output layers and t is the index of each word in utterance x_{k+1}.

Dialogue Logistic Inference: As described above, the memory mechanism holds the key to contextual SLU. However, context memory learned only with the SLU objective is under-exploited. Thus, we design a dialogue logistic inference (DLI) task that can consolidate the context memory by sharing the encoding and retrieval components with SLU. DLI is introduced below.

Given a dialogue session X = {x_1, x_2, ..., x_n}, where x_i is the i-th sentence in the conversation, we can shuffle X into a randomly ordered set X'. It is not hard for humans to restore X' to X by determining which is the first sentence, then the second, and so on. This is the basic idea of DLI: choosing the right response given a context and all candidates. For each integer j in the range k+1 to n, training data for DLI can be labelled automatically by:

P(x_j | x_1, ..., x_k) = 1 if j = k+1, and 0 if j ≠ k+1    (9)

where k+1 is the index of the current utterance. In this work, we calculate the above probability with a 2-dimensional softmax layer:

P(x_j | x_1, ..., x_k) = \mathrm{softmax}(W_d h)    (10)

where W_d is a weight matrix for dimension transformation.

Table 1: Detailed information of the KVRET and KVRET* datasets, including train/dev/test size and average turns per conversation.
Datasets | Train | Dev | Test | Avg. turns
KVRET    | 2425  | 302 | 304  | 5.25
KVRET*   | 1830  | 224 | 226  | 6.88

Joint Optimization: As depicted in Figure 2, we train DLI and SLU jointly in order to benefit the memory encoder and memory retrieval components. The loss functions of SLU and DLI are as follows:

L_{SLU} = \log p(y^{I} | x_1, ..., x_{k+1}) + \sum_t \log p(y^{S}_{t} | x_1, ..., x_{k+1})    (11)
L_{DLI} = \sum_{x_j} \log p(y^{D} | x_j, x_1, ..., x_k)    (12)

where x_j is a candidate for the current response, and y^{I}, y^{S}_{t} and y^{D} are the training targets of intent, slot and DLI, respectively. Finally, the overall multi-task loss function is formulated as

L = (1 − λ) L_{SLU} + λ L_{DLI}    (13)

where λ is a hyperparameter.

3 Experiments
In this section, we first introduce the datasets we used, then present our experimental setup and results on these datasets.

3.1 Datasets
KVRET (Eric and Manning, 2017) is a multi-turn task-oriented dialogue dataset for an in-car assistant. This dataset was collected with the Wizard-of-Oz scheme (Wen et al., 2017) and consists of 3,031 multi-turn dialogues in three distinct domains, where each domain has only one intent: calendar scheduling, weather information retrieval, and point-of-interest navigation. However, all dialogue sessions of KVRET are single-domain. Following Bapna et al. (2017), we further construct a multi-domain dataset KVRET* by randomly selecting two dialogue sessions with different domains from KVRET and recombining them into one conversation. The recombination probability is set to 50%.
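Returning to the DLI head of Eq. (10) and the joint objective of Eq. (13), a minimal sketch of how the two can be wired together is given below. The module and argument names are our own, and the SLU loss is assumed to be computed elsewhere; this is not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DLIHead(nn.Module):
    """Sketch of Eq. (10): a 2-way softmax over the knowledge vector h that
    scores whether candidate x_j is the true next utterance of the context."""

    def __init__(self, d_hidden):
        super().__init__()
        self.w_d = nn.Linear(d_hidden, 2)   # plays the role of W_d

    def forward(self, h):
        return F.log_softmax(self.w_d(h), dim=-1)

def joint_loss(slu_loss, dli_log_probs, dli_targets, lam=0.3):
    """Eq. (13): L = (1 - lambda) * L_SLU + lambda * L_DLI.
    dli_targets holds 1 for the true next utterance (j = k + 1), 0 otherwise."""
    dli_loss = F.nll_loss(dli_log_probs, dli_targets)
    return (1.0 - lam) * slu_loss + lam * dli_loss
```

With lam = 0.3 this matches the λ value used in the experiments below; the knowledge vector h fed to DLIHead is the same one produced by the shared memory retrieval component.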
Detailed information about these two datasets is shown in Table 1.

Table 2: SLU results on the original KVRET and the multi-domain KVRET*, including the accuracy of intent detection and the average precision, recall and F1 score of slot filling.
Models | DLI | KVRET Slot (P / R / F1)     | KVRET Intent Acc. | KVRET* Slot (P / R / F1)    | KVRET* Intent Acc.
NoMem  | No  | 54.8 / 80.0 / 56.7          | 93.4              | 48.9 / 81.0 / 54.7          | 93.8
MemNet | No  | 75.8 / 81.1 / 75.8          | 93.9              | 73.1 / 81.8 / 74.5          | 92.8
MemNet | Yes | 76.0 / 82.3 / 77.4 (+1.6)   | 93.9 (+0)         | 75.8 / 81.3 / 76.3 (+1.8)   | 93.8 (+1.0)
SDEN   | No  | 70.5 / 80.9 / 70.1          | 93.6              | 56.9 / 81.3 / 59.4          | 93.0
SDEN   | Yes | 64.9 / 80.9 / 70.8 (+0.7)   | 93.8 (+0.2)       | 56.5 / 81.4 / 60.2 (+0.8)   | 93.5 (+0.5)
SDEN†  | No  | 71.9 / 82.2 / 74.0          | 93.7              | 72.7 / 80.8 / 74.9          | 93.2
SDEN†  | Yes | 75.2 / 81.4 / 76.6 (+2.6)   | 94.3 (+0.6)       | 78.0 / 81.4 / 78.3 (+3.4)   | 93.2 (+0)

[Figure 3: (a) Validation loss and slot F1 score of SDEN† during training. (b) Slot F1 score and intent accuracy of SDEN† with different values of λ.]

3.2 Experimental Setup
We conduct extensive experiments on intent detection and slot filling with the datasets described above. Domain classification is skipped because intents and domains are the same for KVRET. For model training, our batch size is 64, and we train all models with the Adam optimizer with default parameters (Kingma and Ba, 2014). For each model, we conduct training for up to 30 epochs with five epochs' early stopping on the validation loss. The word embedding size is 100, and the hidden size of all RNN layers is 64. λ is set to 0.3. The dropout rate is set to 0.3 to avoid over-fitting.

3.3 Results
The following methods are investigated, and their results are shown in Table 2:
NoMem: a single-turn SLU model without a memory mechanism.
MemNet: the model described in Chen et al. (2016), with attention-based memory retrieval.
SDEN: the model described in Bapna et al. (2017), with sequential-encoder-based memory retrieval.
SDEN†: similar to SDEN, but with the usage of h modified as in Eq. (6).

As we can see from Table 2, all contextual SLU models with a memory mechanism can benefit from our dialogue-logistic-dependent multi-task framework, especially on the slot filling task. We also note that the improvement on intent detection is trivial, as single-turn information alone already trains satisfactory intent classifiers, according to the results of NoMem in Table 2. Thus, we mainly analyze DLI's impact on the slot filling task, and the primary metric is the F1 score.

In Table 2, the poorest contextual model is SDEN, as its usage of the vector h is too weak: it simply initializes the BiLSTM tagger's hidden state with h, while the other models concatenate h with the BiLSTM's input at each time step. The more the contextual model depends on h, the more obvious the improvement from the DLI task is. Comparing the performance of MemNet with SDEN† on these two datasets, we find that our SDEN† is stronger than MemNet as the dialogue length increases. Finally, we can see that the improvements on KVRET* are higher than those on KVRET. This is because retrieving context knowledge from long-distance memory is challenging, and our proposed DLI can help to consolidate the context memory and improve memory retrieval ability significantly in such a situation.

We further analyze the training process of SDEN† on KVRET* to figure out what happens to our model with DLI training, which is shown in Figure 3(a).
We can see that the validation loss of SDEN† + DLI falls quickly and its slot F1 score is relatively higher than the model without DLI training, indicating the potential of our proposed method. To present the influence of hyper-parameter λ, we show SLU results with λ ranging from 0.1 to 0.9 in Figure 3(b). In this figure, we find that the improvements of our proposed method are relatively steady when λ is less than 0.8, and 0.3 is the best one. When λ is higher than 0.8, our model tends to pay much attention to the DLI task, overlook detail information within sentences, leading the SLU model to perform better on the intent detection but failing in slot filling. 4 Conclusions In this work, we propose a novel dialogue logistic inference task for contextual SLU, with which memory encoding and retrieval components can be consolidated and further enhances the SLU model through multi-task learning. This DLI task needs no extra labeled data and consumes no extra inference time. Experiments on two datasets show that various contextual SLU model can benefit from our proposed method and improvements are quite impressive, especially on the slot filling task. Also, DLI is robust to different loss weight during multi-task training. In future work, we would like to explore more memory consolidation approaches for SLU and other memory related tasks. Acknowledgements The research work descried in this paper has been supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002103 and the Natural Science Foundation of China under Grant No. U1836221. References Alan D Baddeley. 1976. The psychology of memory. New York. He Bai, Yu Zhou, Jiajun Zhang, Liang Zhao, Mei-Yuh Hwang, and Chengqing Zong. 2018. Source critical reinforcement learning for transferring spoken language understanding to a new language. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3597–3607. Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2017. Sequential Dialogue Context Modeling for Spoken Language Understanding. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL). A. Bhargava, A. Celikyilmaz, D. Hakkani-Tur, and R. Sarikaya. 2013. Easy contextual intent prediction and slot detection. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8337–8341. IEEE. Yun-Nung Chen, Asli Celikyilmaz, and Dilek HakkaniTur. 2018. Deep learning for dialogue systems. In Proceedings of the 27th International Conference on Computational Linguistics: Tutorial Abstracts (COLING), pages 25–31. Association for Computational Linguistics. Yun-Nung Chen, Dilek Hakkani-Tr, Gokhan Tur, Jianfeng Gao, and Li Deng. 2016. End-to-End Memory Networks with Knowledge Carryover for MultiTurn Spoken Language Understanding. In Interspeech, pages 3245–3249. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Deep Learning and Representation Learning Workshop. Mihail Eric and Christopher D. Manning. 2017. KeyValue Retrieval Networks for Task-Oriented Dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL). Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-Gated Modeling for Joint Slot Filling and Intent Prediction. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), pages 753–757, New Orleans, Louisiana. Patrick Haffner, Gokhan Tur, and Jerry H Wright. 2003. Optimizing svms for complex call classification. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages I–I. IEEE. 5453 Dilek Hakkani-T¨ur, G¨okhan T¨ur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715–719. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Computer Science. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. arXiv preprint arXiv:1803.02893. Christian Raymond and Giuseppe Riccardi. 2007. Generative and discriminative algorithms for spoken language understanding. In Eighth Annual Conference of the International Speech Communication Association. Ruhi Sarikaya, Geoffrey E Hinton, and Bhuvana Ramabhadran. 2011. Deep belief nets for natural language call-routing. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 5680–5683. IEEE. Robert J Sternberg and Karin Sternberg. 2016. Cognitive psychology. Nelson Education. Gokhan Tur and Renato De Mori. 2011. Spoken language understanding: Systems for extracting semantic information from speech. John Wiley & Sons. Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A Networkbased End-to-End Trainable Task-oriented Dialogue System. In EACL, pages 1–12. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 78–83. IEEE. Puyang Xu and Ruhi Sarikaya. 2014. Contextual domain classification in spoken language understanding systems using recurrent neural network. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 136–140, Florence, Italy. IEEE. Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 189–194. IEEE. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In IJCAI, pages 2993–2999.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5454–5459 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics 5454

Personalizing Dialogue Agents via Meta-Learning
Andrea Madotto†, Zhaojiang Lin†, Chien-Sheng Wu, Pascale Fung
Center for Artificial Intelligence Research (CAiRE)
Department of Electronic and Computer Engineering
The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
[amadotto,zlinao,cwuak,pascale]@ust.hk
† These two authors contributed equally.

Abstract
Existing personalized dialogue models use human-designed persona descriptions to improve dialogue consistency. Collecting such descriptions from existing dialogues is expensive and requires hand-crafted feature designs. In this paper, we propose to extend Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to personalized dialogue learning without using any persona descriptions. Our model learns to quickly adapt to new personas by leveraging only a few dialogue samples collected from the same user, which is fundamentally different from conditioning the response on the persona descriptions. Empirical results on the Persona-chat dataset (Zhang et al., 2018) indicate that our solution outperforms non-meta-learning baselines using automatic evaluation metrics, and in terms of human-evaluated fluency and consistency.

1 Introduction
There is a growing interest in learning personalized chit-chat dialogue agents for making chatbots more consistent. Recently, a multi-turn conversational dataset called Persona-chat (Zhang et al., 2018) has been released, where two speakers are paired and a persona description (4-5 sentences) is randomly assigned to each of them. For example, "I am an old man" and "I like to play football" are sentences from one of the possible persona descriptions provided to a speaker. By conditioning the response generation on the persona descriptions, a chit-chat model is able to produce a more persona-consistent dialogue (Zhang et al., 2018).

[Figure 1: The difference between fine-tuning from a) joint training on all personas and b) meta-learning over personas. The solid line represents the optimization path of the initial parameters and the dashed line the fine-tuning path. Meta-learned initial parameters can adapt faster to a new persona.]

However, it is difficult to capture a persona with just a few sentences, and collecting a non-synthetic set of persona descriptions from real human-human conversations, e.g., Reddit, is challenging as well since it requires hand-crafted feature designs (Mazare et al., 2018). In light of this, we propose to leverage a set of dialogues produced by the same persona directly, instead of using its persona descriptions, to generate a more consistent response. We consider learning different personas as different tasks via meta-learning algorithms, which is fundamentally different from optimizing the model to represent all the personas. A high-level intuition of the difference between these two approaches is shown in Figure 1. We aim to learn a persona-independent model that is able to quickly adapt to a new persona given its dialogues. We formulate this task as a few-shot learning problem, where K dialogues are used for training and the remaining ones for testing. Hence, we expect to learn initial parameters of a dialogue model that can quickly adapt to the response style of a certain persona using only a few dialogues.
The main contribution of this paper is to cast personalized dialogue learning as a meta-learning problem, which allows our model to generate personalized responses by efficiently leveraging only a few dialogue samples instead of human-designed persona descriptions. Empirical results show that our solution outperforms joint training in terms of human-evaluated fluency and consistency.

2 Personalized Dialogue Learning
2.1 Persona-conditioned dialogue
In the Persona-chat dataset (Zhang et al., 2018), a dialogue is defined as a set of utterances U = {u_1, ..., u_n} and a persona description is defined as a set of sentences P = {p_1, ..., p_m}. A personalized dialogue model f_θ is trained to produce a response Y = u_t conditioned on the previous utterances X = {u_1, ..., u_{t-1}} and the persona sentences P:

f_θ(Y | X, P; θ) = p(u_t | u_{1:t-1}, p_{1:m}; θ)    (1)

2.2 Persona-agnostic dialogue
Instead of conditioning our response on the persona sentences, we first adapt θ to the set of dialogues produced by a persona P and then use only the dialogue history to condition our response. Eq. (1) becomes:

f_θ(Y | X; θ) = p(u_t | u_{1:t-1}; θ)    (2)

Therefore, we define the set of dialogues of a persona P as D_p = {U_1, ..., U_k}. Conceptually, a model f_θ is expected to generate personalized responses after being trained with a few dialogue examples from D_p. The main idea of our work is to use Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to learn an initial set of parameters that can quickly learn a persona from a few dialogue samples. We refer to the proposed meta-learning method for persona dialogues as Persona-Agnostic Meta-Learning (PAML).

Persona-agnostic meta-learning (PAML): We define the persona meta-dataset as D = {D_{p_1}, ..., D_{p_z}}, where z is the number of personas. Before training, D is split into D_train, D_valid and D_test. For each training epoch, we uniformly sample a batch of personas D_{p_i} from D_train; then from each persona in D_{p_i} we sample a set of dialogues as training data D^{train}_{p_i} and another set of dialogues as validation data D^{valid}_{p_i}. After t iterations of training on D^{train}_{p_i}, the dialogue model f_θ, parameterized by θ, is updated to θ'_{p_i} by standard gradient descent:

θ'_{p_i} = θ − α ∇_θ L_{D^{train}_{p_i}}(f_θ)    (3)

where α is the learning rate of the inner optimization and L_{D^{train}_{p_i}} the training loss. Specifically, the cross-entropy loss is used for training the response generation:

L_{D_{p_i}}(f_θ) = − \sum_{D_{p_i}} \log p(u_t | u_{1:t-1}; θ)    (4)

The meta-learning model is then trained to maximize the performance of the adapted model f_{θ'_{p_i}} on the unseen dialogues in D^{valid}_{p_i}. Following Finn et al. (2017), we define the meta-objective as:

\min_θ \sum_{D_{p_i} \sim D_train} L_{D^{valid}_{p_i}}(f_{θ'_{p_i}}) = \sum_{D_{p_i} \sim D_train} L_{D^{valid}_{p_i}}(f_{θ − α ∇_θ L_{D^{train}_{p_i}}(f_θ)})    (5)

where L_{D^{valid}_{p_i}}(f_{θ'_{p_i}}) is the loss evaluated on D^{valid}_{p_i}. To optimize Eq. (5), we again apply stochastic gradient descent on the meta-model parameters θ by computing the gradient of L_{D^{valid}_{p_i}}(f_{θ'_{p_i}}):

θ ← θ − β \sum_{D_{p_i} \sim D_train} ∇_θ L_{D^{valid}_{p_i}}(f_{θ'_{p_i}})    (6)

where β is the meta learning rate.

Algorithm 1: Persona-Agnostic Meta-Learning
Require: D_train; step size hyperparameters α, β
1: Randomly initialize θ
2: while not done do
3:   Sample a batch of personas D_{p_i} ∼ D_train
4:   for all D_{p_i} do
5:     (D^{train}_{p_i}, D^{valid}_{p_i}) ∼ D_{p_i}
6:     Evaluate ∇_θ L_{D^{train}_{p_i}}(f_θ) using D^{train}_{p_i}
7:     Compute adapted parameters with gradient descent: θ'_{p_i} = θ − α ∇_θ L_{D^{train}_{p_i}}(f_θ)
8:   end for
9:   θ ← θ − β \sum_{D_{p_i} \sim D_train} ∇_θ L_{D^{valid}_{p_i}}(f_{θ'_{p_i}})
10: end while
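To make Algorithm 1 and Eqs. (3)-(6) concrete, the sketch below implements one outer iteration in PyTorch under a first-order approximation (a single inner SGD step per persona, and the inner step is not kept in the autograd graph). The helper names `loss_fn` and `persona_batch` are our own assumptions rather than the authors' code.

```python
import copy
import torch

def paml_outer_step(model, loss_fn, persona_batch, meta_opt, alpha=0.01):
    """One outer iteration of Algorithm 1 (first-order approximation).

    `loss_fn(model, dialogues)` is assumed to return the cross-entropy of Eq. (4)
    on a set of dialogues; `persona_batch` is a list of (D_train_pi, D_valid_pi)
    pairs, one per sampled persona.  The exact second-order update of Eqs. (5)-(6)
    would instead keep the inner step inside the autograd graph, e.g. via
    torch.autograd.grad(..., create_graph=True).
    """
    meta_opt.zero_grad()
    for d_train, d_valid in persona_batch:
        # Inner update (Eq. 3): one SGD step on D_train_pi;
        # Algorithm 1 allows several such steps.
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=alpha)
        inner_opt.zero_grad()
        loss_fn(adapted, d_train).backward()
        inner_opt.step()

        # Outer gradient (Eq. 6, first-order): gradient of the validation loss
        # of the adapted model, accumulated onto the meta-parameters.
        val_loss = loss_fn(adapted, d_valid)
        grads = torch.autograd.grad(val_loss, list(adapted.parameters()))
        for p, g in zip(model.parameters(), grads):
            p.grad = g.detach() if p.grad is None else p.grad + g.detach()
    meta_opt.step()   # outer update with the meta learning rate beta

# meta_opt would be e.g. torch.optim.Adam(model.parameters(), lr=0.0003),
# matching the SGD-inner / Adam-outer choice in the implementation details below.
```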
This process requires second-order partial derivatives, which can be computed by any automatic differentiation library (e.g., PyTorch, TensorFlow). A summary of the training procedure is shown in Algorithm 1.

Table 1: Results of automatic and human evaluation. PAML vs. Dialogue+Persona shows that our approach can achieve good consistency by using a few dialogues instead of conditioning on the persona description; PAML vs. Dialogue+Fine-tuning shows the effectiveness of the meta-learning approach in personalizing the dialogue model. (PPL, BLEU and C are automatic metrics; Fluency and Consistency are human ratings.)
Model                | PPL   | BLEU | C     | Fluency | Consistency
Human                |       |      | 0.33  | 3.434   | 0.234
Dialogue+Persona     | 30.42 | 1.00 | 0.07  | 3.053   | 0.011
Dialogue             | 36.75 | 0.64 | -0.03 |         |
Dialogue+Fine-tuning | 32.96 | 0.90 | 0.00  | 3.103   | 0.038
PAML                 | 41.64 | 0.74 | 0.20  | 3.185   | 0.197

3 Experiment and Results
The experiments are conducted on Persona-chat (Zhang et al., 2018). To create the meta-sets D, we match the dialogues by their persona description separately for train, validation and test, following the same persona split as in Zhang et al. (2018). On average, each persona description has 8.3 unique dialogues. In the Appendix, we report the distribution of the number of dialogues per persona.

Experimental setting: In our experiments, we compare different training settings: (Dialogue) a model trained using the dialogue history, as in Eq. (2); (PAML) a meta-trained model as in Eq. (5), where we test each set D_{p_i} ∈ D_test by selecting one dialogue and training with all the others. To elaborate, suppose we are testing U_t ∈ D_{p_i}; then we first fine-tune using all the dialogues in D_{p_i} \ U_t, and then test on U_t. This process is repeated for all the dialogues in D_{p_i}. (Dialogue+Fine-tuning) we use the same testing procedure as for PAML, but on a model trained as Dialogue. We also report a trained model that assumes the persona description is available, which we refer to as (Dialogue+Persona).

Implementation details: We implemented f_θ using a standard Transformer architecture (Vaswani et al., 2017) with pre-trained GloVe embeddings (Pennington et al., 2014); the model and the pre-processing scripts are available at https://github.com/HLTCHKUST/PAML. For standard training, we used the Adam (Kingma and Ba, 2014) optimizer with a warm-up learning rate strategy and a batch size of 32. For meta-training, we instead used SGD for the inner loop and Adam for the outer loop, with learning rates α = 0.01 and β = 0.0003 respectively, and a batch size of 16 for both. In all the models we used beam search with beam size 5.

3.1 Evaluation metric
The objective of the evaluation is to verify whether PAML can produce a more consistent response with reference to the given dialogue and persona description (even though the latter is not seen). To do so, we employ both automatic and human evaluation.

Automatic: We report the perplexity and BLEU score (Papineni et al., 2002) of the generated sentences against the human-generated reference. Aside from these standard evaluation metrics, we also train a Natural Language Inference (NLI) model using the Dialog NLI (Sean et al., 2018) dataset, a recently proposed corpus based on the Persona-chat dataset, with NLI annotations between persona description sentences and dialogue utterances. We fine-tune a pre-trained BERT model (Devlin et al., 2018) using the DNLI corpus and achieve a test set accuracy of 88.43%, which is aligned with the best reported model, ESIM (Chen et al., 2017), in Sean et al. (2018) (88.20% accuracy). Then, we define a new evaluation metric for dialogue consistency as follows:

NLI(u, p_j) = 1 if u entails p_j; 0 if u is independent of p_j; −1 if u contradicts p_j
C(u) = \sum_{j}^{m} NLI(u, p_j)    (7)

where u is a generated utterance and p_j is one sentence in the persona description. Hence, a higher consistency score C means a more persona-consistent dialogue response.

Human: Since automatic evaluation performs poorly in this task (Liu et al., 2016), we also perform a human evaluation using crowd-sourced workers. We randomly selected 300 generated response examples from 10 unique personas and asked each worker to evaluate the fluency (1 to 5) and the consistency of the generated response with respect to the dialogue history and the respective persona description. We asked the workers to assign a score of 1, 0 or -1 for consistent, neutral, and contradictory, respectively; the full instruction set is available in the Appendix.
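To make the consistency metric of Eq. (7) concrete, the sketch below shows how C(u) can be computed once an NLI classifier is available. The `nli_predict` wrapper is hypothetical and merely stands in for the fine-tuned BERT NLI model described above.

```python
def consistency_score(utterance, persona_sentences, nli_predict):
    """C(u) from Eq. (7): +1 for each persona sentence the utterance entails,
    -1 for each it contradicts, 0 for each it is independent of.

    `nli_predict(premise, hypothesis)` is a hypothetical wrapper around the
    fine-tuned BERT NLI model, assumed to return one of the strings
    'entailment', 'neutral' or 'contradiction'."""
    value = {"entailment": 1, "neutral": 0, "contradiction": -1}
    return sum(value[nli_predict(utterance, p_j)] for p_j in persona_sentences)

# A corpus-level C score would presumably be obtained by averaging
# consistency_score over all generated responses.
```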
Then, we defined a new evaluation metric for dialogue consistency as follow: NLI(u, pj) = ( 1 if u entails pj 0 if u is independent to pj −1 if u contradicts pj C(u) = m X j NLI(u, pj) (7) where u is a generated utterance and the pj is one sentence in the persona description. Hence, having a higher consistency C score means having a more persona consistent dialogue response. Human Since automatic evaluation performs poorly in this task (Liu et al., 2016), we perform a human evaluation using crowd-sourced workers. We randomly selected 300 generated response examples from 10 unique personas and we asked 5457 each worker to evaluate fluency (1 to 5) and consistency of the generated response with respect to the dialogue history and the respective persona description. We asked the workers to assign a score of 1, 0 or -1 for consistent, neutral, and contradicts respectively, the full instruction set is available in the Appendix. 3.2 Results Table 1 shows both automatic and human evaluation results. PAML achieve consistently better results in term of dialogue consistency in both automatic and human evaluation. The latter also shows that all the experimental settings have comparable fluency scores, where instead perplexity and BLEU score are lower in PAML. This confirms that these measures are not correlated to human judgment (Liu et al., 2016). For completeness, we also show generated responses examples from PAML and baseline models in Appendix. On the other hand, the human evaluated consistency is aligned to the C score, which confirms the meaningfulness of the defined measure. This agrees with results of Sean et al. (2018), where the authors showed that by re-ranking the beam search hypothesis using the DNLI score (i.e. C score), they achieved a substantial improvement in dialogue consistency. Few-shot Learning We analyze the ability of our model to fast adapt to a certain persona in term of shots. We define shot as the number of dialogues used in Dtrain pi for fine-tuning a certain persona, e.g. 1-shot one dialogue, 3-shot three dialogue and so on. Figure 2 compares the k-shot consistency C results for k equal to 0, 1, 3, 5 and 10, both PAML and Dialogue+Finetuning. PAML can achieve a high consistency score just by using 3 dialogues, which is better than Persona+Dialogue. On the other hand, Dialogue+Fine-tuning cannot properly leverage the dialogues in Dpi, which proves the effectiveness of training with meta-learning. 4 Related Work Meta-Learning Meta-learning (Thrun and Pratt, 1998; Schmidhuber, 1987, 1992; Naik and Mammone, 1992; Bengio et al., 1992) is sub-field of machine learning with the aim of learning the learning algorithm itself. Recently, several meta-learning models has been proposed for solving few-shot image classification (Ravi 0 1 3 5 10 K-shot −0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 C K-shot vs C PAML Dialogue+Fine-tuning Human Dialogue+Persona Figure 2: k-shot results for different settings. Consistency of PAML grows linearly with respect to k. and Larochelle, 2016; Vinyals et al., 2016; Finn et al., 2017; Mishra et al., 2017; Santoro et al., 2016), optimization (Andrychowicz et al., 2016) and reinforcement learning (Finn et al., 2017). Meta-learning for NLP application is less common, and it has been applied in semantic parsing task (Huang et al., 2018), machine translation for low resource language (Gu et al., 2018), and for text classification (Yu et al., 2018). 
To the best of our knowledge, this is the first attempt in adapting meta-learning to personalized dialogue learning. Personalized Dialogue Li et al. (2016) was the first to propose a persona based dialogue models for improving response consistency. Zhang et al. (2018) introduced Persona-chat, which was further extended in ConvAI2 (2019). Several works improved on the initial baselines with various methodologies (Kulikov et al., 2018; Yavuz et al.; Hancock et al., 2019; Lucas et al., 2009; Joshi et al., 2017; Zemlyanskiy and Sha, 2018; Gao et al., 2018). However, all of these previous works conditioned their response on the persona description, instead of using the dialogues produced by the persona. 5 Conclusion In this paper, we present a novel meta-learning setting for personalizing dialogue agents without conditioning the model response to the persona description. This is especially useful since obtaining such persona description requires human effort. Moreover, we show that a dialogue agent trained with meta-learning achieves a more consistent dialogue by both of automatic measures and human evaluation. In future works, we plan to apply meta-learning to comment genera5458 tion (Lin et al., 2019) and task-oriented dialogues systems (Madotto et al., 2018; Wu et al., 2019, 2017, 2018; Reddy et al., 2018). 6 Acknowledgments This work has been funded by MRP/055/18 of the Innovation Technology Commission, of the Hong Kong University of Science and Technology. References Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. 2016. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989. Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. 1992. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pages 6–8. Univ. of Texas. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org. Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1371–1374. ACM. Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for lowresource neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622–3631. Association for Computational Linguistics. Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. 
Learning from dialogue after deployment: Feed yourself, chatbot! arXiv preprint arXiv:1901.05415. Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He. 2018. Natural language to structured query generation via metalearning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 732–738. Association for Computational Linguistics. Chaitanya K Joshi, Fei Mi, and Boi Faltings. 2017. Personalization in goal-oriented dialog. arXiv preprint arXiv:1706.07503. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 994–1003. Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2019. Learning comment generation by leveraging user-generated data. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7225–7229. IEEE. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Association for Computational Linguistics. JM Lucas, F Fern´andez, J Salazar, J Ferreiros, and R San Segundo. 2009. Managing speaker identity and user profiles in a spoken dialogue system. Procesamiento del lenguaje natural, (43). Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478. Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods 5459 in Natural Language Processing, pages 2775–2779. Association for Computational Linguistics. Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. 2017. A simple neural attentive metalearner. ICLR. Devang K Naik and RJ Mammone. 1992. Metaneural networks that learn by learning. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, volume 1, pages 437–442. IEEE. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Sachin Ravi and Hugo Larochelle. 2016. Optimization as a model for few-shot learning. Revanth Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2018. 
Multi-level memory for task oriented dialogs. arXiv preprint arXiv:1810.10647. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. Metalearning with memory-augmented neural networks. In International conference on machine learning, pages 1842–1850. Jurgen Schmidhuber. 1987. Evolutionary principles in self-referential learning. on learning now to learn: The meta-meta-meta...-hook. Diploma thesis, Technische Universitat Munchen, Germany, 14 May. J¨urgen Schmidhuber. 1992. Learning to control fastweight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139. Welleck Sean, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2018. Dialogue natural language inference. arXiv preprint arXiv:1811.00671. Sebastian Thrun and Lorien Pratt, editors. 1998. Learning to Learn. Kluwer Academic Publishers, Norwell, MA, USA. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638. Chien-Sheng Wu, Andrea Madotto, Genta Winata, and Pascale Fung. 2017. End-to-end recurrent entity network for entity-value independent goal-oriented dialog learning. In Dialog System Technology Challenges Workshop, DSTC6. Chien-Sheng Wu, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2018. End-to-end dynamic query memory network for entity-value independent task-oriented dialog. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6154–6158. IEEE. Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. In International Conference on Learning Representations. Semih Yavuz, Abhinav Rastogi, Guan-lin Chao, Dilek Hakkani-T¨ur, and Amazon Alexa AI. Deepcopy: Grounded response generation with hierarchical pointer networks. Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text classification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1206–1215. Association for Computational Linguistics. Yury Zemlyanskiy and Fei Sha. 2018. Aiming to know you better perhaps makes me a more engaging dialogue partner. CoNLL 2018, page 551. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213. Association for Computational Linguistics.
2019
542
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5460–5466 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5460 Reading Turn by Turn: Hierarchical Attention Architecture for Spoken Dialogue Comprehension Zhengyuan Liu, Nancy F. Chen Institute for Infocomm Research, A*STAR {liu zhengyuan, nfychen}@i2r.a-star.edu.sg Abstract Comprehending multi-turn spoken conversations is an emerging research area, presenting challenges different from reading comprehension of passages due to the interactive nature of information exchange from at least two speakers. Unlike passages, where sentences are often the default semantic modeling unit, in multi-turn conversations, a turn is a topically coherent unit embodied with immediately relevant context, making it a linguistically intuitive segment for computationally modeling verbal interactions. Therefore, in this work, we propose a hierarchical attention neural network architecture, combining turnlevel and word-level attention mechanisms, to improve spoken dialogue comprehension performance. Experiments are conducted on a multi-turn conversation dataset, where nurses inquire and discuss symptom information with patients. We empirically show that the proposed approach outperforms standard attention baselines, achieves more efficient learning outcomes, and is more robust to lengthy and out-of-distribution test samples. 1 Introduction Reading comprehension has attracted much interest in the past couple years, fueled by avid neural modeling investigations. Given a certain textual content, the goal is to answer a series of questions based on implicit semantic understanding. Previous work has focused on passages like Wikipedia (Rajpurkar et al., 2016) or news articles (Hermann et al., 2015). Recently, dialogue comprehension in the form of cloze tests and multi-choice questions has also started to spur research interest (Ma et al., 2018; Sun et al., 2019). Different from passages, human-to-human dialogues are a dynamic and interactive flow of information exchange, which are often informal, verbose and repetitive.1 This leads to lower information density and more topic diffusion, since the spoken content of a conversation is determined by two speakers, each with his/her own thought process and potentially distracting and parallel streams of thoughts. To address such challenges, we propose to utilize a hierarchical attention mechanism for dialogue comprehension, which has shown to be effective in various natural language processing tasks (Yang et al., 2016; Choi et al., 2017; Hsu et al., 2018). The hierarchical models successively capture contextual information at different levels of granularity, leveraging coarse-grained attention to reduce the potential distraction in finer-grained attention but at the same time exploit finer-grained attention to distill key information for downstream tasks more precisely and efficiently. While in document tasks sentences are the default semantic modeling unit at the coarse-grained level, utterances might be a more suitable counterpart in spoken dialogues, as dialogues often consist of incomplete sentences. However, a single utterance/sentence which usually implies information from one speaker is insufficient for grasping the full relevant context, as the interactive information from the interlocutor is often necessary. 
In multi-turn dialogues, each turn is one round of information exchange between speakers, thus making it a linguistically intuitive segment for modeling verbal communications. Thus, we postulate that for spoken dialogue comprehension, it is more effective to model conversations turn by turn using a multi-granularity design. In this work, we introduce a hierarchical neu1One needs to process information on the spot during conversations, hence a particular concept could take rounds of interactions to confirm the information is conveyed correctly before moving on to the next topic, while for passages the reader can process the information at his own pace. 5461 Figure 1: Turn-based hierarchical architecture for dialogue comprehension: tokens in purple are the indicators of dialogue turns, and their indices are used to select question-aware hidden states (Green) for turn-level attention calculation. The turn with higher attentive score (Yellow) contributes more in scoring word-level attentions (Red). ral attention architecture, integrating turn-level attention with word-level attention for multi-turn dialogue comprehension in a question-answering manner, where we evaluate performance on a corpus preserving linguistic features from real-world spoken conversation scenarios. In particular, we examine how our approach is able to address challenges from limited training data scenarios and from lengthy and out-of-distribution test samples. 2 Hierarchical Attention Architecture The proposed architecture of modeling multi-level attention for dialogue comprehension is shown in Figure 1. The model design is based on extractive question answering, and consists of the following layers: a sequence encoding layer, a questionaware modeling layer, a turn-level attention layer, a word-level attention layer, and an answer pointer layer. We elaborate on the details below. 2.1 Sequence Encoding Layer Given a t-length sequence of word embedding vectors S = {w0, w1, ...wt}, a bi-directional long short-term memory (Bi-LSTM) layer (Schuster and Paliwal, 1997) is used to encode S to a hidden representation, H = {h0, h1, ...ht} ∈Rt×d, where d is the hidden dimension. We obtain the content representation Hc by encoding the dialogue sequence and concatenating the forward and backward information: hc i = [LSTM Forward wc i ; LSTM Backward wc i ] (1) and extracting the last hidden state of question encoding as the question representation hq. 2.2 Question-Aware Modeling Layer We concatenate each step of Hc with the question hq as in aspect-modeling (Wang et al., 2016), then obtain the question-aware modeling H ′ via a BiLSTM layer. h ′ i = [LSTM Forward [hc i;hq] ; LSTM Backward [hc i;hq] ] (2) 2.3 Turn-Level Attention Layer We design the turn-level attention to score the dialogue turns explicitly, so the more salient turns will obtain higher scores, which is similar to (Hsu et al., 2018). However, instead of calculating the sentence-level attention using a separate recurrent component, we directly obtain the turn representations Hturn by collecting hidden states from H ′ with the turn-level segment position indices T turn = {tturn 0 , tturn 1 , ...tturn m }, where m is the turn number of the dialogue content. More specifically, in a two-party conversation, each continuous utterance span between the speakers will be labeled as in one turn segment, and tturn i+1 −tturn i is the length of the ith turn. 
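To make the turn-level gathering concrete, a minimal PyTorch-style sketch of the encoding and collection steps is shown below. The shared encoder, batch handling, and tensor sizes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of Sections 2.1-2.3: encode the dialogue and the question,
# build question-aware states, then collect one hidden state per turn at the
# turn segment indices.  Sizes and the shared encoder are assumptions.

d = 300
encoder = nn.LSTM(d, d, batch_first=True, bidirectional=True)
q_aware = nn.LSTM(4 * d, d, batch_first=True, bidirectional=True)

def encode_and_gather(word_emb, question_emb, turn_idx):
    """word_emb: (1, t, d); question_emb: (1, q, d); turn_idx: (m,) long tensor."""
    H_c, _ = encoder(word_emb)                              # (1, t, 2d) content states, Eq. (1)
    _, (h_n, _) = encoder(question_emb)
    h_q = torch.cat([h_n[0], h_n[1]], dim=-1)               # (1, 2d) question representation
    h_q = h_q.unsqueeze(1).expand(-1, H_c.size(1), -1)      # tile over the dialogue length
    H_prime, _ = q_aware(torch.cat([H_c, h_q], dim=-1))     # (1, t, 2d) question-aware states, Eq. (2)
    H_turn = H_prime.index_select(1, turn_idx)              # (1, m, 2d) turn representations
    return H_prime, H_turn
```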
Then the turn-level attentive score is calculated via a dense layer and softmax normalization: Aturn = softmax(WαHturn + bα) (3) 2.4 Word-Level Attention Layer In our hierarchical architecture, to mitigate adverse effects of spurious word-level attention from words in less attended turns, we utilize turn-level salient scores to modulate word-level attentions. Thus, we broadcast each aturn i in Aturn with its turn length to obtain A ′ in dialogue length, and 5462 then multiply H ′ with A ′ to obtain the contextual sequence C ′. Then the word-level attention Aword is calculated on C ′, and multiplied with H ′ to obtain the contextual sequence C ′′. Aword = softmax(Wβ(H ′∗A ′) + bβ) (4) 2.5 Answer Pointer Layer Contextual sequences C ′, C ′′ and question hq are concatenated together and fed to a LSTM modeling layer. Then a dense layer with softmax normalization is applied for answer span prediction (Wang and Jiang, 2016). Ms/e = LSTM[C′;C′′;hq] (5) Ps/e = softmax(WγMs/e + bγ) (6) where each ps/pe indicates the probability of being the start/end position of the answer span. 2.6 Loss function Cross-entropy loss function is used as the metric between the predicted label and the ground-truth distribution. The total loss Ltotal contains the loss from answer span (Wang and Jiang, 2016) and from turn-level attentive scoring similar to (Hsu et al., 2018), with a weight λ ∈[0, 1]. Ltotal = Lspan + λLturn attn (7) 3 Experiments 3.1 Corpus & Data Processing Dialogue Dataset: We evaluated the proposed approach on a spoken dialogue comprehension dataset, consisting of nurse-to-patient symptom monitoring conversations. This corpus was inspired by real dialogues in the clinical setting where nurses inquire about symptoms of patients (Liu et al., 2019). Linguistic structures at the semantic, syntactic, discourse and pragmatic levels were abstracted from these conversations to construct templates for simulating multi-turn dialogues. The informal styles of expressions, including incomplete sentences, incorrect grammar and diffuse flow of topics were preserved. A team of linguistically trained personnel refined, substantiated, and corrected the automatically simulated dialogues by enriching verbal expressions through different English speaking populations in Asia, Europe and the U.S., validating Figure 2: Examples of segmented turns in our corpus. The default segmented turn is an adjacency pair of utterances from two speakers (Yellow). To ensure a turn spans across semantically congruent utterances, neighboring utterances could be merged according to a set of rules derived from spoken features, like n-gram repetition (Green), back-channeling (Blue), self-pause (Red) and interlocutor interruption (Gray). logical correctness through checking if the conversations were natural, reasonable and not disobeying common sense, and verifying the clinical content by consulting certified and registered nurses. These conversations cover 9 topics/symptoms (e.g. headache, cough). For each conversation, the average word number is 255 and the average turn number is 15.5. Turn Segmentation: In a smooth conversation, one turn is an adjacency pair of two utterances from two speakers (Sacks et al., 1974). However, in real scenarios, the conversation flow is often disrupted by verbal distractions such as interlocutor interruption, back-channeling, self-pause and repetition (Schlangen, 2006). 
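Returning briefly to the architecture of Section 2, the turn-to-word modulation and the joint objective of Equations (3)-(7) can be sketched as follows; parameter names, shapes, and the scoring layers are assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of Eqs. (3)-(7): score turns, broadcast each turn score over
# its words, modulate the word-level attention, and combine the two losses.
# Dimensions and parameter names are assumptions.

d2 = 600                                   # 2 * hidden size of the Bi-LSTM
turn_scorer = nn.Linear(d2, 1)             # W_alpha, b_alpha in Eq. (3)
word_scorer = nn.Linear(d2, 1)             # W_beta,  b_beta  in Eq. (4)

def hierarchical_attention(H_prime, H_turn, turn_lengths):
    """H_prime: (1, t, d2); H_turn: (1, m, d2); turn_lengths: (m,) long tensor summing to t."""
    A_turn = torch.softmax(turn_scorer(H_turn).squeeze(-1), dim=-1)    # (1, m)
    A_bcast = torch.repeat_interleave(A_turn, turn_lengths, dim=1)     # (1, t) turn scores per word
    C_prime = H_prime * A_bcast.unsqueeze(-1)                          # turn-modulated states
    A_word = torch.softmax(word_scorer(C_prime).squeeze(-1), dim=-1)   # (1, t)
    C_dprime = H_prime * A_word.unsqueeze(-1)
    return A_turn, C_prime, C_dprime

def total_loss(span_loss, turn_attn_loss, lam=1.0):
    return span_loss + lam * turn_attn_loss                            # Eq. (7)
```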
We thus annotated these verbal features from transcripts of the realworld dialogues and integrated them in the templates, which are used to generate the simulated dialogue data. We subsequently merged the adjacent utterances from speakers considering the features and the intents to form turns (see Figure 2). This procedure ensures semantic congruence of each turn. Then the segment indices of turns were labeled for turn-level context collection. Annotations for Question Answering: For the 5463 comprehension task, questions were raised to query different attributes of a specified symptom; e.g., How frequently did you experience headaches? Answer spans in the dialogues were labeled with start and end indices, and turns containing the answer span were annotated for turnlevel attention training. 3.2 Baseline Models We implemented the proposed turn-based hierarchical attention (HA) model, and compared it with several baselines: Pointer LSTM: We implemented a Pointer network for QA (Vinyals et al., 2015). The content and question embedding are concatenated and fed to a two-layer Bi-LSTM, then the answer span is predicted as in Section 2.5. Bi-DAF: We implemented the Bi-Directional Attention Flow network (Seo et al., 2017) as an established baseline, which fuses question-aware and context-aware attention. R-Net: We implemented R-Net (Wang et al., 2017), another established baseline, which introduces self-attention to implicitly model multilevel contextual information. Utterance-based HA: To evaluate the effectiveness of turn-level modeling, we implemented an utterance-based model as the control, by treating every utterance as a single segment. 3.3 Training Configuration Pre-trained word embeddings from Glove (Pennington et al., 2014) were utilized and fixed during training. Out-of-vocabulary words were replaced with the [unk] token. The hidden size and embedding dimension were set to 300. Adam optimizer (Kingma and Ba, 2015) was used with batch size of 64 and learning rate of 0.001. For the modeling layers, dropout rate was set to 0.2 (Srivastava et al., 2014). The weight λ in the loss function was set to 1.0. During training, the validationbased early stop strategy was applied. During prediction, we selected answer spans using the maximum product of ps and pe, with a constraint such that 0 ≤e −s ≤10. 3.4 Evaluation: Comparison with Baselines Evaluation was conducted on the dialogue corpus described in Section 3.1, where the training, validation and test sets were 40k, 3k and 3k samples of multi-turn dialogues, respectively. We adopted Model EM Score F1 Score Pointer LSTM 77.85 82.73 Bi-DAF 87.24 88.67 R-Net 88.93 90.41 Utterance-based HA 88.59 90.12 Turn-based HA (Proposed) 91.07 92.39 Table 1: Comparison with baseline models. Figure 3: Results on different sizes of training data. Exact Match (EM) and F1 score in SQuAD as metrics (Rajpurkar et al., 2016). Results in Table 1 show that while the utterance-based HA network is on par with established baselines, the proposed turn-based HA model obtains more gains, achieving the best EM and F1 scores. 3.5 Evaluation in Low-Resource Scenarios Limited amount of training data is a major pain point for dialogue-based tasks, as it is timeconsuming and labor-intensive to collect and annotate natural dialogues at a large-scale. We expect the hierarchical structure to result in more efficient learning capabilities. We conducted experiments on a range of training sizes (from 3k to 40k) with a fixed-size test set (3k samples). 
As shown in Figure 3, the turn-based HA model outperforms all other models significantly when the training set is smaller than 20k. 3.6 Lengthy Sample Evaluation Spoken conversations are often verbose with low information density scattered with topics not central to the main dialogue theme, especially since speakers chit-chat and get distracted during taskoriented discussions. To evaluate such scenarios, we adopted model-independent ADDSENT (Jia and Liang, 2017), where we randomly extracted sentences from SQuAD and inserted them before or after topically coherent segments. The average length of the augmented test set (3k samples), increased from 255 to 900. As shown in Table 2, the proposed turn-based model compares favorably when modeling lengthy dialogues. 5464 Model EM Score F1 Score Pointer LSTM 67.11 (-10.74) 72.67 (-10.06) Bi-DAF 77.45 (-9.79) 79.55 (-9.12) R-Net 79.96 (-8.97) 82.26 (-8.15) Utterance-based HA 78.92 (-9.67) 80.72 (-9.40) Turn-based HA 85.25 (-5.82) 87.18 (-5.21) Table 2: Lengthy sample evaluation. Bracketed values denote absolute decrease of model performance in Section 3.6. Model EM Score F1 Score Pointer LSTM 60.99 (-16.86) 68.94 (-13.79) Bi-DAF 74.58 (-12.66) 76.42 (-12.25) R-Net 78.73 (-10.20) 80.38 (-10.03) Utterance-based HA 77.84 (-10.75) 79.77 (-10.35) Turn-based HA 82.50 (-8.57) 84.08 (-8.31) Table 3: Out-of-distribution evaluation. Bracketed values denote absolute decrease of model performance in Section 3.7. 3.7 Out-of-Distribution Evaluation Another evaluation was performed on an augmented set of dialogue samples, by adding three out-of-distribution symptom entities (bleeding, cold/flu, and sweating) to the corresponding conversations (3k samples). This was conducted on the well-trained models in Section 3.4. As shown in Table 3, the proposed turn-based HA model is the most robust in answering questions related to unseen symptoms/topics while till performing well on in-domain symptoms, thus showing potential generalization capabilities. In summary, our overall experimental results demonstrate that the proposed hierarchical method achieves higher learning efficiency with robust performance. Moreover, the turn-based model significantly outperforms the utterance-based one, empirically verifying that it is appropriate to use turns as the basic semantic unit in coarse-grained attention for modeling dialogues. 4 Related Work Machine comprehension of passages has achieved rapid progress lately, benefiting from large-scale datasets (Rajpurkar et al., 2016; Kocisky et al., 2018), semantic vector representations (Pennington et al., 2014; Peters et al., 2018; Devlin et al., 2019), and end-to-end neural modeling (Wang et al., 2017; Hu et al., 2018). The attention mechanism enables neural models to more flexibly focus on salient contextual segments (Luong et al., 2015; Vaswani et al., 2017), and is further improved by hierarchical designs for document processing tasks (Yang et al., 2016; Choi et al., 2017). Multi-level attention could be fused in hidden representations (Wang et al., 2017) or calculated explicitly (Hsu et al., 2018). There is an established body of work studying how humans take turns speaking during conversations to better understand when and how to generate more natural dialogue responses (Sacks et al., 1974; Wilson et al., 1984; Schlangen, 2006). Utterance-level attention has also been applied to context modeling for different dialogue tasks such as dialogue generation (Serban et al., 2016) and state tracking (Dhingra et al., 2017). 
Recently, there is emerging interest in machine comprehension of dialogue content (Ma et al., 2018; Sun et al., 2019). To the best of our knowledge, our work is the first in exploiting turn-level attention in neural dialogue comprehension. 5 Conclusion We proposed to comprehend dialogues by exploiting a hierarchical neural architecture through incorporating explicit turn-level attention scoring to complement word-level mechanisms. We conducted experiments on a corpus embodying verbal distractors inspired from real-world spoken dialogues that interrupt the coherent flow of conversation topics. Our model compares favorably to established baselines, performs better when there is limited training data, and is capable of addressing challenges from low information density of spoken dialogues and out-of-distribution samples. Acknowledgements This research was supported by funding for Digital Health and Deep Learning from the Institute for Infocomm Research (I2R) and the Science and Engineering Research Council (Project Nos. A1718g0045 and A1818g0044), A*STAR, Singapore. This work was conducted using resources and infrastructure provided by the Human Language Technology unit at I2R. We thank A. T. Aw, R. E. Banchs, L. F. D’Haro, P. Krishnaswamy, H. Lim, F. A. Suhaimi and S. Ramasamy at I2R, and W. L. Chow, A. Ng, H. C. Oh, S. Ong and S. C. Tong at Changi General Hospital for insightful discussions. We also thank the anonymous reviewers for their precious feedback to help improve and extend this piece of work. 5465 References Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 209–220. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–495, Vancouver, Canada. Association for Computational Linguistics. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 1693–1701, Cambridge, MA, USA. MIT Press. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 132–141. Association for Computational Linguistics. Minghao Hu, Furu Wei, Yuxing Peng, Zhen Huang, Nan Yang, and Ming Zhou. 2018. Read + verify: Machine reading comprehension with unanswerable questions. CoRR, abs/1808.05759. Robin Jia and Percy Liang. 2017. 
Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations. Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328. Zhengyuan Liu, Hazel Lim, Nur Farah Ain Binte Suhaimi, Shao Chuen Tong, Sharon Ong, Angela Ng, Sheldon Lee, Michael R. Macdonald, Savitha Ramasamy, Pavitra Krishnaswamy, Wai Leng Chow, and Nancy F. Chen. 2019. Fast prototyping a dialogue comprehension system for nurse-patient conversations on symptom monitoring. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Kaixin Ma, Tomasz Jurczyk, and Jinho D. Choi. 2018. Challenging reading comprehension on daily conversation: Passage completion on multiparty dialog. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2039–2048. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. Language, 50(4):696–735. David Schlangen. 2006. From reaction to prediction: Experiments with computational models of turntaking. In Ninth International Conference on Spoken Language Processing. 5466 Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of the 5th International Conference for Learning Representations. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. 
Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 3776–3783. AAAI Press. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension. CoRR, abs/1902.00164. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. CoRR, abs/1608.07905. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189–198. Association for Computational Linguistics. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606–615, Austin, Texas. Association for Computational Linguistics. Thomas P Wilson, John M Wiemann, and Don H Zimmerman. 1984. Models of turn taking in conversational interaction. Journal of Language and Social Psychology, 3(3):159–183. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Association for Computational Linguistics.
2019
543
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5467–5471 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5467 A Novel Bi-directional Interrelated Model for Joint Intent Detection and Slot Filling Haihong E∗, Peiqing Niu∗, Zhongfu Chen∗, Meina Song Beijing University of Posts and Telecommunications, Beijing, China {ehaihong,niupeiqing,chenzhongfu,mnsong}@bupt.edu.cn Abstract A spoken language understanding (SLU) system includes two main tasks, slot filling (SF) and intent detection (ID). The joint model for the two tasks is becoming a tendency in SLU. But the bi-directional interrelated connections between the intent and slots are not established in the existing joint models. In this paper, we propose a novel bi-directional interrelated model for joint intent detection and slot filling. We introduce an SF-ID network to establish direct connections for the two tasks to help them promote each other mutually. Besides, we design an entirely new iteration mechanism inside the SF-ID network to enhance the bi-directional interrelated connections. The experimental results show that the relative improvement in the sentence-level semantic frame accuracy of our model is 3.79% and 5.42% on ATIS and Snips datasets, respectively, compared to the state-of-the-art model. 1 Introduction Spoken language understanding plays an important role in spoken dialogue system. SLU aims at extracting the semantics from user utterances. Concretely, it identifies the intent and captures semantic constituents. These two tasks are known as intent detection and slot filling (Tur and De Mori, 2011), respectively. For instance, the sentence ‘what flights leave from phoenix’ sampled from the ATIS corpus is shown in Table 1. It can be seen that each word in the sentence corresponds to one slot label, and a specific intent is assigned for the whole sentence. Sentence what flights leave from phoenix Slots O O O O B-fromloc Intent atis flight Table 1: An example sentence from the ATIS corpus ∗Authors contributed equally. Traditional pipeline approaches manage the two mentioned tasks separately. Intent detection is seen as a semantic classification problem to predict the intent label. General approaches such as support vector machine (SVM) (Haffner et al., 2003) and recurrent neural network (RNN) (Lai et al., 2015) can be applied. Slot filling is regarded as a sequence labeling task. Popular approaches include conditional random field (CRF) (Raymond and Riccardi, 2007), long short-term memory (LSTM) networks (Yao et al., 2014). Considering the unsatisfactory performance of pipeline approaches caused by error propagation, the tendency is to develop a joint model (Chen et al., 2016a; Zhang and Wang, 2016) for intent detection and slot filling tasks. Liu and Lane (2016) proposed an attention-based RNN model. However, it just applied a joint loss function to link the two tasks implicitly. Hakkani-T¨ur et al. (2016) introduced a RNN-LSTM model where the explicit relationships between the slots and intent are not established. Goo et al. (2018) proposed a slotgated model which applies the intent information to slot filling task and achieved superior performance. But the slot information is not used in intent detection task. The bi-directional direct connections are still not established. In fact, the slots and intent are correlative, and the two tasks can mutually reinforce each other. 
This paper proposes an SF-ID network which consists of an SF subnet and an ID subnet. The SF subnet applies intent information to slot filling task while the ID subnet uses slot information in intent detection task. In this case, the bi-directional interrelated connections for the two tasks can be established. Our contributions are summarized as follows: 1) We propose an SF-ID network to establish the interrelated mechanism for slot filling and intent detection tasks. Specially, a novel ID subnet is proposed to apply the slot information to intent detec5468 Figure 1: The structure of the proposed model based on SF-ID network tion task. 2) We establish a novel iteration mechanism inside the SF-ID network in order to enhance the connections between the intent and slots. 3) The experiments on two benchmark datasets show the effectiveness and superiority of the proposed model. 2 Proposed Approaches This section first introduces how we acquire the integration of context of slots and intent by attention mechanism. And then it presents an SF-ID network which establishes the direct connections between intent and slots. The model architecture based on bi-directional LSTM (BLSTM) is shown in Figure 2.1 2.1 Integration of Context In SLU, word tags are determined not only by the corresponding terms, but also the context (Chen et al., 2016b). The intent label is also relevant with every element in the utterance. To capture such dependencies, attention mechanism is introduced. Slot filling: The ith slot context vector ci slot is computed as the weighted sum of BLSTM’s hidden states (h1, ..., ht): ci slot = T X j=1 αS i,jhj (1) where the attention weight α is acquired the same way as in (Liu and Lane, 2016). Intent detection: The intent context vector cinte is calculated as the same way as cslot, in particular, it just generates one intent label for the whole sentence. 1The code is available at https://github.com/ ZephyrChenzf/SF-ID-Network-For-NLU 2.2 SF-ID Network The SF-ID network consists of an SF subnet and an ID subnet. The order of the SF and ID subnets can be customized. Depending on the order of the two subnets, the model have two modes: SF-First and ID-First. The former subnet can produce active effects to the latter one by a medium vector. 2.2.1 SF-First Mode In the SF-First mode, the SF subnet is executed first. We apply the intent context vector cinte and slot context vector cslot in the SF subnet and generate the slot reinforce vector rslot. Then, the newlyformed vector rslot is fed to the ID subnet to bring the slot information. SF subnet: The SF subnet applies the intent and slot information (i.e. cinte and cslot) in the calculation of a correlation factor f which can indicate the relationship of the intent and slots. This correlation factor f is defined by: f = X V ∗tanh(ci slot + W ∗cinte) (2) In addition, we introduce a slot reinforce vector rslot defined by (3), and it is fed to the ID subnet to bring slot information. ri slot = f · ci slot (3) ID subnet: We introduce a novel ID subnet which applies the slot information to the intent detection task. We believe that the slots represent the wordlevel information while the intent stands for the sentence-level. The hybrid information can benefit the intent detection task. 
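The SF subnet described so far can be written compactly. Equation (2) does not fully specify the reduction, so the sketch below follows one plausible reading (a scalar correlation factor obtained by summing over positions and dimensions); sizes and parameter names are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the SF subnet (Eqs. 1-3): the correlation factor f couples
# the slot and intent context vectors, and scales the slot contexts into the
# slot reinforce vectors.  The scalar reduction is one plausible reading of Eq. (2).

d2 = 128                                     # 2 * BLSTM hidden size (assumed)
W = nn.Linear(d2, d2, bias=False)
v = nn.Parameter(torch.randn(d2))            # the vector V in Eq. (2)

def sf_subnet(c_slot, c_intent):
    """c_slot: (T, d2) slot context vectors; c_intent: (d2,) intent context vector."""
    f = torch.sum(v * torch.tanh(c_slot + W(c_intent)))     # scalar correlation factor
    r_slot = f * c_slot                                      # (T, d2) slot reinforce vectors, Eq. (3)
    return f, r_slot
```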
The slot reinforce vector rslot is fed to the ID subnet to generate the reinforce vector r, which is defined by: r = T X i=1 αi · ri slot (4) 5469 Figure 2: Illustration of the ID subnet where the weight αi of ri slot is computed as: αi = exp(ei,i) PT j=1 exp(ei,j) (5) ei,j = W ∗tanh(V1 ∗ri slot + V2 ∗hj + b) (6) We also introduce an intent reinforce vector rinte which is computed as the sum of the reinforce vector r and intent context vector rinte. rinte = r + cinte (7) Iteration Mechanism: The intent reinforce vector rinte can also be fed into the SF subnet. In fact, this intent reinforce vector rinte can improve the effect of relation factor f because it contains the hybrid information of intent and slots, and (2) can be replaced by: f = X V ∗tanh(ci slot + W ∗rinte) (8) With the change in the relation factor f, a new slot reinforce vector rslot is acquired. Thus, the ID subnet can takes a new rslot and exports a new rinte. In this case, both SF subnet and ID subnet are updated, one iteration is completed. In theory, the interaction between the SF subnet and ID subnet can repeat endlessly, which is denoted as the iteration mechanism in our model. The intent and slot reinforce vectors act as the links between the SF subnet and the ID subnet and their values continuously change during the iteration process. After the iteration mechanism, the rinte and rslot participate in the final prediction of intent and slots, respectively. For the intent detection task, the intent reinforce vector rinte and the last hidden state hT of BLSTM are utilized in the final intent prediction: yinte = softmax(W hy inteconcat(hT , rinte)) (9) For the slot filling task, the hidden state hi combined with its corresponding slot reinforce vector ri slot are used in the ith slot label prediction. The final expression without CRF layer is: yi slot = softmax(W hy slotconcat(hi, ri slot)) (10) 2.2.2 ID-First Mode In the ID-First mode, the ID subnet is performed before the SF subnet. In this case, there are some differences in the calculation of ID subnet in the first iteration. ID subnet: Unlike the Slot-First mode, the reinforce vector r is acquired by the hidden states and the context vectors of BLSTM. Thus, (4) (5) (6) can be replaced by: r = T X i=1 αi · hi (11) αi = exp(ei,i) PT j=1 exp(ei,j) (12) ei,j = W ∗σ(V1 ∗hi + V2 ∗cj slot + b) (13) The intent reinforce vector rinte is still defined by (7), and it is fed to the SF subnet. SF subnet: The intent reinforce vector rinte is fed to the SF subnet and the relation factor f is calculated the same way as (8). Other algorithm details are the same as in SF-First mode. Iteration Mechanism: Iteration mechanism in ID-First mode is almost the same as that in SFFirst mode except for the order of the two subnets. 2.3 CRF layer Slot filling is essentially a sequence labeling problem. For the sequence labeling task, it is beneficial to consider the correlations between the labels in neighborhoods. Therefore, we add the CRF layer above the SF subnet outputs to jointly decode the best chain of labels of the utterance. 3 Experiment Dataset: We conducted experiments using two public datasets, the widely-used ATIS dataset (Hemphill et al., 1990) and custom-intent-engine dataset called the Snips (Coucke et al., 2018), which is collected by Snips personal voice assistant. Compared with the ATIS dataset, the Snips dataset is more complex due to its large vocabulary and cross-domain intents. 
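Before turning to the evaluation, the SF-First iteration mechanism of Section 2.2 can be summarized as a short loop. `sf_step` and `id_step` are placeholders for the two subnets, and the default of three iterations mirrors the setting analyzed later; this is a schematic view, not the released code.

```python
# Schematic sketch of the SF-First iteration mechanism (Section 2.2.1).
# sf_step / id_step are placeholder callables for the SF and ID subnets.

def sf_id_iterations(c_slot, c_intent, sf_step, id_step, n_iter=3):
    """Alternate the two subnets; the reinforce vectors link them across iterations."""
    intent_signal = c_intent                        # the first pass uses c_intent, Eq. (2)
    for _ in range(n_iter):
        r_slot = sf_step(c_slot, intent_signal)     # slot reinforce vectors, Eq. (3)
        r_intent = id_step(r_slot, c_intent)        # intent reinforce vector, Eq. (7)
        intent_signal = r_intent                    # later passes use r_intent, Eq. (8)
    return r_slot, r_intent                         # used in the final predictions, Eqs. (9)-(10)
```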
Evaluation Metrics: We use three evaluation 5470 Model ATIS Dataset Snips Dataset Slot (F1) Intent (Acc) Sen. (Acc) Slot (F1) Intent (Acc) Sen. (Acc) Joint Seq (Hakkani-T¨ur et al., 2016) 94.30 92.60 80.70 87.30 96.90 73.20 Atten.-Based (Liu and Lane, 2016) 94.20 91.10 78.90 87.80 96.70 74.10 Sloted-Gated (Goo et al., 2018) 95.42 95.41 83.73 89.27 96.86 76.43 SF-First (with CRF) 95.75 97.76 86.79 91.43 97.43 80.57 SF-ID SF-First (without CRF) 95.55 97.40 85.95 90.34 97.34 78.43 Network ID-First (with CRF) 95.80 97.09 86.90 92.23 97.29 80.43 ID-First (without CRF) 95.58 96.58 86.00 90.46 97.00 78.37 Table 2: Performance comparison on ATIS and Snips datasets. The improved cases are written in bold. Model ATIS Snips Slot Intent Slot Intent Without SF-ID 95.05 95.34 88.9 96.23 ID subnet Only 95.43 95.74 89.57 97.42 SF subnet Only 95.14 95.75 90.7 96.71 SF-ID (no interaction) 95.56 95.75 90.97 97.01 SF-ID (SF-First) 95.75 97.76 91.43 97.43 SF-ID (ID-First) 95.80 97.09 92.23 97.29 Table 3: Analysis of seperate subnets and their interaction effects metrics in the experiments. For the slot filling task, the F1-score is applied. For the intent detection task, the accuracy is utilized. Besides, the sentence-level semantic frame accuracy (sentence accuracy) is used to indicate the general performance of both tasks, which refers to proportion of the sentence whose slots and intent are both correctly-predicted in the whole corpus. Training Details: In our experiments, the layer size for the BLSTM networks is set to 64. During training, the adam optimization (Kingma and Ba, 2014) is applied. Besides, the learning rate is updated by ηt = η0/(1 + pt) with a decay rate of p = 0.05 and an initial learning rate of η0 = 0.01, and t denotes the number of completed steps. Model Performance: The performance of the models are given in Table 2, wherein it can be seen that our model outperforms the baselines in all three aspects: slot filling (F1), intent detection (Acc) and sentence accuracy (Acc). Specially, on the sentence-level semantic frame results, the relative improvement is around 3.79% and 5.42% for ATIS and Snips respectively, indicating that SFID network can benefit the SLU performance significantly by introducing the bi-directional interrelated mechanism between the slots and intent. Analysis of Seperate Subnets: We analyze the effect of seperate subnets, and the obtained results are given in Table 3. The experiments are conducted when the CRF layer is added. As we can Figure 3: Effect of iteration number on the model performance in SF-First mode see, both models including only the SF subnet or the ID subnet have acheived better results than the BLSTM model. Therefore, we believe that both SF subnet and ID subnet have significance in performance improvement. Beside, we also analyse the condition with independent SF and ID subnet, in other words, when there is no interaction in SF and ID subnet. We can see it also obtains good results. However, the SF-ID network which allows the two subnets interact with each other achieve better results. This is because the bi-directional interrelated mechanism help the two subnets promote each other mutually, which improves the performance in both tasks. Analysis of Model Mode: In Table 2, it can be seen that the ID-First mode achieves better performance in the slot filling task. This is because the ID-First mode treats the slot filling task as a more important task, because the SF subnet can utilize the intent information output from the ID subnet. 
Similarly, the SF-First mode performs better in the intent detection task. In general, the difference between the two modes is minor. Iteration Mechanism: The effect of iteration mechanism is shown in Figure 3. The experiments are conducted in SF-First mode. Sentence accuracy is applied as the performance measure because it can reflect the overall model performance. It increases gradually and reaches the maximum value when the iteration number is three on both ATIS and Snips dataset, indicating the effective5471 ness of iteration mechanism. It may credit to the iteration mechanism which can enhance the connections between intent and slots. After that, the sentence accuracy gradually gets stabilized with minor drop. On balance, the iteration mechanism with proper iteration number can benefit the SLU performance. CRF Layer: From Table 2 it can be seen that the CRF layer has a positive effect on the general model performance. This is because the CRF layer can obtain the maximum possible label sequence on the sentence level. However, CRF layer mainly focuses on sequence labeling problems. So the improvement of the slot filling task obviously exceeds that of the intent detection task. In general, the performance is improved by the CRF layer. 4 Conclusion In this paper, we propose a novel SF-ID network which provides a bi-directional interrelated mechanism for intent detection and slot filling tasks. And an iteration mechanism is proposed to enhance the interrelated connections between the intent and slots. The bi-directional interrelated model helps the two tasks promote each other mutually. Our model outperforms the baselines on two public datasets greatly. This bi-directional interrelated mechanism between slots and intent provides guidance for the future SLU work. Acknowledgments The authors would like to thank the reviewers for their valuable comments. This work was supported in part by the National Key R&D Program of China under Grant SQ2018YFB140079 and 2018YFB1403003. References Yun-Nung Chen, Dilek Hakanni-T¨ur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Guo, and Li Deng. 2016a. Syntax or semantics? knowledge-guided joint semantic frame parsing. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 348–355. IEEE. Yun-Nung Chen, Dilek Hakkani-T¨ur, G¨okhan T¨ur, Jianfeng Gao, and Li Deng. 2016b. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In INTERSPEECH, pages 3245–3249. Alice Coucke, Alaa Saade, Adrien Ball, Th´eodore Bluche, Alexandre Caulier, David Leroy, Cl´ement Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 753–757. Patrick Haffner, Gokhan Tur, and Jerry H Wright. 2003. Optimizing svms for complex call classification. In Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03). 2003 IEEE International Conference on, volume 1, pages I–I. IEEE. Dilek Hakkani-T¨ur, G¨okhan T¨ur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. 
Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Interspeech, pages 715–719. Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI, volume 333, pages 2267– 2273. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454. Christian Raymond and Giuseppe Riccardi. 2007. Generative and discriminative algorithms for spoken language understanding. In Eighth Annual Conference of the International Speech Communication Association. Gokhan Tur and Renato De Mori. 2011. Spoken language understanding: Systems for extracting semantic information from speech. John Wiley & Sons. Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 189–194. IEEE. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In IJCAI, pages 2993–2999.
2019
544
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5472–5477 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5472 Dual Supervised Learning for Natural Language Understanding and Generation Shang-Yu Su Chao-Wei Huang Yun-Nung Chen Department of Computer Science and Information Engineering National Taiwan University {f05921117,r07922069}@ntu.edu.tw [email protected] Abstract Natural language understanding (NLU) and natural language generation (NLG) are both critical research topics in the NLP and dialogue fields. Natural language understanding is to extract the core semantic meaning from the given utterances, while natural language generation is opposite, of which the goal is to construct corresponding sentences based on the given semantics. However, such dual relationship has not been investigated in literature. This paper proposes a novel learning framework for natural language understanding and generation on top of dual supervised learning, providing a way to exploit the duality. The preliminary experiments show that the proposed approach boosts the performance for both tasks, demonstrating the effectiveness of the dual relationship.1 1 Introduction Spoken dialogue systems that can help users solve complex tasks such as booking a movie ticket have become an emerging research topic in artificial intelligence and natural language processing areas. With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. The recent advance of deep learning has inspired many applications of neural dialogue systems (Wen et al., 2017; Bordes et al., 2017; Dhingra et al., 2017; Li et al., 2017). A typical dialogue system pipeline can be divided into several parts: 1) a speech recognizer that transcribes a user’s speech input into texts, 2) a natural language understanding module (NLU) that classifies the domain and associated intents and fills slots to form a semantic frame (Chi et al., 2017; Chen et al., 2017; Zhang et al., 2018; Su et al., 2018c, 1https://github.com/MiuLab/DualSL Natural Language Understanding Natural Language Generation Natural Language McDonald’s is a cheap restaurant nearby the station. Semantic Frame RESTAURANT=“McDonald’s” PRICE=“cheap” LOCATION= “nearby the station” Figure 1: NLU and NLG emerge as a dual form. 2019), 3) a dialogue state tracker (DST) that predicts the current dialogue state in the multi-turn conversations, 4) a dialogue policy that determines the system action for the next step given the current state (Peng et al., 2018; Su et al., 2018a), and 5) a natural language generator (NLG) that outputs a response given the action semantic frame (Wen et al., 2015; Su et al., 2018b; Su and Chen, 2018). Many artificial intelligence tasks come with a dual form; that is, we could directly swap the input and the target of a task to formulate another task. Machine translation is a classic example (Wu et al., 2016); for example, translating from English to Chinese has a dual task of translating from Chinese to English; automatic speech recognition (ASR) and text-to-speech (TTS) also have structural duality (Tjandra et al., 2017). Previous work first exploited the duality of the task pairs and proposed supervised (Xia et al., 2017) and unsupervised (reinforcement learning) (He et al., 2016) training schemes. 
The recent studies magnified the importance of the duality by boosting the performance of both tasks with the exploitation of the duality. NLU is to extract core semantic concepts from the given utterances, while the goal of NLG is to construct corresponding sentences based on given semantics. In other words, understanding and generating sentences are a dual problem pair shown in Figure 1. In this paper, we introduce a novel train5473 ing framework for NLU and NLG based on dual supervised learning (Xia et al., 2017), which is the first attempt at exploiting the duality of NLU and NLG. The experiments show that the proposed approach improves the performance for both tasks. 2 Proposed Framework This section first describes the problem formulation, and then introduces the core training algorithm along with the proposed methods of estimating data distribution. Assuming that we have two spaces, the semantics space X and the natural language space Y, given n data pairs {(xi, yi)}n i=1, the goal of NLG is to generate corresponding utterances based on given semantics. In other words, the task is to learn a mapping function f(x; θx→y) to transform semantic representations into natural language. On the other hand, NLU is to capture the core meaning of utterances, finding a function g(y; θy→x) to predict semantic representations given natural language. A typical strategy of these optimization problems is based on maximum likelihood estimation (MLE) of the parameterized conditional distribution by the learnable parameters θx→y and θy→x. 2.1 Dual Supervised Learning Considering the duality between two tasks in the dual problems, it is intuitive to bridge the bidirectional relationship from a probabilistic perspective. If the models of two tasks are optimal, we have probabilistic duality: P(x)P(y | x; θx→y) = P(y)P(x | y; θy→x) = P(x, y) ∀x, y, where P(x) and P(y) are marginal distributions of data. The condition reflects parallel, bidirectional relationship between two tasks in the dual problem. Although standard supervised learning with respect to a given loss function is a straightforward approach to address MLE, it does not consider the relationship between two tasks. Xia et al. (2017) exploited the duality of the dual problems to introduce a new learning scheme, which explicitly imposed the empirical probability duality on the objective function. The training strategy is based on the standard supervised learning and incorporates the probability duality constraint, so-called dual supervised learning. Therefore the training objective is extended to a multiobjective optimization problem:      minθx→y(E[l1(f(x; θx→y), y)]), minθy→x(E[l2(g(y; θy→x), x)]), s.t. P(x)P(y | x; θx→y) = P(y)P(x | y; θy→x), where l1,2 are the given loss functions. Such constraint optimization problem could be solved by introducing Lagrange multiplier to incorporate the constraint: ( minθx→y(E[l1(f(x; θx→y), y)] + λx→ylduality), minθy→x(E[l1(g(y; θy→x), x)] + λy→xlduality), where λx→y and λy→x are the Lagrange parameters and the constraint is formulated as follows: lduality = (log ˆP(x) + logP(y | x; θx→y) −log ˆP(y) −logP(x | y; θy→x))2. Now the entire objective could be viewed as the standard supervised learning with an additional regularization term considering the duality between tasks. Therefore, the learning scheme is to learn the models by minimizing the weighted combination of an original loss term and a regularization term. 
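Concretely, each task keeps its standard loss and adds the squared duality gap as a regularizer; the sketch below assumes the empirical marginal log-likelihoods come from the estimators described next, and the lambda values shown are only one of the settings explored.

```python
# Minimal sketch of the dual supervised learning objective.  log_p_x and log_p_y
# are empirical marginal log-likelihoods from pretrained estimators (Section 2.2);
# the lambda weights are hyperparameters (0.1 is one of the settings evaluated).

def duality_gap(log_p_x, log_p_y_given_x, log_p_y, log_p_x_given_y):
    """Squared difference between the two factorizations of log P(x, y)."""
    return (log_p_x + log_p_y_given_x - log_p_y - log_p_x_given_y) ** 2

def dual_supervised_losses(nlg_loss, nlu_loss,
                           log_p_x, log_p_y_given_x, log_p_y, log_p_x_given_y,
                           lam_xy=0.1, lam_yx=0.1):
    l_dual = duality_gap(log_p_x, log_p_y_given_x, log_p_y, log_p_x_given_y)
    return nlg_loss + lam_xy * l_dual, nlu_loss + lam_yx * l_dual
```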
Note that the true marginal distribution of data P(x) and P(y) are often intractable, so here we replace them with the approximated empirical marginal distribution ˆP(x) and ˆP(y). 2.2 Distribution Estimation as Autoregression With the above formulation, the current problem is how to estimate the empirical marginal distribution ˆ P(·). To accurately estimate data distribution, the data properties should be considered, because different data types have different structural natures. For example, natural language has sequential structures and temporal dependencies, while other types of data may not. Therefore, we design a specific method of estimating distribution for each data type based on the expert knowledge. From the probabilistic perspective, we can decompose any data distribution p(x) into the product of its nested conditional probability, p(x) = D Y d p(xd | x1, ..., xd−1), (1) where x could be any data type and d is the index of a variable unit. 5474 2.2.1 Language Modeling Natural language has an intrinsic sequential nature; therefore it is intuitive to leverage the autoregressive property to learn a language model. In this work, we learn the language model based on recurrent neural networks (Mikolov et al., 2010; Sundermeyer et al., 2012) by the cross entropy objective in an unsupervised manner. p(y) = L Y i p(yi | y1, ..., yi−1; θy), (2) where y(·) are words in the sentence y, and L is the sentence length. 2.2.2 Masked Autoencoder The semantic representation x in our work is discrete semantic frames containing specific slots and corresponding values. Each semantic frame contains the core concept of a certain sentence, for example, the slot-value pairs “name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near[Clare Hall]” corresponds to the target sentence “Bibimbap House is a moderately priced restaurant who’s main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.”. Even though the product rule in (1) enables us to decompose any probability distribution into a product of a sequence of conditional probability, how we decompose the distribution reflects a specific physical meaning. For example, language modeling outputs the probability distribution over vocabulary space of i-th word yi by only taking the preceding word sequence y<i. Natural language has the intrinsic sequential structure and temporal dependency, so modeling the joint distribution of words in a sequence by such autoregressive property is logically reasonable. However, slot-value pairs in semantic frames do not have a single directional relationship between them, while they parallel describe the same sentence, so treating a semantic frame as a sequence of slot-value pairs is not suitable. Furthermore, slot-value pairs are not independent, because the pairs in a semantic frame correspond to the same individual utterance. For example, French food would probably cost more. Therefore, the correlation should be taken into account when estimating the joint distribution. 2 1 3 1 2 2 1 2 1 3 Figure 2: The illustration of the masked autoencoder for distribution estimation (MADE). Considering the above issues, to model the joint distribution of flat semantic frames, various dependencies between slot-value semantics should be leveraged. In this work, we propose to utilize a masked autoencoder for distribution estimation (MADE) (Germain et al., 2015). 
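As a rough illustration of how such a masked autoencoder can be realized, the sketch below builds the binary masks from per-unit degree assignments (a minimal NumPy version under our own naming, not the authors' implementation). The exact degree-based rule it follows is spelled out in the next paragraph, and re-sampling the hidden-unit degrees or the input ordering yields the different decompositions used later for the ensemble.

```python
import numpy as np

def made_masks(d_input, hidden_sizes, rng=None):
    """Binary masks that enforce an autoregressive structure over the
    D = d_input variable units (Germain et al., 2015)."""
    rng = np.random.default_rng() if rng is None else rng
    D = d_input
    # Inputs and outputs are assigned degrees 1..D; hidden units 1..D-1.
    degrees = [np.arange(1, D + 1)]
    for h in hidden_sizes:
        degrees.append(rng.integers(1, D, size=h))   # values in [1, D-1]
    degrees.append(np.arange(1, D + 1))

    masks = []
    for l in range(1, len(degrees) - 1):             # hidden-layer masks
        masks.append((degrees[l][:, None] >= degrees[l - 1][None, :]).astype(np.float32))
    # The output layer uses a strict inequality so output unit d never sees x_d.
    masks.append((degrees[-1][:, None] > degrees[-2][None, :]).astype(np.float32))
    return masks  # elementwise-multiplied with the corresponding weight matrices
```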
By zeroing certain connections, we could enforce the variable unit xd to only depend on any specific set of variables, not necessary on x<d; eventually we could still have the marginal distribution by the product rule: p(x) = D Y d p(xd | Sd), (3) where Sd is a specific set of variable units. In practice, we elementwise-multiply each weight matrix by a binary mask matrix M to interrupt some connections, as illustrated in Figure 2. To impose the autoregressive property, we first assign each hidden unit k an integer m(k) ranging from 1 to the dimension of data D −1 inclusively; for the input and output layers, we assign each unit a number ranging from 1 to D exclusively. Then binary mask matrices can be built as follows: M =      1 if ml(k′) ≥ml−1(k), 1 if mL(d) > mL−1(k), 0 otherwise. Here l indicates the index of the hidden layer, and L indicates the one of the output layer. With the constructed mask matrices, the masked autoencoder is shown to be able to estimate the joint distribution as autoregression. Because there is no explicit rule specifying the exact dependencies between slot-value pairs in our data, we consider various dependencies by ensemble of multiple decomposition, that is, to sample different sets Sd. 5475 Learning Scheme NLU NLG F1 BLEU ROUGE-1 ROUGE-2 ROUGE-L (a) Baseline: Iterative training 71.14 55.05 55.37 27.95 39.90 (b) Dual supervised learning, λ = 0.1 72.32 57.16 56.37 29.19 40.44 (c) Dual supervised learning, λ = 0.01 72.08 55.07 55.56 28.42 40.04 (d) Dual supervised learning, λ = 0.001 71.71 56.17 55.90 28.44 40.08 (e) Dual supervised learning w/o MADE 70.97 55.96 55.99 28.74 39.98 Table 1: The NLU performance reported on micro-F1 and the NLG performance reported on BLEU, ROUGE-1, ROUGE-2, and ROUGE-L of models (%). 3 Experiments To evaluate the effectiveness of the proposed framework, we conduct the experiments, the settings and analysis of the results are described as follows. 3.1 Settings The experiments are conducted in the benchmark E2E NLG challenge dataset (Novikova et al., 2017), which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. Each instance is a pair of a semantic frame containing specific slots and corresponding values and an associated natural language utterance with the given semantics. The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase. Although the original dataset is for NLG, of which the goal is to generate sentences based on the given slot-value pairs, we further formulate a NLU task as predicting slot-value pairs based on the utterances, which is a multi-label classification problem. Each possible slot-value pair is treated as an individual label, and the total number of labels is 79. To evaluate the quality of the generated sequences regarding both precision and recall, for NLG, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references, while F1 score is measured for the NLU results. 3.2 Model Details The model architectures for NLG and NLU are a gated recurrent unit (GRU) (Cho et al., 2014) with two identical fully-connected layers at the two ends of GRU. Thus the model is symmetrical and may have semantic frame representation as initial and final hidden states and sentences as the sequential input. 
In all experiments, we use mini-batch Adam as the optimizer with each batch of 64 examples, 10 training epochs were performed without early stop, the hidden size of network layers is 200, and word embedding is of size 50 and trained in an end-to-end fashion. 3.3 Results and Analysis The experimental results are shown in Table 1, where each reported number is averaged over three runs. The row (a) is the baseline that trains NLU and NLG separately and independently, and the rows (b)-(d) are the results from the proposed approach with different Lagrange parameters. The proposed approach incorporates probability duality into the objective as the regularization term. To examine its effectiveness, we control the intensity of regularization by adjusting the Lagrange parameters. The results (rows (b)-(d)) show that the proposed method outperforms the baseline on all automatic evaluation metrics. Furthermore, the performance improves more with stronger regularization (row (b)), demonstrating the importance of leveraging duality. In this paper, we design the methods for estimating marginal distribution for data in NLG and NLU tasks: language modeling is utilized for sequential data (natural language utterances), while the masked autoencoder is conducted for flat representation (semantic frames). The proposed method for estimating the distribution of semantic frames considers complex and implicit dependencies between semantics by ensemble of multiple decomposition of joint distribution. In our experiments, the empirical marginal distribution is the average over the results from 10 different masks and orders; in other words, 10 types of dependencies are modeled. The row (e) can be viewed as the ablation test, where the marginal distribution of semantic frames is estimated by considering slotvalue pairs independent to others and statistically 5476 computed from the training set. The performance is worse than the ones that model the dependencies, demonstrating the importance of considering the nature of input data and modeling data distribution via the masked autoencoder. We further analyze understanding and generation results compared with the baseline model. In some cases, it is found that our NLU model can extract the semantics of utterances better and our NLU model can generate sentences with richer information based on the proposed learning scheme. In sum, the proposed approach is capable of improving the performance of both NLU and NLG in the benchmark data, where the exploitation of duality and the way of estimating distribution are demonstrated to be important. 4 Conclusion This paper proposes a novel training framework for natural language understanding and generation based on dual supervised learning, which first exploits the duality between NLU and NLG and introduces it into the learning objective as the regularization term. Moreover, expert knowledge is incorporated to design suitable approaches for estimating data distribution. The proposed methods demonstrate effectiveness by boosting the performance of both tasks simultaneously in the benchmark experiments. Acknowledgements We thank the anonymous reviewers for their insightful feedback on this work. This work was financially supported from the Young Scholar Fellowship Program by Ministry of Science and Technology (MOST) in Taiwan, under Grant 1082636-E-002-003 and 108-2634-F-002-019. References Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of ICLR. 
Po-Chun Chen, Ta-Chung Chi, Shang-Yu Su, and YunNung Chen. 2017. Dynamic time-aware attention to speaker roles and contexts for spoken language understanding. In Proceedings of ASRU. Ta-Chung Chi, Po-Chun Chen, Shang-Yu Su, and YunNung Chen. 2017. Speaker role contextual modeling for language understanding and dialogue policy learning. In Proceedings of IJCNLP. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP, pages 1724–1734. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of ACL, pages 484–495. Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. 2015. Made: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pages 881–889. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end taskcompletion neural dialogue systems. In Proceedings of The 8th International Joint Conference on Natural Language Processing. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Jekaterina Novikova, Ondrej Duˇsek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-toend generation. In Proceedings of SIGDIAL, pages 201–206. Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Kam-Fai Wong, and Shang-Yu Su. 2018. Deep dyna-q: Integrating planning for taskcompletion dialogue policy learning. arXiv preprint arXiv:1801.06176. Shang-Yu Su and Yun-Nung Chen. 2018. Investigating linguistic pattern ordering in hierarchical natural language generation. In Proceedings of 7th IEEE Workshop on Spoken Language Technology. Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018a. Discriminative deep dyna-q: Robust planning for dialogue policy learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Shang-Yu Su, Kai-Ling Lo, Yi Ting Yeh, and YunNung Chen. 2018b. Natural language generation by hierarchical decoding with linguistic patterns. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 61–66. 5477 Shang-Yu Su, Pei-Chieh Yuan, and Yun-Nung Chen. 2018c. How time matters: Learning time-decay attention for contextual spoken language understanding in dialogues. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2133–2142. Shang-Yu Su, Pei-Chieh Yuan, and Yun-Nung Chen. 2019. Dynamically context-sensitive time-decay attention for dialogue modeling. In ICASSP 20192019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7200–7204. IEEE. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. 
In Thirteenth annual conference of the international speech communication association. Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2017. Listening while speaking: Speech chain by deep learning. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 301–308. IEEE. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of EACL, pages 438–449. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3789–3798. JMLR. org. Rui Zhang, Honglak Lee, Lazaros Polymenakos, and Dragomir Radev. 2018. Addressee and response selection in multi-party conversations with speaker interaction rnns. In Proceedings of AAAI.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478–5483 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5478 SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking Hwaran Lee* Jinsik Lee* SK T-Brain, AI Center, SK telecom {hwaran.lee, jinsik16.lee, oceanos}@sktbrain.com Tae-Yoon Kim Abstract In goal-oriented dialog systems, belief trackers estimate the probability distribution of slotvalues at every dialog turn. Previous neural approaches have modeled domain- and slot-dependent belief trackers, and have difficulty in adding new slot-values, resulting in lack of flexibility of domain ontology configurations. In this paper, we propose a new approach to universal and scalable belief tracker, called slot-utterance matching belief tracker (SUMBT). The model learns the relations between domain-slot-types and slotvalues appearing in utterances through attention mechanisms based on contextual semantic vectors. Furthermore, the model predicts slot-value labels in a non-parametric way. From our experiments on two dialog corpora, WOZ 2.0 and MultiWOZ, the proposed model showed performance improvement in comparison with slot-dependent methods and achieved the state-of-the-art joint accuracy. 1 Introduction As the prevalent use of conversational agents, goal-oriented systems have received increasing attention from both academia and industry. The goal-oriented systems help users to achieve goals such as making restaurant reservations or booking flights at the end of dialogs. As the dialog progresses, the system is required to update a distribution over dialog states which consist of users’ intent, informable slots, and requestable slots. This is called belief tracking or dialog state tracking (DST). For instance, for a given domain and slottypes (e.g., ‘restaurant’ domain and ‘food’ slottype), it estimates the probability of corresponding slot-value candidates (e.g., ‘Korean’ and ‘Modern *Hwaran Lee and Jinsik Lee equally contributed to this work. European’) that are pre-defined in a domain ontology. Since the system uses the predicted outputs of DST to choose the next action based on a dialog policy, the accuracy of DST is crucial to improve the overall performance of the system. Moreover, dialog systems should be able to deal with newly added domains and slots1 in a flexible manner, and thus developing scalable dialog state trackers is inevitable. Regarding to this, Chen et al. (2016) has proposed a model to capture relations from intentutterance pairs for intent expansion. Traditional statistical belief trackers (Henderson et al., 2014) are vulnerable to lexical and morphological variations because they depend on manually constructed semantic dictionaries. With the rise of deep learning approaches, several neural belief trackers (NBT) have been proposed and improved the performance by learning semantic neural representations of words (Mrkˇsi´c et al., 2017; Mrkˇsi´c and Vuli´c, 2018). However, the scalability still remains as a challenge; the previously proposed methods either individually model each domain and/or slot (Zhong et al., 2018; Ren et al., 2018; Goel et al., 2018) or have difficulty in adding new slot-values that are not defined in the ontology (Ramadan et al., 2018; Nouri and Hosseini-Asl, 2018). In this paper, we focus on developing a “scalable” and “universal” belief tracker, whereby only a single belief tracker serves to handle any domain and slot-type. 
To tackle this problem, we propose a new approach, called slot-utterance matching belief tracker (SUMBT), which is a domainand slot-independent belief tracker as shown in Figure 1. Inspired by machine reading comprehension techniques (Chen et al., 2017; Seo et al., 2017), SUMBT considers a domain-slot1For example, as reported by Kim et al. (2018), hundreds of new skills are added per week in personal assistant services. 5479 type (e.g., ‘restaurant-food’) as a question and finds the corresponding slot-value in a pair of user and system utterances, assuming the desirable answer exists in the utterances. SUMBT encodes system and user utterances using recently proposed BERT (Devlin et al., 2018) which provides the contextualized semantic representation of sentences. Moreover, the domain-slot-types and slotvalues are also literally encoded by BERT. Then SUMBT learns the way where to attend that is related to the domain-slot-type information among the utterance words based on their contextual semantic vectors. The model predicts the slot-value label in a non-parametric way based on a certain metric, which enables the model architecture not to structurally depend on domains and slot-types. Consequently, a single SUMBT can deal with any pair of domain-slot-type and slot-value, and also can utilize shared knowledge among multiple domains and slots. We will experimentally demonstrate the efficacy of the proposing model on two goal-oriented dialog corpora: WOZ 2.0 and MultiWOZ. We will also qualitatively analyze how the model works. Our implementation is open-published.2 2 SUMBT The proposed model consists of four parts as illustrated in Figure 1: BERT encoders for encoding slots, values, and utterances (the grey and blue boxes); a slot-utterance matching network (the red box); a belief tracker (the orange box); and a nonparametric discriminator (the dashed line on top). 2.1 Contextual Semantic Encoders For sentence encoders, we employed a pre-trained BERT model (Devlin et al., 2018) which is a deep stack of bi-directional Transformer encoders. Rather than a static word vector, it provides effective contextual semantic word vectors. Moreover, it offers an aggregated representation of a word sequence such as a phrase and sentence, and therefore we can obtain an embedding vector of slottypes or slot-values that consist of multiple words. The proposed method literally encodes words of domain-slot-types s and slot-values vt at turn t as well as the system and user utterances. For the pair of system and user utterances, xsys t = (wsys 1 , ..., wsys n ) and xusr t = (wusr 1 , ..., wusr m ), the pre-trained BERT encodes each word w into a 2https://github.com/SKTBrain/SUMBT Multi-head Attention RNN LayerNorm 𝑈" h" $ d" $ d"&' $ y)" $ 𝑑(y)" $, y" $) [CLS] restaurant –food [SEP] Trm Trm Trm Trm Trm Trm EMB0 EMB1 EMBs+2 … … … BERTsv [CLS] 𝑤$ ' [SEP] q$ [CLS] what type of food would you like ? [SEP] a moderately priced modern European food . [SEP] Trm Trm Trm Trm Trm Trm EMB0 EMB1 EMBn+m+2 … … … BERT [CLS] 𝑤" ' [SEP] u" 6 u" ' u" 7898: Trm Trm Trm Trm Trm Trm EMB0 EMB1 EMBs+2 … … … BERTsv [CLS] 𝑤; ' [SEP] y" $ [CLS] modern European [SEP] Figure 1: The architecture of slot-utterance matching belief tracker (SUMBT). An example of system and user utterances, a domain-slot-type, and a slot-value is denoted in red. contextual semantic word vector u, and the encoded utterances are represented in the following matrix representation: Ut = BERT ([xsys t , xusr t ]) . 
(1) Note that the sentence pairs are concatenated with a separation token [SEP], and BERT will be finetuned with the loss function (Eq. 7). For the domain-slot-type s and slot-value vt, another pre-trained BERT which is denoted as BERTsv encodes their word sequences xs and xv t into contextual semantic vectors qs and yv t , respectively. qs = BERTsv(xs), yv t = BERTsv(xv t ). (2) We use the output vectors corresponding to the classification embedding token [CLS] that summarizes the whole input sequence. Note that we consider xs as a phrase of domain and slot words (e.g., xs = “restaurant – price range”) so that qs represents both domain and slot information. Moreover, fixing the weights of BERTsv during training allows the model to maintain the encoded contextual vector of any new pairs of domain and slot-type. Hence, simply by forwarding them into the slot-value encoder, the proposed model can be scalable to the new domains and slots. 5480 2.2 Slot-Utterance Matching In order to retrieve the relevant information corresponding to the domain-slot-type from the utterances, the model uses an attention mechanism. Considering the encoded vector of the domainslot-type qs as a query, the model matches it to the contextual semantic vectors u at each word position, and then the attention scores are calculated. Here, we employed multi-head attention (Vaswani et al., 2017) for the attention mechanism. The multi-head attention maps a query matrix Q, a key matrix K, and a value matrix V with different linear h projections, and then the scaled dot-product attention is performed on those matrices. The attended context vector hs t between the slot s and the utterances at t is hs t = MultiHead(Q, K, V ), (3) where Q is Qs and K and V are Ut. 2.3 Belief Tracker As the conversation progresses, the belief state at each turn is determined by the previous dialog history and the current dialog turn. The flow of dialog can be modeled by RNNs such as LSTM and GRU, or Transformer decoders (i.e., left-to-right uni-directional Transformer). In this work, the attended context vector ht is fed into an RNN, ds t = RNN(ds t−1, hs t). (4) It learns to output a vector that is close to the target slot-value’s semantic vector. Since the output of BERT is normalized by layer normalization (Ba et al., 2016), the output of RNN dt is also fed into a layer normalization layer to help training convergence, ˆys t = LayerNorm(ds t). (5) 2.4 Training Criteria The proposed model is trained to minimize the distance between outputs and target slot-value’s semantic vectors under a certain distance metric. The probability distribution of a slot-value vt is calculated as p  vt|xsys ≤t , xusr ≤t , s  = exp (−d(ˆys t, yv t )) P v′∈Cs exp −d(ˆys t, yv′ t ) , (6) where d is a distance metric such as Euclidean distance or negative cosine distance, and Cs is a set of the candidate slot-values of slot-type s which is defined in the ontology. This discriminative classifier is similar to the metric learning method proposed in Vinyals et al. (2016), but the distance metric is measured in the fixed space that BERT represents rather than in a trainable space. Finally, the model is trained to minimize the log likelihood for all dialog turns t and slot-types s ∈ D as following: L(θ) = − X s∈D T X t=1 log p(vt|xsys ≤t , xusr ≤t , s). (7) By training all domain-slot-types together, the model can learn general relations between slottypes and slot-values, which helps to improve performance. 
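Putting the components of Section 2 together, a single dialog turn for one domain-slot-type can be sketched roughly as follows. This is our own minimal PyTorch illustration (class and argument names are ours, the two BERT encoders are assumed to be run outside the module, and the RNN hidden size is simply tied to the BERT dimension so that the distance in Eq. (6) is well-defined); it is not the published implementation.

```python
import torch
import torch.nn as nn

class SUMBTTurn(nn.Module):
    """One SUMBT turn: slot-utterance matching, RNN belief tracking,
    and non-parametric slot-value scoring (Eqs. 3-6)."""
    def __init__(self, hidden=768, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, utter_vecs, slot_query, value_vecs, prev_state=None):
        # utter_vecs: (B, T, H) contextual word vectors U_t from the fine-tuned BERT
        # slot_query: (B, 1, H) [CLS] vector q_s from the frozen BERT_sv
        # value_vecs: (C, H)    candidate slot-value vectors y_v (also frozen)
        h_s, _ = self.attn(slot_query, utter_vecs, utter_vecs)   # slot-utterance matching
        d_s, state = self.rnn(h_s, prev_state)                   # belief tracking over turns
        y_hat = self.norm(d_s.squeeze(1))                        # (B, H)
        dist = torch.cdist(y_hat, value_vecs)                    # Euclidean distance to candidates
        return torch.log_softmax(-dist, dim=-1), state           # p(v_t | ., s) in log space
```

Because prediction only requires a distance to encoded candidate vectors, adding a new slot-value or a new domain-slot pair amounts to encoding it once with the frozen BERTsv; no output layer has to be resized or retrained.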
3 Experimental Setup 3.1 Datasets To demonstrate the performance of our approach, we conducted experiments over WOZ 2.0 (Wen et al., 2017) and MultiWOZ (Budzianowski et al., 2018) datasets. WOZ 2.0 dataset3 is a single ‘restaurant reservation’ domain, in which belief trackers estimate three slots (area, food, and price range). MultiWOZ dataset4 is a multi-domain conversational corpus, in which the model has to estimate 35 slots of 7 domains. 3.2 Baselines We designed three baseline models: BERT+RNN, BERT+RNN+Ontology, and a slot-dependent SUMBT. 1) The BERT+RNN consists of a contextual semantic encoder (BERT), an RNN-based belief tracker (RNN), and a linear layer followed by a softmax output layer for slot-value classification. The contextual semantic encoder in this model outputs aggregated output vectors like those of BERTsv. 2) The BERT+RNN+Ontology consists of all components in the BERT+RNN, an ontology encoder (Ontology), and an ontology-utterance matching network which performs element-wise multiplications between the encoded ontology and 3Downloaded from https://github.com/ nmrksic/neural-belief-tracker 4Downloaded from http://dialogue.mi.eng. cam.ac.uk/index.php/corpus. Before conducting experiments, we performed data cleansing such as correcting misspelled words. 5481 utterances as in Ramadan et al. (2018). Note that two aforementioned models BERT+RNN and BERT+RNN+Ontology use the linear layer to transform a hidden vector to an output vector, which depends on a candidate slot-value list. In other words, the models require re-training if the ontology is changed, which implies that these models have lack of scalability. 3) The slotdependent SUMBT has the same architecture with the proposed model, but the only difference is that the model is individually trained for each slot. 3.3 Configurations We employed the pre-trained BERT model that has 12 layers of 784 hidden units and 12 selfattention heads.5 We experimentally found the best configuration of hyper-parameters in which search space is denoted in the following braces. For slot and utterance matching, we used the multi-head attention with {4, 8} heads and 784 hidden units. We employed a single-layer {GRU, LSTM} with {100, 200, 300} hidden units as the RNN belief tracker. For distance measure, both Euclidean and negative cosine distances were investigated. The model was trained with Adam optimizer in which learning rate linearly increased in the warm-up phase then linearly decreased. We set the warm-up proportion to be {0.01, 0.05, 0.1} of {300, 500} epochs and the learning rate to be {1 × 10−5, 5 × 10−5}. The training stopped early when the validation loss was not improved for 20 consecutive epochs. We report the mean and standard deviation of joint goal accuracies over 20 different random seeds. For reproducibility, we publish our PyTorch implementation code and the preprocessed MultiWOZ dataset. 4 Experimental Results 4.1 Joint Accuracy Performance The experimental results on WOZ 2.0 corpus are presented in Table 1. The joint accuracy of SUMBT is compared with those of the baseline models that are described in Section 3.2 as well as previously proposed models. The models incorporating the contextual semantic encoder BERT beat all previous models. 
Furthermore, the three baseline models, BERT+RNN, BERT+RNN+Ontology, and the slot-dependent 5The pretrained model is published in https://github.com/huggingface/ pytorch-pretrained-BERT Model Joint Accuracy NBT-DNN (Mrkˇsi´c et al., 2017) 0.844 BT-CNN (Ramadan et al., 2018) 0.855 GLAD (Zhong et al., 2018) 0.881 GCE (Nouri and Hosseini-Asl, 2018) 0.885 StateNetPSI (Ren et al., 2018) 0.889 BERT+RNN (baseline 1) 0.892 (±0.011) BERT+RNN+Ontology (baseline 2) 0.893 (±0.013) Slot-dependent SUMBT (baseline 3) 0.891 (±0.010) Slot-independent SUMBT (proposed) 0.910 (±0.010) Table 1: Joint goal accuracy on the evaluation dataset of WOZ 2.0 corpus. Model Joint Accuracy Benchmark baseline 6 0.2583 GLAD (Zhong et al., 2018) 0.3557 GCE (Nouri and Hosseini-Asl, 2018) 0.3558 SUMBT 0.4240 (±0.0187) Table 2: Joint goal accuracy on the evaluation dataset of MultiWOZ corpus. SUMBT, showed no significant performance differences. On the other hand, the slot-independent SUMBT which learned the shared information from all across domains and slots significantly outperformed those baselines, resulting in 91.0% joint accuracy. This implies the importance of utilizing common knowledge through a single model. Table 2 shows the experimental results of the slot-independent SUMBT model on MultiWOZ corpus. Note that MultiWOZ has more domains and slots to be learned than WOZ 2.0 corpus. The SUMBT greatly surpassed the performances of previous approaches by yielding 42.4% joint accuracy. The proposed model achieved state-of-theart performance in both WOZ 2.0 and MultiWOZ datasets. 4.2 Attention Weights Analysis Figure 2 shows an example of attention weights as a dialog progresses. We can find that the model attends to the part of utterances which are semantically related to the given slots, even though the slot-value labels are not expressed in the lexically same way. For example, in case of ‘price range’ slot-type at the first turn, the slot-value label is ‘moderate’ but the attention weights are relatively 6 The benchmark baseline is the model proposed in Ramadan et al. (2018) and the performance is described in http://dialogue.mi.eng.cam.ac.uk/ index.php/corpus/ 5482 Turn 1 Turn 2 Turn 3 area price range (none) (moderate) are price range (none) (moderate) area price range (don’t care) (moderate) U: Hello, I’m looking for a restaurant, either Mediterranean or Indian, it must be reasonably priced though. S: Sorry, we don’t have any matching restaurants. U: How about Indian? S: We have plenty of Indian restaurants. Is there a particular place you’d like to stay in? U: I have no preference for the location, I just need an address and phone number. Turn 1, Turn 2, Turn 3, Dialog Example 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 Figure 2: Attention visualizations of the first three turns in a dialog (WOZ 2.0). At each turn, the first and second columns are the attention weights when the given slots are ‘area’ and ‘price range’, respectively. The slot-value labels are denoted in the parentheses. high on the phrase ‘reasonably priced’. When appropriate slot-values corresponding to the given slot-type are absent (i.e., the label is ‘none’), the model attends to [CLS] or [SEP] tokens. 5 Conclusion In this paper, we propose a new approach to universal and scalable belief tracker, called SUMBT which attends to words in utterances that are relevant to a given domain-slot-type. 
Besides, the contextual semantic encoders and the non-parametric discriminator enable a single SUMBT to deal with multiple domains and slot-types without increasing model size. The proposed model achieved the state-of-the-art joint accuracy performance in WOZ 2.0 and MultiWOZ corpora. Furthermore, we experimentally showed that sharing knowledge by learning from multiple domain data helps to improve performance. As future work, we plan to explore whether SUMBT can continually learn new knowledge when domain ontology is updated. Acknowledgements We would like to thank Jinyoung Yeo and anonymous reviewers for their constructive feedback and helpful discussions. We are also grateful to SK T-Brain Meta AI team for GPU cluster supports to conduct massive experiments. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint, arXiv:1607.06450. Paweł Budzianowski, Tsung Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026. Association for Computational Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1870–1879. Association for Computational Linguistics. Yun-Nung Chen, Dilek Hakkani-T¨ur, and Xiaodong He. 2016. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Proceedings of the 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6045–6049. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint, arXiv:1810.04805. Rahul Goel, Shachi Paul, Tagyoung Chung, Jeremie Lecomte, Arindam Mandal, and Dilek Hakkani-Tur. 2018. Flexible and scalable state tracking framework for goal-oriented dialogue systems. arXiv preprint, arXiv:1811.12891. 5483 Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the SIGDIAL 2014 Conference, pages 292–299. Association for Computational Linguistics. Young-Bum Kim, Dongchan Kim, Anjishnu Kumar, and Ruhi Sarikaya. 2018. Efficient large-scale neural domain classification with personalized attention. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2214–2224. Association for Computational Linguistics. Nikola Mrkˇsi´c and Ivan Vuli´c. 2018. Fully statistical neural belief tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 108–113. Association for Computational Linguistics. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1777–1788. Association for Computational Linguistics. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. arXiv preprint, arXiv:1812.00899. Osman Ramadan, Paweł Budzianowski, and Milica Gaˇsi´c. 2018. 
Large-scale multi-domain belief tracking with knowledge sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 432–437. Association for Computational Linguistics. Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2780– 2786. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations (ICLR). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1458–1467. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5484–5490 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5484 Robust Zero-Shot Cross-Domain Slot Filling with Example Values Darsh J Shah*1, Raghav Gupta*2, Amir A Fayazi2, and Dilek Hakkani-T¨ur3 1MIT CSAIL, Cambridge, MA 2Google Research, Mountain View, CA 3Amazon Alexa AI, Sunnyvale, CA [email protected], {raghavgupta,amiraf}@google.com, [email protected] Abstract Task-oriented dialog systems increasingly rely on deep learning-based slot filling models, usually needing extensive labeled training data for target domains. Often, however, little to no target domain training data may be available, or the training and target domain schemas may be misaligned, as is common for web forms on similar websites. Prior zero-shot slot filling models use slot descriptions to learn concepts, but are not robust to misaligned schemas. We propose utilizing both the slot description and a small number of examples of slot values, which may be easily available, to learn semantic representations of slots which are transferable across domains and robust to misaligned schemas. Our approach outperforms state-ofthe-art models on two multi-domain datasets, especially in the low-data setting. 1 Introduction Goal-oriented dialog systems assist users with tasks such as finding flights, booking restaurants and, more recently, navigating user interfaces, through natural language interactions. Slot filling models, which identify task-specific parameters/slots (e.g. flight date, cuisine) from user utterances, are key to the underlying spoken language understanding (SLU) systems. Advances in SLU have enabled virtual assistants such as Siri, Alexa and Google Assistant. There is also significant interest in adding third-party functionality to these assistants. However, supervised slot fillers (Young, 2002; Bellegarda, 2014) require abundant labeled training data, more so with deep learning enhancing accuracy at the cost of being data intensive (Mesnil et al., 2015; Kurata et al., 2016). Asterisk (*) denotes equal contribution. Research conducted when all authors were at Google Research. Figure 1: Misaligned schemas for flight booking from kayak.com (top) and southwest.com (bottom): slot name depart in the two schemas refers to departure date and departure city respectively, hence models trained on one schema may falter on the other. Two key challenges with scaling slot fillers to new domains are adaptation and misaligned schemas (here, slot name mismatches). Extent of supervision may vary across domains: there may be ample data for Flights but none for Hotels, requiring models to leverage the former to learn semantics of reusable slots (e.g. time, destination). In addition, schemas for overlapping domains may be incompatible by way of using different names for the same slot or the same name for different slots. This is common with web form filling: two sites in the same domain may have misaligned schemas, as in Figure 1, precluding approaches that rely on schema alignment. Zero-shot slot filling, typically, either relies on slot names to bootstrap to new slots, which may be insufficient for cases like in Figure 1, or uses hardto-build domain ontologies/gazetteers. We counter that by supplying a small number of example values in addition to the slot description to condition the slot filler. 
This avoids negative transfer from misaligned schemas and further helps identify unseen slots while retaining cross-domain transfer ability. Besides, example values for slots can either be crawled easily from existing web forms or specified along with the slots, with little overhead. Given as few as 2 example values per slot, our model surpasses prior work in the zero/few-shot setting on the SNIPS dataset by an absolute 2.9% slot F1, and is robust to misaligned schemas, as experiments on another multi-domain dataset show. 5485 Figure 2: Illustration of the overall model with all inputs and outputs shown. 2 Related Work Settings with resource-poor domains are typically addressed by adapting from resource-rich domains (Blitzer et al., 2006; Pan et al., 2010; Chen et al., 2018; Guo et al., 2018; Shah et al., 2018). To this end approaches such as domain adversarial learning (Liu and Lane, 2017) and multi-task learning (Jaech et al., 2016; Goyal et al., 2018; Siddhant et al., 2018) have been adapted to SLU and related tasks (Henderson et al., 2014). Work targeting domain adaptation specifically for this area includes, modeling slots as hierarchical concepts (Zhu and Yu, 2018) and using ensembles of models trained on data-rich domains (Gaˇsi´c et al., 2015; Kim et al., 2017; Jha et al., 2018). The availability of task descriptions has made zero-shot learning (Norouzi et al., 2013; Socher et al., 2013) popular. In particular, work on zeroshot utterance intent detection has relied on varied resources such as click logs (Dauphin et al., 2013) and manually defined domain ontologies (Kumar et al., 2017), as well as models such as deep structured semantic models (Chen et al., 2016) and capsule networks (Xia et al., 2018). Zero-shot semantic parsing is addressed in Krishnamurthy et al. (2017) and Herzig and Berant (2018) and specifically for SLU utilizing external resources such as label ontologies in Ferreira et al. (2015a,b) and handwritten intent attributes in Yazdani and Henderson (2015); Chen et al. (2015). Our work is closest in spirit to Bapna et al. (2017) and Lee and Jha (2018), who employ textual slot descriptions to scale to unseen intents/slots. Since slots tend to take semantically similar values across utterances, we augment our model with example values, which are easier for developers to define than manual alignments across schemas (Li et al., 2011). 3 Problem Statement We frame our conditional sequence tagging task as follows: given a user utterance with T tokens and a slot type, we predict inside-outside-begin (IOB) tags {y1, y2 . . . yT } using 3-way classification per token, based on if and where the provided slot type occurs in the utterance. Figure 3 shows IOB tag sequences for one positive (slot service, present in the utterance) and one negative (slot timeRange, not present in the utterance) instance each. service O O O B I ↑ ↑ ↑ ↑ ↑ Play Imagine on iHeart Radio ↓ ↓ ↓ ↓ ↓ timeRange O O O O O Figure 3: Example semantic frame with IOB slot annotations for a positive and a negative instance. 4 Model Architecture Figure 2 illustrates our model architecture where a user utterance is tagged for a provided slot. To represent the input slot, along with a textual slot description as in Bapna et al. (2017), we supply a small set of example values for this slot, to provide a more complete semantic representation.1 Detailed descriptions of each component follow. 
Inputs: We use as input dwc-dimensional embeddings for 3 input types: T user utterance tokens {ui ∈Rdwc, 1≤i≤T}, S input slot description tokens {di ∈Rdwc, 1≤i≤S}, and K example values for the slot, with the Nk token embedding for the kth example denoted by {ek i ∈Rdwc, 1≤i≤Nk}. Utterance encoder: We encode the user utterance using a den-dimensional bidirectional GRU recur1Note that the slot description is still needed since example slot values alone cannot distinguish slots which take semantically similar values (e.g. departDate vs returnDate). 5486 Intent Slot Names (Training and Evaluation) AddToPlaylist artist, entityName, musicItem, playlist, playlistOwner BookRestaurant city, cuisine, partySizeNumber, restaurantName, restaurantType, servedDish, spatialRelation, state.. . GetWeather city, conditionDescription, country, geographicPoi, spatialRelation, state, timeRange. . . PlayMusic album, artist, genre, musicItem, playlist, service, sort, track, year RateBook bestRating, objectName, objectPartOfSeriesType, objectSelect, objectType, ratingUnit, ratingValue SearchCreativeWork objectName, objectType FindScreeningEvent locationName, movieName, movieType, objectLocationType, objectType, spatialRelation, timeRange Intent Training Slot Names Evaluation Slot Names BookBus from, to, leaving, returning, travelers, tripType, departureTime from, to, departOn, addReturnTrip, tripType, promoCode, discountOptions, children, adults, seniors FindFlights from, to, depart, return, cabinClass, flightType depart, arrive, departDate, returnDate, searchType, promoCode BookRoom where, checkIn, checkOut, guests, homeType, propertyType, priceRange, amenities location, hotelName, checkIn, checkOut, rooms, roomType, pricePerNight, rating, amenities Table 1: Intents and training/evaluation slot schemas for SNIPS (top) and XSchema (bottom) datasets. rent neural network (RNN) (Chung et al., 2014). We denote the set of per-token RNN hidden states by H = {hi ∈Rden, 1≤i≤T}, which are used as contextual utterance token encodings. H = BiGRU({ui, 1≤i≤T}) (1) Slot description encoder: We obtain an encoding ds ∈Rdwc of the slot description by mean-pooling the embeddings for the S slot description tokens. ds = 1 S S X i=1 di (2) Slot example encoder: We first obtain encodings {ex k ∈Rdwc, 1≤k≤K} for each slot example value by mean-pooling the Nk token embeddings. Then, we compute an attention weighted encoding of all K slot examples {ea i ∈Rdwc, i≤1≤T} for each utterance token, with the utterance token encoding as attention context. Here, αx i ∈RK denotes attention weights over all K slot examples corresponding to the ith utterance token, obtained with general cosine similarity (Luong et al., 2015). ex k = 1 Nk Nk X i=1 ek i , 1≤k≤K (3) αx i = softmax({hiWaex k ∀k}), 1≤i≤T (4) ea i = K X k=1 αx ik × ex k (5) Tagger: We feed the concatenated utterance, slot description and example encodings to a dendimensional bidirectional LSTM. The output hidden states X = {xi ∈Rden, 1≤i≤T} are used for a 3-way IOB tag classification per token. X = BiLSTM({hi ⊕ds ⊕ea i , 1≤i≤T}) (6) yi = softmax(Wtxi + bt), 1≤i≤T (7) Parameters: We use fixed dw=128-dim pretrained word embeddings2 for all tokens. We also train per-character embeddings, fed to a 2-layer convolutional neural network (Kim, 2014) to get a dc=32-dim token embedding. For all inputs, the dwc=160-dim final embedding is the concatenation of the word and char-CNN embeddings. The RNN encoders have hidden state size den=128. All trainable weights are shared across intents and slots. 
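As a concrete reading of Eqs. (3)-(5), the example-value attention can be sketched as below (a minimal PyTorch version; the module and argument names are ours, and the mean-pooling of example tokens is written out explicitly). Its output, the attended example encoding per utterance token, is what gets concatenated with the utterance encoding and the slot-description encoding before the tagger.

```python
import torch
import torch.nn as nn

class SlotExampleAttention(nn.Module):
    """Per-token attention over the K encoded example values (Eqs. 3-5)."""
    def __init__(self, d_utt=128, d_emb=160):
        super().__init__()
        self.W_a = nn.Linear(d_utt, d_emb, bias=False)   # general (bilinear) score

    def forward(self, utt_hidden, example_tokens, example_lengths):
        # utt_hidden:      (B, T, d_utt)    utterance encoder states h_i
        # example_tokens:  (B, K, N, d_emb) token embeddings of the K example values
        # example_lengths: (B, K)           number of tokens N_k in each example
        ex = example_tokens.sum(dim=2) / example_lengths.unsqueeze(-1).float()  # mean-pooled e_k
        scores = torch.einsum('btd,bkd->btk', self.W_a(utt_hidden), ex)         # h_i W_a e_k
        alpha = torch.softmax(scores, dim=-1)                                   # weights over K examples
        return torch.einsum('btk,bkd->btd', alpha, ex)                          # attended e_i
```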
The model relies largely on fixed word embeddings to generalize to new intents/slots. 5 Datasets and Experiments In this section we describe the datasets used for evaluation, baselines compared against, and more details on the experimental setup. Datasets: In order to evaluate cross-domain transfer learning ability and robustness to misaligned schemas, respectively, we use the following two SLU datasets to evaluate all models. • SNIPS: This is a public SLU dataset (Coucke et al., 2018) of crowdsourced user utterances with 39 slots across 7 intents and ∼2000 training instances per intent. Since 11 of these slots are shared (see Table 1), we use this dataset to evaluate cross-domain transfer learning. • XSchema: This is an in-house crowdsourced dataset with 3 intents (500 training instances each). Training and evaluation utterances are annotated with different schemas (Table 1) from real web forms to simulate misaligned schemas. Baselines: We compare with two strong zeroshot baselines: Zero-shot Adaptive Transfer (ZAT) (Lee and Jha, 2018) and Concept Tagger 2https://tfhub.dev/google/nnlm-en-dim128/1 5487 Target training e.g. 0 50 Intent ↓Model → CT ZAT +2Ex LSTM CT ZAT +10Ex AddToPlaylist 53.3 46.8 55.2 59.4 74.4 73.4 76.2* BookRestaurant 45.7 46.6 48.6* 57.5 63.8 63.5 63.6 GetWeather 63.5 60.7 66.0* 75.7 72.1 71.1 77.5* PlayMusic 28.7 30.1 33.8* 49.3 56.4 56.0 58.8 RateBook 24.5 31.0 28.5 85.1* 82.9 83.8 82.2 SearchCreativeWork 24.7 26.7 26.2 52.9 62.8 63.7 65.9 FindScreeningEvent 23.7 19.7 25.5* 60.8 64.9 64.6 67.0* Average 37.7 37.4 40.6* 62.8 68.2 68.0 70.1* Table 2: Slot F1 scores for baselines (CT, ZAT, LSTM) and our best models (with 2 slot values for zero-shot and 10 values for 50 train instances) on SNIPS. Rows represent different train-test splits, defined in Section 5. Our model consistently outperforms the baselines, with ∼3% absolute gain in the zero-shot setting.3 (CT) (Bapna et al., 2017), in addition to a 2layer multi-domain bidirectional LSTM baseline (Hakkani-T¨ur et al., 2016) for non-zero-shot setups. ZAT and CT condition slot filling only on slot descriptions, with ZAT adding slot description attention, char embeddings and CRFs on top of CT. Since labor-intensive long text descriptions are unavailable for our data, we use tokenized slot names in their place, as in Bapna et al. (2017). Experimental Setup: We use SNIPS to test zero/few-shot transfer: for each target intent I, we train on all ∼2000 training instances from intents other than I, and varying amounts of training data for I, evaluating exclusively on I. For XSchema, we train and evaluate on a single intent, specifically evaluating cross-schema performance. We sample positive and negative instances (Figure 3) in a ratio of 1:3. Slot values input during training and evaluation are randomly picked from values taken by the input slot in the relevant domain’s training set, excluding ones that are also present in the evaluation set. In practice, it is usually easy to obtain such example values for each slot either using automated methods (such as crawling from existing web forms) or have them be provided as part of the slot definition, with negligible extra effort. To improve performance on out-of-vocabulary entity names, we randomly replace slot value tokens in utterances and slot examples with a special token, and raise the replacement rate linearly from 0 to 0.3 during training (Rastogi et al., 2018). 
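The slot-value token replacement used for out-of-vocabulary robustness can be sketched as follows (our own illustration; the special-token string and helper names are placeholders of our choosing). The same replacement is applied both to slot-value spans in the utterance and to the input example values.

```python
import random

def replacement_rate(step, total_steps, max_rate=0.3):
    # Linear ramp of the replacement probability from 0 to max_rate over training.
    return max_rate * min(step / float(total_steps), 1.0)

def mask_slot_value_tokens(tokens, slot_value_positions, rate, special='<SLOT_VAL>'):
    """Randomly replace tokens belonging to slot-value spans with a special token."""
    out = list(tokens)
    for i in slot_value_positions:
        if random.random() < rate:
            out[i] = special
    return out
```

Masking value tokens in both the utterance and the examples discourages the model from relying on exact string matches and pushes it to use the surrounding context.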
The final cross-entropy loss, averaged over all utterance tokens, is optimized using ADAM (Kingma and Ba, 2014) for 150K training steps. Target training e.g. 0 50 Intent ↓Model → CT ZAT +10Ex CT ZAT +10Ex BookBus 70.9 70.1 74.1* 86.8 85.2 89.4 FindFlights 43.5 44.8 53.2* 62.3 59.7 69.2* BookRoom 23.6 23.4 33.0* 49.7 52.1 58.7* Table 3: Slot F1 scores on the XSchema dataset4. We train and evaluate on a single intent, but with different schemas, thus precluding the LSTM baseline. Slot F1 score (Sang and Buchholz, 2000) is our final metric, reported after 3-fold cross-validation. 6 Results For the SNIPS dataset, Table 2 shows slot F1 scores for our model trained with randomlypicked slot value examples in addition to slot descriptions vis-`a-vis the baselines. Our best model consistently betters the zero-shot baselines CT and ZAT, which use only slot descriptions, overall and individually for 5 of 7 intents. The average gain over CT and ZAT is ∼3% in the zero-shot case. In the low-data setting, all zero-shot models gain ≥5% over the multi-domain LSTM baseline (with the 10-example-added model further gaining ∼2% on CT/ZAT). All models are comparable when all target data is used for training, with F1 scores of 87.8% for the LSTM, and 86.9% and 87.2% for CT and our model with 10 examples respectively. Table 3 shows slot F1 scores for XSchema data. Our model trained with 10 example values is robust to varying schemas, with gains of ∼3% on BookBus, and ∼10% on FindFlights and BookRoom in the zero-shot setting. For both datasets, as more training data for the target domain is added, the baselines and our approach perform more similarly. For instance, our approach improves upon the baseline by ∼0.2% on SNIPS when 2000 training examples are used for the target domain, affirming that adding example values does not hurt in the regular setting. Results by slot type: Example values help the most with limited-vocabulary slots not encountered during training: our approach gains ≥20% on slots such as conditionDescription, bestRating, service (present in intents GetWeather, RateBook, PlayMusic respectively). Intents PlayMusic and GetWeather, with several limited-vocabulary slots, see significant gains in the zero-shot setting. 3Asterisk (*) indicates a statistically significant gain over the second-best model as per McNemar’s test (p < 0.05). 5488 Figure 4: Variation of overall slot F1 score with number of slot value examples input to the model, with varying number of target intent training instances for SNIPS. For compositional open-vocabulary slots (city, cuisine), our model also compares favorably - e.g. 53% vs 27% slot F1 for unseen slot cuisine (intent BookRestaurant) - since the semantic similarity between entity and possible values is easier to capture than between entity and description. Slots with open, non-compositional vocabularies (such as objectName, entityName) are hard to infer from slot descriptions or examples, even if these are seen during training but in other contexts, since utterance patterns are lost across intents. All models are within 5% slot F1 of each other for such slots. This is also observed for unseen openvocabulary slots in the XSchema dataset (such as promoCode and hotelName). For XSchema experiments, our model does significantly better on slots which are confusing across schemas (evidenced by gains of >20% on depart in FindFlights, roomType in BookRoom). Effect of number of examples: Figure 4 shows the number of slot value examples used versus performance on SNIPS. 
For the zero-shot case, using 2 example values per slot works best, possibly due to the model attending to perfect matches during training, impeding generalization when more example values are used. In the few-shot and normal-data settings, using more example values helps accuracy, but the gain drops with more target training data. For XSchema, in contrast, adding more example values consistently improves performance, possibly due to more slot name mistmatches in the dataset. We avoid using over 10 example values, in contrast to prior work (Krishnamurthy et al., 2017; Naik et al., 2018) since it may be infeasible to easily provide or extract a large number of values for unseen slots. Ablation: Slot replacement offsets overfitting in our model, yielding gains of 2−5% for all models incl. baselines. Fine-tuning the pretrained word embeddings and removing character embeddings yielded losses of ∼1%. We tried more complex phrase embeddings for the slot description and example values, but since both occur as short phrases in our data, a bag-of-words approach worked well. Comparison with string matching: A training and evaluation setup including example values for slots may lend itself well to adding string matching-based slot fillers for suitable slots (for example, slots taking numeric values or having a small set of possible values). However, this is not applicable to our exact setting since we ensure that the slot values to be tagged during evaluation are never provided as input during training or evaluation. In addition, it is difficult to distinguish two slots with the same expected semantic type using such an approach, such as for slots ratingValue and bestRating from SNIPS intent RateBook. 7 Conclusions and Future Work We show that extending zero-shot slot filling models to use a small number of easily obtained example values for slots, in addition to textual slot descriptions, is a scalable solution for zero/few-shot slot filling tasks on similar and heterogenous domains, while resistant to misaligned overlapping schemas. Our approach surpasses prior state-ofthe-art models on two multi-domain datasets. The approach can, however, be inefficient for intents with many slots, as well as potentially sacrificing accuracy in case of overlapping predictions. Jointly modeling multiple slots for the task is an interesting future direction. Another direction would be to incorporate zero-shot entity recognition (Guerini et al., 2018), thus eliminating the need for example values during inference. In addition, since high-quality datasets for downstream tasks in dialogue systems (such as dialogue state tracking and dialogue management) are even more scarce, exploring zero-shot learning approaches to these problems is of immense value in building generalizable dialogue systems. Acknowledgements We would like to thank Ankur Bapna for the insightful discussions that have notably shaped this work. We would also like to thank the Deep Dialogue team at Google Research for their support. 5489 References Ankur Bapna, G¨okhan T¨ur, Dilek Hakkani-T¨ur, and Larry P. Heck. 2017. Towards zero-shot frame semantic parsing for domain scaling. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017. Jerome R Bellegarda. 2014. Spoken language understanding for natural interaction: The siri experience. In Natural Interaction with Robots, Knowbots and Smartphones, pages 3–14. Springer. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. 
Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120–128. Association for Computational Linguistics. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep averaging networks for cross-lingual sentiment classification. Transactions of the Association for Computational Linguistics, 6:557–570. Yun-Nung Chen, Dilek Hakkani-T¨ur, and Xiaodong He. 2016. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 6045–6049. IEEE. Yun-Nung Chen, William Yang Wang, Anatole Gershman, and Alexander Rudnicky. 2015. Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 483–494. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Alice Coucke, Alaa Saade, Adrien Ball, Th´eodore Bluche, Alexandre Caulier, David Leroy, Cl´ement Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. Yann N Dauphin, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2013. Zero-shot learning for semantic utterance classification. arXiv preprint arXiv:1401.0509. Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lefevre. 2015a. Online adaptative zero-shot learning spoken language understanding using wordembedding. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5321–5325. IEEE. Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lefevre. 2015b. Zero-shot semantic parser for spoken language understanding. In Sixteenth Annual Conference of the International Speech Communication Association. M Gaˇsi´c, N Mrkˇsi´c, Pei-hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Policy committee for adaptation in multi-domain spoken dialogue systems. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 806–812. IEEE. Anuj Goyal, Angeliki Metallinou, and Spyros Matsoukas. 2018. Fast and scalable expansion of natural language understanding functionality for intelligent agents. arXiv preprint arXiv:1805.01542. Marco Guerini, Simone Magnolini, Vevake Balaraman, and Bernardo Magnini. 2018. Toward zeroshot entity recognition in task-oriented conversational agents. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 317–326. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694–4703, Brussels, Belgium. Association for Computational Linguistics. D. Hakkani-T¨ur, G. Tur, A. Celikyilmaz, Y.-N. Chen, J. Gao, L. Deng, and Y.-Y. Wang. 2016. Multidomain joint semantic frame parsing using bidirectional rnn-lstm. In Proceedings of Interspeech. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The third dialog state tracking challenge. 
In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 324–329. IEEE. Jonathan Herzig and Jonathan Berant. 2018. Decoupling structure and lexicon for zero-shot semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1619–1629. A. Jaech, L. Heck, and M.Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. In Proceedings of Interspeech. Rahul Jha, Alex Marin, Suvamsh Shivaprasad, and Imed Zitouni. 2018. Bag of experts architectures for model reuse in conversational language understanding. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), volume 3, pages 153–161. 5490 Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. Domain attention with an ensemble of experts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 643–653. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526. Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, and Bj¨orn Hoffmeister. 2017. Zeroshot learning across heterogeneous overlapping domains. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 2914–2918. Gakuto Kurata, Bing Xiang, Bowen Zhou, and Mo Yu. 2016. Leveraging sentence-level information with encoder lstm for semantic slot filling. arXiv preprint arXiv:1601.01530. Sungjin Lee and Rahul Jha. 2018. Zero-shot adaptive transfer for conversational language understanding. arXiv preprint arXiv:1808.10059. Xiao Li, Ye-Yi Wang, and Gokhan Tur. 2011. Multitask learning for spoken language understanding with shared slots. In Twelfth Annual Conference of the International Speech Communication Association. Bing Liu and Ian Lane. 2017. Multi-domain adversarial learning for slot filling in spoken language understanding. arXiv preprint arXiv:1711.11310. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Gr´egoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539. Chetan Naik, Arpit Gupta, Hancheng Ge, Mathias Lambert, and Ruhi Sarikaya. 2018. Contextual slot carryover for disparate schemas. Proc. Interspeech 2018, pages 596–600. Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. 2013. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650. 
Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th international conference on World wide web, pages 751–760. ACM. Abhinav Rastogi, Raghav Gupta, and Dilek HakkaniTur. 2018. Multi-task learning for joint language understanding and dialogue state tracking. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 376–384. Erik F Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task: Chunking. In Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natural language learning-Volume 7, pages 127–132. Association for Computational Linguistics. Darsh Shah, Tao Lei, Alessandro Moschitti, Salvatore Romeo, and Preslav Nakov. 2018. Adversarial domain adaptation for duplicate question detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1056–1063, Brussels, Belgium. Association for Computational Linguistics. Aditya Siddhant, Anuj Goyal, and Angeliki Metallinou. 2018. Unsupervised transfer learning for spoken language understanding in intelligent agents. arXiv preprint arXiv:1811.05370. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems, pages 935–943. Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip Yu. 2018. Zero-shot user intent detection via capsule neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3090–3099. Association for Computational Linguistics. Majid Yazdani and James Henderson. 2015. A model of zero-shot learning of spoken language understanding. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 244–249. Association for Computational Linguistics. Steve Young. 2002. Talking to machines (statistically speaking). In Seventh International Conference on Spoken Language Processing. Su Zhu and Kai Yu. 2018. Concept transfer learning for adaptive language understanding. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 391–399.
2019
547
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5491–5496 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5491 Deep Unknown Intent Detection with Margin Loss Ting-En Lin, Hua Xu State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, China Beijing National Research Center for Information Science and Technology [email protected], [email protected] Abstract Identifying the unknown (novel) user intents that have never appeared in the training set is a challenging task in the dialogue system. In this paper, we present a two-stage method for detecting unknown intents. We use bidirectional long short-term memory (BiLSTM) network with the margin loss as the feature extractor. With margin loss, we can learn discriminative deep features by forcing the network to maximize inter-class variance and to minimize intra-class variance. Then, we feed the feature vectors to the density-based novelty detection algorithm, local outlier factor (LOF), to detect unknown intents. Experiments on two benchmark datasets show that our method can yield consistent improvements compared with the baseline methods. 1 Introduction In the dialogue system, it is essential to identify the unknown intents that have never appeared in the training set. We can use those unknown intents to discover potential business opportunities. Besides, it can provide guidance for developers and accelerate the system development process. However, it is also a challenging task. On the one hand, it is often difficult to obtain prior knowledge about unknown intents due to lack of examples. On the other hand, it is hard to estimate the exact number of unknown intents. In addition, since user intents are strongly guided by prior knowledge and context, modeling high-level semantic concepts of intent is still problematic. Few previous studies are related to unknown intents detection. For example, Kim and Kim (2018) try to optimize the intent classifier and out-ofdomain detector jointly, but out-of-domain samples are still needed. The generative method (Yu et al., 2017) try to generate positive and negative examples from known classes by using adversarial learning to augment training data. However, the method does not work well in the discrete data space like text, and a recent study (Nalisnick et al., 2019) suggests that this approach may not work well on real-world data. Brychcin and Kr´al try to model intents through clustering. Still, it does not make good use of prior knowledge provided by known intents, and clustering results are usually unsatisfactory. Although there is a lack of prior knowledge about unknown intents, we can still leverage the advantage of known label information. Scheirer et al. (2013); Fei and Liu (2016) suggest that a m-class classifier should be able to reject examples from unknown class while performing mclass classification tasks. The reason is that not all test classes have appeared in the training set, which forms a (m+1)-class classification problem where the (m+1)th class represents the unknown class. This task is called open-world classification problem. The main idea is that if an example dissimilar to any of known intents, it is considered as the unknown. In this case, we use known intents as prior knowledge to detect unknown intents and simplify the problem by grouping unknown intents into a single class. 
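To make the open-world rejection idea concrete, here is a schematic Python sketch of its simplest instantiation: thresholding an m-class classifier's confidence and routing low-confidence inputs to the (m+1)-th, unknown class. This corresponds to the MSP baseline used later for comparison, not to the LOF-based method proposed in this paper, and the threshold value is only illustrative.

```python
import numpy as np

def reject_unknown(probs, threshold=0.5):
    """Map an m-class classifier's output to an (m+1)-class decision.

    probs: array of shape (num_samples, m) with softmax scores over known intents.
    Returns the predicted known-intent index, or -1 for the unknown class when
    the classifier is not confident enough about any known intent.
    """
    best = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, best, -1)

# Example: the second utterance is rejected as an unknown intent.
print(reject_unknown(np.array([[0.9, 0.05, 0.05],
                               [0.4, 0.35, 0.25]])))   # -> [ 0 -1]
```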
Bendale and Boult (2016) further extend the idea to deep neural networks (DNNs). Shu et al. (2017) achieve state-of-the-art performance by replacing the softmax layer of a convolutional neural network (CNN) with a 1-vs-rest layer consisting of sigmoid functions, and by tightening the decision threshold of the probability output for detection.

DNNs such as BiLSTM (Goo et al., 2018; Wang et al., 2018c) have demonstrated the ability to learn high-level semantic features of intents. Nevertheless, it is still challenging to detect unknown intents when they are semantically similar to known intents. The reason is that softmax loss only focuses on whether the sample is correctly classified, and does not require intra-class compactness and inter-class separation.

Figure 1: The architecture of the proposed two-stage method. We acquire intent representations by training an intent classifier on known intents with BiLSTM and learn discriminative deep features through LMCL. Then, we use LOF to detect unknown intents during the testing stage.

Therefore, we replace softmax loss with margin loss to learn more discriminative deep features. The approach is widely used in face recognition (Liu et al., 2016, 2017; Ranjan et al., 2017). It forces the model to not only classify correctly but also maximize inter-class variance and minimize intra-class variance. Concretely, we use large margin cosine loss (LMCL) (Wang et al., 2018b) to accomplish it. It formulates the softmax loss as a cosine loss with L2 normalization and further maximizes the decision margin in the angular space. Finally, we feed the discriminative deep features to a density-based novelty detection algorithm, local outlier factor (LOF), to detect unknown intents.

We summarize the contributions of this paper as follows. First, we propose a two-stage method for unknown intent detection with BiLSTM. Second, we introduce margin loss on BiLSTM to learn discriminative deep features, which is suitable for the detection task. Finally, experiments conducted on two benchmark dialogue datasets show the effectiveness of the proposed method.

2 Proposed Method

2.1 BiLSTM

To begin with, we use BiLSTM (Mesnil et al., 2015) to train the intent classifier and use it as the feature extractor. Figure 1 shows the architecture of the proposed method. Given an utterance with maximum word sequence length $\ell$, we transform a sequence of input words $w_{1:\ell}$ into $m$-dimensional word embeddings $v_{1:\ell}$, which are used by the forward and backward LSTM to produce the feature representation $x$:

$\overrightarrow{x}_t = \mathrm{LSTM}(v_t, \overrightarrow{c}_{t-1}), \quad \overleftarrow{x}_t = \mathrm{LSTM}(v_t, \overleftarrow{c}_{t+1}), \quad x = [\overrightarrow{x}_\ell; \overleftarrow{x}_1], \qquad (1)$

where $v_t$ denotes the word embedding of the input at time step $t$, $\overrightarrow{x}_t$ and $\overleftarrow{x}_t$ are the output vectors of the forward and backward LSTM respectively, and $\overrightarrow{c}_t$ and $\overleftarrow{c}_t$ are the cell state vectors of the forward and backward LSTM respectively. We concatenate the last output vector of the forward LSTM, $\overrightarrow{x}_\ell$, and the first output vector of the backward LSTM, $\overleftarrow{x}_1$, into $x$ as the sentence representation. It captures the high-level semantic concepts learned by the model. We take $x$ as the input of the next stage.

2.2 Large Margin Cosine Loss (LMCL)

At the same time, we replace the softmax loss of BiLSTM with LMCL (Wang et al., 2018b).
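As a minimal sketch of the Section 2.1 feature extractor, the following PyTorch module builds the sentence representation x by concatenating the forward LSTM output at the last step with the backward LSTM output at the first step; the embedding and hidden sizes are illustrative, and padding is ignored for brevity.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Encode an utterance into x = [forward last output ; backward first output] (Eq. 1)."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, word_ids):                 # word_ids: (batch, seq_len)
        v = self.embedding(word_ids)             # (batch, seq_len, emb_dim)
        outputs, _ = self.lstm(v)                # (batch, seq_len, 2 * hidden_dim)
        half = outputs.size(2) // 2
        fwd_last = outputs[:, -1, :half]         # forward LSTM output at the final step
        bwd_first = outputs[:, 0, half:]         # backward LSTM output at the first step
        return torch.cat([fwd_last, bwd_first], dim=-1)   # sentence representation x
```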
We define LMCL as follows:

$L_{LMC} = \frac{1}{N}\sum_{i} -\log \frac{e^{s\cdot(\cos(\theta_{y_i,i})-m)}}{e^{s\cdot(\cos(\theta_{y_i,i})-m)} + \sum_{j \neq y_i} e^{s\cdot\cos\theta_{j,i}}}, \qquad (2)$

constrained by

$\cos(\theta_{j,i}) = W_j^T x_i, \quad W = \frac{W^*}{\lVert W^* \rVert}, \quad x = \frac{x^*}{\lVert x^* \rVert}, \qquad (3)$

where $N$ denotes the number of training samples, $y_i$ is the ground-truth class of the $i$-th sample, $s$ is the scaling factor, $m$ is the cosine margin, $W_j$ is the weight vector of the $j$-th class, and $\theta_j$ is the angle between $W_j$ and $x_i$. LMCL transforms the softmax loss into a cosine loss by applying L2 normalization on both features and weight vectors. It further maximizes the decision margin in the angular space. With normalization and the cosine margin, LMCL forces the model to maximize inter-class variance and to minimize intra-class variance. Then, we use the model as the feature extractor to produce discriminative intent representations.

2.3 Local Outlier Factor (LOF)

Finally, because the discovery of unknown intents is closely related to the context, we feed the discriminative deep features $x$ to the LOF algorithm (Breunig et al., 2000), which detects unknown intents based on local density. We compute LOF as follows:

$LOF_k(A) = \frac{\sum_{B \in N_k(A)} \frac{lrd(B)}{lrd(A)}}{|N_k(A)|}, \qquad (4)$

where $N_k(A)$ denotes the set of $k$-nearest neighbors and $lrd$ denotes the local reachability density. We define $lrd$ as follows:

$lrd_k(A) = \frac{|N_k(A)|}{\sum_{B \in N_k(A)} \mathrm{reachdist}_k(A, B)}, \qquad (5)$

where $lrd_k(A)$ denotes the inverse of the average reachability distance between object $A$ and its neighbors. We define $\mathrm{reachdist}_k(A, B)$ as follows:

$\mathrm{reachdist}_k(A, B) = \max\{k\text{-dist}(B), d(A, B)\}, \qquad (6)$

where $d(A, B)$ denotes the distance between $A$ and $B$, and $k\text{-dist}(B)$ denotes the distance of object $B$ to its $k$-th nearest neighbor. If an example's local density is significantly lower than that of its $k$-nearest neighbors, it is more likely to be considered an unknown intent.

3 Experiments

3.1 Datasets

We have conducted experiments on two publicly available benchmark dialogue datasets, SNIPS and ATIS (Tür et al., 2010). The detailed statistics are shown in Table 1.

Dataset | Classes | Vocabulary | #Training | #Validation | #Test | Class distribution
SNIPS | 7 | 11,971 | 13,084 | 700 | 700 | Balanced
ATIS | 18 | 938 | 4,978 | 500 | 893 | Imbalanced

Table 1: Statistics of the SNIPS and ATIS datasets. # indicates the total number of utterances.

SNIPS1: SNIPS is a personal voice assistant dataset which contains 7 types of user intents across different domains.

ATIS (Airline Travel Information System)2: The ATIS dataset contains recordings of people making flight reservations, with 18 types of user intent in the flight domain.

3.2 Baselines

We compare our method with state-of-the-art methods and a variant of the proposed method.

1. Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2016): This baseline treats the maximum softmax probability of a sample as its score; if a sample does not belong to any known intent, its score will be lower. We apply a confidence threshold on the score as the simplest baseline, where the threshold is set to 0.5.

2. DOC (Shu et al., 2017): It is the state-of-the-art method in the field of open-world classification. It replaces softmax with the sigmoid activation function as the final layer. It further tightens the decision boundary of the sigmoid function by calculating a confidence threshold for each class through a statistical approach.

3. DOC (Softmax): A variant of DOC. It replaces the sigmoid activation function with softmax.
4. LOF (Softmax): A variant of the proposed method for the ablation study. We use softmax loss to train the feature extractor rather than LMCL.

1 https://github.com/snipsco/nlubenchmark/tree/master/2017-06-custom-intent-engines
2 https://github.com/yvchen/JointSLU/tree/master/data

3.3 Experimental Settings

We follow the validation setting in (Fei and Liu, 2016; Shu et al., 2017) by keeping some classes in training as unknown and integrating them back during testing. We then vary the proportion of known classes in the training set over 25%, 50%, and 75%, and use all classes for testing. To conduct a fair evaluation on the imbalanced dataset, we randomly select known classes by weighted random sampling without replacement in the training set. If a class has more examples, it is more likely to be chosen as a known class; meanwhile, classes with fewer examples still have a chance to be selected. Other classes are regarded as unknown, and we remove them from the training and validation sets.

We initialize the embedding layer with GloVe (Pennington et al., 2014) pre-trained word vectors3. For the BiLSTM model, we set the output dimension to 128 and the maximum number of epochs to 200 with early stopping. For LMCL and LOF, we follow the original settings in their papers. We use macro F1-score as the evaluation metric and report the average result over 10 runs. We set the scaling factor s to 30 and the cosine margin m to 0.35, as recommended by Wang et al. (2018a).

3 http://nlp.stanford.edu/projects/glove/

3.4 Results and Discussion

We show the experimental results in Table 2.

Model | SNIPS 25% | SNIPS 50% | SNIPS 75% | ATIS 25% | ATIS 50% | ATIS 75%
MSP | 0.0 | 6.2 | 8.3 | 8.1 | 15.3 | 17.2
DOC | 72.5 | 67.9 | 63.9 | 61.6 | 62.8 | 37.7
DOC (Softmax) | 72.8 | 65.7 | 61.8 | 63.6 | 63.3 | 38.7
LOF (Softmax) | 76.0 | 69.4 | 65.8 | 67.3 | 61.8 | 38.9
LOF (LMCL) | 79.2 | 84.1 | 78.8 | 69.6 | 63.4 | 39.6

Table 2: Macro F1-score of unknown intent detection when different proportions (25%, 50% and 75%) of classes are treated as known intents on the SNIPS and ATIS datasets.

Firstly, our method consistently performs better than all baselines in all settings. Compared with DOC, our method improves the macro F1-score on SNIPS by 6.7%, 16.2% and 14.9% in the 25%, 50%, and 75% settings, respectively. This confirms the effectiveness of our two-stage approach.

Secondly, our method is also better than LOF (Softmax). In Figure 2, we use t-SNE (Maaten and Hinton, 2008) to visualize the deep features learned with softmax and with LMCL. We can see that the deep features learned with LMCL are intra-class compact and inter-class separable, which is beneficial for novelty detection algorithms based on local density.

Figure 2: Visualization of deep features learned with softmax and LMCL on the SNIPS dataset.

Thirdly, we observe that on the ATIS dataset, the performance of unknown intent detection drops dramatically as the number of known intents increases. We think the reason is that the intents of ATIS all belong to the same domain and are very similar in semantics (e.g., flight and flight no). The semantics of the unknown intents can easily overlap with the known intents, which leads to the poor performance of all methods.

Finally, compared with ATIS, our approach improves even more on SNIPS. Since the intents of SNIPS originate from different domains, the DNN tends to learn a simple decision function when the known intents are dissimilar to each other.
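To make the evaluated two-stage pipeline concrete, below is a minimal sketch of the LMCL objective of Equation 2 and one plausible way to run the second-stage LOF detection, assuming PyTorch and scikit-learn's LocalOutlierFactor; s and m mirror the settings above, and all other names are illustrative.

```python
import torch.nn.functional as F
from sklearn.neighbors import LocalOutlierFactor

def lmcl_loss(features, class_weights, labels, s=30.0, m=0.35):
    """Large margin cosine loss (Eq. 2): scaled cosine logits with a margin on the true class."""
    cos = F.normalize(features, dim=1) @ F.normalize(class_weights, dim=1).t()
    margin = m * F.one_hot(labels, num_classes=cos.size(1)).float()
    return F.cross_entropy(s * (cos - margin), labels)

def detect_unknown(train_feats, test_feats, k=20):
    """Stage 2: fit LOF on known-intent features, flag low-density test points as unknown."""
    lof = LocalOutlierFactor(n_neighbors=k, novelty=True)
    lof.fit(train_feats)                      # numpy array of deep features of known intents
    return lof.predict(test_feats) == -1      # True where an example is predicted unknown
```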
By replacing the softmax loss with the margin loss, we can push the network to further reduce the intra-class variance and the inter-class variance, thus improving the robustness of the feature extractor. 4 Conclusion In this paper, we proposed a two-stage method for unknown intent detection. Firstly, we train a BiLSTM classifier as the feature extractor. Secondly, we replace softmax loss with margin loss to learn discriminative deep features by forcing the network to maximize inter-class variance and to minimize intra-class variance. Finally, we detect unknown intents through the novelty detection algorithm. We also believe that broader families of anomaly detection algorithms are also applicable to our method. Extensive experiments conducted on two benchmark datasets show that our method can yield consistent improvements compared with the baseline methods. In future work, we plan to design a solution that can identify the unknown intent from known intents and cluster the unknown intents in an end-to-end fashion. Acknowledgments This paper is funded by National Natural Science Foundation of China (Grant No: 61673235) and National Key R&D Program Projects of China (Grant No: 2018YFC1707600). We would like to thank the anonymous reviewers and Yingwai Shiu for their valuable feedback. References Abhijit Bendale and Terrance E. Boult. 2016. Towards open set deep networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1563–1572. Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and J¨org Sander. 2000. Lof: identifying densitybased local outliers. In ACM sigmod record, volume 29, pages 93–104. Tomas Brychcin and Pavel Kr´al. Unsupervised dialogue act induction using gaussian mixtures. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, pages 485–490. Geli Fei and Bing Liu. 2016. Breaking the closed world assumption in text classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 506–514. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753–757. Dan Hendrycks and Kevin Gimpel. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136. Joo-Kyung Kim and Young-Bum Kim. 2018. Joint learning of domain classification and out-of-domain detection with dynamic class weighting for satisficing false acceptance rates. In Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018, pages 556–560. Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. 2017. Sphereface: Deep hypersphere embedding for face recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6738–6746. Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. 2016. Large-margin softmax loss for convolutional neural networks. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 1924, 2016, pages 507–516. 
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. 5496 Gr´egoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539. Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. 2019. Do deep generative models know what they don’t know? In International Conference on Learning Representations. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532– 1543. Rajeev Ranjan, Carlos D. Castillo, and Rama Chellappa. 2017. L2-constrained softmax loss for discriminative face verification. CoRR, abs/1703.09507. Walter J. Scheirer, Anderson Rocha, Archana Sapkota, and Terrance E. Boult. 2013. Toward open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:1757–1772. Lei Shu, Hu Xu, and Bing Liu. 2017. Doc: Deep open classification of text documents. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2911–2916. G¨okhan T¨ur, Dilek Z. Hakkani-T¨ur, and Larry P. Heck. 2010. What is left to be understood in atis? 2010 IEEE Spoken Language Technology Workshop, pages 19–24. Feng Wang, Jian Cheng, Weiyang Liu, and Haijun Liu. 2018a. Additive margin softmax for face verification. IEEE Signal Processing Letters, 25(7):926– 930. Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. 2018b. Cosface: Large margin cosine loss for deep face recognition. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 5265–5274. Yu Wang, Yilin Shen, and Hongxia Jin. 2018c. A bimodel based rnn semantic frame parsing model for intent detection and slot filling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 309–314. Yang Yu, Wei-Yang Qu, Nan Li, and Zimin Guo. 2017. Open-category classification by adversarial sample generation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, pages 3357–3363.
2019
548
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5497–5502 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5497 Modeling Semantic Relationship in Multi-turn Conversations with Hierarchical Latent Variables Lei Shen1,2 Yang Feng1,2∗ Haolan Zhan2,3 1Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China 3State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China {shenlei17z, fengyang}@ict.ac.cn [email protected] Abstract Multi-turn conversations consist of complex semantic structures, and it is still a challenge to generate coherent and diverse responses given previous utterances. It’s practical that a conversation takes place under a background, meanwhile, the query and response are usually most related and they are consistent in topic but also different in content. However, little work focuses on such hierarchical relationship among utterances. To address this problem, we propose a Conversational Semantic Relationship RNN (CSRR) model to construct the dependency explicitly. The model contains latent variables in three hierarchies. The discourse-level one captures the global background, the pair-level one stands for the common topic information between query and response, and the utterance-level ones try to represent differences in content. Experimental results show that our model significantly improves the quality of responses in terms of fluency, coherence and diversity compared to baseline methods. 1 Introduction Inspired by the observation that real-world human conversations are usually multi-turn, some studies have focused on multi-turn conversations and taken context (history utterances in previous turns) into account for response generation. How to model the relationship between the response and context is essential to generate coherent and logical conversations. Currently, the researchers employ some hierarchical architectures to model the relationship. Serban et al. (2016) use a context RNN to integrate historical information, Tian et al. (2017) sum up all utterances weighted by the similarity score between an utterance and the query, while Zhang et al. (2018) apply attention mechanism on history utterances. Besides, Xing et al. ∗Corresponding Author (2018) add a word-level attention to capture finegrained features. In practice, we usually need to understand the meaning of utterances and capture their semantic dependency, not just word-level alignments (Luo et al., 2018). As shown in Table 1, this short conversation is about speaker A asks the current situation of speaker B. At the beginning, they talk about B’s position. Then in the last two utterances, both speakers think about the way for B to come back. A mentions “umbrella”, while B wants A to “pick him/her up”. What’s more, there is no “word-to-word” matching in query and response. Unfortunately, the aforementioned hierarchical architectures do not model the meaning of each utterance explicitly and has to summarize the meaning of utterances on the fly during generating the response, and hence there is no guarantee that the inferred meaning is adequate to the original utterance. 
To address this problem, variational autoencoders (VAEs) (Kingma and Welling, 2014) are introduced to learn the meaning of utterances explicitly, and a reconstruction loss is employed to make sure the learned meaning is faithful to the corresponding utterance. Besides, more variation is introduced at the utterance level to help generate more diverse responses.

A: Where are you?
B: I'm stuck in my office with rain.
A: Didn't you bring your umbrella?
B: No. Please come and pick me up.

Table 1: An example of the semantic relationship in a multi-turn conversation.

However, all these frameworks ignore the practical situation that a conversation usually takes place under a background, with two speakers communicating interactively, and that the query is the most relevant utterance to the response. Hence we need to pay more attention to the relationship between query and response. To generate a coherent and engaging conversation, query and response should be consistent in topic and have some differences in content; the logical connection between them makes sure the conversation can go on smoothly.

On these grounds, we propose a novel Conversational Semantic Relationship RNN (CSRR) to explicitly learn the semantic dependency in multi-turn conversations. CSRR employs hierarchical latent variables based on VAEs to represent the meaning of utterances and meanwhile learns the relationship between query and response. Specifically, CSRR draws the background of the conversation with a discourse-level latent variable, then models the consistent semantics between query and response, e.g., the topic, with a common latent variable shared by the query and response pair, and finally models the specific meaning of the query and the response with a separate latent variable for each of them to capture the content difference. With these latent variables, we can learn the relationship between utterances hierarchically, especially the logical connection between the query and response. Most importantly, the latent variables are constrained to reconstruct the original utterances according to the hierarchical structure we define, making sure the semantics flow through the latent variables without any loss. Experimental results on two public datasets show that our model outperforms baseline methods in generating high-quality responses.

2 Approach

Given n input messages $\{u_t\}_{t=0}^{n-1}$, we consider the last one, $u_{n-1}$, as the query and the others as the context. $u_n$ denotes the corresponding response. The proposed model is shown in Figure 1. We add latent variables in three hierarchies to HRED (Serban et al., 2016): $z_c$ is used to control the whole background in which the conversation takes place, $z_p$ accounts for the consistency of topic between the query and response pair, and $z_q$ and $z_r$ model the content difference in each of them, respectively. For simplicity of notation, we use $n-1$ and $n$ in place of $q$ and $r$.

2.1 Context Representation

Each utterance $u_t$ is encoded into a vector $v_t$ by a bidirectional GRU (BiGRU), $f^{utt}_\theta$:

$v_t = f^{utt}_\theta(u_t) \qquad (1)$

Figure 1: Graphical model of CSRR. $u_t$ is the t-th utterance; $h^c_t$ encodes context information up to time t.

For the inter-utterance representation, we follow the approach proposed by Park et al. (2018), which is calculated as:

$h^c_t = \begin{cases} \mathrm{MLP}_\theta(z_c), & \text{if } t = 0 \\ f^{ctx}_\theta(h^c_{t-1}, v_{t-1}, z_c), & \text{otherwise} \end{cases} \qquad (2)$

where $f^{ctx}_\theta(\cdot)$ is the activation function of a GRU.
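A minimal sketch of the context representation in Equations 1-2, assuming PyTorch; concatenating $v_{t-1}$ and $z_c$ as the GRU input is one plausible reading of $f^{ctx}_\theta$, and all sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class ContextRNN(nn.Module):
    """h^c_0 = MLP(z_c); h^c_t = GRUCell([v_{t-1}; z_c], h^c_{t-1})  (Eq. 2)."""
    def __init__(self, utt_dim, z_dim, ctx_dim):
        super().__init__()
        self.init_mlp = nn.Sequential(nn.Linear(z_dim, ctx_dim), nn.Tanh())
        self.cell = nn.GRUCell(utt_dim + z_dim, ctx_dim)

    def forward(self, utt_vectors, z_c):
        # utt_vectors: list of v_t from the utterance-level BiGRU (Eq. 1); z_c: (batch, z_dim)
        h = self.init_mlp(z_c)                       # h^c_0
        states = [h]
        for v in utt_vectors[:-1]:                   # h^c_t depends on v_{t-1}, h^c_{t-1}, z_c
            h = self.cell(torch.cat([v, z_c], dim=-1), h)
            states.append(h)
        return states                                # h^c_0, ..., h^c_{len(utt_vectors)-1}
```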
$z_c$ is the discourse-level latent variable with a standard Gaussian prior distribution, that is:

$p_\theta(z_c) = \mathcal{N}(z|0, I) \qquad (3)$

For the inference of $z_c$, we use a BiGRU $f_c$ to run over all utterance vectors $\{v_t\}_{t=0}^{n}$ in the training set ($\{v_t\}_{t=0}^{n-1}$ in the test set):

$q_\phi(z_c|v_0, \ldots, v_n) = \mathcal{N}(z|\mu_c, \sigma_c I) \qquad (4)$

where

$v_c = f_c(v_0, \ldots, v_n) \qquad (5)$
$\mu_c = \mathrm{MLP}_\phi(v_c) \qquad (6)$
$\sigma_c = \mathrm{Softplus}(\mathrm{MLP}_\phi(v_c)) \qquad (7)$

$\mathrm{MLP}(\cdot)$ is a feed-forward network, and the Softplus function is a smooth approximation to the ReLU function that can be used to ensure positiveness (Park et al., 2018; Serban et al., 2017; Chung et al., 2015).

2.2 Query-Response Relationship Modeling

According to VAEs, texts can be generated from latent variables (Shen et al., 2017). Motivated by this, we add two kinds of latent variables: a pair-level one, and utterance-level ones for the query and response. As depicted in Figure 1, $h^c_{n-1}$ encodes all context information from utterance $u_0$ to $u_{n-2}$. We use $z_p$ to model the topic in the query and response pair. Under the same topic, there are always some differences in content between query and response, which are represented by $z_q$ and $z_r$, respectively. We first define the prior distribution of $z_p$ as follows:

$p_\theta(z_p|u_{<n-1}, z_c) = \mathcal{N}(z|\mu_{n-1}, \sigma_{n-1} I) \qquad (8)$

where $u_{<n-1}$ denotes the utterances $\{u_i\}_{i=0}^{n-2}$, and $\mu_{n-1}$ and $\sigma_{n-1}$ are calculated as:

$\mu_{n-1} = \mathrm{MLP}_\theta(h^c_{n-1}, z_c) \qquad (9)$
$\sigma_{n-1} = \mathrm{Softplus}(\mathrm{MLP}_\theta(h^c_{n-1}, z_c)) \qquad (10)$

Since $z_q$ ($z_{n-1}$) and $z_r$ ($z_n$) are also under the control of $z_p$, we define their prior distributions as:

$p_\theta(z_i|u_{<i}, z_c, z_p) = \mathcal{N}(z|\mu_i, \sigma_i I) \qquad (11)$

Here, $i = n-1$ or $n$. The means and the diagonal variances are computed as:

$\mu_i = \mathrm{MLP}_\theta(h^c_i, z_c, z_p) \qquad (12)$
$\sigma_i = \mathrm{Softplus}(\mathrm{MLP}_\theta(h^c_i, z_c, z_p)) \qquad (13)$

The posterior distributions are:

$q_\phi(z_p|u_{\leq n-1}, z_c) = \mathcal{N}(z|\mu'_{n-1}, \sigma'_{n-1} I) \qquad (14)$
$q_\phi(z_i|u_{\leq i}, z_c, z_p) = \mathcal{N}(z|\mu'_i, \sigma'_i I) \qquad (15)$

where $q_\phi(\cdot)$ is a recognition model used to approximate the intractable true posterior distribution. The means and the diagonal variances are defined as:

$\mu'_{n-1} = \mathrm{MLP}_\phi(v_{n-1}, v_n, h^c_{n-1}, z_c) \qquad (16)$
$\sigma'_{n-1} = \mathrm{Softplus}(\mathrm{MLP}_\phi(v_{n-1}, v_n, h^c_{n-1}, z_c)) \qquad (17)$
$\mu'_i = \mathrm{MLP}_\phi(v_i, h^c_i, z_c, z_p) \qquad (18)$
$\sigma'_i = \mathrm{Softplus}(\mathrm{MLP}_\phi(v_i, h^c_i, z_c, z_p)) \qquad (19)$

Note that in Equations 16 and 17, both $v_{n-1}$ and $v_n$ are taken into consideration, while Equations 18 and 19 use $z_p$ and the corresponding $v_i$.

2.3 Training

Because of the latent variables in the query-response pair, we use a decoder $f^{dec}_\theta$ to generate $u_{n-1}$ and $u_n$:

$p_\theta(u_i|u_{<i}) = f^{dec}_\theta(u_i|h^c_i, z_c, z_p, z_i) \qquad (20)$

The training objective is to maximize the following variational lower bound:

$\log p_\theta(u_{n-1}, u_n|u_0, \ldots, u_{n-2}) \geq \mathbb{E}_{q_\phi}[\log p_\theta(u_i|z_c, z_p, z_i, u_{<i})] - D_{KL}(q_\phi(z_c|u_{\leq n}) \,\|\, p_\theta(z_c)) - D_{KL}(q_\phi(z_p|u_{\leq n}) \,\|\, p_\theta(z_p|u_{<n-1})) - \sum_{i=n-1}^{n} D_{KL}(q_\phi(z_i|u_{\leq i}) \,\|\, p_\theta(z_i|u_{<i})) \qquad (21)$

Equation 21 consists of two parts: the reconstruction term and the KL divergence terms for the three kinds of latent variables.

3 Experiment

3.1 Experimental Settings

Datasets: We conduct our experiments on the Ubuntu Dialog Corpus (Lowe et al., 2015) and the Cornell Movie Dialog Corpus (Danescu-Niculescu-Mizil and Lee, 2011). As Cornell Movie Dialog does not provide a separate test set, we randomly split the corpus with the ratio 8:1:1. For each dataset, we keep conversations with more than 3 utterances. The number of multi-turn conversations in the train/valid/test sets is 898,142/19,560/18,920 for Ubuntu Dialog, and 36,004/4,501/4,501 for Cornell Movie Dialog.

Hyper-parameters: In our model and all baselines, the Gated Recurrent Unit (GRU) (Cho et al., 2014) is selected as the fundamental cell in the encoder and decoder layers, and the hidden dimension is 1,000.
We set the word embedding dimension to 500, and all latent variables have a dimension of 100. For optimization, we use Adam (Kingma and Ba, 2015) with gradient clipping. The sentence padding length is set to 15, and the maximum conversation length is 10. To alleviate the degeneration problem of the variational framework (Bowman et al., 2016), we also apply KL annealing (Bowman et al., 2016) in all models with latent variables. The KL annealing steps are 15,000 for Cornell Movie Dialog and 250,000 for Ubuntu Dialog.

Baseline Models: We compare our model with three baselines. They all focus on multi-turn conversations, and the third one is a state-of-the-art variational model. 1) Hierarchical recurrent encoder-decoder (HRED) (Serban et al., 2016). 2) Variational HRED (VHRED) (Serban et al., 2017) with word drop (w.d) and KL annealing (Bowman et al., 2016), with a word drop ratio of 0.25. 3) Variational Hierarchical Conversation RNN (VHCR) with utterance drop (u.d) (Park et al., 2018) and KL annealing, with an utterance drop ratio of 0.25.

3.2 Evaluation Design

Open-domain response generation does not have a standard criterion for automatic evaluation, like BLEU (Papineni et al., 2002) for machine translation. Our model is designed to improve the coherence/relevance and diversity of generated responses. To measure the performance effectively, we use 5 automatic evaluation metrics along with human evaluation.

Dataset | Model | Average | Extrema | Greedy | Dist-1 | Dist-2 | Coherence | Fluency | Informativeness
Ubuntu Dialog | HRED | 0.570 | 0.329 | 0.415 | 0.494 | 0.814 | 2.96 | 3.64 | 2.89
Ubuntu Dialog | VHRED+w.d | 0.556 | 0.312 | 0.405 | 0.523 | 0.856 | 2.52 | 3.35 | 3.24
Ubuntu Dialog | VHCR+u.d | 0.572 | 0.330 | 0.416 | 0.512 | 0.837 | 2.42 | 3.48 | 2.99
Ubuntu Dialog | CSRR | 0.612 | 0.345 | 0.457 | 0.561 | 0.882 | 3.39 | 3.91 | 3.75
Cornell Movie Dialog | HRED | 0.547 | 0.370 | 0.387 | 0.489 | 0.801 | 3.02 | 3.65 | 2.85
Cornell Movie Dialog | VHRED+w.d | 0.556 | 0.365 | 0.405 | 0.512 | 0.850 | 3.05 | 3.76 | 3.24
Cornell Movie Dialog | VHCR+u.d | 0.587 | 0.378 | 0.434 | 0.507 | 0.837 | 3.13 | 3.73 | 3.06
Cornell Movie Dialog | CSRR | 0.620 | 0.395 | 0.462 | 0.522 | 0.873 | 3.43 | 3.82 | 3.78

Table 2: Automatic and human evaluation results on the Ubuntu Dialog Corpus and the Cornell Movie Dialog Corpus.

Average, Greedy and Extrema: Rather than calculating token-level or n-gram similarity as perplexity and BLEU do, these three metrics are embedding-based and measure the semantic similarity between the words in the generated response and the ground truth (Serban et al., 2017; Liu et al., 2016). We use word2vec embeddings trained on the Google News Corpus1 in this section. Please refer to Serban et al. (2017) for more details.

Dist-1 and Dist-2: Following the work of Li et al. (2016), we apply Distinct to report the degree of diversity. Dist-1/2 is defined as the ratio of unique uni/bi-grams over all uni/bi-grams in the generated responses.

Human Evaluation: Since automatic evaluation results may not be fully consistent with human judgements (Liu et al., 2016), human evaluation is necessary. Inspired by Luo et al. (2018), we use the following three criteria. Fluency measures whether the generated responses have grammatical errors. Coherence denotes the semantic consistency and relevance between a response and its context. Informativeness indicates whether the response is meaningful and shows good word usage; a generic reply should have the lowest Informativeness score. Each of these scores ranges from 1 to 5. We randomly sample 100 examples from the test set and generate a total of 400 responses using the models mentioned above. All generated responses are scored by 7 annotators, who are postgraduate students not involved in other parts of the experiment.
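Since Dist-1/2 carries much of the diversity comparison, here is a small sketch of how these ratios can be computed from the definition above; whitespace tokenization is an assumption.

```python
def distinct_n(responses, n=1):
    """Dist-n: ratio of unique n-grams over all n-grams in the generated responses."""
    total, unique = 0, set()
    for resp in responses:
        tokens = resp.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Example usage over a batch of generated responses.
print(distinct_n(["i do not know", "i do not think so"], n=2))   # 5 unique / 7 total bigrams
```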
1https://code.google.com/archive/p/ word2vec/ 3.3 Results of Automatic Evaluation The left part of Table 2 is about automatic evaluation on test set. The proposed CSRR model significantly outperforms other baselines on three embedding-based metrics on both datasets. The improvement of our model indicates our semantic relationship modeling better reflects the structure of real-world conversations, and the responses generated by our models are more relevant to context. As for diversity, CSRR also gets the highest Dist-1 and Dist-2 scores. For Ubuntu Dialog dataset, VHRED+w.d is the worst. With the help of discourse-level latent variable and utterance drop, VHCR+u.d leads to better performance. However, HRED is the worst on the Cornell Movie dataset. Park et al. (2018) empirically explained the difference based on that Cornell Movie Dialog dataset is small in size, but very diverse and complex in content and style, and models like HRED often fail to generate appropriate responses for the context. 3.4 Results of Human Evaluation The right part of Table 2 is about human evaluation results on 400 (100×4) responses. First, it is clear that CSRR model receives the best evaluation on three aspects, which proves the effectiveness of CSRR on generating high quality responses. Second, because of the existence of discourse-level and pair-level latent variables, responses are more coherent. Since these two kinds of variables learn high level semantic information, utterance-level ones serve better on expression diversion, also improve sentence fluency and informativeness. 3.5 Case Study and Ablation Study Table 3 shows the examples generated by CSRR model and other baseline models. For some 5501 Example 1: Context: how’s sleeping beauty? 7→still sleeping. 7→ kiss him, maybe he’ll turn into a frog. HRED: he’s a good man. VHRED+w.d: what? VHCR+u.d: maybe he’s dead. CSRR: Really? I won’t try. Example 2: Context: tried installing the driver from nvidia, but funny thing is i’ve installed 7→what is the problem you are having exactly 7→at this point driver activated but not in use HRED: what card do you have? VHRED+w.d: are you using the nvidia driver? or from the command line? VHCR+u.d: i have no idea what you are talking about, i just know that CSRR: you need to install the nvidia driver from the nvidia site. Example 3: Context: hi wondering whether to use 32 or 64 bit 7→ how much ram does the pc have? more than 2 gb you might 7→definitely? because, i’ ve heard that ubuntu recommends 32 bit and HRED: it’s a bit more than a 32 bit version of a 32bit system. VHRED+w.d: i have no idea , but it doesn’t matter. VHCR+u.d: i don’t know, but i haven’t used it much. CSRR: yes, but you can use 64 bit or 32 bit. Example 4: Context: how can i narrow down an issue i’m having? i am running 7→check out the system logs in var logs ... in particular kern.log, syslog 7→thank you very much. HRED: no problem. VHRED+w.d: i’m glad to hear that, don’t know why. VHCR+u.d: i’m not sure what you mean by ‘ <unk> ’. CSRR: you’re welcome. i don’t know if that’s the case. Table 3: Examples generated by CSRR model and other baselines. The first example is from Cornell Movie Dialog, while the bottom three rows are from Ubuntu Dialog. easy questions, like greeting (Example 4), both HRED and CSRR perform well. In contrast, VHRED+w.d and VHCR+u.d tend to generate general and meaningless responses. For hard questions, like some technical ones (Example 1 to 3), the proposed CSRR obviously outperforms other baselines. 
Note that VHCR is to show the effectiveness of zc and it can also be considered as the ablation study of CSRR to illustrate the validity of zp. From above cases, we empirically find that with the help of zp, response generated by CSRR are not only relevant and consistent to context, but also informative and meaningful. 4 Conclusion and Future Work In this work, we propose a Conversational Semantic Relationship RNN model to learn the semantic dependency in multi-turn conversations. We apply hierarchical strategy to obtain context information, and add three-hierarchy latent variables to capture semantic relationship. According to automatic evaluation and human evaluation, our model significantly improves the quality of generated responses, especially in coherence, sentence fluency and language diversity. In the future, we will model the semantic relationship in previous turns, and also import reinforcement learning to control the process of topic changes. Acknowledgements This work was supported by National Natural Science Foundation of China (NO. 61662077, NO. 61876174) and National Key R&D Program of China (NO. YS2017YFGH001428). We sincerely thank the anonymous reviewers for their helpful and valuable suggestions. References Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Kyunghyun Cho, Bart van Merri¨enboer Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988. Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 76–87. Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In the 3rd International Conference on Learning Representations. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In the 2nd International Conference on Learning Representations. 5502 Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Ryan Lowe, Nissan Pow, Iulian V Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294. 
Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, and Xu Sun. 2018. An auto-encoder matching model for learning utterance-level semantic dependency in dialogue generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 702–707. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies(Volume 1: Long Papers), pages 1792–1801. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 3776–3783. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3295–3301. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 504–509. Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to make context more useful? an empirical study on contextaware neural conversational models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–236. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5610–5617. Weinan Zhang, Yiming Cui, Yifa Wang, Qingfu Zhu, Lingzhi Li, Lianqiang Zhou, and Ting Liu. 2018. Context-sensitive generation of open-domain conversational responses. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2437–2447.
2019
549
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 579–590 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 579 Sentiment Tagging with Partial Labels using Modular Architectures Xiao Zhang Purdue University [email protected] Dan Goldwasser Purdue University [email protected] Abstract Many NLP learning tasks can be decomposed into several distinct sub-tasks, each associated with a partial label. In this paper we focus on a popular class of learning problems, sequence prediction applied to several sentiment analysis tasks, and suggest a modular learning approach in which different sub-tasks are learned using separate functional modules, combined to perform the final task while sharing information. Our experiments show this approach helps constrain the learning process and can alleviate some of the supervision efforts. 1 Introduction Many natural language processing tasks attempt to replicate complex human-level judgments, which often rely on a composition of several sub-tasks into a unified judgment. For example, consider the Targeted-Sentiment task (Mitchell et al., 2013), assigning a sentiment polarity score to entities depending on the context that they appear in. Given the sentence “according to a CNN poll, Green Book will win the best movie award”, the system has to identify both entities, and associate the relevant sentiment value with each one (neutral with CNN, and positive with Green Book). This task can be viewed as a combination of two tasks, entity identification, locating contiguous spans of words corresponding to relevant entities, and sentiment prediction, specific to each entity based on the context it appears in. Despite the fact that this form of functional task decomposition is natural for many learning tasks, it is typically ignored and learning is defined as a monolithic process, combining the tasks into a single learning problem. Our goal in this paper is to take a step towards modular learning architectures that exploit the learning tasks’ inner structure, and as a result simplify the learning process and reduce the annotation effort. We introduce a novel task decomposition approach, learning with partial labels, in which the task output labels decompose hierarchically, into partial labels capturing different aspects, or sub-tasks, of the final task. We show that learning with partial labels can help support weakly-supervised learning when only some of the partial labels are available. Given the popularity of sequence labeling tasks in NLP, we demonstrate the strength of this approach over several sentiment analysis tasks, adapted for sequence prediction. These include target-sentiment prediction (Mitchell et al., 2013), aspect-sentiment prediction (Pontiki et al., 2016) and subjective text span identification and polarity prediction (Wilson et al., 2013). To ensure the broad applicability of our approach to other problems, we extend the popular LSTM-CRF (Lample et al., 2016) model that was applied to many sequence labeling tasks1. The modular learning process corresponds to a task decomposition, in which the prediction label, y, is deconstructed into a set of partial labels {y0, .., yk}, each defining a sub-task, capturing a different aspect of the original task. 
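As an illustration of this decomposition for the targeted-sentiment task (see also Figure 1 later in the paper), a full tag such as B-neu can be split into a segmentation partial label and a sentiment partial label; the exact tag strings and the helper below are ours and only illustrative.

```python
def split_tag(full_tag):
    """Decompose a full tag such as 'B-neu' into (segmentation, sentiment) partial labels."""
    if full_tag == "O":
        return "O", "O"
    seg, senti = full_tag.split("-", 1)      # e.g. 'B-neu' -> ('B', 'neu')
    return seg, senti

full_tags = ["B-neu", "E-neu", "O", "O", "B-pos", "E-pos"]
seg_labels = [split_tag(t)[0] for t in full_tags]    # ['B', 'E', 'O', 'O', 'B', 'E']
sent_labels = [split_tag(t)[1] for t in full_tags]   # ['neu', 'neu', 'O', 'O', 'pos', 'pos']
```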
Intuitively, the individual sub-tasks are significantly easier to learn, suggesting that if their dependencies are modeled correctly when learning the final task, they can constrain the learning problem, leading to faster convergence and a better overall learning outcome. In addition, the modular approach helps alleviate the supervision problem, as often providing full supervision for the overall task is costly, while providing additional partial labels is significantly easier. For example, annotating entity segments syntactically is considerably easier than determining their associated sentiment, which requires understanding the nuances of the 1We also provide analysis for NER in the apendix 580 context they appear in semantically. By exploiting modularity, the entity segmentation partial labels can be used to help improve that specific aspect of the overall task. Our modular task decomposition approach is partially inspired by findings in cognitive neuroscience, namely the two-streams hypothesis, a widely accepted model for neural processing of cognitive information in vision and hearing (Eysenck and Keane, 2005), suggesting the brain processes information in a modular way, split between a “where” (dorsal) pathway, specialized for locating objects and a “what” (ventral) pathway, associated with object representation and recognition (Mishkin et al., 1983; Geschwind and Galaburda, 1987; Kosslyn, 1987; Rueckl et al., 1989). Jacobs et al. (1991) provided a computational perspective, investigating the “what” and “where” decomposition on a computer vision task. We observe that this task decomposition naturally fits many NLP tasks and borrow the notation. In the target-sentiment tasks we address in this paper, the segmentation tagging task can be considered as a “where”-task (i.e., the location of entities), and the sentiment recognition as the “what”-task. Our approach is related to multi-task learning (Caruana, 1997), which has been extensively applied in NLP (Toshniwal et al., 2017; Eriguchi et al., 2017; Collobert et al., 2011; Luong, 2016; Liu et al., 2018). However, instead of simply aggregating the objective functions of several different tasks, we suggest to decompose a single task into multiple inter-connected sub-tasks and then integrate the representation learned into a single module for the final decision. We study several modular neural architectures, which differ in the way information is shared between tasks, the learning representation associated with each task and the way the dependency between decisions is modeled. Our experiments were designed to answer two questions. First, can the task structure be exploited to simplify a complex learning task by using a modular approach? Second, can partial labels be used effectively to reduce the annotation effort? To answer the first question, we conduct experiments over several sequence prediction tasks, and compare our approach to several recent models for deep structured prediction (Lample et al., 2016; Ma and Hovy, 2016; Liu et al., 2018), and when available, previously published results (Mitchell et al., 2013; Zhang et al., 2015; Li and Lu, 2017; Ma et al., 2018) We show that modular learning indeed helps simplify the learning task compared to traditional monolithic approaches. To answer the second question, we evaluate our model’s ability to leverage partial labels in two ways. 
First, by restricting the amount of full labels, and observing the improvement when providing increasing amounts of partial labels for only one of the sub-tasks. Second, we learn the sub-tasks using completely disjoint datasets of partial labels, and show that the knowledge learned by the sub-task modules can be integrated into the final decision module using a small amount of full labels. Our contributions: (1) We provide a general modular framework for sequence learning tasks. While we focus on sentiment analysis task, the framework is broadly applicable to many other tagging tasks, for example, NER (Carreras et al., 2002; Lample et al., 2016) and SRL (Zhou and Xu, 2015), to name a few. (2) We introduce a novel weakly supervised learning approach, learning with partial labels, that exploits the modular structure to reduce the supervision effort. (3) We evaluated our proposed model, in both the fullysupervised and weakly supervised scenarios, over several sentiment analysis tasks. 2 Related Works From a technical perspective, our task decomposition approach is related to multi-task learning (Caruana, 1997), specifically, when the tasks share information using a shared deep representation (Collobert et al., 2011; Luong, 2016). However, most prior works aggregate multiple losses on either different pre-defined tasks at the final layer (Collobert et al., 2011; Luong, 2016), or on a language model at the bottom level (Liu et al., 2018). This work suggests to decompose a given task into sub-tasks whose integration comprise the original task. To the best of our knowledge, Ma et al. (2018), focusing on targeted sentiment is most similar to our approach. They suggest a joint learning approach, modeling a sequential relationship between two tasks, entity identification and target sentiment. We take a different approach viewing each of the model components as a separate module, predicted independently and then integrated into the final decision module. As we demonstrate in our experiments, this approach leads to better performance and increased flexibil581 ity, as it allows us to decouple the learning process and learn the tasks independently. Other modular neural architectures were recently studied for tasks combining vision and language analysis (Andreas et al., 2016; Hu et al., 2017; Yu et al., 2018), and were tailored for the grounded language setting. To help ensure the broad applicability of our framework, we provide a general modular network formulation for sequence labeling tasks by adapting a neural-CRF to capture the task structure. This family of models, combining structured prediction with deep learning showed promising results (Gillick et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Zhang et al., 2015; Li and Lu, 2017), by using rich representations through neural models to generate decision candidates, while utilizing an inference procedure to ensure coherent decisions. Our main observation is that modular learning can help alleviate some of the difficulty involved in training these powerful models. 3 Architectures for Sequence Prediction Using neural networks to generate emission potentials in CRFs was applied successfully in several sequence prediction tasks, such as word segmentation (Chen et al., 2017), NER (Ma and Hovy, 2016; Lample et al., 2016), chunking and PoS tagging (Liu et al., 2018; Zhang et al., 2017). A sequence is represented as a sequence of L tokens: x = [x1, x2, . . . , xL], each token corresponds to a label y ∈Y, where Y is the set of all possible tags. 
An inference procedure is designed to find the most probable sequence $y^* = [y_1, y_2, \ldots, y_L]$ by solving, either exactly or approximately, the following optimization problem: $y^* = \arg\max_y P(y|x)$. Despite the difference in tasks, these models follow a similar general architecture: (1) Character-level information, such as prefix, suffix and capitalization, is represented through a character embedding layer learned using a bi-directional LSTM (BiLSTM). (2) Word-level information is obtained through a word embedding layer. (3) The two representations are concatenated to represent an input token, used as input to a word-level BiLSTM which generates the emission potentials for a succeeding CRF. (4) The CRF is used as an inference layer to generate the globally-normalized probability of possible tag sequences.

3.1 CRF Layer A CRF model describes the probability of predicted labels $y$, given a sequence $x$ as input, as $P_\Lambda(y|x) = \frac{e^{\Phi(x,y)}}{Z}$, where $Z = \sum_{\tilde{y}} e^{\Phi(x,\tilde{y})}$ is the partition function that marginalizes over all possible assignments to the predicted labels of the sequence, and $\Phi(x, y)$ is the scoring function, defined as $\Phi(x, y) = \sum_t \phi(x, y_t) + \psi(y_{t-1}, y_t)$. The partition function $Z$ can be computed efficiently via the forward-backward algorithm. The term $\phi(x, y_t)$ corresponds to the score of a particular tag $y_t$ at position $t$ in the sequence, and $\psi(y_{t-1}, y_t)$ represents the score of transitioning from the tag at position $t-1$ to the tag at position $t$. In the Neural CRF model, $\phi(x, y_t)$ is generated by the aforementioned BiLSTM, while $\psi(y_{t-1}, y_t)$ is given by a transition matrix.

4 Functional Decomposition of Composite Tasks To accommodate our task decomposition approach, we first define the notion of partial labels, and then discuss different neural architectures capturing the dependencies between the modules trained over the different partial labels. Partial Labels and Task Decomposition: Given a learning task defined over an output space $y \in Y$, where $Y$ is the set of all possible tags, each specific label $y$ is decomposed into a set of partial labels, $\{y^0, \ldots, y^k\}$. We refer to $y$ as the full label. According to this definition, a specific assignment to all $k$ partial labels defines a single full label. Note the difference between partially labeled data (Cour et al., 2011), in which instances can have more than a single full label, and our setup, in which the labels themselves are partial. In all our experiments, the partial labels refer to two sub-tasks: (1) a segmentation task, identifying the Beginning, Inside and Outside of an entity or aspect, and (2) one or more type recognition tasks, recognizing the aspect type and/or the sentiment polarity associated with it. Hence, a tag $y_t$ at location $t$ is divided into $y^{seg}_t$ and $y^{typ}_t$, corresponding to segmentation and type (here, sentiment type), respectively. Fig. 1 provides an example of the target-sentiment task. Note that the sentiment labels do not capture segmentation information. [Figure 1: Target-sentiment decomposition example — an utterance shown with its full tags (Tag row) and its decomposed segmentation (Seg) and sentiment-type (Senti) partial labels.] Modular Learning architectures: We propose three different models, in which information from the partial labels can be used. All the models have similar module types, corresponding to the segmentation and type sub-tasks, and the decision module for predicting the final task.
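To make the decomposition concrete, the mapping from full target-sentiment tags to the two partial-label sequences can be sketched in a few lines of Python. This is a minimal sketch, not the authors' preprocessing code: the helper name is ours, and we assume full tags follow the position-polarity convention shown in Figure 1 (e.g., "B-neu").

def decompose(full_tags):
    """Split full target-sentiment tags (e.g. 'B-neu') into
    segmentation labels (B/I/O/E/S) and type labels (neu/pos/neg/O)."""
    seg, typ = [], []
    for tag in full_tags:
        if tag == "O":                  # outside any entity span
            seg.append("O")
            typ.append("O")
        else:                           # e.g. 'B-neu' -> ('B', 'neu')
            position, polarity = tag.split("-", 1)
            seg.append(position)
            typ.append(polarity)
    return seg, typ

# Example: a full label sequence and its two partial-label views
full = ["B-neu", "E-neu", "O", "B-pos", "E-pos"]
print(decompose(full))
# (['B', 'E', 'O', 'B', 'E'], ['neu', 'neu', 'O', 'pos', 'pos'])

Each partial-label sequence defines its own, smaller tag set, which is what makes the sub-tasks easier to learn than the full joint labeling.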
The modules are trained over the partial segmentation ($y^{seg}$) and type ($y^{typ}$) labels, and the full label $y$ information, respectively. These three models differ in the way they share information. Model 1, denoted Twofold Modular (LSTM-CRF-T), is similar in spirit to multi-task learning (Collobert et al., 2011) with three separate modules. Model 2, denoted Twofold Modular Infusion (LSTM-CRF-TI), and Model 3, denoted Twofold Modular Infusion with guided gating (LSTM-CRF-TI(g)), both infuse the information flow from the two sub-task modules into the decision module. The difference is whether the infusion is direct or goes through a guided gating mechanism. The three models are depicted in Fig. 2 and described in detail in the following paragraphs. In all of these models, the underlying neural architectures produce the emission potentials, and CRF inference layers are applied on top.

4.1 Twofold Modular Model The twofold modular model enhances the original monolithic model by using multi-task learning with shared underlying representations. The segmentation module and the type module are trained jointly with the decision module, and all the modules share information by using the same embedding-level representation, as shown in Figure 2a. Since the information above the embedding level is independent, the LSTM layers in the different modules do not share information, so we refer to these layers of each module as private. The segmentation module predicts the segmentation BIO labels at position $t$ of the sequence by using the representations extracted from its private word-level bi-directional LSTM (denoted $H^{seg}$) as emissions for an individual CRF: $h^{seg}_t = H^{seg}(e_t, \overrightarrow{h}^{seg}_{t-1}, \overleftarrow{h}^{seg}_{t+1})$, $\phi(x, y^{seg}_t) = W^{seg\top} h^{seg}_t + b^{seg}$, where $W^{seg}$ and $b^{seg}$ denote the parameters of the segmentation module's emission layer, and $H^{seg}$ denotes its private LSTM layer. This formulation allows the model to train the segmentation path privately through backpropagation by providing the segmentation information $y^{seg}$ individually, in addition to the complete tag information $y$. The type module, using $y^{typ}$, is constructed in a similar way. Using representations from its own private LSTM layers, the type module predicts the sentiment (entity) type at position $t$ of the sequence: $h^{typ}_t = H^{typ}(e_t, \overrightarrow{h}^{typ}_{t-1}, \overleftarrow{h}^{typ}_{t+1})$, $\phi(x, y^{typ}_t) = W^{typ\top} h^{typ}_t + b^{typ}$. Both the segmentation information $y^{seg}$ and the type information $y^{typ}$ are provided together with the complete tag sequence $y$, enabling the model to learn segmentation and type recognition simultaneously using two different paths. The decomposed tags also act as a natural form of data augmentation, helping the model avoid over-fitting to the more complicated full-label structure. The shared representation beneath the private LSTM layers is updated via the back-propagated errors from all three modules.

4.2 Twofold Modular Infusion Model The twofold modular infusion model provides a stronger connection between the functionalities of the two sub-task modules and the final decision module, differing from multi-task learning. In this model, instead of separating the pathways from the decision module as in the previous twofold modular model, the segmentation and the type representations are used as input to the final decision module.
The model structure is shown in Figure 2b, and can be described formally as: $I^{seg}_t = W^{seg\top} h^{seg}_t + b^{seg}$, $I^{typ}_t = W^{typ\top} h^{typ}_t + b^{typ}$, $S_t = W^{\top}[h_t; I^{seg}_t; I^{typ}_t] + b$, where $S_t$ is the shared final emission potential passed to the CRF layer in the decision module, and ; is the concatenation operator, combining the representation from the decision module with those from the type module and the segmentation module. The term "Infusion" used for naming this module is intended to indicate that both modules actively participate in the final decision process, rather than merely forming two independent paths as in the twofold modular model. This formulation provides an alternative way of integrating the auxiliary sub-tasks back into the major task in the neural structure to help improve learning.

[Figure 2: Three modular models for task decomposition — (a) LSTM-CRF-T, (b) LSTM-CRF-TI, (c) LSTM-CRF-TI(g). Blue blocks are segmentation modules, detecting entity location and segmentation; yellow blocks are the type modules, recognizing the entity type or sentiment polarity; green blocks are the final decision modules, integrating all the decisions. (G) refers to "Guided Gating".]

4.3 Guided Gating Infusion In the previous section we described a way of infusing information from other modules naively, by simply concatenating it. Intuitively, however, the hidden representation from the decision module plays an important role, as it is directly related to the final task we are interested in. To effectively use the information from the sub-task modules, we design a gating mechanism that dynamically controls the amount of information flowing from those modules, infusing the expedient part while excluding the irrelevant part, as shown in Figure 2c. This gating mechanism uses the information from the decision module to guide the information from the other modules, hence we call it guided gating infusion, which we describe formally as follows: $I^{seg}_t = \sigma(W_1 h_t + b_1) \otimes (W^{seg\top} h^{seg}_t + b^{seg})$, $I^{typ}_t = \sigma(W_2 h_t + b_2) \otimes (W^{typ\top} h^{typ}_t + b^{typ})$, $S_t = W^{\top}[h_t; I^{seg}_t; I^{typ}_t] + b$, where $\sigma$ is the logistic sigmoid function and $\otimes$ is element-wise multiplication. The parameters $\{W_1, W_2, b_1, b_2\}$ of the guided gating are updated during training to maximize the overall sequence labeling performance.

5 Learning using Full and Partial Labels Our objective arises naturally from the model described above. Furthermore, as our experiments show, it is easy to generalize this objective to a "semi-supervised" setting, in which the learner has access to only a few fully labeled examples and additional partially labeled examples, e.g., when only segmentation is annotated but the type information is missing. The loss function is a linear combination of the negative log probability of each sub-task, together with the decision module: $J = -\sum_{i}^{N} \left[ \log P(y^{(i)}|x^{(i)}) + \alpha \log P(y^{seg(i)}|x^{(i)}) + \beta \log P(y^{typ(i)}|x^{(i)}) \right]$ (1), where $N$ is the number of examples in the training set, $y^{seg}$ and $y^{typ}$ are the decomposed segmentation and type tags corresponding to the two sub-task modules, and $\alpha$ and $\beta$ are the hyperparameters controlling the importance of the two modules' contributions, respectively.
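A compact way to see how the guided gating of Section 4.3 and the objective in Eq. (1) fit together is the following PyTorch-style sketch. It is a minimal sketch rather than the authors' implementation: the class and function names, the choice of hidden dimension for the gated projections, and the assumption that each CRF exposes a scalar negative log-likelihood are ours.

import torch
import torch.nn as nn

class GuidedGatingInfusion(nn.Module):
    """Combine the decision-module state h_t with gated projections of the
    segmentation and type hidden states to produce emission potentials S_t."""
    def __init__(self, hid_dim, n_tags):
        super().__init__()
        self.gate_seg = nn.Linear(hid_dim, hid_dim)   # sigma(W1 h_t + b1)
        self.gate_typ = nn.Linear(hid_dim, hid_dim)   # sigma(W2 h_t + b2)
        self.proj_seg = nn.Linear(hid_dim, hid_dim)   # W_seg^T h_seg + b_seg
        self.proj_typ = nn.Linear(hid_dim, hid_dim)   # W_typ^T h_typ + b_typ
        self.emit = nn.Linear(3 * hid_dim, n_tags)    # W^T [h; I_seg; I_typ] + b

    def forward(self, h, h_seg, h_typ):
        i_seg = torch.sigmoid(self.gate_seg(h)) * self.proj_seg(h_seg)
        i_typ = torch.sigmoid(self.gate_typ(h)) * self.proj_typ(h_typ)
        return self.emit(torch.cat([h, i_seg, i_typ], dim=-1))

def joint_loss(nll_full, nll_seg, nll_typ, alpha=1.0, beta=1.0):
    """Eq. (1): a weighted sum of the three negative log-likelihoods,
    assumed to be produced by the decision, segmentation and type CRFs."""
    return nll_full + alpha * nll_seg + beta * nll_typ

The design point the gating makes concrete is that the decision-module state decides, element-wise, how much of each sub-task representation is allowed to flow into the final emission potentials, rather than concatenating the raw sub-task representations directly.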
If the training example is fully labeled with both segmentation and type annotated, training is straightforward; if the training example is partially labeled, e.g., only with segmentation but without type, we can set the log probability of the type module and the decision module 0 and only train the segmentation module. This formulation provides extra flexibility of using partially annotated corpus together with fully annotated corpus to improve the overall performance. 6 Experimental Evaluation Our experimental evaluation is designed to evaluate the two key aspects of our model: (Q1) Can the modular architecture alleviate the difficulty of learning the final task? To answer 584 this question, we compare our modular architecture to the traditional neural-CRF model and several recent competitive models for sequence labeling combining inference and deep learning. The results are summarized in Tables 1-3. (Q2) Can partial labels be used effectively as a new form of weak-supervision? To answer this question we compared the performance of the model when trained using disjoint sets of partial and full labels, and show that adding examples only associated with partial labels, can help boost performance on the final task. The results are summarized in Figures 3-5. 6.1 Experimental Settings 6.1.1 Datasets We evaluated our models over three different sentiment analysis tasks adapted for sequence prediction. We included additional results for multilingual NER in the Appendix for reference. Target Sentiment Datasets We evaluated our models on the targeted sentiment dataset released by Mitchell et al. (2013), which consists of entity and sentiment annotations on both English and Spanish tweets. Similar to previous studies (Mitchell et al., 2013; Zhang et al., 2015; Li and Lu, 2017), our task focuses on people and organizations (collapsed into volitional named entities tags) and the sentiment associated with their description in tweets. After this processing, the labels of each tweets are composed of both segmentation (entity spans) and types (sentiment tags). We used the original 10-fold cross validation splits to calculate averaged F1 score, using 10% of the training set for development. We used the same metrics in Zhang et al. (2015) and Li and Lu (2017) for a fair comparison. Aspect Based Sentiment Analysis Datasets We used the Restaurants dataset provided by SemEval 2016 Task 5 subtask 1, consisting of opinion target (aspect) expression segmentation, aspect classification and matching sentiment prediction. In the original task definition, the three tasks were designed as a pipeline, and assumed gold aspect labels when predicting the matching sentiment labels. Instead, our model deals with the challenging end-to-end setting by casting the problem as a sequence labeling task, labeling each aspect segment with the aspect label and sentiment polarity2. Subjective Polarity Disambiguation Datasets We adapted the SemEval 2013 Task 2 subtask A as another task to evaluate our model. In this task, the system is given a marked phrase inside a longer text, and is asked to label its polarity. Unlike the original task, we did not assume the sequence is known, resulting in two decisions, identifying subjective expressions (i.e., a segmentation task) and labeling their polarity, which can be modeled jointly as a sequence labeling task. 
6.1.2 Input Representation and Model Architecture Following previous studies (Ma and Hovy, 2016; Liu et al., 2018) showing that the word embedding choice can significantly influence performance, we used the pre-trained GloVe 100 dimension Twitter embeddings only for all tasks in the main text. All the words not contained in these embeddings (OOV, out-of-vocabulary words) are treated as an “unknown” word. Our models were deployed with minimal hyper parameters tuning, and can be briefly summarized as: the character embeddings has dimension 30, the hidden layer dimension of the character level LSTM is 25, and the hidden layer of the word level LSTM has dimension 300. Similar to Liu et al. (2018), we also applied highway networks (Srivastava et al., 2015) from the character level LSTM to the word level LSTM. In our pilot study, we shrank the number of parameters in our modular architectures to around one third such that the total number of parameter is similar as that in the LSTM-CRF model, but we did not observe a significant performance change so we kept them as denoted. The values of α and β in the objective function were always set to 1.0. 6.1.3 Learning We used BIOES tagging scheme but only during the training and convert them back to BIO2 for evaluation for all tasks3. Our model was implemented using pytorch (Paszke et al., 2017). To help improve performance we parallelized the for2using only the subset of the data containing sequence information 3Using BIOES improves model complexity in Training, as suggested in previous studies. But to make a fair comparison to most previous work, who used BIO2 for evaluation, we converted labels to BIO2 system in the testing stage. (To be clear, using BIOES in the testing actually yields higher f1 scores in the testing stage, which some previous studies used unfairly) 585 ward algorithm and the Viterbi algorithm on the GPU. All the experiments were run on NVIDIA GPUs. We used the Stochastic Gradient Descent (SGD) optimization of batch size 10, with a momentum 0.9 to update the model parameters, with the learning rate 0.01, the decay rate 0.05; The learning rate decays over epochs by η/(1 + e ∗ρ), where η is the learning rate, e is the epoch number, and ρ is the decay rate. We used gradient clip to force the absolute value of the gradient to be less than 5.0. We used early-stop to prevent over-fitting, with a patience of 30 and at least 120 epochs. In addition to dropout, we used Adversarial Training (AT) (Goodfellow et al., 2014), to regularize our model as the parameter numbers increase with modules. AT improves robustness to small worst-case perturbations by computing the gradients of a loss function w.r.t. the input. In this study, α and β in Eq. 1 are both set to 1.0, and we leave other tuning choices for future investigation. 6.2 Q1: Monolithic vs. Modular Learning Our first set of results are designed to compare our modular learning models, utilize partial labels decomposition, with traditional monolithic models, that learn directly over the full labels. In all three tasks, we compare with strong sequence prediction models, including LSTM-CRF (Lample et al., 2016), which is directly equivalent to our baseline model (i.e., final task decision without the modules), and LSTM-CNN-CRF (Ma and Hovy, 2016) and LSTM-CRF-LM (Liu et al., 2018) which use a richer latent representation for scoring the emission potentials. Target Sentiment task The results are summarized in Tab. 1. 
We also compared our models with recently published state-of-the-art models on these datasets. To help ensure a fair comparison with Ma et al. which does not use inference, we also included the results of our model without the CRF layer (denoted LSTM-Ti(g)). All of our models beat the state-of-the-art results by a large margin. The source code and experimental setup are available online4. Aspect Based Sentiment We evaluated our models on two tasks: The first uses two modules, for identifying the position of the aspect in the text (i.e., chunking) and the aspect category prediction 4https://github.com/cosmozhang/ Modular_Neural_CRF System Architecture Eng. Spa. Zhang et al. (2015) Pipeline 40.06 43.04 Joint 39.67 43.02 Collapsed 38.36 40.00 Li and Lu (2017) SS 40.11 42.75 +embeddings 43.55 44.13 +POS tags 42.21 42.89 +semiMarkov 40.94 42.14 Ma et al. (2018) HMBi-GRU 42.87 45.61 baseline LSTM-CRF 49.89 48.84 This work LSTM-Ti(g) 45.84 46.59 LSTM-CRF-T 51.34 49.47 LSTM-CRF-Ti 51.64 49.74 LSTM-CRF-Ti(g) 52.15 50.50 Table 1: Comparing our models with the competing models on the target sentiment task. The results are on the full prediction of both segmentation and sentiment. (denoted E+A). The second adds a third module that predicts the sentiment polarity associated with the aspect (denoted E+A+S). I.e., for a given sentence, label its entity span, the aspect category of the entity and the sentiment polarity of the entity at the same time. The results over four languages are summarized in Tab. 2. In all cases, our modular approach outperforms all monolithic approaches. Subjective Phrase Identification and Classification This dataset contains tweets annotated with sentiment phrases, used for training the models. As in the original SemEval task, it is tested in two settings, in-domain, where the test data also consists of tweets, and out-of-domain, where the test set consists of SMS text messages. We present the results of experiments on these data set in Table 3. 6.3 Q2: Partial Labels as Weak Supervision Our modular architecture is a natural fit for learning with partial labels. Since the modular architecture decomposes the final task into sub-tasks, the absence of certain partial labels is permitted. In this case, only the module corresponding to the available partial labels will be updated while the other parts of the model stay fixed. This property can be exploited to reduce the supervision effort by defining semi-supervised learning protocols that use partial-labels when the full labels are not available, or too costly to annotate. E.g., in the target sentiment task, segmentation labels are significantly easier to annotate. To demonstrate this property we conducted two sets of experiments. The first investigates how the decision module can effectively integrate the knowledge independently learned by sub-tasks 586 Models English Spanish Dutch Russian E+A E+A+S E+A E+A+S E+A E+A+S E+A E+A+S LSTM-CNN-CRF(Ma and Hovy, 2016) 58.73 44.20 64.32 50.34 51.62 36.88 58.88 38.13 LSTM-CRF-LM(Liu et al., 2018) 62.27 45.04 63.63 50.15 51.78 34.77 62.18 38.80 LSTM-CRF 59.11 48.67 62.98 52.10 51.35 37.30 63.41 42.47 LSTM-CRF-T 60.87 49.59 64.24 52.33 52.79 37.61 64.72 43.01 LSTM-CRF-TI 63.11 50.19 64.40 52.85 53.05 38.07 64.98 44.03 LSTM-CRF-TI(g) 64.74 51.24 66.13 53.47 53.63 38.65 65.64 45.65 Table 2: Comparing our models with recent results on the Aspect Sentiment datasets. 
Models Tweets SMS LSTM-CNN-CRF 35.82 23.23 LSTM-CRF-LM 35.67 23.25 LSTM-CRF 34.15 26.28 LSTM-CRF-T 35.37 27.11 LSTM-CRF-Ti 36.52 28.05 LSTM-CRF-Ti(g) 37.71 29.24 Table 3: Comparing our models with competing models on the subjective sentiment task. modules using different partial labels. We quantify this ability by providing varying amounts of full labels to support the integration process. The second set studies the traditional semi-supervised settings, where we have a handful of full labels, but we have a larger amount of partial labels. Modular Knowledge Integration The modular architecture allows us to train each model using data obtained separately for each task, and only use a handful of examples annotated for the final task in order to integrate the knowledge learned by each module into a unified decision. We simulated these settings by dividing the training data into three folds. We associated each one of the first two folds with the two sub-task modules. Each one of the these folds only included the partial labels relevant for that sub-task. We then used gradually increasing amounts of the third fold, consisting of the full labels, for training the decision module. Fig. 3 describes the outcome for targetsentiment, comparing a non-modular model using only the full labels, with the modular approach, which uses the full labels for knowledge integration. Results show that even when very little full data is available results significantly improve. Additional results show the same pattern for subjective phrase identification and classification are included in the Appendix. Learning with Partially Labeled Data Partially-labeled data can be cheaper and easier to obtain, especially for low-resource languages. In this set of experiments, we model these settings 28 33 38 43 48 20% 40% 60% 80% 100% Modularized non-Modularized (a) Spanish 28 33 38 43 48 20% 40% 60% 80% 100% Modularized non-Modularized (b) English Figure 3: Modular knowledge integration results on the Target Sentiment Datasets. The x-axis is the amount of percentage of the third fold of full labels. The “nonmodularized” means we only provide fully labeled data from the third fold. 38 41 44 47 50 53 0% 20% 40% 60% 80% LSTM-CRF-TI(g) (seg) LSTM-CRF-TI(g) (typ) LSTM-CRF LSTM-CRF-TI(g) with fully labeled (a) Spanish 37 39.8 42.6 45.4 48.2 51 0% 20% 40% 60% 80% LSTM-CRF-TI(g) (seg) LSTM-CRF-TI(g) (typ) LSTM-CRF LSTM-CRF-TI(g) with fully labeled (b) English Figure 4: The fully labeled data was fixed to 20% of the whole training set, and gradually adding data with only segmentation information (Magenta), or with only type information (Orange), and test our model on the full prediction test. The LSTM-CRF model can only use fully labeled data as it does not decompose the task. over the target-sentiment task. The results are summarized in Fig. 4. We fixed the amount of full labels to 20% of the training set, and gradually increased the amount of partially labeled data. We studied adding segmentation and type separately. After the model is trained in this routine, it was tested on predicting the full labels jointly on the test set. 
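In code, the weak-supervision regime used in these experiments amounts to skipping the loss terms whose partial labels are absent, so that only the corresponding module (and the shared embeddings beneath it) receives gradient. The following is a hedged sketch of one training step under our own naming: example.x/.y/.y_seg/.y_typ, model.encode, model.infuse and the *.nll helpers are hypothetical stand-ins, assuming each module exposes a CRF negative log-likelihood as in the sketch above.

def training_step(example, model, alpha=1.0, beta=1.0):
    """One step of learning with (possibly) partial labels.
    example.y, example.y_seg and example.y_typ may each be None
    when the corresponding annotation is unavailable."""
    h, h_seg, h_typ = model.encode(example.x)   # shared embeddings + private LSTMs
    loss = 0.0
    if example.y_seg is not None:               # segmentation partial label only
        loss = loss + alpha * model.seg_crf.nll(h_seg, example.y_seg)
    if example.y_typ is not None:               # type partial label only
        loss = loss + beta * model.typ_crf.nll(h_typ, example.y_typ)
    if example.y is not None:                   # full label: also update the decision module
        loss = loss + model.decision_crf.nll(model.infuse(h, h_seg, h_typ), example.y)
    return loss

Examples carrying only segmentation or only type labels therefore contribute to their own module and the shared representation, while the decision module is trained only on the (smaller) fully labeled portion of the data.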
Domain Transfer with Partially Labeled Data In our final analysis we considered a novel domain-adaptation settings, where we have a small amount of fully labeled in-domain data from aspect sentiment and more out-of-domain data 587 20 26.5 33 39.5 46 0% 20% 40% Spanish English Figure 5: Domain Transfer experiments results with fixed 20% in-domain data from aspect sentiment and varying amounts of out-of-domain data from target sentiment, shown on the x-axis. from target sentiment. However unlike the traditional domain-adaptation settings, the out-ofdomain data is labeled for a different task, and only shares one module with the original task. In our experiments we fixed 20% of the fully labeled data for the aspect sentiment task, and gradually added out-of-domain data, consisting of partial sentiment labels from the target sentiment task. Our model successfully utilized the out-ofdomain data and improved performance on the indomain task. The results are shown on Fig 5. 7 Conclusions We present and study several modular neural architectures designed for a novel learning scenario: learning from partial labels. We experiment with several sentiment analysis tasks. Our models, inspired by cognitive neuroscience findings (Jacobs et al., 1991; Eysenck and Keane, 2005) and multitask learning, suggest a functional decomposition of the original task into two simpler sub-tasks. We evaluate different methods for sharing information and integrating the modules into the final decision, such that a better model can be learned, while converging faster5. As our experiments show, modular learning can be used with weak supervision, using examples annotated with partial labels only. The modular approach also provides interesting directions for future research, focusing on alleviating the supervision bottleneck by using large amount of partially labeled data that are cheaper and easy to obtain, together with only a handful amount of annotated data, a scenario especially suitable for low-resource languages. 5Convergence results are provided in the Appendix Acknowledgements We thank the reviewers for their insightful comments. We thank the NVIDIA Corporation for their GPU donation, used in this work. This work was partially funded by a Google Gift. References Rodrigo Agerri and German Rigau. 2016. Robust multilingual named entity recognition with shallow semi-supervised features. Artificial Intelligence. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In Proc. of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). Xavier Carreras, Llu´ıs M`arquez, and Llu´ıs Padr´o. 2002. Named entity extraction using adaboost. In Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL). Rich Caruana. 1997. Multitask Learning. Machine Learning, 28(1):41–75. Chen, Shi, Qiu, and Huang. 2017. Adversarial multicriteria learning for chinese word segmentation. In Proc. of the Annual Meeting of the Association Computational Linguistics (ACL). Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. J. Mach. Learn. Res., 12. Timothee Cour, Ben Sapp, and Ben Taskar. 2011. Learning from partial labels. Journal of Machine Learning Research, 12(May). Eriguchi, Tsuruoka, and Cho. 2017. Learning to parse and translate improves neural machine translation. In Proc. 
of the Annual Meeting of the Association Computational Linguistics (ACL). M.W. Eysenck and M.T. Keane. 2005. Cognitive Psychology: A Student’s Handbook. Psychology Press. Norman. Geschwind and Albert M. Galaburda. 1987. Cerebral lateralization : biological mechanisms, associations, and pathology. MIT Press. Gillick, Brunk, Vinyals, and Subramanya. 2015. Multilingual Language Processing From Bytes. ArXiv. I. J. Goodfellow, J. Shlens, and C. Szegedy. 2014. Explaining and Harnessing Adversarial Examples. ArXiv e-prints. Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In Proc. of the International Conference on Computer Vision (ICCV). 588 Jacobs, Jordan, and Barto. 1991. Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive Science, 15(2). Stephen M. Kosslyn. 1987. Seeing and Imagining in the Cerebral Hemispheres: A Computational Approach. Psychological Review, 94(2):148–175. Guillaume Lample, Miguel Ballesteros, Kazuya Kawakami, Sandeep Subramanian, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proc. of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). Hao Li and Wei Lu. 2017. Learning latent sentiment scopes for entity-level sentiment analysis. In Proc. of the National Conference on Artificial Intelligence (AAAI). Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In Proc. of the National Conference on Artificial Intelligence (AAAI). Minh-Thang Luong. 2016. Multi-Task Sequence To Sequence Learning. In Proc. International Conference on Learning Representation (ICLR). Dehong Ma, Sujian Li, and Houfeng Wang. 2018. Joint learning for targeted sentiment analysis. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proc. of the Annual Meeting of the Association Computational Linguistics (ACL). Mortimer Mishkin, Leslie G. Ungerleider, and Kathleen A. Macko. 1983. Object vision and spatial vision: two cortical pathways. Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning multilingual named entity recognition from wikipedia. Artif. Intell., 194:151–175. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, ALSmadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph´ee De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016). L. Ratinov and D. Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL). Rueckl, Cave, and Kosslyn. 1989. Why are ”What” and ”Where” Processed by Separate Cortical Visual Systems? 
A Computational Investigation. cognitive neuroscience. Tjong Kim Sang and Erik F. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. In Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL). Tjong Kim Sang, Erik F., and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL). dos Santos and Guimar˜aes. 2015. Boosting named entity recognition with neural character embeddings. In Proc. of the Annual Meeting of the Association Computational Linguistics (ACL). Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway Networks. ArXiv eprints. Shubham Toshniwal, Hao Tang, Liang Lu, and Karen Livescu. 2017. Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition. In INTERSPEECH. Theresa Wilson, Zornitsa Kozareva, Preslav Nakov, Alan Ritter, Sara Rosenthal, and Stoyanov Veselin. 2013. Semeval-2013 task 2: Sentiment analysis in twitter. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018. Mattnet: Modular attention network for referring expression comprehension. arXiv. Meishan Zhang, Yue Zhang, and Duy Tin Vo. 2015. Neural networks for open domain targeted sentiment. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, and Dan Goldwasser. 2017. Semi-supervised structured prediction with neural crf autoencoder. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proc. of the Annual Meeting of the Association Computational Linguistics (ACL). 589 A Examples of Task Decomposition In Figure 6, we show an example of task decomposition for standard NER. Text Brush Wellman . Tag B-ORG I-ORG O O O O O comments on beryllium lawsuits Seg B I O O O O O Ent ORG ORG O O O O O Figure 6: An example of NER decomposition. In Figure 7, we show another example of task decomposition for target sentiment, in addition to the one in the main text. Text KC Concepcion Get Tag B-pos O O Rogue Magazine Photos Continue to Seg Senti Praised B-pos B-neu E-neu S-neu O B O O E B E S O by Fans onTwitter O O O O O O O O O O pos O O pos neu neu neu O O O O O O Figure 7: An extra example of target sentiment decomposition. B Full Experimental Results on Target Sentiment The complete results of our experiments on the target sentiment task are summarized in Tab. 4. Our LSTM-CRF-TI(g) model outperforms all the other competing models in Precision, Recall and the F1 score. C Experiments on Named Entity Recognition NER datasets We evaluated our models on three NER datasets, the English, Dutch and Spanish parts of the 2002 and 2003 CoNLL shared tasks (Sang and F., 2002; Sang et al., 2003). We used the original division of training, validation and test sets. The task is defined over four different entity types: PERSON, LOCATION, ORGANIZATION, MISC. We used the BIOES tagging scheme during the training, and convert them back to original tagging scheme in testing as previous studies show that using this tagging scheme instead of BIO2 can help improve performance (Ratinov and Roth, 2009; Lample et al., 2016; Ma and Hovy, 2016; Liu et al., 2018). 
As a result, the segmentation module had 5 output labels, and the entity module had 4. The final decision task, consisted of the Cartesian product of the segmentation set (BIES) and the entity set, plus the “O” tag, resulting in 17 labels. Results on NER We compared our models with the state-of-the-art systems on English6, Dutch and Spanish. For Dutch and Spanish, we used cross-lingual embedding as a way to exploit lexical information. The results are shown in Tab. 5 and Tab. 67. Our best-performing model outperform all the competing systems. D Additional Experiments on Knowledge Integration We conducted additional experiments on knowledge integration in the same setting as in the main text to investigate the properties of the modules. Figure 8 shows the results for Dutch and Spanish NER datasets, while Figure 9 shows the results for the Subjective Polarity Disambiguation Datasets using the in-domain data. 55 61.25 67.5 73.75 80 20% 40% 60% 80% 100% Modularized non-Modularized (a) Dutch NER 55 61.25 67.5 73.75 80 20% 40% 60% 80% 100% Modularized non-Modularized (b) Spanish NER Figure 8: Experimental results on modular knowledge integration on the Dutch and Spanish NER datasets. E Convergence Analysis The proposed twofold modular infusion model (with guided gating as an option) breaks the complex learning problem into several sub-problems and then integrate them using joint training. The process defined by this formulation has more parameters and requires learning multiple objectives jointly. Our convergence analysis intends to evaluate whether the added complexity leads to a harder learning problem (i.e., slower to converge) or whether the tasks constrain each other and as a result can be efficiently learned. 6Liu et al.’s results are different since their implementation did not convert the predicted BIOES tags back to BIO2 during evaluation. For fair comparison, we only report the results of the standard evaluation. 7We thank reviewers for pointing out a paper (Agerri and Rigau, 2016) obtains the new state-of-the-art result on Dutch with comparable results on Spanish. 590 System Architecture English Spanish Pre Rec F1 Pre Rec F1 Zhang, Zhang and Vo (2015) Pipeline 43.71 37.12 40.06 45.99 40.57 43.04 Joint 44.62 35.84 39.67 46.67 39.99 43.02 Collapsed 46.32 32.84 38.36 47.69 34.53 40.00 Li and Lu (2017) SS 44.57 36.48 40.11 46.06 39.89 42.75 +embeddings 47.30 40.36 43.55 47.14 41.48 44.13 +POS tags 45.96 39.04 42.21 45.92 40.25 42.89 +semiMarkov 44.49 37.93 40.94 44.12 40.34 42.14 Base Line LSTM-CRF 53.29 46.90 49.89 51.17 46.71 48.84 This work LSTM-CRF-T 54.21 48.77 51.34 51.77 47.37 49.47 LSTM-CRF-Ti 54.58 49.01 51.64 52.14 47.56 49.74 LSTM-CRF-Ti(g) 55.31 49.36 52.15 52.82 48.41 50.50 Table 4: Performance on the target sentiment task Model English LSTM-CRF (Lample et al., 2016) 90.94 LSTM-CNN-CRF (Ma and Hovy, 2016) 91.21 LM-LSTM-CRF (Liu et al., 2018) 91.06 LSTM-CRF-T 90.8 LSTM-CRF-TI 91.16 LSTM-CRF-TI(g) 91.68 Table 5: Comparing our models with several stateof-the-art systems on the CoNLL 2003 English NER dataset. Model Dutch Spanish Carreras et al. (2002) 77.05 81.39 Nothman et al. (2013) 78.60 N/A dos Santos and Guimar˜aes (2015) N/A 82.21 Gillick et al. (2015) 82.84 82.95 Lample et al. (2016) 81.74 85.75 LSTM-CRF-T 83.91 84.89 LSTM-CRF-TI 84.12 85.28 LSTM-CRF-TI(g) 84.51 85.92 Table 6: Comparing our models with recent results on the 2002 CoNLL Dutch and Spanish NER datasets. 
[Figure 9: Experimental results on modular knowledge integration on the Subjective Polarity Disambiguation datasets.]
We compare our LSTM-CRF-TI(g) model with recently published top models on the English NER dataset in Figure 10 and on the subjective polarity disambiguation datasets in Figure 11. The curves compare convergence speed in terms of learning epochs. Our LSTM-CRF-TI(g) model has a much faster convergence rate than the other models.
[Figure 10: Comparing convergence over the development set on the English NER dataset (LSTM-CRF, CNN-LSTM-CRF, LM-LSTM-CRF, LSTM-CRF-Ti(g)). The x-axis is the number of epochs and the y-axis is the F1-score.]
[Figure 11: Comparing convergence over the development set on the subjective polarity disambiguation datasets (LSTM-CRF, CNN-LSTM-CRF, LM-LSTM-CRF, LSTM-CRF-Ti(g)). The x-axis is the number of epochs and the y-axis is the F1-score.]
2019
55
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5503–5507 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5503 Rationally Reappraising ATIS-based Dialogue Systems Jingcheng Niu and Gerald Penn Department of Computer Science University of Toronto Toronto, Canada {niu,gpenn}@cs.toronto.edu Abstract The Air Travel Information Service (ATIS) corpus has been the most common benchmark for evaluating Spoken Language Understanding (SLU) tasks for more than three decades since it was released. Recent state-of-the-art neural models have obtained F1-scores near 98% on the task of slot filling. We developed a rule-based grammar for the ATIS domain that achieves a 95.82% F1-score on our evaluation set. In the process, we furthermore discovered numerous shortcomings in the ATIS corpus annotation, which we have fixed. This paper presents a detailed account of these shortcomings, our proposed repairs, our rulebased grammar and the neural slot-filling architectures associated with ATIS. We also rationally reappraise the motivations for choosing a neural architecture in view of this account. Fixing the annotation errors results in a relative error reduction of between 19.4 and 52% across all architectures. We nevertheless argue that neural models must play a different role in ATIS dialogues because of the latter’s lack of variety. 1 Introduction Slot filling has received a great deal of recent attention from the SLU community. Typically, it is characterized as a sequence labeling problem in which certain tokens are identified as fillers that contribute argument values to a meaning representation through “slot” positions in the utterance. Wang et al. (2011) first used conditional random fields (CRF) for slot filling. A few years later, inspired by the success of recurrent neural networks (RNN) in language modeling (Mikolov et al., 2011), Mesnil et al. (2013) developed the first RNN slot filler that achieved a relative error reduction of 14%. Subsequently, different variations of RNN such as LSTM (Yao et al., 2014) were developed for slot filling, followed by encoder-decoder models that could utilize information from the entire sentence (Kurata et al., 2016), both of which avail themselves of an attention mechanism (Zhu and Yu, 2017; Li et al., 2018). As recently as Wang et al. (2018), Deep Reinforcement Learning (DRL) has been proposed as a way to refine encoder-decoder models on sparsely distributed tags; this has achieved the highest reported performance so far. This development has taken place in parallel, however, with work that has used qualitative error analyses to cast doubt on the continued use of ATIS as a benchmark for progress in slot filling. Most recently, B´echet and Raymond (2018) conclude that ATIS is simply too “shallow” to offer anything of additional substance for DNN-based architectures to achieve, formulating a three-way taxonomy of errors in the reference annotation for the ATIS corpus that account for roughly half of the remaining errors still faced by state-of-the-art slot filling models. Even prior to the recent popularity of neural architectures, Tur et al. (2010) cited a problem with earlier n-gram-based modeling approaches, which tended to fit every utterance into a known sample without regard to domain knowledge or aspects of global context that could override local n-gram contexts. 
We present here: (1) a thorough taxonomy of ATIS annotation errors, reminiscent of the taxonomy of slot-filling errors in B´echet and Raymond (2018), (2) a repaired version of the ATIS reference annotation, (3) a freely available rule-based grammar of the ATIS domain (http://www.ale.cs.toronto.edu/grammars/atis.pl) that offers an alternative to a language-modeling-based approach, incorporating both domain knowledge and non-local inference as advocated for by Tur et al. (2010), (4) an experimental trial in which five recent neural architectures are evaluated on the repaired ATIS annotation alongside the rule-based grammar, and (5) an analysis of the experimental results that, while broadly supporting the conclusions of B´echet and Raymond (2018), attempts to circumscribe the possible meaning of "shallow" more precisely. Crucial to our experimental results and our conclusions is a recent, independent modification of the ATIS corpus (Zhu and Yu, 2018) that inadvertently exposes some of what neural approaches are modeling with respect to slot fillers.

2 ATIS Corpus

2.1 Dataset The ATIS Spoken Language Systems Pilot Corpus (Hemphill et al., 1990) contains utterances of users asking flight-related questions that could be answered by a relational query search over the ATIS database. For the task of slot filling, only the text part of the corpus is used. Generally, 4978 Class A utterances in the ATIS-2 and ATIS-3 corpora are used as the training set, and 893 utterances from ATIS-3 Nov93 and Dec94 are selected as the testing set. Developers may randomly split the 4978 utterances into a training set (for us, 90%) and a development test set (10%). The text data are converted to a format suitable for the slot filling task. Each token of an utterance is considered to be a potential slot, and each slot should contain a tag, with an optional Concept part and a mandatory Named Entity (NE) part, in the In/Out/Begin (IOB) format. Mesnil et al. (2013) converted the relational queries into that format using an automatic process. Table 1 is an annotated example.

Table 1: Example of an utterance in ATIS (Index 105).
  Words:    American      airlines      leaving   Phoenix
  IOB:      B             I             O         B
  Concept:  -             -             -         fromloc
  NE:       airline name  airline name  -         city name

The entire dataset contains 9 distinct concepts and 44 NEs that yield 127 total possible tags. For ease of reference, we number both the training and test sets in lexicographical order here, starting from 0.

2.2 Errors in Annotation B´echet and Raymond (2018) identify three sources of error: annotations missing slots entirely or transposing labels, for example, between departure and arrival cities; determinately reading an utterance that is naturally ambiguous (no system should be penalized for having guessed another valid reading); and labeling only the first of several instances of the same NE in the same utterance (systems that label more than one are penalized).

Table 2: Annotation Mistakes by Dataset.
  Split              Train: total   %      Test: total   %
  total utterances   4978           100    893           100
  incorrect          132            2.61   46            5.15
  UNK                46             0.92   46            5.15
  total slots        16561          100    2837          100
  incorrect          188            1.14   65            2.29

1.14% of the slots in the training set are incorrectly labeled overall, as are 2.29% of those in the test set. These percentages are significant, given that state-of-the-art systems commonly report error rates of between 1.2% and 6%. Note that there are almost twice as many errors in the test set as in the training set on a percentage basis. About half of these are ambiguous slots arising from the use of "UNK" for hapax legomena. In these 46 cases, the slot cannot be determined without knowledge of what the word originally was. Most egregiously, five of utterances 785–791 are "What is UNK?" and the other two are "What is a UNK?". The test set is unique in other respects. Six of its slot labels (B-booking class, B-flight, B-stoploc.airport code, I-state name, I-flight number and B-compartment) are not found in the training set. Except for B-stoploc.airport code, the other five are NE annotation errors. The test set also handles the word noon differently: four instances are treated as a period of day, whereas all occurrences of noon in the training set are treated as a time.

2.3 Taxonomy We have created our own error classification (Figure 1 and Table 3). Not all of these classes map onto one of the three in B´echet and Raymond (2018). The taxonomy and errors were labelled independently by two annotators, who were then forced to reconcile where they disagreed. (After fixing ATIS, there were 4932 training utterances (16419 slots) and 847 test utterances (2665 slots) left.)

• Incorrect IOB Segmentation In the test set, 309: "List airports in Arizona, Nevada and California please." unifies the two states Arizona and Nevada into one slot, which was annotated as B-state name and I-state name. Corrected.
• Wrong Word Selection Some slots select the wrong words. Utterance 1374: "I need information on ground transportation between airport and downtown in the city of Boston" labels the whole phrase city of Boston as toloc.city name, whereas elsewhere only Boston is labeled. Chose the dominant word sequence.
• Missing Labels Words that should be annotated are not (equivalent to the label O, i.e., outside of any slot). For example, in 29: "All am flights departing Pittsburgh arriving Denver.", the abbreviation 'am' should have been labeled B-depart time.period of day, but was not annotated. Annotation added.
• Concept Mistakes These are the most prevalent annotation errors. For example, "Denver" in 40: "All flights before 10 am Boston Denver." was annotated as B-fromloc.city name, where it should have been toloc. Includes ambiguities that are not consistently annotated (we chose the dominant annotation) as well as unambiguous fillers that bear more than one concept role (which the annotation standard does not permit; these were discarded).
• NE Mistakes These appear in both the training and the test set. For example, in utterance 29: "Flights from Denver to Westchester county New York weekdays.", New York means the state of New York, not New York City, but its NE was labeled as a city name instead of state name. Corrected.
• Out-of-Vocabulary (UNK) These are found in the training set (e.g., 4394: "What is ⟨unk⟩?") and the test set, as discussed above. Discarded the utterance.
Figure 1: Taxonomical classes, examples, and repair actions taken.

Table 3: Annotation Mistakes by Taxonomic Class.
  Class      Train: utterances  instances    Test: utterances  instances
  IOB        2                  2            2                 2
  Selection  22                 22           1                 1
  Missing    29                 30           4                 4
  Concept    72                 120          28                46
  NE         12                 13           11                11
  UNK        46                 46           46                46

3 Rule-based Grammar In addition to repairing the ATIS annotations, we developed a rule-based grammar for use as a
ALE compiles grammars into an all-paths chart parser that produces phrase structure forests. We use the logic programming extension to project words into individual IOB slots, given a parsing chart. The grammar does not generate a spanning parse for utterances with multiple sentences (e.g., 3612:“US air 269 leaving Boston at 428. What is the arrival time in Baltimore?”). These, as well as single sentences for which no spanning edge is found, are instead projected using a covering of edges that is selected with the greedy algorithm shown in Algorithm 1. This algorithm prefers longer spans to shorter spans and breaks ties by selecting one edge uniformly at random. Algorithm 1 GREEDY(edges) long ←a longest edge in edges L ←edges finish before long R ←edges start after long return GREEDY(L) + long + GREEDY(R) The grammar uses 601 lexical entries (one or more for each of the 573 word types in ATIS), 643 feature structure types, 22 features and 330 phrase structure rules. The feature structure types that we defined were for two major purposes: 168 syntactic types that label the nodes of a parse tree, and 475 types that declare appropriate values for features. Every syntactic node label has features that refer to a list of slot fillers (TAGS) and a list of tokens (WORDS) in the subtree at which it is rooted. Among the 330 grammar rules, 65 rules are used to capture multi-word expressions (MWE), which ALE does not otherwise support. Only 161 rules are designed specifically for ATIS, with the remaining 104 being general rules of English grammar. Nouns are further divided into different ATIS-specific slot values such as cities, states and airlines. Verb semantics are categorized based on their indication of direction. “Directional” verbs such as ‘depart’ and ‘land’ are distinguished from the others. Prepositions are further split into timerelated, direction-related, location-related, costrelated, and other special functions. 4 Experiments We reimplemented or, in one case (Zhu and Yu, 2017), obtained from the authors code for the models mentioned in Table 4, which also shows the F1-scores reported there. The hyperparameters were set to those that are reported in the papers has having the best performance. Each model was trained for 100 epochs, and then the epoch 5506 Model Reported F1 score RNN (Mesnil et al., 2013) 93.98 LSTM (Yao et al., 2014) 95.08 Encoder-Decoder (Kurata et al., 2016) 95.66 Encoder-Decoder with focus (Zhu and Yu, 2017) 95.79 Self-attentive BiLSTM3 (Li et al., 2018) 96.35 Encoder-Decoder DRL (Wang et al., 2018) 97.86 Table 4: Reported Performance of Models. with the highest development test set performance was chosen to evaluate on the ATIS test set. We were unable to reproduce comparable figures for the DRL scheme of (Wang et al., 2018) and so it has been excluded from our analysis. Our own results are reported in Table 5. The column, Test, reports results on the original ATIS test set. Fixed reports on the ATIS test set after all of the repairs mentioned in Section 2.3 were fixed. UNK reports on the ATIS test, with all repairs except the exclusion of utterances with ambiguous occurrences of UNK. Finally, X reports on a corpus, which, similar to the ATIS X test set presented in Zhu and Yu (2018), modified the ATIS test set by replacing every NE with a different NE from the same epistemic class in a travel domain ontology defined by them, such that the new NE has never occurred with the same concept. 
For example, the city “Toronto” appears as a fromloc.city name and toloc.city name, but never as a stoploc.city name in ATIS. So “Toronto” is used in Corpus X wherever the reference annotation requires a stoploc.city name. Zhu and Yu (2018) did this in order to experiment with a neural architecture that trains first on a coarse classification and then fine-tunes to the ATIS reference annotation in a later step, but the F1 drops on Corpus X are a result of overfitting in which the model effectively learns that Toronto is never a stopover city. Our Corpus X differs from their ATIS X test set only in that we first corrected their ontology in light of our taxonomy of annotation errors. Because the rule-based parser uses an all-paths algorithm, its F1-score is reported in three ways. Rand(om) uses the greedy Algorithm 1 in which 3The number reported here is with access to the sentence intent labels disabled. In our own runs, reported in Table 5, we disable this model’s access to intent labels as well, in order to make a make a more controlled comparison to the other models, none of which use intent labels. Using intent labels, Li et al. (2018) report an F1 score of 96.52%. 4The rule-based grammar developer did not have access to the test-domain utterances, and so the grammar replaces OOV test set vocabulary with UNK. These are counted as failures in our statistics unless the UNK token is assigned the correct tag. Model Test Fixed UNK X RNN Complete 93.56 95.83 94.71 92.3 Full Parse 93.8 96.8 95.65 93.49 LSTM Complete 93.86 96.47 95.54 93.29 Full Parse 94.22 97.44 96.4 94.57 Encoder-Decoder Complete 94.75 95.77 96.84 91.85 Full Parse 94.89 96.49 97.55 92.74 Self-att. BiLSTM Complete 94.87 96.99 96.05 93.60 Full Parse 95.06 98.02 97.25 94.72 Focus Complete 95.02 97.61 96.42 84.31 Full Parse 95.19 98.10 96.86 83.81 Rule-Based4 rand. 93.00 95.82 94.47 92.92 scep. 90.91 94.10 92.44 90.68 cred. 94.33 96.66 95.84 94.35 Full Parse rand. 95.61 98.62 97.19 95.49 scep. 94.81 97.93 96.41 94.59 cred. 96.68 99.10 98.31 96.51 Full Parse % 80.87 81.81 80.87 80.99 Table 5: Experimental Results. ties are broken at random. Scep(tical) only counts successes that every member of a tie produces. Cred(ulous) counts successes that any member of a tie produces. The sceptical and credulous scores bracket the possible parse selection strategies. Full Parse restricts the evaluation to those utterances (the percentage of which appears in the final row) for which one or more complete parses was found by the rule-based grammar. 5 Analysis and Discussion One might expect that recent neural approaches could use their word vector representations to generalize better to out-of-domain utterances than the earlier models that Tur et al. (2010) referred to. In fact, the results of the previous section on Corpus X clearly indicate that these recent architectures overfit their language models to filler content itself, overshadowing any potential gain from better contextual inference. ATIS is “shallow” in that it offers only a small amount of training data and an overall lack of lexical and syntactic variety. What is even more telling is that the performance of these recent architectures on Corpus X is so bad that it falls within the F1 range of our rule-based grammar. The advantages promised by nascent statistical approaches to natural language understanding when rule-based grammars were still in vogue were primarily centred around: (1) portability and (2) coverage. 
As to portability, recent neural approaches to a corpus as small as ATIS necessarily surrender a certain amount of it for the sake of jointly modeling knowledge of language and domain-specific knowledge — a laudable goal on substantially larger training sets. Our experience with industrial partners suggests, however, that extensibility, in which developers wish to roll out the same domain but to a further extent, 5507 such as with more cities, more airports etc. in the case of the ATIS corpus, is of equal importance to them as portability to different domains. There, a rule-based grammar would only be the preferred option if augmenting the filler vocabulary were all that was at stake. It would not be the preferred option if the extension were in the direction of much greater syntactic variety. That brings us to coverage. The relative error reduction observed after fixing the ATIS annotation generally fails to attain the 50% predicted by B´echet and Raymond (2018). Nevertheless, those repairs put the neural models close to the rulebased grammar’s range on utterances for which it generates a full syntactic parse.5 Our greedy parse selection approach is necessitated by the mere ∼80% coverage of the ATIS domain with our rule-based grammar. Neural parsing architectures do exist, and already provide better coverage than 80%. These arguments taken together suggest that, while there may be very little remaining reward to addressing the slot-filling problem with ATIS, there is still a very perceptible parsing problem, even on a corpus of ATIS’s size and lack of syntactic variety. ATIS is not syntactically annotated; to our knowledge, no syntactically annotated corpus in the travel reservation domain exists. The development of such a corpus, the transfer of learning between parsers on different domains of this size, and the appropriation of such a portable parser to slot filling, remain the most promising direction of further research for slot filling, in our view. In this endeavour, ATIS may still play a very prominent role. References F. B´echet and C. Raymond. 2018. Is ATIS too shallow to go deeper for benchmarking spoken language understanding models? In Interspeech. B. Carpenter and G. Penn. 1994. The Attribute Logic Engine user’s guide, version 2.0. Laboratory for Computational Linguistics Technical Report, Carnegie Mellon University, Pittsburgh. C. T. Hemphill, J. J. Godfrey, and G. R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings 5Note that on the subset of ATIS test sentences for which our rule-based grammar does obtain a full parse, the neural models also improve, and do attain the predicted 50% RER on the repaired versions of those sentences. of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. G. Kurata, B. Xiang, B. Zhou, and M. Yu. 2016. Leveraging sentence-level information with encoder LSTM for semantic slot filling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2077–2083. C. Li, L. Li, and J. Qi. 2018. A self-attentive model with gate mechanism for spoken language understanding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3824–3833. G. Mesnil, X. He, L. Deng, and Y. Bengio. 2013. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In Interspeech, pages 3771–3775. T. Mikolov, S. Kombrink, L. Burget, J. ˇCernock`y, and S. Khudanpur. 2011. 
Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 5528–5531. IEEE. G. Tur, D. Hakkani-T¨ur, and L. Heck. 2010. What is left to be understood in ATIS? In Spoken Language Technology Workshop (SLT), 2010 IEEE, pages 19– 24. IEEE. Y. Wang, A. Patel, and H. Jin. 2018. A new concept of deep reinforcement learning based augmented general tagging system. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1683–1693. Y.-Y. Wang, L. Deng, and A. Acero. 2011. Semantic frame-based spoken language understanding. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 41–91. K. Yao, B. Peng, Y. Zhang, D. Yu, G. Zweig, and Y. Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 189–194. IEEE. S. Zhu and K. Yu. 2017. Encoder-decoder with focusmechanism for sequence labelling based spoken language understanding. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5675–5679. IEEE. S. Zhu and K. Yu. 2018. Concept transfer learning for adaptive language understanding. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 391–399.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5508–5521 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5508 Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming Caio Corro Ivan Titov ILCC, School of Informatics, University of Edinburgh ILLC, University of Amsterdam [email protected] [email protected] Abstract We treat projective dependency trees as latent variables in our probabilistic model and induce them in such a way as to be beneficial for a downstream task, without relying on any direct tree supervision. Our approach relies on Gumbel perturbations and differentiable dynamic programming. Unlike previous approaches to latent tree learning, we stochastically sample global structures and our parser is fully differentiable. We illustrate its effectiveness on sentiment analysis and natural language inference tasks. We also study its properties on a synthetic structure induction task. Ablation studies emphasize the importance of both stochasticity and constraining latent structures to be projective trees. 1 Introduction Discrete structures are ubiquitous in the study of natural languages, for example in morphology, syntax and discourse analysis. In natural language processing, they are often used to inject linguistic prior knowledge into statistical models. For examples, syntactic structures have been shown beneficial in question answering (Cui et al., 2005), sentiment analysis (Socher et al., 2013), machine translation (Bastings et al., 2017) and relation extraction (Liu et al., 2015), among others. However, linguistic tools producing these structured representations (e.g., syntactic parsers) are not available for many languages and not robust when applied outside of the domain they were trained on (Petrov et al., 2010; Foster et al., 2011). Moreover, linguistic structures do not always seem suitable in downstream applications, with simpler alternatives sometimes yielding better performance (Wang et al., 2018). Indeed, a parallel line of work focused on inducing task-specific structured representations of language (Naradowsky et al., 2012; Yogatama et al., 2017; Kim et al., 2017; Liu and Lapata, 2018; Niculae et al., 2018). In these approaches, no syntactic or semantic annotation is needed for training: representation is induced from scratch in an end-to-end fashion, in such a way as to benefit a given downstream task. In other words, these approaches provide an inductive bias specifying that (hierarchical) structures are appropriate for representing a natural language, but do not make any further assumptions regarding what the structures represent. Structures induced in this way, though useful for the task, tend not to resemble any accepted syntactic or semantic formalisms (Williams et al., 2018a). Our approach falls under this category. In our method, projective dependency trees (see Figure 3 for examples) are treated as latent variables within a probabilistic model. We rely on differentiable dynamic programming (Mensch and Blondel, 2018) which allows for efficient sampling of dependency trees (Corro and Titov, 2019). Intuitively, sampling a tree involves stochastically perturbing dependency weights and then running a relaxed form of the Eisner dynamic programming algortihm (Eisner, 1996). 
A sampled tree (or its continuous relaxation) can then be straightforwardly integrated in a neural sentence encoder for a target task using graph convolutional networks (GCNs, Kipf and Welling, 2017). The entire model, including the parser and GCN parameters, are estimated jointly while minimizing the loss for the target task. What distinguishes us from previous work is that we stochastically sample global structures and do it in a differentiable fashion. For example, the structured attention method (Kim et al., 2017; Liu and Lapata, 2018) does not sample entire trees but rather computes arc marginals, and hence does not faithfully represent higher-order statistics. Much of other previous work relies either on reinforce5509 ment learning (Yogatama et al., 2017; Nangia and Bowman, 2018; Williams et al., 2018a) or does not treat the latent structure as a random variable (Peng et al., 2018). Niculae et al. (2018) marginalizes over latent structures, however, this necessitates strong sparsity assumptions on the posterior distributions which may inject undesirable biases in the model. Overall, differential dynamic programming has not been actively studied in the task-specific tree induction context. Most previous work also focused on constituent trees rather than dependency ones. We study properties of our approach on a synthetic structure induction task and experiment on sentiment classification (Socher et al., 2013) and natural language inference (Bowman et al., 2015). Our experiments confirm that the structural bias encoded in our approach is beneficial. For example, our approach achieves a 4.9% improvement on multi-genre natural language inference (MultiNLI) over a structure-agnostic baseline. We show that stochastisticity and higher-order statistics given by the global inference are both important. In ablation experiments, we also observe that forcing the structures to be projective dependency trees rather than permitting any general graphs yields substantial improvements without sacrificing execution time. This confirms that our inductive bias is useful, at least in the context of the considered downstream applications.1 Our main contributions can be summarized as follows: 1. we show that a latent tree model can be estimated by drawing global approximate samples via Gumbel perturbation and differentiable dynamic programming; 2. we demonstrate that constraining the structures to be projective dependency trees is beneficial; 3. we show the effectiveness of our approach on two standard tasks used in latent structure modelling and on a synthetic dataset. 2 Background In this section, we describe the dependency parsing problem and GCNs which we use to incorporate latent structures into models for downstream tasks. 1The Dynet code for differentiable dynamic programming is available at https://github.com/FilippoC/ diffdp. 2.1 Dependency Parsing Dependency trees represent bi-lexical relations between words. They are commonly represented as directed graphs with vertices and arcs corresponding to words and relations, respectively. Let x = x0 . . . xn be an input sentence with n words where x0 is a special root token. We describe a dependency tree of x with its adjacency matrix T ∈{0, 1}n×n where Th,m = 1 iff there is a relation from head word xh to modifier word xm. We write T (x) to denote the set of trees compatible with sentence x. We focus on projective dependency trees. 
A dependency tree T is projective iff for every arc Th,m = 1, there is a path with arcs in T from xh to each word xi such that h < i < m or m < i < h. Intuitively, a tree is projective as long as it can be drawn above the words in such way that arcs do not cross each other (see Figure 3). Similarly to phrase-structure trees, projective dependency trees implicitly encode hierarchical decomposition of a sentence into spans (‘phrases’). Forcing trees to be projective may be desirable as even flat span structures can be beneficial in applications (e.g., encoding multi-word expressions). Note that actual syntactic trees are also, to a large degree, projective, especially for such morphologically impoverished languages as English. Moreover, restricting the space of the latent structures is important to ease their estimation. For all these reasons, in this work we focus on projective dependency trees. In practice, a dependency parser is given a sentence x and predicts a dependency tree T ∈T (x) for this input. To this end, the first step is to compute a matrix W ∈Rn×n that scores each dependency. In this paper, we rely on a deep dotted attention network. Let e0 . . . en be embeddings associated with each word of the sentence.2 We follow Parikh et al. (2016) and compute the score for each head-modifier pair (xh, xm) as follows: Wh,m =MLPhead(eh)⊤MLPmod(em)+bh-m, (1) where MLPhead and MLPmod are multilayer perceptrons, and bh-m is a distance-dependent bias, letting the model encode preference for long or short-distance dependencies. The conditional probability of a tree pθ(T |x) is defined by a log2 The embeddings can be context-sensitive, e.g., an RNN state. 5510 linear model: pθ(T |x) = exp(P h,m Wh,mTh,m) P T ′∈T (x) exp(P h,m Wh,mT ′ h,m). When tree annotation is provided in data D, networks parameters θ are learned by maximizing the log-likelihood of annotated trees (Lafferty et al., 2001). The highest scoring dependency tree can be produced by solving the following mathematical program: T = arg max T ∈T (x) X h,m Wh,mTh,m. (2) If T (x) is restricted to be the set of projective dependency trees, this can be done efficiently in O(n3) using the dynamic programming algorithm of Eisner (1996). 2.2 Graph Convolutional Networks Graph Convolutional Networks (GCNs, Kipf and Welling, 2017; Marcheggiani and Titov, 2017) compute context-sensitive embeddings with respect to a graph structure. GCNs are composed of several layers where each layer updates vertices representations based on the current representations of their neighbors. In this work, we fed the GCN with word embeddings and a tree sample T . For each word xi, a GCN layer produces a new representation relying both on word embedding of xi and on embeddings of its heads and modifiers in T . Multiple GCN layers can be stacked on top of each other. Therefore, a vertex representation in a GCN with k layers is influenced by all vertices at a maximum distance of k in the graph. Our GCN is sensitive to arc direction. More formally, let E0 = e0 ⊙· · · ⊙en, where ⊙is the column-wise concatenation operator, be the input matrix with each column corresponding to a word in the sentence. At each GCN layer t, we compute: Et+1 = σ  f(Et) + g(Et)T + h(Et)T ⊤ , where σ is an activation function, e.g. ReLU. Functions f(), g() and h() are distinct multilayer perceptrons encoding different types of relationships: self-connection, head and modifier, respectively (hyperparameters are provided in Appendix A). 
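Equation 1 and the GCN layer update can be made concrete with a short sketch. The PyTorch code below is only an illustration under our own assumptions (layer sizes, clipping of the distance bias to a fixed window); it is not the authors' implementation, which uses Dynet.

import torch
import torch.nn as nn

class ArcScorer(nn.Module):
    """W[h, m] = MLP_head(e_h)^T MLP_mod(e_m) + b_{h-m}  (Equation 1)."""
    def __init__(self, dim, max_dist=10):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mod = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # one learned bias per (clipped) signed head-modifier distance (our bucketing)
        self.dist_bias = nn.Parameter(torch.zeros(2 * max_dist + 1))
        self.max_dist = max_dist

    def forward(self, emb):                        # emb: (n, dim) word embeddings
        n = emb.size(0)
        heads, mods = self.head(emb), self.mod(emb)
        scores = heads @ mods.t()                  # (n, n) head-modifier dot products
        dist = torch.arange(n)[None, :] - torch.arange(n)[:, None]
        dist = dist.clamp(-self.max_dist, self.max_dist) + self.max_dist
        return scores + self.dist_bias[dist]       # add the distance-dependent bias

class GCNLayer(nn.Module):
    """E_{t+1} = sigma(f(E_t) + g(E_t) T + h(E_t) T^T), with columns of E = words."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Linear(dim, dim)               # self-connection
        self.g = nn.Linear(dim, dim)               # head relation
        self.h = nn.Linear(dim, dim)               # modifier relation
    def forward(self, E, T):                       # E: (dim, n), T: (n, n) adjacency
        out = self.f(E.t()).t() + self.g(E.t()).t() @ T + self.h(E.t()).t() @ T.t()
        return torch.relu(out)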
Note that each GCN layer is easily parallelizable on GPU both over vertices and over batches, either with latent or predefined structures. (a) x T y (b) x T x′ T ′ y Figure 1: The two directed graphical models used in this work. Shaded and unshaded nodes represent observable and unobservable variables, respectively. (a) In the sentence classification task, the output y is conditioned on the input and the latent tree. (b) In the natural language inference task, the output is conditioned on two sentences and their respective latent trees. 3 Structured Latent Variable Models In the previous section, we explained how a dependency tree is produced for a given sentence and how we extract features from this tree with a GCN. In our model, we assume that we do not have access to gold-standard trees and that we want to induce the best structure for the downstream task. To this end, we introduce a probability model where the dependency structure is a latent variable (Section 3.1). The distribution over dependency trees must be inferred from the data (Section 3.2). This requires marginalization over dependency trees during training, which is intractable due to the large search space.3 Instead, we rely on Monte-Carlo (MC) estimation. 3.1 Graphical Model Let x be the input sentence, y be the output (e.g. sentiment labelling) and T (x) be the set of latent structures compatible with input x. We construct a directed graphical model where x and y are observable variables, i.e. their values are known during training. However, we assume that the probability of the output y is conditioned on a latent tree T ∈T (x), a variable that is not observed during training: it must be inferred from the data. Formally, the model is defined as follows: pθ(y|x) = Epθ(T |x)[p(y|x, T )] (3) = X T ∈T (x) pθ(T |x) × pθ(y|x, T ), where θ denotes all the parameters of the model. An illustration of the network is given in Figure 1a. 3This marginalization is a sum of the network outputs over all possible projective dependency trees. We cannot rely on the usual dynamic programming approach because we do not make any factorization assumptions in the GCN. 5511 3.2 Parameter Estimation Our probability distributions are parameterized by neural networks. Their parameters θ are learned via gradient-based optimization to maximize the log-likelihood of (observed) training data. Unfortunately, estimating the log-likelihood of observation requires computing the expectation in Equation 3, which involves an intractable sum over all valid dependency trees. Therefore, we propose to optimize a lower bound on the log-likelihood, derived by application of Jensen’s inequality which can be efficiently estimated with the Monte-Carlo (MC) method: log pθ(yi|xi) = log ET ∼pθ(T |xi)[pθ(yi|T , xi)] ≥ET ∼pθ(T |xi)[log pθ(yi|T , xi)]. (4) However, MC estimation introduces a nondifferentiable sampling function T ∼pθ(T |xi) in the gradient path. Score function estimators have been introduced to bypass this issue but suffer from high variance (Williams, 1987; Fu, 2006; Schulman et al., 2015). Instead, we propose to reparametrize the sampling process (Kingma and Welling, 2014), making it independent of the learned parameter θ : in such case, the sampling function is outside of the gradient path. To this end, we rely on the Perturb-and-MAP framework (Papandreou and Yuille, 2011). 
Specifically, we perturb the potentials (arc weights) with samples from the Gumbel distribution and compute the most probable structure with the perturbed potentials: Gh,m ∼G(0, 1), (5) f W = W + G, (6) T = arg max T ∈T (x) X h,m Th,mf Wh,m. (7) Each element of the matrix G ∈Rn×n contains random samples from the Gumbel distribution4 which is independent from the network parameters θ, hence there is no need to backpropagate through this path in the computation graph. Note that, unlike the Gumbel-Max trick (Maddison et al., 2014), sampling with Perturb-and-MAP is approximate, as the noise is factorizable: we add noise to individual arc weights rather than to scores of entire trees (which would not be tractable). This 4That is Gh,m = −log(−log(Uh,m)) where Uh,m is sampled from the uniform distribution on the interval (0, 1). Algorithm 1 This function computes the chart values for items of the form [i, j, →, ⊥] by searching the set of antecedents that maximizes its score. Because these items assume a dependency from xi to xj, we add Wi,h to the score. 1: function BUILD-URIGHT(i, j, f W ) 2: s ←null-initialized vec. of size j −i 3: for i ≤k < j do 4: si−k ←[i, k, →, ⊤] + [k + 1, j, ←, ⊤] 5: b ←ONE-HOT-ARGMAX(s) 6: BACKPTR[i, j, →, ⊥] ←b 7: WEIGHT[i, j, →, ⊥] ←b⊤s + Wj,i Algorithm 2 If item [i, j, →, ⊥] has contributed the optimal objective, this function sets Ti,j to 1. Then, it propagates the contribution information to its antecedents. 1: function BACKTRACK-URIGHT(i, j, T ) 2: Ti,j ←CONTRIB[i, j, →, ⊥] 3: b ←BACKPTR[i, j, →, ⊥] 4: for i ≤k < j do 5: CONTRIB[i, k, →, ⊤] + ←bi−kTi,j 6: CONTRIB[k + 1, j, ←, ⊤] + ←bi−kTi,j is the first source of bias in our gradient estimator. The maximization in Equation 7 can be computed using the algorithm of Eisner (1996). We stress that the marginalization in Equation 3 and MC estimated sum over trees capture high-order statistics, which is fundamentally different from computing edge marginals, i.e. structured attention (Kim et al., 2017). Unfortunately, the estimated gradient of the reparameterized distribution over parse trees is ill-defined (either undefined or null). We tackle this issue in the following section. 4 Differentiable Dynamic Programming Neural networks parameters are learned using (variants of) the stochastic gradient descent algorithm. The gradient is computed using the backpropagation algorithm that rely on partial derivative of each atomic operation in the network.5 The perturb-and-MAP sampling process relies on the dependency parser (Equation 7) which contains ill-defined derivatives. This is due to the usage of constrained arg max operations (Gould et al., 5There are some exception where a sub-derivative is enough, for example for the ReLU non-linearity. 5512 2016; Mensch and Blondel, 2018) in the algorithm of Eisner (1996). Let L be the training loss, backpropagation is problematic because of the following operation: ∂L ∂f W = ∂L ∂T ∂T ∂f W where ∂T ∂f W is the partial derivative with respect to the dependency parser (Equation 7) which is null almost everywhere, i.e. there is no descent direction information. We follow previous work and use a differentiable dynamic programming surrogate (Mensch and Blondel, 2018; Corro and Titov, 2019). The use of the surrogate is the second source of bias in our gradient estimation. 4.1 Parsing with Dynamic Programming The projective dependency parser of Eisner (1996) is a dynamic program that recursively builds a chart of items representing larger and larger spans of the input sentence. 
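Equations 5-7, together with the MC estimate of the bound in Equation 4, reduce training to adding Gumbel noise to the arc scores, decoding with the (relaxed) Eisner algorithm, and averaging log-likelihoods over the sampled trees. A small numpy sketch follows; eisner_argmax and log_likelihood stand in for components described elsewhere in the paper and are our own names, and the single-sample estimate is only illustrative.

import numpy as np

def sample_projective_tree(W, eisner_argmax, rng=None):
    """Perturb-and-MAP sampling of a projective tree (Equations 5-7).

    W is the (n x n) matrix of arc scores; eisner_argmax maps a score matrix
    to the adjacency matrix of the highest-scoring projective tree.
    """
    if rng is None:
        rng = np.random.default_rng()
    # G[h, m] = -log(-log(U[h, m])) with U uniform on (0, 1): Gumbel(0, 1) noise
    U = rng.uniform(low=1e-9, high=1.0, size=W.shape)
    G = -np.log(-np.log(U))
    return eisner_argmax(W + G)        # MAP decoding under the perturbed potentials

def mc_objective(W, y, x, eisner_argmax, log_likelihood, num_samples=1):
    """Monte-Carlo estimate of the lower bound in Equation 4."""
    samples = [sample_projective_tree(W, eisner_argmax) for _ in range(num_samples)]
    return sum(log_likelihood(y, x, T) for T in samples) / num_samples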
Items are of the form [i, j, d, c] where: 0 ≤i ≤j ≤n are the boundaries of the span; d ∈{→, ←} is the direction of the span, i.e. a right span →(resp. left span ←) means that all the words in the span are descendants of xi (resp. xj) in the dependency tree; c ∈{⊤, ⊥} indicates if the span is complete (⊤) or incomplete (⊥) in its direction. In a complete right span, xj cannot have any modifiers on its right side. In a complete left span, xi cannot have any modifier on its left side. A set of deduction rules defines how the items can be deduced from their antecedents. The algorithm consists of two steps. In the first step, items are deduced in a bottom-up fashion and the following information is stored in the chart: the maximum weight that can be obtained by each item and backpointers to the antecedents that lead to this maximum weight (Algorithm 1). In the second step, the backpointers are used to retrieve the items corresponding to the maximum score and values in T are set accordingly (Algorithm 2).6 4.2 Continuous Relaxation The one-hot-argmax operation on line 5 in Algorithm 1 can be written as follows: arg max b≥0 X k bksk s.t. X k bk = 1. 6The second step is often optimized to have linear time complexity instead of cubic. Unfortunately, this change is not compatible with the continuous relaxation we propose. It is known that a continuous relaxation of arg max in the presence of inequality constraints can be obtained by introducing a penalizer that prevents activation of inequalities at the optimal solutions (Gould et al., 2016): arg max b≥0 X k bksk −Ω(b) s.t. X k bk = 1. Several Ωfunctions have been studied in the literature for different purposes, including logarithmic and inverse barriers for the interior point method (Den Hertog et al., 1994; Potra and Wright, 2000) and negative entropy for deterministic annealing (Rangarajan, 2000). When using negative entropy, i.e. Ω(b) = P k bk log bk, solving the penalized one-hot-argmax has a closed form solution that can be computed using the softmax function (Boyd and Vandenberghe, 2004), that is: bk = exp(sk) P k′ exp(sk′). Therefore, we replace the non-differentiable one-hot-argmax operation in Algorithm 1 with a softmax in order to build a smooth and fully differentiable surrogate of the parsing algorithm. 5 Controlled Experiment We first experiment on a toy task. The task is designed in such a way that there exists a simple projective dependency grammar which turns it into a trivial problem. We can therefore perform thorough analysis of the latent tree induction method. 5.1 Dataset and Task The ListOps dataset (Nangia and Bowman, 2018) has been built specifically to test structured latent variable models. The task is to compute the result of a mathematical expression written in prefix notation. It has been shown easy for a Tree-LSTM that follows the gold underlying structure but most latent variable models fail to induce it. Unfortunately, the task is not compatible with our neural network because it requires propagation of information from the leafs to the root node, which is not possible for a GCN with a fixed number of layers. Instead, we transform the computation problem into a tagging problem: the task is to tag the valency of operations, i.e. the number of operands they have. We transform the original unlabelled binary phrase-structure into a dependency structure by 5513 following a simple head-percolation table: the head of a phrase is always the head of its left argument. 
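The substitution of a softmax for the one-hot-argmax can be illustrated on the BUILD-URIGHT function of Algorithm 1. The sketch below assumes chart dictionaries WEIGHT and BACKPTR keyed by (i, j, direction, completeness) as in the pseudocode; the key encoding ('C' for complete, 'I' for incomplete) is ours.

import numpy as np

def softmax(s):
    """Relaxed one-hot-argmax: b_k = exp(s_k) / sum_k' exp(s_k')."""
    z = np.exp(s - s.max())            # subtract the max for numerical stability
    return z / z.sum()

def build_uright(i, j, W_tilde, WEIGHT, BACKPTR):
    """Relaxed BUILD-URIGHT for the incomplete right item [i, j, ->, incomplete].

    The hard one-hot-argmax over split points is replaced by a softmax, so the
    stored weight becomes a convex combination of antecedent scores.
    """
    s = np.array([WEIGHT[i, k, '->', 'C'] + WEIGHT[k + 1, j, '<-', 'C']
                  for k in range(i, j)])
    b = softmax(s)                                   # soft backpointer distribution
    BACKPTR[i, j, '->', 'I'] = b
    # arc x_i -> x_j; Algorithm 1 as printed writes this term as W[j, i]
    WEIGHT[i, j, '->', 'I'] = float(b @ s) + W_tilde[i, j]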
The resulting dependencies represent two kinds of relation: operand to argument and operand to closing parenthesis (Figure 2). Therefore, this task is trivial for a GCN trained with gold dependencies: it simply needs to count the number of outgoing arcs minus one (for operation nodes). In practice, we observe 100% tagging accuracy with the gold dependencies. 5.2 Neural Parametrization We build a simple network where a BiLSTM is followed by deep dotted attention which computes the dependency weights (see Equation 1). In these experiments, unlike Section 6, GCN does not have access to input tokens (or corresponding BiLSTM states): it is fed ‘unlexicalized’ embeddings (i.e. the same vector is used as input for every token).7 Therefore, the GCN is forced to rely on tree information alone (see App. A.1 for hyperparameters). There are several ways to train the neural network. First, we test the impact of MC estimation at training. Second, we choose when to use the continuous relaxation. One option is to use a StraightThrough estimator (ST, Bengio, 2013; Jang et al., 2017): during the forward pass, we use a discrete structure as input of the GCN, but during the backward pass we use the differentiable surrogate to compute the partial derivatives. Another option is to use the differentiable surrogate for both passes (Forward relaxed). As our goal here is to study induced discrete structures, we do not use relaxations at test time. We compare our model with the non-stochastic version, i.e. we set G = 0. 5.3 Results The attachment scores and the tagging accuracy are provided in Table 1. We draw two conclusions from these results. First, using the ST estimator hurts performance, even though we do not relax at test time. Second, the MC approximations, unlike the non-stochastic model, produces latent structures almost identical to gold trees. The non-stochastic version is however relatively successful in terms of tagging accuracy: we hypothesize that the LSTM model solved the problem and 7To put it clearly, we have two sets of learned embeddings: a set of lexicalized embeddings used for the input of the BiLSTM and a single unlexicalized embedding used for the input of the GCN. * (max 3 4 (med 9 3 ) 1 ) 4 2 - - - Figure 2: An example from the ListOps dataset. Numbers below operation tokens are valencies. (top) the original unlabelled phrase-structure. (bottom) our dependency conversion: each dependency represents either an operand to argument relation or a closing parenthesis relation. Acc. Att. Latent tree - G = 0 Forward relaxed 98.1 83.2 Straight-Through 70.8 33.9 Latent tree - MC training Forward relaxed 99.6 99.7 Straight-Through 77.0 83.2 Table 1: ListOps results: tagging accuracy (Acc.) and attachment score for the latent tree grammar (Att.). uses trees as messages to communicate solutions. See extra analysis in App. C.8 6 Real-world Experiments We evaluate our method on two real-world problems: a sentence comparison task (natural language inference, see Section 6.1) and a sentence classification problem (sentiment classification, see Section 6.2). Besides using the differentiable dynamic programming method, our approach also differs from previous work in that we use GCNs followed by a pooling operation, whereas most previous work used Tree-LSTMs. Unlike Tree-LSTMs, GCNs are trivial to parallelize over batches on GPU. 6.1 Natural Language Inference The Natural Language Inference (NLI) problem is a task developed to test sentence understanding capacity. 
Given a premise sentence and a hypothesis sentence, the goal is to predict a relation between them: entailment, neutral or contradiction. We evaluate on the Stanford NLI (SNLI) and the 8 This results are not cherry-picked to favor the MC model. We observed a deviation of ±0.54% in attachment score for the non-stochastic model, whereas, for MC sampling, all except one achieved an attachment score above 99.7 (out of 5 runs). 5514 Acc. #Params Yogatama et al. (2017) *100D SPINN 80.5 2.3M Maillard et al. (2017) LSTM 81.2 161K *Latent Tree-LSTM 81.6 231K Kim et al. (2017) No Intra Attention 85.8 Simple Simple Att. 86.2 *Structured Attention 86.8 Choi et al. (2018) *100D ST Gumbel Tree 82.6 262K *300D ST Gumbel Tree 85.6 2.9M *600D ST Gumbel Tree 86.0 10.3M Niculae et al. (2018) Left-to-right Trees 81.0 Flat 81.7 Treebank 81.7 *SparseMAP 81.9 Liu and Lapata (2018) 175D No Attention 85.3 600K *100D Projective Att. 86.8 1.2M *175D Non-projective Att. 86.9 1.1M This work No Intra Attention 84.4 382K Simple Intra Att. 83.8 582K *Latent Tree + 1 GCN 85.2 703K *Latent Tree + 2 GCN 86.2 1M Table 2: SNLI results and number of network parameters (discarding word embeddings). Stars indicate latent tree models. Multi-genre NLI (MultiNLI) datasets. Our network is based on the decomposable attention (DA) model of Parikh et al. (2016). We induce structure of both the premise and the hypothesis (see Equation 1 and Figure 1b). Then, we run a GCN over the tree structures followed by inter-sentence attention. Finally, we apply max-pooling for each sentence and feed both sentence embeddings into a MLP to predict the label. Intuitively, using GCNs yields a form of intra-attention. See the hyperparameters in Appendix A.2. SNLI: The dataset contains almost 0.5m training instances extracted from image captions (Bowman et al., 2015). We report results in Table 2. Our model outperforms both no intra-attention and simple intra-attention baselines9 with 1 layer 9The attention weights are computed in the same way as scores for tree prediction, i.e. using Equation 1. of GCN (+0.8) or two layers (+1.8). The improvements with using multiple GCN hops, here and on MultiNLI (Table 3b), suggest that higherorder information is beneficial.10 It is hard to compare different tree induction methods as they build on top of different baselines, however, it is clear that our model delivers results comparable with most accurate tree induction methods (Kim et al., 2017; Liu and Lapata, 2018). The improvements from using latent structure exceed these reported in previous work. MultiNLI: MultiNLI is a broad-coverage NLI corpus Williams et al. (2018b): the sentence pairs originate from 5 different genres of written and spoken English. This dataset is particularly interesting because sentences are longer than in SNLI, making it more challenging for baseline models.11 We follow the evaluation setting in Williams et al. (2018b,a): we include the SNLI training data, use the matched development set for early stopping and evaluate on the matched test set. We use the same network and parameters as for SNLI. We report results in Table 3b. The DA baseline (‘No Intra Attention’) performs slightly better (+0.6%) than the original BiLSTM baseline. Our latent tree model significantly improves over our the baseline, either with a single layer GCN (+3.4%) or with a 2-layer GCN (+4.9%). We observe a larger gap than on SNLI, which is expected given that MultiNLI is more complex. We perform extra ablation tests on MultiNLI in Section 6.3. 
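The pipeline just described can be summarized as a single forward pass. In the sketch below, embed, arc_scorer, sample_tree, gcn, inter_attention and classifier are placeholders for the components described in Sections 2-4 and are our own names; the attention and classifier parts follow Parikh et al. (2016) in the actual model.

import torch

def nli_forward(premise, hypothesis, embed, arc_scorer, sample_tree, gcn,
                inter_attention, classifier):
    """One forward pass of the latent-tree NLI model (Figure 1b), as a sketch."""
    reps = []
    for sent in (premise, hypothesis):
        E = embed(sent)                       # (n, d) token embeddings
        W = arc_scorer(E)                     # (n, n) arc scores, Equation 1
        T = sample_tree(W)                    # latent projective tree (Perturb-and-MAP)
        reps.append(gcn(E, T))                # tree-aware token representations
    p, h = inter_attention(reps[0], reps[1])  # decomposable-attention style interaction
    v = torch.cat([p.max(dim=0).values, h.max(dim=0).values], dim=-1)  # max-pool each sentence
    return classifier(v)                      # 3-way label scores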
6.2 Sentiment Classification We experiment on the Stanford Sentiment Classification dataset (Socher et al., 2013). The original dataset contains predicted constituency structure with manual sentiment labeling for each phrase. By definition, latent tree models cannot use the internal phrase annotation. We follow the setting of Niculae et al. (2018) and compare to them in two set-ups: (1) with syntactic dependency trees predicted by CoreNLP (Manning et al., 2014); (2) with latent dependency trees. Results are reported in Table 3a. First, we observe that the bag of bigrams base10 In contrast, multiple hops relying on edge marginals was not beneficial (Liu and Lapata, 2018), personal communication. 11 The average sentence length in SNLI (resp. MultiNLI) is 11.16 (resp. 16.79). There is 21% (resp. 42%) of sentence longer than 15 words in SNLI (resp. MultiNLI). 5515 (a) Socher et al. (2013) Bigram 83.1 Naive Bayes Niculae et al. (2018) CoreNLP 83.2 *Latent tree 84.7 This work CoreNLP 83.8 *Latent tree 84.6 (b) Acc. Williams et al. (2018a) 300D LSTM 69.1 *300D SPINN 66.9 300D Balanced Trees 68.2 *300D ST Gumbel Tree 69.5 *300D RL-SPINN 67.3 This work No Intra Attention 68.1 *Latent tree + 1 GCN 71.5 *Latent tree + 2 GCN 73.0 (c) Match Mis. Baselines No Intra Att 68.5 68.9 Simple Intra Att 67.9 68.4 Left-to-right trees 1 GCN 71.2 71.8 2 GCN 72.3 71.1 Latent head selection model 1 GCN 69.0 69.4 2 GCN 68.7 69.6 Latent tree model 1 GCN 71.9 71.7 2 GCN 73.2 72.9 Table 3: (a) SST results. Stars indicate latent tree models. (b) MultiNLI results. Stars indicate latent tree models. (c) Ablation tests on MultiNLI (results on the matched and mismatched development sets). * My favorite restaurants are always at least a hundred miles away from my house . * We do n’t loan a lot of money . * He had recently seen pictures depicting those things . Figure 3: Examples of trees induced on the matched development set of MultiNLI, the model using 2 GCN layers. line of Socher et al. (2013) achieves results comparable to all structured models. This suggest that the dataset may not be well suited for evaluating structure induction methods. Our latent dependency model slighty improves (+0.8) over the CoreNLP baseline. However, we observe that while our baseline is better than the one of Niculae et al. (2018), their latent tree model slightly outperforms ours (+0.1). We hypothesize that graph convolutions may not be optimal for this task. 6.3 Analysis (Ablations) In order to test if the tree constraint is important, we do ablations on MultiNLI with two models: one with a latent projective tree variable (i.e. our full model) and one with a latent head selection model that does not impose any constraints on the structure. The estimation approach and the model are identical, except for the lack of the tree constraint (and hence dynamic programming) in the ablated model. We report results on development sets in Table 3c. We observe that the latent tree models outperform the alternatives. Previous work (e.g., Niculae et al., 2018) included comparison with balanced trees, flat trees and left-to-right (or right-to-left) chains. Flat trees are pointless with the GCN + DA combination: the corresponding pooling operation is already done in DA. Though balanced trees are natural with bottom-up computation of TreeLSTMs, for GCNs they would result in embedding essentially random subsets of words. 
Consequently, we compare only to left-to-right chains of dependencies.12 This approach is substantially less accurate than our methods, especially for out-of-domain (i.e. mismatched) data. (Grammar) We also investigate the structure of the induced grammar. We report the latent structure of three sentences in Figure 3. We observe that sentences are divided into spans, where each span is represented with a series of left dependencies. Surprisingly, the model chooses to use only left-to-right dependencies. The neural network does not include a RNN layer, so this may suggest that the grammar is trying to reproduce an recurrent model while also segmenting the sentence in phrases. (Speed) We use a O(n3)-time parsing algorithm. 12They are the same as right-to-left ones, as our GCNs treat both directions equivalently. 5516 Nevertheless, our model is efficient: one epoch on SNLI takes 470 seconds, only 140 seconds longer than with the O(n2)-time latent-head version of our model (roughly equivalent to classic self-attention). The latter model is computed on GPU (Titan X) while ours uses CPU (Xeon E52620) for the dynamic program and GPU for running the rest of the network. 7 Related work Recently, there has been growing interest in providing an inductive bias in neural network by forcing layers to represent tree structures (Kim et al., 2017; Maillard et al., 2017; Choi et al., 2018; Niculae et al., 2018; Williams et al., 2018a; Liu and Lapata, 2018). Maillard et al. (2017) also operates on a chart but, rather than modeling discrete trees, uses a soft-gating approach to mix representations of constituents in each given cell. While these models showed consistent improvement over comparable baselines, they do not seem to explicitly capture syntactic or semantic structures (Williams et al., 2018a). Nangia and Bowman (2018) introduced the ListOps task where the latent structure is essential to predict correctly the downstream prediction. Surprisingly, the models of Williams et al. (2018a) and Choi et al. (2018) failed. Much recent work in this context relies on latent variables, though we are not aware of any work closely related to ours. Differentiable structured layers in neural networks have been explored for semi-supervised parsing, for example by learning an auxiliary task on unlabelled data (Peng et al., 2018) or using a variational autoencoder (Corro and Titov, 2019). Besides research focused on inducing taskspecific structures, another line of work, grammar induction, focused on unsupervised induction of linguistic structures. These methods typically rely on unlabeled texts and are evaluated by comparing the induced structures to actual syntactic annotation (Klein and Manning, 2005; Shen et al., 2018; Htut et al., 2018). 8 Conclusions We introduced a novel approach to latent tree learning: a relaxed version of stochastic differentiable dynamic programming which allows for efficient sampling of projective dependency trees and enables end-to-end differentiation. We demonstrate effectiveness of our approach on both synthetic and real tasks. The analyses confirm importance of the tree constraint. Future work will investigate constituency structures and new neural architectures for latent structure incorporation. Acknowledgments We thank Maximin Coavoux and Serhii Havrylov for their comments and suggestions. We are grateful to Vlad Niculae for the help with preprocessing the SST data. We also thank the anonymous reviewers for their comments. 
The project was supported by the Dutch National Science Foundation (NWO VIDI 639.022.518) and European Research Council (ERC Starting Grant BroadSem 678254). References Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1947–1957. Association for Computational Linguistics. Yoshua Bengio. 2013. Estimating or propagating gradients through stochastic neurons. arXiv preprint arXiv:1305.2982. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Stephen Boyd and Lieven Vandenberghe. 2004. Convex optimization. Cambridge university press. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the 2018 Association for the Advancement of Artificial Intelligence (AAAI). and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP). Caio Corro and Ivan Titov. 2019. Differentiable perturb-and-parse: Semi-supervised parsing with a structured variational autoencoder. In Proceedings of the International Conference on Learning Representations. Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and TatSeng Chua. 2005. Question answering passage retrieval using dependency relations. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 400–407. ACM. 5517 D Den Hertog, Cornelis Roos, and Tam´as Terlaky. 1994. Inverse barrier methods for linear programming. RAIRO-Operations Research, 28(2):135– 163. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics. Jennifer Foster, ¨Ozlem C¸ etinoglu, Joachim Wagner, Joseph Le Roux, Stephen Hogan, Joakim Nivre, Deirdre Hogan, and Josef Van Genabith. 2011. # hardtoparse: Pos tagging and parsing the twitterverse. In AAAI 2011 workshop on analyzing microtext, pages 20–25. Michael C Fu. 2006. Gradient estimation. Handbooks in operations research and management science, 13:575–616. Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, and Edison Guo. 2016. On differentiating parameterized argmin and argmax problems with application to bi-level optimization. arXiv preprint arXiv:1607.05447. Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4998–5003. Association for Computational Linguistics. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In Proceedings of the 2017 International Conference on Learning Representations. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. 2017. Structured attention networks. In Proceedings of the International Conference on Learning Representations. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the International Conference on Learning Representations. Thomas N Kipf and Max Welling. 2017. 
Semisupervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations. Dan Klein and Christopher D Manning. 2005. The unsupervised learning of natural language structure. Stanford University Stanford, CA. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning. Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics, 6:63–75. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng WANG. 2015. A dependency-based neural network for relation classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 285– 290. Association for Computational Linguistics. Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Information Processing Systems, pages 3086–3094. Jean Maillard, Stephen Clark, and Dani Yogatama. 2017. Jointly learning sentence embeddings and syntax with unsupervised tree-LSTMs. arXiv preprint arXiv:1705.09189. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1507–1516. Association for Computational Linguistics. Arthur Mensch and Mathieu Blondel. 2018. Differentiable dynamic programming for structured prediction and attention. In Proceedings of the 35th International Conference on Machine Learning. Nikita Nangia and Samuel Bowman. 2018. Listops: A diagnostic dataset for latent tree learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 92– 99. Association for Computational Linguistics. Jason Naradowsky, Sebastian Riedel, and David Smith. 2012. Improving NLP through marginalization of hidden syntactic structure. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 810–820, Jeju Island, Korea. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. 5518 Vlad Niculae, Andr´e F. T. Martins, and Claire Cardie. 2018. Towards dynamic computation graphs via sparse latent structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 905–911. Association for Computational Linguistics. 
George Papandreou and Alan L Yuille. 2011. Perturband-MAP random fields: Using discrete optimization to learn and sample from energy models. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 193–200. IEEE. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. Association for Computational Linguistics. Hao Peng, Sam Thomson, and Noah A. Smith. 2018. Backpropagating through structured argmax using a spigot. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1863–1873. Association for Computational Linguistics. Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705–713. Association for Computational Linguistics. Florian A Potra and Stephen J Wright. 2000. Interiorpoint methods. Journal of Computational and Applied Mathematics, 124(1-2):281–302. Anand Rangarajan. 2000. Self-annealing and self-annihilation: unifying deterministic annealing and relaxation labeling. Pattern Recognition, 33(4):635–649. John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. 2015. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528– 3536. Yikang Shen, Zhouhan Lin, Chin wei Huang, and Aaron Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. In International Conference on Learning Representations. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Association for Computational Linguistics. Xinyi Wang, Hieu Pham, Pengcheng Yin, and Graham Neubig. 2018. A tree-based decoder for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4772–4777. Association for Computational Linguistics. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253–267. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. R Williams. 1987. A class of gradient-estimation algorithms for reinforcement learning in neural networks. In Proceedings of the International Conference on Neural Networks, pages II–601. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In Proceedings of the International Conference on Learning Representations. 5519 A Neural Parametrization (Implementation) We implemented our neural networks with the C++ API of the Dynet library (Neubig et al., 2017). 
The continuous relaxation of the parsing algorithm is implemented as a custom computation node. (Training) All networks are trained with Adam initialized with a learning rate of 0.0001 and batches of size 64. If the dev score did not improve in the last 5 iterations, we multiply the learning rate by 0.9 and load the best known model on dev. For the ListOps task, we run a maximum of 100 epochs, with exactly 100 updates per epoch. For NLI and SST tasks, we run a maximum of 200 epochs, with exactly 8500 and 100 updates per epoch, respectively. All MLPs and GCNs have a dropout ratio of 0.2 except for the ListOps task where there is no dropout. We clip the gradient if its norm exceed 5. A.1 ListOps Valency Tagging (Dependency Parser) Embeddings are of size 100. The BiLSTM is composed of two stacks (i.e. we first run a left-to-right and a right-to-left LSTM, then we concatenate their outputs and finally run a left-to-right and a right-to-left LSTM again) with one single hidden layer of size 100. The initial state of the LSTMs are fixed to zero. The MLPs of the dotted attention have 2 layers of size 100 and a ReLU activation function (Tagger) The unique embedding is of size 100. The GCN has a single layer of size 100 and a ReLU activation. Then, the tagger is composed of a MLP with a layer of size 100 and a ReLU activation followed by a linear projection into the output space (i.e. no bias, no non-linearity). A.2 Natural Language Inference All activation functions are ReLU. The interattention part and the classifier are exactly the same than in the model of Parikh et al. (2016). (Embeddings) Word embeddings of size 300 are initialized with Glove and are not updated during training. We initialize 100 unknown word embeddings where each value is sampled from the normal distribution. Unknown words are mapped using a hashing method. (GCN) The embeddings are first passed through a one layer MLP with an output size of 200. The dotted attention is computed by two MLP with two layers of size 200 each. Function f(), g() and h() in the GCN layers are one layer MLPs without activation function. The σ activation function of a GCN is ReLU. We use dense connections for the GCN. A.3 Sentiment Classification (Embeddings) We use Glove embeddings of size 300. We learn the unknown word embeddings. Then, we compute context sensitive embeddings with a single-stack/single-layer BiLSTM with a hidden-layer of size 100. (GCN) The dotted attention is computed by two MLP with one layer of size 300 each. There is no distance bias in this model. Function f(), g() and h() in the GCN layers are one layer MLPs without activation function. The σ activation function of a GCN is ReLU. We do not use dense connections in this model. (Output) We use a max-pooling operation on the GCN outputs followed by an single-layer MLP of size 300. B Illustration of the Continuous Relaxation Too give an intuition of the continuous relaxation, we plot the arg max function and the penalized arg max in Figure 4. We plot the first output for input (x1, x2, 0). C ListOps Training We plot tagging accuracy and attachment score with respect to the training epoch in Figure 5. On the one hand, we observe that the non-stochastic versions converges way faster in both metrics: we suspect that it develops an alternative protocol to pass information about valencies from LSTM to the GCN. On the other hand, MC sampling may have a better exploration of the search space but it is slower to converge. 
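The training regime described above (Adam at a learning rate of 0.0001, patience-based decay by 0.9 with reloading of the best development-set model, gradient clipping at norm 5) amounts to a simple control loop. A schematic sketch, with train_one_epoch, evaluate, save_best and load_best left abstract (our names) and the exact patience bookkeeping being our assumption:

def train(model, train_one_epoch, evaluate, save_best, load_best,
          max_epochs, lr=1e-4, patience=5, decay=0.9):
    """Patience-based learning-rate schedule described in Appendix A (sketch)."""
    best_score, since_best = float('-inf'), 0
    for epoch in range(max_epochs):
        train_one_epoch(model, lr)          # gradients clipped at norm 5 inside
        score = evaluate(model)             # development-set score
        if score > best_score:
            best_score, since_best = score, 0
            save_best(model)
        else:
            since_best += 1
        if since_best >= patience:          # no improvement over the last 5 evaluations
            lr *= decay                     # multiply the learning rate by 0.9
            load_best(model)                # reload the best known model
            since_best = 0
    return load_best(model)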
We stress that training with MC estimation results in the latent tree corresponding (almost) perfectly to the gold grammar. D Fast differentiable dynamic program implementation In order to speed up training, we build the differentiable dynamic program (DDP) as a fast custom computation node in Dynet and use it in a static graph. Instead of relying on masking, we add an input to the DDP node that contains the sentence size: therefore, even if the size of the graph is fixed, the cubic-time algorithm is run on the true input length only. Moreover, instead of allocating memory with the standard library functionality, we use the fast scratch memory allocator of Dynet. Figure 4: (a) Single output of an arg max function. The derivative is null almost everywhere, i.e. there is no descent direction. (b) Single output of the differentiable relaxation. The derivatives are non-null. Figure 5: Tagging accuracy and attachment score of the latent tree during training. (red solid line) Non-stochastic training with forward relaxation. (blue dashed line) MC training with forward relaxation. (black dotted) Non-stochastic training with backward relaxation. (green dash-dotted) MC training with backward relaxation.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5522–5526 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5522 Abstract Although the proper use of idioms can enhance the elegance of writing, the active use of various expressions is a challenge because remembering idioms is difficult. In this study, we address the problem of idiom recommendation by leveraging a neural machine translation framework, in which we suppose that idioms are written in one pseudo target language. Two types of reallife datasets are collected to support this study. Experimental results show that the proposed approach achieves promising performance compared with other baseline methods. 1 Introduction Nearly every language has some ancient idioms, aphorisms, and sayings from history (Muzny et al., 2013; Moussallem et al., 2018). Chinese idioms, also known as “ready phrases” and usually consist of only four characters, can reveal complex meaning and enhance the conciseness and elegance of writing if properly used. For example, in the text segment “一夜春雷雨,朋友圈的微 商如雨后春笋般冒了出来。” (During the thunderstorm overnight, microbusinessmen in the circle of friends sprang up like bamboo shoots after rain.), the author elegantly describes the rapid emergence of things in large numbers by properly using the popular idiom “雨后春笋 ” (When it rains in spring, many bamboo shoots grow simultaneously). Therefore, automatically recommending idioms that are pertinent to the input context is an appealing task because remembering idioms is difficult for most people. To this end, one typical and straightforward approach is to regard idiom recommendation as a standard classification problem and assign a piece of context to one idiom label by training corresponding classifiers. Whereas by doing so, the meaningful text information in the idiom itself tends to be ignored. Intuitively, combining textual information in the context and idiom in the training stage may be helpful. However, texts in the idiom are usually written in ancient classical Chinese for conciseness; thus, they are highly different from those in the context and difficult to directly utilize for classifying unseen contexts. In most cases, such as in the aforementioned example, few common words or characters are shared between the idiom and the surrounding context. In this study, we provide a new perspective for idiom recommendation by formulating it as a translation problem, in which the idioms are assumed to be written with a pseudo target language because they are usually written in ancient Chinese and have special and limited vocabularies. We propose a machine translationbased approach that operates in three stages. First, an attention-based neural network is used to encode the context sequence (source language). Second, the coded context attention vector is decoded into one intermediate sequence (target language). Third, the final recommended idioms are selected through sequence mapping. The remainder of this paper is organized as follows. The related work is surveyed in Section 2. Sections 3 and 4 present the proposed approach and experimental results, respectively. Finally, conclusions and future directions are drawn in Section 5. 2 Related works Our task can be viewed as a content-based recommendation, and the closely related work includes scientific article citation (He et al., 2010), news (Lu et al., 2014), and quotation (Tan et al., 2015) recommendations. He et al. 
(2010) used a context-aware approach and measured the relevance between context and candidate items for scientific citation recommendation. Tan et al. (2015) proposed a supervised ranking framework to recommend quotes for writing. The difference between idiom recommendation and the above ones is that idioms were usually formed in ancient times and commonly written in classical Chinese, thereby exhibiting few common surface features with context.

Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have recently received great success in various tasks, such as machine translation (Bahdanau et al., 2015), image caption generation (Xu et al., 2015), and text summarization (Chopra et al., 2016). Cho et al. (2014) showed that the performance of a basic encoder–decoder rapidly deteriorates as the length of the input context increases. Correspondingly, the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) has been proposed to address this type of problem.

3 Methodology

We formulate idiom recommendation as a context-to-idiom machine translation problem by using the encoder–decoder framework. Figure 1 shows the architecture of our approach. This scheme works by taking an idiom-bearing sentence and yielding the idiom as output. The framework consists of five layers from the embedding (bottom) to the prediction (top) layer. The encoder and decoder separately receive the words in the source context sentence and characters in the target idiom as inputs. The implementation of each layer is presented as follows.

Figure 1: Graphical illustration of the proposed model.

3.1 Embedding Layer

The model determines the source and target embeddings to retrieve the corresponding word representations. A vocabulary is initially selected for context and idiom separately. For context, only the frequent words (fre. ≥ 2) are treated as unique to reduce the effect of noise that is usually caused by low-frequency words. For target idioms, all the unique Chinese characters shown in the idioms are used to create the vocabulary because there is a relatively limited character set for the idioms.

3.2 Context Encoding Layer

The word embeddings retrieved from the embedding layer are fed into the encoder for the source language C (context) and the decoder for the target language I (idiom). We use a bidirectional long short-term memory (BiLSTM) network (Graves et al., 2013) to capture the left and right contexts of each word in the input:

$[\vec{h}^{C}_{i}, \vec{c}^{C}_{i}] = \overrightarrow{\mathrm{LSTM}}_{C}(t_i, \vec{h}^{C}_{i-1}, \vec{c}^{C}_{i-1})$, (1)

$[\overleftarrow{h}^{C}_{i}, \overleftarrow{c}^{C}_{i}] = \overleftarrow{\mathrm{LSTM}}_{C}(t_i, \overleftarrow{h}^{C}_{i+1}, \overleftarrow{c}^{C}_{i+1})$, (2)

where $h \in \mathbb{R}^{d \times 1}$ and $c \in \mathbb{R}^{d \times 1}$ are the hidden and cell states of the LSTM, respectively; → (←) indicates the forward (backward) pass; and $t_i$ is the input context word vector at time step $i$. Then, the output for each input is the concatenation of the two vectors from both directions. The bottom half of the decoding layer for the idiom takes the same measures, except that a Chinese character is used at each time step.
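To make the context encoding layer concrete, the following is a minimal, illustrative PyTorch sketch of Eqs. (1)–(2); the module and parameter names and the hyperparameter values are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Illustrative BiLSTM context encoder: embeds context words and
    concatenates forward/backward hidden states at each position."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, context_ids):
        # context_ids: (batch, src_len) word indices of the context sentence
        emb = self.embed(context_ids)            # (batch, src_len, emb_dim)
        outputs, (h_n, c_n) = self.bilstm(emb)   # outputs: (batch, src_len, 2*hidden_dim)
        # outputs[:, i] is the concatenation of the forward and backward states
        # at position i (Eqs. 1-2); the last states can initialize the decoder.
        return outputs, (h_n, c_n)
```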
The last source state from the encoder is passed to the decoder when the decoding process is initiated.

3.3 Attention Layer

Various words in the long context are generally of different importance. For example, the context words “冒” (sprang up) and “出来” (show up) in the aforementioned example are intuitively strong indicators for recommending the idiom “雨后春笋” (When it rains in spring, many bamboo shoots grow simultaneously). Thus, increased attention should be given to such words. Consequently, a feasible solution is to introduce an attention mechanism, so that various attention weights are given to different input words. We use a global attentional model (Luong et al., 2015) to obtain the attention vector. This model consists of the following stages:

1. The current target hidden state is compared with all the source states to calculate the attention weights $\alpha_{ts}$, as follows:

$\alpha_{ts} = \frac{\exp(\mathrm{score}(\tilde{h}_t, \bar{h}_s))}{\sum_{s'=1}^{S} \exp(\mathrm{score}(\tilde{h}_t, \bar{h}_{s'}))}$, (3)

where the function $\mathrm{score}$ is used to produce attention weights. In the training stage, we extend the target hidden state as $\tilde{h}_t = V[h_t; h_m]$, where $V \in \mathbb{R}^{d \times 2d}$, $h_t$ is the target hidden state, and $h_m$ is the average of the embeddings of all the words in the modern plain-text meaning of the idiom. Then, we compare the extended target hidden state $\tilde{h}_t$ with each of the source hidden states $\bar{h}_s$ to compute $\mathrm{score}$ (i.e., $\mathrm{score}(\tilde{h}_t, \bar{h}_s) = \tilde{h}_t^{\top} W \bar{h}_s$, where $W \in \mathbb{R}^{d \times 2d}$).

2. Then, the context vector $c_t$ is calculated as the weighted average of the source states:

$c_t = \sum_{s} \alpha_{ts} \bar{h}_s$. (4)

3. Finally, the attention vector $a_t$ is derived by combining the context vector with the current target hidden state $h_t$:

$a_t = \tanh(W_c [c_t; h_t])$. (5)

3.4 Decoder Layer

Given the attention vector $a_t$ and all the previously predicted target idiom characters $\{I_1, \cdots, I_{t-1}\}$, the decoder layer defines a probability over the translation by decomposing the joint probability into the ordered conditionals to predict the next character $I_t$:

$p(I) = \prod_{t=1}^{T} p(I_t \mid \{I_1, \ldots, I_{t-1}\}, a_t)$. (6)

We use a BiLSTM to model each conditional probability (Bahdanau et al., 2015). In the decoder layer, we create a candidate character table for different locations in the idiom to decrease the decoding space. For example, when generating the first character in the preceding example, “雨” (rain) is eligible because it is in the table of Position 1, which consists of all the unique characters shown in the first position of all idioms. Thus, many other ineligible characters that are not in this table will be naturally ignored.

3.5 Prediction Layer

Many standard idioms are present in our work compared with traditional machine translation. Therefore, the translated character sequences in this layer are further mapped into the standard idioms in the idiom set (i.e., I* to M* in Figure 1). To achieve this goal, we use edit distance (Navarro, 2001) to find the most similar idiom from the standard idiom set as the prediction result.

4 Experiments

4.1 Experimental settings

Datasets. We carry out experiments on two datasets, which are referred to as BN and WB, respectively. The datasets are collected from Weibo and Baidu News as two data sources to get the short context by inputting the idiom as the query. Table 1 provides the details of the datasets.1
Table 1. Details of the datasets.

Dataset | # of total pairs | # of snippets per idiom | # of idioms
WB      | 167,844          | ≈176                    | 956
BN      | 163,817          | ≈171                    | 956

Baselines and Evaluation Metrics. We conduct experiments using the following baselines: (1) Elastic Net, (2) KNN (K-Nearest Neighbor), (3) Multinomial Naive Bayes, (4) LinearSVC. We use the scikit-learn (Version 0.19) implementation2 of the above models (using the default settings) for the experiments. We also experiment with several neural network based classification approaches, namely, (5) TextCNN (convolutional neural network) (Kim et al., 2014), (6) Bi-LSTM-RNN (Graves et al., 2013), and (7) HierAtteNet (hierarchical attention network) (Yang et al., 2017). All the review texts are segmented into Chinese words using Jieba3. We mainly use recall as the primary recommendation metric in accordance with the study of He et al. (2010). We remove the original idioms from the testing documents. The recall is defined as the percentage of original idioms that appear in the recommended ones. Moreover, we also use smoothed BLEU4, which is widely used in MT performance evaluation, to examine the intermediate results of our approach.

Training Details. We use a minibatch stochastic gradient descent (SGD) algorithm and Adadelta (Zeiler, 2012) to train each model. A total of 12 training epochs is conducted with a simple learning rate schedule: the learning rate begins at 1.0 for the first six epochs and is then divided every epoch. Each SGD update direction is computed using a minibatch of 128 snippets. We set the dropout to 0.2, the target max length to 4, and the source max length to 50. The pretrained Chinese word and Chinese idiom character embeddings are trained with the word2vec (Mikolov et al., 2013) toolkit, and unseen words are assigned unique random vectors. Both languages have a set of embedding weights because they actually come from the same mother language, although considerable differences exist in their vocabulary sets.

1 The datasets are available at http://u.163.com/syyAdG6P, pass code: YdgIfzHn
2 http://scikit-learn.org/
3 https://pypi.python.org/pypi/jieba/
4 http://www.nltk.org/

4.2 Results and Analysis

In the first experiment, we compare the performance of our approach with the baseline methods. We separate our datasets into 8:1:1 as the training, validation, and test sets. Table 2 summarizes the performance comparison on the WB and BN datasets.

Table 2. Comparison with baseline methods.

Method                   | WB    | BN
Elastic Net (loss=hinge) | 0.239 | 0.378
KNN (n_neighbors=10)     | 0.182 | 0.225
Multinomial Naive Bayes  | 0.164 | 0.314
LinearSVC                | 0.221 | 0.339
Bi-LSTM-RNN              | 0.294 | 0.395
TextCNN                  | 0.325 | 0.386
HATT                     | 0.362 | 0.412
Proposed method          | 0.412 | 0.448

Evidently, the proposed method notably outperforms all the other baseline methods on both datasets due to the following reasons. First, user-generated content is inherently noisy, and the classification performance may be adversely affected by the large number of classes, since hundreds of idioms are present. Conversely, the proposed method focuses on the salient words in the context, thereby alleviating the adverse effect of noisy words to some extent. Second, the proposed encoder–decoder framework provides substantial advantages in this task: in comparison with many classification approaches that regard the entire idiom as a classification label, our approach considers the relationship between the context and the characters inside the idiom by using an attention-based neural machine translation architecture, because some characters in the idiom have a close relationship with the context.
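As an aside before the remaining analysis, the prediction-layer mapping of Section 3.5 (the "mapping" referred to in the discussion that follows) can be sketched in a few lines. The helper names and the one-row Levenshtein routine below are our own illustration under stated assumptions, not the authors' implementation.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two character sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def map_to_standard_idiom(decoded, idiom_set):
    """Map a decoded character sequence to the closest standard idiom."""
    return min(idiom_set, key=lambda idiom: edit_distance(decoded, idiom))
```

In practice, restricting the candidates to idioms whose characters are position-eligible (Section 3.4) would presumably shrink this search further.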
Notably, neither the attention-based NMT nor other approaches effectively perform in recommendation across the two datasets. The recall values of BN and WB are 44.8% and 41.2%, respectively, thereby indicating that nearly half of the context cannot obtain the original idiom recommended. One possible reason is that the quality of the corpora considerably influences the result. Sometimes, selecting the suitable idioms according to the context may be relatively difficult for experienced people, not to mention for the models. In the second experiment, we intend to examine the performance with different number of iterations. Subpanels (a) and (b) of Figure 2 depict the BLEU and recall of BN and WB datasets, respectively, when the iteration number varies from 100 to 3000. The result shows that the recommendation performance can greatly improve by increasing the number of iterations, thereby obtaining excellent results for iterations of approximately 1000 to 1500. However, after considerable iterations (greater than 2000), decreasing trends are observed for the model performance. This result is due to overfitting of the training data with numerous iterations. Moreover, when mapping is added, an increase is observed in the recall, this indicates that the transformation in prediction layer is necessary to recommend the idiom from the standard set. (a) BN (b) WB Figure 2: Metrics as a function of the number of iterations of our model on both datasets. 5 Conclusion In this study, we address the appealing problem of idiom recommendation on the basis of the surrounding context and formulate it as a translation task. The evaluation results over two datasets demonstrate the effectiveness of the proposed approach. In the future, several ways of extending our model (e.g., exploring more attention mechanisms, such as location attention) are suggested to encode the context, because some particular locations in the context may be more important for different idioms. Moreover, substantial research will be conducted to propose other approaches for target language generation, which is one of the intermediary steps in our approach for the final idiom recommendation. 6 Acknowledgments This study was supported by the National Natural Science Foundation of China (61672192 and 61572151). 5526 References Bahdanau, D., Cho, K., & Bengio, Y. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015. Cho, K., van Merrienboer, B., Bahdanau, D., and Bengio, Y. 2014. On the properties of neural machine translation: Encoder–Decoder approaches. Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111 Graves A, Mohamed A, Hinton G. 2013. Speech Recognition with Deep Recurrent Neural Networks. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013, pages 6645–6649. He, Q., Pei, J., Kifer, D., Mitra, P., and Giles, L. 2010. Context aware citation recommendation. In Proceedings of the 19th international conference on World wide web. pages 421–430. Kim, Y. 2014. Convolutional Neural Networks for Sentence Classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014, pages 1746–1751. Lu, M., Qin, Z., Cao, Y., Liu, Z., & Wang, M. 2014. Scalable news recommendation using multidimensional similarity and jaccard-kmeans clustering. Journal of Systems & Software, 95(9), pages 242-251. Luong M T , Pham H , Manning C D. 
Effective Approaches to Attention-based Neural Machine Translation. EMNLP 2015. Mikolov T , Chen K , Corrado G , et al. 2013. Efficient Estimation of Word Representations in Vector Space. Proceedings of Workshop at ICLR. arXiv:1301.3781v1. Moussallem et al., 2018. LIDIOMS: A Multilingual Linked Idioms Data Set. arXiv:1802.08148 Muzny G. and Zettlemoyer L. 2013. Automatic Idiom Identification in Wiktionary. EMNLP 2013. pages 1417–1421 Navarro, Gonzalo. 2001. A guided tour to approximate string matching. ACM Computing Surveys. 33 (1): 31–88. Sumit Chopra, Michael Auli, Alexander M. Rush. 2016. Abstractive Sentence Summarization with Attentive Recurrent Neural Networks, NAACL 2016. pages 93-98. Sutskever I, Vinyals O, Le Q V. 2014. Sequence to Sequence Learning with Neural Networks. NIPS 2014. pages 3104-3112. Tan, J., Wan, X., Xiao, J. 2015. Learning to Recommend Quotes for Writing. AAAI 2015. pages 2453-2459 Xu K., Ba J., Kiros R. . 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. ICML2015. Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., & Hovy, E.. 2017. Hierarchical Attention Networks for Document Classification. NAACL 2017, pages 1480-1489. Zeiler, M. D. 2012. Adadelta: an adaptive learning rate method. arXiv:1212.5701.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5527–5532 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Better Exploiting Latent Variables in Text Modeling

Canasai Kruengkrai
Yahoo Japan Corporation
[email protected]

Abstract

We show that sampling latent variables multiple times at a gradient step helps in improving a variational autoencoder and propose a simple and effective method to better exploit these latent variables through hidden state averaging. Consistent gains in performance on two different datasets, Penn Treebank and Yahoo, indicate the generalizability of our method.1

1 Introduction

Introducing latent variables to neural language models would help in generating plausible sentences that reflect sentential semantics (Bowman et al., 2016). The success of learning latent variables is also beneficial to various natural language processing (NLP) tasks such as sentence compression (Miao and Blunsom, 2016) and text style transfer (Shen et al., 2017).

One of the widely-used latent variable models is the variational autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014). When applying the VAE to text data, recurrent neural networks are typically utilized for both the encoder and the decoder. Training the VAE with a high-capacity decoder such as a long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) can be challenging. The LSTM is powerful enough to model the underlying data distribution without the use of latent variables.

In this paper, we take a closer look at one of the components in an LSTM-VAE model, namely the latent variable sampling scheme. Fig. 1 illustrates our baseline LSTM-VAE model built upon Bowman et al. (2016)'s model. At each gradient step (i.e., a minibatch run), most previous work pairs an input sentence with a single latent variable denoted by z. This would be sufficient in some tasks but not necessarily effective in text modeling. At the beginning of training, the latent variable z contains a small amount of information about the input sentence. Many latent units of z are pulled towards the prior early to optimize an objective function before they capture useful information (Hoffman et al., 2013; Sønderby et al., 2016). Without a cost annealing strategy or a constraint on the decoder (Bowman et al., 2016; Chen et al., 2017; Yang et al., 2017), z would be entirely ignored for the remaining training steps.

Figure 1: Baseline LSTM-VAE model.

1 The code for reproducibility is available at https://research-lab.yahoo.co.jp/en/software.

In our work, we aim at developing a simple variant of the LSTM-VAE model to address this common training issue. We observe that pairing the input sentence with multiple latent variables improves latent variable usage. In addition, we present a method that leverages multiple latent variables to further boost the performance of the baseline LSTM-VAE model. Our contributions are as follows:
- We suggest sampling the latent variables multiple times at each gradient step.
- We propose a simple method to better exploit these latent variables through hidden state averaging.
- We evaluate the proposed method on two different datasets, Penn Treebank and Yahoo, and compare to the best results published in the literature.
Our empirical results show that our method can effectively make use of the latent variables, leading to state-of-the-art performance.

2 Related work

Bowman et al. (2016) first proposed an LSTM-VAE model for text. They observed the posterior-collapse problem in which the approximate posterior collapses to the prior, and the model ignores the latent variable. They suggested two techniques to alleviate this issue: cost annealing (called warm-up in (Sønderby et al., 2016)) and word dropout. Weakening the decoder with word dropout forces the latent variable to encode more information, but their LSTM-VAE model still underperforms against the standard LSTM language model. Yang et al. (2017) proposed to replace the LSTM decoder with a dilated convolutional neural network (CNN) (van den Oord et al., 2016) to control the contextual capacity. However, their positive results also came from initializing the encoder with a pre-trained LSTM language model. Guu et al. (2018) first proposed using the von Mises–Fisher (vMF) distribution to model the VAE instead of using the Gaussian distribution. However, the vMF distribution presupposes that all data are directional unit vectors. Other applications of the vMF distribution can be found in (Davidson et al., 2018; Xu and Durrett, 2018). Kim et al. (2018) presented a semi-amortized (SA) approach to training the VAE, while He et al. (2019) proposed an aggressive inference network training. However, their training algorithms are computationally expensive since they require backpropagating through the decoder or the encoder multiple times. Our method is simpler and easy to implement. In practice, we just place a loop before reparameterization and do averaging.

3 Background

Let $x = [w_1, w_2, \ldots, w_T]$ be a sentence representation, where $w_t$ is the $t$-th word. Assume that $x$ is generated from a continuous latent variable $z$ using a random process $x \sim p_\theta(x|z)$ parameterized by $\theta$. By applying the standard language model (Bengio et al., 2003), we get:

$p_\theta(x|z) = \prod_{t=1}^{T} p_\theta(w_t \mid w_{1:t-1}, z)$. (1)

Given a dataset $X = \{x^{(1)}, \ldots, x^{(N)}\}$, we typically fit the model by maximizing the average log-marginal likelihood $\frac{1}{N}\sum_{i=1}^{N} \log p_\theta(x^{(i)})$. We can express an individual log-marginal likelihood by $\log p_\theta(x) = \log \int_z p_\theta(x|z)\,p(z)\,dz$, where $p(z)$ is the prior on $z$. Unfortunately, the integral over $z$ is intractable (Hoffman et al., 2013). Alternatively, we would sample $z$ directly from the posterior distribution $p_\theta(z|x)$. However, $p_\theta(z|x)$ is also intractable since $p_\theta(z|x) = p_\theta(x|z)p(z)/p_\theta(x)$.

Variational inference approximates the posterior distribution $p_\theta(z|x)$ with a variational family of distributions $q_\phi(z|x)$ parameterized by $\phi$. We wish that $q_\phi(z|x)$ is close to $p_\theta(z|x)$. We measure this closeness by the Kullback–Leibler (KL) divergence: $\mathrm{KL}(q_\phi(z|x)\,\|\,p_\theta(z|x))$. Instead of maximizing the true log-marginal likelihood, we maximize its lower bound:

$\log p_\theta(x) \geq \log p_\theta(x) - \mathrm{KL}(q_\phi(z|x)\,\|\,p_\theta(z|x)) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \mathrm{KL}(q_\phi(z|x)\,\|\,p(z))$. (2)

The above equation is typically referred to as the evidence lower bound (ELBO) (Hoffman and Johnson, 2016). The ELBO consists of two terms: the expected reconstruction term and the KL-divergence term. We can solve the KL-divergence term analytically given that both the prior $p(z)$ and the variational posterior $q_\phi(z|x)$ are Gaussian (see Kingma and Welling (2014)'s Appendix B). We then need to rewrite the expected reconstruction term into some closed-form expression (detailed in §4) so that we can maximize it by applying stochastic optimization methods.
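As a concrete illustration of the two ELBO terms and the reparameterization trick described above, here is a minimal PyTorch sketch. It assumes a diagonal-Gaussian posterior and uses our own function names, so it should be read as a sketch rather than the author's implementation (which is linked in the paper's footnote).

```python
import torch

def elbo_terms(recon_logits, targets, mu, logvar):
    """Terms of the negative ELBO for one minibatch, assuming a Gaussian
    posterior q(z|x) = N(mu, diag(exp(logvar))) and a standard normal prior.

    recon_logits: (batch, seq_len, vocab) decoder outputs
    targets:      (batch, seq_len) gold word indices
    """
    # Expected reconstruction term, approximated with the sampled z
    # that produced recon_logits (cross-entropy over words).
    recon = torch.nn.functional.cross_entropy(
        recon_logits.transpose(1, 2), targets, reduction="sum")

    # Analytic KL(q(z|x) || N(0, I)) for diagonal Gaussians
    # (Kingma and Welling, 2014, Appendix B).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon, kl

def reparameterize(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps
```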
Optimizing the ELBO forms the VAE architecture in which $q_\phi(z|x)$ encodes $x$ into a latent variable $z$, and $p_\theta(x|z)$ decodes $z$ to reconstruct $x$. The gradient of the ELBO w.r.t. $\phi$ can have low variance by applying the reparameterization trick (Kingma and Welling, 2014) that estimates $z \sim q_\phi(z|x)$ using $z = \mu + \sigma \odot \epsilon$, where the mean $\mu$ and variance $\sigma^2$ are outputs of some neural networks, and $\epsilon \sim \mathcal{N}(0, 1)$.

4 Proposed method

Having covered the technical background, we now describe our two extensions to improve the baseline LSTM-VAE model in Fig. 1. The baseline model approximates the expected reconstruction term by sampling one latent variable $z \sim q_\phi(z|x)$ at each gradient step (Bowman et al., 2016). Thus, $\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] \approx \log p_\theta(x|z)$.

Our first extension is to improve the sampling by using a Monte Carlo estimate of the expected reconstruction term (Kingma and Welling, 2014):

$\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] \approx \frac{1}{L}\sum_{l=1}^{L} \log p_\theta(x|z^{(l)})$, (3)

where $z^{(l)} = \mu + \sigma \odot \epsilon^{(l)}$ and $\epsilon^{(l)} \sim \mathcal{N}(0, 1)$. Sampling latent variables multiple times at each gradient step should result in a better approximation of the expected reconstruction term. Fig. 2 shows an example of sampling two latent variables. Note that we use the same $\mu$ and $\sigma$ for both latent variables.

Figure 2: Example of sampling two latent variables using different random noise vectors drawn from the standard Gaussian distribution.

By using the language model from Eq. (1), we can decompose the reconstruction term as:

$\log p_\theta(x|z^{(l)}) = \sum_{t=1}^{T} \log p_\theta(w_t \mid w_{1:t-1}, z^{(l)})$. (4)

Let $V$ be a fixed-size vocabulary of words in a dataset. Given the entire history of previous words $w_{1:t} = [w_1, \ldots, w_t]$ and the latent variable $z^{(l)}$, we compute the distribution over the possible corresponding values of $w_{t+1}$ by applying a linear transformation to the decoder hidden state followed by a softmax:

$p_\theta(w_{t+1} \mid w_{1:t}, z^{(l)}) = \mathrm{softmax}(h_t^{(l)} M_1)$, $\quad h_t^{(l)} = \mathrm{dec}(h_{t-1}^{(l)}, w_t)$, $\quad h_0^{(l)} = M_2 z^{(l)}$, (5)

where $M_1 \in \mathbb{R}^{m \times |V|}$ and $M_2 \in \mathbb{R}^{m \times n}$ are the trainable weight matrices, $h_t^{(l)} \in \mathbb{R}^m$ is the decoder hidden state, $z^{(l)} \in \mathbb{R}^n$ is the latent variable at each sampling step $l$, and $w_t \in \mathbb{R}^d$ is the embedding vector of the word $w_t$. We compute $\mu$ and $\sigma^2$ used in the reparameterization trick by:

$\mu = M_3 s_T$, $\quad \log \sigma^2 = M_4 s_T$, $\quad s_t = \mathrm{enc}(s_{t-1}, w_t)$, $\; t = 1, \ldots, T$, $\quad s_0 = 0$, (6)

where $M_3, M_4 \in \mathbb{R}^{n \times m}$ are the trainable weight matrices and $s_T \in \mathbb{R}^m$ is the last encoder hidden state.

Our second extension is to exploit multiple latent variables to directly improve the expressiveness of the decoder. Instead of computing the separate reconstruction terms and taking the average of them as in Eq. (3), we combine the decoder hidden states at each time step $t$:

$\tilde{h}_t = \frac{1}{L}\sum_{l=1}^{L} h_t^{(l)}$, (7)

where each hidden state is initialized with a different latent variable $z^{(l)}$. Fig. 3 shows an example of averaging two hidden states at each decoding step.

Figure 3: Example of averaging two hidden states at each decoding step.

Thus our distribution of $w_{t+1}$ becomes:

$p_\theta(w_{t+1} \mid w_{1:t}, z) = \mathrm{softmax}(\tilde{h}_t M_1)$. (8)

Here we drop the superscript $(l)$ since all hidden states $h_t^{(l)}$ are averaged into $\tilde{h}_t$.
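The following is a minimal sketch of the two extensions just described (Eqs. (3), (7)–(8)): L latent variables are drawn with the same µ and σ, each initializes its own decoder state, and the hidden states are averaged before the softmax. The module names are assumptions, and details such as concatenating z with the word embedding and dropout are omitted; the author's released code is linked in the paper's footnote.

```python
import torch
import torch.nn as nn

class AveragedLatentDecoder(nn.Module):
    """Sketch of hidden-state averaging over L latent samples (Eqs. 7-8).

    Assumes `decoder_cell` is an LSTMCell-like module, `M1` projects hidden
    states to vocabulary logits, and `M2` maps z to the initial hidden state.
    """

    def __init__(self, decoder_cell, M1, M2, num_samples=5):
        super().__init__()
        self.cell, self.M1, self.M2 = decoder_cell, M1, M2
        self.L = num_samples

    def forward(self, word_embs, mu, logvar):
        # word_embs: (batch, T, d) embeddings of w_1..w_T (teacher forcing)
        batch, T, _ = word_embs.shape
        # Draw L samples with the same mu and sigma (Eq. 3).
        zs = [mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
              for _ in range(self.L)]
        states = []
        for z in zs:
            h0 = self.M2(z)                       # h_0^(l) = M2 z^(l)
            states.append((h0, torch.zeros_like(h0)))
        logits = []
        for t in range(T):
            hs = []
            for l in range(self.L):
                h, c = self.cell(word_embs[:, t], states[l])
                states[l] = (h, c)
                hs.append(h)
            h_avg = torch.stack(hs).mean(dim=0)   # Eq. (7)
            logits.append(self.M1(h_avg))         # Eq. (8)
        return torch.stack(logits, dim=1)         # (batch, T, |V|)
```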
5 Experiments

5.1 Datasets and training details

We experiment on two datasets: Penn Treebank (PTB) (Marcus et al., 1993) and Yahoo (Zhang et al., 2015). Training/validation/test sets are identical to (Bowman et al., 2016; Xu and Durrett, 2018) for PTB and (Yang et al., 2017; Kim et al., 2018) for Yahoo.

We use single-layer unidirectional LSTMs as an encoder and a decoder. Configurations of our baseline model (LSTM-VAE, Fig. 1) are identical to (Xu and Durrett, 2018) for PTB and (Yang et al., 2017; Kim et al., 2018) for Yahoo. When the LSTM encoder is not applied, our model falls back to a vanilla language model (LSTM-LM). Table 1 summarizes data statistics and our model configurations. We use the last hidden state (not the cell state) of the LSTM encoder and feed it through linear transformations to get the mean µ and the variance σ².

           | PTB    | Yahoo
Training   | 42068  | 100000
Validation | 3370   | 10000
Test       | 3761   | 10000
|V|        | 10000  | 20000
d          | 100    | 512
m          | 400    | 1024
n          | 32     | 32

Table 1: Data statistics and model configurations. |V| = vocabulary size; d = dimensionality of word embeddings; m = number of LSTM hidden units; n = dimensionality of latent variables.

We sample z using the reparameterization trick and feed it through a linear transformation to get the initial hidden state of the LSTM decoder while setting the initial cell state to zero. We concatenate z with the word embedding at each decoding step. We use dropout (Hinton et al., 2012) with probability 0.5 on the input-to-hidden layers and the hidden-to-softmax layers. We initialize all model parameters and word embeddings by sampling from U(−0.1, 0.1). We train all models using stochastic gradient descent (SGD) with a batch size of 32, a learning rate of 1.0, and gradient clipping at 5. The learning rate is halved if the validation perplexity does not improve. We train for 30 epochs or until the validation perplexity has not improved 3 times. All models are trained on NVIDIA Tesla P40 GPUs.

Following previous work (Bowman et al., 2016; Sønderby et al., 2016), we apply KL cost annealing to all LSTM-VAE models. The multiplier on the KL term is increased linearly from 0 to 1 during the first 10 epochs of training. We also try word dropout (Bowman et al., 2016) during development but find that it is not effective when combined with standard dropout. Our finding conforms to (Kim et al., 2018). So we do not apply word dropout to our models.

5.2 Main results

We report the upper bounds (i.e., the negative ELBO in Eq. (2)) on NLL/PPL. We vary the number of latent variables L in the variational models to assess their impact on performance. LSTM-VAE-AVG indicates the averaging of hidden states at each decoding step in Eq. (8). We also report the results of the inputless setting (Bowman et al., 2016), which corresponds to dropping all ground truth words during decoding.

Table 2 shows the results of various models. The LSTM-VAE-AVG models with multiple latent variables provide the best improvements in terms of NLL/PPL. The LSTM-VAE models trained with more latent variables offer slight improvements over the baseline version (i.e., using one latent variable) for the standard setting. The baseline LSTM-VAE models have low KL values and underperform against LSTM-LM for the standard setting. Incorporating multiple latent variables consistently helps in increasing the KL values. Note that a high KL term does not necessarily imply a better upper bound. Generally, we do not expect the KL term to approach zero. When KL(qφ(z|x)||p(z)) = 0, it indicates that z and x are independent (i.e., qφ(z|x) = qφ(z) = p(z)). In other words, z learns nothing from x. The LSTM-VAE-AVG models have relatively high KL values (except the inputless setting on Yahoo), while still maintaining better upper bounds on NLL/PPL.
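As a brief aside on the training setup in §5.1, the linear KL cost annealing schedule could be implemented as in the sketch below; the function name and the use of a per-step (rather than per-epoch) multiplier are our own assumptions, not the author's code.

```python
def kl_weight(step, steps_per_epoch, warmup_epochs=10):
    """Linear KL cost annealing: the multiplier on the KL term grows
    from 0 to 1 over the first `warmup_epochs` epochs, then stays at 1."""
    return min(1.0, step / float(warmup_epochs * steps_per_epoch))

# loss = recon + kl_weight(global_step, steps_per_epoch) * kl
```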
These results suggest that our models with expressive decoders can effectively make use of the latent variables. 5.3 Discussion On PTB, LSTM-VAE-AVG (L = 10) achieves the best results compared to previous work (Bowman et al., 2016; Xu and Durrett, 2018). On Yahoo, LSTM-VAE-AVG (L = 5) slightly outperforms Kim et al. (2018)’s SA-VAE. Our model can provide similar improvements while being simpler. We also observe that our vanilla LSTMLM model and that of Kim et al. (2018) have better results than Yang et al. (2017)’s models. One plausible explanation is that Yang et al. (2017) trained their models with Adam (Kingma and Ba, 2015), while we used SGD. For text modeling, researchers have shown that SGD performs better than other adaptive optimization methods such as Adam (Wilson et al., 2017; Keskar and Socher, 2017). The ELBO has been commonly used to evaluate the variational models (Bowman et al., 2016; Yang et al., 2017; Xu and Durrett, 2018; Kim et al., 2018). There also exists a line of work that uses importance sampling to estimate the true logmarginal likelihood (Rezende et al., 2014; Burda et al., 2016; Tomczak and Welling, 2018; He et al., 2019). We further conduct experiments by computing the importance sampling estimates with 500 samples and comparing to He et al. (2019)’s 5531 Model Standard Inputless NLL KL PPL NLL KL PPL Bowman et al. (2016) LSTM-LM 100 – 116 135 – 600 LSTM-VAE 101 2 119 125 15 380 Xu and Durrett (2018) LSTM-LM 100 – 114 134 – 596 LSTM-VAE 99 4.4 109 125 6.3 379 LSTM-vMF-VAE 96 5.7 98 117 18.6 262 This work LSTM-LM 100.8±0.2 – 99.4±0.7 139.9±0.0 – 592.3±0.5 LSTM-VAE 102.5±0.2 1.5±0.3 107.5±1.0 134.8±0.4 3.8±0.5 469.3±7.6 LSTM-VAE (L = 5) 100.7±0.3 2.1±0.4 98.8±1.2 134.7±0.9 3.9±0.9 468.1±19.4 LSTM-VAE (L = 10) 100.4±0.2 2.2±0.4 97.7±0.9 134.8±0.8 3.5±1.0 468.2±16.2 LSTM-VAE-AVG (L = 5) 97.3±0.6 7.6±0.9 84.6±2.4 118.8±0.5 10.6±0.3 225.8±5.6 LSTM-VAE-AVG (L = 10) 94.3±0.4 8.1±0.2 73.8±1.5 113.8±1.0 9.6±0.5 179.7±8.3 (a) PTB Model Standard Inputless NLL KL PPL NLL KL PPL Yang et al. (2017) CNN-LM 335.4 – 66.6 – – – CNN-VAE + init 332.1 10.0 63.9 – – – Kim et al. (2018) LSTM-LM 329.1 – 61.6 – – – SA-VAE 327.5 7.2 60.4 – – – This work LSTM-LM 328.4±0.2 – 61.1±0.2 507.4±0.0 – 574.0±0.0 LSTM-VAE 330.4±0.4 1.5±0.5 62.6±0.3 467.5±0.3 18.5±0.4 348.5±1.5 LSTM-VAE (L = 5) 328.8±0.1 2.6±0.5 61.4±0.1 464.3±1.2 22.2±1.7 334.8±4.9 LSTM-VAE (L = 10) 329.1±0.1 2.8±0.7 61.6±0.1 464.3±1.3 22.8±1.7 334.8±5.5 LSTM-VAE-AVG (L = 5) 327.3±0.5 12.2±0.4 60.3±0.4 446.4±0.1 19.7±0.2 267.5±0.4 LSTM-VAE-AVG (L = 10) 328.5±1.3 10.8±1.0 61.2±1.0 441.4±0.5 16.8±0.2 251.2±1.5 (b) Yahoo Table 2: Results on (a) PTB and (b) Yahoo test sets. For LSTM-LM, we show the exact negative log likelihood (NLL) and perplexity (PPL). For the variational models, we show the upper bounds (i.e., the negative ELBO) on NLL/PPL. The KL portion of the ELBO is given in the column alongside NLL. NLL/KL values are averaged across examples. L indicates the number of latent variables at each gradient step. We report mean and standard deviation computed across five training/test runs from different random initial starting points. PTB Yahoo NLL-ELBO NLLIW NLL-ELBO NLLIW He et al. (2019) LSTM-VAE-AIN + anneal – – 328.4±0.2 326.7±0.1 This work LSTM-VAE 102.5±0.2 102.1±0.2 330.4±0.4 329.6±0.2 LSTM-VAE-AVG (L = 5) 97.3±0.6 95.1±0.8 327.3±0.5 324.0±0.5 LSTM-VAE-AVG (L = 10) 94.3±0.4 91.7±0.5 328.5±1.3 324.9±1.3 Table 3: Comparison of different NLL estimates on PTB and Yahoo test sets. 
NLL-ELBO = the upper bounds taken from Table 2; NLLIW = the importance sampling estimates of NLL with 500 samples. We report mean and standard deviation computed across five training/test runs from different random initial starting points. aggressive inference network (AIN) training. Table 3 shows a comparison of different NLL estimates. Our results are consistent with those of (He et al., 2019) in which the importance sampling yields the tighter bounds than the ELBO. 6 Conclusion We have shown that using multiple latent variables at each gradient step can improve the performance of the baseline LSTM-VAE model. The empirical results indicate that our models combined with expressive decoders can successfully make use of the latent variables, resulting in higher KL values and better NLL/PPL results. Our proposed method is simple and can serve as a strong baseline for latent variable text modeling. Acknowledgments We thank the anonymous reviewers for their insightful comments. 5532 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of CoNLL. Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. 2016. Importance weighted autoencoders. In Proceedings of ICLR. Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In Proceedings of ICLR. Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. 2018. Hyperspherical variational auto-encoders. In Proceedings of UAI. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics (TACL). Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In Proceedings of ICLR. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8). Matthew D. Hoffman, David M. Blei, Chong Wang, and John W. Paisley. 2013. Stochastic variational inference. Journal of Machine Learning Research, 14(1). Matthew D. Hoffman and Matthew J. Johnson. 2016. Elbo surgery: yet another way to carve up the variational evidence lower bound. In Proceedings of NIPS 2016 Workshop on Advances in Approximate Bayesian Inference. Nitish Shirish Keskar and Richard Socher. 2017. Improving generalization performance by switching from adam to SGD. CoRR, abs/1712.07628. Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. 2018. Semi-amortized variational autoencoders. In Proceedings of ICML. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of ICLR. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Comput. Linguist., 19(2). Yishu Miao and Phil Blunsom. 2016. 
Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of EMNLP. A¨aron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. CoRR, abs/1609.03499. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of ICML. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Proceedings of NIPS. Casper Kaae Sønderby, Tapani Raiko, Lars Maalø e, Søren Kaae Sø nderby, and Ole Winther. 2016. Ladder variational autoencoders. In Proceedings of NIPS. Jakub M. Tomczak and Max Welling. 2018. VAE with a VampPrior. In Proceedings of AISTATS. Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. 2017. The marginal value of adaptive gradient methods in machine learning. In Proceedings of NIPS. Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In Proceedings of EMNLP. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In Proceedings of ICML. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of NIPS.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5533–5538 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5533 Misleading Failures of Partial-input Baselines Shi Feng Computer Science University of Maryland [email protected] Eric Wallace Allen Institute for Artificial Intelligence [email protected] Jordan Boyd-Graber Computer Science, iSchool, UMIACS, and LSC University of Maryland [email protected] Abstract Recent work establishes dataset difficulty and removes annotation artifacts via partial-input baselines (e.g., hypothesis-only models for SNLI or question-only models for VQA). When a partial-input baseline gets high accuracy, a dataset is cheatable. However, the converse is not necessarily true: the failure of a partialinput baseline does not mean a dataset is free of artifacts. To illustrate this, we first design artificial datasets which contain trivial patterns in the full input that are undetectable by any partial-input model. Next, we identify such artifacts in the SNLI dataset—a hypothesis-only model augmented with trivial patterns in the premise can solve 15% of the examples that are previously considered “hard”. Our work provides a caveat for the use of partial-input baselines for dataset verification and creation. 1 Dataset Artifacts Hurt Generalizability Dataset quality is crucial for the development and evaluation of machine learning models. Largescale natural language processing (NLP) datasets often use human annotations on web-crawled data, which can introduce artifacts. For example, crowdworkers might use specific words to contradict a given premise (Gururangan et al., 2018). These artifacts corrupt the intention of the datasets to train and evaluate models for natural language understanding. Importantly, a human inspection of individual examples cannot catch artifacts because they are only visible in aggregate on the dataset level. However, machine learning algorithms, which detect and exploit recurring patterns in large datasets by design, can just as easily use artifacts as real linguistic clues. As a result, models trained on these datasets can achieve high test accuracy by exploiting artifacts but fail to generalize, e.g., they fail under adversarial evaluation (Jia and Liang, 2017; Ribeiro et al., 2018). The identification of dataset artifacts has changed model evaluation and dataset construction (Chen et al., 2016; Jia and Liang, 2017; Goyal et al., 2017). One key method is to use partialinput baselines, i.e., models that intentionally ignore portions of the input. Example use cases include hypothesis-only models for natural language inference (Gururangan et al., 2018), question-only models for visual question answering (Goyal et al., 2017), and paragraph-only models for reading comprehension (Kaushik and Lipton, 2018). A successful partial-input baseline indicates that a dataset contains artifacts which make it easier than expected. On the other hand, examples where this baseline fails are “hard” (Gururangan et al., 2018), and the failure of partial-input baselines is considered a verdict of a dataset’s difficulty (Zellers et al., 2018; Kaushik and Lipton, 2018). These partial-input analyses are valuable and indeed reveal dataset issues; however, they do not tell the whole story. Just as being free of one ailment is not the same as a clean bill of health, a baseline’s failure only indicates that a dataset is not broken in one specific way. 
There is no reason that artifacts only infect part of the input—models can exploit patterns that are only visible in the full input. After reviewing partial-input baselines (Section 2), we construct variants of a natural language inference dataset to highlight the potential pitfalls of partial-input dataset validation (Section 3). Section 4 shows that real datasets have artifacts that evade partial-input baselines; we use a hypothesisplus-one-word model to solve 15% of the “hard” examples from SNLI (Bowman et al., 2015; Gururangan et al., 2018) where hypothesis-only models fail. Furthermore, we highlight some of the artifacts learned by this model using k-nearest neighbors in representation space. Section 5 discusses how partial-input baselines should be used in future dataset creation and analysis. 5534 2 What are Partial-input Baselines? A long-term goal of NLP is to solve tasks that we believe require a human-level understanding of language. The NLP community typically defines tasks with datasets: reproduce these answers given these inputs, and you have solved the underlying task. This task-dataset equivalence is only valid when the dataset accurately represents the task. Unfortunately, verifying this equivalence via humans is fundamentally insufficient: humans reason about examples one by one, while models can discover recurring patterns. Patterns that are not part of the underlying task, or artifacts of the data collection process, can lead to models that “cheat”—ones that achieve high test accuracy using patterns that do not generalize. One frequent type of artifact, especially in classification datasets where each input contains multiple parts (e.g., a question and an image), is a strong correlation between a part of the input and the label. For example, a model can answer many VQA questions without looking at the image (Goyal et al., 2017). These artifacts can be detected using partialinput baselines: models that are restricted to using only part of the input. Validating a dataset with a partial-input baseline has the following steps: 1. Decide which part of the input to use. 2. Reduce all examples in the training set and the test set. 3. Train a new model from scratch on the partialinput training set. 4. Test the model on the partial-input test set. High accuracy from a partial-input model implies the original dataset is solvable (to some extent) in the wrong ways, i.e., using unintended patterns. Partial-input baselines have identified artifacts in many datasets, e.g., SNLI (Gururangan et al., 2018; Poliak et al., 2018), VQA (Goyal et al., 2017), EmbodiedQA (Anand et al., 2018), visual dialogue (Massiceti et al., 2018), and visual navigation (Thomason et al., 2019). 3 How Partial-input Baselines Fail If a partial-input baseline fails, e.g., it gets close to chance accuracy, one might conclude that a dataset is difficult. For example, partial-input baselines are used to identify the “hard” examples in SNLI (Gururangan et al., 2018), verify that SQuAD is well constructed (Kaushik and Lipton, 2018), and that SWAG is challenging (Zellers et al., 2018). Reasonable as it might seem, this kind of argument can be misleading—it is important to understand what exactly these results do and do not imply. A low accuracy from a partial-input baseline only means that the model failed to confirm a specific exploitable pattern in the part of the input that the model can see. 
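As an illustration of the validation recipe in Section 2, a partial-input baseline can be set up generically as in the sketch below; `train_fn` and `eval_fn` are hypothetical stand-ins for any classifier's training and evaluation routines, so this is only a schematic outline of the procedure, not any particular paper's code.

```python
def partial_input_baseline(train, test, keep_field, train_fn, eval_fn):
    """Generic partial-input validation: keep only one field of each
    example, retrain from scratch, and evaluate on the reduced test set."""
    reduce = lambda data: [{"text": ex[keep_field], "label": ex["label"]}
                           for ex in data]
    model = train_fn(reduce(train))       # step 3: train on the partial input
    return eval_fn(model, reduce(test))   # step 4: accuracy on the partial input

# e.g., a hypothesis-only baseline for an SNLI-style dataset:
# acc = partial_input_baseline(snli_train, snli_test, "hypothesis",
#                              train_fn, eval_fn)
```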
This does not mean, however, that the dataset is free of artifacts—the full input might still contain very trivial patterns. To illustrate how the failures of partial-input baselines might shadow more trivial patterns that are only visible in the full input, we construct two variants of the SNLI dataset (Bowman et al., 2015). The datasets are constructed to contain trivial patterns that partial-input baselines cannot exploit, i.e., the patterns are only visible in the full input. As a result, a full-input model can achieve perfect accuracy whereas partial-input models fail.

3.1 Label as Premise

In SNLI, each example consists of a pair of sentences: a premise and a hypothesis. The goal is to classify the semantic relationship between the premise and the hypothesis—either entailment, neutral, or contradiction.

Our first SNLI variant is an extreme example of artifacts that cannot be detected by a hypothesis-only baseline. Each SNLI example (training and testing) is copied three times, and the copies are assigned the labels Entailment, Neutral, and Contradiction, respectively. We then set each example's premise to be the literal word of the associated label: "Entailment", "Neutral", or "Contradiction" (Table 1). From the perspective of a hypothesis-only model, the three copies have identical inputs but conflicting labels. Thus, the best accuracy from any hypothesis-only model is chance—the model fails due to high Bayes error. However, a full-input model can see the label in the premise and achieve perfect accuracy. This serves as an extreme example of a dataset that passes a partial-input baseline test but still contains artifacts. Obviously, a premise-only baseline can detect these artifacts; we address this in the next dataset variant.

Old Premise | Animals are running
New Premise | Entailment
Hypothesis  | Animals are outdoors
Label       | Entailment

Table 1: Each example in this dataset has the ground-truth label set as the premise. Every hypothesis occurs three times in the dataset, each time with a unique label and premise combination (not shown in this table). Therefore, a hypothesis-only baseline will only achieve chance accuracy, but a full-input model can trivially solve the dataset.

3.2 Label Hidden in Premise and Hypothesis

The artifact we introduce in the previous dataset can be easily detected by a premise-only baseline. In this variant, we "encrypt" the label such that it is only visible if we combine the premise and the hypothesis, i.e., neither premise-only nor hypothesis-only baselines can detect the artifact. Each label is represented by the concatenation of two "code words", and this mapping is one-to-many: each label has three combinations of code words, and each combination uniquely identifies a label. Table 2 shows our code word configuration. The design of the code words ensures that a single code word cannot uniquely identify a label—you need both. We put one code word in the premise and the other in the hypothesis. These encrypted labels mimic an artifact that requires both parts of the input. Table 3 shows an SNLI example modified accordingly. A full-input model can exploit the artifact and trivially achieve perfect accuracy, but a partial-input model cannot.

Label         | Combinations
Entailment    | A+B  C+D  E+F
Contradiction | A+F  C+B  E+D
Neutral       | A+D  C+F  E+B

Table 2: We "encrypt" the labels to mimic an artifact that requires both parts of the input. Each capital letter is a code word, and each label is derived from the combination of two code words. Each combination uniquely identifies a label, e.g., A in the premise and B in the hypothesis equals Entailment. However, a single code word cannot identify the label.

A more extreme version of this modified dataset has exactly the nine combinations in Table 2 as both the training set and the test set. Since a single code word cannot identify the label, neither hypothesis-only nor premise-only baselines can achieve more than chance accuracy. However, a full-input model can perfectly extract the label by combining the premise and the hypothesis.

Premise    | A Animals are running
Hypothesis | B Animals are outdoors
Label      | Entailment

Table 3: Each example in this dataset has a code word added to both the premise and the hypothesis. Following the configuration of Table 2, A in the premise combined with B in the hypothesis indicates the label is Entailment. A full-input model can easily exploit this artifact but partial-input models cannot.

4 Artifacts Evade Partial-input Baselines

Our synthetic dataset variants contain trivial artifacts that partial-input baselines fail to detect. Do real datasets such as SNLI have artifacts that are not detected by partial-input baselines? We investigate this by providing additional information about the premise to a hypothesis-only model. In particular, we provide the last noun of the premise, i.e., we form a hypothesis-plus-one-word model. Since this additional information appears useless to humans (examples below), it is an artifact rather than a generalizable pattern.

We use a BERT-based (Devlin et al., 2019) classifier that gets 88.28% accuracy with the regular, full input. The hypothesis-only version reaches 70.10% accuracy.1 With the hypothesis-plus-one-word model, the accuracy improves to 74.6%, i.e., the model solves 15% of the "hard" examples that are unsolvable by the hypothesis-only model.2

Table 4 shows examples that are only solvable with the one additional word from the premise. For both the hypothesis-only and hypothesis-plus-one-word models, we follow Papernot and McDaniel (2018) and Wallace et al. (2018) and retrieve training examples using nearest neighbor search in the final BERT representation space. In the first example, humans would not consider the hypothesis "The young boy is crying" as a contradiction to the premise "camera". In this case, the hypothesis-only model incorrectly predicts Entailment; however, the hypothesis-plus-one-word model correctly predicts Contradiction. This pattern—including one premise word—is an artifact that regular partial-input baselines cannot detect but can be exploited by a full-input model.

1 Gururangan et al. (2018) report 67.0% using a simpler hypothesis-only model.
2 We create the easy-hard split of the dataset using our model, not using the model from Gururangan et al. (2018).
A little boy is watching a toy train. Entailment A young girl in goggles riding on a toy train. A girl rides a toy train. Contradiction A little girl is playing with tinker toys. A little boy is playing with toys. Contradiction A toddler shovels a snowy driveway with a shovel. A young child is playing with toys. Contradiction A boy playing with toys in a bedroom. A boy is playing with toys at the park. Table 4: We create a hypothesis-plus-one-word model that sees the hypothesis alongside the last noun in the premise. We show two SNLI test examples (highlighted) that are answered correctly using this model but are answered incorrectly using a hypothesis-only model. For each test example, we also show the training examples that are nearest neighbors in BERT’s representation space. When using the hypothesis and the last noun in the premise (underlined), training examples with the correct label are retrieved; when using only the hypothesis, examples with the incorrect label are retrieved. 5 Discussion and Related Work Partial-input baselines are valuable sanity checks for datasets, but as we illustrate, their implications should be understood carefully. This section discusses methods for validating and creating datasets in light of possible artifacts from the annotation process, as well as empirical results that corroborate the potential pitfalls highlighted in this paper. Furthermore, we discuss alternative approaches for developing robust NLP models. Hypothesis Testing Validating datasets with partial-input baselines is a form of hypothesistesting: one hypothesizes trivial solutions to the dataset (i.e., a spurious correlation between labels and a part of the input) and verifies if these hypotheses are true. While it is tempting to hypothesize other ways a model can cheat, it is infeasible to enumerate over all of them. In other words, if we could write down all the necessary tests for test-driven development (Beck, 2002) of a machine learning model, we would already have a rule-based system that can solve our task. Adversarial Annotation Rather than using partial-input baselines as post-hoc tests, a natural idea is to incorporate them into the data generation process to reject bad examples. For example, the SWAG (Zellers et al., 2018) dataset consists of multiple-choice answers that are selected adversarially against an ensemble of partial-input and heuristic classifiers. However, since these classifiers can be easily fooled if they rely on superficial patterns, the resulting dataset may still contain artifacts. In particular, a much stronger model (BERT) that sees the full-input easily solves the dataset. This demonstrates that using partial-input baselines as adversaries may lead to datasets that are just difficult enough to fool the baselines but not difficult enough to ensure that no model can cheat. Adversarial Evaluation Instead of validating a dataset, one can alternatively probe the model directly. For example, models can be stress tested using adversarial examples (Jia and Liang, 2017; Wallace et al., 2019) and challenge sets (Glockner et al., 2018; Naik et al., 2018). These tests can reveal strikingly simple model limitations, e.g., basic paraphrases can fool textual entailment and visual question answering systems (Iyyer et al., 2018; Ribeiro et al., 2018), while common typos drastically degrade neural machine translation quality (Belinkov and Bisk, 2018). Interpretations Another technique for probing models is to use interpretation methods. 
Interpretations, however, have a problem of faithfulness (Rudin, 2018): they approximate (often locally) a complex model with a simpler, interpretable model (often a linear model). Since interpretations are inherently an approximation, they can never be completely faithful—there are cases where the original model and the simple model behave differently (Ghorbani et al., 2019). These 5537 cases might also be especially important as they usually reflect the counter-intuitive brittleness of the complex models (e.g., in adversarial examples). Certifiable Robustness Finally, an alternative approach for creating models that are free of artifacts is to alter the training process. In particular, model robustness research in computer vision has begun to transition from an empirical arms race between attackers and defenders to more theoretically sound robustness methods. For instance, convex relaxations can train models that are provably robust to adversarial examples (Raghunathan et al., 2018; Wong and Kolter, 2018). Despite these method’s impressive (and rapidly developing) results, they largely focus on adversarial perturbations bounded to an L∞ball. This is due to the difficulties in formalizing attacks and defenses for more complex threat models, of which the discrete nature of NLP is included. Future work can look to generalize these methods to other classes of model vulnerabilities and artifacts. 6 Conclusion Partial-input baselines are valuable sanity checks for dataset difficulty, but their implications should be analyzed carefully. We illustrate in both synthetic and real datasets how partial-input baselines can overshadow trivial, exploitable patterns that are only visible in the full input. Our work provides an alternative view on the use of partial-input baselines in future dataset creation. Acknowledgments This work was supported by NSF Grant IIS1822494. Boyd-Graber and Feng are also supported by DARPA award HR0011-15-C-0113 under subcontract to Raytheon BBN Technologies. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. References Ankesh Anand, Eugene Belilovsky, Kyle Kastner, Hugo Larochelle, and Aaron Courville. 2018. Blindfold baselines for embodied QA. In NeurIPS Visually-Grounded Interaction and Language Workshop. Kent Beck. 2002. Test-Driven Development by Example. Addison-Wesley. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In Proceedings of the International Conference on Learning Representations. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of Empirical Methods in Natural Language Processing. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics. Amirata Ghorbani, Abubakar Abid, and James Y. Zou. 2019. Interpretation of neural networks is fragile. In Association for the Advancement of Artificial Intelligence. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. 
Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the Association for Computational Linguistics. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Computer Vision and Pattern Recognition. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Conference of the North American Chapter of the Association for Computational Linguistics. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke S. Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Conference of the North American Chapter of the Association for Computational Linguistics. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. 5538 In Proceedings of Empirical Methods in Natural Language Processing. Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of Empirical Methods in Natural Language Processing. Daniela Massiceti, Puneet K. Dokania, N. Siddharth, and Philip H.S. Torr. 2018. Visual dialogue without vision or dialogue. In NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of International Conference on Computational Linguistics. Nicolas Papernot and Patrick D. McDaniel. 2018. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv preprint arXiv: 1803.04765. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In 7th Joint Conference on Lexical and Computational Semantics (*SEM). Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. 2018. Certified defenses against adversarial examples. In Proceedings of the International Conference on Learning Representations. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the Association for Computational Linguistics. Cynthia Rudin. 2018. Please stop explaining black box models for high stakes decisions. In NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning. Jesse Thomason, Daniel Gordan, and Yonatan Bisk. 2019. Shifting the baseline: Single modality performance on visual navigation & QA. In Conference of the North American Chapter of the Association for Computational Linguistics. Eric Wallace, Shi Feng, and Jordan Boyd-Graber. 2018. Interpreting neural networks with nearest neighbors. In EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering. In Transactions of the Association for Computational Linguistics. Eric Wong and J. Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the International Conference of Machine Learning. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. 
SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of Empirical Methods in Natural Language Processing.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5539–5544 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5539 Soft Contextual Data Augmentation for Neural Machine Translation Fei Gao1,∗, Jinhua Zhu2,∗, Lijun Wu3, Yingce Xia4, Tao Qin4, Xueqi Cheng1, Wengang Zhou2, Tie-Yan Liu4 1Institute of Computing Technology, Chinese Academy of Sciences; 2University of Science and Technology of China, 3Sun Yat-sen University, 4Microsoft Reserach Asia; 1{gaofei17b, cxq}@ict.ac.cn, 2{teslazhu@mail., zhwg@}ustc.edu.cn, [email protected], 4{Yingce.Xia, taoqin, tyliu}@microsoft.com Abstract While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is still very limited. In this paper, we present a novel data augmentation method for neural machine translation. Different from previous augmentation methods that randomly drop, swap or replace words with other words in a sentence, we softly augment a randomly chosen word in a sentence by its contextual mixture of multiple related words. More accurately, we replace the onehot representation of a word by a distribution (provided by a language model) over the vocabulary, i.e., replacing the embedding of this word by a weighted combination of multiple semantically similar words. Since the weights of those words depend on the contextual information of the word to be replaced, the newly generated sentences capture much richer information than previous augmentation methods. Experimental results on both small scale and large scale machine translation datasets demonstrate the superiority of our method over strong baselines1. 1 Introduction Data augmentation is an important trick to boost the accuracy of deep learning methods by generating additional training samples. These methods have been widely used in many areas. For example, in computer vision, the training data are augmented by transformations like random rotation, resizing, mirroring and cropping (Krizhevsky et al., 2012; Cubuk et al., 2018). While similar random transformations have also been explored in natural language processing (NLP) tasks (Xie et al., 2017), data augmentation ∗The first two authors contributed equally to this work. 1Our code can be found at https://github.com/ teslacool/SCA is still not a common practice in neural machine translation (NMT). For a sentence, existing methods include randomly swapping two words, dropping word, replacing word with another one and so on. However, due to text characteristics, these random transformations often result in significant changes in semantics. A recent new method is contextual augmentation (Kobayashi, 2018; Wu et al., 2018), which replaces words with other words that are predicted using language model at the corresponding word position. While such method can keep semantics based on contextual information, this kind of augmentation still has one limitation: to generate new samples with adequate variation, it needs to sample multiple times. For example, given a sentence in which N words are going to be replaced with other words predicted by one language model, there could be as many as exponential candidates. Given that the vocabulary size is usually large in languages, it is almost impossible to leverage all the possible candidates for achieving good performance. 
In this work, we propose soft contextual data augmentation, a simple yet effective data augmentation approach for NMT. Different from the previous methods that randomly replace one word to another, we propose to augment NMT training data by replacing a randomly chosen word in a sentence with a soft word, which is a probabilistic distribution over the vocabulary. Such a distributional representation can capture a mixture of multiple candidate words with adequate variations in augmented data. To ensure the distribution reserving similar semantics with original word, we calculate it based on the contextual information by using a language model, which is pretrained on the training corpus. To verify the effectiveness of our method, we conduct experiments on four machine transla5540 tion tasks, including IWSLT2014 German to English, Spanish to English, Hebrew to English and WMT2014 English to German translation tasks. In all tasks, the experimental results show that our method can obtain remarkable BLEU score improvement over the strong baselines. 2 Related Work We introduce several related works about data augmentation for NMT. Artetxe et al. (2017) and Lample et al. (2017) randomly shuffle (swap) the words in a sentence, with constraint that the words will not be shuffled further than a fixed small window size. Iyyer et al. (2015) and Lample et al. (2017) randomly drop some words in the source sentence for learning an autoencoder to help train the unsupervised NMT model. In Xie et al. (2017), they replace the word with a placeholder token or a word sampled from the frequency distribution of vocabulary, showing that data noising is an effective regularizer for NMT. Fadaee et al. (2017) propose to replace a common word by low-frequency word in the target sentence, and change its corresponding word in the source sentence to improve translation quality of rare words. Most recently, Kobayashi (2018) propose an approach to use the prior knowledge from a bi-directional language model to replace a word token in the sentence. Our work differs from their work that we use a soft distribution to replace the word representation instead of a word token. 3 Method In this section, we present our method in details. 3.1 Background and Motivations Given a source and target sentence pair (s, t) where s = (s1, s2, ..., sT ) and t = (t1, t2, ..., tT ′), a neural machine translation system models the conditional probability p(t1, ..., tT ′|s1, ..., sT ). NMT systems are usually based on an encoderdecoder framework with an attention mechanism (Sutskever et al., 2014; Bahdanau et al., 2014). In general, the encoder first transforms the input sentence with words/tokens s1, s2, ..., sT into a sequence of hidden states {ht}T t=1, and then the decoder takes the hidden states from the encoder as input to predict the conditional distribution of each target word/token p(tτ|ht, t<τ) given the previous ground truth target word/tokens. Similar to the NMT decoder, a language model is intended to predict the next word distribution given preceding words, but without another sentence as a conditional input. In NMT, as well as other NLP tasks, each word is assigned with a unique ID, and thus represented as an one-hot vector. For example, the i-th word in the vocabulary (with size |V |) is represented as a |V |-dimensional vector (0, 0, ..., 1, ..., 0), whose i-th dimension is 1 and all the other dimensions are 0. 
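To make this representation concrete, here is a minimal NumPy sketch (the toy vocabulary size, dimension, and variable names are ours, purely for illustration) showing that the usual embedding lookup is the same as multiplying a word's one-hot vector by the embedding matrix — the view that the soft-word formulation below generalizes by swapping the one-hot vector for a full distribution over the vocabulary.

```python
import numpy as np

V, d = 10, 4                     # toy vocabulary size and embedding dimension (illustrative)
E = np.random.randn(V, d)        # embedding matrix: one row per vocabulary word
i = 3                            # index of the i-th word

one_hot = np.zeros(V)
one_hot[i] = 1.0                 # |V|-dimensional one-hot vector for word i

# The embedding lookup is exactly the one-hot vector times the embedding matrix.
assert np.allclose(one_hot @ E, E[i])
```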
Existing augmentation methods generate new training samples by replacing one word in the original sentences with another word (Wang et al., 2018; Kobayashi, 2018; Xie et al., 2017; Fadaee et al., 2017). However, due to the sparse nature of words, it is almost impossible for those methods to leverage all possible augmented data. First, given that the vocabulary is usually large, one word usually has multiple semantically related words as replacement candidates. Second, for a sentence, one needs to replace multiple words instead of a single word, making the number of possible sentences after augmentation increases exponentially. Therefore, these methods often need to augment one sentence multiple times and each time replace a different subset of words in the original sentence with different candidate words in the vocabulary; even doing so they still cannot guarantee adequate variations of augmented sentences. This motivates us to augment training data in a soft way. 3.2 Soft Contextual Data Augmentation Inspired by the above intuition, we propose to augment NMT training data by replacing a randomly chosen word in a sentence with a soft word. Different from the discrete nature of words and their one-hot representations in NLP tasks, we define a soft word as a distribution over the vocabulary of |V | words. That is, for any word w ∈V , its soft version is P(w) = (p1(w), p2(w), ..., p|V |(w)), where pj(w) ≥0 and P|V | j=1 pj(w) = 1. Since P(w) is a distribution over the vocabulary, one can sample a word with respect to this distribution to replace the original word w, as done in Kobayashi (2018). Different from this method, we directly use this distribution vector to replace a randomly chosen word from the original sentence. Suppose E is the embedding matrix of all the |V | words. The embedding of the soft word w is ew = P(w)E = |V | X j=0 pj(w)Ej, (1) 5541 which is the expectation of word embeddings over the distribution defined by the soft word. The distribution vector P(w) of a word w can be calculated in multiple ways. In this work, we leverage a pretrained language model to compute P(w) and condition on all the words preceding w. That is, for the t-th word xt in a sentence, we have pj(xt) = LM(wj|x<t), where LM(wj|x<t) denotes the probability of the j-th word in the vocabulary appearing after the sequence x1, x2, · · · , xt−1. Note that the language model is pretrained using the same training corpus of the NMT model. Thus the distribution P(w) calculated by the language model can be regarded as a smooth approximation of the original one-hot representation, which is very different from previous augmentation methods such as random swapping or replacement. Although this distributional vector is noisy, the noise is aligned with the training corpus. Figure 1 shows the architecture of the combination of the encoder of the NMT model and the language model. The decoder of the NMT model is similarly combined with the language model. In experiments, we randomly choose a word in the training data with probability γ and replace it by its soft version (probability distribution). BOS x0 x2 x1 xn x0 x1 x3 𝑃(𝑥2) EOS … … Shifted Sentences Original Sentences Language Model NMT Encoder 𝑃(𝑥0) 𝑃(𝑥1) 𝑃(𝑥3) x2 𝑃(𝐸𝑂𝑆) Replace Replace Embedding … Figure 1: The overall architecture of our soft contextual data augmentation approach in encoder side for source sentences. The decoder side for target sentences is similar. At last, it is worth pointing out that no additional monolingual data is used in our method. 
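A minimal PyTorch sketch of the replacement rule just described is given below. It assumes a frozen language model `lm` that, at each position t, returns logits over the vocabulary conditioned on the preceding tokens x<t; the interface, tensor shapes, and function names are our assumptions for illustration, not the authors' implementation, and γ is simply the replacement probability mentioned above.

```python
import torch
import torch.nn.functional as F

def soft_augment(token_ids, embedding, lm, gamma=0.15):
    """With probability gamma, replace a token's embedding by the expected
    embedding under the language model's distribution at that position (Eq. 1)."""
    emb = embedding(token_ids)                          # (batch, seq_len, d) original embeddings
    with torch.no_grad():                               # the LM is pretrained and kept fixed
        lm_logits = lm(token_ids)                       # (batch, seq_len, |V|), conditioned on x_<t
    p = F.softmax(lm_logits, dim=-1)                    # soft word: a distribution over the vocabulary
    soft_emb = p @ embedding.weight                     # expectation of word embeddings under P(w)
    replace = torch.rand(token_ids.shape, device=token_ids.device) < gamma
    return torch.where(replace.unsqueeze(-1), soft_emb, emb)
```

The output of `soft_augment` would then be fed to the NMT encoder (or, analogously, the decoder) in place of the ordinary embedding layer output.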
This is different from previous techniques, such as back translation, that rely on monolingual data (Sennrich et al., 2015a; Gulcehre et al., 2015; Cheng et al., 2016; He et al., 2016; Hoang et al., 2018). We leave the exploration of leveraging monolingual data to future work. 4 Experiment In this section, we demonstrate the effectiveness of our method on four translation datasets with different scale. The translation quality is evaluated by case-sensitive BLEU score. We compare our approach with following baselines: • Base: The original training strategy without any data augmentation; • Swap: Randomly swap words in nearby positions within a window size k (Artetxe et al., 2017; Lample et al., 2017); • Dropout: Randomly drop word tokens (Iyyer et al., 2015; Lample et al., 2017); • Blank: Randomly replace word tokens with a placeholder token (Xie et al., 2017); • Smooth: Randomly replace word tokens with a sample from the unigram frequency distribution over the vocabulary (Xie et al., 2017); • LMsample: Randomly replace word tokens sampled from the output distribution of one language model (Kobayashi, 2018). All above introduced methods except Swap incorporate a hyper-parameter, the probability γ of each word token to be replaced in training phase. We set γ with different values in {0, 0.05, 0.1, 0.15, 0.2}, and report the best result for each method. As for swap, we use 3 as window size following Lample et al. (2017). For our proposed method, we train two language models for each translation task. One for source language, and the other one for target language. The training data for the language models is the corresponding source/target data from the bilingual translation dataset. 4.1 Datasets We conduct experiments on IWSLT2014 {German, Spanish, Hebrew} to English ({De, Es, He}→En) and WMT2014 English to German (En→De) translation tasks to verify our approach. We follow the same setup in Gehring et al. (2017) for IWSLT2014 De→En task. The training data and validation data consist of 160k and 7k 5542 IWSLT WMT De →En Es →En He →En En →De Base 34.79 41.58 33.64 28.40 +Swap 34.70 41.60 34.25 28.13 +Dropout 35.13 41.62 34.29 28.29 +Blank 35.37 42.28 34.37 28.89 +Smooth 35.45 41.69 34.61 28.97 +LMsample 35.40 42.09 34.31 28.73 Ours 35.78 42.61 34.91 29.70 Table 1: BLEU scores on four translation tasks. sentence pairs. tst2010, tst2011, tst2012, dev2010 and dev2012 are concatenated as our test data. For Es→En and He→En tasks, there are 181k and 151k parallel sentence pairs in each training set, and we use tst2013 as the validation set, tst2014 as the test set. For all IWSLT translation tasks, we use a joint source and target vocabulary with 10K byte-pair-encoding (BPE) (Sennrich et al., 2015b) types. For WMT2014 En→De translation, again, we follow Gehring et al. (2017) to filter out 4.5M sentence pairs for training. We concatenate newstest2012 and newstest2013 as the validation set and use newstest2014 as test set. The vocabulary is built upon the BPE with 40k sub-word types. 4.2 Model Architecture and Optimization We adopt the sate-of-the-art Transformer architecture (Vaswani et al., 2017) for language models and NMT models in our experiments. For IWSLT tasks, we take the transformer base configuration, except a) the dimension of the inner MLP layer is set as 1024 instead of 2048 and b) the number of attention heads is 4 rather than 8. 
As for the WMT En→De task, we use the default transformer big configuration for the NMT model, but the language model is configured with transformer base setting in order to speed up the training procedure. All models are trained by Adam (Kingma and Ba, 2014) optimizer with default learning rate schedule as Vaswani et al. (2017). Note that after training the language models, the parameters of the language models are fixed while we train the NMT models. 4.3 Main Results The evaluation results on four translation tasks are presented in Table 1. As we can see, our method can consistently achieve more than 1.0 BLEU score improvement over the strong Transformer base system for all tasks. Compared with other augmentation methods, we can find that 1) our method achieves the best results on all the translation tasks and 2) unlike other methods that may not be powerful in all tasks, our method universally works well regardless of the dataset. Specially, on the large scale WMT 2014 En→De dataset, although this dataset already contains a large amount of parallel training sentence pairs, our method can still outperform the strong base system by +1.3 BLEU point and achieve 29.70 BLEU score. These results clearly demonstrate the effectiveness of our approach. 4.4 Study 0.00 0.05 0.10 0.15 0.20 Probability 34 35 36 BLEU base dropout blank smooth lmsample ours Figure 2: BLEU scores of each method on IWSLT De→En dataset with different replacing probability. As mentioned in Section 4, we set different 5543 probability value of γ to see the effect of our approach and other methods in this subsection. Figure 2 shows the BLEU scores on IWSLT De→En dataset of each method, from which we can see that our method can observe a consistent BLEU improvement within a large probability range and obtain a strongest performance when γ = 0.15. However, other methods are easy to lead to performance drop over the baseline if γ > 0.15, and the improvement is also limited for other settings of γ. This can again prove the superior performance of our method. 5 Conclusions and Future Work In this work, we have presented soft contextual data augmentation for NMT, which replaces a randomly chosen word with a soft distributional representation. The representation is a probabilistic distribution over vocabulary and can be calculated based on the contextual information of the sentence. Results on four machine translation tasks have verified the effectiveness of our method. In the future, besides focusing on the parallel bilingual corpus for the NMT training in this work, we are interested in exploring the application of our method on the monolingual data. In addition, we also plan to study our approach in other natural language tasks, such as text summarization. References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semisupervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1965–1974. Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. 2018. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501. 
Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for lowresource neural machine translation. arXiv preprint arXiv:1705.00440. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In Proc. of ICML. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18–24. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1681–1691. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all 5544 you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. Switchout: an efficient data augmentation algorithm for neural machine translation. arXiv preprint arXiv:1808.07512. Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2018. Conditional bert contextual augmentation. arXiv preprint arXiv:1812.06705. Ziang Xie, Sida I Wang, Jiwei Li, Daniel L´evy, Aiming Nie, Dan Jurafsky, and Andrew Y Ng. 2017. Data noising as smoothing in neural network language models. arXiv preprint arXiv:1703.02573.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5545–5550 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5545 Reversing Gradients in Adversarial Domain Adaptation for Question Deduplication and Textual Entailment Tasks Anusha Kamath Carnegie Mellon University Pittsburgh, PA [email protected] Sparsh Gupta University of California San Diego San Diego, CA [email protected] Vitor Carvalho Intuit AI San Diego, CA vitor [email protected] Abstract Adversarial domain adaptation has been recently introduced as an effective technique for textual matching tasks, such as question deduplication (Shah et al., 2018). Here we investigate the use of gradient reversal on adversarial domain adaptation to explicitly learn both shared and unshared (domain specific) representations between two textual domains. In doing so, gradient reversal learns features that explicitly compensate for domain mismatch, while still distilling domain specific knowledge that can improve target domain accuracy. We evaluate reversing gradients for adversarial adaptation on multiple domains, and demonstrate that it significantly outperforms other methods on question deduplication as well as on recognizing textual entailment (RTE) tasks, achieving up to 7% absolute boost in base model accuracy on some datasets. 1 Introduction Domain adaptation is a flexible machine learning approach that allows the transfer of category independent information between domains. Through domain adaptation we can leverage source task representations to bring the source and target distributions closer in a learned joint feature space. In this paper we are focused only on semi-supervised domain adaptation — when knowledge from a large labeled dataset in a source domain can be somewhat transferred to help improve the same task on a target domain, which typically has a significantly smaller number of labels. In particular, this paper focuses on domain adaptation for the detection of question duplicates in community question answering forums (Shah et al., 2018; Hoogeveen et al., 2015), as well as for RTE tasks (Dagan et al., 2005; Zhao et al., 2017). Generally speaking, the effectiveness of domain adaptation depends essentially on two factors: the similarity between source and target domains, and representation strategy to transfer the source domain knowledge. Long et al. showed transferring features across domains becomes increasingly difficult as domain discrepancy increases (Long et al., 2017), since the features learned by models gradually transition from general to highly domain specific as training progresses. Recent domain adaptation strategies attempt to counter this issue by making certain features invariant across source and target domains using distribution matching (Cao et al., 2018) or minimizing distance metrics between the representations (Sohn et al., 2019). The idea of generating domain invariant features was further enhanced by the use of adversarial learning methods. Recent work has advocated for tuning networks using a loss functions that reduce the mismatch between source and target data distributions (Sankaranarayanan et al., 2018; Tzeng et al., 2017). Others have proposed a domain discriminator that maximizes the domain classification loss between source and target domains (Cohen et al., 2018; Shah et al., 2018). 
One particular limitation of these approaches is that they are restricted to using only the shared domain invariant features and hence can’t benefit from target domain specific information. Small amounts of labeled target domain data could in principle be used to fine-tune learned shared representations and improve the target task, however this could also lead to overfitting (Sener et al., 2016). To address this issue, Qiu et al. used both shared domain invariant and domain specific features: while the shared features are learned by maximizing domain discriminator loss, the domain specific features are learned by jointly minimizing the task loss and the domain classification loss by domain specific discriminators (Qiu et al., 2018). Similar ideas were put forth by 5546 Peng et al for cross-domain sentiment classification where they demonstrate the effectiveness of using both domain specific and domain invariant features (Peng et al., 2018). Moreover, Bousmalis et al have made similar observations in domain adaptation for image classification and related vision tasks (Bousmalis et al., 2016). All these studies follow similar approach of learning shared feature space by maximizing domain classification loss. In contrast, our work here enhances the ideas from from Qiu et al. by utilizing a Gradient Reversal Layer (GRL) (Ganin and Lempitsky, 2015) to train the domain discriminator in a minimax game manner, and show that it results in significantly better transfer performance to multiple target domains. The use of gradient reversal layer is further advocated by works of Elazar et al (Elazar and Goldberg, 2018) and Fu et al (Fu et al., 2017) for removal of demographic attributes from text, and relation extraction from text, respectively. To the best of our knowledge, the use of Gradient Reversal in textual matching tasks, such as question deduplication and RTE, is novel and may trigger further applications of this approach in other language tasks. To summarize our contributions, (1) we propose a novel approach for adversarial domain adaptation that uses gradient reversal layers to discover shared representations between source and target domains on textual matching tasks, and elegantly combines domain specific and domain invariant shared features. (2) We apply it to question deduplication tasks and empirically confirm that it outperforms all other strong baselines and feature sets on five different domains, with absolute accuracy gains of up to 4.5%. (3) We further apply the same approach to two different textual entailment domains, where it again outperforms other baselines by as much as 7% absolute accuracy points. 2 Approaches 2.1 Base Model:BiMPM Wang et al. (Wang et al., 2017) proposed the Bilateral Multi-Perspective Matching model for many language tasks, including question duplicate detection and RTE. This model takes in the two candidate sentences as inputs to a Bi-LSTM layer that generates hidden representations for both of them. These representations are passed on to a multiperspective matching block that uses four differFigure 1: (a) Architecture for data flow of pass 1, (b) Architecture for data flow of passes 2 and 3 ent matching mechanisms - full matching, maxpooling matching, attentive matching and max attentive matching to generate matched representations of all words of both the sentences. This matching takes place in both the directions, i.e. 
if P and Q are the two input sentences, then representations for all words of P are computed by matching with words of Q, and same is done for all words of Q by matching with all words of P. These representations are then fed into an aggregation layer followed by fully connected layers for classification. In our experiments, we modified this architecture by replacing the aggregation LSTM in the aggregation layer by an aggregating attention layer, and replacing the following fully connected layers by a bilinear layer. 2.2 Adversarial Domain Adaptation Methods The overall architecture used for prediction makes use of both shared and domain specific features. The shared features are learned in an adversarial fashion wherein the desired feature layer that needs to be shared sends its output to a domain discriminator. For our experiments, we plug in this domain discriminator at the base of the model, right after the Bi-LSTM layer. This is to ensure that the layers following Bi-LSTM are trained only for the duplicate classification task, and use domain invariant features generated by the BiLSTM. Our work uses two domain discriminators - shared domain discriminator with gradient rever5547 Figure 2: Architecture for data flow of passes 4 and 5 sal layer (explained below), that is used to train shared Embedding and Bi-LSTM layers to generate domain invariant features, and unshared domain discriminator that is used to train all the domain specific Embedding and Bi-LSTM layers to generate highly domain specific features. These discriminators consist of an aggregation layer (attention mechanism), followed by a fully connected layer for domain classification (see Figures 1(a) and 1(b)). The shared domain discriminator uses a Gradient Reversal Layer (GRL) (see Figure 1(a)) that acts as an identity transform in the forward pass through the network. During the backward pass however, this layer multiplies the incoming gradient by a negative factor −λ which reverses the gradient direction. The use of this layer allows the domain discriminator to be trained in a minimax game fashion, where the domain classification layer tries to minimize the domain classification loss, thus trying to be better at this task, while feature extraction layers (layers before GRL) act as adversaries by trying to make the task harder for domain classification layer. This ensures that feature extraction layers are as ineffective as possible for domain classification, thus bringing the feature maps of both domains closer. As a result, the desired feature layers should generate shared feature representations that are almost indistinguishable by the domain classification layer. The shared features obtained from shared Bi-LSTM should also be more effective to transfer than the ones obtained by simply maximizing the domain classification loss throughout the domain discriminator and base model layers. The domain specific features are learned using an unshared domain discriminator that is identical to the domain discriminator used for shared features, except that the GRL is replaced by identity transform layer (see Figure 1(b)). This layer however, multiplies the incoming gradient by a positive factor +λ to maintain uniformity in gradient magnitudes with shared domain discriminator. This domain discriminator tries to minimize the domain classification loss, as do the preceding layers and thus the desired feature layer learns to generate highly domain specific feature representations. 
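A minimal PyTorch sketch of such a gradient reversal layer is shown below (class and variable names are ours, and the λ value is a placeholder): it acts as the identity in the forward pass and scales the gradient by −λ in the backward pass, so the discriminator behind it minimizes the domain-classification loss while the shared feature extractor is pushed to make the two domains indistinguishable.

```python
import torch
from torch import nn

class _ReverseGrad(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # no gradient w.r.t. lambda itself

class GradientReversal(nn.Module):
    def __init__(self, lambd=1.0):
        super().__init__()
        self.lambd = lambd

    def forward(self, x):
        return _ReverseGrad.apply(x, self.lambd)

# Usage sketch (names are illustrative): shared Bi-LSTM features pass through the
# reversal layer before the shared domain discriminator; the unshared discriminator
# omits the reversal (identity, gradient scaled by +lambda) so its features stay
# highly domain specific.
# shared_feats = GradientReversal(lambd=1.0)(shared_bilstm_output)
# domain_logits = shared_domain_discriminator(shared_feats)
```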
A block diagram of the proposed adversarial learning framework for domain adaptation has been shown in Figure 3. Figure 3: Adversarial Learning Framework for Domain Adaptation 2.3 Model Architecture The training data has sentence pairs (QS) from source domain S, and sentence pairs (QT ) from target domain T. Figures 1 and 2 show the overall architecture of the model. The initial layers of the network - Embedding, Bi-LSTM and multiperspective match block - are of two kinds: shared and domain specific. Shared layers are used in the network for sentences of all domain types, whereas the domain specific layers work on sentences of only corresponding domains. The Embedding layers can be appropriately initialized and trained end-to-end along with the rest of the network. Each domain has domain specific aggregation and classification (fully connected) layers as well. The aggregation layer takes in the domain specific and shared features as inputs (Figure 2), aggregates them and concatenates these aggregated vectors to form a combined representation. 5548 This combined feature vector is passed to the classification layers for task classification. 2.4 Model Training The forward propagation through the model involves 5 passes, which are listed below: • Pass 1 (Figure 1(a)) - QS and QT through shared layers and shared domain discriminator (Loss = L1). • Pass 2 (Figure 1(b)) - QS through domain specific layers and unshared domain discriminator (Loss = L2). • Pass 3 (Figure 1(b)) - QT through domain specific layers and unshared domain discriminator (Loss = L3). • Pass 4 (Figure 2) - QS through domain specific and shared layers for task classification (Loss = L4). • Pass 5 (Figure 2) - QT through domain specific and shared layers for task classification (Loss = L5). The source domain layers are trained by minimizing LS (Equation 1). The target domain layers are trained by minimizing LT (Equation 2). The shared embedding, Bi-LSTM and aggregation layers are learned by minimizing LSh (Equation 3), while fully connected layer of shared domain discriminator minimizes L1. LS = L2 + L4 (1) LT = L3 + L5 (2) LSh = L4 + L5 −λL1 (3) Note that not all domain specific layers contribute to losses L2 and L3, and thus the gradient due to these losses affects only the Embedding and Bi-LSTM layers for all domains. We trained all the models and tuned all the hyperparameters to optimize the validation set performance on target domain data. 3 Experiments 3.1 Datasets For question duplicate detection, we use the Quora question pairs dataset(Quora, 2017) as the source domain dataset and 5 datasets that are from different and diverse set of domains as our target domains. The Android, Mathematica, Programmers and Unix question datasets were used from the Stack Exchange dataset (StackExchchange, 2018). We obtained the Tax Domain Qs from a popular forum for tax related question answers, which we plan to make public shortly. For RTE, the Stanford Natural Language Inference (SNLI) (SNLI, 2015) has been used as source domain, and for target domains we used The Guardian Headlines RTE (RTE, 2012) and SICK (SICK, 2014) datasets. The size for all these datasets has been mentioned in Table 1 in the (train/ validation/ test) format. 3.2 Results In Table 1 we compared the base model BiMPM (base) trained only on the target domains to three variants of the same model, each obtained after a different approach for adversarial domain adaptation. 
Model T1 was trained by using both the shared and domain specific features, but maximizing the domain classification loss to learn shared features. Model T2 used only the shared features learned using gradient reversal strategy, along with fine-tuned features obtained from later layers of the network. Model T3 used both the domain specific features as well as the shared features learned using the gradient reversal method. The accuracy of these models for five different question deduplication and two RTE target domains is reported in Table 1. Comparisons of accuracy numbers between different rows are fairly consistent across all domains1, enabling us to draw the following empirical claims: T1, T2 and T3 outperform baseline, hence enforcing the effectiveness of adversarial domain adaptation in all tasks in Table 1. T3 outperforms T2, thus indicating that learning a combination of domain specific and shared representations is quite beneficial for all domain transfer experiments in Table 1. This observation was also noted by Qiu et al (Qiu et al., 2018), even if without the use of gradient reversal. Both T2 and T3 outperform T1, hence providing strong evidence that GRL significantly improves overall feature learning if compared to maximizing the domain classification loss. In particular, the comparison between T3 and T1, shows that learning exactly the same feature set using GRL for adversarial domain adaptation is more effective than maximizing the loss. T3 outperforms all other models, showing that our proposed approach consistently beats all other settings for domain adaptation in both ques1All row differences are statistically significant on paired t-test(p-value< 0.05) 5549 Model Adversarial Features Question Duplicate Detection Textual Entailment (BiMPM) Approach Tax Domain Android Mathematica Programmers Unix Guardian SICK (3k/ 1k/ 1k) (7k/ 1.5k/ 1.5k) (5.4k/ 1.2k/ 1.2k) (6.5k/ 1.5k/ 1.5k) (7k/ 1.5k/ 1.5k) (23k/ 5k/ 5k) (6.8k/ 1.5k/ 1.5k) base – DSF 84.7 90.7 80.0 90.7 88.7 92.3 69.5 T1 maxLoss SF + DSF 87.6 91.3 82.1 91.6 89.6 94.3 72.7 T2 GRL SF 88.1 92.0 82.6 91.9 90.8 96.4 73.8 T3 GRL SF + DSF 89.3 92.6 83.0 92.4 91.1 97.4 76.4 Table 1: Comparison of Accuracy for different domain adaptation methods; Source domain for question duplicate detection: Quora (240k/ 80k/ 80k), Source domain for RTE: SNLI (550k/ 10k/ 10k); SF: shared features, DSF: domain specific features, maxLoss: maximizing domain discriminator loss, GRL: gradient reversal layer tion duplicate classification and RTE. 4 Discussion and Conclusion We systematically evaluated different adversarial domain adaptation techniques for duplicate question detection and RTE tasks. Our experiments showed that adversarial domain adaptation using gradient reversal yields the best knowledge transfer between all textual domains in Table 1. This method outperformed existing domain adaptation techniques, including recently proposed adversarial domain adaptation method of maximizing the domain classification loss by a discriminator. Furthermore, we show that the models that use both domain specific features and shared features outperform the models that use only either of these features. References Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In 30th Conference on Neural Information Processing Systems (NIPS 2016), pages 343–351. Yue Cao, Mingsheng Long, and Jianmin Wang. 2018. Unsupervised domain adaptation with distribution matching machines. In AAAI. 
Daniel Cohen, Bhaskar Mitra, Katja Hofmann, and W. Bruce Croft. 2018. Cross domain regularization for neural ranking models using adversarial learning. In SIGIR’18: 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1025–1028. ACM. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177–190. Springer. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11–21. ACL. Lisheng Fu, Thien Huu Nguyen, Bonan Min, and Ralph Grishman. 2017. Domain adaptation for relation extraction with domain adversarial neural network. In Proceedings of the The 8th International Joint Conference on Natural Language Processing, pages 425–429. ACL. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In ICML’15: Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 1180–1189. ACM. Doris Hoogeveen, Karin M. Verspoor, and Timothy Baldwin. 2015. Cqadupstack: A benchmark data set for community question-answering research. In ADCS. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. 2017. Deep transfer learning with joint adaptation networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2208–2217, International Convention Centre, Sydney, Australia. PMLR. Minlong Peng, Qi Zhang, Yu-gang Jiang, and Xuanjing Huang. 2018. Cross-domain sentiment classification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2505– 2513. ACL. Minghui Qiu, Liu Yang, Feng Ji, Wei Zhou, Jun Huang, Haiqing Chen, Bruce Croft, and Wei Lin. 2018. Transfer learning for context-aware question matching in information-seeking conversations in ecommerce. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 208–213. Association for Computational Linguistics. Quora. 2017. Quora Duplicte Questions Dataset. https://www.kaggle.com/c/ quora-question-pairs/data. Guardian Headlines RTE. 2012. The Guardian Headlines Entailment Training Dataset. https://github.com/daoudclarke/ rte-experiment. 5550 Swami Sankaranarayanan, Yogesh Balaji, Carlos D. Castillo, and Rama Chellappa. 2018. Generate to adapt: Aligning domains using generative adversarial networks. In Proceedings of 2018 IEEE Conference on Computer Vision and Pattern Recognition, pages 8503–8512. IEEE. Ozan Sener, Hyun Oh Song, Ashutosh Saxena, and Silvio Savarese. 2016. Learning transferrable representations for unsupervised domain adaptation. In Advances in Neural Information Processing Systems, pages 2110–2118. Darsh Shah, Tao Lei, Alessandro Moschitti, Salvatore Romeo, and Preslav Nakov. 2018. Adversarial domain adaptation for duplicate question detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1056–1063. Association for Computational Linguistics. SICK. 2014. Sentences Involving Compositional Knowledge (SICK). http://clic.cimec. unitn.it/composes/sick.html. SNLI. 2015. The Stanford Natural Language Inference Corpus. https://nlp.stanford.edu/ projects/snli/. Kihyuk Sohn, Wenling Shang, Xiang Yu, and Manmohan Chandraker. 2019. 
Unsupervised domain adaptation for distance metric learning. In International Conference on Learning Representations. StackExchchange. 2018. Stack Exchange Data Dump. https://archive.org/download/ stackexchange. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 7167–7176. IEEE. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4144–4150. AAAI. Kai Zhao, Liang Huang, and Mingbo Ma. 2017. Textual entailment with structured attentions and composition. arXiv preprint arXiv:1701.01126.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5551–5557 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5551 Towards Integration of Statistical Hypothesis Tests into Deep Neural Networks Ahmad Aghaebrahimian Zurich University of Applied Sciences Switzerland [email protected] Mark Cieliebak Zurich University of Applied Sciences Switzerland [email protected] Abstract We report our ongoing work about a new deep architecture working in tandem with a statistical test procedure for jointly training texts and their label descriptions for multi-label and multi-class classification tasks. A statistical hypothesis testing method is used to extract the most informative words for each given class. These words are used as a class description for more label-aware text classification. Intuition is to help the model to concentrate on more informative words rather than more frequent ones. The model leverages the use of label descriptions in addition to the input text to enhance text classification performance. Our method is entirely data-driven, has no dependency on other sources of information than the training data, and is adaptable to different classification problems by providing appropriate training data without major hyper-parameter tuning. We trained and tested our system on several publicly available datasets, where we managed to improve the state-of-the-art on one set with a high margin, and to obtain competitive results on all other ones. 1 Introduction Text classification is a complex problem in Natural Language Processing (NLP) with lots of applications from sentiment analysis (Liu, 2015) to question answering (Aghaebrahimian and Jurˇc´ıˇcek, 2016b,a; Yu et al., 2014) or abusive language detection (von Gr¨unigen et al., 2018; Founta et al., 2018), to name just a few. Text classification is defined as the task of assigning a certain pre-defined class to a document. The number of classes can be arbitrarily large in multi-class classification, whereas there are only two classes for binary classification. In multilabel classification, the number of labels attached to each document is not known and usually larger than one, while in multi-class classification, only one class is assigned to each document. There exist numerous approaches for text classification, ranging from simple hand-crafted lexicallevel features with Naive Bayes or Support Vector Machines (SVM) (Wang and Manning, 2012) to self-learning approaches with Deep Neural Networks (DNN) (Deriu and Cieliebak, 2017). For the latter, several architectures such as Convolutional or Recurrent Neural Networks (CNN or RNN) (Shen et al., 2017; Wang et al., 2018b) have been proposed. These architectures learn different levels of textual representation in their layers, which are an essential source of information for the classification process. As an alternative, attention networks are also introduced (Bahdanau et al., 2015; Yang et al., 2016) to capture the features with the highest discriminative power regarding the class and irrespective of their distance. On the other hand, the field of Statistics has since long developed and optimized various methods to capture ‘relevant’ properties of a given dataset. In this work, we extend DNNs with statistical hypothesis testing methods to enhance their performance in assessing feature relevancy on the input data. 
More precisely, our approach works as follows: - For each class, we generate a class description, which is a set of ‘most informative words’ that will help to distinguish the class from others. - To achieve this, we apply two statistical hypothesis testing approaches called χ2 test (Pennington et al., 1893) and Analysis of Variance test (ANOVA) (Fisher, 1921). - We then extend a DNN that is based on bidirectional Gated Recurrent Units (GRU) with an additional input channel for encoding the class descriptions. This channel uses attention, in addition, to enable the network to focus on the most informative words for each document and given each class. 5552 Our experiments on four standard datasets show that this approach can already reach or even outperform state-of-the-art solutions for these datasets. While this is very promising, we want to stress already here that this is ongoing work, and that it needs extensive further experiments to fully understand when and why the proposed method works. The main contributions of this work are the use of statistical hypothesis testing methods specifically for class descriptor extraction rather than feature extraction, and a new deep architecture working in tandem with a statistical test procedure with state-of-the-art performance in multilabel and multi-class classification. We organize the remaining content into the following sections. After a review on state of the art in Section 2, we describe how we extract the class descriptors in Section 3. Then we continue with a description of our deep architecture, followed by the system setup for our experiments in Sections 4 and 5, respectively. Finally we report our results in Section 6 and conclude in Section 7. 2 Related Work Many solutions for text classification with DNNs use word embeddings as the primary and only representation of the input text. Word embeddings are low-dimensional word vectors that can be precomputed using, for instance, Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), or fastText (Bojanowski et al., 2017). Many studies show the importance of embeddings in text classification (Arora et al., 2017; Wieting et al., 2016), and DNNs have been proven very useful in capturing syntactic- and semantic-level textual information. Still, it has been shown that they can benefit from additional representations of the input, e.g., by using a method called Attention Mechanism (Bahdanau et al., 2015; Yang et al., 2016; Cui et al., 2017; Gehring et al., 2017). The main idea in the attention mechanism for text classification is to put emphasis on more informative tokens for each class. The attention mechanism has been successfully applied for different tasks in NLP, including but not limited to sentiment analysis (Zhou et al., 2016), modeling sentence pair (Aghaebrahimian, 2018a; Yin et al., 2016), question answering (Aghaebrahimian, 2018b; Seo et al., 2016), and summarization (Rush et al., 2015). The idea of joint learning of text and class descriptions is already practiced by Wang et al. (2018a). They showed that training an attention model on class descriptions in a joint embedding space is beneficial for text classification. However, they extracted class descriptions only from the class names, which limits the functionality of this approach, since in many use cases the names are very short, sometimes even just one token. In our work, the class descriptions are extracted using a statistical, data-driven approach. 
This makes our model independent from the label-set description which is not always available. Our model is also similar to the one by Founta et al. (2018) with two differences. First, they assume that classes are provided with metadata, while our model extracts class descriptions directly from the training data. Second, the token channel and class description channel in our model both have the same time complexity thus they both converge simultaneously, and we do not need to worry about over-fitting one channel while the other is still training. The use of statistical tests for extracting textual features in text classification is quite common and already proven beneficial. Bahassine et al. (2018), for instance, used three different statistical tests for feature extraction to improve Arabic text classification. However, we do not use statistical tests for feature extraction. Instead, we use them to extract class descriptions which are used as a second channel alongside with their accompanying texts in a deep neural network. This is the first time that statistical tests are used for extracting class descriptor tokens which can be used for jointly training deep neural models on texts with their class descriptions. 3 Generating Class Descriptions We show how to extract class descriptions using a data-driven method applied to the training data. To retrieve the class descriptions, we use two statistical hypothesis testing approaches called χ2 and Analysis of Variance (ANOVA) tests. We assume that each class is triggered given specific tokens; hence, given each class, the frequencies of those tokens should be distinguishable from other non-triggering ones. Therefore, for each class, we formulate a null hypothesis (i.e., an indirect assumption) that states that the presence of specific tokens does not have any impact on 5553 their class determination. Then we check which words can reject the hypothesis, hence, have discriminative significance in distinguishing classes from each other. The χ2 test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies of tokens in each one of the classes. Using the training data, we separate the documents into mutually exclusive classes. Given each class and the null hypothesis, we compute the χ2 of the documents’ words, which provides us with the probability with which any token falls into the corresponding class. The χ2 test allows us to evaluate how likely it is to observe a word in a class, assuming the null hypothesis is true. Similarly, we use the ANOVA F-statistics to determine whether there is a dependence between certain tokens and a specific class, so to eliminate the words that are most likely independent of the class, hence, irrelevant for classification. Both tests provide each word with a probability given each class. To get an n-dimensional class description vector, we extract the n top-rated words for each class and use them as class descriptors. One should be careful not to confuse word embedding dimensions with the dimension of class descriptions. By class description dimension, we mean the length of the string containing the most informative words given each class. Some of the most informative words given the classes available in the AG News datasets (Del Corso et al., 2005) are presented in Table 1. Class Informative words World iraq, minister, president, prime, baghdad, iraqi, dig, palestinian, military, nuclear, israeli, ... 
Sports dig, season, league, team, game, cup, night, coach, victory, win, sports, championship, olympic, ... Business oil, stocks, prices, percent, quickinfo, target, profit, company, shares, bilion, quarter, sales, earnings, ... Science microsoft, software, internet, space, music, computer, users, web, search, windows, technology, ... Table 1: Extracted χ2 words for the AG News dataset 4 Model The overall architecture of our system is illustrated in Figure 1. We use a lookup table for transforming token indices to embedding vectors. The embeddings are fed into a bidirectional Gated Recurrent Unit (BiGRU) (Cho et al., 2014). The resulting tensors are then max- and average-pooled to extract the most relevant features. At the end of this channel, we have the vector of an encoded text. A similar channel is used for encoding the class descriptions. Using the words extracted by the χ2 test as described in Section 3, we generate a new string in which only the χ2 words are available. Given each class, this contains the highest informative words for this class. Additionally, we put an attention layer on top of this channel to learn the importance of each word given a particular class. The attention layer is implemented based on the work of Yang et al. (2016). The mathematical representation is as follow: u = f(ω · h + b) ai = Softmax(ui · us) vi = σiai · hi. (1) where h are the tensors out of the BiGRU layer, and w, b, a, and v are the weight vectors, bias terms, attention vectors, and document vectors respectively. Finally, we concatenate the resulting tensors from the attention layer with the max- and average-pooling layers and feed them into a dense layer for final classification. For multi-class classification it is common to use the Softmax function P(cj|xi) = exp(ωj · xi) PC c=1 exp(ωc · xi) . where xi, c, and ω are features, classes, and associated weight vectors, respectively. This function is used as the dense output layer of the network to normalize the logits generated by previous layers. In this way, we can model the probability of class cj as a multi-nominal distribution. The consequence of this decision is that the probability for a class is not independent of the other class probabilities, which would not be the desired behavior when dealing with a multi-label classification task. For instance, in a multi-label classification for hate speech detection, the probability of a 5554 Figure 1: The system architecture comment for being offensive is independent of its probability of being hateful, because an offensive tone can be used in a text that is not necessarily hateful (Founta et al., 2018). For this reason, instead of Softmax, we use the Sigmoid activation function σ(z) = 1 1 + exp(−z) which is a better choice for multi-label classification. In this way we can model the probability of a class as Bernoulli’s distribution P(cj|xi) = 1 1 + exp(−ωj · xi) which makes the probability of each class cj independent from the other class probabilities. Therefore we use a Softmax dense layer for multi-class and Sigmoid dense layer for multilabel classification to get the probabilities associated with each target class. 5 System Setup For the text representation in our system, we use pre-trained Glove embeddings (Pennington et al., 2014) trained on 840 billion tokens with 300dimensional vectors and set it to get updated through training. As the loss function for multiclass and multi-label settings, we use the Categorical and the Binary cross-entropy, respectively. 
We define the bidirectional GRUs, each with 128 units. We also set both the drop-outs (Srivastava et al., 2014) and recurrent drop-outs to 0.5. In the following subsections, some details concerning the pre-processing of tested datasets are presented. 5.1 Multi-label Data. For the multi-label classification task, we train and test our model on a large publicly available dataset provided for toxic comment classification in a Kaggle competition called ‘Toxic Comment Classification Challenge.’ The texts of the dataset were extracted from Wikipedia comments and have been labeled by human raters for six categories of toxic behavior: toxic, severe-toxic, obscene, threat, insult, and identity-hate. The training and test datasets contain 160k and 153k comments, respectively. The task is to train a model which assigns a probability to each of the six categories of toxic behavior given a new comment. Pre-processing. The pre-processing step for this dataset is performed by lower-casing, cleaning the comments from non-alphanumeric characters, using the first 130k most frequent tokens and removing comments longer than 80 tokens (95% percentile of the training dataset). Shorter comments are padded with zero to fixate the length of all comments to 80 tokens. Performance Measure. The Area Under the Receiver Operating Characteristic Curve (AUCROC) is used to measure the performance of the systems. ROC is a probability curve, and AUC is a measure of separability. This measure tells how much a model is capable of distinguishing between classes. Since the output of the model is a vector of probabilities that the model computes for each class and we want to assign more than one class to each text, we define a threshold using the validation data and accept all the classes with probabilities above the threshold as positive class. 5.2 Multi-class Data. We also train and test our system on three other datasets for multi-class classifications, namely Hate Speech dataset (Davidson et al., 2017), AG News (Del Corso et al., 2005), and DBpedia, to measure its performance on multi-class classification. Some statistics of these datasets are reported in Table 2. Dataset Type Classes/Labels Training Testing Hate Speech Multi-class 3 22.5K 2.5K DBpedia Multi-class 14 560K 70K AG News Multi-class 4 120K 7.6K Kaggle-toxic comments Multi-label 6 160K 153K Table 2: Types, number of classes, and number of training/testing samples in the datasets used for training in this work 5555 Pre-processing. The pre-processing step for these datasets is performed by lower-casing, removing non-alphanumeric characters, and removing repetitive characters from tokens (e.g. yoooouuuuu ->you). Performance Measure. In contrast to the multi-label setting, in the multi-class setting, we do not need to define a threshold. Instead, we get the argmax of the vector of probabilities since we need to return only one class. 6 Experimental Results Table 3 shows that the system obtains superior results in the Hate Speech dataset and yields competitive results on the Kaggle data in comparison to some sate-of-the-art baseline systems. Table 4 shows the results of our system on the DBpedia and AG News datasets. Using the same model without any tuning, we managed to obtain competitive results again compared to previous stateof-the-art systems. We also ran preliminary experiments on class description vectors with different dimensions (50 vs. 100), indicated by the suffix of each name in Table 3. 
By dimension, we mean the number of words given each label and not the dimension of word vectors which are all the same for both channels (i.e., 300). It turns out that in all but one case, the more words, the better the performance. However, we did not get statistically significant results with class descriptors with dimensions higher than 100. It seems that the range 50-100 is the optimal dimension for this approach and these datasets. Bigger vectors such as 150 did not yield any statistically significant improvement in performance, and 200-, and 300-dimensional vectors deteriorated the performance. We observed that the decline in the performance comes mainly from two sources: the network over-fit, and the similar words in different classes. By increasing the number of informative words, the number of similar words in different classes increases which leads to sub-optimal classification decision boundaries. 7 Conclusion Previous studies in text classification have shown that training classifiers with class descriptions or class metadata alongside the text is beneficial. However, many of these systems depend on the Hate Speech dataset P(%) R(%) F1(%) AUC(%) (Davidson et al., 2017) 91 90 90 87 (Founta et al., 2018) 89 89 89 92 This work+χ250 89.7 90.4 90 92.9 This work+χ2100 90.3 92.5 91.3 93.7 This work+ANOVA50 89.2 89.6 89.3 92.1 This work+ANOVA100 89.8 89.2 89.4 92.4 Kaggle dataset Leader-board 98.82 This work+χ250 98.05 This work+χ2100 98.24 Table 3: The results of our system on the Hate Speech and Kaggle datasets. With one exception, in all cases longer class description leads to better performance. The results of the Kaggle dataset are only reported in AUC to be comparable with other systems in the multilabel category. DBpedia(%) AG News(%) Bi-BloSAN(Shen et al., 2018) 98.77 93.32 LEAM(Wang et al., 2018a) 99.02 92.45 This work 98.90 92.05 Table 4: Competitive results on DBpedia and AG News reported in accuracy (%) without any hyper-parameter tuning. provided label set for generating their class descriptors. This dependence on an external source of information limits their applicability when such information is not available. In this paper, we proposed a data-driven approach for extracting class descriptions for jointly training text with their class descriptors, based on pure statistical tests. Moreover, we designed a new deep neural architecture to make use of the output of this statistical approach for enhancing the performance of text classification by attending on the informative words of each class. Although we have shown that the approach works in principle, by achieving state-of-the-art results on four standard datasets, it needs to be further explored in order to understand why it works. In particular, we need to understand why words extracted with χ2 yield better results compared to ANOVA, how many words should be extracted given a specific task, if other statistical tests might even improve the outcomes, etc. Once this understanding is achieved, this may lead us towards proposing better data-driven approaches for extracting class descriptions that will be beneficial in text classification. 5556 References Ahmad Aghaebrahimian. 2018a. Deep neural networks at the service of multilingual parallel sentence extraction. In Proceedings of the 27th International Conference on Computational Linguistics (CoLing), pages 1372–1383, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Ahmad Aghaebrahimian. 2018b. 
Linguistically-based deep unstructured question answering. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL), pages 433–443, Brussels, Belgium. Association for Computational Linguistics. Ahmad Aghaebrahimian and Filip Jurˇc´ıˇcek. 2016a. Constraint-based open-domain question answering using knowledge graph search. In Proceedings of the 19th International Conference of Text, Speech, and Dialogue (TSD), volume 9924, pages 28–36. Ahmad Aghaebrahimian and Filip Jurˇc´ıˇcek. 2016b. Open-domain factoid question answering via knowledge graph search. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)- the Workshop on Human-Computer Question Answering, pages 22–28, San Diego, California. Association for Computational Linguistics. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of the International Conference on Learning Representations (ICLR). Said Bahassine, Abdellah Madani, Mohammed AlSarem, and Mohamed Kissi. 2018. Feature selection using an improved chi-square for arabic text classification. Journal of King Saud University - Computer and Information Sciences. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 593–602, Vancouver, Canada. Association for Computational Linguistics. Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM). Gianna M. Del Corso, Antonio Gull´ı, and Francesco Romani. 2005. Ranking a stream of news. In Proceedings of the 14th International Conference on World Wide Web, WWW ’05, pages 97–106, New York, NY, USA. ACM. Jan Milan Deriu and Mark Cieliebak. 2017. Swissalps at semeval-2017 task 3 : attention-based convolutional neural network for community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 334–338, Canberra, Australia. Association for Computational Linguistics. Ronald A. Fisher. 1921. On the ‘probable error’ of a coefficient of correlation deduced from a small sample. Metron, 1:3–32. Antigoni-Maria Founta, Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Athena Vakali, and Ilias Leontiadis. 2018. A unified deep learning architecture for abuse detection. Computing Research Repository, CoRR, abs/1802.00385. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint, arXiv:1705.03122. 
Dirk von Gr¨unigen, Ralf Grubenmann, Fernando Benites, Pius Von D¨aniken, and Mark Cieliebak. 2018. spMMMP at GermEval 2018 shared task: Classification of offensive content in tweets using convolutional neural networks and gated recurrent units. In Proceedings of the GermEval 2018 Workshop : 14th Conference on Natural Language Processing (KONVENS). Bing Liu. 2015. Sentiment Analysis. Cambridge University Press, Cambridge, UK. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781. Jeffrey Pennington, Richard Socher, and Christopher Manning. 1893. Contributions to the mathematical theory of evolution. In Proceedings of the Royal Society. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 5557 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv:1611.01603. Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2017. Deconvolutional latent-variable model for text sequence matching. CoRR, abs/1709.07109. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018. Bi-directional block selfattention for fast and memory-efficient sequence modeling. In Proceedings of the International Conference on Learning Representations (ICLR). Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from over fitting. Journal of Machine Learning Research, 15(1):1929–1958. Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018a. Joint embedding of words and labels for text classification. In Proceedings of the Association for Computational Linguistics (ACL). Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the Association for Computational Linguistics (ACL). Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2018b. Topic compositional neural language model. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS). John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In Proceedings of the International Conference on Learning Representations (ICLR). Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Wenpeng Yin, Hinrich Schutze, Bing Xiang, and Bowen Zhou. 2016. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics (TACL). Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. 
In Proceedings of the Conference on Neural Information Processing Systems (NIPS) - Deep learning workshop. Xinjie Zhou, Xiaojun Wan, , and Jianguo Xiao. 2016. Attention-based lstm network for cross-lingual sentiment classification. In Proceedings of the conference on Empirical Methods in Natural Language Processing (EMNLP).
2019
557
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5558–5563 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5558 Depth Growing for Neural Machine Translation Lijun Wu1,∗, Yiren Wang2,∗, Yingce Xia3,†, Fei Tian3, Fei Gao3, Tao Qin3, Jianhuang Lai1, Tie-Yan Liu3 1School of Data and Computer Science, Sun Yat-sen University; 2 University of Illinois at Urbana-Champaign; 3 Microsoft Research Asia 1{wulijun3, stsljh}@mail2.sysu.edu.cn, [email protected], 3{Yingce.Xia, fetia, feiga, taoqin, tyliu}@microsoft.com Abstract While very deep neural networks have shown effectiveness for computer vision and text classification applications, how to increase the network depth of neural machine translation (NMT) models for better translation quality remains a challenging problem. Directly stacking more blocks to the NMT model results in no improvement and even reduces performance. In this work, we propose an effective two-stage approach with three specially designed components to construct deeper NMT models, which result in significant improvements over the strong Transformer baselines on WMT14 English→German and English→French translation tasks. 1 Introduction Neural machine translation (briefly, NMT), which is built upon deep neural networks, has gained rapid progress in recent years (Bahdanau et al., 2014; Sutskever et al., 2014; Sennrich et al., 2015; He et al., 2016a; Sennrich et al., 2016a; Xia et al., 2017; Wang et al., 2019) and achieved significant improvement in translation quality (Hassan et al., 2018). Variants of network structures have been applied in NMT such as LSTM (Wu et al., 2016), CNN (Gehring et al., 2017) and Transformer (Vaswani et al., 2017). Training deep networks has always been a challenging problem, mainly due to the difficulties in optimization for deep architecture. Breakthroughs have been made in computer vision to enable deeper model construction via advanced initialization schemes (He et al., 2015), multi-stage training strategy (Simonyan and Zisserman, 2014), and novel model architectures (Srivastava et al., 2015; He et al., 2016b). While constructing very deep ∗The first two authors contributed equally to this work. This work is conducted at Microsoft Research Asia. †Corresponding author. Figure 1: Performances of Transformer models with different number of encoder/decoder blocks (recorded on x-axis) on WMT14 En→De translation task. † denotes the result reported in (Vaswani et al., 2017). neural networks with tens and even more than a hundred blocks have shown effectiveness in image recognition (He et al., 2016b), question answering and text classification (Devlin et al., 2018; Radford et al., 2019), scaling up model capacity with very deep network remains challenging for NMT. The NMT models are generally constructed with up to 6 encoder and decoder blocks in both state-of-the-art research work and champion systems of machine translation competition. For example, the LSTM-based models are usually stacked for 4 (Stahlberg et al., 2018) or 6 (Chen et al., 2018) blocks, and the state-of-the-art Transformer models are equipped with a 6-block encoder and decoder (Vaswani et al., 2017; JunczysDowmunt, 2018; Edunov et al., 2018). Increasing the NMT model depth by directly stacking more blocks results in no improvement or performance drop (Figure 1), and even leads to optimization failure (Bapna et al., 2018). There have been a few attempts in previous works on constructing deeper NMT models. 
Zhou 5559 et al. (2016) and Wang et al. (2017) propose increasing the depth of LSTM-based models by introducing linear units between internal hidden states to eliminate the problem of gradient vanishing. However, their methods are specially designed for the recurrent architecture which has been significantly outperformed by the state-ofthe-art transformer model. Bapna et al. (2018) propose an enhancement to the attention mechanism to ease the optimization of models with deeper encoders. While gains have been reported over different model architectures including LSTM and Transformer, their improvements are not made over the best performed baseline model configuration. How to construct and train deep NMT models to push forward the state-ofthe-art translation performance with larger model capacity remains a challenging and open problem. In this work, we explore the potential of leveraging deep neural networks for NMT and propose a new approach to construct and train deeper NMT models. As aforementioned, constructing deeper models is not as straightforward as directly stacking more blocks, but requires new mechanisms to boost the training and utilize the larger capacity with minimal increase in complexity. Our solution is a new two-stage training strategy, which “grows” a well-trained NMT model into a deeper network with three components specially designed to overcome the optimization difficulty and best leverage the capability of both shallow and deep architecture. Our approach can effectively construct a deeper model with significantly better performance, and is generally applicable to any model architecture. We evaluate our approach on two large-scale benchmark datasets, WMT14 English→German and English→French translations. Empirical studies show that our approach can significantly improve in translation quality with an increased model depth. Specifically, we achieve 1.0 and 0.6 BLEU score improvement over the strong Transformer baseline in English→German and English→French translations. 2 Approach We introduce the details of our proposed approach in this section. The overall framework is illustrated in Figure 2. Our model consists of a bottom module with N blocks of encoder and decoder (the grey comInput Embedding Inputs N× Output Embedding Outputs ×N Linear Softmax Output Probability Positional Encoding Positional Encoding Encoder Block Decoder Block Linear Softmax Output Probability ×M M× Encoder Block Decoder Block Figure 2: The overall framework of our proposed deep model architecture. N and M are the numbers of blocks in the bottom module (i.e., grey parts) and top module (i.e., blue and green parts). Parameters of the bottom module are fixed during the top module training. The dashed parts denote the original training/decoding of the bottom module. The weights of the two linear operators before softmax are shared. ponents in Figure 2), and a top module with M blocks (the blue and green components). We denote the encoder and decoder of the bottom module as enc1 and dec1, and the corresponding two parts of the top module as enc2 and dec2. An encoder-decoder attention mechanism is used in the decoder blocks of the NMT models, and here we use attn1 and attn2 to represent such attention in the bottom and top modules respectively. The model is constructed via a two-stage training strategy: in Stage 1, the bottom module (i.e., enc1 and dec1) is trained and subsequently holds constant; in Stage 2, only the top module (i.e., enc2 and dec2) is optimized. 
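To illustrate the two-stage recipe, the fragment below shows the growing step in schematic PyTorch: the trained bottom module is frozen and only the newly stacked top module is handed to the optimizer. It is a sketch under assumptions, not the authors' implementation; the attribute names (model.enc1, model.dec1, model.enc2, model.dec2), the batch layout, and the loss interface are placeholders.

import itertools
import torch

def grow_top_module(model, batches, num_steps=30000, lr=5e-4):
    """Stage 2 of training: hold the Stage-1 bottom module constant and
    optimize only the top module stacked on top of it."""
    for p in itertools.chain(model.enc1.parameters(),
                             model.dec1.parameters()):
        p.requires_grad = False              # bottom module stays fixed

    top_params = list(itertools.chain(model.enc2.parameters(),
                                      model.dec2.parameters()))
    optimizer = torch.optim.Adam(top_params, lr=lr)

    model.train()
    for step, (src, tgt_in, tgt_out) in zip(range(num_steps), batches):
        optimizer.zero_grad()
        logits = model(src, tgt_in)          # forward through both modules
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
        loss.backward()                      # gradients reach only enc2/dec2
        optimizer.step()

Freezing the bottom module in this way is what keeps the cost of the growing stage small relative to training a deeper model from scratch.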
Let x and y denote the embedding of source and target sequence. Let ly denote the number of words in y, and y<t denote the elements before time step t. Our proposed model works in the following way: h1 = enc1(x); h2 = enc2(x + h1); (1) s1,t = dec1(y<t, attn1(h1)), ∀t ∈[ly]; (2) s2,t = dec2(y<t + s1,<t, attn2(h2)), (3) which contains three key components specially designed for deeper model construction, including: (1) Cross-module residual connections: As shown in Eqn.(1), the encoder enc1 of the bottom module encodes the input x to a hidden repre5560 sentation h1, then a cross-module residual connection is introduced to the top module and the representation h2 is eventually produced. The decoders work in a similar way as shown in Eqn.(2) and (3). This enables the top module to have direct access to both the low-level input signals from the word embedding and high-level information generated by the bottom module. Similar principles can be found in Wang et al. (2017); Wu et al. (2018). (2) Hierarchical encoder-decoder attention: We introduce a hierarchical encoder-decoder attention calculated with different contextual representations as shown in Eqn.(2) and (3), where h1 is used as key and value for attn1 in the bottom module, and h2 for attn2 in the top module. Hidden states from the corresponding previous decoder block are used as queries for both attn1 and attn2 (omitted for readability). In this way, the strong capability of the well trained bottom module can be best preserved regardless of the influence from top module, while the newly stacked top module can leverage the higher-level contextual representations. More details can be found from source code in the supplementary materials. (3) Deep-shallow decoding: At the decoding phase, enc1 and dec1 work together according to Eqn.(1) and Eqn.(2) as a shallow network netS, integrate both bottom and top module works as a deep network netD according to Eqn.(1)∼Eqn.(3). netS and netD generate the final translation results through reranking. Discussion • Training complexity: As aforementioned, the bottom module is trained in Stage 1 and only parameters of the top module are optimized in Stage 2. This significantly eases optimization difficulty and reduces training complexity. Jointly training the two modules with minimal training complexity is left for future work. • Ensemble learning: What we propose in this paper is a single deeper model with hierarchical contextual information, although the deep-shallow decoding is similar to the ensemble methods in terms of inference complexity (Zhou, 2012). While training multiple diverse models for good ensemble performance introduces high additional complexity, our approach, as discussed above, “grows” a well-trained model into a deeper one with minimal increase in training complexity. Detailed empirical analysis is presented in Section 3.3. 3 Experiments We evaluate our proposed approach on two largescale benchmark datasets. We compare our approach with multiple baseline models, and analyze the effectiveness of our deep training strategy. 3.1 Experiment Design Datasets We conduct experiments to evaluate the effectiveness of our proposed method on two widely adopted benchmark datasets: the WMT141 English→German translation (En→De) and the WMT14 English→French translation (En→Fr). We use 4.5M parallel sentence pairs for En→De and 36M pairs for En→Fr as our training data2. We use the concatenation of Newstest2012 and Newstest2013 as the validation set, and Newstest2014 as the test set. 
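Stepping back to the model for a moment, Eqns. (1)-(3) can be read directly as a forward pass in which the top module consumes the sum of the embeddings and the bottom module's states. The sketch below is schematic Python under assumptions: enc1/dec1/enc2/dec2 are taken to be standard Transformer encoder/decoder stacks that attend to whatever memory they are given, and positional encodings, masking, and the exact alignment of s1,<t with the shifted target are glossed over.

def deep_forward(model, x_emb, y_emb):
    """Forward pass of the grown model, following Eqns. (1)-(3).

    x_emb: source embeddings, shape (batch, src_len, dim)
    y_emb: shifted target embeddings, shape (batch, tgt_len, dim)
    Returns logits for the deep path (netD) and the shallow path (netS).
    """
    # Bottom module (trained in Stage 1).
    h1 = model.enc1(x_emb)                     # Eq. (1), bottom encoder
    s1 = model.dec1(y_emb, memory=h1)          # Eq. (2): attn_1 keys/values = h1

    # Top module (Stage 2) with cross-module residual connections:
    # it sees both the low-level embeddings and the bottom module's states.
    h2 = model.enc2(x_emb + h1)                # Eq. (1), top encoder
    s2 = model.dec2(y_emb + s1, memory=h2)     # Eq. (3): attn_2 keys/values = h2

    # The two paths share one output projection (Figure 2); at inference
    # time their hypotheses are combined by reranking (deep-shallow decoding).
    return model.out_proj(s2), model.out_proj(s1)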
All words are segmented into sub-word units using byte pair encoding (BPE)3 (Sennrich et al., 2016b), forming a vocabulary shared by the source and target languages with 32k and 45k tokens for En→De and En→Fr respectively. Architecture The basic encoder-decoder framework we use is the strong Transformer model. We adopt the big transformer configuration following Vaswani et al. (2017), with the dimension of word embeddings, hidden states and non-linear layer set as 1024, 1024 and 4096 respectively. The dropout rate is 0.3 for En→De and 0.1 for En→Fr. We set the number of encoder/decoder blocks for the bottom module as N = 6 following the common practice, and set the number of additionally stacked blocks of the top module as M = 2. Our models are implemented based on the PyTorch implementation of Transformer4 and the code can be found in the supplementary materials. Training We use Adam (Kingma and Ba, 2014) optimizer following the optimization settings and default learning rate schedule in Vaswani et al. (2017) for model training. All models are trained on 8 M40 GPUs. 1http://www.statmt.org/wmt14/ translation-task.html 2Training data are constructed with filtration rules following https://github.com/pytorch/fairseq/ tree/master/examples/translation 3https://github.com/rsennrich/ subword-nmt 4https://github.com/pytorch/fairseq 5561 Table 1: The test set performances of WMT14 En→De and En→Fr translation tasks. ‘†’ denotes the performance figures reported in the previous works. Model En→De En→Fr Transformer (6B)† 28.40 41.80 Transformer (6B) 28.91 42.69 Transformer (8B) 28.75 42.63 Transparent Attn (16B)† 28.04 − Ours (8B) 29.92 43.27 Evaluation We evaluate the model performances with tokenized case-sensitive BLEU5 score (Papineni et al., 2002) for the two translation tasks. We use beam search with a beam size of 5 and length penalty 0.6 for both tasks. 3.2 Results We compare our method (Ours) with the Transformer baselines of 6 blocks (6B) and 8 blocks (8B), and a 16-block Transformer with transparent attention (Transparent Attn (16B))6 (Bapna et al., 2018). We also reproduce a 6-block Transformer baseline, which has better performance than what is reported in (Vaswani et al., 2017) and we use it to initialize the bottom module in our model. From the results in Table 1, we see that our proposed approach enables effective training for deeper network and achieves significantly better performances compared to baselines. With our method, the performance of a well-optimized 6block model can be further boosted by adding two additional blocks, while simply using Transformer (8B) will lead to a performance drop. Specifically, we achieve a 29.92 BLEU score on En→De translation with 1.0 BLEU improvement over the strong baselines, and achieve a 0.6 BLEU improvement for En→Fr. The improvements are statistically significant with p < 0.01 in paired bootstrap sampling (Koehn, 2004). 3.3 Analysis To further study the effectiveness of our proposed framework, we present additional comparisons in 5https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ generic/multi-bleu.perl 6We directly use the performance figure from (Bapna et al., 2018), which uses the base Transformer configuration. We run the method of our own implementation with the widely adopted and state-of-the-art big setting, but no improvement has been observed. 
Baseline (6B) DS (8B) scratch DS (8B) grow Ensemble (6B/6B) Ensemble (6B/8B) Ours (8B) 28.6 28.8 29.0 29.2 29.4 29.6 29.8 30.0 30.2 BLEU Score 28.91 28.75 28.81 29.6 29.57 29.92 Baseline Direct Stacking Ensemble Ours Figure 3: The test performances of WMT14 En→De translation task. En→De translation with two groups of baseline approaches in Figure 3: (1) Direct stacking (DS): we extend the 6-block baseline to 8-block by directly stacking 2 additional blocks. We can see that both training from scratch (DS scratch) and “growing” from a welltrained 6-block model (DS grow) fails to improve performance in spite of larger model capacity. The comparison with this group of models shows that directly stacking more blocks is not a good strategy for increasing network depth, and demonstrates the effectiveness and necessity of our proposed mechanisms for training deep networks. (2) Ensemble learning (Ensemble): we present the two-model ensemble results for fair comparison with our approach that involves a two-pass deepshallow decoding. Specifically, we present the ensemble performances of two independently trained 6-block models (Ensemble 6B/6B), and ensemble of one 6-block and one 8-block model independently trained from scratch (Ensemble 6B/8B). As expected, the ensemble method improves translation quality over the single model baselines by a large margin (over 0.8 BLEU improvement). Regarding training complexity, it takes 40 GPU days (5 days on 8 GPU) to train a single 6-block model from scratch, 48 GPU days for a 8-block model , and 8 GPU days to “grow” a 6-block model into 8-block with our approach. Therefore, our model is better than the two-model ensemble in terms of both translation quality (more than 0.3 BLEU improvement over the ensemble baseline) and training complexity. 5562 4 Conclusion In this paper, we propose a new training strategy with three specially designed components, including cross-module residual connection, hierarchical encoder-decoder attention and deep-shallow decoding, to construct and train deep NMT models. We show that our approach can effectively construct deeper model with significantly better performance over the state-of-the-art transformer baseline. Although only empirical studies on the transformer are presented in this paper, our proposed strategy is a general approach that can be universally applicable to arbitrary model architectures, including LSTM and CNN. In future work, we will further explore an efficient strategy that can jointly train all modules of the deep model with minimal increase in training complexity. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3028–3033. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, et al. 2018. The best of both worlds: Combining recent advances in neural machine translation. arXiv preprint arXiv:1804.09849. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. 
Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. JMLR. org. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016a. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Marcin Junczys-Dowmunt. 2018. Microsoft’s submission to the wmt2018 news translation task: How i learned to stop worrying and love the data. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 425–430. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 conference on empirical methods in natural language processing. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. 5563 Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Felix Stahlberg, Adri`a de Gispert, and Bill Byrne. 2018. The university of cambridges machine translation systems for wmt18. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 504–512. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun Liu. 2017. Deep neural machine translation with linear associative unit. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 136–145. Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Multiagent dual learning. In International Conference on Learning Representations. Lijun Wu, Fei Tian, Li Zhao, Jianhuang Lai, and TieYan Liu. 2018. Word attention for sequence to sequence text understanding. In Thirty-Second AAAI Conference on Artificial Intelligence. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Advances in Neural Information Processing Systems, pages 1784–1794. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. Transactions of the Association for Computational Linguistics, 4:371–383. Zhi-Hua Zhou. 2012. Ensemble methods: foundations and algorithms. Chapman and Hall/CRC.
2019
558
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5564–5569 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 5564 Generating Fluent Adversarial Examples for Natural Languages Huangzhao Zhang1∗ Hao Zhou2 Ning Miao2 Lei Li2 1Institute of Computer Science and Technology, Peking University, China 2ByteDance AI Lab, Beijing, China zhang [email protected] {miaoning,zhouhao.nlp,lileilab}@bytedance.com Abstract Efficiently building an adversarial attacker for natural language processing (NLP) tasks is a real challenge. Firstly, as the sentence space is discrete, it is difficult to make small perturbations along the direction of gradients. Secondly, the fluency of the generated examples cannot be guaranteed. In this paper, we propose MHA, which addresses both problems by performing Metropolis-Hastings sampling, whose proposal is designed with the guidance of gradients. Experiments on IMDB and SNLI show that our proposed MHA outperforms the baseline model on attacking capability. Adversarial training with MHA also leads to better robustness and performance. 1 Introduction Adversarial learning has been a popular topic in deep learning. Attackers generate adversarial examples by perturbing the samples and use these examples to fool deep neural networks (DNNs). From the perspective of defense, adversarial examples are mixed into the training set to improve performance and robustness of the victim models. However, building an attacker for NLP models (such as a text classifier) is extremely challenging. Firstly, it is difficult to perform gradientbased perturbations since the sentence space is discrete. However, gradient information is critical – it leads to the steepest direction to more effective examples. Secondly, adversarial examples are usually not fluent sentences. Unfluent examples are less effective in attacking, as victim models can easily learn to recognize them. Meanwhile, adversarial training on them usually does not perform well (see Figure 1 for detailed analysis). Current methods cannot properly handle the two problems. Ebrahimi et al. (2018) (HotFlip) ∗Work done while Huangzhao Zhang was a research intern in ByteDance AI Lab, Beijing, China. (a) Adversarial training with fluent adversarial examples (b) Adversarial training with unfluent adversarial examples Figure 1: Effect of adversarial training on (a) fluent and (b) unfluent adversarial examples. ◦and • represent positive and negative samples in the training set, while △and ▲are the corresponding adversarial examples. Solid and Dotted lines represent decision boundaries before and after adversarial training, respectively. As unfluent adversarial examples are not in the manifold of real sentences, the victim model only needs to adjust its decision boundary out of the sentence manifold to fit them. As a result, fluent adversarial examples may be more effective than unfluent ones. propose to perturb a sentence by flipping one of the characters, and use the gradient of each perturbation to guide sample selection. But simple character flipping often leads to meaningless words (eg. “mood” to “mooP”). Genetic attack (Alzantot et al., 2018) is a population-based word replacing attacker, which aims to generate fluent sentences by filtering out the unreasonable sentences with a language model. But the fluency of examples generated by genetic attack is still not satisfactory and it is inefficient as the gradient is discarded. 
To address the aforementioned problems, we propose the Metropolis-Hastings attack (MHA) algorithm in this short paper. MHA is an adversarial example generator based on Metropolis-Hastings (M-H) sampling (Metropolis et al., 1953; HASTINGS, 1970; Chib and Greenberg, 1995). MH sampling is a classical MCMC sampling approach, which has been applied to many NLP 5565 really i like this movie i like this movie i truely like the movie Sentiment Classifier we truely like the we truely like the movie show truely 99% Positive 82% Positive 76% Positive 68% Positive 59% Negative Figure 2: A simple example of adversarial attack on a sentimental classifier by performing word replacement. tasks, such as natural language generation (Kumagai et al., 2016), constrained sentence generation (Miao et al., 2018), guided open story generation (Harrison et al., 2017), etc. We propose two variants of MHA, namely a black-box MHA (bMHA) and a white-box MHA (w-MHA). Specifically, in contrast to previous language generation models using M-H, b-MHA’s stationary distribution is equipped with a language model term and an adversarial attacking term. The two terms make the generation of adversarial examples fluent and effective. w-MHA even incorporates adversarial gradients into proposal distributions to speed up the generation of adversarial examples. Our contributions include that we propose an efficient approach for generating fluent adversarial examples. Experimental results on IMDB (Maas et al., 2011) and SNLI (Bowman et al., 2015) show that, compared with the state-of-the-art genetic model, MHA generates examples faster, achieving higher success rates with much fewer invocations. Meanwhile, adversarial samples from MHA are not only more fluent but also more effective to improve the adversarial robustness and classification accuracy after adversarial training. 2 Preliminary Generally, adversarial attacks aim to mislead the neural models by feeding adversarial examples with perturbations, while adversarial training aims to improve the models by utilizing the perturbed examples. Adversarial examples fool the model into producing erroneous outputs, such as irrelevant answers in QA systems or wrong labels in text classifiers (Figure 2). Training with such examples may enhance performance and robustness. Definitions of the terms in this paper are as follow. The victim models are word-level classifiers, which take in tokenized sentences and output their labels. The attackers generate sentences by perturbing the original ones, in order to mislead the victim model into making mistakes. Adversarial attacks include two categories: (a) blackbox attack only allows the attackers to have access to model outputs, while (b) white-box attack allows full access to the victim model, including model outputs, gradients and (hyper-)parameters. For adversarial training, the same victim model is trained from scratch on an updated training set with adversarial examples included. 3 Proposed Method: MHA In this section, we first introduce M-H sampling briefly, and then describe how to apply M-H sampling efficiently to generate adversarial examples for natural language. 3.1 Metropolis-Hastings Sampling The M-H algorithm is a classical Markov chain Monte Carlo sampling approach. Given the stationary distribution (π(x)) and transition proposal, M-H is able to generate desirable examples from π(x). Specifically, at each iteration, a proposal to jump from x to x′ is made based on the proposal distribution (g(x′|x)). 
The proposal is accepted with a probability given by the acceptance rate:

α(x′|x) = min{1, [π(x′) g(x|x′)] / [π(x) g(x′|x)]}  (1)

Once accepted, the algorithm jumps to x′. Otherwise, it stays at x.

3.2 Black-Box Attack
In the black-box attack (b-MHA), we expect the examples to meet three requirements: (a) to read fluently; (b) to be able to fool the classifier; (c) to invoke the classifier as few times as possible.

Stationary distribution. To meet these requirements, the stationary distribution is designed as:

π(x|ỹ) ∝ LM(x) · C(ỹ|x)  (2)

where LM(x) is the probability of the sentence (x) given by a pre-trained language model (LM) and C(ỹ|x) is the probability of an erroneous label (ỹ) given by the victim model. LM(x) guarantees fluency, while C(ỹ|x) is the attack target.

Transition proposal. There are three word-level transition operations – replacement, insertion and deletion. Traversal indexing is applied to select words on which operations are performed. Suppose MHA selects the i-th word (w_i) on the t-th proposal; then on the (t+1)-th proposal, the selected word is w∗ = w_{i+1} if i ≠ n, and w∗ = w_1 otherwise. The transition function for replacement is given by Equation 3, where w_m is the selected word to be replaced, and Q is a pre-selected candidate set, which will be explained later:

T^B_r(x′|x) = I{w_c ∈ Q} · π(w_1, ..., w_{m−1}, w_c, w_{m+1}, ..., w_n | ỹ) / Σ_{w∈Q} π(w_1, ..., w_{m−1}, w, w_{m+1}, ..., w_n | ỹ)  (3)

The insertion operation (T^B_i(x′|x)) consists of two steps – inserting a random word into the position and then performing replacement upon it. The deletion operation is rather simple: T^B_d(x′|x) = 1 if x′ = x_{−m}, where x_{−m} is the sentence after deleting the m-th word (w_m), and T^B_d(x′|x) = 0 otherwise. The proposal distribution is a weighted sum of the transition functions:

g(x′|x) = p_r T^B_r(x′|x) + p_i T^B_i(x′|x) + p_d T^B_d(x′|x)

where p_r, p_i and p_d are pre-defined probabilities of the operations.

Pre-selection. The pre-selector generates a candidate set for T^B_r(x′|x) and T^B_i(x′|x). It chooses the most probable words according to the score S^B(w|x) to form the candidate word set Q. S^B(w|x) is formulated as:

S^B(w|x) = LM(w | x_[1:m−1]) · LM_b(w | x_[m+1:n])

where x_[1:m−1] = {w_1, ..., w_{m−1}} is the prefix of the sentence, x_[m+1:n] is the suffix of the sentence, and LM_b is a pre-trained backward language model. Without pre-selection, Q would include all words in the vocabulary, and the classifier would be invoked repeatedly to compute the denominator of Equation 3, which is inefficient.

3.3 White-Box Attack
The only difference between the white-box attack (w-MHA) and b-MHA lies in the pre-selector.

Pre-selection. In w-MHA, the gradient is introduced into the pre-selection score S^W(w|x), which is formulated as:

S^W(w|x) = S^B(w|x) · S(∂L̃/∂e_m, e_m − e)

where S is the cosine similarity function, L̃ = L(ỹ|x, C) is the loss function on the target label, and e_m and e are the embeddings of the current word (w_m) and the substitute (w). The gradient ∂L̃/∂e_m leads to the steepest direction, and e_m − e is the actual changing direction if e_m is replaced by e. The cosine similarity term S(∂L̃/∂e_m, e_m − e) guides the samples to jump along the direction of the gradient, which raises C(ỹ|x) and α(x′|x), and eventually makes w-MHA more efficient. Note that insertion and deletion are excluded in w-MHA, because it is difficult to compute their gradients. Take the insertion operation for instance.
One may apply a similar technique in b-MHA, by first inserting a random word forming intermediate sentence x∗= {w1, · · · , wm, w∗, wm+1, · · · , wn} and then performing replacement operation upon x∗. Computing ∂L(˜y|x∗,C) ∂w∗ is easy, but it is not the actual gradient. Computing of the actual gradient (∂L(˜y|x,C) ∂w∗ ) is hard, since the change from x to x∗is discrete and non-differential. 4 Experiments Datasets. Following previous works, we validate the performance of proposed MHA on IMDB and SNLI datesets. The IMDB dataset includes 25,000 training samples and 25,000 test samples of movie reviews, tagged with sentimental labels (positive or negative). The SNLI dataset contains 55,000 training samples, 10,000 validation samples and 10,000 test samples. Each sample contains a premise, a hypothesis and an inference label (entailment, contradiction or neutral). We adopt a single layer bi-LSTM and the BiDAF model (Seo et al., 2016) (which employs bidirectional attention flow mechanism to capture relationships between sentence pairs) as the victim models on IMDB and SNLI, respectively. Baseline Genetic Attacker. We take the stateof-the-art genetic attack model (Alzantot et al., 2018) as our baseline, which uses a gradient-free population-based algorithm. Intuitively, it maintains a population of sentences, and perturbs them by word-level replacement according to the embedding distances without considering the victim model. Then, the intermediate sentences are 5567 0 2000 4000 6000 Invocation # 0.0 0.2 0.4 0.6 0.8 1.0 Succ rate Genetic b-MHA w-MHA (a) IMDB 0 2000 4000 6000 Invocation # 0.0 0.2 0.4 0.6 0.8 1.0 Succ rate Genetic b-MHA w-MHA (b) SNLI Figure 3: Invocation-success curves of the attacks. Task Approach Succ(%) Invok# PPL α(%) IMDB Genetic 98.7 1427.5 421.1 – b-MHA 98.7 1372.1 385.6 17.9 w-MHA 99.9 748.2 375.3 34.4 SNLI Genetic 76.8 971.9 834.1 – b-MHA 86.6 681.7 358.8 9.7 w-MHA 88.6 525.0 332.4 13.3 Table 1: Adversarial attack results on IMDB and SNLI. The acceptance rates (α) of M-H sampling are in a reasonable range. filtered by the victim classifier and a language model, which leads to the next generation. Hyper-parameters. As in the work of Miao et al. (2018), MHA is limited to make proposals for at most 200 times, and we pre-select 30 candidates at each iteration. Constraints are included in MHA to forbid any operations on sentimental words (eg. “great”) or negation words (eg. “not”) in IMDB experiments with SentiWordNet (Esuli and Sebastiani, 2006; Baccianella et al., 2010). All LSTMs in the victim models have 128 units. The victim model reaches 83.1% and 81.1% test accuracies on IMDB and SNLI, which are acceptable results. More detailed hyper-parameter settings are included in the appendix. 4.1 Adversarial Attack To validate the attacking efficiency, we randomly sample 1000 and 500 correctly classified examples from the IMDB and SNLI test sets, respectively. Attacking success rate and invocation times (of the victim model) are employed for testing efficiency. As shown in Figure 3, curves of our proposed MHA are above the genetic baseline, which indicates the efficiency of MHA. By incorporating gradient information in proposal distribution, w-MHA even performs better than b-MHA, as the curves rise fast. Note that the ladder-shaped Case 1 Premise: three men are sitting on a beach dressed in orange with refuse carts in front of them. Hypothesis: empty trash cans are sitting on a beach. Prediction: ⟨Contradiction⟩ Genetic: empties trash cans are sitting on a beach. 
Prediction: ⟨Entailment⟩ b-MHA: the trash cans are sitting in a beach. Prediction: ⟨Entailment⟩ w-MHA: the trash cans are sitting on a beach. Prediction: ⟨Entailment⟩ Case 2 Premise: a man is holding a microphone in front of his mouth. Hypothesis: a male has a device near his mouth. Prediction: ⟨Entailment⟩ Genetic: a masculine has a device near his mouth. Prediction: ⟨Neutral⟩ b-MHA: a man has a device near his car. Prediction: ⟨Neutral⟩ w-MHA: a man has a device near his home. Prediction: ⟨Neutral⟩ Table 2: Adversarial examples generated on SNLI. curves of the genetic approach is caused by its population-based nature. We list detailed results in Table 1. Success rates are obtained by invoking the victim model for at most 6,000 times. As shown, the gaps of success rates between the models are not very large, because all models can give pretty high success rate. However, as expected, our proposed MHA provides lower perplexity (PPL) 1, which means the examples generated by MHA are more likely to appear in the corpus of the evaluation language model. As the corpus is large enough and the language model for evaluation is strong enough, it indicates the examples generated by MHA are more likely to appear in natural language space. It eventually leads to better fluency. Human evaluations are also performed. From the examples that all three approaches successfully attacked, we sample 40 examples on IMDB. Three volunteers are asked to label the generated examples. Examples with false labels from the victim classifier and with true labels from the volunteers are regarded as actual adversarial examples. The adversarial example ratios of the genetic approach, b-MHA and w-MHA are 98.3%, 99.2% and 96.7%, respectively, indicating that almost all generated examples are adversarial examples. Volunteers are also asked to rank the generated examples by fluency on SNLI (“1” indicating the most 1We use the open released GPT2 (Radford et al.) model for PPL evaluation. 5568 Model Attack succ (%) Genetic b-MHA w-MHA Victim model 98.7 98.7 99.9 + Genetic adv training 93.8 99.6 100.0 + b-MHA adv training 93.0 95.7 99.7 + w-MHA adv training 92.4 97.5 100.0 Table 3: Robustness test results on IMDB. Model Acc (%) Train # = 10K 30K 100K Victim model 58.9 65.8 73.0 + Genetic adv training 58.8 66.1 73.6 + w-MHA adv training 60.0 66.9 73.5 Table 4: Accuracy results after adversarial training. fluent while “3” indicating the least fluent). 20 examples are sampled in the same manners mentioned above. The mean values of ranking of the genetic approach, b-MHA and w-MHA are 1.93, 1.80 and 2.03, indicating that b-MHA generates the most fluent samples. Samples generated by w-MHA are less fluent than the genetic approach. It is possibly because the gradient introduced into the pre-selector could influence the fluency of the sentence, from the perspective of human beings. Adversarial examples from different models on SNLI are shown in Table 2. The genetic approach may replace verbs with different tense or may replace nouns with different plurality, which can cause grammatical mistakes (eg. Case 1), while MHA employs the language model to formulate the stationary distribution in order to avoid such grammatical mistakes. MHA does not have constraints that word replacement should have similar meanings. MHA may replace entities or verbs with some irrelevant words, leading to meaning changes of the original sentence (eg. Case 2). More cases are included in the appendix. 
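To connect these results back to Section 3, the following sketch shows the accept/reject loop that produces examples such as those in Table 2. It is a minimal, black-box (b-MHA-style) sketch in Python: the lm, clf and propose objects are placeholder interfaces standing in for the pre-trained language model, the victim classifier, and the replacement/insertion/deletion proposals with their pre-selected candidate sets; they are not APIs from the paper.

import math
import random

def log_pi(x, target_label, lm, clf):
    # Unnormalized stationary score of Eq. (2): pi(x | y~) is proportional to
    # LM(x) * C(y~ | x); computed in log space for numerical stability.
    return lm.log_prob(x) + math.log(clf.prob(target_label, x))

def mh_attack(x0, target_label, lm, clf, propose, max_proposals=200):
    """Metropolis-Hastings attack: propose word-level edits, accept them with
    the rate of Eq. (1), and stop as soon as the victim model is fooled."""
    x, score = list(x0), log_pi(x0, target_label, lm, clf)
    for _ in range(max_proposals):
        x_new, log_g_fwd, log_g_bwd = propose(x)      # g(x'|x) and g(x|x')
        score_new = log_pi(x_new, target_label, lm, clf)
        log_alpha = min(0.0, score_new + log_g_bwd - score - log_g_fwd)
        if math.log(random.random() + 1e-12) < log_alpha:
            x, score = x_new, score_new               # jump to x'
        if clf.predict(x) == target_label:            # label flipped: success
            return x
    return None                                       # attack failed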
4.2 Adversarial Training In order to validate whether adversarial training is helpful for improving the adversarial robustness or classification accuracy of the victim model, a new model is trained from scratch after mixing the generated examples into the training set. To test the adversarial robustness, we attack the new models with all methods on IMDB. As shown in Table 3, the new model after genetic adversarial training can not defend MHA. On the contrary, adversarial training with b-MHA or w-MHA decreases the success rate of genetic attack. It shows that the adversarial examples from MHA could be more effective than unfluent ones from genetic attack, as assumed in Figure 1. To test whether the new models could achieve accuracy gains after adversarial training, experiments are carried out on different sizes of training data, which are subsets of SNLI’s training set. The number of adversarial examples is fixed to 250 during experiment. The classification accuracies of the new models after the adversarial training by different approaches are listed in Table 4. Adversarial training with w-MHA significantly improves the accuracy on all three settings (with p-values less than 0.02). w-MHA outperforms the genetic baseline with 10K and 30K training data, and gets comparable improvements with 100K training data. Less training data leads to larger accuracy gains, and MHA performs significantly better than the genetic approach on smaller training set. 5 Future Works Current MHA returns the examples when the label is changed, which may lead to incomplete sentences, which are unfluent from the perspective of human beings. Constraints such as forcing the model to generate ⟨EOS⟩at the end of the sentence before returning may address this issue. Also, entity and verb replacements without limitations have negative influence on adversarial example generations for tasks such as NLI. Limitations of similarity during word operations are essential to settle the problem. Constraints such as limitation of the embedding distance may help out. Another solution is introducing the inverse of embedding distance in the pre-selection source. 6 Conclusion In this paper, we propose MHA, which generates adversarial examples for natural language by adopting the MH sampling approach. Experimental results show that our proposed MHA could generate adversarial examples faster than the genetic baseline. Obtained adversarial examples from MHA are more fluent and may be more effective for adversarial training. 7 Acknowledgments We would like to thank Lili Mou for his constructive suggestions. We also would like to thank the anonymous reviewers for their insightful comments. 5569 References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining. In Lrec, volume 10, pages 2200–2204. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Siddhartha Chib and Edward Greenberg. 1995. Understanding the metropolis-hastings algorithm. The american statistician, 49(4):327–335. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. 
Hotflip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 31–36. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In LREC, volume 6, pages 417– 422. Citeseer. Brent Harrison, Christopher Purdy, and Mark O Riedl. 2017. Toward automated story generation with markov chain monte carlo methods and deep neural networks. In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference. WK HASTINGS. 1970. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):97–109. Kaori Kumagai, Ichiro Kobayashi, Daichi Mochihashi, Hideki Asoh, Tomoaki Nakamura, and Takayuki Nagai. 2016. Human-like natural language generation using monte carlo tree search. In Proceedings of the INLG 2016 Workshop on Computational Creativity in Natural Language Generation, pages 11– 18. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 142–150. Association for Computational Linguistics. Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. 1953. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087–1092. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2018. Cgmh: Constrained sentence generation by metropolis-hastings sampling. arXiv preprint arXiv:1811.10996. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. ICLR’17; arXiv preprint arXiv:1611.01603.
2019
559
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 591–601 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 591 DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction Huaishao Luo1, Tianrui Li1∗, Bing Liu2, Junbo Zhang3,4,5 1School of Information Science and Technology, Southwest Jiaotong University, China [email protected], [email protected] 2Department of Computer Science, University of Illinois at Chicago, USA [email protected] 3JD Intelligent Cities Business Unit & 4JD Intelligent Cities Research, China 5Institute of Artificial Intelligence, Southwest Jiaotong University, China [email protected] Abstract This paper focuses on two related subtasks of aspect-based sentiment analysis, namely aspect term extraction and aspect sentiment classification, which we call aspect term-polarity co-extraction. The former task is to extract aspects of a product or service from an opinion document, and the latter is to identify the polarity expressed in the document about these extracted aspects. Most existing algorithms address them as two separate tasks and solve them one by one, or only perform one task, which can be complicated for real applications. In this paper, we treat these two tasks as two sequence labeling problems and propose a novel Dual crOss-sharEd RNN framework (DOER) to generate all aspect termpolarity pairs of the input sentence simultaneously. Specifically, DOER involves a dual recurrent neural network to extract the respective representation of each task, and a cross-shared unit to consider the relationship between them. Experimental results demonstrate that the proposed framework outperforms state-of-the-art baselines on three benchmark datasets. 1 Introduction Aspect terms extraction (ATE) and aspect sentiment classification (ASC) are two fundamental, fine-grained subtasks of aspect-based sentiment analysis. Aspect term extraction is the task of extracting the attributes (or aspects) of an entity upon which opinions have been expressed, and aspect sentiment classification is the task of identifying the polarities expressed on these extracted aspects in the opinion text (Hu and Liu, 2004). Consider the example in Figure 1, which contains comments that people expressed about the aspect terms “operating system”, “preloaded software”, “keyboard”, “bag”, “price”, and “service” labeled with their polarities, respectively. The polarities contain ∗Tianrui Li is the corresponding author. I love the [operating system]positiveand the [preloaded software]positive. No backlit [keyboard]conflict, but not an issue for me. You may need to special order a [bag]neutral. The [price]positive is reasonable although the [service]negative is poor. Figure 1: Aspect terms extraction and aspect sentiment classification. four classes, e.g., positive (PO), conflict (CF), neutral (NT)1, and negative (NG). To facilitate practical applications, our goal is to solve ATE and ASC simultaneously. For easy description and discussion, these two subtasks are referred to as aspect term-polarity co-extraction. Both ATE and ASC have attracted a great of attention among researchers, but they are rarely solved together at the same time due to some challenges: 1) ATE and ASC are quite different tasks. ATE is an extraction or sequence labeling task (Jakob and Gurevych, 2010; Wang et al., 2016a), while ASC is a classification task (Jiang et al., 2011; Wagner et al., 2014; Tang et al., 2016a,b; Tay et al., 2018). 
Thus, they are naturally treated as two separate tasks, and solved one by one in a pipeline manner. However, this two-stage framework is complicated and difficult to use in applications because it needs to train two models separately. There is also the latent error propagation when an aspect term is used to classify its corresponding polarity. Thus, due to the different natures of the two tasks, most current works focus either on extracting aspect terms (Yin et al., 2016; Luo et al., 2018; Xu et al., 2018) or on classifying aspect sentiment (Ma et al., 2017; Wang and Lu, 2018). A possible idea to bridge the difference between the two tasks is to change ASC to a sequence labeling task. Then, ATE and ASC 1Neutral means no sentiment is expressed, and we also regard it as a polarity as in many prior works. 592 have the same formulation. 2) The number of aspect term-polarity pairs in a sentence is arbitrary. Considering the examples depicted in Figure 1, we can observe that some sentences contain two term-polarity pairs and some sentences contain one pair. Moreover, each aspect term can consist of any number of words, which makes the co-extraction task difficult to solve. Some existing research has treated ATE and ASC as two sequence labeling tasks and dealt with them together. Mitchell et al. (2013) and Zhang et al. (2015) compared pipelined, joint, and collapsed approaches to extracting named entities and their sentiments. They found that the joint and collapsed approaches are superior to the pipelined approach. Li and Lu (2017) proposed a collapsed CRF model. The difference with the standard CRF is that they expanded the node type at each word to capture sentiment scopes. Another interesting work comes from Li et al. (2019), where the authors proposed a unified model with the collapsed approach to do aspect term-polarity co-extraction. We can intuitively explain the pipelined, joint, and collapsed approaches through Figure 2. The pipelined approach first labels the given sentence using aspect term tags, e.g., “B” and “I” (the Beginning and Inside of an aspect term) and then feeds the aspect terms into a classifier to obtain their corresponding polarities. The collapsed approach uses collapsed labels as the tags set, e.g., “B-PO” and “I-PO”. Each tag indicates the aspect term boundary and its polarity. The joint approach jointly labels each sentence with two different tag sets: aspect term tags and polarity tags. We believe that the joint approach is more feasible than the collapsed approach when integrating with neural networks because the combined tags of the latter may easily make the learned representation confused. As an example in Figure 2, the “operating system” is an aspect term. Its polarity “positive” actually comes from the word “love”. They should be learned separately because the meanings of these two groups of words are different. That means that using “B-PO I-PO” to extract the meaning of “operating system” and “love” simultaneously is difficult in training (this will be clearer later). In contrast, the joint approach has separate representations for ATE and ASC and separate labels. Thus, an extra sentiment lexicon can improve the representation of ASC individuInput I love the operating system and the preloaded software . Joint O O O B I O O B I O O O O PO PO O O PO PO O Collapsed O O O B-PO I-PO O O B-PO I-PO O Figure 2: A labeling example of aspect terms and their polarities. ally, and the interaction of ATE and ASC can further enhance the performance of each other. 
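To make the difference between the joint and collapsed tag sets concrete, the short sketch below (our illustration, not code from the paper) builds both labelings for the Figure 2 sentence and shows that the collapsed scheme is simply the element-wise fusion of the two joint sequences.

tokens = "I love the operating system and the preloaded software .".split()
aspect   = ["O", "O", "O", "B", "I", "O", "O", "B", "I", "O"]     # ATE tags (B/I/O)
polarity = ["O", "O", "O", "PO", "PO", "O", "O", "PO", "PO", "O"]  # ASC tags

def collapse(aspect_tags, polarity_tags):
    # A collapsed tag fuses the boundary tag and the polarity, e.g. "B-PO".
    return ["O" if a == "O" else f"{a}-{p}" for a, p in zip(aspect_tags, polarity_tags)]

print(collapse(aspect, polarity))
# ['O', 'O', 'O', 'B-PO', 'I-PO', 'O', 'O', 'B-PO', 'I-PO', 'O']

The joint approach keeps the two sequences separate, which is what allows a model to learn distinct representations for term boundaries and for the sentiment-bearing context words.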
In this paper, we propose a novel Dual crOsssharEd RNN framework (DOER) to generate all aspect term-polarity pairs of a given sentence. DOER mainly contains a dual recurrent neural network (RNN) and a cross-shared unit (CSU). The CSU is designed to take advantage of the interactions between ATE and ASC. Apart from them, two auxiliary tasks, aspect length enhancement and sentiment enhancement, are integrated to improve the representation of ATE and ASC. An extra RNN cell called the Residual Gated Unit (ReGU) is also proposed to improve the performance of aspect term-polarity co-extraction. The ReGU utilizes a gate to transfer the input to the output like skip connection (He et al., 2016), and thus, is capable of training deeper and obtaining more useful features. In a word, DOER generates aspect terms and their polarities simultaneously by an end-to-end method instead of building two separate models, which saves time and gives a unified solution to practical applications. Our contributions are summarized as follows: • A novel framework DOER is proposed to address the aspect term-polarity co-extraction problem in an end-to-end fashion. A crossshared unit (CSU) is designed to leverage the interaction of the two tasks. • Two auxiliary tasks are designed to enhance the labeling of ATE and ASC, and an extra RNN cell ReGU is proposed to improve the capability of feature extraction. 2 Methodology The proposed framework is shown in Figure 3a. We will first formulate the aspect term-polarity coextraction problem and then describe this framework in detail in this section. 2.1 Problem Statement This paper deals with aspect term-polarity coextraction, in which the aspect terms are explicitly 593 1 w 2 w n w Embedding CRF RNN Cross-Sharing Interface RNN Auxiliary Aspect Length Enhancement Auxiliary Sentiment Enhancement ReGU ReGU ReGU ReGU ReGU ReGU ReGU ReGU ReGU ReGU ReGU Max Pool ReGU Cross-Shared Unit Max Pool , PO ··· Joint Linear + Softmax (a) Dual cross-shared RNN framework (DOER) 1 w 2 w n w 1 w 2 w n w (b) Cross-shared unit (CSU) Figure 3: An illustration of the proposed DOER framework. mentioned in the text. We solve it as two sequence labeling tasks. Formally, given a review sentence S with n words from a particular domain, denoted by S = {wi|i = 1,...,n}. For each word wi, the objective of ATE is to assign it a tag ta i ∈T a, and likewise, the objective of ASC is to assign a tag t p i ∈T p, where T a = {B, I, O} and T p = {PO, NT, NG, CF, O}. The tags B, I and O in T a stand for the beginning of an aspect term, the inside of an aspect term, and other words, respectively. The tags PO, NT, NG, and CF indicate polarity categories: positive, neutral, negative, and conflict, respectively. The tag O in T p means other words like that in T a. Figure 2 shows a labeling example of the first sentence in Figure 1. 2.2 Model Overview We discuss the proposed framework DOER in detail below. Word Embedding. Instead of adopting standard techniques to generate the embedding of each word wi by concatenating word embedding and char embedding, we use the double embeddings proposed in (Xu et al., 2018) as the initial word embeddings. The double embeddings contain two types: general-purpose embeddings and domainspecific embeddings, which are distinguished by whether the embeddings are trained by an indomain corpus or not. 
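The double-embedding lookup is just a concatenation of two frozen embedding tables, which the next paragraph defines formally. A minimal PyTorch sketch of this step is given below; the module name and toy shapes are our assumptions, with d_G = 300 (GloVe) and d_D = 100 (domain embeddings) taken from the experimental settings later in the paper.

import torch
import torch.nn as nn

class DoubleEmbedding(nn.Module):
    def __init__(self, general_weights, domain_weights):
        super().__init__()
        # Both tables are kept fixed during training, as stated in the settings.
        self.general = nn.Embedding.from_pretrained(general_weights, freeze=True)
        self.domain = nn.Embedding.from_pretrained(domain_weights, freeze=True)

    def forward(self, word_ids):
        # h_{w_i} = G(w_i) ⊕ D(w_i): concatenate along the feature dimension.
        return torch.cat([self.general(word_ids), self.domain(word_ids)], dim=-1)

# Toy usage with random stand-ins for the GloVe and domain-specific tables.
vocab_size = 5000
emb = DoubleEmbedding(torch.randn(vocab_size, 300), torch.randn(vocab_size, 100))
batch = torch.randint(0, vocab_size, (16, 40))   # (batch, max sentence length)
print(emb(batch).shape)                          # torch.Size([16, 40, 400])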
Formally, each word w_i is initialized with a feature vector h_{w_i} ∈ R^{d_G+d_D}, where d_G and d_D are the first dimension sizes of the general-purpose embeddings G ∈ R^{d_G×|V|} and the domain-specific embeddings D ∈ R^{d_D×|V|}, respectively, and |V| is the size of the vocabulary. Hence, h_{w_i} is generated by h_{w_i} = G(w_i) ⊕ D(w_i), where ⊕ denotes concatenation. h_g and h_d in Figure 3a denote G(w_i) and D(w_i), respectively. All out-of-vocabulary words are randomly initialized, and all sentences are zero-padded (or truncated at test time) to the maximum length of the training sentences.

Figure 4: Residual gated unit (ReGU).

Stacked Dual RNNs. The main architecture of DOER consists of stacked dual RNNs, one stacked RNN for ATE and one for ASC. Each RNN layer is a bidirectional ReGU (BiReGU). As shown in Figure 4, ReGU has two gates that control the flow of the input and the hidden state. Given input x_t at time t and the previous memory cell c_{t−1}, the new memory cell c_t is calculated as

    c_t = (1 − f_t) ⊙ c_{t−1} + f_t ⊙ tanh(W_i x_t),    (1)

and the new hidden state h_t is then computed as

    h_t = (1 − o_t) ⊙ c_t + o_t ⊙ x̃_t,    (2)

where f_t = σ(W_f x_t + U_f c_{t−1}) is a forget gate, o_t = σ(W_o x_t + U_o c_{t−1}) is a residual gate, and x̃_t is x_t or tanh(W_x x_t) depending on whether the size of x_t equals that of c_t. f_t controls the information flow from the previous time step to the next, while o_t controls the information flow from the previous layer to the next. σ denotes the logistic function, tanh the hyperbolic tangent, and ⊙ element-wise multiplication. W_∗ of size d × d_I and U_∗ of size d × d are weight matrices, where ∗ ∈ {i, f, o, x}. The bias vectors are omitted for simplicity. The size d_I changes with the dimension of the input; its value is d_G + d_D at the first layer of the stacked BiReGU.

BiReGU produces two directional representations of the input, like a bidirectional LSTM (Graves and Schmidhuber, 2005). We concatenate the hidden states generated by ReGU in both directions for the same input as the output vector, h_t = →h_t ⊕ ←h_t, where ⊕ again denotes concatenation. →h_t and ←h_t follow Eq. (2) but with opposite propagation directions. Thus, the size of h_t is 2d, and d_I also becomes 2d when a new BiReGU layer is stacked. We refer to the outputs of the dual BiReGU as h^A and h^P to differentiate ATE and ASC.

Cross-Shared Unit. After the BiReGU layers, the representations of ATE and ASC are separate from each other. However, the labels of ATE and the labels of ASC are strongly related. For instance, if the ATE label is O, the ASC label should be O as well, and if the ASC label is PO, the ATE label should be B or I. Moreover, both the ATE labels and the ASC labels carry information about the boundary of each aspect term. The cross-shared unit (CSU) is used to model the interaction between ATE and ASC. We first compute the composition vector α^M_ij ∈ R^K through the following tensor operator:

    α^M_ij = f_m(h^m_i, h^m̄_j) = tanh((h^m_i)⊤ G^m h^m̄_j),    (3)

where M ∈ {A, P}, m ∈ {a, p}, h^m_i ∈ h^M, and G^m ∈ R^{K×2d×2d} are 3-dimensional tensors. K is a hyperparameter. A, a and P, p are the indexes of ATE and ASC, respectively, and (m̄, M̄) denotes the complementary pair: m̄ = p, M̄ = A if m = a, and m̄ = a, M̄ = P if m = p.
Such tensor operators can be seen as multiple bilinear terms, which have the capability of modeling more complicated compositions between two vectors (Socher et al., 2013; Wang et al., 2017). After obtaining the composition vectors, the attention score SM ij is calculated as: SM ij = v⊤ mαM ij , (4) where vm ∈RK is a weight vector used to weight each value of the composition vector, M ∈{A,P}, and m ∈{a, p}. Thus, SM i j is a scalar. All these scalars SA i j and SP i j are gathered in two matrices SA and SP, respectively. A higher score SA i j indicates a higher correlation between aspect term i and the polarity representation captured from j-th word. Likewise, a higher score SP i j indicates a higher correlation between aspect polarity i and the representation of aspect term captured from j-th word. We use their related representations to enhance the original ATE and ASC features through: hM = hM +softmaxr SM hM, (5) where softmaxr is a row-based softmax function, M ∈{A,P}, M = P if M = A, and M = A if M = P. Such an operation can make ATE and ASC get enhanced information from each other. The process is shown in Figure 3b. Interface. To generate the final ATE tags and ASC tags, either a dense layer plus a softmax function or a Conditional Random Fields (CRF) can be used. According to the comparison in (Reimers and Gurevych, 2017), using a CRF instead of a softmax classifier as the last layer can obtain a performance increase for tasks with a high dependency between tags. Thus, we use the linear-chain CRF as our inference layer. Its log-likelihood is computed as follows: L(Wc,bc) = ∑ i log p(y|h;Wc,bc). (6) where p(y|h;Wc,bc) is the probability function of CRF, and Wc and bc are the weight and bias, respectively. The Viterbi algorithm is used to generate the final labels of ATE and ASC. Joint Output. After generating the labels for ATE and ASC in the inference layer, the last step is to obtain the aspect term-polarity pairs. It is convenient to get the aspect terms of the given sentence according to the meaning of the elements in T a. To generate the polarity of each aspect term, we use the aspect term as the boundary of polarity labels, and then count the number of each polarity category within the boundary and adopt the label that has the maximum number or the first label (if all the numbers of each polarity category are equal) as the final polarity. For example, the final polarity of “PO NT” is “PO”, the final polarity of “PO PO” is also “PO”, and the final polarity of “PO NT NT” is “NT”. This method is simple and effective in our experiments. 595 Auxiliary Aspect Term Length Enhancement. Although CRF is capable of considering the correlation of two adjacent labels, there are generated discontinuous labels, especially for a long target aspect term. To alleviate the influence resulted from the length of the aspect term, we designed an auxiliary task to predict the average length of aspect terms in each sentence when training the model. The computational process of the prediction in ATE is as follows: zuA = σ  W ⊤ uA ˜hA  , (7) where ˜hA ∈R2d is the result of max-pooling of hl1 A, which is generated by the first RNN layer, WuA ∈R2d is a weight parameter. We calculate the prediction loss through the mean squared error (MSE): LuA = ∥zuA −ˆzu∥2, (8) where ˆzu is the average length of aspect terms in a sentence after global normalization on the training dataset. 
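The ATE-side computation in Eqs. (7)-(8) can be written compactly as a small auxiliary head. The sketch below is a hedged PyTorch rendering; the batching, the mean reduction over the batch, and the module name are our choices and are not specified in the paper.

import torch
import torch.nn as nn

class AspectLengthHead(nn.Module):
    """Auxiliary head predicting the normalized average aspect-term length
    of a sentence from the first-layer ATE features, cf. Eqs. (7)-(8)."""
    def __init__(self, feature_size):
        super().__init__()
        self.w_u = nn.Linear(feature_size, 1, bias=False)   # W_uA in Eq. (7)

    def forward(self, h_l1, target_len):
        # h_l1: (batch, seq_len, 2d) outputs of the first BiReGU layer.
        h_tilde = h_l1.max(dim=1).values                     # max-pooling over time
        z = torch.sigmoid(self.w_u(h_tilde)).squeeze(-1)     # Eq. (7)
        return ((z - target_len) ** 2).mean()                # MSE loss, Eq. (8)

# Toy usage: with d = 300, the BiReGU features are 2d = 600-dimensional.
head = AspectLengthHead(600)
h = torch.randn(16, 40, 600)
target = torch.rand(16)          # globally normalized average aspect lengths
print(head(h, target).item())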
ASC has a similar prediction process to ATE after the first layer of the stacked RNNs, but it has different weight WuP and hidden feature ˜hP than WuA and ˜hA. The prediction loss is denoted by LuP. Auxiliary Sentiment Lexicon Enhancement. As previously discussed, the polarity of an aspect term is usually inferred from its related opinion words. Thus, we also use a sentiment lexicon to guide ASC. Specifically, we train an auxiliary word-level classifier on the branch of ASC for discriminating positive words and negative words based on the sentiment labels ˆY S p . This means that we use a sentiment lexicon to map each word of a sentence to a sentiment label in training. For each feature of ASC hp,l1 i generated by the first RNN layer, we use a linear layer and the softmax function to get its sentiment label: zs i = softmax  W ⊤ s hp,l1 i  , (9) where Ws ∈R2d×c is a weight parameter, c = 3 means the sentiment label is one of the three elements in the set {positive, negative, none}. We use the cross-entropy error to calculate the loss of each sentence: Ls = −1 n n ∑ i=1  I ˆyS i  (log(zs i))⊤ , (10) where I(ˆyS i ) means the one-hot vector of ˆyS i ∈ˆY S p . Datasets Train Dev Test Total SL #PO 941 32 340 1,313 #NT 446 4 169 619 #NG 820 17 126 963 #CF 41 1 16 58 SR #PO 3,262 126 1,490 4,878 #NT 674 13 250 937 #NG 1,205 46 500 1,751 #CF 88 0 14 102 ST #PO 698 #NT 2,254 #NG 271 Table 1: Datasets from SemEval and Twitter. 2.3 Joint Loss On the whole, the proposed framework DOER has two branches: one for ATE labeling and the other for ASC labeling. Each of them is differentiable, and thus can be trained with gradient descent. We equivalently use the negative of L(Wc,bc) in Eq. (6) as the error to do minimization via back-propagation through time (BPTT) (Goller and Kuchler, 1996). Thus, the loss is as follows: L = −∑ i log p(y|h;Wc,bc), (11) Then, the losses from both tasks and the auxiliary tasks are constructed as the joint loss of the entire model: J (Θ)=(La+Lp)+(LuA+LuP+Ls)+λ 2 ∥Θ∥2, (12) where La and Lp, which have the same formulation as Eq. (11), denote the loss for aspect term and polarity, respectively. Θ represents the model parameters containing all weight matrices W, U, v and bias vectors b. λ is a regularization parameter. 3 Experiments 3.1 Datasets We conduct experiments on two datasets from the SemEval challenges and one English Twitter dataset. The details of these benchmark datasets are summarized in Table 1. SL comes from SemEval 2014 (Pontiki et al., 2014), which contains laptop reviews, and SR are restaurant reviews merged from SemEval 2014, SemEval 2015 (Pontiki et al., 2015), and SemEval 2016 (Pontiki et al., 596 2016). We keep the official data division of these datasets for the training set, validation set, and testing set. The reported results of SL and SR are averaged scores of 10 runs. ST consists of English tweets. Due to lack of standard train-test split, we report the ten-fold cross-validation results of ST as done in (Mitchell et al., 2013; Zhang et al., 2015; Li et al., 2019). For the auxiliary task of sentiment lexicon enhancement, we exploit a sentiment lexicon 2 to generate the label when training the model. The evaluation metric is F1 score based on the exact match of aspect term and its polarity. 3.2 Word Embeddings To initialize the domain-specific word embeddings, we train the word embeddings by CBOW (Mikolov et al., 2013) using Amazon reviews3 and Yelp reviews4, which are in-domain corpora for laptop and restaurant respectively. 
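A minimal sketch of how such domain-specific CBOW vectors could be trained with Gensim is shown below; the Gensim 4.x parameter names, the toy corpus, and the worker count are our assumptions. The real corpora are the tokenized Amazon and Yelp reviews, on which the min count of 10 and 200 iterations reported in this paper are appropriate.

from gensim.models import Word2Vec

# Tiny stand-in corpus; in practice this would be the tokenized Amazon
# (laptop) or Yelp (restaurant) reviews.
sentences = [
    ["the", "keyboard", "is", "great"],
    ["the", "battery", "life", "is", "poor"],
    ["great", "battery", "and", "keyboard"],
]

model = Word2Vec(
    sentences=sentences,
    vector_size=100,   # d_D, the domain-specific embedding size
    sg=0,              # sg=0 selects CBOW
    window=5,
    min_count=1,       # set to 10 on the full review corpora
    epochs=200,        # 200 training iterations as in the paper
    workers=4,
)
model.wv.save("domain_embeddings.kv")
print(model.wv["keyboard"].shape)   # (100,)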
Thus, for SL, we use Amazon embedding, and for SR, we use Yelp embedding. The Amazon review dataset contains 142.8M reviews, and the Yelp review dataset contains 2.2M restaurant reviews. The embeddings from all these datasets are trained by Gensim5 which contains the implementation of CBOW. The parameter min count is set to 10 and iter is set to 200. We use Amazon embedding as the domain-specific word embeddings of ST as Amazon corpora is large and comprehensive although not in the same domain. The general-purpose embeddings are initialized by Glove.840B.300d embeddings (Pennington et al., 2014). Its corpus is crawled from the Web. 3.3 Settings In our experiments, the regularization parameter λ is empirically set as 0.001, and dG and dD as 300 and 100, respectively. The hidden state size of d of ReGU is 300. The hyperparameter K is set to 5. We use Adam (Kingma et al., 2014) as the optimizer with the learning rate of 0.001 and the batch size of 16. We also employ dropout (Srivastava et al., 2014) on the outputs of the embedding layer and two BiReGU layers. The dropout rate is 0.5. To avoid the exploding gradient problem, we clip the gradient norm within 5. The max2http://mpqa.cs.pitt.edu/ (the lexicon of (Hu and Liu, 2004) https://www.cs.uic.edu/˜liub/ FBS/sentiment-analysis.html can be used as well. 3http://jmcauley.ucsd.edu/data/amazon/ 4https://www.yelp.com/academic_dataset 5https://radimrehurek.com/gensim/ imum number of epochs is set to 50. The word embeddings are fixed during the training process. We implemented DOER using the TensorFlow library (Abadi et al., 2016), and all computations are done on an NVIDIA Tesla K40 GPU. 3.4 Baseline Methods To validate the performance of the proposed model DOER 6 on the aspect term-polarity co-extraction task, a comparative experiment is conducted with the following baseline models: • CRF-{pipelined, joint, collapsed}: They leverage linguistically informed features with CRF to perform the sequence labeling task using the pipelined, joint, or collapsed approach7 (Mitchell et al., 2013). • NN+CRF-{pipelined, joint, collapsed}: An improvement of (Mitchell et al., 2013) that concatenates target word embedding and context four-word embeddings besides using linguistically informed features plus CRF to finish the sequence labeling task (Zhang et al., 2015). Instead of using the officially released code8 due to the outdated library, we reproduce the results with the original settings. • Sentiment-Scope: A collapsed CRF model9 (Li and Lu, 2017), which expands the node types of CRF to capture sentiment scopes. The discrete features used in this model are exactly the same as the above two groups of models. • DE-CNN+TNet: DE-CNN10 (Xu et al., 2018) and TNet (Li et al., 2018) are the current state-of-the-art models for ATE and ASC, respectively. DE-CNN+TNet combines them in a pipelined manner. We use the official TNet-AS variant11 as our TNet implementation. • LSTM+CRF-{LSTMc, CNNc}: They all use BiLSTM plus CRF for sequence labeling. 6The code of DOER is available at https://github. 
com/ArrowLuo/DOER 7http://www.m-mitchell.com/code/ 8https://github.com/SUTDNLP/ OpenTargetedSentiment 9https://github.com/leodotnet/ sentimentscope 10https://github.com/howardhsu/DE-CNN 11https://github.com/lixin4ever/TNet 597 Model SL SR ST Pipeline Baselines CRF-pipeline 51.08 54.78 31.91 NN+CRF-pipeline 53.36 60.78 45.08 DE-CNN+TNet 56.47 67.54 48.74 Collapsed Baselines CRF-collapsed 49.24 59.52 32.00 NN+CRF-collapsed 50.64 61.74 45.52 Sentiment-Scope 50.27 62.01 45.91 LSTM+CRF-LSTMc 54.43 65.93 46.57 LSTM+CRF-CNNc 54.71 66.36 47.35 LM-LSTM-CRF 56.39 67.56 48.46 E2E-TBSA 57.99 69.91 49.13 Joint Baselines CRF-joint 50.73 59.75 32.42 NN+CRF-joint 52.81 60.27 44.69 Ours S-BiLSTM 56.83 71.22 48.94 S-BiReGU 57.82 71.47 49.11 S-BiReGU+CSU 58.99 72.19 49.89 S-BiReGU+CSU+AuL 59.06 72.32 51.06 S-BiReGU+CSU+AuS 60.11 72.64 51.13 DOER 60.35 72.78 51.37 Table 2: F1 score (%) comparison of all systems for aspect term-polarity pair extraction. The difference is that LSTM+CRF-LSTMc (Lample et al., 2016) encodes char embedding by BiLSTM, while LSTM+CRF-CNNc (Ma and Hovy, 2016) uses CNN. • LM-LSTM-CRF: It is a language model enhanced LSTM-CRF model proposed in (Liu et al., 2018), which achieved competitive results on several sequence labeling tasks12. • E2E-TBSA: It is an end-to-end model of the collapsed approach proposed to address ATE and ASC simultaneously13 (Li et al., 2019). • S-BiLSTM: It is a stacked BiLSTM model with two layers that adopts the joint approach and has the same Embeddings, Interface, Joint Output layers as DOER. • S-BiReGU: It is similar to S-BiLSTM but uses a ReGU cell instead of an LSTM cell. We use two abbreviations AuL and AuS for the ablation study. AuL denotes the auxiliary task of aspect term length enhancement, and AuS denotes the auxiliary task of sentiment lexicon enhancement. All baselines have publicly available codes, 12https://github.com/LiyuanLucasLiu/ LM-LSTM-CRF 13https://github.com/lixin4ever/ E2E-TBSA and we ran these officially released codes to reproduce the baseline results except the NN+CRF variants due to the outdated library as discussed in the bullet point for these baseline systems. 3.5 Results and Analysis Comparison Results. The comparison results are shown in Table 2, which are F1 scores of aspect term-polarity pairs. As the results show, our DOER obtains consistent improvement over baselines. Compared to the best pipelined model, the proposed framework outperforms DE-CNN+TNet by 3.88%, 5.24%, and 2.63% on SL, SR, and ST, respectively. It indicates that an elaborated joint model can achieve better performance than pipeline approaches on aspect term-polarity coextraction task. Besides, seven collapsed models are also introduced to the comparison. Compared to the best of these collapsed approaches, DOER improves by 2.36%, 2.87%, and 2.24% over E2ETBSA on SL, SR, and ST, respectively. This result shows the potential of a joint model which considers the interaction between the two relevant tasks. Comparing with existing works based on the joint approach, i.e., CRF-joint and NN+CRF-joint, DOER makes substantial gains over them as well. The improvements over DE-CNN+TNet and E2ETBSA are statistically significant (p < 0.05). Ablation Study. To test the effectiveness of 598 each component of DOER, we conduct an ablation experiment with results shown in the last block of Table 2. The fact that S-BiReGU gives superior performance compared to S-BiLSTM indicates the effectiveness of ReGU in our task. 
This residual architecture enables information transfer to the next layers more effective. With the help of CSU, S-BiReGU+CSU achieves better performance than without it. We believe the interaction of information between ATE and ASC is essential to improve each other. Although the samples with long aspect terms are rare, the auxiliary task of aspect term length can improve the performance. Another auxiliary task of sentiment lexicon can also enhance the representation of the proposed framework. As a whole of S-BiReGU, CSU, AuL, and AuS, the proposed DOER achieves superior performance. It mainly benefits from the enhanced features by the two auxiliary tasks and the interaction of two separate routes of ATE and ASC. Results on ATE. Table 3 shows the results of aspect term extraction only. DE-CNN is the current state-of-the-art model on ATE as mentioned above. Comparing with it, DOER achieves new state-of-the-art scores. DOER∗denotes the DOER without ASC part. As the table shows, DOER achieves better performance than DOER∗, which indicates the interaction between ATE and ASC can yield better performance for ATE than only conduct a single task. Model SL SR ST DE-CNN 81.26 78.98 63.23 DOER∗ 82.11 79.98 68.99 DOER 82.61 81.06 71.35 Table 3: F1 score (%) comparison only for aspect term extraction. Case Study. Table 4 shows some examples of S-BiLSTM, S-BiReGU+CSU, and DOER. As observed in the first and second rows, SBiReGU+CSU and DOER predict the aspect termpolarity pair correctly but S-BiLSTM does not. With the constraint of CSU, the error words can be avoided as shown in the second row. The two auxiliary tasks work well on the CSU. They can capture a better sentiment representation, e.g., the third row, and alleviate the misjudgment on the long aspect terms, e.g., the last row. Impact of K. We investigate the impact of hy59 60 61 1 2 3 4 5 6 7 8 9 10 F1 (%) K Figure 5: F1 scores on SL with different K. perparameter K of the CSU on the final performance. The experiment is conducted on SL by varying K from 1 to 10 with the step of 1. As shown in Figure 5, value 5 is the best choice for the proposed method to address our task. Due to the performance demonstrated in the figure, K is set to 5 cross all experiments for simplicity. Visualization of Attention Scores in CSU. We also try to visualize the attention scores SA and SP to explore the effectiveness of CSU. As shown in Figure 6, SA and SP have different values, which indicate that both ATE and ASC indeed interact with each other. The red dashed rectangle in Figure 6a shows that the model learns to focus on itself when labeling the word “OS” in the ATE task. Likewise, the red dashed rectangle in Figure 6b shows that the model learns to focus on the word “great” instead of itself when labeling the word “OS” in the ASC task. The fact that the polarity on the target aspect “OS” is positive, which is inferred from the “great”, verifies that the system is doing the right job. In summary, we can conclude that the attention scores learned by CSU benefit the labeling process. The OS is great . The 0.005 0.031 0.007 0.008 0.005 OS 0.017 0.046 0.014 0.011 0.011 is 0.003 0.039 0.025 0.003 0.004 great 0.003 0.018 0.006 0.026 0.004 . 0.020 0.005 0.013 0.012 0.006 (a) SA The OS is great . The 0.007 0.035 0.008 0.004 0.032 OS 0.013 0.007 0.002 0.013 0.021 is 0.006 0.012 0.008 0.002 0.003 great 0.003 0.020 0.009 0.005 0.004 . 0.004 0.005 0.023 0.005 0.010 (b) SP Figure 6: Visualization of SA and SP in CSU. 
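The term-polarity pairs shown in the case study are produced by the joint-output step of Section 2.2, which is a purely symbolic post-processing of the two predicted tag sequences. A minimal sketch of that decoding follows, based on our reading of the description; the handling of a span that contains no polarity tag at all is our assumption, since the text does not cover that case.

from collections import Counter

def decode_pairs(tokens, aspect_tags, polarity_tags):
    """Turn B/I/O aspect tags and polarity tags into (aspect term, polarity) pairs."""
    # Collect [start, end) spans from the B/I/O aspect tags.
    spans, start = [], None
    for i, tag in enumerate(aspect_tags):
        if tag == "B":
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "I":
            if start is None:          # tolerate an I without a preceding B
                start = i
        else:                          # "O" closes any open span
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:
        spans.append((start, len(aspect_tags)))

    pairs = []
    for s, e in spans:
        labels = [p for p in polarity_tags[s:e] if p != "O"]
        if labels:
            counts = Counter(labels)
            best = max(counts.values())
            # Majority vote; ties fall back to the first label in the span.
            polarity = next(l for l in labels if counts[l] == best)
        else:
            polarity = "O"             # no polarity predicted inside the span (our choice)
        pairs.append((" ".join(tokens[s:e]), polarity))
    return pairs

tokens = "I love the operating system and the preloaded software .".split()
a_tags = ["O", "O", "O", "B", "I", "O", "O", "B", "I", "O"]
p_tags = ["O", "O", "O", "PO", "NT", "O", "O", "PO", "PO", "O"]
print(decode_pairs(tokens, a_tags, p_tags))
# [('operating system', 'PO'), ('preloaded software', 'PO')]

The example reproduces the rule stated earlier: "PO NT" resolves to "PO" (tie broken by the first label), while "PO PO" trivially resolves to "PO".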
4 Related Work Our work spans two major topics of aspect-based sentiment analysis: aspect term extraction and aspect sentiment classification. Each of them has 599 Input S-BiLSTM S-BiReGU+CSU DOER I like the [lighted screen]PO at night. None () [lighted screen]PO [lighted screen]PO It is a great [size]PO and amazing [windows 8]PO included! [size]PO, [windows 8 included]PO () [size]PO, [windows 8]PO [size]PO, [windows 8]PO I tried several [monitors]NT and several [HDMI cables]NT and this was the case each time. [HDMI cables]NG () None (), [HDMI cables]NT [monitors]NT, [HDMI cables]NT The [2.9 ghz dual-core i7 chip]PO really out does itself. [dual-core i7 chip]PO () [dual-core i7 chip]PO () [2.9 ghz dual-core i7 chip]PO Table 4: Case analysis on S-BiLSTM, S-BiReGU+CSU, and DOER.  means wrong prediction. been studied by many researchers. Hu and Liu (2004) extracted aspect terms using frequent pattern mining. Qiu et al. (2011) and Liu et al. (2015) proposed to use rule-based approach exploiting either hand-crafted or automatically generated rules about some syntactic relationships. Mei et al. (2007), He et al. (2011) and Chen et al. (2014) used topic modeling based on Latent Dirichlet Allocation (Blei et al., 2003). All of the above methods are unsupervised. For supervised methods, the ATE task is usually treated as a sequence labeling problem solved by CRF. For the ASC task, a large body of literature has tried to utilize the relation or position between the aspect terms and the surrounding context words as the relevant information or context for prediction (Tang et al., 2016a; Laddha and Mukherjee, 2016). Convolution neural networks (CNNs) (Poria et al., 2016; Li and Xue, 2018), attention network (Wang et al., 2016b; Ma et al., 2017; He et al., 2017), and memory network (Wang et al., 2018) are also active approaches. However, the above methods are proposed for either the ATE or the ASC task. Lakkaraju et al. (2014) proposed to use hierarchical deep learning to solve these two subtasks. Wu et al. (2016) utilized cascaded CNN and multi-task CNN to address aspect extraction and sentiment classification. Their main idea is to directly map each review sentence into pre-defined aspect terms by using classification and then classifying the corresponding polarities. We believe the pre-defined aspect terms are in general insufficient for most analysis applications because they will almost certainly miss many important aspects in review texts. This paper regards ATE and ASC as two parallel sequence labeling tasks and solves them simultaneously. Comparing with the methods that address them one by one using two separate models, our framework is easy to use in practical applications by outputting all the aspect term-polarity pairs of input sentences at once. Similar to our work, Mitchell et al. (2013) and Zhang et al. (2015) are also about performing two sequence labeling tasks, but they extract named entities and their sentiment classes jointly. We have a different objective and utilize a different model. Li et al. (2019) have the same objective as us. The main difference is that their approach belongs to a collapsed approach but ours is a joint approach. The model proposed by (Li and Lu, 2017) is also a collapsed approach based on CRF. Its performance is heavily dependent on manually crafted features. 
5 Conclusion In this paper, we introduced a co-extraction task involving aspect term extraction and aspect sentiment classification for aspect-based sentiment analysis and proposed a novel framework DOER to solve the problem. The framework uses a joint sequence labeling approach and focuses on the interaction between two separate routes for aspect term extraction and aspect sentiment classification. To enhance the representation of sentiment and alleviate the difficulty of long aspect terms, two auxiliary tasks were also introduced in our framework. Experimental results on three benchmark datasets verified the effectiveness of DOER and showed that it significantly outperforms the baselines on aspect term-polarity co-extraction. Acknowledgments This work is supported by the National Key R&D Program of China (No. 2017YFB1401401). 600 References Mart´ın Abadi, Ashish Agarwal, Paul Barham, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. In JMLR, pages 993–1022. Zhiyuan Chen, Arjun Mukherjee, and Bing Liu. 2014. Aspect extraction with automated prior knowledge learning. In ACL, pages 347–358. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In ICNN, pages 347– 352. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602–610. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR, pages 770–778. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In ACL, pages 388– 397. Yulan He, Chenghua Lin, and Harith Alani. 2011. Automatically extracting polarity-bearing topics for cross-domain sentiment classification. In ACL. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD, pages 168–177. Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single-and cross-domain setting with conditional random fields. In EMNLP, pages 1035–1045. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In ACL. Diederik Kingma, Jimmy Ba, Diederik Kingma, and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Abhishek Laddha and Arjun Mukherjee. 2016. Extracting aspect specific opinion expressions. In EMNLP, pages 627–637. Himabindu Lakkaraju, Richard Socher, and Chris Manning. 2014. Aspect specific sentiment analysis using hierarchical deep learning. In NIPS Workshop on Deep Learning and Representation Learning. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Hao Li and Wei Lu. 2017. Learning latent sentiment scopes for entity-level sentiment analysis. In AAAI, pages 3482–3489. Tao Li and Wei Xue. 2018. Aspect based sentiment analysis with gated convolutional networks. In ACL, pages 2514–2523. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In ACL, pages 946–956. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. 
AAAI. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In AAAI, pages 5253–5260. Qian Liu, Zhiqiang Gao, Bing Liu, and Yuanlin Zhang. 2015. Automated rule selection for aspect extraction in opinion mining. In IJCAI, pages 1291–1297. Huaishao Luo, Tianrui Li, Bing Liu, Bin Wang, and Herwig Unger. 2018. Improving aspect term extraction with bidirectional dependency tree representation. arXiv preprint arXiv:1805.07889. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In IJCAI, pages 4068–4074. Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In ACL, pages 1064–1074. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In WWW, pages 171–180. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In EMNLP, pages 1643–1654. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In SemEval@NAACL-HLT, pages 486–495. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In SemEval@NAACL-HLT, pages 19–30. 601 Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In SemEval@COLING, pages 27–35. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. KnowledgeBased Systems, 108:42–49. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9–27. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In EMNLP, pages 338–348. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929– 1958. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sentiment classification. In COLING, pages 3298–3307. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. In EMNLP, pages 214–224. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In AAAI. Joachim Wagner, Piyush Arora, Santiago Cortes, Utsab Barman, Dasha Bogdanova, Jennifer Foster, and Lamia Tounsi. 2014. Dcu: Aspect-based polarity classification for semeval task 4. In SemEval. 
Bailin Wang and Wei Lu. 2018. Learning latent opinions for aspect-level sentiment classification. In AAAI. Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018. Target-sensitive memory networks for aspect sentiment classification. In ACL, pages 957–967. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive neural conditional random fields for aspect-based sentiment analysis. In EMNLP, pages 616–626. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In AAAI, pages 3316–3322. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016b. Attention-based lstm for aspectlevel sentiment classification. In EMNLP. Haibing Wu, Yiwei Gu, Shangdi Sun, and Xiaodong Gu. 2016. Aspect-based opinion summarization with convolutional neural networks. In IJCNN, pages 3157–3163. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. In ACL, pages 592–598. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In IJCAI, pages 2979–2985. Meishan Zhang, Yue Zhang, and Duy Tin Vo. 2015. Neural networks for open domain targeted sentiment. In EMNLP, pages 612–621.
2019
56