Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7689–7700 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7689 Hard-Coded Gaussian Attention for Neural Machine Translation Weiqiu You∗, Simeng Sun∗, Mohit Iyyer College of Information and Computer Sciences University of Massachusetts Amherst {wyou,simengsun,miyyer}@cs.umass.edu Abstract Recent work has questioned the importance of the Transformer’s multi-headed attention for achieving high translation quality. We push further in this direction by developing a “hardcoded” attention variant without any learned parameters. Surprisingly, replacing all learned self-attention heads in the encoder and decoder with fixed, input-agnostic Gaussian distributions minimally impacts BLEU scores across four different language pairs. However, additionally hard-coding cross attention (which connects the decoder to the encoder) significantly lowers BLEU, suggesting that it is more important than self-attention. Much of this BLEU drop can be recovered by adding just a single learned cross attention head to an otherwise hard-coded Transformer. Taken as a whole, our results offer insight into which components of the Transformer are actually important, which we hope will guide future work into the development of simpler and more efficient attention-based models. 1 Introduction The Transformer (Vaswani et al., 2017) has become the architecture of choice for neural machine translation. Instead of using recurrence to contextualize source and target token representations, Transformers rely on multi-headed attention mechanisms (MHA), which speed up training by enabling parallelization across timesteps. Recent work has called into question how much MHA contributes to translation quality: for example, a significant fraction of attention heads in a pretrained Transformer can be pruned without appreciable loss in BLEU (Voita et al., 2019; Michel et al., 2019), and self-attention can be replaced by less expensive modules such as convolutions (Yang et al., 2018; Wu et al., 2019). In this paper, we take this direction to an extreme by developing a variant of MHA without * Authors contributed equally. Standard Transformer: scaled dot product of learned query and key vectors Jane went to the office Ours: fixed Gaussian distributions centered around nearby tokens Figure 1: Three heads of learned self-attention (top) as well as our hard-coded attention (bottom) given the query word “to”. In our variant, each attention head is a Gaussian distribution centered around a different token within a local window. any learned parameters (Section 3). Concretely, we replace each attention head with a “hard-coded” version, which is simply a standard normal distribution centered around a particular position in the sequence (Figure 1).1 When we replace all encoder and decoder self-attention mechanisms with our hard-coded variant, we achieve almost identical BLEU scores to the baseline Transformer for four different language pairs (Section 4).2 These experiments maintain fully learned MHA cross attention, which allows the decoder to condition its token representations on the encoder’s outputs. We next attempt to additionally replace cross attention with a hard-coded version, which results in substantial drops of 5-10 BLEU. Motivated to find the minimal number of learned attention 1In Figure 1, the hard-coded head distribution centered on the word “to” (shown in green) is [0.054, 0.24, 0.40, 0.24, 0.054]. 
2Our code is available at https://github.com/ fallcat/stupidNMT 7690 L1 L2 L3 L4 L5 0 10 20 30 Distance Encoder Self-Attention Head 1 Head 2 L1 L2 L3 L4 L5 Layer Decoder Self-Attention L1 L2 L3 L4 L5 Decoder Cross-Attention Figure 2: Most learned attention heads for a Transformer trained on IWSLT16 En-De focus on a local window around the query position. The x-axis plots each head of each layer, while the y-axis refers to the distance between the query position and the argmax of the attention head distribution (averaged across the entire dataset). parameters needed to make up this deficit, we explore configurations with only one learned cross attention head in total, which performs just slightly worse (1-3 BLEU) than the baseline. By replacing MHA with hard-coded attention, we improve memory efficiency (26.4% more tokens per batch) and decoding speed (30.2% increase in sentences decoded per second) without significantly lowering BLEU, although these efficiency improvements are capped by other more computationally-expensive components of the model (Section 5). We also perform analysis experiments (Section 6.2) on linguistic properties (e.g., long-distance subject-verb agreement) that MHA is able to better model than hard-coded attention. Finally, we develop further variants of hard-coded attention in Section 6.3, including a version without any attention weights at all. Our hard-coded Transformer configurations have intuitively severe limitations: attention in a particular layer is highly concentrated on a local window in which fixed weights determine a token’s importance. Nevertheless, the strong performance of these limited models indicates that the flexibility enabled by fully-learned MHA is not as crucial as commonly believed: perhaps attention is not all you need. We hope our work will spur further development of simpler, more efficient models for neural machine translation. 2 Background In this section, we first briefly review the Transformer architecture of Vaswani et al. (2017) with a focus on its multi-headed attention. Then, we provide an analysis of the learned attention head distributions of a trained Transformer model, which motivates the ideas discussed afterwards. 2.1 Multi-headed Transformer attention The Transformer is an encoder-decoder model formed by stacking layers of attention blocks. Each encoder block contains a self-attention layer followed by layer normalization, a residual connection, and a feed-forward layer. Decoder blocks are identical to those of the encoder except they also include a cross attention layer, which connects the encoder’s representations to the decoder. To compute a single head of self-attention given a sequence of token representations t1...n, we first project these representations to queries q1...n, keys k1...n, and values v1...n using three different linear projections. Then, to compute the self-attention distribution at a particular position i in the sequence, we take the scaled dot product between the query vector qi and all of the key vectors (represented by matrix K). We then use this distribution to compute a weighted average of the values (V): Attn(qi, K, V) = softmax(qiK⊤ √dk )V (1) where dk is the dimensionality of the key vector. For MHA, we use different projection matrices to obtain the query, key, and value representations for each head. 
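For reference, Equation 1 for a single learned head can be written in a few lines of PyTorch. This is a minimal sketch under our own naming (the function and projection-matrix names are illustrative, not the authors' implementation); multi-headed attention simply repeats this computation with separate projection matrices per head and concatenates the outputs.

import torch
import torch.nn.functional as F

def single_head_attention(t, W_q, W_k, W_v):
    """One learned attention head over token representations t (n x d_model).

    W_q, W_k, W_v are learned query/key/value projections (d_model x d_k).
    """
    q = t @ W_q                                    # queries (n x d_k)
    k = t @ W_k                                    # keys    (n x d_k)
    v = t @ W_v                                    # values  (n x d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(0, 1) / d_k ** 0.5    # scaled dot products (n x n)
    attn = F.softmax(scores, dim=-1)               # one distribution per query position
    return attn @ v                                # weighted average of the values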
The key difference between selfattention and cross attention is that the queries and keys come from different sources: specifically, the keys are computed by passing the encoder’s final layer token representations through a linear projection. To summarize, MHA is used in three different components of the Transformer: encoder self-attention, decoder self-attention, and cross attention. 7691 2.2 Learned heads mostly focus on local windows The intuition behind MHA is that each head can focus on a different type of information (e.g., syntactic or semantic patterns). While some heads have been shown to possess interpretable patterns (Voita et al., 2019; Correia et al., 2019), other work has cautioned against using attention patterns to explain a model’s behavior (Jain and Wallace, 2019). In our analysis, we specifically examine the behavior of a head with respect to the current query token’s position in the sequence. We train a baseline Transformer model (five layers, two heads per layer) on the IWSLT 2016 En→De dataset, and compute aggregated statistics on its learned heads. Figure 2 shows that outside of a few layers, most of the model’s heads focus their attention (i.e., the argmax of the attention distribution) on a local neighborhood around the current sequence position. For example, both self-attention heads in the first layer of the encoder tend to focus on just a one to two token window around the current position. The decoder self-attention and cross attention heads show higher variability, but most of their heads are still on average focused on local information. These results beg the question of whether replacing self-attention with “hard-coded” patterns that focus on local windows will significantly affect translation quality. 3 Hard-coded Gaussian attention While learned attention enables model flexibility (e.g., a head can “look” far away from the current position if it needs to), it is unclear from the above analysis how crucial this flexibility is. To examine this question, we replace the attention distribution computation in Equation 1 (i.e., scaled dot product of queries and keys) with a fixed Gaussian distribution.3 In doing so, we remove all learned parameters from the attention computation: the mean of the Gaussian is determined by the position i of the current query token, and the standard deviation is always set to 1.4 As Transformers contain both self-attention and cross attention, the rest of this section details how we replace both of these components with simplified versions. We will re3 Yang et al. (2018) implement a similar idea, except the mean and standard deviation of their Gaussians are learned with separate neural modules. 4Preliminary experiments with other standard deviation values did not yield significant differences, so we do not vary the standard deviation for any experiments in this paper. fer to experimental results on the relatively small IWSLT16 English-German dataset throughout this section to contextualize the impact of the various design decisions we describe. Section 4 contains a more fleshed out experimental section with many more datasets and language pairs. 3.1 Hard-coded self-attention In self-attention, the queries and keys are derived from the same token representations and as such have the same length n. The baseline Transformer (BASE) computes the self-attention distribution at position i by taking the dot product between the query representation qi and all of the key vectors k1...n. 
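In contrast, the hard-coded variant needs no query or key projections at all. The sketch below is ours, not the released code: it evaluates a fixed unit-variance Gaussian over key positions for every query position and uses those weights to average the value vectors; the head-specific offset of the mean is a hyperparameter discussed next. For a query in the middle of a five-token sentence with offset 0, the resulting weights are roughly [0.054, 0.24, 0.40, 0.24, 0.054], matching the example in Figure 1 (the rows are raw density values rather than renormalized distributions).

import math
import torch

def hard_coded_gaussian_attention(v, offset=0, sigma=1.0):
    """Input-agnostic attention: a fixed Gaussian over positions replaces Eq. 1.

    v: value vectors (n x d), float tensor. offset shifts the head's centre
    relative to the query position (e.g. -1, 0, +1). No learned parameters.
    """
    n = v.size(0)
    positions = torch.arange(n, dtype=torch.float32)
    centres = positions + offset                   # f(i) for every query position i
    # Gaussian density at every key position, one row per query position.
    dist = torch.exp(-0.5 * ((positions.unsqueeze(0) - centres.unsqueeze(1)) / sigma) ** 2)
    dist = dist / (sigma * math.sqrt(2 * math.pi))
    return dist @ v                                # fixed weighted average of values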
We instead use a fixed Gaussian distribution centered around position i −1 (token to the left), i (the query token), or i + 1 (token to the right). More formally, we replace Equation 1 with Attn(i, V) = N(f(i), σ2)V. (2) The mean of the Gaussian f(i) and its standard deviation σ2 are both hyperparameters; for all of our experiments, we set σ to 1 and f(i) to either i −1, i or i + 1, depending on the head configuration.5 Note that this definition is completely agnostic to the input representation: the distributions remain the same regardless of what sentence is fed in or what layer we are computing the attention at. Additionally, our formulation removes the query and key projections from the attention computation; the Gaussians are used to compute a weighted average of the value vectors.6 Instead of learning different query and key projection matrices to define different heads, we simply design head distributions with different means. Figure 1 shows an example of our hard-coded selfattention for a simple sentence. We iterate over different configurations of distribution means f(i) on the IWSLT16 En-De dataset, while keeping the cross attention learned.7 Our best validation result with hard-coded self-attention (HC-SA) replaces encoder self-attention with distributions centered around i −1 and i + 1 and decoder self-attention with distributions centered around i −1 and i. This 5The Gaussian distribution is cut off on the borders of the sentence and is not renormalized to sum to one. 6Preliminary models that additionally remove the value projections performed slightly worse when we hard-coded cross attention, so we omit them from the paper. 7See Appendix for a table describing the effects of varying f(i) on IWSLT16 En-De BLEU score. We find in general that hard-coded heads within each layer should focus on different tokens within the local window for optimal performance. 7692 model achieves slightly higher BLEU than the baseline Transformer (30.3 vs 30.0 BLEU). 3.2 Alternatives to cross attention We turn next to cross attention, which on its face seems more difficult to replace with hard-coded distributions. Unlike self-attention, the queries and keys in cross attention are not derived from the same token representations; rather, the queries come from the decoder while the keys come from the encoder. Since the number of queries can now be different from the number of keys, setting the distribution means by position is less trivial than it is for self-attention. Here, we describe two methods to simplify cross attention, starting with a fully hard-coded approach and moving onto a minimal learned configuration. Hard-coded cross attention: We begin with a simple solution to the problem of queries and keys having variable lengths. Given a training dataset, we compute the length ratio γ by dividing the average source sentence length by the average target sentence length. Then, to define a hard-coded cross attention distribution for target position i, we center the Gaussian on positions ⌊γi −1⌋, ⌊γi⌋, and ⌊γi + 1⌋of the source sentence. When we implement this version of hard-coded cross attention and also hard-code the encoder and decoder self-attention as described previously (HC-ALL), our BLEU score on IWSLT16 En-De drops from 30.3 to 21.1. Clearly, cross attention is more important for maintaining translation quality than self-attention. Michel et al. 
(2019) notice a similar phenomenon when pruning heads from a pretrained Transformer: removing certain cross attention heads can substantially lower BLEU. Learning a single cross attention head: Prior to the advent of the Transformer, many neural machine translation architectures relied on just a single cross attention “head” (Bahdanau et al., 2015). The Transformer has many heads at many layers, but how many of these are actually necessary? Here, we depart from the parameter-free approach by instead removing cross attention at all but the final layer of the decoder, where we include only a single learned head (SH-X). Note that this is the only learned head in the entire model, as both the encoder and decoder self-attention is hard-coded. On IWSLT16 En-De, our BLEU score improves from 21.1 to 28.1, less than 2 BLEU under the BASE Transformer. Train Test Len SRC Len TGT IWSLT16 En-De 196,884 993 28.5 29.6 IWSLT17 En-Ja 223,108 1,452 22.9 16.0 WMT16 En-Ro 612,422 1,999 27.4 28.3 WMT14 En-De 4,500,966 3,003 28.5 29.6 WMT14 En-Fr 10,493,816 3,003 26.0 28.8 Table 1: Statistics of the datasets used. The last two columns show the average number of tokens for source and target sentences, respectively. 4 Large-scale Experiments The previous section developed hard-coded configurations and presented results on the relatively small IWSLT16 En-De dataset. Here, we expand our experiments to include a variety of different datasets, language pairs, and model sizes. For all hard-coded head configurations, we use the optimal IWSLT16 En-De setting detailed in Section 3.1 and perform no additional tuning on the other datasets. This configuration nevertheless proves robust, as we observe similar trends with our hard-coded Transformers across all of datasets.8 4.1 Datasets We experiment with four language pairs, English↔{German, Romanian, French, Japanese} to show the consistency of our proposed attention variants. For the En-De pair, we use both the small IWSLT 20169 and the larger WMT 2014 datasets. For all datasets except WMT14 En→De and WMT14 En→Fr,10 we run experiments in both directions. For English-Japanese, we train and evaluate on IWSLT 2017 En↔Ja TED talk dataset. More dataset statistics are shown in Table 1. 4.2 Architectures Our BASE model is the original Transformer from Vaswani et al. (2017), reimplemented in PyTorch (Paszke et al., 2019) by Akoury et al. (2019).11 To implement hard-coded attention, we only modify the attention functions in this codebase and keep everything else the same. For the two small IWSLT datasets, we follow prior work 8Code and scripts to reproduce our experimental results to be released after blind review. 9We report BLEU on the IWSLT16 En-De dev set following previous work (Gu et al., 2018; Lee et al., 2018; Akoury et al., 2019). For other datasets, we report test BLEU. 10As the full WMT14 En→Fr is too large for us to feasibly train on, we instead follow Akoury et al. (2019) and train on just the Europarl / Common Crawl subset, while evaluating using the full dev/test sets. 11https://github.com/dojoteef/synst 7693 BASE HC-SA HC-ALL SH-X IWSLT16 En-De 30.0 30.3 21.1 28.2 IWSLT16 De-En 34.4 34.8 25.7 33.3 IWSLT17 En-Ja 20.9 20.7 10.6 18.5 IWSLT17 Ja-En 11.6 10.9 6.1 10.1 WMT16 En-Ro 33.0 32.9 25.5 30.4 WMT16 Ro-En 33.1 32.8 26.2 31.7 WMT14 En-De 26.8 26.3 21.7 23.5 WMT14 En-Fr 40.3 39.1 35.6 37.1 Table 2: Comparison of the discussed Transformer variants on six smaller datasets (top)14 and two larger datasets (bottom). 
Hard-coded self-attention (HC-SA) achieves almost identical BLEU scores to BASE across all datasets, while a model with only one cross attention head (SH-X) performs slightly worse. by using a small Transformer architecture with embedding size 288, hidden size 507, four heads,12 five layers, and a learning rate 3e-4 with a linear scheduler. For the larger datasets, we use the standard Tranformer base model, with embedding size 512, hidden size 2048, eight heads, six layers, and a warmup scheduler with 4,000 warmup steps. For all experiments, we report BLEU scores using SacreBLEU (Post, 2018) to be able to compare with other work.13 4.3 Summary of results Broadly, the trends we observed on IWSLT16 EnDe in the previous section are consistent for all of the datasets and language pairs. Our findings are summarized as follows: • A Transformer with hard-coded self-attention in the encoder and decoder and learned cross attention (HC-SA) achieves almost equal BLEU scores to the BASE Transformer. • Hard-coding both cross attention and selfattention (HC-ALL) considerably drops BLEU compared to BASE, suggesting cross attention is more important for translation quality. • A configuration with hard-coded self12For hard-coded configurations, we duplicate heads to fit this architecture (e.g., we have two heads per layer in the encoder with means of i + 1 and i −1). 13SacreBLEU signature: BLEU+case.mixed+lang.LANG +numrefs.1+smooth.exp+test.TEST+tok.intl+version.1.2.11, with LANG ∈{en-de, de-en, en-fr} and TEST ∈ {wmt14/full, iwslt2017/tst2013}. For WMT16 EnRo and IWSLT17 En-Ja, we follow previous work for preprocessing (Sennrich et al., 2016), encoding the latter with a 32K sentencepiece vocabulary (https://github.com/google/sentencepiece) and measuring the de-tokenized BLEU with SacreBLEU. attention and a single learned cross attention head in the final decoder layer (SH-X) consistently performs 1-3 BLEU worse than BASE. These results motivate a number of interesting analysis experiments (e.g., what kinds of phenomena is MHA better at handling than hard-coded attention), which we describe in Section 6. The strong performance of our highly-simplified models also suggests that we may be able to obtain memory or decoding speed improvements, which we investigate in the next section. 5 Bigger Batches & Decoding Speedups We have thus far motivated our work as an exploration of which components of the Transformer are necessary to obtain high translation quality. Our results demonstrate that encoder and decoder self-attention can be replaced with hard-coded attention distributions without loss in BLEU, and that MHA brings minor improvements over singleheaded cross attention. In this section, we measure efficiency improvements in terms of batch size increases and decoding speedup. Experimental setup: We run experiments on WMT16 En-Ro with the larger architecture to support our conclusions.15 For each model variant discussed below, we present its memory efficiency as the maximum number of tokens per batch allowed during training on a single GeForce RTX 2080 Ti. Additionally, we provide inference speed as the number of sentences per second each model can decode on a 2080 Ti, reporting the average of five runs with a batch size of 256. Hard-coding self-attention yields small efficiency gains: Table 7 summarizes our profiling experiments. 
Hard-coding self-attention and preserving learned cross attention allows us to fit 17% more tokens into a single batch, while also providing a 6% decoding speedup compared to BASE on the larger architecture used for WMT16 En-Ro. The improvements in both speed and memory usage are admittedly limited, which motivates us to measure the maximum efficiency gain if we only modify self-attention (i.e., preserving learned cross attention). We run a set of upper bound experiments where we entirely remove self-attention in the encoder and decoder. The resulting encoder 15Experiments with the smaller IWSLT16 En-De model are described in the Appendix. 7694 Model BLEU sent/sec tokens/batch BASE 33.0 26.8 9.2K HC-SA 32.9 28.4 10.8K SH-X 30.3 34.9 11.7K BASE/-SA 27.0 30.1 11.8K SH-X/-SA 15.0 37.6 13.3K Table 3: Decoding speedup (in terms of sentences per second) and memory improvements (max tokens per batch) on WMT16 En-Ro for a variety of models. The last two rows refer to BASE and SH-X configurations whose self-attention is completely removed. thus just becomes a stack of feed-forward layers on top of the initial subword embeddings. Somewhat surprisingly, the resulting model still achieves a fairly decent BLEU of 27.0 compared to the BASE model’s 33.0. As for the efficiency gains, we can fit 27% more tokens into a single batch, and decoding speed improves by 12.3% over BASE. This relatively low upper bound for HC-SA shows that simply hard-coding self-attention does not guarantee significant speedup. Previous work that simplifies attention (Wu et al., 2019; Michel et al., 2019) also report efficiency improvements of similar low magnitudes. Single-headed cross attention speeds up decoding: Despite removing learned self-attention from both the encoder and decoder, we did not observe huge efficiency or speed gains. However, reducing the source attention to just a single head results in more significant improvements. By only keeping single-headed cross attention in the last layer, we are able to achieve 30.2% speed up and fit in 26.4% more tokens to the memory compared to BASE . Compared to HC-SA, SH-X obtains a 22.9% speedup and 8.0% bigger batch size. From our profiling experiments, most of the speed and memory considerations of the Transformer are associated with the large feed-forward layers that we do not modify in any of our experiments, which caps the efficiency gains from modifying the attention implementation. While we did not show huge efficiency improvements on modern GPUs, it remains possible that (1) a more tailored implementation could leverage the model simplifications we have made, and (2) that these differences are larger on other hardware (e.g., CPUs). We leave these questions for future work. 1 2 3 4 5 6 number of layers 25 30 BLEU 1.8 3.2 WMT2016 En-Ro BASE HC-SA BASE/-FF HC-SA/-FF Figure 3: BLEU performance on WMT16 En-Ro before and after removing all feed-forward layers from the models. BASE and HC-SA achieve almost identical BLEU scores, but HC-SA relies more on the feedforward layers than the vanilla Transformer. As shown on the plot, with a four layer encoder and decoder, the BLEU gap between BASE-FF and BASE is 1.8, while the gap between HC-SA and HC-SA-FF is 3.2. 6 Analysis Taken as a whole, our experimental results suggest that many of the components in the Transformer can be replaced by highly-simplified versions without adversely affecting translation quality. 
In this section, we explain how hard-coded self-attention does not degrade translation quality (Section 6.1), perform a detailed analysis of the behavior of our various models by comparing the types of errors made by learned versus hard-coded attention (Section 6.2), and also examine different attention configurations that naturally follow from our experiments (Section 6.3). 6.1 Why does hard-coded self-attention work so well? Given the good performance of HC-SA on multiple datasets, it is natural to ask why hard-coding selfattention does not deteriorate translation quality. We conjecture that feed-forward (FF) layers play a more important role in HC-SA than in BASE by compensating for the loss of learned dynamic selfattention. To test this hypothesis, we conduct an analysis experiment in which we train four model configurations while varying the number of layers: BASE, BASE without feed-forward layers (BASE/FF), HC-SA and HC-SAwithout feed-forward layers (HC-SA/-FF). As shown in Figure 3, BASE and HCSA have similar performance and both -FF models have consistently lower BLEU scores. However, HC-SA without FF layers performs much worse 7695 <10 10-20 20-30 30-40 >40 reference length 16 18 20 22 24 26 28 BLEU WMT En-De BASE HC-SA HC-ALL SH-X Figure 4: BLEU difference vs. BASE as a function of reference length on the WMT14 En-De test set. When cross attention is hard-coded (HC-ALL), the BLEU gap worsens as reference length increases. compared to its BASE counterpart. This result confirms our hypothesis that FF layers are more important in HC-SA and capable of recovering the potential performance degradation brought by hardcoded self-attention. Taking a step back to hardcoding cross attention, the failure of hard-coding cross attention might be because the feed-forward layers of the decoder are not powerful enough to compensate for modeling both hard-coded decoder self-attention and cross attention. 6.2 Error analysis of hard-coded models Is learned attention more important for longer sentences? Since hard-coded attention is much less flexible than learned attention and can struggle to encode global information, we are curious to see if its performance declines as a function of sentence length. To measure this, we categorize the WMT14 En-De test set into five bins by reference length and plot the decrease in BLEU between BASE and our hard-coded configurations for each bin. Somewhat surprisingly, Figure 4 shows that the BLEU gap between BASE and HC-SA seems to be roughly constant across all bins.16 However, the fully hard-coded HC-ALL model clearly deteriorates as reference length increases. Does hard-coding attention produce any systematic linguistic errors? For a more fine-grained analysis, we run experiments on LingEval97 (Sennrich, 2017), an English→German dataset consisting of contrastive 16We note that gradients will flow across long distances if the number of layers is large enough, since the effective window size increases with multiple layers (van den Oord et al., 2016; Kalchbrenner et al., 2016). 
Error type BASE HC-SA HC-ALL np-agreement 54.2 53.5 53.5 subj-verb-agreement 87.5 85.8 82.5 subj-adequacy 87.3 85.0 80.3 polarity-particle-nicht-del 94.0 91.4 83.2 polarity-particle-kein-del 91.4 88.3 79.9 polarity-affix-del 91.6 90.8 83.1 polarity-particle-nicht-ins 92.6 92.5 89.8 polarity-particle-kein-ins 94.8 96.7 98.7 polarity-affix-ins 91.9 90.6 84.3 auxiliary 89.1 87.5 85.6 verb-particle 74.7 72.7 70.2 compound 88.1 89.5 80.5 transliteration 97.6 97.9 93.4 Table 4: Accuracy for each error type in the LingEval97 contrastive set. Hard-coding self-attention results in slightly lower accuracy for most error types, while more significant degradation is observed when hardcoding self and cross attention. We refer readers to Sennrich (2017) for descriptions of each error type. translation pairs. This dataset measures targeted errors on thirteen different linguistic phenomena such as agreement and adequacy. BASE and HC-SA perform17 very similarly across all error types (Table 4), which is perhaps unsurprising given that their BLEU scores are almost identical. Interestingly, the category with the highest decrease from BASE for both HC-SA and HC-ALL is deleted negations;18 HC-ALL is 11% less accurate (absolute) at detecting these substitutions than BASE (94% vs 83%). On the other hand, both HC-SA and HC-ALL are actually better than BASE at detecting inserted negations, with HC-ALL achieving a robust 98.7% accuracy. We leave further exploration of this phenomenon to future work. Finally, we observe that for the subject-verb agreement category, the discrepancy between BASE and the hard-coded models increases as the distance between subject-verb increases (Figure 5). This result confirms that self-attention is important for modeling some long-distance phenomena, and that cross attention may be even more crucial. Do hard-coded models struggle when learned self-attention focuses on non-local information? Since hard-coded models concentrate most of the attention probability mass on local tokens, they might underperform on sentences for which the 17Accuracy is computed by counting how many references have lower token-level cross entropy loss than their contrastive counterparts. 18Specifically, when ein is replaced with negation kein. 7696 5 10 >15 subject-verb distance 0.65 0.70 0.75 0.80 0.85 0.90 Accuracy LingEval: subject-verb agreement BASE HC-SA HC-ALL Figure 5: Hard-coded models become increasingly worse than BASE at subject-verb agreement as the dependency grows longer. 40%-60% 60%-80% 80%-100% Percent off Diagonal 15 20 25 30 35 40 45 BLEU IWSLT De-En Decoder Self-Attention BASE HC-SA SH-X HC-ALL Figure 6: Hard-coded attention performs better for sentences with low off-diagonality (i.e., sentences for which the BASE model’s learned attention focuses close to the query position for most of their tokens). learned heads of the BASE model focus on tokens far from the current query position. We define a token to be “off-diagonal” when the maximum probability of that token’s attention is at least two steps away from query position. A sentence’s “off-diagonality” is then the proportion of off-diagonal tokens within the sentence. We bin the sentences in IWSLT En-De development set by their off-diagonality and analyze the translation quality of our models on these different bins. Figure 6 shows that for decoder self attention, the BLEU gap between HC-ALL and BASE increases as off-diagonality increases, while the gap between BASE and SH-X remains relatively constant across all bins. 
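The off-diagonality measure can be read directly off an attention matrix. The sketch below is our interpretation of the definition above; the tensor layout and function name are assumptions rather than details from the released code.

import torch

def off_diagonality(attn, threshold=2):
    """Fraction of query positions whose attention argmax is far from the diagonal.

    attn: (n x n) attention matrix, one row (distribution over keys) per query.
    A token is 'off-diagonal' if its argmax is at least `threshold` steps away
    from the query position; the sentence score is the fraction of such tokens.
    """
    n = attn.size(0)
    argmax_pos = attn.argmax(dim=-1)               # most-attended key per query
    distance = (argmax_pos - torch.arange(n)).abs()
    return (distance >= threshold).float().mean().item()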
HC-SA even outperforms BASE for sentences with fewer off-diagonal tokens. 6.3 Other hard-coded model configurations Is it important for the Gaussian to span the entire sequence? One natural question about the hard-coded attention strategy described in SecOriginal Conv (window=3) Indexing En-De 30.3 30.1 29.8 En-Ro 32.4 32.3 31.4 Table 5: Comparison of three implementations of HCSA. Truncating the distribution to a three token span has little impact, while removing the weights altogether slightly lowers BLEU. tion 3 is whether it is necessary to assign some probability to all tokens in the sequence. After all, the probabilities outside a local window become very marginal, so perhaps it is unnecessary to preserve them. We take inspiration from Wu et al. (2019), who demonstrate that lightweight convolutions can replace self-attention in the Transformer without harming BLEU, by recasting our hard-coded attention as a convolution with a hard-coded 1-D kernel. While this decision limits the Gaussian distribution to span over just tokens within a fixed window around the query token, it does not appreciably impact BLEU (second column of Table 5). We set the window size to 3 in all experiments, so the kernel weights become [0.242, 0.399, 0.242]. Are any attention weights necessary at all? The previous setting with constrained window size suggests another follow-up: is it necessary to have any attention weights within this local window at all? A highly-efficient alternative is to have each head simply select a single value vector associated with a token in the window. Here, our implementation requires no explicit multiplication with a weight vector, as we can compute each head’s representation by simply indexing into the value vectors. Mathematically, this is equivalent to convolving with a binary kernel (e.g., convolution with [1, 0, 0] is equivalent to indexing the left token representation). The third column of Table 5 shows that this indexing approach results in less than 1 BLEU drop across two datasets, which offers an interesting avenue for future efficiency improvements. Where should we add additional cross attention heads? Our experiments with cross attention so far have been limited to learning just a single head, as we have mainly been interested in minimal configurations. If we have a larger budget of cross attention heads, where should we put them? Is it better to have more cross attention heads in the last layer in the decoder (and no heads anywhere else), or to distribute them across multiple layers 7697 1 2 3 4 Number of learned heads 27.5 28.5 29.5 30.5 BLEU WMT2016 En-Ro Multiple heads same layer Single head across layers Figure 7: Adding more cross attention heads in the same layer helps less than adding individual heads across different layers. of the decoder? Experiments on the WMT16 EnRo dataset19 (Figure 7) indicate that distributing learned heads over multiple layers leads to significantly better BLEU than adding all of them to the same layer. 7 Related Work Attention mechanisms were first introduced to augment vanilla recurrent models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015; Chorowski et al., 2015; Wu et al., 2016; Miceli Barone et al., 2017) but have become the featured component of the state-of-the-art Transformer architecture (Vaswani et al., 2017) for NMT. We review recent research that focuses on analysing and improving multiheaded attention, and draw connections to our work. 
The intuitive advantage of MHA is that different heads can focus on different types of information, all of which will eventually be helpful for translation. Voita et al. (2019) find that some heads focus on adjacent tokens to the query (mirroring our analysis in Section 2), while others focus on specific dependency relations or rare tokens. Correia et al. (2019) discover that some heads are sensitive to subword clusters or interrogative words. Tang et al. (2018) shows that the number of MHA heads affects the ability to model long-range dependencies. Michel et al. (2019) show that pruning many heads from a pretrained model does not significantly impact BLEU scores. Similarly, Voita et al. (2019) prune many encoder self-attention heads without degrading BLEU, while Tang et al. (2019) further 19We used the smaller IWSLT En-De architecture for this experiment. simplify the Transformer by removing the entire encoder for a drop of three BLEU points. In contrast to existing literature on model pruning, we train our models without learned attention heads instead of removing them post-hoc. There have been many efforts to modify MHA in Transformers. One such direction is to inject linguistic knowledge through auxiliary supervised tasks (Garg et al., 2019; Pham et al., 2019). Other work focuses on improving inference speed: Yang et al. (2018) replace decoder self-attention with a simple average attention network, assigning equal weights to target-side previous tokens.20 Wu et al. (2019) also speed up decoding by replacing selfattention with convolutions that have time-step dependent kernels; we further simplify this work with our fixed convolutional kernels in Section 6. Cui et al. (2019) also explore fixed attention while retaining some learned parameters, and Vashishth et al. (2019) show that using uniform or random attention deteriorates performances on paired sentences tasks including machine translation. Other work has also explored modeling locality (Shaw et al., 2018; Yang et al., 2018). 8 Conclusion In this paper, we present “hard-coded” Gaussian attention, which while lacking any learned parameters can rival multi-headed attention for neural machine translation. Our experiments suggest that encoder and decoder self-attention is not crucial for translation quality compared to cross attention. We further find that a model with hard-coded selfattention and just a single cross attention head performs slightly worse than a baseline Transformer. Our work provides a foundation for future work into simpler and more computationally efficient neural machine translation. Acknowledgments We thank the anonymous reviewers for their thoughtful comments, Omer Levy for general guidance and for suggesting some of our efficiency experiments, the UMass NLP group for helpful comments on earlier drafts, Nader Akoury for assisting with modifications to his Transformer codebase, and Kalpesh Krishna for advice on the structure of the paper. 20In preliminary experiments, we find that using uniform distributions for encoder self-attention decreases BLEU. This result is similar to the indexing implementation we describe in Section 6.3. 7698 References Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269–1281, Florence, Italy. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. 
Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 577–585. Curran Associates, Inc. Gonc¸alo M. Correia, Vlad Niculae, and Andr´e F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174–2184, Hong Kong, China. Association for Computational Linguistics. Hongyi Cui, Shohei Iida, Po-Hsuan Hung, Takehito Utsuro, and Masaaki Nagata. 2019. Mixed multihead self-attention for neural machine translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 206–214, Hong Kong. Association for Computational Linguistics. Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4452–4461, Hong Kong, China. Association for Computational Linguistics. Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In International Conference on Learning Representations. Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543–3556, Minneapolis, Minnesota. Association for Computational Linguistics. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA. Association for Computational Linguistics. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173–1182, Brussels, Belgium. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Antonio Valerio Miceli Barone, Jindˇrich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep architectures for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 99–107, Copenhagen, Denmark. Association for Computational Linguistics. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In H. Wallach, H. 
Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 14014– 14024. Curran Associates, Inc. A¨aron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016, page 125. ISCA. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. 7699 Thuong Pham, Dominik Mach´aˇcek, and Ondˇrej Bojar. 2019. Promoting the knowledge of source syntax in transformer nmt is not needed. Computaci´on y Sistemas, 23. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Rico Sennrich. 2017. How grammatical is characterlevel neural machine translation? assessing MT quality with contrastive translation pairs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376–382, Valencia, Spain. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 371–376, Berlin, Germany. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464– 468, New Orleans, Louisiana. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Gongbo Tang, Mathias M¨uller, Annette Rios, and Rico Sennrich. 2018. Why self-attention? a targeted evaluation of neural machine translation architectures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4263–4272, Brussels, Belgium. Association for Computational Linguistics. Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2019. Understanding neural machine translation by simplification: The case of encoder-free models. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 1186–1193, Varna, Bulgaria. INCOMA Ltd. Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across nlp tasks. arXiv preprint arXiv:1909.11218. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. Association for Computational Linguistics. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4449–4458, Brussels, Belgium. Association for Computational Linguistics. A Mixed position for hard-coded self-attention works the best Enc-Config Dec-Config BLEU (l, l) (l, l) 27.4 (l, l) (c, c) 27.8 (l, l) (l, c) 28.1 (l, r) (l, c) 30.3 Table 6: Search for best hard-coded configuration for hard-coded self-attention. ‘l’ stands for left, focusing on i −1, ‘r’ for i + 1 and ‘c’ for i. Middle layers are (l,r) for encoder and (l,c) for decoder. Each cell shows settings we used in the lowest and highest layer. B Memory efficiency and inference speedups Table 7 summarizes the results of our profiling experiments on IWSLT16 En-De development set. 7700 40%-60% 60%-80% 80%-100% Percent off Diagonal 15 20 25 30 BLEU IWSLT En-De Encoder Self-Attention BASE HC-SA SH-X HC-ALL 40%-60% 60%-80% 80%-100% Percent off Diagonal 20 25 30 35 40 BLEU IWSLT En-De Decoder Self-Attention BASE HC-SA SH-X HC-ALL 40%-60% 60%-80% 80%-100% Percent off Diagonal 20 25 30 35 40 BLEU IWSLT De-En Encoder Self-Attention BASE HC-SA SH-X HC-ALL Figure 8: Off-diagonal analysis for IWSLT En-De/De-En self-attention Model BLEU sent/sec tokens/batch BASE 30.0 43.1 14.1k HC-SA 30.3 44.0 15.1k SH-X 28.1 50.1 16k BASE/-SA 22.8 46.1 16.1k SH-X/-SA 14.9 54.9 17k Table 7: Decoding speedup (in terms of sentences per second) and memory improvements (max tokens per batch) on IWSLT16 En-De for a variety of models. The last two rows refer to BASE and SH-X configurations whose self-attention is completely removed. C Off-diagonal Analysis In addition to IWSLT16 De-En decoder selfattention analysis, we provide here the off-diagonal analysis results on IWSLT16 En-De encoder and decoder self-attention, and IWSLT16 De-En encoder self-attention in Figures 8.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7701–7710 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7701 In Neural Machine Translation, What Does Transfer Learning Transfer? Alham Fikri Aji1 Nikolay Bogoychev1 Kenneth Heafield1 Rico Sennrich2,1 1School of Informatics, University of Edinburgh 2Department of Computational Linguistics, University of Zurich {a.fikri, n.bogoych, kenneth.heafield, rico.sennrich}@ed.ac.uk Abstract Transfer learning improves quality for lowresource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs. 1 Introduction Transfer learning is a common method for lowresource neural machine translation (NMT) (Zoph et al., 2016; Dabre et al., 2017; Qi et al., 2018; Nguyen and Chiang, 2017; Gu et al., 2018b). However, it is unclear what settings make transfer learning successful and what knowledge is being transferred. Understanding why transfer learning is successful can improve best practices while also opening the door to investigating ways to gain similar benefits without requiring parent models. In this paper, we perform several ablation studies on transfer learning in order to understand what information is being transferred. We apply a black box methodology by measuring the quality of end-to-end translation systems. Typically, our experiments have a baseline that was trained from scratch, an off-the-shelf transfer learning baseline and simplified versions of the transfer learning scheme. If a simplified version recovers some of the quality gains of full transfer learning, it suggests that the simplified version has captured some of the information being transferred. Since information may be transferred redundantly, our claims are limited to sufficiency rather than exclusivity. Transferring word embeddings is not straightforward since languages have different vocabularies. Zoph et al. (2016) claimed that vocabulary alignment is not necessary, while Nguyen and Chiang (2017) and Kocmi and Bojar (2018) suggest a joint vocabulary. We find that the vocabulary has to be aligned before transferring the embedding to achieve a substantial improvement. Transfer learning without the embedding or with vocabulary mismatches is still possible, but with lower quality. Conversely, transferring only the word embeddings can be worse than transferring nothing at all. A rudimentary model of machine translation consists of alignment and token mapping. We hypothesize that these capabilities are transferred across languages. To test this, we experiment with transferring from auto-encoders that learn purely diagonal alignment and possibly language modelling. 
To remove the effect of language modelling, we train auto-encoders on random strings sampled uniformly. However, all of these scenarios still have simple copying behaviour, especially with tied embeddings. Therefore, we also attempt a bijective vocabulary mapping from source to target, forcing the model to learn the mapping as well. Curiously, parents trained with bijectively-mapped vocabularies transfer slightly better to children. We then investigate transfer learning for highresource children, where the goal is reduced training time since they mainly attain the same quality. Transfer learning primarily replaces the warm-up 7702 period, though only real language parents yielded faster training. 2 Related Work Transfer learning has been successfully used in lowresource scenarios for NMT. Zoph et al. (2016) gain 5 BLEU points in Uzbek–English by transferring from French–English. Their style of transfer learning copies the entire model, including word embeddings, ignoring the vocabulary mismatch between parent and child. They used separate embeddings for source and target language words, whereas tied embeddings (Press and Wolf, 2017; Vaswani et al., 2017) have since become the de-facto standard in low-resource NMT. Tied embeddings provide us with the opportunity to revisit some of their findings. In Section 5, we find an English–English copy model does work as a parent with tied embeddings, whereas Zoph et al. (2016) reported no gains from a copy model with untied embeddings. Methods to cope with vocabulary mismatch have improved since Zoph et al. (2016). Kocmi and Bojar (2018) suggest that a shared vocabulary between the parent language and the child is beneficial, though this requires knowledge of the child languages when the parent is trained. Addressing this issue, Gheini and May (2019) proposed a universal vocabulary for transfer learning. Their universal vocabulary was obtained by jointly training the sub-word tokens across multiple languages at once, applying Romanisation to languages in non-Latin scripts. However, unseen languages may only be representable in this universal vocabulary with a very aggressive and potentially sub-optimal subword segmentation. Orthogonally, Kim et al. (2018); Lample et al. (2018); Artetxe et al. (2018); Kim et al. (2019) use bilingual word embedding alignment to initialise the embedding layer to tackle low resource language pairs. In Section 4.2, we compare a variety of vocabulary transfer methods. Prior work (Dabre et al., 2017; Nguyen and Chiang, 2017) stated that a related language is the best parent for transfer learning. Lin et al. (2019) explore options to choose the best parent and conclude that the best parent language might not necessarily be related but is instead based on external factors such as the corpus size. In Section 3, we try two parent models in both directions to set baselines for the rest of the paper; an exhaustive search is not our main purpose. Another approach to low-resource (or even zeroshot) NMT is through multilingual models (Johnson et al., 2016), which is similar to training the parent and child simultaneously. A related idea creates meta-models with vocabulary residing in a shared semantic space (Gu et al., 2018a,b). If there is more parallel data with a third language, often English, then pivoting through a third language can outperform direct translation (Cheng et al., 2016). 
This approach requires enough source– pivot and target–pivot parallel data, which is arguably hard in many low resource scenarios, such as Burmese, Indonesian, and Turkish. Orthogonal to transfer learning, Lample et al. (2018) and Artetxe et al. (2018) have proposed a fully zero-shot approach for low resource languages that relies on aligning separately-trained word embeddings to induce an initial bilingual dictionary. The dictionary is then used as the basis for a translation model. However, these methods do not generalise to arbitrary language pairs (Søgaard et al., 2018). Moreover, our setting presumes a small amount of parallel data in the low-resource pair. 3 Baseline Transfer Learning We start with arguably the simplest form of transfer learning: train a parent model then switch to training with the child’s dataset following Zoph et al. (2016). We attempt to initialise the embedding vectors of the same tokens from the parent to the child. We later investigate different approaches to transferring the embeddings. As transfer learning requires a parent model, we start by sweeping different high-resource languages for the parent model to set a baseline. Choosing a parent language pair is one of the first issues to solve when performing a transferlearning experiment. However, this is not a simple task. Prior work (Dabre et al., 2017; Nguyen and Chiang, 2017) suggest that a related language is the best option, albeit related is not necessarily well defined. Recently, Lin et al. (2019) performed a grid-search across various parent languages to determine the best criteria for selecting the optimal parent when performing transfer learning. Their work showed that the best language parents might also be determined by external factors such as the corpus size, on top of the language relatedness. According to the BLEU score, the difference between various parents is usually not that significant. We first explore four potential parents: German 7703 and Russian from/to English. From each of them, we transfer the parameters to our low-resource language pair of {Burmese, Indonesian, Turkish} to English. Before presenting the results, we lay out the experimental setup used for the rest of the paper. 3.1 High-Resource Datasets We use German-English and Russian-English datasets for our parent models. Our GermanEnglish dataset is taken from the WMT17 news translation task (Bojar et al., 2017). Our RussianEnglish is taken from the WMT18 task (Bojar et al., 2018). For both pairs, we preprocess the input with byte-pair encoding (Sennrich et al., 2016b). 3.2 Low-Resource Datasets We use the following datasets: Burmese–English: For our My→En parallel data, we used 18k parallel sentences from the Asian Language Treebank (ALT) Project (Ding et al., 2018, 2019) collected from news articles. Indonesian–English: Id→En parallel data consists of 22k news-related sentences, which are taken from the PAN Localization BPPT corpus.1 This dataset does not have a test/validation split. Hence we randomly sample 2000 sentences to use as test and validation sets. We augment our data by backtranslating (Sennrich et al., 2016a) News Crawl from 2015. Our total training set (including the back-translated sentences) consists of 88k pairs of sentences. Turkish–English: Tr→En data comes from the WMT17 news translation task (Bojar et al., 2017). This data consists of 207k pairs of sentences. Similar to Id→En, we add a back-translation corpus from News Crawl 2015. Our total training data consists of 415k sentence pairs. 
For all language pairs, we use byte-pair encoding (Sennrich et al., 2016b) to tokenise words into subword units. 3.3 Training Setup We use a standard transformer-base architecture with six encoder and six decoder layers for all experiments with the default hyperparameters (Vaswani et al., 2017). Training and decoding use Marian (Junczys-Dowmunt et al., 2018), while evaluation uses SacreBLEU (Post, 2018). 1http://www.panl10n.net/english/ OutputsIndonesia2.htm 3.4 Results BLEU Parent My→En Id→En Tr→En 4.0 20.6 19.0 En→De 17.5 27.5 20.2 En→Ru 17.8 27.4 20.3 De→En 17.3 26.3 20.1 Ru→En 17.1 26.8 20.6 Table 1: Transfer learning performance across different language parents. Our results on Table 1 show that there is no clear evidence that one parent is better than another. Whether the non-English languages share a script or English is on the same side does not have a consistent impact. The main goal of this section was to set appropriate baselines; we primarily use English→German and German→English as the parents. 4 Transferring Embedding Information Parent and child languages have a different vocabulary, so embeddings are not inherently transferable. We investigate what is transferred in the embeddings and evaluate several vocabulary combination methods. 4.1 Are the Embeddings Transferable? We first explore whether the embedding matrix contains any transferable information. We divide the model into embedding parameters and everything else: inner layers. Table 2 shows what happens when these parts are or are not transferred. Our low-resource languages achieve better BLEU even if we only transfer the inner layers. In contrast, only transferring the embeddings is not beneficial, and sometimes it is even harmful to the performance. Finally, transferring all layers yields the best performance. To further investigate which part of the network is more crucial to transfer, we took the bestperforming child then reset either the embeddings or inner layers and restarted training. We explore whether the model is capable of recovering the same or comparable quality by retraining. We can look at this experiment as ‘self’ transfer learning. Results are shown in Table 3. When the inner layers are reset, self-transfer performs poorly (close to the quality without transfer learning at all), even though the embeddings are properly transferred. 7704 BLEU Transferring De→En parent En→De parent Emb. Inner My→En Id→En Tr→En My→En Id→En Tr→En avg. Y Y 17.8 27.4 20.3 17.5 27.5 20.2 21.7 N Y 13.6 25.3 19.4 10.8 24.9 19.3 18.3 Y N 3.0 18.2 19.1 3.4 18.8 18.9 13.7 N N 4.0 20.6 19.0 4.0 20.6 19.0 14.5 Table 2: Transfer learning performance by only transferring parts of the network. Inner layers are the nonembedding layers. N = not-transferred. Y = transferred. BLEU Transfer My→En Id→En Tr→En baseline (no transfer) 4.0 20.6 19.0 transfer, train 17.8 27.4 20.3 transfer, train, reset emb, train 13.3 25.0 20.0 transfer, train, reset inner, train 3.6 18.0 19.1 Table 3: Investigating the model’s capability to restore its quality if we reset the parameters. We use En→De as the parent. Conversely, the models can somewhat restore their quality even if we reset the embedding layer. This result further verifies that transferring the inner layers is the most critical aspect of transfer learning. We conclude that transferring the inner layers is critical to performance, with far more impact than transferring the embeddings. However, the embedding matrix has transferable information, as long as the inner layers are included. 
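A minimal sketch of the transfer settings in Table 2, under the assumption that parent and child checkpoints are plain name-to-array (e.g. NumPy) dictionaries; is_embedding is a hypothetical predicate on parameter names, and real Marian checkpoints need toolkit-specific handling that is not shown here.

```python
def is_embedding(name):
    """Hypothetical predicate: does this parameter belong to the (tied) embedding layer?"""
    return "emb" in name.lower()

def transfer(parent, child_init, transfer_emb=True, transfer_inner=True):
    """Initialise a child model from a parent checkpoint.

    parent, child_init: dicts mapping parameter names to arrays.
    transfer_emb / transfer_inner correspond to the Y/N settings in Table 2.
    """
    child = {}
    for name, value in child_init.items():
        use_parent = transfer_emb if is_embedding(name) else transfer_inner
        if use_parent and name in parent and parent[name].shape == value.shape:
            child[name] = parent[name].copy()  # parameter taken from the parent
        else:
            child[name] = value                # kept at the child's fresh initialisation
    return child
```

The reset experiments of Table 3 amount to calling the same function with the trained child as parent and a fresh random initialisation as child_init, keeping only the part that is not being reset.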
4.2 How to Transfer the Embeddings Mixed recommendations exist on how to transfer embeddings between languages with different vocabularies. We compare methods from previous work, namely random assignment (Zoph et al., 2016) and joint vocabularies (Nguyen and Chiang, 2017) with two additional embedding assignment strategies based on the frequency and token matching as a comparison. In detail, we explore: • Exclude Embedding: We do not transfer the embeddings at all. As such, we show that transfer learning works without transferring the embedding layer. In the present experiment, this method acts as one of the baselines. • Frequency Assignment: We can transfer the embedding information regardless of the vocabulary mismatch. However, the toolkit sorts the words based on their frequency; therefore, embeddings are also transferred in that particular order. Regardless, we can determine whether word frequency information is transferred. • Random Assignment: Zoph et al. (2016) suggest that randomly assigning a parent word embedding to each child word is sufficient, relying on the model to untangle the permutation. This approach is simple and languageagnostic, thus universally applicable. We shuffle the vocabulary to achieve a random assignment. • Joint Vocabulary: Nguyen and Chiang (2017) suggest that it is better to use a shared vocabulary between the parent and child language. This can be obtained by training a joint BPE token. To achieve this, we transfer the word embedding information of the common tokens. Since tied embeddings are used, we share the same vocabulary between the target and source of both the parent and the child language. One drawback of this technique is that we must prepare the vocabulary in advance. Therefore, switching the parent or the child might require us to re-train the model. • Token Matching: We assign the embeddings with the same token first and randomise the rest. This approach is designed to allow some word embeddings to be transferred correctly without the need to re-train the parent with every experiment, as in the case of joint vocabulary. The different strategies are illustrated in Figure 1. Prior experiments in Section 4.1 demonstrate that we can apply transfer learning even if we only transfer the inner layers. Curiously, random assignment and frequency assignment are not better than excluding the embeddings, except for Burmese to 7705 a b c d x y a b (a) Exclude embedding a b c d x y a b (b) Freq. assignment a b c d x y a b (c) Random assignment a b c d x y a b (d) Token Match a b c d a b c d x y x y (e) Joint vocab Figure 1: Illustration of various strategies on how to transfer the embedding vector. BLEU De→En parent En→De parent Embedding My→En Id→En Tr→En My→En Id→En Tr→En avg. 4.0 20.6 19 4.0 20.6 19 14.5 Exclude embedding 13.6 25.3 19.4 10.8 24.9 19.3 18.3 Frequency assign 14.2 24.4 19.4 13.9 24.3 19.4 19.2 Random assign 13.9 24.6 19.2 13.8 23.9 19.3 19.0 Token matching 17.8 27.4 20.3 17.5 27.5 20.2 21.7 Joint vocabulary 18.5 27.5 20.9 18.5 28.0 19.6 22.0 Table 4: Transfer learning performance with different ways to handle the embedding layer. English transferred from English to German. Therefore, the information in the embedding is lost when transferred to the incorrect token. From these results, we conclude that the model is incapable of untangling the embedding permutation as stated by Zoph et al. (2016). Transfer learning yields better results when we attempt to transfer the embeddings to the correct tokens. 
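The strategies above differ only in how a row of the parent embedding matrix is assigned to each child vocabulary item. Below is a minimal sketch of token matching versus random assignment, assuming vocabularies are plain token-to-index dicts; the small Gaussian initialisation for unmatched tokens is an assumption, since the exact scheme is not specified.

```python
import numpy as np

def assign_embeddings(parent_emb, parent_vocab, child_vocab, seed=0, match_tokens=True):
    """Build a child embedding matrix from a parent one.

    parent_emb:   [|parent vocab|, dim] array.
    *_vocab:      dicts mapping (sub)word token -> row index.
    match_tokens: True  -> token matching: copy rows for shared tokens,
                           randomise the rest.
                  False -> random assignment: shuffle parent rows onto child
                           tokens, as in Zoph et al. (2016).
    """
    rng = np.random.default_rng(seed)
    dim = parent_emb.shape[1]
    # Start every child token from a small random initialisation (an assumption).
    child_emb = rng.normal(scale=0.01, size=(len(child_vocab), dim))
    if match_tokens:
        for token, child_idx in child_vocab.items():
            parent_idx = parent_vocab.get(token)
            if parent_idx is not None:
                child_emb[child_idx] = parent_emb[parent_idx]
    else:
        shuffled = rng.permutation(parent_emb.shape[0])
        for token, child_idx in child_vocab.items():
            child_emb[child_idx] = parent_emb[shuffled[child_idx % parent_emb.shape[0]]]
    return child_emb
```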
In the joint vocabulary setting, not every token is observed in the parent language dataset; therefore, only a section of the embedding layer is correctly trained. However, we still observe a significant improvement over the random and frequency-based assignment. We can also transfer the embedding vectors by matching and assigning the word embedding with the same tokens. Vocab matching achieves comparable results to joint vocabulary, except for the lowest-resource language, Burmese. Therefore, this simple matching can be used as a cheaper alternative over a joint vocabulary. On top of that, this approach is more efficient as we do not transfer and wastefully reserve extra memory for tokens that will not be seen in the child language. These results suggest that word information stored in the embedding layer is transferable, as long as the vectors are assigned correctly. Therefore, better ways of handling the embedding layer transfer are joint BPE and token matching, as they further improve the performance of the child language pair. 5 Transferring Structural Information To understand what information is being transferred with transfer learning, we test the parent model’s performance on the child language without any additional training. When a pre-trained model is transferred to another language pair, the model has not yet seen the child language vocabulary. When presented with an input in a new language, the model is unable to translate correctly. However, as we can see in Table 5, the model manages to perform diagonal alignment properly, albeit it is mostly copying the input (on average of 75% of the time). Based on this observation, we see that fallback copying behaviour, including monotonic alignment, is transferred. This can be useful for named entity translation (Currey et al., 2017). To test our claim, we prepare parents that implicitly learn to copy or transform input tokens diagonally. We can create a copy sequence model (or autoencoder) model by giving the model the same sentences for both source and target. We pick an English monolingual dataset. We also use a Chinese monolingual corpus to explore whether the chosen 7706 Parent Shared Example En→De Id→En src: Bank Mandiri bisa masuk dari mikro hingga korporasi . out: Bank Mandiri bisa memperingatkan dari cen@@ hingga korporasi . alignment: 0-0 1-1 3-3 5-5 6-6 7-7 9-2 9-4 9-8 9-9 De→En Id→En src: Bank Mandiri bisa masuk dari mikro hingga korporasi . out: seperti Mandiri bisa masuk a mikro hingga korporasi . alignment: 2-2 3-0 3-1 3-3 3-9 5-5 6-6 7-7 7-8 9-4 Table 5: Output example of transferred model without fine tuning. The model performs monotonic alignment. monolingual language matters. Besides, we can artificially create a random sequence for the training set. The random sequence is useful to determine whether any language-specific information is being transferred, as such information is absent in a random sequence. To simulate the translation behaviour better, we also prepare a substitution parallel corpus. We transform every token into another based on a predetermined 1:1 mapping. We create a substitution corpus for both the English and the synthetic corpus. With tied embeddings, the substitution corpus should help the model translate one token into another, instead of just copying. Table 6 illustrates the 6 monolingual/synthetic parents that we use for this experiment. We perform transfer learning experiments from every monolingual and synthetic parent to all three child languages, as summarised in Table 7. 
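A minimal sketch of how the copy, substitution and random parents can be built from tokenised sentences; using a uniform random permutation of the vocabulary as the bijective mapping is one simple choice and an assumption here.

```python
import random

def substitution_corpus(sentences, seed=1):
    """Build (source, target) pairs where the source is the target sentence with
    every token replaced through a fixed bijective (1:1) vocabulary mapping.
    With an identity mapping this degenerates to the copy (autoencoder) corpus."""
    vocab = sorted({tok for sent in sentences for tok in sent})
    permuted = vocab[:]
    random.Random(seed).shuffle(permuted)
    mapping = dict(zip(vocab, permuted))  # the bijective token mapping
    return [([mapping[tok] for tok in sent], sent) for sent in sentences]

def random_corpus(n_sentences, vocab_size, max_len=20, seed=1):
    """Uniformly sampled token sequences, used for the Rand->Rand style parents."""
    rng = random.Random(seed)
    return [[str(rng.randrange(vocab_size)) for _ in range(rng.randint(1, max_len))]
            for _ in range(n_sentences)]
```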
For comparison, we also provide the result of transfer learning with an actual translation model as a parent. We notice that there is no improvement in transfer learning for the Turkish model in terms of the final BLEU. However, upon further investigation, transfer learning has an impact on the convergence speed, thus signalling information being transferred. To measure this, we capture the validation BLEU score for Tr→En after 10k training steps. In general, transferring from any monolingual or synthetic parent yields better BLEU (or faster convergence for Turkish) compared to training from scratch. Although, the improvement is suboptimal when compared with transfer learning from a proper parent. However, we can use these gains to measure the information transferred in transfer learning. In general using monolingual English is better than using monolingual Chinese. In monolingual English, we can transfer the embedding information correctly with token matching. Therefore, consistent with our previous experiment, embedding information is transferred. Using a Chinese parent is better than using random sequences. Our random sequence is uniformly sampled independently for each token. Therefore, unlike a real monolingual corpus, learning language modelling from this random sequence is impossible. Thus, we conclude that the model transfers some statistical properties of natural languages. Transferring from a random sequence copy model yields better result compared to training the model from scratch. While the improvement is minimal, we can see that a na¨ıve model that performs copying is better as a model initialisation. Moreover, substitution sequence parent models perform better than their copying counterparts. We suspect that copy models with tied embeddings converge to a local optimum that is a poorer initialisation for other translation models, compared to the substitution models. Transfer learning with an actual NMT system as a parent still outperforms the monolingual and synthetic parents, albeit they are initially a copy model. We argue that the monolingual parents perform nearly perfectly at the copying task, and have perfect diagonal alignment, and therefore overfit to this artificial setting when used as a parent. 6 Transfer Learning for High-Resource Languages Transfer learning can be used to initialise a model even if final quality does not change. Compared to random initialisation, we argue that a pre-trained model functions as better initialisation. Therefore, since we initialise the model better, it should converge faster. This behaviour was already presented in Table 7, where the transferred model converges more rapidly. However, we should explore this behaviour in a setting where faster training matters more: when training high-resource language pairs. 7707 Parent Type Mono copy sequence src: Madam President , on a point of order . (En→En) tgt: Madam President , on a point of order . Mono substitution sequence src: Click write , ideologies rotate sful ECHO recommended struggle (EnS →En) tgt: Madam President , on a point of order . Mono copy sequence src: 保持点神秘感。 (Zh→Zh) tgt: 保持点神秘感。 Mono substitution sequence src:比赛漂亮家宝1503 知识产权 (ZhS →Zh) tgt: 保持点神秘感。 Random copy sequence src: 1 3 2 1 1 (Rand→Rand) tgt: 1 3 2 1 1 Random substitution sequence src: 2 4 3 2 2 (RandS →Rand) tgt: 1 3 2 1 1 Table 6: Monolingual and random parents with their sentence example. 
Table 7: Transfer learning performance (BLEU) on monolingual and synthetic parents. We also measure the validation BLEU of Tr→En after 10k updates.

Parent           My→En   Id→En   Tr→En   Tr (10k)
(no transfer)     4.0     20.6    19.0    14.3
De→En            17.8     27.4    20.3    20.2
En→En            10.4     23.3    18.5    16.0
EnS→En           12.3     23.8    19.0    16.5
Zh→Zh             8.3     22.5    18.8    16.3
ZhS→Zh           11.2     23.5    19.0    16.3
Rnd→Rnd           6.2     21.9    19.0    15.2
RndS→Rnd          7.9     22.0    19.3    15.1

For this experiment, we take an English-to-Russian model as a parent for an English-to-German model. We align the embeddings by matching BPE tokens instead of using a joint vocabulary, since the latter would require re-training the parent. We also attempt to exclude the embedding completely. These choices are practical in a real-world scenario, especially when we measure efficiency.

In Table 8, we show that transfer learning does not improve the model's final quality. However, we can see, both from the table and visually in Figure 2, that transfer learning speeds up convergence by up to 1.4x, assuming the parent model has been prepared beforehand.

Table 8: Transfer learning effect on the quality of a high-resource language pair. We also measure the time to reach a near-convergence level of 34 BLEU.

Parent                   BLEU   Num. steps to 34 BLEU
Baseline                 35.6   48k
  + no warm-up            0.0   –
En→EnS                   35.4   60k (0.8x faster)
En→Ru                    35.7   40k (1.2x faster)
  + token matching       35.7   34k (1.4x faster)
  + no warm-up           35.6   22k (2.1x faster)

In the early stage of training, the gradients produced are quite noisy, which is particularly harmful to the Transformer model (Popel and Bojar, 2018). Therefore, training Transformer models usually requires a precise warm-up setup. However, transfer learning can be used as a better initialisation, thus skipping the noisy early training. To further confirm this, we remove the learning rate warm-up to observe the impact of a pre-trained model. As shown in Figure 2, the pre-trained model remains capable of learning under more aggressive hyperparameters. On the other hand, the model without pre-training fails to learn. This result is congruent with the findings of Platanios et al. (2019), who found that warm-up in the Transformer can be removed with curriculum learning.

Figure 2: Transfer learning effect on the convergence of a high-resource system. Transfer learning removes the need for warm-up. (The plot tracks per-update validation BLEU for: Baseline, Baseline + no warm-up, En-En substitution, En-Ru, En-Ru + token matching, and En-Ru + token matching + no warm-up.)

7 Conclusion

We demonstrate that the internal layers of the network are the most crucial for cross-lingual transfer learning. The embeddings contain transferable information, as long as the vectors are mapped correctly and the inner layers are also transferred. While not as optimal, we can still perform transfer learning by excluding the embedding. In transfer learning, we can also transfer the alignment: transferred parents without fine-tuning will align the input diagonally and copy most of the tokens. We further demonstrate that transfer learning still functions with a simple copy model, even with an artificial dataset, albeit with reduced quality. From a theoretical perspective, our results indicate that while transfer learning is effective in our scenario, it performed less "transfer" than previously thought. Therefore, a promising research direction is to develop and assess improved initialisation methods that yield the benefits of model transfer more efficiently.
From a practical perspective, our results indicate that we can initialise models with a pre-trained model regardless of the parent language or vocabulary handling. With this perspective in mind, we can use transfer learning as a better initialisation, resulting in the child model having more stable gradients from the onset of training. Therefore, models can train and converge faster, which is useful in high-resource settings. With transfer learning, the model can be trained with more aggressive hyperparameters—such as removing the learning rate warm-up entirely—to further improve the convergence speed. This result further highlights the use of transfer learning as a better model initialisation. Acknowledgments This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http: //www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). Alham Fikri Aji is funded by the Indonesia Endowment Fund for Education scholarship scheme. Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727). References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169– 214, Copenhagen, Denmark. Association for Computational Linguistics. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics. Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1611.04928. 7709 Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156. Raj Dabre, Tetsuji Nakagawa, and Hideto Kazawa. 2017. An empirical study of language relatedness for transfer learning in neural machine translation. on Language, Information and Computation, page 282. Chenchen Ding, Hnin Thu Zar Aye, Win Pa Pa, Khin Thandar Nwet, Khin Mar Soe, Masao Utiyama, and Eiichiro Sumita. 2019. Towards Burmese (Myanmar) morphological analysis: Syllable-based tokenization and part-of-speech tagging. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 19(1):5. Chenchen Ding, Masao Utiyama, and Eiichiro Sumita. 2018. NOVA: A feasible and flexible annotation system for joint tokenization and part-of-speech tagging. 
ACM Transactions on Asian and LowResource Language Information Processing (TALLIP), 18(2):17. Mozhdeh Gheini and Jonathan May. 2019. A universal parent model for low-resource neural machine translation transfer. arXiv preprint arXiv:1909.06516. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018a. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 344–354, New Orleans, Louisiana. Association for Computational Linguistics. Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018b. Meta-learning for lowresource neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622–3631, Brussels, Belgium. Association for Computational Linguistics. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116– 121, Melbourne, Australia. Association for Computational Linguistics. Yunsu Kim, Yingbo Gao, and Hermann Ney. 2019. Effective cross-lingual transfer of neural machine translation models without shared vocabularies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1246– 1257, Florence, Italy. Association for Computational Linguistics. Yunsu Kim, Jiahui Geng, and Hermann Ney. 2018. Improving unsupervised word-by-word translation with language model and denoising autoencoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 862–868, Brussels, Belgium. Association for Computational Linguistics. Tom Kocmi and Ondˇrej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244–252, Belgium, Brussels. Association for Computational Linguistics. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics. Toan Q Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 296–301. 
Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1162–1172. Martin Popel and Ondˇrej Bojar. 2018. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43–70. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. 7710 Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535, New Orleans, Louisiana. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788, Melbourne, Australia. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575, Austin, Texas. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7711–7723 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7711 Learning a Multi-Domain Curriculum for Neural Machine Translation Wei Wang Google Research [email protected] Ye Tian Google Research [email protected] Jiquan Ngiam Google Brain [email protected] Yinfei Yang Google Research [email protected] Isaac Caswell Google Research [email protected] Zarana Parekh Google Research [email protected] Abstract Most data selection research in machine translation focuses on improving a single domain. We perform data selection for multiple domains at once. This is achieved by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches. Both the choice of features and the use of curriculum are crucial for balancing and improving all domains, including out-ofdomain. In large-scale experiments, the multidomain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training. 1 Introduction In machine translation (MT), data selection, e.g., (Moore and Lewis, 2010; Axelrod et al., 2011), has remained as a fundamental and important research topic. It has played a crucial role in domain adaptation by selecting domain-matching training examples, or data cleaning (aka denoising) by selecting high-quality examples. So far, the most extensively studied scenario assumes a single domain to improve. It becomes both technically challenging and practically appealing to build a large-scale multidomain neural machine translation (NMT) model that performs simultaneously well on multiple domains at once. This requires addressing research challenges such as catastrophic forgetting (Goodfellow et al., 2014) at scale and data balancing. Such a model can easily find potential use cases, i.e., as a solid general service, for downstream transfer learning, for better deployment efficiency, or for transfer learning across datasets. Unfortunately, existing single-domain dataselection methods do not work well for multiple domains. For example, improving the translation Static Dynamic Single domain Y Y noise Y Y Multi domain Y noise N N (Our Work) Table 1: Data selection and data mixing research in NMT. ‘Y’: There is previous research that studies this case. ‘N’: No previous research has studied this case. accuracy of one domain will often hurt that of another (van der Wees et al., 2017; Britz et al., 2017), and improving model generalization across all domains by clean-data selection (Koehn et al., 2018) may not promise optimization of a particular domain. Multiple aspects need to be considered for training a multi-domain model. This paper presents a dynamic data selection method to multi-domain NMT. Things we do differently from previous work in mixing data are the choice of instance-level features and the employment of a multi-domain curriculum that is additionally able to denoise. These are crucial for mixing and improving all domains, including outof-domain. We experiment with large datasets at different noise levels and show that the resulting models meet our requirements. 2 Related Work In MT, research that is most relevant to our work is data selection and data mixing, both being concerned with how to sample examples to train an MT model, usually for domain adaptation. 
Table 1 categorizes previous research by two aspects and shows where our work stands. These two aspects are: 1. Is the method concerned with a single domain or multiple domains? 2. Does the method use data statically or dynamically? 7712 Static data selection for a single domain. Moore and Lewis (2010) select in-domain data for n-gram language model (LM) training. It is later generalized by Axelrod et al. (2011) to select parallel data for training MT models. Chen and Huang (2016); Chen et al. (2016) use classifiers to select domain data. Clean-data selection (Koehn et al., 2019, 2018; Junczys-Dowmunt, 2018) reduces harmful data noise to improve translation quality across domains. All these works select a data subset for a single “domain”1 and treat the selected data as a static/flat distribution. Dynamic data selection for a single domain. Static selection has two shortcomings: it discards data and it treats all examples equally after selection. When data is scarce, any data could be helpful, even if it is out of domain or noisy2. Dynamic data selection is introduced to “sort” data from least in-domain to most in-domain. Training NMT models on data sorted this way effectively takes advantage of transfer learning. Curriculum learning (CL) (Bengio et al., 2009) has been used as a formulation for dynamic data selection. Domain curricula (van der Wees et al., 2017; Zhang et al., 2019) are used for domain adaptation. Model stacking (Sajjad et al., 2017; Freitag and Al-Onaizan, 2016) is a practical idea to build domain models. CL is also used for denoising (Kumar et al., 2019; Wang et al., 2018a,b), and for faster convergence and improved general quality (Zhang et al., 2018; Platanios et al., 2019). Wang et al. (2018a) introduce a curriculum for training efficiency. In addition to data sorting/curriculum, instance/loss weighting (Wang et al., 2017; Chen et al., 2017; Wang et al., 2019b) has been used as an alternative. CL for NMT represents the SOTA data-selection method, but most existing works target at a single “domain”, be it a specific domain or the “denoising domain”. Static data mixing for multiple domains. When mixing data from multiple domains, a fundamental challenge is to address catastrophic forgetting (Goodfellow et al., 2014)–training an NMT model to focus on one domain can likely hurt another (van der Wees et al., 2017; Britz et al., 1We treat denoising as a domain in the paper, inspired by previous works that treat data noise using domain adaptation methods, e.g., (Junczys-Dowmunt, 2018). 2We refer to data regularization (using more data) and to transfer learning (fine-tuning) to exploit both data quantity and quality, the idea behind dynamic data selection. See Appendix C. 2017). Britz et al. (2017) learn domain-discerning (or -invariant) network representation with a domain discriminator network for NMT. The methods, however, require that domain labels are available in data. Tars and Fishel (2018) cluster data and tag each cluster as multi-domain NMT training data, but the method treats data in each cluster as a flat distribution. Farajian et al. (2017) implement multi-domain NMT by on-the-fly data retrieval and adaptation per sentence, at increased inference cost. Most existing methods (or experiment setups) have the following problems: (i) They mix data statically. (ii) They don’t consider the impact of data noise, which is a source of catastrophic forgetting. 
(iii) Experiments are carried out with small datasets, without separate examination on the data regularization effect. (iv) They do not examine out-of-domain performamce. Automatic data balancing for multi-domains. (Wang et al., 2020) automatically learn to weight (flat) data streams of multi-languages (or "domains"). We perform dynamic data selection and regularization through a mulit-domain curriculum. Automatic curriculum learning. Our work falls under automatic curriculum construction (Graves et al., 2017) and is directly inspired by Tsvetkov et al. (2016), who learn to weight and combine instance-level features to form a curriculum for an embedding learning task, through Bayesian Optimization. A similar idea (Ruder and Plank, 2017) is used to improve other NLP tasks. Here, we use the idea for NMT to construct a multi-domain data selection scheme with various selection scores at our disposal. The problem we study is connected to the more general multiobjective optimization problem. Duh (2018) uses Bandit learning to tune hyper-parameters such as the number of network layers for NMT. More related work. Previously, catastrophic forgetting has mostly been studied in the continued-training setup (Saunders et al., 2019; Khayrallah et al., 2018), to refer to the degrading performance on the out-of-domain task when a model is fine-tuned on in-domain data. This setup is a popular topic in general machine learning research (Aljundi et al., 2019). Thompson et al. (2018) study domain adaptation by freezing subnetworks. Our work instead addresses forgetting in the data-balancing scenario for multi-domains. We use curriculum to generalize fine-tuning. 7713 S1 S2 S3 S3 S2 S1 S1 S3 S2 S2 S1 S3 (1) (2) (3) (5) S2 S1 S3 (4) Figure 1: Data order in single-domain curricula and a potential multi-domain curriculum. (1) A toy training dataset of 3 examples. Each example has three scores, representing relevance to three domains, grey/dark/white domains, respectively. The higher the bar the more relevant. (2) Grey-domain order. (3) Dark-domain order. (4) White-domain order. (5) A potential multi-domain data order. 3 Curriculum Learning for NMT We first introduce curriculum learning (CL) (Bengio et al., 2009), which serves as a formulation for SOTA single-domain dynamic data selection and which our method is built upon and generalizes. In CL, a curriculum, C, is a sequence of training criteria over training steps. A training criterion, Qt(y|x), at step t is associated with a set of weights, Wt(x, y),3 over training sentence pairs (x, y) in a parallel dataset D, where y is the translation for x. Qt(y|x) is a re-weighting of the original training distribution P(y|x): Qt(y|x) ∝Wt(x, y)P(y|x), ∀(x, y) ∈D (1) Hence, for T maximum training steps, C is a sequence: C = ⟨Q1, ..., Qt, ..., QT ⟩ (2) At step t, an online learner randomly samples a data batch from Qt to fine-tune model mt−1 into mt. Therefore, C corresponds to a sequence of models, ⟨m1, ..., mt, ..., M⟩. (3) M is the final model that the entire curriculum has been optimizing towards. Intermediate models, mt, serve as “stepping stones” to M, to transfer knowledge through them and regularize the training for generalization. A performance metric P(C) evaluates M on a development or test set, after training on C. 3As a preview, in our paper, Wt(x, y) uses uniform weights over selected examples and assigns zero weights for filtered examples, similar to a mask. 
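To make the formulation concrete, the sketch below draws batches from Qt given a hand-written sequence of re-weightings Wt for the toy grey-domain curriculum of Figure 1; the model update that turns mt-1 into mt at each step is left out.

```python
import random

def sample_batch(pairs, w_t, batch_size, rng=random):
    """Draw one batch from Q_t(y|x), which re-weights the empirical distribution
    P(y|x) by W_t(x, y); zero-weight pairs are effectively filtered out."""
    return rng.choices(pairs, weights=w_t, k=batch_size)

# Toy grey-domain curriculum over the three sentence pairs of Figure 1:
# the least in-domain example is dropped step by step.
pairs = ["S1", "S2", "S3"]
schedule = [
    [1 / 3, 1 / 3, 1 / 3],  # W_1: uniform over all pairs
    [1 / 2, 1 / 2, 0.0],    # W_2: S3 filtered out
    [1.0, 0.0, 0.0],        # W_3: only the most in-domain pair S1 remains
]
for t, w_t in enumerate(schedule, start=1):
    batch = sample_batch(pairs, w_t, batch_size=2)  # fine-tune m_{t-1} -> m_t on this batch
```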
Table 2: Curriculum examples characterized by re-weighting, Wt(x, y), over three steps, to stochastically order data to benefit a final domain. Each entry is the weight vector over (S1, S2, S3); discarded examples receive zero weight. (1) corresponds to data order Figure 1 (2). (2) corresponds to data order Figure 1 (5).

            W1                 W2              W3
(1)   (1/3, 1/3, 1/3)   (1/2, 1/2, 0)   (1, 0, 0)
(2)   (1/3, 1/3, 1/3)   (1/2, 1/2, 0)   (0, 1, 0)

In NMT, CL is used to implement dynamic data selection. First, a scoring function (Section 4.3) is employed to measure the usefulness of an example to a domain and sort data. Then mini-batch sampling, e.g., (Kocmi and Bojar, 2017), is designed to realize the weighting Wt, to dynamically evolve the training criteria Qt towards in-domain.

Figure 1 (1)-(4) illustrates the basic idea of the curriculum we use. (1) shows three sentence pairs, S1, S2, S3, each having three scores, respectively representing usefulness to three domains. A grey-domain training curriculum, for example, relies on the data order in (2) and gradually discards the least useful examples according to Wt(x, y) (Eq. 1) in Table 2 (1): at step 1, the learner uniformly samples from all examples (W1), producing model m1. In step 2, the least-in-domain S3 is discarded (zero weight) by W2, so we sample uniformly from the subset {S1, S2} to reach m2. We repeat this until reaching the final model M. In this process, sampling is uniform in each step, but in-domain examples (e.g., S1) are reused more over steps. Similarly, we can construct the dark-domain curriculum in Figure 1 (3) and the white-domain curriculum in Figure 1 (4).

4 Our Approach: Learning a Multi-Domain Curriculum

4.1 General Idea

The challenges in multi-domain/-task data selection lie in addressing catastrophic forgetting and data balancing. In Figure 1, while curriculum (2) moves a model in the grey-domain direction, this direction may not be positively consistent with the dark domain (Figure 1 (3)), causing dropped dark-domain performance. Ideally, a training example that introduces the least forgetting across all domains would have gradients that move the model in a common direction towards all domains. While this may not be feasible by selecting a single example, we would like the intuition to work in a data batch on average. Therefore, our idea is to carefully introduce per-example data-selection scores (called features) to measure "domain sharing", intelligently weight them to balance the domains of interest, and dynamically schedule examples to trade off between regularization and domain adaptation.

Figure 2: Learning a multi-domain curriculum. (The diagram shows the training data D scored by features f1(x, y), ..., fN(x, y), weighted by V = [v1, ..., vN] and aggregated as f(x, y) = V · F(x, y) to build the curriculum Ĉ(V); the curriculum fine-tunes the NMT model, the model is evaluated with metrics P1, ..., PK, and the optimizer proposes the next V.)

A method to realize the above idea has the following properties:
1. Features of an example reflect its relevance to domains.
2. Feature weights are jointly learned/optimized based on end model performance.
3. Training is dynamic, by gradually focusing on multi-domain relevant and noise-reduced data batches.
Furthermore, a viable multi-domain curriculum meets the following performance requirements: (i) It improves the baseline model across all domains. (ii) It simultaneously reaches (or outperforms) the peak performance of individual single-domain curricula. The above requires improvement over out-of-domain, too.

4.2 The Framework

Formally, for a sentence pair (x, y), let fn(x, y) ∈ R be its n-th feature that specifies how (x, y) is useful to a domain.
Suppose we are interested in K domains and each example has N features. For instance, each sentence pair of S1, S2, S3 in Figure 1 (1) has three features (N = 3), each for one domain (K = 3).4 We represent (x, y)’s features using a feature vector F(x, y) = 4But N does not necessarily equal K because we can introduce multiple features for one domain or a single feature for multiple domains. [f0(x, y), ..., fN−1(x, y)]. Given a weight vector V = [v0, ..., vN−1] for all sentence pairs, we compute an aggregated score f(x, y) = V · F(x, y) (4) for each sentence pair and sort the entire data in increasing order. We then construct a curriculum bC(V ) to fine-tune a warmed-up model, evaluate its performance and propose a next weight vector. After several iterations/trials, the optimal weight vector V ∗is the one with the best end performance: V ∗= arg max V P( bC(V )) (5) Figure 2 shows the framework. For the process to be practical and scalable, bC fine-tunes a warmedup model for a small number of steps. The learned V ∗can then eventually be used for retraining a final model from scratch. 4.3 Instance-Level Features We design the following types of features for each training example and instantiate them in Experiments (Section 5). NMT domain features (qZ) compute, for a pair (x, y), the cross-entropy difference between two NMT models: qZ (x, y)=log P (y|x; θZ)−log P (y|x; θbase) |y| (6) P (y|x; θbase) is a baseline model with parameters θbase trained on the background parallel corpus, P (y|x; θZ) is a Z-domain model with θZ by finetuning θbase on a small, Z-domain parallel corpus bDZ with trusted quality and |y| is the length of y. qZ discerns both noise and domain Z (Wang et al., 2019a). Each domain Z has its own bDZ. Importantly, Grangier (2019) shows that, under the Taylor approximation (Abramowitz and Stegun, 1964), qZ approximates the dot product between gradient, g(x, y; θbase), of training example (x, y) and gradient, g( bDZ, θbase), of seed data bDZ.5 Thus an example with positive qZ likely 5That is, according to Grangier (2019): qZ(x, y) × |y| = log P(y|x; θZ) −log P(y|x; θbase) ≈ λ g(x, y; θbase)⊤g( bDZ, θbase) (7) when θbase and θZ are close, which is the case for finetuning: θZ = θbase + λ g( bDZ, θbase). 7715 moves a model towards domain Z. For multiple domains, Z1, ..., ZK, selecting a batch of examples with qZk’s all being positive would move a model towards a common direction shared across multiple domains, which alleviates forgetting. The Z-domain feature qZ (x, y) can be easily generalized into a single multi-domain feature, qZ, for a set of domains Z: qZ (x, y)=log P (y|x; θZ)−log P (y|x; θbase) |y| (8) by simply concatenating all the seed parallel corpus bDZ from the constituent domains into bDZ and use it to fine-tune the baseline θbase into θZ. A benefit of qZ is scalability: using a single feature value to approximate (x, y)’s gradient consistency with the multiple domains at once. Simple concatenation means, however, domain balancing is not optimized as in Eq. 5. NLM domain features (dZ) (Moore and Lewis, 2010; van der Wees et al., 2017) compute Zdomain relevance of sentence x with neural language models (NLM), like qZ: dZ (x) = log P (x; ϑZ) −log P (x; ϑbase) |x| (9) where P(x; ϑbase) is an NLM with parameters ϑbase trained on the x half of the background parallel data, and P(x; ϑZ) is obtained by fine-tuning P(x; ϑbase) on Z-domain monolingual data. 
Although dZ may not necessarily reflect the translation gradient of an example under an NMT model, it effectively assesses the Z-domain relevance and, furthermore, allows us to include additional larger amounts of in-domain monolingual data. We do not use its bilingual version (Axelrod et al., 2011), but choose to consider only the source side, for simplicity. Cross-lingual embedding similarity feature (emb) computes the cosine similarity of a sentence pair in a cross-lingual embedding space. The embedding model is trained to produce similar representations exclusively for true bilingual sentence pairs, following Yang et al. (2019). BERT quality feature (BERT) represents quality scores from a fine-tuned multilingual BERT model (Devlin et al., 2018). We fine-tune a pre-trained BERT model6 on a supervised dataset with positive and negative translation pairs. 6We use the public cased 12 layers multilingual model: multi_cased_L-12_H-768_A-12 Algorithm 1: Bayesian optimization 1: H = ∅; # Trial history. 2: σ0 = GP; # Initialize surrogate model. 3: α = EI; # Initialize acquisition function. 4: i = 1; 5: while i ≤T do 6: Vi = arg maxV α(V ; σi−1, H); # Predict weights vector Vi by maximizing acquisition function. 7: p = P( bC(Vi)) by fine-tuning NMT on bC(Vi); 8: H = H ∪{(Vi, p)}; # Update trial history. 9: Estimate σi with H; 10: i = i + 1; 11: end while 12: return (V ∗, p∗) (∈H) w/ the best performance p∗. These features compensate each other by capturing the information in a sentence pair from different aspects: NLM features capture domain. NMT features additionally discern noise. BERT and emb are introduced for denoising, by transfering the strength of the data they are trained on. All these features are from previous research and here we integrate them to solve a generalized problem. 4.4 Performance Metric P Eq. 5 evaluates the end performance P( bC(V )) of a multi-domain curriculum candidate. We simply combine the validation sets from multi-domains into a single validation set to report the perplexity of the last model checkpoint, after training the model on bC(V ). The best multi-domain curriculum minimizes model’s perplexity (or maximizes its negative per Eq. 5) on the mixed validation set. We experiment with different mixing ratios. 4.5 Curriculum Optimization We solve Eq. 5 with Bayesian Optimization (BayesOpt) (Shahriari et al., 2016) as the optimizer in Figure 2. BayesOpt is derivative-free and can optimize expensive black-box functions, with no assumption of the form of P. It has recently become popular for training expensive machinelearning models in the “AutoML” paradigm. It consists of a surrogate model for approximating P( bC(V )) and an acquisition function for deciding the next sample to evaluate. The surrogate model evaluates bC(V ) without running the actual NMT training, by the Gaussian process (GP) priors over functions that express assumptions about P. The acquisition function depends on previous trials, as well as the GP hyper-parameters. The Expected Improvement (EI) criterion (Srinivas et al., 2010) is usually used as acquisition function. Algo7716 rithm 1 depicts how BayesOpt works in our setup. We use Vizier (Golovin et al., 2017) for Batched Gaussian Process Bandit, but open-source implementations of BayesOpt are easily available.7. 4.6 Curriculum Construction We pre-compute all features for each sentence pair (x, y) in training data and turn its features into a single score f(x, y) by Eq. 4, given a weight vector. 
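A minimal sketch of this pre-computation, assuming the individual feature scores have already been exported as a matrix with one row per sentence pair and one column per feature (the column names in the comment follow Section 4.3):

```python
import numpy as np

def aggregate(feature_matrix, v):
    """Eq. 4: f(x, y) = V . F(x, y), computed for all sentence pairs at once.

    feature_matrix: [num_pairs, N] array, e.g. columns d_N, d_T, q_N, q_T, BERT, emb.
    v:              [N] weight vector proposed by the optimizer.
    """
    return np.asarray(feature_matrix) @ np.asarray(v)

def curriculum_order(feature_matrix, v):
    """Indices of the training pairs sorted in increasing order of f(x, y),
    i.e. from least to most useful under the current weights."""
    return np.argsort(aggregate(feature_matrix, v))
```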
We then construct a curriculum by instantiating its re-weighting Wt(x, y) (Eq. 1). To that end, we define a Boolean, dynamic data selection function χf ρ(x, y; t) to check, at step t, if (x, y) ∈D belongs to the top ρ(t)-ratio examples in training data D sorted in increasing order of f(x, y), (0 < ρ ≤1). So χf ρ is a mask. Suppose n(t) examples are selected by χf ρ(x, y; t), the re-weighting will then be Wt(x, y) = 1/n(t) × χf ρ(x, y; t). (10) Filtered examples have zero weights and selected ones are uniformly weighted. We set ρ(t) = (1/2)t/H to decay/tighten over time8, controlled by the hyper-parameter H. During training, χf ρ(x, y; t) progressively selects higher f(x, y)scoring examples. In implementation, we integrate χf ρ(x, y; t) in the data feeder to pass only selected examples to the downstream model trainer; we also normalize f(x, y) offline to directly compare to ρ(t) online to decide filtering. As an example, the Wt(x, y) for the multi-domain curriculum order in Figure 1 (5) can look like Table 2 (2). 5 Experiments 5.1 Setup Data and domains. We experiment with two English→French training datasets: the noisy ParaCrawl data9 (290 million sentence pairs) and the WMT14 training data (38 million pairs). We use SentencePiece model (Kudo, 2018) for subword segmentation with a source-target shared vocabulary of 32,000 subword units. We evaluate our method with three “domains”: two specific domains, news and TED subtitles, and outof-domain. News domain uses the WMT14 news 7E.g.,https://github.com/tobegit3hub/ advisor 8When the training data is small, we can, in practice, let a model warm up before applying the schedule. 9https://paracrawl.eu testset (N14) for testing, and WMT12-13 for validation in early stopping (Prechelt, 1997). The TED domain uses the IWSLT15 testset (T15) for testing, and the IWSLT14 testset for validation. Out-of-domain performance is measured by two additional testsets, patent testset (PA) (2000 sentences)10 and WMT15 news discussion testset (D15). We report SacreBLEU11 (Post, 2018). Features. NMT features use the parallel data to train the baseline NMT models. The new-domaindiscerning NMT feature qN uses WMT10-11 (5500 pairs) as in-domain data bDN. The TED NMT feature qT uses the TED subtitle training data (22k pairs) as in-domain data bDT . NLM features use the English half of parallel data to train the baseline NLMs. The news-domain-discerning NLM feature dN uses the 28 million English sentences from WMT14. The TED subtitle NLM feature dT uses the English side of IWSLT15 indomain parallel training data. The training of the cross-lingual embedding model follows Yang et al. (2019) with a 3-layer transformer (Vaswani et al., 2017) (more details in Appendix A). For the BERT feature, we sample positive pairs from the same data to train the cross-lingual embedding model. The negatives are generated using the cross-lingual embedding model, via 10-nearest neighbor retrieval in the embedding space, excluding the true translation. We pick the nearest neighbor to form a hard negative pair with the English sentence, and a random neighbor to form another negative pair. We sample 600k positive pairs and produce 1.8M pairs in total. Model. We use LSTM NMT (Wu et al., 2016) as our models, but with the Adam optimizer (Kingma and Ba, 2015). The batch size is 10k averaged over 8 length-buckets (with synchronous training). NLM/NMT features uses 512 dimensions by 3 layers–NLM shares the same architecture as NMT by using dummy source sentences (Sennrich et al., 2016). 
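The selection schedule and mask of Section 4.6 can be sketched as a simple data feeder; this is a minimal illustration with uniform sampling inside the selected subset, not the distributed input pipeline actually used.

```python
import random

def rho(t, H, floor=0.0):
    """Selection ratio rho(t) = (1/2)^(t/H); the experiments additionally let it
    plateau (e.g. at 0.2), which the `floor` argument stands in for."""
    return max(0.5 ** (t / H), floor)

def curriculum_feeder(pairs, ranks, batch_size, max_steps, H, rng=random):
    """Yield, at every step t, a uniform batch from the top-rho(t) fraction of the data.

    ranks: offline-normalised quantile of f(x, y) for each pair, in [0, 1];
           the mask chi keeps pair i iff ranks[i] >= 1 - rho(t).
    """
    for t in range(1, max_steps + 1):
        threshold = 1.0 - rho(t, H)
        selected = [p for p, r in zip(pairs, ranks) if r >= threshold]
        yield t, rng.sample(selected, min(batch_size, len(selected)))
```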
The final models are of 1024 dimensions by 8 layers, trained for 55k max steps. Training on WMT data uses a dropout probability of 0.2. Transformer results are in Appendix B. Curriculum optimization. In Eq. 5 (Section 4.5), we launch 30 trials (candidate curricula). BayesOpt spends 25 trials in exploration 10Randomly sampled from www.epo.org 11Signature: BLEU+case.mixed+numrefs.1+ smooth.exp+tok.13a+version.1.4.2 7717 Curriculum N14 T15 PA D15 Avg P1: B 33.4 35.7 29.8 30.4 32.3 P2: bC6-feats 37.0 38.1 48.3 35.7 39.8 W1: B 38.039.2 37.9 45.6 34.5 39.0 (Wu et al., 2016) 39.2 – – – – W2: bC6-feats 39.3 38.8 46.1 36.1 40.1 Table 3: English→French multi-domain curriculum improves no-curriculum baseline (B) over all testsets. Avg: averaged score per row, for ease of reading. P: ParaCrawl data. W: WMT14 training data. BLEUs in italics are tokenized BLEU. Other scores are de-tokenized SacreBLEU. and the last 5 in exploitation. Each trial trains for 2k steps12 by fine-tuning a warmed-up model with the candidate curriculum. The curriculum decays (ρ(t)) from 100% and plateaus at 20% at step 2k. We simply and heuristically set a range of [0.0, 1.0] for all feature weights. We don’t normalize feature values when weighting them. 5.2 Results We evaluate if the multi-domain curriculum meets requirements (i) and (ii) in Section 4.1. 5.2.1 Compared to no curriculum We compare: • B: baseline that does not use curriculum learning. • bC6-feats: multi-domain curriculum with 6 features, dN, dT , qN, qT , BERT, emb, weights learned by BayesOpt. Table 3 shows bC6-feats improves B on all testsets, especially on noisy ParaCrawl–requirement (i) is met. It is important to note that our WMT baseline (W1) matches Wu et al. (2016) on N14, as shown by re-computed tokenized BLEU (italics). 5.2.2 Compared to single-domain curricula We examine the following individual curricula, by training NMT models with each, respectively: • CdN , uses news NLM feature dN (Eq. 9). • CdT , uses TED subtitle NLM feature dT . • CqN , uses news NMT feature qN (Eq. 6). • CqT , uses TED NMT feature qT . • CBERT, uses BERT quality feature. • Cemb, uses cross-lingual embedding feature. 122k is empirically chosen to be practical. We use a number of fine-tuning trials in Eq. 5. NMT training is expensive so we don’t want a trial to tune for many steps. NMT is very adaptive on domain data, so each trial does not need many steps. We find no significant difference among 1k, 2k, 6k. Curriculum N14 T15 PA D15 Avg P1: B 33.4 35.7 29.8 30.4 32.3 P3: CdN 34.7 36.2 32.6 32.6 34.0 P4: CdT 34.8 36.3 30.1 32.4 33.4 P5.1: CBERT 36.8 37.3 47.9 35.0 39.3 P5.2: Cemb 36.9 37.7 46.0 35.2 39.0 P6: CqN 36.8 37.1 47.7 34.9 39.1 P7: CqT 35.6 38.3 46.6 34.9 38.9 P2: bC6-feats 37.0 38.1 48.3 35.7 39.8 P2 – P* +0.1 -0.2 +0.4 +0.5 +0.2 W1: B 38.0 37.9 45.6 34.5 39.0 W3: CdN 38.3 38.1 39.1 35.1 37.7 W4: CdT 38.1 38.4 43.0 36.1 38.9 W5.1: CBERT 38.5 37.8 45.9 35.9 39.5 W5.2: Cemb 38.5 37.8 45.8 35.9 39.5 W6: CqN 37.8 38.0 45.9 35.3 39.3 W7: CqT 38.5 38.8 45.0 36.1 39.6 W2: bC6-feats 39.3 38.8 46.1 36.1 40.1 W2 – W* +0.8 0.0 +0.2 0.0 +0.3 Table 4: English→French multi-domain curriculum (P2, W2) vs. single-domain curricula (P3-7, W3-7). Frame boxes mark best per-testset BLEU (W*, P*) over all single-domain curricula. Bold color denotes multi-domain curriculum has best BLEU (W2-W* ≥0). In Table 4, frame boxes mark the best BLEUs (P* or W*) per column, across P3-P7 or W3-W7. The last column shows averaged BLEU over all testsets. Bold font indicates C6-feats matches or improves W*. 
As shown, C6-feats matches or slightly outperforms the per-domain curricula across testsets. Therefore, bC6-feats meets requirement (ii). 5.3 Ablation Studies 5.3.1 Features Strengths and weaknesses of a feature. Table 4 also reveals the relative strengths and weaknesses of each type of features. The peak BLEU (in a frame box) on each testset is achieved by one of CBERT/emb, CqN and CqT , less by NLM features dN, dT . This contrast seems bigger on the noisy ParaCrawl, but the NLM features do bring gains over B. Overall, CBERT/emb (P5, W5) perform well, attributed to their denoising power, but lose to the NMT features (P7, W7) on T15, due to lack of explicit capturing of domain. The NMT features seem to subtly compensate in domains, and the domain features in denoising, but working with other features improves the model. BERT and emb features. Both BERT and emb use knowledge external to the experiment setup. For a fair comparison to baselines and a better understanding of them, we drop them by building 7718 Curriculum N14 T15 PA D15 Avg P2: bC6-feats 37.0 38.1 48.3 35.7 39.8 P8: bC4-feats 36.6 38.1 46.7 35.5 39.2 W2: bC6-feats 39.3 38.8 46.1 36.1 40.1 W8: bC4-feats 38.9 38.9 46.5 36.1 40.1 Table 5: BERT and emb features positively contribute to bC6-feats on ParaCrawl (P). dN dT qN qT BERT emb 0 0.2 0.4 0.6 0.8 1 Feature Name Feature Value Paracrawl WMT Figure 3: BayesOpt learns to weight features adaptively on ParaCrawl and WMT, respectively. • bC4-feats, multi-domain curriculum that excludes BERT and emb and uses 4 features. Table 5 shows BERT and emb features in bC6-feats improve bC4-feats with ParaCrawl, adding to the intuition that they have a denoising effect. Learned feature weights. Figure 3 shows BayesOpt learns to weight features adaptively in bC6-feats on ParaCrawl (grey) and WMT (white), respectively. ParaCrawl is very noisy thus noise non-discerning features dN and dT do not have a chance to help, but their weights become stronger on the cleaner WMT training data. It is surprising that BERT feature is still useful to the WMT training. We hypothesize this may suggest BERT feature have additional strength to just denoising, or that data noise could be subtle and exist in cleaner data. 5.3.2 BayesOpt vs. random search We compare BayesOpt (BO) and Random Search (RS) (Bergstra and Bengio, 2012) to solve Eq. 5, as well as uniform weighting (Uniform). In Table 6, all improve baselines, especially on ParaCrawl (P). RS does surprisingly well on ParaCrawl, but BayesOpt appears better overall.13 5.3.3 Mixing validation sets Eq. 5 evaluates P using the concatenated validation set (Section 4.4). Table 7 shows that the newsvs-TED mixing ratios can affect the per-domain 13RS uses 30 trials, as BO (Section 5.1), so the results show their comparison given the same number of trials. Curriculum N14 T15 PA D15 Avg P1 : B 33.4 35.7 29.8 30.4 32.3 P2 : bC6-feats (BO) 37.0 38.1 48.3 35.7 39.8 P9 : bC6-feats (RS) 36.7 38.4 48.0 35.5 39.7 P10: bC6-feats (Uniform) 35.4 36.9 48.3 34.1 38.7 W1 : B 38.0 37.9 45.6 34.5 39.0 W2 : bC6-feats (BO) 39.3 38.8 46.1 36.1 40.1 W9 : bC6-feats (RS) 39.0 38.2 43.7 36.4 39.3 W10: bC6-feats (Uniform) 38.8 39.1 43.0 36.0 39.2 Table 6: On average, BayesOpt (BO) performs better than Random Search (RS) and uniform weighting (Uniform), for learning feature weights of a multi-domain curriculum. 
Figure 4: The multi-domain curriculum dynamically balances multi-domain-relevant and noise-reduced data, as validated by human ratings. (The plot tracks domain relevance for news and TED subtitles, together with the human quality rating, against the selection threshold 1-ρ(t); Section 4.6.)
BLEUs. For example, on ParaCrawl, when news sentences are absent from the validation set, N14 drops by 0.7 BLEU (P8 vs. P13). We use the four features as in bC4-feats in this examination.
5.3.4 Dynamic data balancing
We simulate dynamic data selection with a random sample of 2000 pairs from the WMT data and have human raters annotate each pair on a 0 (nonsense) to 4 (perfect) quality scale (following Wang et al. (2018b)). We sort the pairs by f(x, y) (Eq. 4). A threshold selects a subset of pairs, for which we average the respective NMT feature values as the domain relevance. Figure 4 shows that the multi-domain curriculum (bC6-feats) learns to dynamically increase quality and multi-domain relevance. Therefore, our idea (Section 4.1) works as intended. Furthermore, training seems to increase quality and domain relevance gradually and at different speeds, as determined by Eq. 5.
5.3.5 Weighting loss vs. curriculum
With the learned weights, we compute a weight for each example to sort data to form a curriculum. Alternatively, we could weight the cross-entropy loss for that sentence during training (Wang et al., 2017; Chen et al., 2017). Table 8 shows that the curriculum yields improvements over weighting the per-sentence loss, in particular on noisy training data, confirming previous findings (van der Wees et al., 2017).
Mixing Ratio | N14 | T15 | PA | D15 | Avg
P11: 1.0:0.0 | 36.3 | 37.8 | 47.3 | 35.3 | 39.2
P12: 0.8:0.2 | 36.4 | 38.2 | 47.7 | 35.4 | 39.4
P8: 0.5:0.5 | 36.6 | 38.1 | 46.7 | 35.5 | 39.2
P13: 0.0:1.0 | 35.9 | 38.1 | 47.0 | 35.2 | 39.1
W11: 1.0:0.0 | 39.1 | 38.6 | 46.4 | 36.0 | 40.0
W12: 0.8:0.2 | 39.0 | 38.7 | 46.3 | 35.7 | 39.9
W8: 0.5:0.5 | 38.9 | 38.9 | 46.5 | 36.1 | 40.1
W13: 0.0:1.0 | 39.1 | 38.6 | 46.4 | 36.0 | 40.0
Table 7: Guiding multi-domain curriculum learning by mixing validation sets. Experiments use 4 features as in bC4-feats.
Model | N14 | T15 | PA | D15 | Avg
P8: Curriculum | 36.6 | 38.1 | 46.7 | 35.5 | 39.2
P14: Weight Loss | 35.3 | 37.8 | 39.3 | 32.6 | 36.3
W8: Curriculum | 38.9 | 38.9 | 46.5 | 36.1 | 40.1
W14: Weight Loss | 38.6 | 37.6 | 45.7 | 35.3 | 39.3
Table 8: Forming a curriculum with learned weights performs better than weighting instance loss in training. Experiments use 4 features (as in bC4-feats).
5.3.6 In-domain fine-tuning
CqN and CqT each use a small in-domain parallel dataset, but we can simply fine-tune the final models on either dataset (+N, +T) or their concatenation (+N+T). Table 9 shows that bC6-feats can be further improved by in-domain fine-tuning14 and that both bC6-feats and its fine-tuning still improve the fine-tuned baselines, in particular on ParaCrawl.
5.4 Discussion: Feature Dependency
One potential issue with using multiple per-domain features (the qZ(x, y)'s in Eq. 6) is that scores are not shared across domains and linear weighting may not capture feature dependency. For example, we need two NMT features if there are two domains. We replace the two NMT features, qN and qT, in bC4-feats with a single two-domain feature qZ={N,T} (Eq. 8), but keep the two corresponding NLM features unchanged (so the new experiment has 3 features). Table 10 shows the multi-domain feature contributes slightly better than a linear combination of per-domain features (P19 vs. P8). The per-domain features, however, have the advantage of efficient feature weighting.
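To make the curriculum mechanics of Sections 5.3.4 and 5.3.5 concrete, the sketch below scores each example with a weighted combination of its instance-level features and, as training proceeds, draws batches from a shrinking top-scoring fraction ρ(t). It is a sketch under stated assumptions: the linear decay shape and the helper names are ours, and the 100%-to-20% endpoints are taken from the trial schedule quoted in Section 5.1; the schedule used for full training may differ.

```python
import random

def example_score(features, weights):
    # Linear combination of per-example feature values (in the spirit of Eq. 4).
    return sum(weights[name] * value for name, value in features.items())

def retained_fraction(step, start=1.0, floor=0.2, decay_steps=2000):
    # rho(t): starts at 100% of the data and plateaus at 20% by step 2k.
    # The linear shape is an assumption; the text only states the endpoints.
    if step >= decay_steps:
        return floor
    return start - (start - floor) * step / decay_steps

def curriculum_batch(examples, weights, step, batch_size, rng=random):
    """Draw a training batch from the top rho(t) fraction of examples by score."""
    ranked = sorted(examples,
                    key=lambda ex: example_score(ex["features"], weights),
                    reverse=True)
    keep = min(len(ranked),
               max(batch_size, int(len(ranked) * retained_fraction(step))))
    return rng.sample(ranked[:keep], min(batch_size, keep))
```

Weighting the per-sentence loss with the same scores (rows P14 and W14 in Table 8) would use an identical scoring function but no selection.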
In case of many features, learning to compress them seems to be an interesting future investigation. 14We fine-tune with SGD for 20k steps, with batch size 16, learning rate 0.0001. Model N14 T15 PA D15 Avg P15: B+N 35.8 37.1 41.2 32.8 36.7 P16: B+T 35.8 38.7 45.4 34.6 38.6 P17: B+N+T 35.9 38.7 44.8 34.4 38.4 P2 : bC6-feats (BO) 37.0 38.1 48.3 35.7 39.8 P18: bC6-feats +N+T 38.1 39.7 48.6 36.6 40.8 W15: B+N 38.7 37.4 46.4 34.6 39.3 W16: B+T 36.8 38.9 44.8 36.5 39.3 W17: B+N+T 38.6 39.1 46.1 35.8 39.9 W2 : bC6-feats (BO) 39.3 38.8 46.1 36.1 40.1 W18: bC6-feats +N+T 39.3 39.8 46.0 36.6 40.4 Table 9: The multi-domain curricula still bring improvements, even after models are fine-tuned on in-domain parallel data. +N: fine-tune on news parallel data bDN (Section 5.1); +T: fine-tune on TED parallel data bDT ; +N+T on concatenation. Model N14 T15 PA D15 Avg P8: per-dom. 36.6 38.1 46.7 35.5 39.2 P19: multi-dom. 36.6 38.6 46.8 35.9 39.5 Table 10: Multi-domain/task feature (Eq. 8) seems to contribute slightly better than linear combination of multiple perdomain features (Eq. 6). 6 Conclusion Existing curriculum learning research in NMT focuses on a single domain. We present a multidomain curriculum learning method. We carefully introduce instance-level features and learn a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches. End-to-end experiments and ablation studies on large datasets at different noise levels show that the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training, on in-domain and out-ofdomain testsets. Acknowledgments The authors would like to thank David Grangier for Eq. 7 and derivation, the three anonymous reviewers for their insightful reviews, Yuan Cao for his technical suggestions, Jason Smith, Markus Freitag, Pidong Wang and Reid Pryzant for comments on an earlier draft, Quoc V. Le for suggestions in a related thread. 7720 References Milton Abramowitz and Irene A. Stegun. 1964. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, ninth dover printing, tenth gpo printing edition. Dover, New York. Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. 2019. Gradient based sample selection for online continual learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11816– 11825. Curran Associates, Inc. Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355–362. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26 th International Conference on Machine Learning, pages 86–96, Montreal, Canada. James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281–305. Denny Britz, Quoc Le, and Reid Pryzant. 2017. Effective domain mixing for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 118–126. Association for Computational Linguistics. Boxing Chen, Colin Cherry, George Foster, and Samuel Larkin. 2017. Cost weighting for neural machine translation domain adaptation. In Proceedings of the First Workshop on Neural Machine Translation, pages 40–46, Vancouver. 
Association for Computational Linguistics. Boxing Chen and Fei Huang. 2016. Semi-supervised convolutional networks for translation adaptation with tiny amount of in-domain data. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 314–323. Boxing Chen, Roland Kuhn, George Foster, Colin Cherry, and Fei Huang. 2016. Bilingual methods for adaptive training data selection for machine translation. In AMTA. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Kevin Duh. 2018. Multi-objective hyperparameter search for fast and accurate neural machine translation - progress report. Technical report, Johns Hopkins University. M. Amin Farajian, Marco Turchi, Matteo Negri, and Marcello Federico. 2017. Multi-domain neural machine translation through unsupervised adaptation. In Proceedings of the Second Conference on Machine Translation, pages 127–137, Copenhagen, Denmark. Association for Computational Linguistics. Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. CoRR, abs/1612.06897. Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Elliot Karro, and D. Sculley, editors. 2017. Google Vizier: A Service for BlackBox Optimization. IJ Goodfellow, M Mirza, X Da, Aaron Courville, and Yoshua Bengio. 2014. An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks. TR arXiv:1312.6211v2. David Grangier. 2019. Distribution matched contrastive data selection. In Google internal report. Alex Graves, Marc G. Bellemare, Jacob Menick, Rémi Munos, and Koray Kavukcuoglu. 2017. Automated curriculum learning for neural networks. CoRR, abs/1704.03003. Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 901–908, Belgium, Brussels. Association for Computational Linguistics. Huda Khayrallah, Brian Thompson, Kevin Duh, and Philipp Koehn. 2018. Regularized training objective for continued training for domain adaptation in neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 36–44, Melbourne, Australia. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Tom Kocmi and Ondˇrej Bojar. 2017. Curriculum learning and minibatch bucketing in neural machine translation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 379–386. INCOMA Ltd. Philipp Koehn, Francisco Guzman, Vishrav Chaudhary, and Juan Pino. 2019. Findings of the wmt 2019 shared task on parallel corpus filtering for lowresource conditions. In Proceedings of the Fourth Conference on Machine Translation, Florence, Italy. Association for Computational Linguistics. 7721 Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the wmt 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation, Belgium, Brussels. Association for Computational Linguistics. Taku Kudo. 2018. 
Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Association for Computational Linguistics. Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2054–2061, Minneapolis, Minnesota. Association for Computational Linguistics. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference, pages 220– 224. Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabás Póczos, and Tom M. Mitchell. 2019. Competence-based curriculum learning for neural machine translation. CoRR, abs/1903.09848. Matt Post. 2018. A call for clarity in reporting bleu scores. Computing Research Repository, arXiv:1804.08771v1. Version 2. Lutz Prechelt. 1997. Automatic early stopping using cross validation: Quantifying the criteria. Neural Networks, 11:761–767. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with bayesian optimization. CoRR, abs/1707.05246. Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, and Stephan Vogel. 2017. Neural machine translation training in a multi-domain scenario. arXiv preprint arXiv:1708.08712v2. Danielle Saunders, Felix Stahlberg, Adrià de Gispert, and Bill Byrne. 2019. Domain adaptive inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 222–228, Florence, Italy. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. 2016. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104:148–175. Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. 2010. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, pages 1015–1022, USA. Omnipress. Sander Tars and Mark Fishel. 2018. Multi-domain neural machine translation. CoRR, abs/1805.02282. Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D. McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, and Philipp Koehn. 2018. Freezing subnetworks to analyze domain adaptation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 124–132, Brussels, Belgium. Association for Computational Linguistics. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016. Learning the curriculum with bayesian optimization for taskspecific word representation learning. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 130–139, Berlin, Germany. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482–1488. Association for Computational Linguistics. Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018a. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 298–304, Melbourne, Australia. Association for Computational Linguistics. Wei Wang, Isaac Caswell, and Ciprian Chelba. 2019a. Dynamically composing domain-data selection with clean-data selection by “co-curricular learning” for neural machine translation. In Proceedings of the 7722 57th Conference of the Association for Computational Linguistics, pages 1282–1292, Florence, Italy. Association for Computational Linguistics. Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018b. Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 133–143. Association for Computational Linguistics. Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Graham Neubig, and Jaime Carbonell. 2019b. Optimizing data usage via differentiable rewards. arXiv preprint arXiv:1911.10088. Xinyi Wang, Yulia Tsvetkov, and Graham Neubig. 2020. Balancing training for multilingual neural machine translation. arXiv preprint arXiv:2004.06748. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine transaltion. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400–1410. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yinfei Yang, Gustavo Hernández Ábrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Improving multilingual sentence embedding using bidirectional dual encoder with additive margin softmax. CoRR, abs/1902.08564. Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J. Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. 2018. An empirical exploration of curriculum learning for neural machine translation. CoRR, abs/1811.00739. 
Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, and Kevin Duh. 2019. Curriculum learning for domain adaptation in neural machine translation. In 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Appendices A Cross-lingual Embedding Model Parameters The sentence encoder has a shared 200k token multilingual vocabulary with 10k OOV buckets. For each token, we also extract character n-grams (n = [3, 6]) hashed to 200k buckets. Word token items and character n-gram items are mapped to 320 dim. character embeddings. Word and character n-gram representations are summed together to produce the final input token representation. The encoder is a 3-layer Transformer with hidden size of 512, filter size of 2048, and 8 attention heads. We train for 40M steps using an SGD optimizer with batch size K=100 and learning rate 0.003. During training, the word and character embeddings are scaled by a gradient multiplier of 25. B Transformer-Big Results We replicate experiments with the TransformerBig architecture. Table 11 shows the TransformerBig results that correspond to the RNN results in Table 3. These results show that the multi-domain curriculum meets the performance requirement (i) (Section 4.1) using the Transformer architecture. Table 12 shows the Transformer-Big results corresponding to RNN results in Table 4. They show that the proposed multi-domain curriculum meets the performance requirement (ii) using Transformer. Curri. N14 T15 PA D15 Avg P1: B 34.1 36.3 34.2 32.3 34.2 P2: bC6-feats 39.6 40.2 50.6 37.7 42.0 W1: B 40.8 39.9 46.0 37.8 41.1 W2: bC6-feats 41.8 41.2 48.1 38.8 42.5 Table 11: Transformer Big SacreBLEU: English →French multi-domain curriculum improves nocurriculum baseline (B) over all testsets, using Transformer-Big. P: Paracrawl training data. W: WMT14 training data. C An Explanation: Noisy Data Useful in Low-Resource Setup With noisy, limited data (e.g., 100k pairs), we can train a model A on all data, or a model B on the filtered subset (e.g., 10k). We can also finetune A on the filtered data, to produce model C. C could be better than A due to use of higherquality data or better than B due to use of more 7723 Curri. N14 T15 PA D15 Avg P1: B 34.1 36.3 34.2 32.3 34.2 P3: CdN 33.7 36.1 32.7 32.5 33.8 P4: CdT 35.3 37.7 32.8 34.0 35.0 P5: CBERT 39.2 40.1 49.7 37.5 41.6 P6: CqN 38.9 39.8 48.9 36.9 41.1 P7: CqT 37.3 40.4 44.7 36.2 39.7 P2: bC6-feats 39.6 40.2 50.6 37.7 42.0 P2 – P* +0.4 -0.2 +0.9 +0.2 +0.3 W1: B 40.8 39.9 46.0 37.8 41.1 W3: CdN 40.9 39.2 44.4 37.6 40.5 W4: CdT 39.8 39.6 43.3 37.3 40.0 W5: CBERT 40.5 39.2 45.7 38.3 40.9 W6: CqN 41.1 40.0 47.6 38.0 41.7 W7: CqT 41.1 41.4 47.7 38.5 42.2 W2: bC6-feats 41.8 41.2 48.1 38.8 42.5 W2 – W* +0.7 -0.2 +0.4 +0.3 +0.3 Table 12: Transformer Big SacreBLEU: English → French multi-domain curriculum (P2, W2) vs. singledomain curricula (P3-7, W3-7). BLEU scores over 4 testsets and their average. Frame boxes mark best pertestset BLEU (W*, P*) over all single-domain curricula. Bold color denotes multi-domain curriculum has best BLEU (W2-W* ≥0). P: ParaCrawl training data. W: WMT14 training data. data (200k>10k). Therefore, by “noisy data can be helpful”, we refer to data regularization (using more data) and to transfer learning (fine-tuning) to exploit both data quantity and quality, the idea behind dynamic data selection.
2020
689
This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 752–765 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 752 Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation Kaustubh D. Dhole Amelia Science RnD, IPsoft New York, NY 10004 [email protected] Christopher D. Manning Department of Computer Science Stanford University Stanford, CA 94305 [email protected] Abstract Question Generation (QG) is fundamentally a simple syntactic transformation; however, many aspects of semantics influence what questions are good to form. We implement this observation by developing Syn-QG, a set of transparent syntactic rules leveraging universal dependencies, shallow semantic parsing, lexical resources, and custom rules which transform declarative sentences into questionanswer pairs. We utilize PropBank argument descriptions and VerbNet state predicates to incorporate shallow semantic content, which helps generate questions of a descriptive nature and produce inferential and semantically richer questions than existing systems. In order to improve syntactic fluency and eliminate grammatically incorrect questions, we employ back-translation over the output of these syntactic rules. A set of crowd-sourced evaluations shows that our system can generate a larger number of highly grammatical and relevant questions than previous QG systems and that back-translation drastically improves grammaticality at a slight cost of generating irrelevant questions. 1 Introduction Automatic Question Generation (QG) is the task of generating question-answer pairs from a declarative sentence. It has direct use in education and generating engagement, where a system automatically generates questions about passages that someone has read. A more recent secondary use is for automatic generation of questions as a data augmentation approach for training Question Answering (QA) systems. QG was initially approached by syntactic rules for question-generation, followed by some form of statistical ranking of goodness, e.g., (Heilman and Smith, 2009, 2010). In recent years, as in most areas of NLP, the dominant approach has been neural network generation (Du et al., 2017), Figure 1: The SRL structure is leveraged to invoke a template, and a simple rearrangement of the modifying arguments is performed. in particular using a sequence-to-sequence architecture, which exploits the data in the rapidly growing number of large QA data sets. Previous rule-based approaches suffer from a significant lack of variety in the questions they generate, sticking to a few simple and reliable syntactic transformation patterns. Neural architectures provide a pathway to solving this limitation since they can exploit QA datasets to learn the broad array of human question types, providing the usual neural network advantages of a data-exploiting, end-toend trainable architecture. Nevertheless, we observe that the quality of current neural QG systems is still lacking: The generated questions lack syntactic fluency, and the models lack transparency and an easy way to improve them. 
We argue that in essence QG can be governed by simple syntactic “question transformations” – while the implementation details vary, this is in accord with all major linguistic viewpoints, such as Construction Grammar and Chomskyan Generative Grammar, which emphasize grammatical rules and the existence of finite ways to create novel utterances. However, successful, fluent question generation requires more than just understanding syntactic question transformations, since felicitous questions must also observe various semantic and RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 753 pragmatic constraints. We approach these by making use of semantic role labelers (SRL), previously unexploited linguistic semantic resources like VerbNet’s predicates (Figure 2) and PropBank’s rolesets and custom rules like implications, allowing us to generate a broader range of questions of a descriptive and inferential nature. A simple transformation commonly used in rule-based QG is also displayed in Figure 1. Figure 2: VerbNet Predicate Question Generation. Detailed intermediate steps are described in Figure 3. We evaluate our QG framework, Syn-QG against three QG systems on a mixture of Wikipedia and commercial text sentences outperforming existing approaches in grammaticality and relevance in a crowd-sourced human evaluation while simultaneously generating more types of questions. We also notice that back-translated questions are grammatically superior but are sometimes slightly irrelevant as compared to their original counterparts. The Java code is publicly available at https://bitbucket.org/kaustubhdhole/syn-qg/. 2 Related Work With the advent of large-scale QA datasets (Rajpurkar et al., 2016; Nguyen et al., 2016), recent work in QG (Du et al., 2017; Zhou et al., 2017) has primarily focused on training sequence-tosequence and attention-based architectures. Dong et al. (2019) fine-tuned the question generation task by taking advantage of a large pre-trained language model. Success in reinforcement learning has inspired teacher-student frameworks (Wang et al., 2017; Tang et al., 2017) treating QA and QG as complementary tasks and performing joint training by using results from QA as rewards for the QG task. Yuan et al. (2017); Hosking and Riedel (2019); Zhang and Bansal (2019) used evaluation metrics like BLEU, sentence perplexity, and QA probability as rewards for dealing with exposure bias. Chen et al. (2019) trained a reinforcement learning based graph-to-sequence architecture by embedding the passage via a novel gated bi-directional graph neural network and generating the question via a recurrent neural network. To estimate the positions of copied words, Liu et al. (2019) used a graph convolution network and convolved over the nodes of the dependency parse of the passage. Li et al. (2019) jointly modeled OpenIE relations along with the passage using a gated-attention mechanism and a dual copy mechanism. Traditionally, question generation has been tackled by numerous rule-based approaches (Heilman and Smith, 2009; Mostow and Chen, 2009; Yao and Zhang, 2010; Lindberg et al., 2013; Labutov et al., 2015). Heilman and Smith (2009, 2010) introduced an overgenerate-and-rank approach that generated multiple questions via rule-based tree transformations of the constituency parse of a declarative sentence and then ranked them using a logistic-regression ranker with manually designed features. 
Yao and Zhang (2010) described transformations of Minimal Recursion Semantics representations guaranteeing grammaticality. Other transformations have been in the past defined in terms of templates (Mazidi and Nielsen, 2014, 2015; Mazidi and Tarau, 2016; Flor and Riordan, 2018), or explicitly performed (Heilman and Smith, 2009) by searching tree patterns via Tregex, followed by their manipulation using Tsurgeon (Levy and Andrew, 2006). Kurdi et al. (2020) provide a comprehensive summary of QG, analysing and comparing approaches before and after 2014. Vis-`a-vis current neural question generators, rule-based architectures are highly transparent, easily extensible, and generate well-formed questions since they perform clearly defined syntactic transformations like subject-auxiliary inversion and WHmovement over parse structures whilst leveraging fundamental NLP annotations like named entities, co-reference, temporal entities, etc. However, most of the existing rule-based systems have lacked diversity, being mostly focused on generating What-type and boolean questions and have mainly exploited parse structures which are not semantically informed. Mazidi and Tarau (2016); Flor and Riordan (2018) use Dependency, SRL, and NER templates but do not handle modalities and negation in a robust manner. Moreover, there is plenty of availability of core linguistic resources like VerbNet and PropBank, which provide RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 754 further unique ways to look at sentences and ask questions differently besides the generally wellestablished dependency and SRL parses. 3 Syn-QG Syn-QG is a rule-based framework which generates questions by identifying potential short answers in 1) the nodes of crucial dependency relations 2) the modifying arguments of each predicate in the form of semantic roles 3) named entities and other generic entities 4) the states of VerbNet’s thematic roles in the form of semantic predicates and 5) PropBank roleset specific natural language descriptions. Each of the five heuristics works independently, generating a combined set of question-answer pairs, which are eventually back-translated. We describe each of these five sources. 3.1 Dependency Heuristics Dependency trees are syntactic tree structures, wherein syntactic units in the form of words are connected via directed links. The finite verb is considered as the structural root of the tree, and all other syntactic units are either directly (nsubj, dobj,, etc.) or indirectly (xcomp, iobj, etc.) dependent on this finite verb. We present rules over such dependency trees annotated according to the Universal Dependencies (UD) format (de Marneffe et al., 2014). To extract dependency structures, we use the parser of Gardner et al. (2018). We make use of PropBank's predicate-argument structure (SRL) for clausal extraction of the verb headed by a select few dependency nodes which can serve as answers. These rules treat the clause as a combination of a subject, an object, the head verb and other non-core arguments. The clause is further refined with modals, auxiliaries and negations if found around the verb. Finally, we make use of a set of predefined handwritten templates, a few of which are described in Table 1. In each of the templates, we convert What to Who/Whom, When or Where depending on the named entity of the potential answer and do to does or did according to the tense and number of the subject to ensure subject-verb agreement. 
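To illustrate the template filling just described, the following minimal sketch (our own illustrative code, not the released Java implementation) chooses the wh-word from the named-entity type of the candidate answer and inserts an agreement-bearing auxiliary, in the spirit of the Table 1 templates; the entity labels, helper names and simplified template (otherAux and negation are omitted) are assumptions.

```python
WH_BY_ENTITY = {"PERSON": "Who", "DATE": "When", "TIME": "When",
                "GPE": "Where", "LOC": "Where"}   # anything else falls back to "What"

def wh_word(answer_entity_type):
    return WH_BY_ENTITY.get(answer_entity_type, "What")

def do_auxiliary(subject_is_plural, past_tense):
    # Dummy auxiliary chosen to keep subject-verb agreement.
    if past_tense:
        return "did"
    return "do" if subject_is_plural else "does"

def object_question(subject, verb_lemma, modifiers, answer_entity_type="O",
                    subject_is_plural=False, past_tense=False):
    """Fill a Table-1 style template for a direct-object answer:
    'Wh mainAux nsubj verb modifiers ?'"""
    parts = [wh_word(answer_entity_type),
             do_auxiliary(subject_is_plural, past_tense),
             subject, verb_lemma] + list(modifiers)
    return " ".join(parts) + " ?"

# "In monsoon, India receives large amounts of rain ..." with the dobj as answer:
print(object_question("India", "receive", ["in monsoon"]))
# -> What does India receive in monsoon ?
```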
The pseudo code is described in Algorithm 2 of the Appendix. 3.2 SRL Heuristics While dependency representations are perhaps the most popular syntactic method for automatically extracting relationships between words, they lack sufficient semantic detail. Being able to answer “Who did what to whom and how, why, when and where” has been a central focus in understanding language. In recent decades, shallow semantic parsing has been a prominent choice in understanding these relationships and has been extensively used in question generation (Mazidi and Tarau, 2016; Flor and Riordan, 2018). PropBank-style frames provide semantically motivated roles that arguments around a verb play. Moreover, highly accurate semantic role labeling models are being developed owing to corpora like PropBank and FrameNet. We take advantage of the SRL model of Gardner et al. (2018) for extracting the roles of each verb in the sentence. Algorithm 1 SRL Heuristics {SRL1 . . . SRLs} ←SRL(w0 . . . wn) loop j = 0, until j = s: if SRLj contains A0 or A1 and at least 1 Am then {A0 . . . ACAU, ATMP } ←SRLj loop Ax ∈SRLj if Ax = modifier: subj ←A0 A− x ←P(A3, A4, ...ATMP −Ax) verb ←{Av, modals, negation} template ←modifiertype ←Ax QA ←template(subj, Ax, verb, A− x ) close; We succinctly describe the steps taken in Algorithm 1. We first filter out all the predicates which have an Agent or a Patient and at least one other modifier like Extent, Manner, Direction, etc. These modifiers would serve as our short answers. We make use of a set of predefined handwritten templates described in Table 2, which rearrange the arguments within the fact to convert it into an interrogative statement depending on the modifier. In Figure 1, the predicate “won” is modified by a Patient “New Mexico”, an Agent “Obama”, an Extent modifier “by a margin of 5%” and a Temporal modifier “in 2008”. For Extent as a short answer, we fill a pre-defined template “By how much mainAux nsubj otherAux verb obj modifiers ?” to get the above question-answer pair. We keep the order of arguments as they appear in the original RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 755 Potential Short Answer (Dependencies) Question Template Sample Fact Generated Question subject (nsubj) Wh mainAux otherAux verb obj modifiers? Ricky Ponting accepted captaincy during Australia’s golden era. Who accepted captaincy during Australia’s golden era? direct object(dobj) Wh mainAux nsubj otherAux verb modifiers? In monsoon, India receives large amounts of rain that can cause flooding. What does India receive in monsoon? open clausal complement (xcomp) Wh mainAux nsubj verb modifiers? The Sheriff did not try to eat the apples while the outlaws were fasting. What did the Sheriff not try while the outlaws were fasting? copula (cop) How would you describe nsubj? Comets are leftovers from the creation of our solar system about 4.5 billion years ago. How would you describe comets ? Table 1: A few templates to describe the construction of questions. Different word units are shown in unique colors to describe the filling of the template. All the short answers are highlighted in blue. sentence. The templates are described in Table 2. 3.3 Named Entities, Custom Entities, and Hypernyms We create separate templates when any numbered SRL argument contains common named entities like Person, Location, Organization etc. 
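Before turning to the entity-specific templates, the SRL heuristic above (Algorithm 1 with the Table 2 templates) can be sketched as follows. The role-to-prefix map and the fixed auxiliary are simplifications of ours rather than the system's actual rules, and the example paraphrases Figure 1's sentence in the active voice.

```python
# Hypothetical wh-prefixes for the modifier templates of Table 2.
MODIFIER_PREFIX = {
    "ARGM-LOC": "Where", "ARGM-MNR": "How", "ARGM-CAU": "Why",
    "ARGM-TMP": "When", "ARGM-EXT": "By how much",
    "ARGM-PNC": "For what purpose", "ARGM-PRP": "For what purpose",
}

def srl_questions(frame):
    """frame maps SRL labels to spans for one predicate, e.g.
    {'V': 'win', 'ARG0': 'Obama', 'ARGM-TMP': 'in 2008', ...}.
    Yields (question, short_answer) pairs, mirroring Algorithm 1's filtering."""
    if "ARG0" not in frame and "ARG1" not in frame:
        return                                    # need an Agent or a Patient
    subject = frame.get("ARG0", frame.get("ARG1"))
    for role, answer in frame.items():
        if role not in MODIFIER_PREFIX:
            continue                              # only modifiers become short answers
        rest = [span for label, span in frame.items()
                if label not in ("V", role) and span != subject]
        # 'did' is a stand-in; the real templates rebuild mainAux/otherAux,
        # tense and negation from the clause.
        question = " ".join([MODIFIER_PREFIX[role], "did", subject,
                             frame["V"]] + rest) + " ?"
        yield question, answer

frame = {"V": "win", "ARG0": "Obama", "ARG1": "New Mexico",
         "ARGM-EXT": "by a margin of 5%", "ARGM-TMP": "in 2008"}
for q, a in srl_questions(frame):
    print(q, "->", a)
# By how much did Obama win New Mexico in 2008 ? -> by a margin of 5%
# When did Obama win New Mexico by a margin of 5% ? -> in 2008
```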
Like Flor and Riordan (2018), we add specific rules in the form of regexes to address special cases to differentiate between phrases like For how long and Till when instead of a generic When question type. Some of the templates are described in Table 7 in the Appendix. The approach is described in Algorithm 3 in the Appendix. We also use WordNet (Miller, 1998) hypernyms of all potential short answers and replace What with the bigram Which hypernym. So, for a sentence like “Hermione plays badminton at the venue”, we generate a question “Which sport does Hermione play at the venue?”. For computing the hypernym, we use the sense disambiguation implementation of Tan (2014). While supersenses do display a richer lexical variety, sense definitions don’t always fit well. 3.4 Handling modals and auxilliaries During explicit inversion of the verb and arguments around it via our templates, we tried to ensure that the positions of auxiliaries are set, and negations are correctly treated. We define a few simple rules to ensure that. • When there are multiple auxiliaries, we only invert the first auxiliary while the second and further auxiliaries remain as they are just before the main verb. • We make the question auxiliary finite and agree with the subject. • We ensure that the object is kept immediately after the verb. • For passive cases, subj-verb-obj is changed to obj-verb-by-subj. 3.5 Handling Factualness via Implicature Previous rule-based approaches (Mazidi and Tarau, 2016; Flor and Riordan, 2018) have used the NEG dependency label to identify polarity. But such an approach would suffer whenever polarities would be hierarchically entailed from their parent clauses in cases like “Picard did not fail to X” where the entailed polarity of “X” is, in fact, positive. Moreover, in one-way implications like “Bojack hesitated to X”, it would be best not to generate a question for unsure cases since it is open-ended if Bojack did or did not X. A similar example is displayed in Figure 5. For each verb representing a subordinate clause, we compute its entailed truth or falsity from its parent clause using the set of one-way and two-way implicative verbs, and verb-noun collocations provided by Karttunen (2012). For example, the two-way implicative construction “forget to X” entails that “X” did not happen, so it would be wrong to ask questions about “X”. Karttunen (2012) provides simple implications in the form of 92 verbs and phrasal implications in the form of 9 sets of verbs and 8 sets of nouns making 1002 verb-noun collocations. The entailed polarity of a RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 756 Potential Short Answer (Verb Arguments) Question Template Sample Fact Generated Question Locative (LOC) Where mainAux nsubj otherAux verb obj modifiers ? Americans eat about 100 acres of pizza each day, with about 3 billion pizzas sold annually in the USA. Where do about 3 billion pizzas sell annually ? Manner (MNR) How mainAux nsubj otherAux verb obj modifiers ? Young Sheldon was caught unaware as the liquid was oozing out of the chamber in a zig-zag fashion. How was the liquid oozing out of the chamber? Purpose (PNC and PRP) For what purpose mainAux nsubj otherAux verb obj modifiers ? Collectively, South African women and children walk a daily distance equivalent to 16 trips to the moon and back to fetch water. For what purpose do South African women and children walk a daily distance equivalent to 16 trips to the moon and back collectively ? 
Cause (CAU) Why mainAux nsubj otherAux verb obj modifiers ? Since the average faucet releases 2 gallons of water per minute, you can save up to four gallons of water every morning by turning off the tap while you brush your teeth. Why can you save up to four gallons of water by turning off the tap while you brush your teeth every morning ? Temporal (TMP) When mainAux nsubj otherAux verb obj modifiers ? Till when mainAux nsubj otherAux verb obj modifiers? Stephen Hawking once on June 28, 2009 threw a party for time-travelers but he announced the party the next day. Princess Sita travelled the whole town until the end of summer. When did Stephen Hawking throw a party for time travelers ? When did Stephen Hawking announce the party ? Till when did Princess Sita travel the whole town? Extent (EXT) By how much mainAux nsubj otherAux verb obj modifiers ? New Mexico was won by Obama by a margin of 5% in 2008. By how much was New Mexico won by Obama in 2008? Table 2: The templates of temporal, direction, extent, etc. are leveraged to ask questions about different modifiers. Answer fragments are highlighted in blue. In passive cases like the last example, we change the template order from subj-verb-obj to obj-verb-by-subj. clause can be either TRUE, FALSE, or UNSURE1. For FALSE clauses, we only generate a boolean question with a NO answer. For UNSURE clauses, we do not generate any question. For TRUE clauses and verbs and collocations not present in the above set, we rely on the NEG label. 3.6 VerbNet Predicate Templates While SRL’s event-based representations have permitted us to generate questions that talk about the roles participants of an event play, we exploit VerbNet’s sub-event representation to ask questions on 1Unsure clauses appear in one-way implicatives when it’s unclear if the clause is true or false under either an affirmative or a negative parent clause. how participants’ states change across the time frame of the event. In Figure 2, the event murder (VerbNet class murder-42.1) results in a final state in which the participant Julius Caesar is in a not-alive state. Each class in VerbNet (Schuler, 2005; Brown et al., 2019) includes a set of member verbs, the thematic roles used in the predicate-argument structure, accompanied with flat syntactic patterns and their corresponding semantic predicates represented in neo-Davidsonian first-order-logic formulation. These semantic predicates bring forth a temporal sequencing of sub-events tracking how participants’ states change over the course of the event. The advantage is to be able to ask questions RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 757 Figure 3: VerbNet Predicate Question Generation. All the predicates of the two sub-events e4 and e5 (HAS POSSESSION) would be considered since e3 possesses a process-oriented predicate TRANSFER. COST is the predicate of the main event E. bearing a surface form different from the source sentence but which are driven by reasoning rather than just being paraphrastic. For example, in the sentence, “Brutus murdered Julius Caesar”, the event murder-42.1 entails a final state of “death” or the Patient participant not being alive at the end of the event. So, we construct a template “mainAux the Patient otherAux not alive?”. Similarly, the event pay-68-1 results in a final state in which the Recipient “Perry” has possession of “$100” and the Agent “John” has possession of “the car”, against which we define the templates as shown in Figure 3. 
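A minimal sketch of how such stateful predicates can be turned into question-answer pairs is given below; the predicate names, role fillers and template strings are simplified stand-ins for the parsed VerbNet representation and a few of the templates later listed in Table 3, not the system's actual template inventory.

```python
# Hypothetical templates for a few stateful VerbNet predicates;
# the real system instantiates them from the parsed semantic representation.
STATE_TEMPLATES = {
    "has_possession": lambda roles: [
        ("Who has {} ?".format(roles["Asset"]), roles["Recipient"]),
    ],
    "not_alive": lambda roles: [
        ("Is {} alive ?".format(roles["Patient"]), "No"),
    ],
    "harmed": lambda roles: [
        ("What is harmed ?", roles["Patient"]),
    ],
}

def state_questions(final_state_predicates):
    """final_state_predicates: (name, {thematic_role: filler}) pairs taken from
    the final sub-events of the parsed VerbNet frame."""
    qa_pairs = []
    for name, roles in final_state_predicates:
        if name in STATE_TEMPLATES:
            qa_pairs.extend(STATE_TEMPLATES[name](roles))
    return qa_pairs

# "Robert paid $100 to Mary for the cycle."  (pay-68-1, simplified)
print(state_questions([("has_possession", {"Recipient": "Mary", "Asset": "$100"})]))
# [('Who has $100 ?', 'Mary')]
# "Brutus murdered Julius Caesar."  (murder-42.1)
print(state_questions([("not_alive", {"Patient": "Julius Caesar"})]))
# [('Is Julius Caesar alive ?', 'No')]
```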
We formulate two sets of questions: boolean type and which-type questions asking specifically about these states. We create templates for VerbNet’s stateful predicates like has location, has possession, has information, seem, has state, cost, desire, harmed, has organization role, together, social interaction, authority relationship, etc. which are present in 64.4% of the member verbs in VerbNet2. We outline a few of the templates in Table 3. During inference time, we first compute the VerbNet sense, the associated thematic role mapping, 2Out of 4854 member verbs, there are 3128 members whose syntactic frame contains at least one of these predicates. and syntactic frame (along with the predicates) with the help of Brown et al. (2019)’s parser. VerbNet’s predicates are governed by the sub-events in which they occur. Although VerbNet’s representation lays out a sequence of sub-events, no sub-event is explicitly mentioned as the final one3. We choose all the predicates of those sub-events which are preceded by other sub-events which possess at least one process-oriented predicate.4 3.7 PropBank Argument Descriptions PropBank rolesets’ course-grained annotation of verb-specific argument definitions (“killer”, “payer”, etc.) to represent semantic roles offers robustly specific natural language descriptions to ask questions about the exact roles participants play. Nonetheless, not all descriptions are suitable to be utilized directly in rigid templates. So, we incorporate back-translation to 1) get rid of grammatical errors propagated from incorrect parsing and template restrictions, and 2) eliminate rarely used Prop-Bank descriptions and generate highly probable questions. While previous work in rule-based QG has used SRL templates and WordNet senses to describe the roles arguments around a verb play, previous SRL templates have always been verb-agnostic, and we believe there is a great deal of potential in PropBank descriptions. Moreover, WordNet supersenses do not always give rise to acceptable questions. On manual evaluation, question relevance decreased after incorporating templates with WordNet supersenses. Instead, we make use of PropBank’s verb-specific natural language argument descriptions to create an additional set of templates. VerbNet senses have a one-to-one mapping with PropBank rolesets via the SemLink project (Palmer, 2009). We hence make use of Brown et al. (2019)’s parser to find the appropriate PropBank roleset for a sentence. However, we observed that a lot of PropBank descriptions were noisy and made use of phrases which would be unarguably rare in ordinary parlance like “breather” or “truster”. To eliminate such descriptions, we computed the mean Google N-gram probabilities (Lin et al., 2012) of all the PropBank phrases in the timespan of the last 100 3or a sub-event, which is an outcome of a process 4Out of 174 VerbNet predicates, we manually categorize 84 predicates like HAS LOCATION, HAS POSSESSION as stateful predicates and the remaining ones like DESCRIBE, TRANSFER, etc. as process-oriented predicates. RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 758 Triggering Predicate and Thematic Arguments Question Template Sample Fact & VerbNet Predicate Generated Question HAS POSSESSION (Asset,Recipient) Who has Asset ? Recipient Robert paid $100 to Mary for the cycle. HAS POSSESSION(Mary,$100) Who has $100 ? Mary HARMED (Patient) What is harmed ? Patient The terrorists bombed the building. 
HARMED(the building) What is harmed ? the building NOT ALIVE (Patient) Is Patient alive ? No. According to epics, Vishnu killed the demon Kaitabh. NOT ALIVE (the demon Kaitabh) Is the demon Kaitabh alive ? No. Table 3: VerbNet predicate templates (simplified) along with sample questions with the thematic roles highlighted. A question is created from the concept of “being alive” which is not synonymous with but is an outcome of “killing”. Figure 4: Here, “killer” is the natural language description of “Brutus” in the MURDER.01 roleset. years and kept only those phrases which ranked in the top 50%. 3.8 Back-Translation Back-translation has been used quite often in grammatical error correction (Xie et al., 2018) and is well known to translate noisy and ungrammatical sentences to their cleaner high probability counterparts. We exploit this observation to clean questions with noisy and inconsistent PropBank descriptions like “wanter” (Figure 5). We use two state-of-the-art (SOTA) pre-trained transformer models transformer.wmt19.en-de and transformer.wmt19.de-en from Ott et al. (2019) trained on the English-German and GermanEnglish translation tasks of WMT 2019. Figure 6 in the Appendix shows the output of all the five sets of templates applied together over one Figure 5: Back-translation and Implicature. Since the entailed polarity of “murder” is unsure, no questions are generated. sentence (along-with implicature). 4 Evaluation and Results Most of the prior QG studies have evaluated the performance of the generated questions using automatic evaluation metrics used in the machine translation literature. We use the traditional BLEU scores (Papineni et al., 2002) and compare the performance of Syn-QG on the SQuAD (Rajpurkar et al., 2016) test split created by Zhou et al. (2017). BLEU measures the average n-gram precision on a set of reference sentences. A question lexically and syntactically similar to a human question would score high on such n-gram metrics. Despite not utilizing any training data, Syn-QG performs better than the previous SOTA on two evaluation metrics BLEU-3 and BLEU-4 and close to SOTA on BLEU-1 and BLEU-2 (Table 4) at the time of submission. The high scores obtained without conducting any training arguably shed a little light on the predictable nature of the SQuAD dataset too. Besides SRL, Dependency, and NER templates, RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 759 Architecture BLEU-1 BLEU-2 BLEU-3 BLEU-4 PCFG-Trans (Heilman and Smith, 2010) 28.77 17.81 12.64 9.47 SeqCopyNet (Zhou et al., 2018) 13.02 NQG++ (Zhou et al., 2017) 42.36 26.33 18.46 13.51 MPQG (Song et al., 2017) 13.91 Answer-focused Position-aware model (Sun et al., 2018) 43.02 28.14 20.51 15.64 To the Point Context (Li et al., 2019) 44.40 29.48 21.54 16.37 s2sa-at-mp-gsa (Zhao et al., 2018) 44.51 29.07 21.06 15.82 ASs2s (Kim et al., 2019) 16.17 CGC-QG (Liu et al., 2019) 46.58 30.9 22.82 17.55 Capturing Greater Context (Tuan et al., 2019) 46.60 31.94 23.44 17.76 Natural QG with RL based Graph-to-Sequence (Chen et al., 2019) 17.94 RefineNet (Nema et al., 2019) 47.27 31.88 23.65 18.16 QPP&QAP (Zhang and Bansal, 2019) 18.37 ACS-QG∗(Liu et al., 2020) 52.30∗ 36.70∗ 28.00∗ 22.05 UNILM∗(Wang et al., 2020) 24.32 ERNIE-GEN∗(Xiao et al., 2020) 25.57 UNILMv2∗(Bao et al., 2020) 26.30 ProphetNet∗(Yan et al., 2020) 26.72∗ Syn-QG 45.55 30.24 23.84 18.72 Table 4: Automatic Evaluation Results on SQuAD of different QG models. 
PCFG-TRANS and Syn-QG are two rule-based models. *Work contemporaneous with or subsequent to the submission of this paper.
System | #Questions Generated | Avg. #Questions Per Sentence | Grammaticality | Relevance
H&S | 381 | 3.81 | 3.49 | 4.23
NQG | 100 | 1 | 3.48 | 3.28
QPP&QAP | — | — | 3.9 | 4.03
Syn-QG | 654 | 6.54 | 3.93 | 4.34
Table 5: Comparison of human evaluation with H&S (Heilman and Smith, 2009), NQG (Du et al., 2017) and QPP&QAP (Zhang and Bansal, 2019).
System | Avg. novel unigrams | Avg. novel bigrams | Avg. novel trigrams
H&S | 23.6 | 40.64 | 52.22
Syn-QG (w/o BT) | 26.8 | 43.93 | 53.4
Syn-QG | 39.34 | 64.08 | 76.24
SQuAD | 42.86 | 74.2 | 86.35
Syn-QG (BT vs w/o-BT) | 28.78 | 55.18 | 67.81
Table 6: The percentage of n-grams of the generated questions which are not present in the source sentence. The last row indicates the percentage of n-grams not present in the non-backtranslated questions.
Syn-QG's questions also arise from VerbNet's predicates and PropBank's descriptions, which by nature describe events not mentioned explicitly within the fact. Like in Figure 3, the sentence with the event "paid" results in a question with a stateful event of "cost". Deducible questions like these are likely to have a distribution of n-grams quite different from the source sentences, possibly exposing the weakness of traditional n-gram metrics and rendering them less useful for a task like QG. In order to have a more complete and reliable evaluation of the system, we also carry out a human evaluation using two of the metrics used in QG-STEC Task B (Rus et al., 2012), namely grammaticality and relevance, which we define below. We compared the questions generated by our system against the constituency-based H&S (Heilman and Smith, 2009), a neural system NQG (Du et al., 2017), which does not depend on a separate answer extractor, and QPP&QAP5 (Zhang and Bansal, 2019), which has outperformed existing methods. We fed a total of 100 facts randomly picked from Wikipedia and 5 commercial domains (IT, Healthcare, Sports, Banking and Politics) combined, to each of the four systems. We then conducted a crowd-sourced evaluation over Amazon Mechanical Turk for the generated questions.
5Since the QPP&QAP model does not have a separate answer extractor, we use the answer spans computed from Syn-QG (412 in total after discarding overlaps).
• Grammatical Correctness: Raters had to rate a question on how grammatically correct it is or how syntactically fluent it is, disregarding its underlying meaning.
• Relevance Score: Raters had to give a score on how relevant the generated question is to the given fact. The relevance score helps us gauge whether the question should have been generated or not, irrespective of its grammaticality.6
Each question was evaluated by three people scoring grammaticality and relevance on a 5-point Likert scale. The inter-rater agreement (Krippendorff's coefficient) among the human evaluations was 0.72. The instructions given to the MTurk raters are provided in Appendix Figure 7. The results of the evaluation are shown in Table 5. Syn-QG generates a larger number of questions than H&S and performs strongly on grammaticality ratings. Syn-QG is also able to generate highly relevant questions without the use of a ranker. Also, rule-based approaches seem to be much better at generating relevant questions than neural ones.
QG-STEC also used variety and question types as their evaluation criteria and rewarded systems to generate questions meeting a range of specific question types. Syn-QG’s questions cover each of those question types. Since many times, despite the ability to paraphrase (Table 6), back-translated outputs tend to change the meaning of the original sentence, we also measured back-translation’s impact on the above QG metrics. We considered questions generated from 50 facts of Wikipedia measuring the grammaticality and relevance before and after backtranslation. While grammaticality increased from 3.54 to 4.11, question relevance fell a bit from 3.96 to 3.88. This observation, along with the performance of QPP&QAP shown in Table 4, accentuates that while neural models are learning syntactic structures well, there is still some progress to be made to generate relevant questions. 5 Discussion We introduced Syn-QG, a set of broad coverage rules leveraging event-based and sub-event based sentence views along with verb-specific argument descriptions. Automatic and manual evaluations 6In cases when the grammaticality is extremely low like 1 or 2, the relevance score will also tend to be low. Otherwise, we assume that minor grammatical variations can be ignored while gauging relevance. show that Syn-QG is able to generate a large number of diverse and highly relevant questions with better fluency. Verb-focused rules help approach long-distance dependencies and reduce the need for explicit sentence simplification by breaking down a sentence into clauses while custom rules like implications serve a purpose similar to a reranker to discard irrelevant questions but with increased determinism. While our work focuses on sentence-level QG, it would be interesting to see how questions generated from VerbNet predicates would have an impact on multi-sentence or passage level QG, where the verb-agnostic states of the participants would change as a function of multiple verbs. The larger goal of QG is currently far from being solved. Understanding abstract representations, leveraging world knowledge, and reasoning about them is crucial. However, we believe that with an extensible and transparent architecture, it is very much possible to keep improving the system continuously in order to achieve this larger goal. Acknowledgments We thank the three anonymous reviewers for their helpful comments and invaluable suggestions. We also thank the members of Amelia Science, RnD IPsoft, India - Manjunath Hegde, Anant Khandelwal, Ashish Shrivastava for their work in QG and especially Viswa Teja Ravi, for helping in replicating Mazidi and Tarau (2016)’s work. We also thank Uday Chinta and IPsoft, India, for supporting and providing access to Amazon Mechanical Turk. References Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, et al. 2020. Unilmv2: Pseudo-masked language models for unified language model pre-training. arXiv preprint arXiv:2002.12804. Susan Windisch Brown, Julia Bonn, James Gung, Annie Zaenen, James Pustejovsky, and Martha Palmer. 2019. VerbNet representations: Subevent semantics for transfer verbs. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 154–163. Yu Chen, Lingfei Wu, and Mohammed J Zaki. 2019. Natural question generation with reinforcement learning based graph-to-sequence model. arXiv preprint arXiv:1910.08832. 
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 761 Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342– 1352. Michael Flor and Brian Riordan. 2018. A semantic role-based approach to open-domain automatic question generation. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 254–263. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640. Michael Heilman and Noah A Smith. 2009. Question generation via overgenerating transformations and ranking. Technical Report CMU-LTI-09-013, Language Technologies Institute, Carnegie Mellon University. Michael Heilman and Noah A Smith. 2010. Good question! Statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609– 617. Association for Computational Linguistics. Tom Hosking and Sebastian Riedel. 2019. Evaluating rewards for question generation models. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Lauri Karttunen. 2012. Simple and phrasal implicatives. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 124–131. Association for Computational Linguistics. Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. 2019. Improving neural question generation using answer separation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6602–6609. Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. 2020. A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30(1):121–204. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 889–898. Roger Levy and Galen Andrew. 2006. Tregex and tsurgeon: tools for querying and manipulating tree data structures. In LREC, pages 2231–2234. Citeseer. Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, and Michael R Lyu. 2019. Improving question generation with to the point context. arXiv preprint arXiv:1910.06036. Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the Google books ngram corpus. In Proceedings of the ACL 2012 system demonstrations, pages 169–174. Association for Computational Linguistics. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. 
Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 105–114. Bang Liu, Haojie Wei, Di Niu, Haolan Chen, and Yancheng He. 2020. Asking questions the human way: Scalable question-answer generation from text corpus. In Proceedings of The Web Conference 2020, pages 2032–2043. Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, and Yu Xu. 2019. Learning to generate questions by learning what not to generate. arXiv preprint arXiv:1902.10418. Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 4585–4592, Reykjavik, Iceland. European Languages Resources Association (ELRA). Karen Mazidi and Rodney D Nielsen. 2014. Linguistic considerations in automatic question generation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 321–326. Karen Mazidi and Rodney D Nielsen. 2015. Leveraging multiple views of text for automatic question generation. In International Conference on Artificial Intelligence in Education, pages 257–266. Springer. Karen Mazidi and Paul Tarau. 2016. Infusing NLU into automatic question generation. In Proceedings of the 9th International Natural Language Generation conference, pages 51–60. RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 762 George A Miller. 1998. WordNet: An electronic lexical database. MIT press. Jack Mostow and Wei Chen. 2009. Generating instruction automatically for the reading strategy of selfquestioning. In AIED, pages 465–472. Preksha Nema, Akash Kumar Mohankumar, Mitesh M Khapra, Balaji Vasan Srinivasan, and Balaraman Ravindran. 2019. Let’s ask again: Refine network for automatic question generation. arXiv preprint arXiv:1909.05355. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038. Martha Palmer. 2009. Semlink: Linking propbank, verbnet and framenet. In Proceedings of the generative lexicon conference, pages 9–15. GenLex-09, Pisa, Italy. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2012. A detailed account of the first question generation shared task evaluation challenge. Dialogue & Discourse, 3(2):177–204. Karin Kipper Schuler. 2005. VerbNet: A broadcoverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania. Linfeng Song, Zhiguo Wang, and Wael Hamza. 2017. 
A unified query-based generative model for question generation and question answering. arXiv preprint arXiv:1709.01058. Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3930– 3939. Liling Tan. 2014. Pywsd: Python implementations of word sense disambiguation (wsd) technologies [software]. https://github.com/alvations/pywsd. Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027. Luu Anh Tuan, Darsh J Shah, and Regina Barzilay. 2019. Capturing greater context for question generation. arXiv preprint arXiv:1910.10274. Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. arXiv preprint arXiv:1706.01450. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. arXiv preprint arXiv:2002.10957. Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie-gen: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation. arXiv preprint arXiv:2001.11314. Ziang Xie, Guillaume Genthial, Stanley Xie, Andrew Y Ng, and Dan Jurafsky. 2018. Noising and denoising natural language: Diverse backtranslation for grammar correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 619–628. Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future ngram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063. Xuchen Yao and Yi Zhang. 2010. Question generation with minimal recursion semantics. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 68–75. Citeseer. Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Saizheng Zhang, Sandeep Subramanian, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 15–25, Vancouver, Canada. Association for Computational Linguistics. Shiyue Zhang and Mohit Bansal. 2019. Addressing semantic drift in question generation for semisupervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495–2509. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 763 Empirical Methods in Natural Language Processing, pages 3901–3910. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2018. Sequential copying networks. 
In ThirtySecond AAAI Conference on Artificial Intelligence. A Appendices Algorithm 2 Dependency Heuristics {d0 . . . dn} ←dependency(w0 . . . wn) loop i = 0, until i = n: if parent(di)! = null then dv ←parent(di) {A0 . . . ACAU} ←SRL(dv) subj ←A0 if di ∈A1 then obj ←A1 else obj ←A2 Ax ←P(A3, A4, ...ATMP ) verb ←{dv, modals, negation} template ←deptype ←di QA ←template(subj, obj, verb, Ax) close; Algorithm 3 Named Entity Heuristics {SRL1 . . . SRLs} ←SRL(w0 . . . wn) loop j = 0, until j = s: if SRLj contains A0 or A1 and at least 1 Am then {A0 . . . ACAU, ATMP } ←SRLj loop Ax ∈SRLj if Ax contains a NE: subj ←A0 A− x ←P(A3, A4, ...ATMP −Ax) verb ←{Av, modals, negation} template ←NEtype ←Ax QA ←template(subj, Ax, verb, A− x ) close; RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 764 Potential Short Answer (Named Entities) Question Template Sample Fact Generated Question Location Where mainAux subj otherAux verb obj modifiers ? The event was organized at Times Square. Where was the event organized? Person Who mainAux subj otherAux verb obj modifiers ? Whom mainAux obj otherAux verb modifiers WestWorld brought back the life of the roboticist Craig Smith. Whom did WestWorld bring back the life of? Date When mainAux subj otherAux verb obj modifiers ? Donald Trump won the elections in the year 2016 When did Donald Trump win the elections? Number How many mainAux subj otherAux verb obj modifiers? A thousand will not be enough for the event. How many will not be enough for the event? Phone Number At what number mainAux subj otherAux verb obj modifiers ? The pizza guy can be reached at +91-748-728-781 At what phone number can the pizza guy be reached? Duration For how long mainAux subj otherAux verb obj modifiers? Lauren would be staying in the hut for around 10 minutes. For how long would Lauren be staying at the hut? Organization Which organization mainAux subj otherAux verb obj modifiers? Deepak joined the big firm, the United Nations. Which organization did Deepak join? Table 7: SRL arguments which contain a named entity are fully considered as a short answer “for around 10 minutes” rather than only the named entity span “10 minutes”. SRL arguments are highlighted in blue. Figure 6: Questions generated by each set of heuristics for one sentence which are further sent for back-translation. RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.69. 765 Figure 7: The MTURK template used for collecting responses for measuring question relevance and grammaticality. RETRACTED
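The named-entity heuristics of Algorithm 3 and Table 7 amount to a template-filling step once SRL has supplied the subject, auxiliaries, verb and remaining arguments: the question word is chosen by the entity type of the short answer and the remaining slots are realised in a fixed order. The sketch below is an illustrative reimplementation of that final filling step only, not the Syn-QG code; the Fact container and its field names are assumptions made for the example.

```python
# Illustrative sketch of the named-entity template filling in Table 7.
# Not the Syn-QG implementation; field names here are assumptions.
from dataclasses import dataclass

# Question word chosen by the named-entity type of the short answer.
# Table 7 also lists an object-question variant for Person ("Whom ...").
NE_TEMPLATES = {
    "LOCATION": "Where",
    "PERSON": "Who",
    "DATE": "When",
    "NUMBER": "How many",
    "DURATION": "For how long",
    "ORGANIZATION": "Which organization",
}

@dataclass
class Fact:
    subj: str          # SRL A0, e.g. "the event"
    main_aux: str      # e.g. "was"
    other_aux: str     # possibly empty, e.g. "been"
    verb: str          # e.g. "organized"
    obj: str           # remaining object span, possibly empty
    modifiers: str     # leftover modifiers, possibly empty
    ne_type: str       # entity type of the short-answer span

def fill_template(fact: Fact) -> str:
    """Realise 'Qword mainAux subj otherAux verb obj modifiers ?'."""
    qword = NE_TEMPLATES.get(fact.ne_type)
    if qword is None:
        raise ValueError(f"No template for entity type {fact.ne_type}")
    tokens = [qword, fact.main_aux, fact.subj, fact.other_aux,
              fact.verb, fact.obj, fact.modifiers]
    return " ".join(t for t in tokens if t) + "?"

if __name__ == "__main__":
    fact = Fact(subj="the event", main_aux="was", other_aux="",
                verb="organized", obj="", modifiers="", ne_type="LOCATION")
    print(fill_template(fact))   # Where was the event organized?
```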
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7724–7736 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7724 Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem Danielle Saunders and Bill Byrne Department of Engineering, University of Cambridge, UK {ds636, wjb31}@cam.ac.uk Abstract Training data for NLP tasks often exhibits gender bias in that fewer sentences refer to women than to men. In Neural Machine Translation (NMT) gender bias has been shown to reduce translation quality, particularly when the target language has grammatical gender. The recent WinoMT challenge set allows us to measure this effect directly (Stanovsky et al., 2019). Ideally we would reduce system bias by simply debiasing all data prior to training, but achieving this effectively is itself a challenge. Rather than attempt to create a ‘balanced’ dataset, we use transfer learning on a small set of trusted, gender-balanced examples. This approach gives strong and consistent improvements in gender debiasing with much less computational cost than training from scratch. A known pitfall of transfer learning on new domains is ‘catastrophic forgetting’, which we address both in adaptation and in inference. During adaptation we show that Elastic Weight Consolidation allows a performance trade-off between general translation quality and bias reduction. During inference we propose a latticerescoring scheme which outperforms all systems evaluated in Stanovsky et al. (2019) on WinoMT with no degradation of general test set BLEU, and we show this scheme can be applied to remove gender bias in the output of ‘black box‘ online commercial MT systems. We demonstrate our approach translating from English into three languages with varied linguistic properties and data availability. 1 Introduction As language processing tools become more prevalent concern has grown over their susceptibility to social biases and their potential to propagate bias (Hovy and Spruit, 2016; Sun et al., 2019). Natural language training data inevitably reflects biases present in our society. For example, gender bias manifests itself in training data which features more examples of men than of women. Tools trained on such data will then exhibit or even amplify the biases (Zhao et al., 2017). Gender bias is a particularly important problem for Neural Machine Translation (NMT) into genderinflected languages. An over-prevalence of some gendered forms in the training data leads to translations with identifiable errors (Stanovsky et al., 2019). Translations are better for sentences involving men and for sentences containing stereotypical gender roles. For example, mentions of male doctors are more reliably translated than those of male nurses (Sun et al., 2019; Prates et al., 2019). Recent approaches to the bias problem in NLP have involved training from scratch on artificially gender-balanced versions of the original dataset (Zhao et al., 2018; Zmigrod et al., 2019) or with debiased embeddings (Escud´e Font and Costa-juss`a, 2019; Bolukbasi et al., 2016). While these approaches may be effective, training from scratch is inefficient and gender-balancing embeddings or large parallel datasets are challenging problems (Gonen and Goldberg, 2019). Instead we propose treating gender debiasing as a domain adaptation problem, since NMT models can very quickly adapt to a new domain (Freitag and Al-Onaizan, 2016). 
To the best of our knowledge this work is the first to attempt NMT bias reduction by fine-tuning, rather than retraining. We consider three aspects of this adaptation problem: creating less biased adaptation data, parameter adaptation using this data, and inference with the debiased models produced by adaptation.

Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT.

We find that during domain adaptation improvement on the gender-debiased domain comes at the expense of translation quality due to catastrophic forgetting (French, 1999). We can balance improvement and forgetting with a regularised training procedure, Elastic Weight Consolidation (EWC), or in inference by a two-step lattice rescoring procedure.

We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set (Stanovsky et al., 2019). We find that continued training on the handcrafted set gives far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time, although as expected general translation performance as measured by BLEU decreases. We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality.

We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set.

Recent recommendations for ethics in Artificial Intelligence have suggested that social biases or imbalances in a dataset be addressed prior to model training (HLEG, 2019). This recommendation presupposes that the source of bias in a dataset is both obvious and easily adjusted. We show that debiasing a full NMT dataset is difficult, and suggest alternative efficient and effective approaches for debiasing a model after it is trained. This avoids the need to identify and remove all possible biases prior to training, and has the added benefit of preserving privacy, since no access to the original data or knowledge of its contents is required. As evidence, in section 3.4.5, we show this scheme can be applied to remove gender bias in the output of ‘black box’ online commercial MT systems.

1.1 Related work

Vanmassenhove et al. (2018) treat gender as a domain for machine translation, training from scratch by augmenting Europarl data with a tag indicating the speaker’s gender. This does not inherently remove gender bias from the system but allows control over the translation hypothesis gender. Moryossef et al. (2019) similarly prepend a short phrase at inference time which acts as a gender domain label for the entire sentence. These approaches are not directly applicable to text which may have more than one gendered entity per sentence, as in coreference resolution tasks.

Escudé Font and Costa-jussà (2019) train NMT models from scratch with debiased word embeddings.
They demonstrate improved performance on an English-Spanish occupations task with a single profession and pronoun per sentence. We assess our fine-tuning approaches on the WinoMT coreference set, with two entities to resolve per sentence. For monolingual NLP tasks a typical approach is gender debiasing using counterfactual data augmentation where for each gendered sentence in the data a gender-swapped equivalent is added. Zhao et al. (2018) show improvement in coreference resolution for English using counterfactual data. Zmigrod et al. (2019) demonstrate a more complicated scheme for gender-inflected languages. However, their system focuses on words in isolation, and is difficult to apply to co-reference and conjunction situations with more than one term to swap, reducing its practicality for large MT datasets. Recent work recognizes that NMT can be adapted to domains with desired attributes using small datasets (Farajian et al., 2017; Michel and Neubig, 2018). Our choice of a small, trusted dataset for adaptation specifically to a debiased domain connects also to recent work in data selection by Wang et al. (2018), in which fine-tuning on less noisy data reduces translation noise. Similarly we propose fine-tuning on less biased data to reduce gender bias in translations. This is loosely the inverse of the approach described by Park et al. (2018) for monolingual abusive language detection, which pre-trains on a larger, less biased set. 2 Gender bias in machine translation We focus on translating coreference sentences containing professions as a representative subset of the gender bias problem. This follows much recent work on NLP gender bias (Rudinger et al., 2018; Zhao et al., 2018; Zmigrod et al., 2019) including the release of WinoMT, a relevant challenge set for NMT (Stanovsky et al., 2019). A sentence that highlights gender bias is: The doctor told the nurse that she had been busy. A human translator carrying out coreference resolution would infer that ‘she’ refers to the doctor, and correctly translate the entity to German as Die ¨Arztin. An NMT model trained on a biased dataset in which most doctors are male might incorrectly default to the masculine form, Der Arzt. Data bias does not just affect translations of the stereotyped roles. Since NMT inference is usually left-to-right, a mistranslation can lead to further, more obvious mistakes later in the translation. For example, our baseline en-de system translates the English sentence The cleaner hates the developer because she always leaves the room dirty. to the German Der Reiniger haßt den Entwickler, weil er den Raum immer schmutzig l¨asst. Here not only is ‘developer’ mistranslated as the masculine den Entwickler instead of the feminine die Entwicklerin, but an unambiguous pronoun translation later in the sentence is incorrect: er (‘he’) is produced instead of sie (‘she’). In practice, not all translations with genderinflected words can be unambiguously resolved. A simple example is: The doctor had been busy. This would likely be translated with a masculine entity according to the conventions of a language, unless extra-sentential context was available. As well, some languages have adopted gender-neutral singular pronouns and profession terms, both to include non-binary people and to avoid the social biases of gendered language (Misersky et al., 2019). However, the target languages supported by WinoMT lack widely-accepted non-binary inflection conventions (Ackerman, 2019). 
This paper addresses gender bias that can be resolved at the sentence level and evaluated with existing test sets, and does not address these broader challenges. 2.1 WinoMT challenge set and metrics WinoMT (Stanovsky et al., 2019) is a recently proposed challenge set for gender bias in NMT. Moreover it is the only significant challenge set we are aware of to evaluate translation gender bias comparably across several language pairs. It permits automatic bias evaluation for translation from English to eight target languages with grammatical gender. The source side of WinoMT is 3888 concatenated sentences from Winogender (Rudinger et al., 2018) and WinoBias (Zhao et al., 2018). These are coreference resolution datasets in which each sentence contains a primary entity which is co-referent with a pronoun – the doctor in the first example above and the developer in the second – and a secondary entity – the nurse and the cleaner respectively. WinoMT evaluation extracts the grammatical gender of the primary entity from each translation hypothesis by automatic word alignment followed by morphological analysis. WinoMT then compares the translated primary entity with the gold gender, with the objective being a correctly gendered translation. The authors emphasise the following metrics over the challenge set: • Accuracy – percentage of hypotheses with the correctly gendered primary entity. • ∆G – difference in F1 score between the set of sentences with masculine entities and the set with feminine entities. • ∆S – difference in accuracy between the set of sentences with pro-stereotypical (‘pro’) entities and those with anti-stereotypical (‘anti’) entities, as determined by Zhao et al. (2018) using US labour statistics. For example, the ‘pro’ set contains male doctors and female nurses, while ‘anti’ contains female doctors and male nurses. Our main objective is increasing accuracy. We also report on ∆G and ∆S for ease of comparison to previous work. Ideally the absolute values of ∆G and ∆S should be close to 0. A high positive ∆G indicates that a model translates male entities better, while a high positive ∆S indicates that a model stereotypes male and female entities. Large negative values for ∆G and ∆S, indicating a bias towards female or anti-stereotypical translation, are as undesirable as large positive values. We note that ∆S can be significantly skewed by low-accuracy systems. A model generating male forms for most test sentences, stereotypical roles or not, will have very low ∆S, since its pro- and anti-stereotypical class accuracy will both be about 50%. Consequently in Appendix A we report: • M:F – ratio of hypotheses with male predictions to those with female predictions. This should be close to 1.0, since WinoMT balances male- and female-labelled sentences. M:F correlates strongly with ∆G, but we consider M:F 7726 7727 easier to interpret, particularly since very high or low M:F reduce the relevance of ∆S. Finally, we wish to reduce gender bias without reducing translation performance. We report BLEU (Papineni et al., 2002) on separate, general test sets for each language pair. WinoMT is designed to work without target language references, and so it is not possible to measure translation performance on this set by measures such as BLEU. 2.2 Gender debiased datasets 2.2.1 Handcrafted profession dataset Our hypothesis is that the absence of gender bias can be treated as a small domain for the purposes of NMT model adaptation. 
In this case a well-formed small dataset may give better results than attempts at debiasing the entire original dataset. We therefore construct a tiny, trivial set of gender-balanced English sentences which we can easily translate into each target language. The sentences follow the template:

The [PROFESSION] finished [his|her] work.

We refer to this as the handcrafted set (available at https://github.com/DCSaunders/gender-debias). Each profession is from the list collected by Prates et al. (2019) from US labour statistics. We simplify this list by removing field-specific adjectives. For example, we have a single profession ‘engineer’, as opposed to specifying industrial engineer, locomotive engineer, etc. In total we select 194 professions, giving just 388 sentences in a gender-balanced set. With manually translated masculine and feminine templates, we simply translate the masculine and feminine forms of each listed profession for each target language. In practice this translation is via an MT first-pass for speed, followed by manual checking, but given available lexicons this could be further automated.

We note that the handcrafted sets contain no examples of coreference resolution and very little variety in terms of grammatical gender. A set of more complex sentences targeted at the coreference task might further improve WinoMT scores, but would be more difficult to produce for new languages.

We wish to distinguish between a model which improves gender translation, and one which improves its WinoMT scores simply by learning the vocabulary for previously unseen or uncommon professions. We therefore create a handcrafted no-overlap set, removing source sentences with professions occurring in WinoMT to leave 216 sentences. We increase this set back to 388 examples with balanced adjective-based sentences in the same pattern, e.g. The tall [man|woman] finished [his|her] work.

2.2.2 Counterfactual datasets

Figure 1: Generating counterfactual datasets for adaptation. The Original set is 1||2, a simple subset of the full dataset. FTrans original is 1||3, FTrans swapped is 4||5, and Balanced is 1,4||2,5.

For contrast, we fine-tune on an approximated counterfactual dataset. Counterfactual data augmentation is an intuitive solution to bias from data over-representation (Lu et al., 2018). It involves identifying the subset of sentences containing bias – in this case gendered terms – and, for each one, adding an equivalent sentence with the bias reversed – in this case a gender-swapped version.

While counterfactual data augmentation is relatively simple for sentences in English, the process for inflected languages is challenging, involving identifying and updating words that are co-referent with all gendered entities in a sentence. Gender-swapping MT training data additionally requires that the same entities are swapped in the corresponding parallel sentence. A robust scheme for gender-swapping multiple entities in inflected language sentences directly, together with corresponding parallel text, is beyond the scope of this paper.

Instead we suggest a rough but straightforward approach for counterfactual data augmentation for NMT which to the best of our knowledge is the first application to parallel sentences. We first perform simple gender-swapping on the subset of the English source sentences with gendered terms. We use the approach described in Zhao et al. (2018), which swaps a fixed list of gendered stopwords (e.g. man / woman, he / she); their stopword list and swapping script are available at https://github.com/uclanlp/corefBias.
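Because the swap operates over a closed stopword list, the source-side step can be sketched in a few lines. This is a simplified stand-in for the released corefBias script, not the script itself; the pair list below is a small illustrative subset of the full resource.

```python
# Simplified sketch of stopword-based gender-swapping for English source text.
# The pair list is an illustrative subset; the full list and script are
# released with Zhao et al. (2018) at https://github.com/uclanlp/corefBias.
SWAP_PAIRS = [
    ("he", "she"), ("him", "her"), ("his", "her"),
    ("man", "woman"), ("men", "women"),
    ("father", "mother"), ("brother", "sister"),
    ("himself", "herself"), ("mr.", "mrs."),
]

# Build a symmetric lookup so swapping is (approximately) its own inverse.
# Note: "her" is ambiguous between "him" and "his"; a full implementation
# needs POS context to resolve it.
SWAP = {}
for m, f in SWAP_PAIRS:
    SWAP[m], SWAP[f] = f, m

def gender_swap(sentence: str) -> str:
    """Swap gendered stopwords in a whitespace-tokenised English sentence."""
    swapped = []
    for token in sentence.split():
        lower = token.lower()
        if lower in SWAP:
            new = SWAP[lower]
            # Preserve capitalisation of the original token.
            token = new.capitalize() if token[0].isupper() else new
        swapped.append(token)
    return " ".join(swapped)

if __name__ == "__main__":
    src = "The doctor told the nurse that he was late for his shift"
    print(gender_swap(src))
    # The doctor told the nurse that she was late for her shift
```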
We then greedily forward-translate the gender-swapped English sentences with a baseline NMT model trained on the full source and target text, producing gender-swapped target language sentences. This lets us compare four related sets for gender debiasing adaptation, as illustrated in Figure 1:

• Original: a subset of parallel sentences from the original training data where the source sentence contains gendered stopwords.
• Forward-translated (FTrans) original: the source side of the original set with forward-translated target sentences.
• Forward-translated (FTrans) swapped: the original source sentences are gender-swapped, then forward-translated to produce gender-swapped target sentences.
• Balanced: the concatenation of the original and FTrans swapped parallel datasets. This is twice the size of the other counterfactual sets.

Comparing performance in adaptation of FTrans swapped and FTrans original lets us distinguish between the effects of gender-swapping and of obtaining target sentences from forward-translation.

2.3 Debiasing while maintaining general translation performance

Fine-tuning a converged neural network on data from a distinct domain typically leads to catastrophic forgetting of the original domain (French, 1999). We wish to adapt to the gender-balanced domain without losing general translation performance. This is a particular problem when fine-tuning on the very small and distinct handcrafted adaptation sets.

2.3.1 Regularized training

Regularized training is a well-established approach for minimizing catastrophic forgetting during domain adaptation of machine translation (Barone et al., 2017). One effective form is Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), which in NMT has been shown to maintain or even improve original domain performance (Thompson et al., 2019; Saunders et al., 2019). In EWC a regularization term is added to the original log likelihood loss function L when training the debiased model (DB):

$L'(\theta^{DB}) = L(\theta^{DB}) + \lambda \sum_j F_j (\theta^{DB}_j - \theta^{B}_j)^2$    (1)

Here $\theta^{B}_j$ are the converged parameters of the original biased model, and $\theta^{DB}_j$ are the current debiased model parameters. $F_j = \mathbb{E}[\nabla^2 L(\theta^{B}_j)]$ is a Fisher information estimate over samples from the biased data under the biased model. We apply EWC when performance on the original validation set drops, selecting hyperparameter λ via validation set BLEU.
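Equation (1) translates directly into code. The following is a minimal sketch of an EWC fine-tuning step written in PyTorch style, not the authors' Tensor2Tensor implementation; the loss_fn callable, the batch objects and the fixed lam value are assumptions for illustration, and the diagonal Fisher is approximated here by mean squared gradients, a common practical stand-in for the second-derivative estimate.

```python
# Generic PyTorch sketch of the EWC penalty in equation (1).
# Not the authors' Tensor2Tensor implementation; hyperparameters are examples.
import torch

def estimate_diagonal_fisher(model, biased_batches, loss_fn):
    """Approximate F_j with mean squared gradients on a list of original-domain batches."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for batch in biased_batches:
        model.zero_grad()
        loss = loss_fn(model, batch)   # assumed to return the NMT training loss
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(biased_batches), 1) for n, f in fisher.items()}

def ewc_penalty(model, theta_b, fisher):
    """sum_j F_j (theta_j^DB - theta_j^B)^2, i.e. the penalty without lambda."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - theta_b[n]) ** 2).sum()
    return penalty

def debias_step(model, batch, loss_fn, optimizer, theta_b, fisher, lam=1e-2):
    """One fine-tuning step on gender-balanced data with the EWC term added."""
    optimizer.zero_grad()
    loss = loss_fn(model, batch) + lam * ewc_penalty(model, theta_b, fisher)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: snapshot the converged biased parameters first.
# theta_b = {n: p.detach().clone() for n, p in model.named_parameters()}
# fisher  = estimate_diagonal_fisher(model, biased_batches, loss_fn)
# for batch in handcrafted_batches:
#     debias_step(model, batch, loss_fn, optimizer, theta_b, fisher, lam)
```

In the paper λ is chosen via validation-set BLEU once original-domain performance starts to drop; the fixed value above is only a placeholder.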
2.3.2 Gender-inflected search spaces for rescoring with debiased models

Figure 2: Finite State Transducers for lattice rescoring. (a) A subset of flower transducer T: T maps vocabulary to itself as well as to differently-gendered inflections. (b) Acceptor Y^B representing the biased first-pass translation y^B for source fragment ‘the doctor’: the German hypothesis has the male form. (c) Gender-inflected search space constructed from the biased hypothesis ‘der Arzt’: projection of the composition Y^B ◦ T contains paths with differently-gendered inflections of the original biased hypothesis. This lattice can now be rescored by a debiased model.

An alternative approach for avoiding catastrophic forgetting takes inspiration from lattice rescoring for NMT (Stahlberg et al., 2016) and Grammatical Error Correction (Stahlberg et al., 2019). We assume we have two NMT models. With one we decode fluent translations which contain gender bias (B). For the one-best hypothesis we would translate:

$y^{B} = \operatorname{argmax}_y \, p^{B}(y|x)$    (2)

The other model has undergone debiasing (DB) at a cost to translation performance, producing:

$y^{DB} = \operatorname{argmax}_y \, p^{DB}(y|x)$    (3)

We construct a flower transducer T that maps each word in the target language’s vocabulary to itself, as well as to other forms of the same word with different gender inflections (Figure 2a). We also construct Y^B, a lattice with one path representing the biased but fluent hypothesis y^B (Figure 2b). The acceptor $P(y^{B}) = \operatorname{proj}_{\mathrm{output}}(Y^{B} \circ T)$ defines a language consisting of all the gender-inflected versions of the biased first-pass translation y^B that are allowed by T (Figure 2c). We can now decode with lattice rescoring (LR) by constraining inference to P(y^B):

$y^{LR} = \operatorname{argmax}_{y \in P(y^{B})} \, p^{DB}(y|x)$    (4)

In practice we use beam search to decode the various hypotheses, and construct T using heuristics on large vocabulary lists for each target language.
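The paper builds P(y^B) by OpenFST composition with the flower transducer T. As a simplified illustration only (not the SGNMT/OpenFST implementation used here), the same search space can be enumerated directly for a short first-pass hypothesis and rescored with any scoring function standing in for the debiased model p^DB(y|x); the toy inflection table and scorer below are assumptions for the example.

```python
# Simplified illustration of the gender-inflected search space of Section 2.3.2.
# The paper composes an acceptor for the first-pass hypothesis with a flower
# transducer T in OpenFST; here we enumerate the same paths directly.
from itertools import product
from typing import Callable, Dict, List, Sequence

# Toy stand-in for T: each surface form maps to itself plus alternately
# gendered inflections. A real table is built from large vocabulary lists.
INFLECTIONS: Dict[str, List[str]] = {
    "der": ["der", "die"],
    "Arzt": ["Arzt", "Ärztin"],
}

def inflected_paths(hypothesis: Sequence[str]) -> List[List[str]]:
    """All gender-inflected variants of the biased first-pass hypothesis y_B."""
    options = [INFLECTIONS.get(tok, [tok]) for tok in hypothesis]
    return [list(path) for path in product(*options)]

def rescore(source: str,
            hypothesis: Sequence[str],
            score_fn: Callable[[str, Sequence[str]], float]) -> List[str]:
    """Pick the argmax over P(y_B) under a score standing in for p_DB(y|x)."""
    return max(inflected_paths(hypothesis), key=lambda y: score_fn(source, y))

if __name__ == "__main__":
    # Placeholder scorer: a real system would query the adapted NMT model.
    def toy_score(source: str, target: Sequence[str]) -> float:
        # Reward feminine forms when the source marks the entity as female.
        feminine = {"die", "Ärztin"}
        return sum(tok in feminine for tok in target) if " she " in source else 0.0

    best = rescore("The doctor said she was busy", ["der", "Arzt"], toy_score)
    print(" ".join(best))   # die Ärztin (under the toy scorer)
```

As in the paper, the enumerated space also contains non-word mixtures such as ‘die Arzt’, which a debiased model is expected to score poorly.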
3 Experiments

3.1 Languages and data

WinoMT provides an evaluation framework for translation from English to eight diverse languages. We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he). Our selection covers three language groups with varying linguistic properties: Germanic, Romance and Semitic. Training data available for each language pair also varies in quantity and quality. We filter training data based on parallel sentence lengths and length ratios.

For en-de, we use 17.6M sentence pairs from WMT19 news task datasets (Barrault et al., 2019). We validate on newstest17 and test on newstest18. For en-es we use 10M sentence pairs from the United Nations Parallel Corpus (Ziemski et al., 2016). While still a large set, the UNCorpus exhibits far less diversity than the en-de training data. We validate on newstest12 and test on newstest13. For en-he we use 185K sentence pairs from the multilingual TED talks corpus (Cettolo et al., 2014). This is both a specialized domain and a much smaller training set. We validate on the IWSLT 2012 test set and test on IWSLT 2014.

Table 1 summarises the sizes of datasets used, including their proportion of gendered sentences and ratio of sentences in the English source data containing male and female stopwords. A gendered sentence contains at least one English gendered stopword as used by Zhao et al. (2018). Interestingly all three datasets have about the same proportion of gendered sentences: 11-12% of the overall set. While en-es appears to have a much more balanced gender ratio than the other pairs, examining the data shows this stems largely from sections of the UNCorpus containing phrases like ‘empower women’ and ‘violence against women’, rather than gender-balanced professional entities.

Table 1: Parallel sentence counts. A gendered sentence pair has minimum one gendered stopword on the English side. M:F is the ratio of male vs female gendered training sentences.

         Training   Gendered training   M:F   Test
en-de    17.5M      2.1M                2.4   3K
en-es    10M        1.1M                1.1   3K
en-he    185K       21.4K               1.8   1K

For en-de and en-es we learn joint 32K BPE vocabularies on the training data (Sennrich et al., 2016). For en-he we use separate source and target vocabularies. The Hebrew vocabulary is a 2k-merge BPE vocabulary, following the recommendations of Ding et al. (2019) for smaller vocabularies when translating into lower-resource languages. For the en-he source vocabulary we experimented both with learning a new 32K vocabulary and with reusing the joint BPE vocabulary trained on the largest set – en-de – which lets us initialize the en-he system with the pre-trained en-de model. The latter resulted in higher BLEU and faster training.

3.2 Training and inference

For all models we use a Transformer model (Vaswani et al., 2017) with the ‘base’ parameter settings given in Tensor2Tensor (Vaswani et al., 2018). We train baselines to validation set BLEU convergence on one GPU, delaying gradient updates by factor 4 to simulate 4 GPUs (Saunders et al., 2018). During fine-tuning, training is continued without learning rate resetting. Normal and lattice-constrained decoding is via SGNMT (https://github.com/ucam-smt/sgnmt) with beam size 4. BLEU scores are calculated for cased, detokenized output using SacreBLEU (Post, 2018).

3.3 Lattice rescoring with debiased models

For lattice rescoring we require a transducer T containing gender-inflected forms of words in the target vocabulary. To obtain the vocabulary for German we use all unique words in the full target training dataset. For Spanish and Hebrew, which have smaller and less diverse training sets, we use 2018 OpenSubtitles word lists (accessed Oct 2019 from https://github.com/hermitdave/FrequencyWords/). We then use DEMorphy (Altinok, 2018) for German, spaCy (Honnibal and Montani, 2017) for Spanish and the small set of gendered suffixes for Hebrew (Schwarzwald, 1982) to approximately lemmatize each vocabulary word and generate its alternately-gendered forms. While there are almost certainly paths in T containing non-words, we expect these to have low likelihood under the debiasing models. For lattice compositions we use the efficient OpenFST implementations (Allauzen et al., 2007).

3.4 Results

3.4.1 Baseline analysis

In Table 2 we compare our three baselines to commercial systems on WinoMT, using results quoted directly from Stanovsky et al. (2019). Our baselines achieve comparable accuracy, masculine/feminine bias score ∆G and pro/anti stereotypical bias score ∆S to four commercial translation systems, outscoring at least one system for each metric on each language pair.

The ∆S for our en-es baseline is surprisingly small. Investigation shows this model predicts male and female entities in a ratio of over 6:1. Since almost all entities are translated as male, pro- and anti-stereotypical class accuracy are both about 50%, making ∆S very small. This highlights the importance of considering ∆S in the context of ∆G and the M:F prediction ratio.

3.4.2 Counterfactual adaptation

Table 3 compares our baseline model with the results of unregularised fine-tuning on the counterfactual sets described in Section 2.2.2. Fine-tuning for one epoch on original, a subset of the original data with gendered English stopwords, gives slight improvement in WinoMT accuracy and ∆G for all language pairs, while ∆S worsens. We suggest this set consolidates examples present in the full dataset, improving performance on gendered entities generally but emphasizing stereotypical roles.

On the FTrans original set ∆G increases sharply relative to the original set, while ∆S decreases. We suspect this set suffers from bias amplification (Zhao et al., 2017) introduced by the baseline system during forward-translation. The model therefore over-predicts male entities even more heavily than we would expect given the gender makeup of the adaptation data’s source side. Over-predicting male entities lowers ∆S artificially.
Adapting to FTrans swapped increases accuracy and decreases both ∆G and ∆S relative to the baseline for en-de and en-es. This is the desired result, but not a particularly strong one, and it is not replicated for en-he. The balanced set has a very similar effect to the FTrans swapped set, with a smaller test BLEU difference from the baseline. We do find that the largest improvement in WinoMT accuracy consistently corresponds to the model predicting male and female entities in the closest ratio (see Appendix A). However, the best ratios for models adapted to these datasets are 2:1 or higher, and the accuracy improvement is small. The purpose of EWC regularization is to avoid catastrophic forgetting of general translation ability. This does not occur in the counterfactual experiments, so we do not apply EWC. Moreover, WinoMT accuracy gains are small with standard fine-tuning, which allows maximum adaptation: we suspect EWC would prevent any improvements. Overall, improvements from fine-tuning on counterfactual datasets (FTrans swapped and balanced) are present. However, they are not very different from the improvements when fine-tuning on equivalent non-counterfactual sets (original and FTrans original). Improvements are also inconsistent across language pairs. 3.4.3 Handcrafted profession set adaptation Results for fine-tuning on the handcrafted set are given in lines 3-6 of Table 4. These experiments take place in minutes on a single GPU, compared to several hours when fine-tuning on the counterfactual sets and far longer if training from scratch. Fine-tuning on the handcrafted sets gives a much faster BLEU drop than fine-tuning on counterfactual sets. This is unsurprising since the handcrafted sets are domains of new sentences with consistent sentence length and structure. By contrast the counterfactual sets are less repetitive and close to subsets of the original training data, slowing forgetting. We believe the degradation here is limited only by the ease of fitting the small handcrafted sets. Line 4 of Table 4 adapts to the handcrafted set, stopping when validation BLEU degrades by 5% on each language pair. This gives a WinoMT accuracy up to 19 points above the baseline, far more improvement than the best counterfactual result. Difference in gender score ∆G improves by at least 7731 en-de en-es en-he Acc ∆G ∆S Acc ∆G ∆S Acc ∆G ∆S Microsoft 74.1 0.0 30.2 47.3 36.8 23.2 48.1 14.9 32.9 Google 59.4 12.5 12.5 53.1 23.4 21.3 53.7 7.9 37.8 Amazon 62.4 12.9 16.7 59.4 15.4 22.3 50.5 10.3 47.3 SYSTRAN 48.6 34.5 10.3 45.6 46.3 15.0 46.6 20.5 24.5 Baseline 60.1 18.6 13.4 49.6 36.7 2.0 51.3 15.1 26.4 Table 2: WinoMT accuracy, masculine/feminine bias score ∆G and pro/anti stereotypical bias score ∆S for our baselines compared to commercial systems, whose scores are quoted directly from Stanovsky et al. (2019). en-de en-es en-he BLEU Acc ∆G ∆S BLEU Acc ∆G ∆S BLEU Acc ∆G ∆S Baseline 42.7 60.1 18.6 13.4 27.8 49.6 36.7 2.0 23.8 51.3 15.1 26.4 Original 41.8 60.7 15.9 15.6 28.3 53.0 24.3 10.8 23.5 53.6 12.2 31.7 FTrans original 43.3 60.0 20.0 13.9 27.4 51.6 31.6 -4.8 23.4 48.7 23.0 20.9 FTrans swapped 43.4 63.0 15.4 12.7 27.4 53.7 24.5 -3.8 23.7 48.1 20.7 22.7 Balanced 42.5 64.0 12.6 12.4 27.7 52.8 26.2 1.9 23.8 48.3 20.8 24.0 Table 3: General test set BLEU and WinoMT scores after unregularised fine-tuning the baseline on four genderbased adaptation datasets. Improvements are inconsistent across language pairs. 
en-de en-es en-he BLEU Acc ∆G ∆S BLEU Acc ∆G ∆S BLEU Acc ∆G ∆S 1 Baseline 42.7 60.1 18.6 13.4 27.8 49.6 36.7 2.0 23.8 51.3 15.1 26.4 2 Balanced 42.5 64.0 12.6 12.4 27.7 52.8 26.2 1.9 23.8 48.3 20.8 24.0 3 Handcrafted (no overlap) 40.6 71.2 3.9 10.6 26.5 64.1 9.5 -10.3 23.1 56.5 -6.2 28.9 4 Handcrafted 40.8 78.3 -0.7 6.5 26.7 68.6 5.2 -8.7 22.9 65.7 -3.3 20.2 5 Handcrafted (converged) 36.5 85.3 -3.2 6.3 25.3 72.4 0.8 -3.9 22.5 72.6 -4.2 21.0 6 Handcrafted EWC 42.2 74.2 2.2 8.4 27.2 67.8 5.8 -8.2 23.3 65.2 -0.4 25.3 7 Rescore 1 with 3 42.7 68.3 7.6 11.8 27.8 62.4 11.1 -9.7 23.9 56.2 2.8 23.0 8 Rescore 1 with 4 42.7 74.5 2.1 6.5 27.8 64.2 9.7 -10.8 23.9 58.4 2.7 18.6 9 Rescore 1 with 5 42.5 81.7 -2.4 1.5 27.7 68.4 5.6 -8.0 23.6 63.8 0.7 12.9 Table 4: General test set BLEU and WinoMT scores after fine-tuning on the handcrafted profession set, compared to fine-tuning on the most consistent counterfactual set. Lines 1-2 duplicated from Table 3. Lines 3-4 vary adaptation data. Lines 5-6 vary adaptation training procedure. Lines 7-9 apply lattice rescoring to baseline hypotheses. a factor of 4. Stereotyping score ∆S also improves far more than for counterfactual fine-tuning. Unlike the Table 3 results, the improvement is consistent across all WinoMT metrics and all language pairs. The model adapted to no-overlap handcrafted data (line 3) gives a similar drop in BLEU to the model in line 4. This model also gives stronger and more consistent WinoMT improvements over the baseline compared to the balanced counterfactual set, despite the implausibly strict scenario of no English profession vocabulary in common with the challenge set. This demonstrates that the adapted model does not simply memorise vocabulary. The drop in BLEU and improvement on WinoMT can be explored by varying the training procedure. The model of line 5 simply adapts to handcrafted data for more iterations with no regularisation, to approximate loss convergence on the handcrafted set. This leads to a severe drop in BLEU, but even higher WinoMT scores. In line 6 we regularise adaptation with EWC. There is a trade-off between general translation performance and WinoMT accuracy. With EWC regularization tuned to balance validation BLEU and WinoMT accuracy, the decrease is limited to about 0.5 BLEU on each language pair. Adapting to convergence, as in line 5, would lead to further WinoMT gains at the expense of BLEU. 3.4.4 Lattice rescoring with debiased models In lines 7-9 of Table 4 we consider lattice-rescoring the baseline output, using three models debiased on the handcrafted data. Line 7 rescores the general test set hypotheses (line 1) with a model adapted to handcrafted data that has no source language profession vocabulary overlap with the test set (line 3). This scheme shows no BLEU degradation from the baseline on any language and in fact a slight improvement on en-he. Accuracy improvements on WinoMT 7732 en-de en-es en-he Acc ∆G ∆S Acc ∆G ∆S Acc ∆G ∆S 1 82.0 (74.1) -3.0 (0.0) 4.0 (30.2) 65.8 (47.3) 3.8 (36.8) 1.9 (23.2) 63.9 (48.1) -2.6 (14.9) 23.8 (32.9) 2 80.0 (59.4) -3.0 (12.5) 2.7 (12.5) 68.9 (53.1) 0.6 (23.4) 4.6 (21.3) 64.6 (53.7) -1.8 (7.9) 21.5 (37.8) 3 81.8 (62.4) -2.6 (12.9) 4.3 (16.7) 71.1 (59.4) 0.7 (15.4) 6.7 (22.3) 62.8 (50.5) -1.1 (10.3) 26.9 (47.3) 4 78.4 (48.6) -4.0 (34.5) 5.3 (10.3) 66.0 (45.6) 4.2 (46.3) -2.1 (15.0) 62.5 (46.6) -2.0 (20.5) 10.2 (24.5) Table 5: We generate gender-inflected lattices from commercial system translations, collected by Stanovsky et al. 
(2019) (1: Microsoft, 2: Google, 3: Amazon, 4: SYSTRAN). We then rescore with the debiased model from line 5 of Table 4. Scores are for the rescored hypotheses, with bracketed baseline scores duplicated from Table 2. are only slightly lower than for decoding with the rescoring model directly, as in line 3. In line 8, lattice rescoring with the nonconverged model adapted to handcrafted data (line 4) likewise leaves general BLEU unchanged or slightly improved. When lattice rescoring the WinoMT challenge set, 79%, 76% and 49% of the accuracy improvement is maintained on en-de, en-es and en-he respectively. This corresponds to accuracy gains of up to 30% relative to the baselines with no general translation performance loss. In line 9, lattice-rescoring with the converged model of line 5 limits BLEU degradation to 0.2 BLEU on all languages, while maintaining 85%, 82% and 58% of the WinoMT accuracy improvement from the converged model for the three language pairs. Lattice rescoring with this model gives accuracy improvements over the baseline of 36%, 38% and 24% for en-de, en-es and en-he. Rescoring en-he maintains a much smaller proportion of WinoMT accuracy improvement than en-de and en-es. We believe this is because the en-he baseline is particularly weak, due to a small and non-diverse training set. The baseline must produce some inflection of the correct entity before lattice rescoring can have an effect on gender bias. 3.4.5 Reducing gender bias in ‘black box’ commercial systems Finally, in Table 5, we apply the gender inflection transducer to the commercial system translations5 listed in Table 2. We find rescoring these lattices with our strongest debiasing model (line 5 of Table 4) substantially improves WinoMT accuracy for all systems and language pairs. One interesting observation is that WinoMT accuracy after rescoring tends to fall in a fairly narrow range for each language relative to the performance range of the baseline systems. For example, a 25.5% range in baseline en-de accuracy 5The raw commercial system translations are provided by the authors of Stanovsky et al. (2019) at https://github. com/gabrielStanovsky/mt_gender becomes a 3.6% range after rescoring. This suggests that our rescoring approach is not limited as much by the bias level of the baseline system as by the gender-inflection transducer and the models used in rescoring. Indeed, we emphasise that the large improvements reported in Table 5 do not require any knowledge of the commercial systems or the data they were trained on; we use only the translation hypotheses they produce and our own rescoring model and transducer. 4 Conclusions We treat the presence of gender bias in NMT systems as a domain adaptation problem. We demonstrate strong improvements under the WinoMT challenge set by adapting to tiny, handcrafted gender-balanced datasets for three language pairs. While naive domain adaptation leads to catastrophic forgetting, we further demonstrate two approaches to limit this: EWC and a lattice rescoring approach. Both allow debiasing while maintaining general translation performance. Lattice rescoring, although a two-step procedure, allows far more debiasing and potentially no degradation, without requiring access to the original model. We suggest small-domain adaptation as a more effective and efficient approach to debiasing machine translation than counterfactual data augmentation. 
We do not claim to fix the bias problem in NMT, but demonstrate that bias can be reduced without degradation in overall translation quality. Acknowledgments This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service6 funded by EPSRC Tier-2 capital grant EP/P020259/1. 6http://www.hpc.cam.ac.uk References Lauren Ackerman. 2019. Syntactic and cognitive issues in investigating gendered coreference. Glossa: a journal of general linguistics, 4(1). Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In International Conference on Implementation and Application of Automata, pages 11–23. Springer. Duygu Altinok. 2018. DEMorphy, German language morphological analyzer. arXiv preprint arXiv:1803.00902. Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, and Rico Sennrich. 2017. Regularization techniques for fine-tuning in neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1489–1494, Copenhagen, Denmark. Association for Computational Linguistics. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems, pages 4349–4357. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign, IWSLT 2014. In Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam, page 57. Shuoyang Ding, Adithya Renduchintala, and Kevin Duh. 2019. A call for prudent choice of subword merge operations in neural machine translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 204–213, Dublin, Ireland. European Association for Machine Translation. Joel Escud´e Font and Marta R. Costa-juss`a. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 147–154, Florence, Italy. Association for Computational Linguistics. M. Amin Farajian, Marco Turchi, Matteo Negri, and Marcello Federico. 2017. Multi-domain neural machine translation through unsupervised adaptation. In Proceedings of the Second Conference on Machine Translation, pages 127–137, Copenhagen, Denmark. Association for Computational Linguistics. Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for Neural Machine Translation. CoRR, abs/1612.06897. Robert M French. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128–135. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614. AI HLEG. 2019. Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with bloom embeddings. Convolutional Neural Networks and Incremental Parsing. Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598, Berlin, Germany. Association for Computational Linguistics. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114(13):3521–3526. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714. Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 312–318, Melbourne, Australia. Association for Computational Linguistics. Julia Misersky, Asifa Majid, and Tineke M Snijders. 2019. Grammatical gender in German influences how role-nouns are interpreted: Evidence from erps. Discourse Processes, 56(8):643–654. 7733 Amit Moryossef, Roee Aharoni, and Yoav Goldberg. 2019. Filling gender & number gaps in neural machine translation with black-box context injection. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 49–54, Florence, Italy. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799–2804, Brussels, Belgium. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Marcelo OR Prates, Pedro H Avelar, and Lu´ıs C Lamb. 2019. Assessing gender bias in machine translation: a case study with google translate. Neural Computing and Applications, pages 1–19. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Danielle Saunders, Felix Stahlberg, Adri`a de Gispert, and Bill Byrne. 2018. 
Multi-representation ensembles and delayed SGD updates improve syntaxbased NMT. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 319– 325, Melbourne, Australia. Association for Computational Linguistics. Danielle Saunders, Felix Stahlberg, Adri`a de Gispert, and Bill Byrne. 2019. Domain adaptive inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 222–228, Florence, Italy. Association for Computational Linguistics. Ora Schwarzwald. 1982. Feminine formation in modern Hebrew. Hebrew Annual Review, 6:153–178. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Felix Stahlberg, Christopher Bryant, and Bill Byrne. 2019. Neural grammatical error correction with finite state transducers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4033–4039. Felix Stahlberg, Eva Hasler, Aurelien Waite, and Bill Byrne. 2016. Syntactically guided neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 299–305, Berlin, Germany. Association for Computational Linguistics. Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684, Florence, Italy. Association for Computational Linguistics. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630–1640, Florence, Italy. Association for Computational Linguistics. Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062–2068, Minneapolis, Minnesota. Association for Computational Linguistics. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for neural machine translation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 193–199, Boston, MA. Association for Machine Translation in the Americas. 
A WinoMT male:female prediction ratio

We report ∆G on WinoMT for easy comparison to previous work, but also find that the M:F prediction ratio on WinoMT is an intuitive and interesting metric. Tables 6 and 7 expand on the results of Tables 3 and 4 respectively.

                     en-de              en-es              en-he
                BLEU  Acc   M:F    BLEU  Acc   M:F    BLEU  Acc   M:F
Baseline        42.7  60.1  3.4    27.8  49.6  6.3    23.8  51.3  2.2
Original        41.8  60.7  3.1    28.3  53.0  4.0    23.5  53.6  2.0
FTrans original 43.3  60.0  3.9    27.4  51.6  5.4    23.4  48.7  3.0
FTrans swapped  43.4  63.0  3.1    27.4  53.7  4.0    23.7  48.1  2.6
Balanced        42.5  64.0  2.7    27.7  52.8  4.3    23.8  48.3  2.7

Table 6: General test set BLEU and WinoMT scores after unregularised fine-tuning the baseline on four gender-based adaptation datasets.

                             en-de              en-es              en-he
                        BLEU  Acc   M:F    BLEU  Acc   M:F    BLEU  Acc   M:F
1 Baseline              42.7  60.1  3.4    27.8  49.6  6.3    23.8  51.3  2.2
2 Balanced              42.5  64.0  2.7    27.7  52.8  4.3    23.8  48.3  2.7
3 Handcrafted (no overlap) 40.6 71.2 1.7   26.5  64.1  2.4    23.1  56.5  0.8
4 Handcrafted           40.8  78.3  1.3    26.7  68.6  1.9    22.9  65.7  0.9
5 Handcrafted (converged) 36.5 85.3  0.9   25.3  72.4  1.5    22.5  72.6  1.0
6 Handcrafted EWC       42.2  74.2  1.6    27.2  67.8  2.0    23.3  65.2  1.2
7 Rescore 1 with 3      42.7  68.3  2.2    27.8  62.4  2.3    23.9  56.2  1.3
8 Rescore 1 with 4      42.7  74.5  1.6    27.8  64.2  2.1    23.9  58.4  1.3
9 Rescore 1 with 5      42.5  81.7  1.1    27.7  68.4  1.8    23.6  63.8  1.3

Table 7: General test set BLEU and WinoMT scores after fine-tuning on the handcrafted profession set, compared to fine-tuning on the most consistent counterfactual set. Lines 1-2 duplicated from Table 6. Lines 3-4 vary adaptation data. Lines 5-6 vary the adaptation training procedure. Lines 7-9 apply lattice rescoring to baseline hypotheses.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7737–7746 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7737 Translationese as a Language in “Multilingual” NMT Parker Riley△∗, Isaac Caswell▽, Markus Freitag▽, David Grangier▽ △University of Rochester ▽Google Research Abstract Machine translation has an undesirable propensity to produce “translationese” artifacts, which can lead to higher BLEU scores while being liked less by human raters. Motivated by this, we model translationese and original (i.e. natural) text as separate languages in a multilingual model, and pose the question: can we perform zero-shot translation between original source text and original target text? There is no data with original source and original target, so we train a sentence-level classifier to distinguish translationese from original target text, and use this classifier to tag the training data for an NMT model. Using this technique we bias the model to produce more natural outputs at test time, yielding gains in human evaluation scores on both adequacy and fluency. Additionally, we demonstrate that it is possible to bias the model to produce translationese and game the BLEU score, increasing it while decreasing human-rated quality. We analyze these outputs using metrics measuring the degree of translationese, and present an analysis of the volatility of heuristic-based train-data tagging. 1 Introduction “Translationese” is a term that refers to artifacts present in text that was translated into a given language that distinguish it from text originally written in that language (Gellerstam, 1986). These artifacts include lexical and word order choices that are influenced by the source language (Gellerstam, 1996) as well as the use of more explicit and simpler constructions (Baker et al., 1993). These differences between translated and original text mean that the direction in which parallel data (bitext) was translated is potentially important for machine translation (MT) systems. Most *Work done while at Google Research. Translated Original Translated Original Target Source Src-Orig Data Trg-Orig Data MT Training Bitext Figure 1: Illustration of MT train+test parallel data, organized into quadrants based on whether the source or target is translated or original. parallel data is either source-original (the source was translated into the target) or target-original (the target was translated into the source), though sometimes neither side is original because both were translated from a third language. Figure 1 illustrates the four possible combinations of translated and original source and target data. Recent work has examined the impact of translationese in MT evaluation, using the WMT evaluation campaign as the most prominent example. From 2014 through 2018, WMT test sets were constructed such that 50% of the sentence pairs are source-original (upper right quadrant of Figure 1) and the rest are target-original (lower left quadrant). Toral et al. (2018), Zhang and Toral (2019), and Graham et al. (2019) have examined the effect of this testing setup on MT evaluation, and have all argued that target-original test data should not be included in future evaluation campaigns because the translationese source is too easy to translate. 
While target-original test data does have the downside of a translationese source side, recent work has also shown that human raters prefer MT output that is closer in distribution to original target text than 7738 translationese (Freitag et al., 2019). This indicates that the target side of test data should also be original (upper left quadrant of Figure 1); however, it is unclear how to produce high-quality test data (let alone training data) that is simultaneously sourceand target-original. Because of this lack of original-to-original sentence pairs, we frame this as a zero-shot translation task, where translationese and original text are distinct languages or domains. We adapt techniques from zero-shot translation with multilingual models (Johnson et al., 2016), where the training pairs are tagged with a reserved token corresponding to the domain of the target side: translationese or original text. Tagging is helpful when the training set mixes data of different types by allowing the model to 1) see each pair’s type in training to preserve distinct behaviors and avoid regressing to a mean/dominant prediction across data types, and 2) elicit different behavior in inference, i.e. providing a tag at test time yields predictions resembling a specific data type. We then investigate what happens when the input is an original sentence in the source language and the model’s output is also biased to be original, a scenario never observed in training. Tagging in this fashion is not trivial, as most MT training sets do not annotate which pairs are sourceoriginal and which are target-original1, so in order to distinguish them we train binary classifiers to distinguish original and translated target text. Finally, we perform several analyses of tagging these “languages” and demonstrate that tagged back-translation (Caswell et al., 2019) can be framed as a simplified version of our method, and thereby improved by targeted decoding. Our contributions are as follows: 1. We propose two methods to train translationese classifiers using only monolingual text, coupled with synthetic text produced by machine translation. 2. Using only original→translationese and translationese→original training pairs, we apply techniques from zero-shot multilingual MT to enable original→original translation. 3. We demonstrate with human evaluations that this technique improves translation quality, both in terms of fluency and adequacy. 1Europarl (Koehn, 2005) is a notable exception, but it is somewhat small and not in the news domain. 4. We show that biasing the model to instead produce translationese outputs inflates BLEU scores while harming quality as measured by human evaluations. 2 Classifier Training + Tagging Motivated by prior work detailing the importance of distinguishing translationese from original text (Kurokawa et al., 2009; Lembersky et al., 2012; Toral et al., 2018; Zhang and Toral, 2019; Graham et al., 2019; Freitag et al., 2019; Edunov et al., 2019) as well as work in zero-shot translation (Johnson et al., 2016), we hypothesize that performance on the source-original translation task can be improved by distinguishing target-original and target-translationese examples in the training data and constructing an NMT model to perform zero-shot original→original translation. Because most MT training sets do not annotate each sentence pair’s original language, we train a binary classifier to predict whether the target side of a pair is original text in that language or translated from the source language. 
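As a concrete illustration of the tagging scheme described above, the sketch below prepends a reserved token to the source side of every training pair whose target side the classifier predicts to be original text. The tag string, the classifier interface, and the prepend-to-source convention (borrowed from multilingual NMT tagging) are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of classifier-based tagging of MT training bitext.
# The reserved token below would also need to be added to the NMT vocabulary.
TRG_ORIG_TAG = "<trg_orig>"


def tag_bitext(pairs, predict_target_original):
    """pairs: list of (source, target) strings.
    predict_target_original: stand-in for the trained binary classifier,
    mapping a target sentence to True (original) or False (translationese)."""
    tagged = []
    for src, tgt in pairs:
        if predict_target_original(tgt):
            # Prepend the reserved token, in the style of multilingual NMT tags.
            src = f"{TRG_ORIG_TAG} {src}"
        tagged.append((src, tgt))
    return tagged
```

At inference time, the same token can then be prepended (or omitted) to steer the model toward one output "language" or the other.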
This follows several prior works attempting to identify translations (Kurokawa et al., 2009; Koppel and Ordan, 2011; Lembersky et al., 2012). To train the classifier, we need target-language text annotated by whether it is original or translated. We use News Crawl data from WMT2 as targetoriginal data. It consists of news articles crawled from the internet, so we assume that most of them are not translations. Getting translated data is trickier; most human-translated pairs where the original language is annotated are only present in test sets, which are generally small. To sidestep this, we choose to use machine translation as a proxy for human translationese, based on the assumption that they are similar. This allows us to create classifier training data using only unannotated monolingual data. We propose two ways of doing this: using forward translation (FT) or round-trip translation (RTT). Both are illustrated in Figure 2. To generate FT data, we take source-language News Crawl data and translate it into the target language using a machine translation model trained on WMT training bitext. We can then train a classifier to distinguish the generated text from monolingual target-language text. One potential problem with the FT data set is that the original and translated pairs may differ not only 2http://www.statmt.org/wmt18/translation-task.html 7739 Figure 2: Illustration of data set creation for the FT and RTT translationese classifiers. The Source→Target and Target→Source nodes represent NMT systems. in the respects we care about (i.e. translationese), but also in content. Taking English→French as an example language pair, one could imagine that certain topics are more commonly reported on in original English language news than in French, and vice versa, e.g. news about American or French politics, respectively. The words and phrases representing those topics could then act as signals to the classifier to distinguish the original language. To address this, we also experiment with RTT data. For this approach we take target-language monolingual data and round-trip translate it with two machine translation models (target→source and then source→target), resulting in another target-language sentence that should contain the same content as the original sentence, alleviating the concern with FT data. Here we hope that the noise introduced by round-trip translation will be similar enough to human translationese to be useful for our downstream task. In both settings, we use the trained binary classifier to detect and tag training bitext pairs where the classifier predicted that the target side is original. 3 Experimental Set-up 3.1 Data We perform our experiments on WMT18 English→German bitext and WMT15 English→French bitext. We use WMT News Crawl for monolingual data (2007-2017 for German and 2007-2014 for French). We filter out sentences longer than 250 subwords (see Section 3.2 for the vocabulary used) and remove pairs whose length ratio is greater than 2. This results in about 5M pairs for English→German. We do not filter the English→French bitext, resulting in 41M sentence pairs. For monolingual data, we deduplicate and filter sentences with more than 70 tokens or 500 characters. For the experiments described later in Section 5.3, this monolingual data is back-translated with a target-to-source translation model; after doing so, we remove any sentence pairs where the back-translated source is longer than 75 tokens or 550 characters. 
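A minimal sketch of the length-based filters described in Section 3.1 is given below; whitespace tokens stand in for the subword units used in the paper, so the exact counts are only an approximation.

```python
def keep_bitext_pair(src, tgt, max_len=250, max_ratio=2.0):
    """Drop pairs that are too long or whose length ratio is too skewed (Section 3.1)."""
    n_src, n_tgt = len(src.split()), len(tgt.split())
    if n_src == 0 or n_tgt == 0:
        return False
    if n_src > max_len or n_tgt > max_len:
        return False
    return max(n_src, n_tgt) / min(n_src, n_tgt) <= max_ratio


def keep_monolingual(sentence, max_tokens=70, max_chars=500):
    """Monolingual-side filter applied before deduplication and back-translation."""
    return len(sentence.split()) <= max_tokens and len(sentence) <= max_chars
```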
This results in 216.5M sentences for English→German (of which we only use 24M at a time) and 39M for English→French. As a final step, we use an in-house language identification tool based on the publicly-available Compact Language Detector 23 to remove all pairs with the incorrect source or target language. This was motivated by observing that some training pairs had the incorrect language on one side, including cases where both sides were the same; Khayrallah and Koehn (2018) found that this type of noise is especially harmful to neural models. The classifiers were trained on the target language monolingual data in addition to either an equal amount of source language monolingual data machine-translated into the target language (for the FT classifiers) or the same target sentences roundtrip translated through the source language with MT (for the RTT classifiers). In both cases, the MT models were trained only with WMT bitext. The models used to generate the synthetic data have BLEU (Papineni et al., 2002) performance as follows on newstest2014/full: German→English 31.8; English→German 28.5; French→English 39.2; English→French 40.6. Here and elsewhere, we report BLEU scores with SacreBLEU (Post, 2018); see Section 3.3. Both language pairs considered in this work are high-resource. While translationese is a potential concern for all language pairs, in low-resource settings it is overshadowed by general quality concerns stemming from the lack of training data. We leave for future work the application of these techniques to low-resource language pairs. 3.2 Architecture and Training Our NMT models use the transformer-big architecture (Vaswani et al., 2017) implemented in lingvo (Shen et al., 2019) with a shared sourcetarget byte-pair-encoding (BPE) vocabulary (Sennrich et al., 2016b) of 32k types. To stabilize training, we use exponentially weighted moving average (EMA) decay (Buduma and Locascio, 2017). 3https://github.com/CLD2Owners/cld2 7740 Language Classifier Bitext BT Type % Orig. % Orig. French FT 47% 84% RTT 30% 68% German FT 22%* 82% RTT 29%* 70% Table 1: Percentage of training data where the target side was classified as original. English→German pairs with predicted original German (marked with a *) were upsampled to balance both bitext subsets’ sizes. Checkpoints were picked by best dev BLEU on a set consisting of a tagged and untagged version of every input. For the translationese classifier, we trained a three-layer CNN-based classifier optimized with Adagrad. We picked checkpoints by F1 on the development set, which was newstest2015 for English→German and a subset of newstest2013 containing 500 English-original and 500 Frenchoriginal sentence pairs for English→French. We found that the choice of architecture (RNN/CNN) and hyperparameters did not make a substantial difference in classifier accuracy. 3.3 Evaluation We report BLEU (Papineni et al., 2002) scores with SacreBLEU (Post, 2018) and include the identification string4 to facilitate comparison with future work. We also run human evaluations for the best performing systems (Section 4.3). 4 Results and Discussion 4.1 Classifier Accuracy Before evaluating the usefulness of our translationese classifiers for the downstream task of machine translation, we can first evaluate how accurate they are at distinguishing original text from human translations. We use WMT test sets for this evaluation, because they consist of source-original and target-original sentence pairs in equal number. 
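For reference, BLEU scores like those reported here can be computed with the sacrebleu Python package; the toy example below is only a sketch of one possible invocation (the `intl` tokenizer mirrors the `tok.intl` entry in the paper's signature), not the authors' exact setup.

```python
import sacrebleu

# Toy example; in practice hyps/refs would be the detokenized system outputs
# and the reference translations, aligned by index.
hyps = ["Le chat est sur le tapis .", "Il pleut aujourd'hui ."]
refs = ["Le chat est sur le tapis .", "Il pleut beaucoup aujourd'hui ."]

bleu = sacrebleu.corpus_bleu(hyps, [refs], tokenize="intl")
print(f"BLEU = {bleu.score:.1f}")
```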
For French, the FT classifier scored 0.81 F1 and the RTT classifier scored 0.68 on newstest2014/full. For German, the FT classifier achieved 0.85 F1 and the RTT classifier scored 0.65 on newstest2015. We note that while the FT classifiers perform reasonably well, the RTT classifiers are less effective. This result is in line with prior work by 4BLEU + case.mixed + lang.LANGUAGE PAIR + numrefs.1 + smooth.exp + test.SET + tok.intl + version.1.2.15 Test set → Src-Orig Trg-Orig Both Decode → Nt. Tr. Tr. Nt. Match Match? →      a. En→Fr: Avg. newstest20{14/full,15} Untagged 39.5 39.5 44.5 44.5 42.0 FT clf. 37.7 40.0 42.5 45.0 42.5 RTT clf. 38.0 39.4 43.2 44.1 41.8 b. En→De: Avg. newstest20{14/full,16,17,18} Untagged 36.3 36.3 30.0 30.0 34.0 FT clf. 28.3 36.0 29.4 29.8 33.6 RTT clf. 32.3 36.2 30.0 30.2 33.9 Table 2: Average BLEU for models trained on (a) WMT 2014 English→French bitext and (b) WMT 2018 English→German bitext, tagged according to target side classifier predictions. The tag controls the output domain: translationese (“Tr”) or original/natural text (“Nt.”). Matching output and test domains (“Match?” row) for both halves (“Both” column) achieves the highest combined BLEU. Kurokawa et al. (2009), who trained an SVM classifier on French sentences to detect translations from English. They used word n-gram features for their classifier and achieved 0.77 F1, but were worried about a potential content effect and so also trained a classifier where nouns and verbs were replaced with corresponding part-of-speech (POS) tags, achieving 0.69 F1. Note that they tested on the Canadian Hansard corpus (containing Canadian parliamentary transcripts in English and French) while we tested on WMT test sets, so the numbers are not directly comparable, but it is interesting to see the similar trends in comparing content-aware and content-unaware versions of the same method. We also point out that Kurokawa et al. (2009) both trained and tested with humantranslated sentences, while we trained our classifiers with machine-translated sentences while still testing on human-translated data. The portion of our data classified as targetoriginal by each classifier is reported in Table 1. 4.2 NMT with Translationese-Classified Bitext Table 2a shows the BLEU scores of three models all trained on WMT 2014 English→French bitext. They differ in how the data was partitioned: either it wasn’t, or tags were applied to those sentence pairs with a target side that a classifier predicted to be original French. We first note that the model trained on data tagged by the round-trip translation 7741 Test set → Src-Orig Tagging ↓ Decode BLEU % Preferred Untagged 43.9 26.6% FT clf. Natural 41.5 31.9% Test set → Src-Orig Tagging ↓ Decode BLEU % Preferred FT clf. Transl. 44.6 24.2% FT clf. Natural 41.5 30.7% Table 3: Fluency side-by-side human evaluation for WMT English→French newstest2014/full (Table 2a). We evaluate only the source-original half of the test set because it corresponds to our goal of original→original translation. Despite a BLEU drop, humans rate the natural decode on average as more fluent than both the bitext model output and the same model with the translationese decode. (RTT) classifier performs slightly worse than the baseline. However, the model trained with data tagged by the forward translation (FT) classifier is able to achieve an improvement of 0.5 BLEU on both halves of the test set when biased toward translationese on the source-original half and original text on the target-original half. 
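The decode-time control used in Table 2 amounts to adding or withholding the tag on the test source. The sketch below assumes the same hypothetical tag string as above and a hypothetical `translate` function wrapping the trained NMT model.

```python
NATURAL_TAG = "<trg_orig>"  # assumed tag string; must match the one used during training


def prepare_sources(sources, decode_as_natural):
    """Return inference inputs that bias a tagged model toward original-style
    output (tag present) or translationese output (tag absent)."""
    if decode_as_natural:
        return [f"{NATURAL_TAG} {s}" for s in sources]
    return list(sources)


# Usage sketch:
# natural_hyps = translate(prepare_sources(test_sources, decode_as_natural=True))
# transl_hyps  = translate(prepare_sources(test_sources, decode_as_natural=False))
```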
This, coupled with the observation that the BLEU score on the source-original half sharply drops when adding the tag, indicates that the two halves of the test set represent quite different tasks, and that the model has learned to associate the tag with some aspects specific to generating original text as opposed to translationese. However, we were not able to replicate this positive result on the English→German language pair (Table 2b). Interestingly, in this scenario the relative ordering of the FT and RTT models is reversed, with the German RTT-trained model outperforming the FT-trained one. This is also interesting because the German FT classifier achieved a higher F1 score than the French one, indicating that a classifier’s performance alone is not a sufficient indicator of its effect on translation performance. One possible explanation for the negative result is that the English→German bitext only contains 5M pairs, as opposed to the 41M for English→French, so splitting the data into two portions could make it difficult to learn both portions’ output distributions properly. 4.3 Human Evaluation Experiments In the previous subsection, we saw that BLEU for the source-original half of the test set went down when the model trained with FT classifications (FT clf.) was decoded it as if it were target-original (Table 2a). Prior work has shown that BLEU has a low correlation with human judgments when the reference contains translationese but the system output is biased toward original/natural text (Freitag et al., 2019). This is the very situation we find ourselves in now. Consequently, we run a human evaluation to see if the output truly is more natural and thereby preferred by human raters, despite the loss in BLEU. We run both a fluency and an adequacy evaluation for English→French to compare the quality of this system when decoding as if source-original vs. target-original. We also compare the system with the Untagged baseline. All evaluations are conducted with bilingual speakers whose native language is French, and each is rated by 3 different raters, with the average taken as the final score. Our two evaluations are as follows: • Adequacy: Raters were shown only the source sentence and the model output. Each output was scored on a 6-point scale. • Fluency: Raters saw two target sentences (two models’ outputs) without the source sentence, and were asked to select which was more fluent, or whether they were equally good. Fluency human evaluation results are shown in Table 3. We measured inter-rater agreement using Fleiss’ Kappa (Fleiss, 1971), which attains a maximum value of 1 when raters always agree. This value was 0.24 for the comparison with the untagged baseline, and 0.16 for the comparison with the translationese decodes. The agreement levels are fairly low, indicating a large amount of subjectivity for this task. However, raters on average still indicated a preference for the FT clf. model’s natural decodes. This provides evidence that they are more fluent than both the translationese decodes from the same model and the baseline untagged model, despite the drop in BLEU compared to each. Adequacy human ratings are summarised in Table 4. Both decodes from the FT clf. model scored significantly better than the baseline. This is especially true of the natural decodes, demonstrating that the model does not suffer a loss in adequacy by generating more fluent output, and actually sees a significant gain. 
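The inter-rater agreement statistic used for the fluency evaluation can be computed with statsmodels; the sketch below uses an assumed encoding of the three rating categories and toy data, purely to show the mechanics.

```python
import numpy as np
from statsmodels.stats import inter_rater as irr

# Each row is one item, each column one of the 3 raters. Assumed encoding:
# 0 = output A more fluent, 1 = output B more fluent, 2 = equally good.
ratings = np.array([
    [0, 0, 2],
    [1, 1, 1],
    [0, 2, 2],
    [1, 0, 1],
])

table, _ = irr.aggregate_raters(ratings)          # items x categories count table
kappa = irr.fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.2f}")
```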
We hypothesize that splitting the data as we did here allowed the model to learn a sharper distribution for both portions, thereby increasing the quality of both decode types. Some additional evidence for this is the fact that the FT clf. model's training loss was consistently lower than that of the baseline.

Test set → Src-Orig
Tagging    Decode    BLEU   Adequacy
Untagged   -         43.9   4.51
FT clf.    Transl.   44.6   4.67*
FT clf.    Natural   41.5   4.72**

Table 4: Human evaluation of adequacy for WMT English→French on the source-original half of newstest2014/full. Humans rated each output separately on a 6-point scale. As with fluency (Table 3), the natural decode scores the best, despite a BLEU loss. The single and double asterisks indicate that the adequacy value is significantly greater than the first row's value at significance level α = 0.05 and α = 0.01, respectively, according to a one-tailed paired t-test. The difference between the second and third rows was not significant at α = 0.1.

5 Supplemental Experiments

5.1 Measuring Translationese

Translationese tends to be simpler, more standardised and more explicit (Baker et al., 1993) compared to original text and can retain typical characteristics of the source language (Toury, 2012). Toral (2019) proposed metrics attempting to quantify the degree of translationese present in a translation. Following their work, we quantify lexical simplicity with two metrics: lexical variety and lexical density. We also calculate the length variety between the source sentence and the generated translations to measure interference from the source.

5.1.1 Lexical Variety

An output is simpler when it uses a lower number of unique tokens/words. By generating output closer to original target text, our hope is to increase lexical variety. Lexical variety is calculated as the type-token ratio (TTR):

\text{TTR} = \frac{\text{number of types}}{\text{number of tokens}}    (1)

5.1.2 Lexical Density

Scarpa (2006) found that translationese tends to be lexically simpler and have a lower percentage of content words (adverbs, adjectives, nouns and verbs) than original written text. Lexical density is calculated as follows:

\text{lex density} = \frac{\text{number of content words}}{\text{number of total words}}    (2)

5.1.3 Length Variety

Both MT and humans tend to avoid restructuring the source sentence and stick to sentence structures popular in the source language. This results in a translation with similar length to that of the source sentence. By measuring the length variety, we measure interference in the translation because its length is guided by the source sentence's structure. We compute the normalized absolute length difference at the sentence level and average the scores over the test set of source-target pairs (x, y):

\text{length variety} = \frac{\left| |x| - |y| \right|}{|x|}    (3)

5.1.4 Results

Results for all three different translationese measurements are shown in Table 5.

Test set → Src-Orig
Tagging    Decode    Lex. Var.   Lex. Density   Len. Var.
Untagged   -         0.258       0.393          0.246
FT clf.    Transl.   0.255       0.396          0.264
FT clf.    Natural   0.260       0.397          0.245

Table 5: Measuring the degree of translationese for WMT English→French newstest2014/full on the source-original half. Higher lexical variety, lexical density, and length variety indicate less translationese output.

Lexical Variety: Using the tag to decode as natural text (i.e. more like original target text) increases lexical variety. This is expected as original sentences tend to use a larger vocabulary.

Lexical Density: We also increase lexical density when decoding as natural text.
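As an aside, the three measures defined in Section 5.1 are straightforward to compute; a minimal sketch follows. The paper does not name a POS tagger, so spaCy's French pipeline is used here as one possible choice, and whitespace tokens approximate the paper's tokenization.

```python
import spacy

nlp = spacy.load("fr_core_news_sm")  # assumed tagger for the target language (French)
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}  # content-word classes from Section 5.1.2


def lexical_variety(outputs):
    """Eq. (1): type-token ratio over the whole output corpus."""
    tokens = [tok for sent in outputs for tok in sent.split()]
    return len(set(tokens)) / len(tokens)


def lexical_density(outputs):
    """Eq. (2): fraction of content words among all words."""
    pos_tags = [tok.pos_ for doc in nlp.pipe(outputs) for tok in doc]
    return sum(tag in CONTENT_POS for tag in pos_tags) / len(pos_tags)


def length_variety(sources, outputs):
    """Eq. (3): normalized absolute length difference, averaged over the test set."""
    diffs = [abs(len(s.split()) - len(o.split())) / len(s.split())
             for s, o in zip(sources, outputs)]
    return sum(diffs) / len(diffs)
```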
In other words, the model has a higher percentage of content words in its output, which is an indication that it is more like original target-language text.

Length Variety: Unlike the previous two metrics, decoding as natural text does not lead to a more "natural" (i.e. larger) average length variety. One reason may be related to the fact that this is the only metric that also depends on the source sentence: since all of our training pairs feature translationese on either the source or target side, both the tagged and untagged training pairs will feature similar sentence structures, so the model never fully learns to produce different structures. This further illustrates the problem of the lack of original→original training data noted in the introduction.

5.2 Tagging using Translationese Heuristics

Rather than tagging training data with a trained classifier, as explored in the previous sections, it might be possible to tag using much simpler heuristics, and achieve a similar effect. We explore two options here.

5.2.1 Length Ratio Tagging

Here, we partition the training pairs (x, y) according to a simple length ratio |x|/|y|. We use a threshold \hat{\rho}_{\text{length}} empirically calculated from two large monolingual corpora, M_x and M_y:

\hat{\rho}_{\text{length}} = \frac{\frac{1}{|M_x|}\sum_{x_i \in M_x} |x_i|}{\frac{1}{|M_y|}\sum_{y_i \in M_y} |y_i|}    (4)

For English→French, we found \hat{\rho}_{\text{length}} = 0.8643, meaning that original French sentences tend to have more tokens than English. We tag all pairs with length ratio greater than \hat{\rho}_{\text{length}} (49.8% of the training bitext). Based on the discussion in Section 5.1.3, we expect that |x|/|y| ≈ 1.0 indicates translationese, so in this case the tag should mean "produce translationese" instead of "produce original text."

5.2.2 Lexical Density Tagging

We tag examples with a target-side lexical density of greater than 0.5, which means that the target is more likely to be original than translationese. Please refer to Section 5.1.2 for an explanation of this metric.

5.2.3 Results

Table 6 shows the results for this experiment, compared to the untagged baseline and the classifier-tagged model from Table 2a. This table specifically looks at the effect of controlling whether the output should feature more or less translationese on each subset of the test set. We see that the lexical density tagging approach yields expected results, in that the tag can be used to effectively increase BLEU on the target-original portion of the test set. The length-ratio tagging, however, has the opposite effect: producing shorter outputs ("decode as if translationese") produces higher target-original BLEU and lower source-original BLEU. We speculate that this data partition has accidentally picked up on some artifact of the data. Two interesting observations from Table 6 are that 1) both heuristic tagging methods perform much more poorly than the classifier tagging method on both test set halves, and 2) all varieties of tagging produce large performance changes (up to -7.2 BLEU). This second observation highlights that tagging can be powerful – and dangerous when it does not correspond well with the desired feature.

5.3 Back-Translation Experiments

We also investigated whether using a classifier to tag training data improved model performance in the presence of back-translated (BT) data. Caswell et al. (2019) introduced tagged back-translation (TBT), where all back-translated pairs are tagged and no bitext pairs are. They experimented with decoding the model with a tag ("as-if-backtranslated") but found it harmed BLEU score.
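A minimal sketch of the length-ratio heuristic of Section 5.2.1 is shown below; as elsewhere, whitespace tokens approximate the paper's tokenization, and the tag string is an assumption.

```python
def length_ratio_threshold(mono_src, mono_tgt):
    """Eq. (4): ratio of mean source length to mean target length over two monolingual corpora."""
    mean_src = sum(len(s.split()) for s in mono_src) / len(mono_src)
    mean_tgt = sum(len(t.split()) for t in mono_tgt) / len(mono_tgt)
    return mean_src / mean_tgt


def tag_by_length_ratio(pairs, rho_hat, tag="<trg_orig>"):
    """Tag pairs whose |x|/|y| exceeds the threshold (Section 5.2.1)."""
    tagged = []
    for src, tgt in pairs:
        ratio = len(src.split()) / max(len(tgt.split()), 1)
        tagged.append((f"{tag} {src}", tgt) if ratio > rho_hat else (src, tgt))
    return tagged
```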
However, in our early experiments we discovered that doing this actually improved the model’s performance on the target-original portion of the test set, while harming it on the source-original half. Thus, we frame TBT as a heuristic method for identifying target-original pairs: the monolingual data used for the back-translations is assumed to be original, and the target side of the bitext is assumed to be translated. We wish to know whether we can find a better tagging scheme for the combined BT+bitext data, based on a classifier or some other heuristic. Results for English→French models trained with BT data are presented in Table 7a. While combining the bitext classified by the FT classifier with all-tagged BT data yields a minor gain of 0.2 BLEU over the TBT baseline of Caswell et al. (2019), the other methods do not beat the baseline. This indicates that assuming all of the target monolingual data to be original is not as harmful as the error introduced by the classifiers. English→German results are presented in Table 7b. Combining the bitext classified by the RTT classifier with all-tagged BT data matched the performance of the TBT baseline, but none of the models outperformed it. This is expected, given the poor performance of the bitext-only models for this language pair. 7744 Test set → Src-Orig Src-Orig Trg-Orig Trg-Orig Decode as if → Natural Transl. Transl. Natural ∴Domain match? →     Train data tagging ↓ Untagged 39.5 39.5 44.5 44.5 FT clf. 37.7 40.0 42.5 45.0 Length Variety 38.2 36.1 43.6 36.2 Lex. Density 36.9 36.7 41.2 43.4 Table 6: Comparing heuristic- and classifier-based tagging. BLEU scores are averaged for newstest2014/full and newstest2015 English→French. The trained classifier outperforms both heuristics, and length-ratio tagging has the reverse effect from what we expect. Test set → Src-Orig Trg-Orig Combined Decode as if → Natural Transl. Transl. Natural Both ∴Domain match? →      Bitext tagging ↓ BT tagging ↓ a. English→French: Avg. newstest20{14/full, 15} Untagged All Tagged 38.4 40.8 47.5 49.8 45.5 FT clf. All Tagged 38.8 40.8 47.3 50.3 45.7 FT clf. FT clf. 38.2 40.9 45.5 49.0 45.2 RTT clf. RTT clf. 38.3 40.1 49.4 49.5 45.1 b. English→German: Avg. newstest20{14/full,16,17,18} Untagged All Tagged 33.5 37.3 36.7 37.1 37.6 FT clf. All Tagged 33.4 37.2 36.2 37.2 37.5 RTT clf. All Tagged 33.6 37.4 36.6 37.1 37.6 RTT clf. RTT clf. 31.6 35.7 36.8 36.7 36.4 FT clf. FT clf. 30.5 35.5 36.5 37.0 36.5 Table 7: Average BLEU scores for models trained on (a) WMT 2018 English→French bitext plus 39M backtranslated monolingual sentences, and (b) WMT 2018 English→German bitext plus 24M back-translated monolingual sentences. As before, we tag by heuristics and/or classifier predictions on the target (German) side. 6 Example Output In Table 8, we show example outputs for WMT English→French comparing the Untagged baseline with the FT clf. natural decodes. In the first example, avec suffisamment d’art is an incorrect word-for-word translation, as the French word art cannot be used in that context. Here the word habilement, which is close to “skilfully” in English, sounds more natural. In the second example, libre d’impˆot is the literal translation of “tax-free”, but French documents rarely use it, they prefer pas imposable, meaning “not taxable”. 
7 Related Work 7.1 Translationese The effects of translationese on MT training and evaluation have been investigated by many prior authors (Kurokawa et al., 2009; Lembersky et al., 2012; Toral et al., 2018; Zhang and Toral, 2019; Graham et al., 2019; Freitag et al., 2019; Edunov et al., 2019; Freitag et al., 2020). Training classifiers to detect translationese has also been done (Kurokawa et al., 2009; Koppel and Ordan, 2011). Similarly to this work, Kurokawa et al. (2009) used their classifier to preprocess MT training data; however, they completely removed target-original pairs. In contrast, Lembersky et al. (2012) used both types of data (without explicitly distinguishing them with a classifier), and used entropy-based measures to cause their phrase-based system to favor phrase table entries with target phrases that are more similar to a corpus of translationese than original text. In this work, we combine aspects from each of these: we train a classifier to partition the training data, and use both subsets to train a single model with a mechanism allowing control over the degree of translationese to produce in the output. We also show with human evaluations that source-original test sentence pairs result in BLEU scores that do not correlate well with translation quality when evaluating models trained to produce more original output. 7.2 Training Data Tagging for NMT In addition to the methods in Caswell et al. (2019), tagging training data and using the tags to control output is a technique that has been growing in popularity. Tags on the source sentence have 7745 Source Sorry she didn’t phrase it artfully enough for you. Untagged D´esol´ee, elle ne l’a pas formul´e avec suffisamment d’art pour vous. FT clf. D´esol´e elle ne l’a pas formul´e assez habilement pour vous. Source Your first 10,000 is tax free. Untagged Votre premi`ere tranche de 10 000 est libre d’impˆot. FT clf. La premi`ere tranche de 10 000 n’est pas imposable. Table 8: Example English→French output comparing the untagged baseline with the FT clf. natural decode. been used to indicate target language in multilingual models (Johnson et al., 2016), formality level in English→Japanese (Yamagishi et al., 2016), politeness in English→German (Sennrich et al., 2016a), gender from a gender-neutral language (Kuczmarski and Johnson, 2018), as well as to produce domain-targeted translation (Kobus et al., 2016). Shu et al. (2019) use tags at training and inference time to increase the syntactic diversity of their output while maintaining translation quality; similarly, Agarwal and Carpuat (2019) and Marchisio et al. (2019) use tags to control the reading level (e.g. simplicity/complexity) of the output. Overall, tagging can be seen as domain adaptation (Freitag and Al-Onaizan, 2016; Luong and Manning, 2015). 8 Conclusion We have demonstrated that translationese and original text can be treated as separate target languages in a “multilingual” model, distinguished by a classifier trained using only monolingual and synthetic data. The resulting model has improved performance in the ideal, zero-shot scenario of original→original translation, as measured by human evaluation of adequacy and fluency. However, this is associated with a drop in BLEU score, indicating that better automatic evaluation is needed. Acknowledgments We are grateful to the anonymous reviewers for suggesting useful additions. References Swetha Agarwal and Marine Carpuat. 2019. Controlling Text Complexity in Neural Machine Translation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Mona Baker, Gill Francis, and Elena Tognini-Bonelli. 1993. Corpus Linguistics and Translation Studies: Implications and Applications, chapter 2. John Benjamins Publishing Company, Netherlands. Nikhil Buduma and Nicholas Locascio. 2017. Fundamentals of deep learning: Designing nextgeneration machine intelligence algorithms. O’Reilly Media, Inc. Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63, Florence, Italy. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Marc’Aurelio Ranzato, and Michael Auli. 2019. On the evaluation of machine translation systems trained with back-translation. arXiv preprint arXiv:1908.05204. J.L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382. Markus Freitag and Yaser Al-Onaizan. 2016. Fast Domain Adaptation for Neural Machine Translation. CoRR, abs/1612.06897. Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at Scale and Its Implications on MT Evaluation Biases. In Proceedings of the Fourth Conference on Machine Translation, pages 34–44, Florence, Italy. Association for Computational Linguistics. Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be Guilty but References are not Innocent. Martin Gellerstam. 1986. Translationese in swedish novels translated from english. In Lars Wollin and Hans Lindquist, editors, Translation Studies in Scandinavia, page 8895. CWK Gleerup. Martin Gellerstam. 1996. Translations as a source for cross-linguistic studies. Lund Studies in English, 88:53–62. Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine translation evaluation. CoRR, abs/1906.09833. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi’egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. CoRR, abs/1611.04558. Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74–83, Melbourne, Australia. Association for Computational Linguistics. 7746 Catherine Kobus, Josep Maria Crego, and Jean Senellart. 2016. Domain Control for Neural Machine Translation. CoRR, abs/1612.06140. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 1318–1326, Stroudsburg, PA, USA. Association for Computational Linguistics. James Kuczmarski and Melvin Johnson. 2018. Genderaware natural language translation. Technical Disclosure Commons. David Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic detection of translated text and its impact on machine translation. In Proceedings of MT-Summit XII, pages 81–88. Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2012. Adapting translation models to translationese improves SMT. 
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’12, pages 255– 265, Stroudsburg, PA, USA. Association for Computational Linguistics. Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation, pages 76–79. Kelly Marchisio, Jialiang Guo, Cheng-I Lai, and Philipp Koehn. 2019. Controlling the reading level of machine translation output. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 193–203, Dublin, Ireland. European Association for Machine Translation. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Matt Post. 2018. A Call for Clarity in Reporting Bleu Scores. arXiv preprint arXiv:1804.08771. Federica Scarpa. 2006. Corpus-based qualityassessment of specialist translation: A study using parallel and comparable corpora in English and Italian. Insights into specialized translation–linguistics insights. Bern: Peter Lang, pages 155–172. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35–40, San Diego, California. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara N. Sainath, and Yuan Cao et al. 2019. Lingvo: a Modular and Scalable Framework for Sequence-toSequence Modeling. CoRR, abs/1902.08295. Raphael Shu, Hideki Nakayama, and Kyunghyun Cho. 2019. Generating Diverse Translations with Sentence Codes. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics. Antonio Toral. 2019. Post-editese: an exacerbated translationese. CoRR, abs/1907.00900. Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? Reassessing claims of human parity in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 113–123, Belgium, Brussels. Association for Computational Linguistics. Gideon Toury. 2012. Descriptive translation studies and beyond: Revised edition, volume 100. John Benjamins Publishing. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, pages 5998–6008. Hayahide Yamagishi, Shin Kanouchi, Takayuki Sato, and Mamoru Komachi. 2016. Controlling the voice of a sentence in Japanese-to-English neural machine translation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016), pages 203–210. Mike Zhang and Antonio Toral. 2019. The effect of translationese in machine translation test sets. 
CoRR, abs/1906.08069.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747–7763 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7747 Unsupervised Domain Clusters in Pretrained Language Models Roee Aharoni1 & Yoav Goldberg1,2 1 Computer Science Department, Bar Ilan University 2 Allen Institute for Artificial Intelligence [email protected] Abstract The notion of “in-domain data” in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are many times unavailable, making it challenging to build domainspecific systems. We show that massive pretrained language models implicitly learn sentence representations that cluster by domains without supervision – suggesting a simple datadriven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and by precision and recall of sentence selection with respect to an oracle. 1 Introduction It is common knowledge in modern NLP that using large amounts of high-quality training data is a key aspect in building successful machine-learning based systems. For this reason, a major challenge when building such systems is obtaining data in the domain of interest. But what defines a domain? Natural language varies greatly across topics, styles, levels of formality, genres and many other linguistic nuances (van der Wees et al., 2015; van der Wees, 2017; Niu et al., 2017). This overwhelming diversity of language makes it hard to find the right data for the task, as it is nearly impossible to well-define the exact requirements from such data with respect to all the aforementioned aspects. On top of that, domain labels are usually unavailable – e.g. in large-scale web-crawled data like Common Crawl1 which was recently used to 1https://commoncrawl.org/ it koran subtitles medical law bert-base-uncased Figure 1: A 2D visualization of average-pooled BERT hidden-state sentence representations using PCA. The colors represent the domain for each sentence. train state-of-the-art pretrained language models for various tasks (Raffel et al., 2019). Domain data selection is the task of selecting the most appropriate data for a domain from a large corpus given a smaller set of in-domain data (Moore and Lewis, 2010; Axelrod et al., 2011; Duh et al., 2013; Silva et al., 2018). In this work, we propose to use the recent, highly successful self-supervised pre-trained language models, e.g. Devlin et al. (2019); Liu et al. (2019) for domain data selection. As pretrained LMs demonstrate state-of-theart performance across many NLP tasks after being trained on massive amounts of data, we hypothesize that the robust representations they learn can be useful for mapping sentences to domains in an unsupervised, data-driven approach. We show that these models indeed learn to cluster sentence representations to domains without further supervision (e.g. Figure 1), and quantify this phenomenon by fitting Gaussian Mixture Models (GMMs) to the learned representations and measuring the purity of the resulting unsupervised clustering. 
We then pro7748 pose methods to leverage these emergent domain clusters for domain data selection in two ways: • Via distance-based retrieval in the sentence embedding space induced by the pretrained language model. • By fine-tuning the pretrained language model for binary classification, where positive examples are from the domain of interest. Our methods enable to select relevant data for the task while requiring only a small set of monolingual in-domain data. As they are based solely on the representations learned by self-supervised LMs, they do not require additional domain labels which are usually vague and over-simplify the notion of domain in textual data. We evaluate our method on data selection for neural machine translation (NMT) using the multi-domain GermanEnglish parallel corpus composed by Koehn and Knowles (2017). Our data selection methods enable to train NMT models that outperform those trained using the well-established cross-entropy difference method of Moore and Lewis (2010) across five diverse domains, achieving a recall of more than 95% in all cases with respect to an oracle that selects the “true” in-domain data. Our contributions in this work are as follows. First, we show that pre-trained language models are highly capable of clustering textual data to domains with high accuracy in a purely unsupervised manner. Second, we propose methods to select in-domain data based on this property using vectorspace retrieval and positive-unlabeled fine-tuning of pretrained language models for binary classification. Third, we show the applicability of our proposed data selection methods on a popular benchmark for domain adaptation in machine translation. An additional contribution is a new, improved data split we create for this benchmark, as we point on issues with previous splits used in the literature. The code and data for this work is publicly available.2 We hope this work will encourage more research on understanding the data landscape in NLP, enabling to “find the right data for the task” in the age of massive models and diverse data sources. 2https://github.com/roeeaharoni/ unsupervised-domain-clusters 2 Emerging Domain Clusters in Pretrained Language Models 2.1 Motivation The proliferation of massive pretrained neural language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019) has enabled great progress on many NLP benchmarks (Wang et al., 2018, 2019a). Larger and larger models trained on billions of tokens of raw text are released in an ever-increasing pace (Raffel et al., 2019), enabling the NLP community to fine-tune them for the task of interest. While many works tried to “probe” those models for the morphological, syntactic and semantic information they capture (Tenney et al., 2019; Goldberg, 2019; Clark et al., 2019), an important aspect of language remained overlooked in this context – the domain the data comes from, often referred to as the “data distribution”. The definition of domain is many times vague and over-simplistic (e.g. “medical text” may be used for biomedical research papers and for clinical conversations between doctors and patients, although the two vary greatly in topic, formality etc.). A common definition treats a domain as a data source: “a domain is defined by a corpus from a specific source, and may differ from other domains in topic, genre, style, level of formality, etc.” (Koehn and Knowles, 2017). 
We claim that a more data-driven definition should take its place, as different data sources may have sentences with similar traits and vice versa - a single massive web-crawled corpus contains texts in numerous styles, topics and registers. Our analysis in Section 2 shows examples for such cases, e.g. a sentence discussing "Viruses and virus-like organisms" in a legal corpus. We hypothesize that massive pretrained LMs can learn representations that cluster to domains, as texts from similar domains will appear in similar contexts. We test this hypothesis across several large, publicly-available pretrained LMs; we explore both masked-language-models (MLMs) and auto-regressive LMs.

2.2 Method

We encode multi-domain data at the sentence level into vector representations. We then cluster these vector representations for each model using a Gaussian Mixture Model (GMM) with k pre-defined clusters. We chose GMM as our clustering approach as it allows soft assignments (vs. hard assignments as in e.g. K-means) which we think fits the task better (as a sentence can be seen as drawn from a mixture of several domains).3 In all cases, to create a sentence representation we perform average pooling of the last hidden state (before the softmax layer) for each token in the sentence.4 To accelerate the clustering process and enable visualization we also experiment with performing dimensionality reduction with PCA over the sentence vectors before clustering them. We experiment with k in 5, 10 and 15 to test how adding flexibility would improve the domain clustering accuracy.

2.3 Models and Baselines

For MLM-based models we use BERT (Devlin et al., 2019), DistilBERT (Sanh et al., 2019) and RoBERTa (Liu et al., 2019) (in both the base and large versions). For autoregressive models we use GPT-2 (Radford et al., 2018) and XLNet (Yang et al., 2019). In all cases we use the implementations from the HuggingFace Transformers toolkit (Wolf et al., 2019). We also evaluated three additional, simpler baselines. The first is using representations from word2vec (Mikolov et al., 2013), where we average-pooled the word vectors for the tokens that were present in the model vocabulary. The second is using Latent Dirichlet Allocation (LDA, Blei et al., 2003), which is a classic approach to unsupervised clustering of text.5 We also report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters.

2.4 Evaluation

To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by Koehn and Knowles (2017) which includes textual data in five diverse domains: subtitles6, medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (manuals and localization files of open-source software). This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data. See more details on the dataset in Section 3.1. We used 2000 distinct sentences from each domain. To evaluate whether the resulting clusters indeed capture the domains the data was drawn from we measure the clustering purity, which is a well-known metric for evaluating clustering (Manning et al., 2008). To measure the clustering purity, we assign each unsupervised cluster with the most common "true" domain in the sentences assigned to that cluster, and then compute the accuracy according to this majority-based cluster-domain assignment (note that in this case several unsupervised clusters can be assigned to the same domain). In cases where randomness is involved we run each experiment five times with different initializations and report the mean and variance of the purity metric for each model.

                  k=5             k=10            k=15
Random            15.08 (±0.0)    16.77 (±0.0)    17.78 (±0.0)
LDA               24.31 (±0.99)   26.73 (±2.19)   30.79 (±2.97)

                  with PCA (n=50)                                 without PCA
                  k=5             k=10            k=15            k=5     k=10    k=15
word2vec          53.65 (±0.79)   68.14 (±2.58)   73.44 (±0.68)   45.93   65.80   76.26
BERT-base         87.66 (±0.24)   88.02 (±1.10)   88.37 (±0.66)   85.74   85.08   86.37
BERT-large        85.64 (±6.13)   87.61 (±0.26)   89.07 (±0.53)   68.56   86.53   86.99
DistilBERT        83.68 (±7.14)   86.31 (±0.86)   87.53 (±0.85)   79.00   86.42   88.14
RoBERTa-base      79.05 (±0.10)   86.39 (±0.90)   86.51 (±0.28)   70.21   80.35   81.49
RoBERTa-large     80.61 (±0.33)   89.04 (±0.15)   89.94 (±0.23)   69.88   81.07   85.91
GPT-2             70.30 (±0.05)   84.76 (±0.30)   82.56 (±1.29)   37.82   39.02   41.45
XLNet             55.72 (±0.69)   68.17 (±3.93)   72.65 (±1.92)   30.36   32.96   48.55

Table 1: Unsupervised domain clustering as measured by purity for the different models. Best results are marked in bold for each setting.

3 See further discussion comparing GMMs and K-means in Daumé (2009).
4 Using the penultimate layer or others may result in better performance; we leave this for future work.
5 We used the LDA implementation provided in the Gensim toolkit: https://radimrehurek.com/gensim/
6 From http://www.opensubtitles.org/
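The pipeline of Sections 2.2-2.4 can be sketched with standard libraries; the snippet below uses HuggingFace Transformers for the average-pooled sentence vectors and scikit-learn for PCA and the GMM. The padding-masked mean pooling and the commented variable names (`sentences`, `domains`) are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()


def embed(sentences, batch_size=32):
    """Average-pool the last hidden state over the tokens of each sentence."""
    vecs = []
    for i in range(0, len(sentences), batch_size):
        enc = tokenizer(sentences[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state      # (batch, tokens, dim)
        mask = enc["attention_mask"].unsqueeze(-1)       # ignore padding (our choice)
        vecs.append(((hidden * mask).sum(1) / mask.sum(1)).numpy())
    return np.concatenate(vecs)


def purity(cluster_ids, gold_domains):
    """Majority-based cluster-domain assignment accuracy (Section 2.4)."""
    gold = np.asarray(gold_domains)
    correct = 0
    for c in np.unique(cluster_ids):
        _, counts = np.unique(gold[cluster_ids == c], return_counts=True)
        correct += counts.max()
    return correct / len(gold)


# sentences: list of str; domains: gold domain label per sentence (assumed inputs).
# X = PCA(n_components=50).fit_transform(embed(sentences))
# clusters = GaussianMixture(n_components=5).fit_predict(X)
# print(purity(clusters, domains))
```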
both cases – suggesting that these models encode the information very differently. 2.6 Analysis As can be seen in Figure 3, in some areas the domains are somewhat overlapping in the embedding space, which may lead to outlier cases where examples from one domain are assigned to a cluster of a another domain. We plot a confusion matrix (Figure 2) to analyze this further based on the clustering with BERT-base and k=5. We first note that the outlier sentences are much shorter than the average sentence length in the corpus (11.62 tokens on average for outliers vs. 20.5 tokens on average in general). This makes sense as shorter sentences contain less information, making it harder to assign them to an appropriate cluster. Table 2 shows examples of outlier sentences, assigned to clusters of domains different from their originating domain. We can see that in many cases the assignments are sensible – for example for sentences originating from the subtitles corpus, a sentence that mentions “great priest” is assigned to the Koran cluster, a sentence that mentions “The International Criminal Court in The Hague” is assigned to the Law cluster, a sentence that mentions “the virus” is assigned to the Medical cluster and so on. This strengthens our claim that defining domains based on the corpus they originated from is over-simplistic, and using a data-driven approach may enable to find better domain assignments across different corpora. The domain that attracted the largest number of outliers is the IT domain cluster, with 597 sentences assigned to it from other domains. Looking it koran subtitles medical law bert-base-uncased Figure 3: A 2D visualization of the unsupervised GMM clustering for the same sentences as in Figure 1. 7751 Subtitles assigned to Koran Subtitles assigned to Medical I am Spa’am, high priest of the boars. Oxygen supply at 50%. Joseph, go in peace, and the Lord be with you. Or it can help her walk again if the virus is kept in check with this. Subtitles assigned to IT Subtitles assigned to Law Push it up to the front of the screen. Statutes, transcripts, redacted immunity agreements. Polyalloy requires programming to take permanent The Security Council therefore must press for his immediate form. referral to the International Criminal Court in The Hague. Law assigned to Medical Law assigned to IT - Viruses and virus-like organisms ”INFORMATION SOCIETY STATISTICS where the glucose content is equal to or less than This document must be attached to the certificate and field the fructose content. with it, except where there is a computerised checking system. Medical assigned to Law Medical assigned to IT This will be introduced by a Regulation adopted by the An updated and improved version of the CD-ROM was issued European Commission. to all subscribers during the first half of the year. The marketing authorisation was renewed on 22 May - All tables will be based on generic and not product-specific 2002 and 22 May 2007. data. IT assigned to Medical IT assigned to Subtitles R65: Harmful: may cause lung damage if swallowed At the end we say good bye. Automatic Red-Eye Removal What would you like to do for your next shot? Table 2: Sentences from one domain which were assigned to another domain by the BERT-based clustering, k=5. more closely we find that more than half of these sentences (340 out of 597) included numbers (e.g. “34% 25% 34%” (from medical), “(b) reference number 20 is deleted;” (from law), “(Command of Prostration # 1)” (from Koran) or “The message, R2.” (from subtitles)). 
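The outliers discussed above can be extracted directly from the cluster assignments using the same majority mapping that underlies the purity metric of Section 2.4. A minimal sketch in Python (the function and variable names are ours, not the exact implementation used):

from collections import Counter

def find_outliers(sentences, true_domains, cluster_ids):
    # Assign each unsupervised cluster its most common ("majority") gold domain,
    # exactly as in the purity computation; several clusters may map to the
    # same domain.
    majority = {}
    for c in set(cluster_ids):
        in_cluster = [d for d, cid in zip(true_domains, cluster_ids) if cid == c]
        majority[c] = Counter(in_cluster).most_common(1)[0][0]
    # Outliers: sentences whose originating domain differs from the majority
    # domain of the cluster they were assigned to.
    return [(s, d, majority[c])
            for s, d, c in zip(sentences, true_domains, cluster_ids)
            if majority[c] != d]

# e.g. how many outliers contain numbers, as in the analysis above:
# outliers = find_outliers(sentences, true_domains, cluster_ids)
# with_numbers = sum(any(ch.isdigit() for ch in s) for s, _, _ in outliers)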
As numbers appear in many different contexts, they may be harder to assign to a specific domain by the context-aware language models in such short sentences. The second largest attractor of outliers is the Subtitles cluster, with 372 sentences assigned to it from other domains. We find that most of these sentences contain personal pronouns or question marks (228 out of 372, 61.2%) while the ratio of such sentences in the entire corpus is only 40%. Examples include “Why did you choose the name & amarok;?” (from IT), or “What is Avonex?” (from Medical). This may be expected as the subtitles corpus mainly includes transcriptions of spoken, conversational language, and “conversation tends to have more verbs, more personal pronouns, and more questions” (Conrad and Biber, 2005). Another possible reason for the subtitles domain to attract outliers is the fact that this is the least-topical cluster: movies and TV series may discuss diverse topics, unlike medical, religious, legal and technical texts that may have a more cohesive topic. 3 Neural Machine Translation in a Multi-Domain Scenario As we showed that pre-trained language models are indeed very useful in clustering sentence representations by domains in an unsupervised manner, we now seek to harness this property for a downstream task – domain data selection for machine translation. Domain data selection is the task of selecting examples from a large corpus which are as close as possible to the domain of interest, given a smaller set of in-domain examples. The selected examples can be used to either (1) train a domainspecific model from scratch (Axelrod et al., 2011), (2) fine-tune a pre-trained general-domain model (Sajjad et al., 2017; Silva et al., 2018), or (3) prioritize data for annotation as in an Active-Learning framework, if only monolingual data is available (Haffari et al., 2009). To demonstrate the need for domain data selection and set the stage for our data selection experiments, we perform preliminary experiments with NMT in a multi-domain scenario. 3.1 Multi-Domain Dataset To simulate a diverse multi-domain setting we use the dataset proposed in Koehn and Knowles (2017), as it was recently adopted for domain adaptation research in NMT (Hu et al., 2019; M¨uller et al., 2019; Dou et al., 2019a,b). The dataset includes parallel text in German and English from five diverse domains (Medical, Law, Koran, IT, Subtitles; as discussed in Section 2), available via OPUS (Tiedemann, 2012; Aulamo and Tiedemann, 2019). In a preliminary analysis of the data we found that in both the original train/dev/test split by Koehn and Knowles (2017) and in the more recent split by M¨uller et al. (2019) there was overlap between the training data and the dev/test data.9 Fixing these issues is important, as it may affect the conclusions one draws from experiments with 9More details are available in the supplementary material. 7752 Original New Split Medical 1,104,752 248,099 Law 715,372 467,309 IT 378,477 222,927 Koran 533,128 17,982 Subtitles 22,508,639 14,458,058 Table 3: Number of training examples for each domain in the original split (M¨uller et al., 2019) and in our split. this dataset. For example, as overlapping development sets favor memorization of the training set, one may choose checkpoints and report results on over-fitting models. 
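A minimal sketch of the kind of train/dev overlap check behind this analysis (and Table 8), assuming plain-text files with one sentence per line and exact string matching on the English side; the paths and function name are ours:

def overlap_fraction(train_path, dev_path):
    # Fraction of dev sentences that also occur verbatim in the training data.
    with open(train_path, encoding="utf-8") as f:
        train_sentences = {line.strip() for line in f}
    with open(dev_path, encoding="utf-8") as f:
        dev_sentences = [line.strip() for line in f]
    return sum(s in train_sentences for s in dev_sentences) / len(dev_sentences)

# e.g. overlap_fraction("medical/train.en", "medical/dev.en")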
This is especially relevant with neural sequence-to-sequence models, as they are highly susceptible to memorization (Aharoni and Goldberg, 2018) and hallucination (Lee et al., 2018), as confirmed by M¨uller et al. (2019). To create a better experimental setting to test generalization within and across domains, we create a new data split where we ensure that no such overlap between the training, development and test sets occur. We started from the split of M¨uller et al. (2019) as it included newer versions of some of the datasets.10 Furthermore, we did not allow more than one translation of a given source or target sentence, as such cases were very frequent in the dataset and usually stand for duplicate sentence pairs (See Table 3). For example, applying this filtering reduced the size of the Koran corpus from 533,128 sentence pairs to only 17,982. Finally, following M¨uller et al. (2019) we cap the subtitles corpus to 500,000 sentence pairs as it is much larger than the rest. We make the new split publicly available and hope it will enable better future experimentation on this important subject.11 3.2 Cross-Domain Experiments Experimental Setup We follow Hu et al. (2019) and train domain-specific models for all domains. We then evaluate each model across the different domain test sets, enabling us to understand the effect of different domains on the downstream MT performance and to set up strong baselines for data selection experiments. We also train a generaldomain model using the available data from all domains, as it is also a common approach in multidomain scenarios (M¨uller et al., 2019). In all experiments we use a similar Transformer (Vaswani et al., 2017) model, and only control for the train10Their dataset is available in: https://github.com/ ZurichNLP/domain-robustness 11https://github.com/roeeaharoni/ unsupervised-domain-clusters Medical Law Koran IT Subtitles Medical 56.5 18.3 1.9 11.4 4.3 Law 21.7 59 2.7 13.1 5.4 Koran 0.1 0.2 15.9 0.2 0.5 IT 14.9 9.6 2.8 43 8.6 Subtitles 7.9 5.5 6.4 8.5 27.3 All 53.3 57.2 20.9 42.1 27.6 Table 4: SacreBLEU (Post, 2018) scores of our baseline systems on the test sets of the new data split. Each row represents the results from one model on each test set. The best result in each column is marked in bold. ing data. More details on the exact training and hyperparameter settings for the NMT models are available in the supplementary material. Results The results for the cross-domain evaluation are available in Table 4. In most cases, the best results for each domain are obtained by training on the in-domain data. Training on all the available data helped mostly for the Koran test set. This is expected as the training data for this domain is considerably smaller than the training data for rest of the domains (Table 3). We can also see that more data is not necessarily better (Gasc´o et al., 2012): while the subtitles corpus is the largest of all 5 and includes 500,000 sentence pairs, it is second to last in performance as measured by the average BLEU across all test sets. Cross-Domain BLEU vs. Cluster Proximity An interesting observation can be made with respect to the visual analysis of the domain clusters as depicted in Figure 3: as the Medical cluster (in Yellow), Law cluster (in Purple) and IT cluster (in Red) are close to each other in the embedding space, their cross-domain BLEU scores are also higher. 
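This proximity can be made concrete by comparing the domain centroids directly, which is also the basis of the quantitative analysis that follows; a minimal sketch, assuming a dict mapping each domain name to a matrix of its (BERT-based) dev-set sentence vectors — the names here are ours:

import numpy as np

def centroid_similarities(domain_vectors):
    # Cosine similarity between the centroids of every pair of domains.
    centroids = {d: v.mean(axis=0) for d, v in domain_vectors.items()}
    sims = {}
    for a, ca in centroids.items():
        for b, cb in centroids.items():
            sims[(a, b)] = float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb)))
    return sims

# Pearson correlation against cross-domain BLEU (e.g. the entries of Table 4):
# pairs = sorted(sims)
# r = np.corrcoef([sims[p] for p in pairs], [bleu[p] for p in pairs])[0, 1]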
For example, note how in the results for the Medical domain-specific model (first row in Table 4), the BLEU scores on the Law and IT test sets are much higher in comparison to those on the Koran and Subtitles test sets, which clusters are farther away in the visualized embedding space. Similarly, as the Subtitles cluster (Blue) is closer to the Koran cluster (Green), the highest cross-domain BLEU score on the Koran test set is from the Subtitles model. To further quantify this phenomenon, we plot and measure Pearson’s correlation between the cosine similarity of the centroids for the English BERT-based dev sentence representations for each domain pair, and the cross-domain BLEU score for this domain pair. This is shown in Figure 4. We can see the general trend where the closer the domain centroids are (with a similarity of 1 for training and evaluating on the same domain), the higher the cross-domain BLEU is between those domains, 7753 Figure 4: The cosine similarity between the centroids of the BERT representations for each domain pair vs. the corresponding cross-domain BLEU. resulting in a Pearson’s correlation of 0.81 (strong correlation). This suggests that such preliminary visual analysis can be a useful tool for understanding the relationship between diverse datasets, and motivates the use of pre-trained language model representations for domain data selection in MT. 4 Domain Data Selection with Pretrained Language Models As shown in the previous section, using the right data is critical for achieving good performance on an in-domain test set, and more data is not necessarily better. However, in real-world scenarios, the availability of data labeled by domain is limited, e.g. when working with large scale, web-crawled data. In this section we focus on a data-selection scenario where only a very small number of indomain sentences are used to select data from a larger unlabeled parallel corpus. An established method for data selection was proposed by Moore and Lewis (2010), which was also used in training the winning systems in WMT 2019 (Ng et al., 2019; Barrault et al., 2019). This method compares the cross-entropy, according to domain-specific and non-domain-specific language models, for each candidate sentence for selection. The sentences are then ranked by the cross-entropy difference, and only the top sentences are selected for training. While the method by Moore and Lewis (2010) is tried-and-true, it is based on simple n-gram language models which cannot generalize beyond the n-grams that are seen in the in-domain set. In addition, it is restricted to the in-domain and generaldomain datasets it is trained on, which are usually small. On the contrary, pre-trained language models are trained on massive amounts of text, and, as we showed through unsupervised clustering, learn representations with domain-relevant information. In the following sections, we investigate whether this property of pretrained language models makes them useful for domain data selection. 4.1 Methods We propose two methods for domain data selection with pretrained language models. Domain-Cosine In this method we first compute a query vector, which is the element-wise average over the vector representations of the sentences in the small in-domain set. We use the same sentencelevel average-pooling approach as described in Section 2 to obtain sentence representations. 
We then retrieve the most relevant sentences in the training set by computing the cosine similarity of each sentence with this query vector and ranking the sentences accordingly. Domain-Finetune It is now common knowledge that pretrained language models are especially useful when fine-tuned for the task of interest in an end-to-end manner (Ruder et al., 2019). In this method we fine-tune the pretrained LM for binary classification, where we use the in-domain sentences as positive examples, and randomly sampled general-domain sentences as negative examples. We then apply this classifier on the generaldomain data and pick the sentences that are classified as positive as in-domain, or choose the top-k sentences as ranked by the classifier output distribution. This can be seen as an instance of positiveunlabeled learning for document-set expansion; see Jacovi et al. (2019) for a recent discussion and methodology for this task. Negative Sampling with Pre-ranking One problem that may rise when randomly sampling negative examples is that unlabeled in-domain sentences from the general-domain data may be sampled as negative examples – deteriorating the classifier performance. To alleviate this issue, we perform a biased sampling of negative examples. We first rank the general-domain data using the without pre-ranking with pre-ranking p r F1 p r F1 Subtitles 0.722 0.984 0.833 0.964 0.978 0.971 Law 0.761 0.94 0.841 0.944 0.94 0.942 Medical 0.821 0.916 0.866 0.929 0.92 0.925 IT 0.848 0.956 0.898 0.955 0.98 0.967 Koran 0.966 0.958 0.962 0.994 0.974 0.984 Table 5: Ablation analysis showing precision (p) recall (r) and F1 for the binary classification accuracy on a held-out set, with and without pre-ranking. 7754 Medical Law Koran IT Subtitles Average Random-500k 49.8 53.3 18.5 37.5 25.5 36.92 Moore-Lewis-Top-500k 55 58 21.4 42.7 27.3 40.88 Domain-Cosine-Top-500k 52.7 58 22 42.5 27.1 40.46 Domain-Finetune-Top-500k 54.8 58.8 21.8 43.5 27.4 41.26 Domain-Finetune-Positive 55.3 58.7 19.2 42.5 27 40.54 Oracle 56.5 59 15.9 43 27.3 40.34 All 53.3 57.2 20.9 42.1 27.6 40.22 Table 6: SacreBLEU scores for the data selection experiments. Highest scores per column are marked in bold. Domain-Cosine method, and then sample negative examples under a certain threshold in the ranking (in our experiments we sampled from the bottom two-thirds). Table 5 shows an ablation for such pre-ranking, measuring precision, recall and F1 for binary classification on a held-out set for each domain. When not using pre-ranking, as the training data for the domain is larger, the precision is lower – since more in-domain examples are drawn as negative samples. Using pre-ranking indeed alleviates this issue, achieving higher F1 scores in all cases. Given the results in Table 5 we always use pre-ranking in the following experiments. 4.2 Experimental Setup We perform data selection experiments for each domain in the multi-domain dataset. As the small set of monolingual in-domain data we take the 2000 development sentences from each domain. For the general-domain corpus we concatenate the training data from all domains, resulting in 1,456,317 sentences. To enable faster experimentation we used DistilBERT (Sanh et al., 2019) for the DomainCosine and Domain-Finetune methods. More technical details are available in the supplementary material. 
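A minimal sketch of the Domain-Cosine ranking and the pre-ranked negative sampling described in Section 4.1; the function names are ours, and the sentence vectors are assumed to come from the same average-pooling encoder as in Section 2:

import numpy as np

def domain_cosine_rank(in_domain_vectors, general_vectors):
    # Query vector: element-wise average of the in-domain sentence vectors,
    # then rank the general-domain sentences by cosine similarity to it.
    query = in_domain_vectors.mean(axis=0)
    query /= np.linalg.norm(query)
    general = general_vectors / np.linalg.norm(general_vectors, axis=1, keepdims=True)
    scores = general @ query
    return np.argsort(-scores)          # sentence indices, most similar first

def sample_negatives(ranking, num_negatives, seed=0):
    # Biased negative sampling: draw negatives only from the bottom two-thirds
    # of the Domain-Cosine ranking, reducing the chance that unlabeled
    # in-domain sentences are used as negative examples.
    rng = np.random.default_rng(seed)
    pool = ranking[len(ranking) // 3:]
    return rng.choice(pool, size=num_negatives, replace=False)

# Top-k selection, e.g. the top 500k sentences:
# selected = domain_cosine_rank(in_domain_vectors, general_vectors)[:500000]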
We compare our methods to four approaches: (1) The established method by Moore and Lewis (2010), (2) a random selection baseline, (3) an oracle which is trained on all the available in-domain data, and (4) the model we train on all the domains concatenated. We select the top 500k examples to cover the size of every specific in-domain dataset. We train Transformer NMT models on the selected data with a similar configuration to the ones trained in the cross-domain evaluation. 4.3 Results The results are available in Table 6. We can see that all selection methods performed much better in terms of BLEU than random selection. It is also nice to see that all selection methods performed better than using all the available data or the oracle-selected data when averaged across all Moore-Lewis D-Cosine D-Finetune p r p r p r Medical 0.476 0.955 0.391 0.788 0.485 0.975 Law 0.836 0.894 0.841 0.899 0.902 0.965 Koran 0.35 0.985 0.36 0.989 0.36 0.998 IT 0.441 0.985 0.382 0.857 0.447 0.998 Subtitles 0.899 0.899 0.916 0.916 0.957 0.957 Average 0.6 0.944 0.578 0.89 0.63 0.979 Table 7: Precision (p) and recall (r) for data selection of 500k sentences with respect to the oracle selection. domains, showing again that more data is not necessarily better in multi-domain scenarios and that data selection is a useful approach. Regarding a comparison of the data selection methods, MooreLewis performed better than Domain-Cosine, while Domain-Finetune performed best, showing the benefit of fine-tuning large pretrained models for the data selection task. Using the positively-labeled examples alone (Domain-Finetune-Positive) performed worse than using the top 500k examples but better than Domain-Cosine, while not requiring to determine the number of selected sentences. 4.4 Analysis We perform an analysis on the selected datasets, where we measure the precision and recall of sentence selection with respect to the oracle selection. The results are available in Table 7. As also reflected in the BLEU scores, the Domain-Finetune method resulted in the highest domain recall with a minimum of 97.5, while Moore-Lewis and DomainCosine scored 89.4 and 78.8 respectively. We find these results very appealing given that only 2000 in-domain sentences were used for selection for each domain out of 1.45 million sentences. Also note that we used DistilBERT in these experiments: we believe that using larger, non-distilled models may result in even better selection performance (although at the price of larger computational requirements). 5 Related Work Previous works used n-gram LMs for data selection (Moore and Lewis, 2010; Axelrod et al., 2011) or 7755 other count-based methods (Axelrod, 2017; Poncelas et al., 2018; Parcheta et al., 2018; Santamar´ıa and Axelrod, 2019). While such methods work well in practice, they cannot generalize beyond the N-grams observed in the in-domain datasets, which are usually small. Duh et al. (2013) proposed to replace n-gram models with RNN-based LMs with notable improvements. However, such methods do not capture the rich sentence-level global context as in the recent self-attention-based MLMs; as we showed in the clustering experiments, autoregressive neural LMs were inferior to masked LMs in clustering the data by domain. In addition, training large LMs may be prohibitive without relying on pre-training. Regarding domain clustering for MT, Hasler et al. (2014) discovered topics using LDA instead of using domain labels. Cuong et al. 
(2016) induced latent subdomains from the training data using a dedicated probabilistic model. Many works used vector-based retrieval for data selection; Ruder and Plank (2017) learn to select data using Bayesian optimization, and explored word2vec for that purpose. Duma and Menzel (2016) create paragraph vectors for data selection in the context of SMT. Wang et al. (2017) use internal representations from the NMT model to perform data selection. Bapna and Firat (2019) propose a mechanism for incorporating retrieved sentences for each instance for domain adaptation in NMT, using representations extracted from a pretrained NMT model. Farajian et al. (2017) explored instance-based data selection in a multi-domain scenario using information retrieval methods. Other related works on domain adaptation include Dou et al. (2019a) that adapts multi-domain NMT models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Peris et al. (2017) proposed neuralnetwork based classifiers for data selection in SMT. For more related work on data selection and domain adaptation in the context of MT, see the surveys by Eetemadi et al. (2015) for SMT and more recently Chu and Wang (2018) for NMT. Unrelated to MT, Ma et al. (2019) used BERT to select data for tasks from the GLUE benchmark (Wang et al., 2018). However, they assumed supervision for all the different tasks/domains, while we propose an unsupervised method requiring only a small set of in-domain data. Also in the context of pretrained language models, Gururangan et al. (2020) show the importance of additional pretraining with in-domain data to improve the downstream task-specific performance. While previous work made important contributions to domain data selection, our work is the first to explore massive pretrained language models for both unsupervised domain clustering and for data selection in NMT. 6 Conclusions and Future Work We showed that massive pre-trained language models are highly effective in mapping data to domains in a fully-unsupervised manner using averagepooled sentence representations and GMM-based clustering. We suggest that such clusters are a more appropriate, data driven approach to domains in natural language than simplistic labels (e.g. “medical text”), and that it will improve over time as better and larger pretrained LMs will become available. We proposed new methods to harness this property for domain data selection using distance-based ranking in vector space and pretrained LM finetuning, requiring only a small set of in-domain data. We demonstrated the effectiveness of our methods on a new, improved data split we created for a previously studied multi-domain machine translation benchmark. Our methods perform similarly or better than an established data selection method and oracle in-domain training across all five domains in the benchmark. This work just scratches the surface with what can be done on the subject; possible avenues for future work include extending this with multilingual data selection and multilingual LMs (Conneau and Lample, 2019; Conneau et al., 2019; Wu et al., 2019; Hu et al., 2020), using such selection methods with domain-curriculum training (Zhang et al., 2019; Wang et al., 2019b), applying them on noisy, web-crawled data (Junczys-Dowmunt, 2018) or for additional tasks (Gururangan et al., 2020). Another interesting avenue is applying this to unsupervised NMT, which is highly sensitive to domain mismatch (Marchisio et al., 2020; Kim et al., 2020). 
We hope this work will encourage more research on finding the right data for the task, towards more efficient and robust NLP. Acknowledgements We thank Wei Wang for early discussions on domain adaptation and data selection that inspired this work during Roee’s internship in Google Translate. 7756 References Roee Aharoni and Yoav Goldberg. 2018. Split and rephrase: Better evaluation and stronger baselines. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 719–724, Melbourne, Australia. Association for Computational Linguistics. Mikko Aulamo and J¨org Tiedemann. 2019. The OPUS resource repository: An open package for creating parallel corpora and machine translation services. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 389–394, Turku, Finland. Link¨oping University Electronic Press. Amittai Axelrod. 2017. Cynical selection of language model training data. arXiv preprint arXiv:1709.02279. Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355–362, Edinburgh, Scotland, UK. Association for Computational Linguistics. Ankur Bapna and Orhan Firat. 2019. Non-parametric adaptation for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1921–1931, Minneapolis, Minnesota. Association for Computational Linguistics. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304–1319, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? an analysis of BERT’s attention. arXiv preprint arXiv:1906.04341. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057–7067. Susan M Conrad and Douglas Biber. 2005. The frequency and use of lexical bundles in conversation and academic prose. Lexicographica. Hoang Cuong, Khalil Sima’an, and Ivan Titov. 2016. Adapting to all domains at once: Rewarding domain invariance in SMT. Transactions of the Association for Computational Linguistics, 4:99–112. Hal Daume. 2009. K-means vs GMM, sum-product vs max-product. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, and Graham Neubig. 2019a. Unsupervised domain adaptation for neural machine translation with domainaware feature embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1417–1422, Hong Kong, China. Association for Computational Linguistics. Zi-Yi Dou, Xinyi Wang, Junjie Hu, and Graham Neubig. 2019b. Domain differential adaptation for neural machine translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, Hong Kong. Association for Computational Linguistics. Kevin Duh, Graham Neubig, Katsuhito Sudoh, and Hajime Tsukada. 2013. Adaptation data selection using neural language models: Experiments in machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 678–683, Sofia, Bulgaria. Association for Computational Linguistics. Mirela-Stefania Duma and Wolfgang Menzel. 2016. Data selection for IT texts using paragraph vector. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 428–434, Berlin, Germany. Association for Computational Linguistics. 7757 Sauleh Eetemadi, William Lewis, Kristina Toutanova, and Hayder Radha. 2015. Survey of data-selection methods in statistical machine translation. Machine Translation, 29(3-4):189–223. M. Amin Farajian, Marco Turchi, Matteo Negri, and Marcello Federico. 2017. Multi-domain neural machine translation through unsupervised adaptation. In Proceedings of the Second Conference on Machine Translation, pages 127–137, Copenhagen, Denmark. Association for Computational Linguistics. Guillem Gasc´o, Martha-Alicia Rocha, Germ´an Sanchis-Trilles, Jes´us Andr´es-Ferrer, and Francisco Casacuberta. 2012. Does more data always yield better translations? In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 152–161, Avignon, France. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. Suchin Gururangan, Ana Marasovi, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don’t stop pretraining: Adapt language models to domains and tasks. ACL. Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415–423, Boulder, Colorado. Association for Computational Linguistics. Eva Hasler, Phil Blunsom, Philipp Koehn, and Barry Haddow. 2014. Dynamic topic adaptation for phrase-based MT. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 328– 337, Gothenburg, Sweden. Association for Computational Linguistics. Kenneth Heafield. 2011. 
KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080. Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime Carbonell. 2019. Domain adaptation of neural machine translation by lexicon induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics. Alon Jacovi, Gang Niu, Yoav Goldberg, and Masashi Sugiyama. 2019. Scalable evaluation and improvement of document set expansion via neural positive-unlabeled learning. arXiv preprint arXiv:1910.13339. Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888–895, Belgium, Brussels. Association for Computational Linguistics. Yunsu Kim, Miguel Grac¸a, and Hermann Ney. 2020. When and why is unsupervised neural machine translation useless? arXiv preprint arXiv:2004.10581. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics. Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2019. Domain adaptation with BERT-based domain classification and data selection. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 76–83, Hong Kong, China. Association for Computational Linguistics. Christopher D Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to information retrieval. Cambridge university press. Kelly Marchisio, Kevin Duh, and Philipp Koehn. 2020. When does unsupervised machine translation work? arXiv preprint arXiv:2004.05516. 7758 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. 
In Proceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden. Association for Computational Linguistics. Mathias M¨uller, Annette Rios, and Rico Sennrich. 2019. Domain robustness in neural machine translation. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR’s WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics. Xing Niu, Marianna Martindale, and Marine Carpuat. 2017. A study of style in machine translation: Controlling the formality of machine translation output. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2814–2819, Copenhagen, Denmark. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Zuzanna Parcheta, Germ´an Sanchis-Trilles, and Francisco Casacuberta. 2018. Data selection for NMT using infrequent n-gram recovery. EAMT 2018. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. ´Alvaro Peris, Mara Chinea-R´ıos, and Francisco Casacuberta. 2017. Neural networks classifier for data selection in statistical machine translation. The Prague Bulletin of Mathematical Linguistics, 108(1):283–294. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alberto Poncelas, Gideon Maillette de Buy Wenniger, and Andy Way. 2018. Data selection with feature decay algorithms using an approximated target side. arXiv preprint arXiv:1811.03039. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI blog. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. Sebastian Ruder, Matthew E. Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Transfer learning in natural language processing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials, pages 15–18, Minneapolis, Minnesota. Association for Computational Linguistics. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372–382, Copenhagen, Denmark. Association for Computational Linguistics. Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, and Stephan Vogel. 2017. Neural machine translation training in a multi-domain scenario. arXiv preprint arXiv:1708.08712. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Luc´ıa Santamar´ıa and Amittai Axelrod. 2019. Data selection with cluster-based language difference models and cynical selection. arXiv preprint arXiv:1904.04900. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational 7759 Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Catarina Cruz Silva, Chao-Hong Liu, Alberto Poncelas, and Andy Way. 2018. Extracting in-domain training corpora for neural machine translation using data selection methods. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 224–231, Belgium, Brussels. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214–2218, Istanbul, Turkey. European Languages Resources Association (ELRA). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 560–566, Vancouver, Canada. Association for Computational Linguistics. Wei Wang, Isaac Caswell, and Ciprian Chelba. 2019b. 
Dynamically composing domain-data selection with clean-data selection by “co-curricular learning” for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1282–1292, Florence, Italy. Association for Computational Linguistics. Marlies van der Wees. 2017. What’s in a Domain? Towards Fine-Grained Adaptation for Machine Translation. Ph.D. thesis, University of Amsterdam. Marlies van der Wees, Arianna Bisazza, Wouter Weerkamp, and Christof Monz. 2015. What’s in a domain? analyzing genre and topic differences in statistical machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 560–566, Beijing, China. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Emerging cross-lingual structure in pretrained language models. arXiv preprint arXiv:1911.01464. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, and Kevin Duh. 2019. Curriculum learning for domain adaptation in neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1903–1915, Minneapolis, Minnesota. Association for Computational Linguistics. 7760 A Appendix A.1 NMT Training Figure 5 details the hyperparameter configuration we used to train the NMT models. We use Transformer models (Vaswani et al., 2017) in the Base configuration using the implementation provided in Fairseq (Ott et al., 2019). For all models we use a joint BPE vocabulary (Sennrich et al., 2016) learned with 32k merge operations over the concatenated corpus in both languages, enabling to tie all the embedding layers (Press and Wolf, 2017).12 We perform early stopping if the BLEU score on the domain-specific development set did not improve in 10 consequent checkpoints. We use the ADAM (Kingma and Ba, 2014) optimizer with an initial learning rate of 5 · 10−4 and a maximum of 4096 tokens per batch. We trained all models on a single NVIDIA GPU. We decode using beam search with a beam size of 5. For pre-processing we used the Moses (Koehn et al., 2007) pipeline including tokenization, normalize-punctuation, nonprinting character removal, truecasing and cleaning. We removed examples with sequences longer than 100 tokens from the training data (before subword segmentation). A.2 Data Split Table 8 shows details about the overlap between the training, development and test sets for the different data splits of the multi-domain dataset. The overlap was computed using the English part of the corpus. A.3 GMM Clustering We learn GMMs with full covariance matrices, i.e. without constraints on covariance matrices that determine the shape of each component in the mixture, as implemented in scikit-learn (Pedregosa et al., 2011). 
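A minimal sketch of this clustering setup — average-pooled pretrained LM sentence vectors as in Section 2.2, optional PCA to 50 dimensions, and a full-covariance GMM. The model name and function names are illustrative rather than the exact implementation used:

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def encode(sentences, model_name="bert-base-uncased", batch_size=32):
    # Average-pool the last hidden state over the non-padding tokens of each sentence.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    vectors = []
    with torch.no_grad():
        for i in range(0, len(sentences), batch_size):
            batch = tokenizer(sentences[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            hidden = model(**batch).last_hidden_state          # (B, T, H)
            mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
            vectors.append(((hidden * mask).sum(1) / mask.sum(1)).numpy())
    return np.concatenate(vectors)

def cluster(vectors, k=5, pca_dim=50, seed=0):
    # Optional PCA, then a GMM with full covariance matrices (soft assignments).
    if pca_dim:
        vectors = PCA(n_components=pca_dim).fit_transform(vectors)
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          max_iter=150, random_state=seed)
    return gmm.fit_predict(vectors)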
We train the models until convergence or for a maximum of 150 EM iterations. A.4 Language Model Finetuning We fine-tune the binary classification head for 5 epochs. We use the ADAM (Kingma and Ba, 2014) optimizer with an initial learning rate of 2 · 10−5. We train the model using 4 NVIDIA GPUs with 256 sentences per batch (64 per GPU). 12We used the implementation in https://github. com/rsennrich/subword-nmt CUDA_VISIBLE_DEVICES=0 \ python $FAIRSEQ_PATH/train.py ${BINARIZED_DATA_DIR} \ --arch transformer_wmt_en_de \ --share-all-embeddings \ --optimizer adam \ --adam-betas ’(0.9, 0.98)’ \ --clip-norm 1.0 \ --lr 0.0005 \ --lr-scheduler inverse_sqrt \ --warmup-updates 4000 \ --warmup-init-lr 1e-07 \ --dropout 0.2 \ --weight-decay 0.0 \ --criterion label_smoothed_cross_entropy \ --label-smoothing 0.1 \ --max-tokens 4096 \ --update-freq 5 \ --attention-dropout 0.2 \ --activation-dropout 0.2 \ --max-epoch 200 \ --seed 17 \ -s $src \ -t $tgt \ --save-dir $MODEL_PATH \ --save-interval-updates 10000 \ --validate-interval 1 Figure 5: The hyperparameter configuration we used for NMT model training using Fairseq (Ott et al., 2019). A.5 Moore-Lewis Implementation We used the implementation of Moore and Lewis (2010) by Pamela Shapiro, as available in: https://github.com/pamelashapiro/ moore-lewis. This implementation uses the KenLM N-Gram language model toolkit (Heafield, 2011). A.6 Additional Visualizations Figure 6 shows visualizations of the multi-domain dataset from additional pre-trained masked language models (BERT large and RoBERTa), and Figure 7 shows the same visualization for autoregressive models (XLNet and GPT2). 7761 Koehn and Knowles (2017) M¨uller et al. (2019) New Split % dev in train Medical 1090/2000 (54.5%) 1204/2000 (60.2%) 0/2000 Koran 0/2000 1926/2000 (96.3) 0/2000 Subtitles 1183/5000 (23.66%) 638/2000 (31.9%) 0/2000 Law 595/2000 (29.75%) 1000/2000 (50%) 0/2000 IT 2496/2526 (98.81%) 783/2000 (39.15%) 0/2000 % test in train Medical 571/2000 (28.55%) 516/1691 (30.51%) 0/2000 Koran 0/2000 1949/2000 (97.45%) 0/2000 Subtitles 451/5000 (9.02%) 478/2000 (23.9%) 0/2000 Law 649/2000 (32.45%) 966/2000 (48.3%) 0/2000 IT 945/1856 (50.92%) 1036/2000 (51.8%) 0/2000 Table 8: Details about the different data splits for the multi-domain corpus. 7762 it koran subtitles medical law bert-large-cased it koran subtitles medical law roberta-large Figure 6: 2D visualizations of the unsupervised GMM-based clustering for different pretrained MLMs. 7763 it koran subtitles medical law xlnet-base-cased it koran subtitles medical law gpt2 Figure 7: 2D visualizations of the unsupervised GMM-based clustering for different pretrained auto-regressive LMs.
2020
692
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7764–7770 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7764 Using Context in Neural Machine Translation Training Objectives Danielle Saunders and Felix Stahlberg∗and Bill Byrne Department of Engineering, University of Cambridge, UK [email protected] [email protected] [email protected] Abstract We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents. Previous sequence-objective approaches to NMT training focus exclusively on sentence-level metrics like sentence BLEU which do not correspond to the desired evaluation metric, typically document BLEU. Meanwhile research into document-level NMT training focuses on data or model architecture rather than training procedure. We find that each of these lines of research has a clear space in it for the other, and propose merging them with a scheme that allows a document-level evaluation metric to be used in the NMT training objective. We first sample pseudo-documents from sentence samples. We then approximate the expected document BLEU gradient with Monte Carlo sampling for use as a cost function in Minimum Risk Training (MRT). This twolevel sampling procedure gives NMT performance gains over sequence MRT and maximum-likelihood training. We demonstrate that training is more robust for document-level metrics than with sequence metrics. We further demonstrate improvements on NMT with TER and Grammatical Error Correction (GEC) using GLEU, both metrics used at the document level for evaluations. 1 Introduction Neural Machine Translation (NMT) research has explored token-level likelihood functions (Sutskever et al., 2014; Bahdanau et al., 2015) and sequence-level objectives inspired by reinforcement learning (Ranzato et al., 2016; Bahdanau et al., 2016) or expected Minimum Risk Training (MRT) (Shen et al., 2016). A typical sequence objective in these cases is based on sentence-level BLEU (sBLEU) (Edunov et al., 2018). However ∗Now at Google sBLEU, even if aggregated over sentences, is only an approximation of the desired metric, documentlevel BLEU. Beyond translation, many metrics for natural language tasks do not have robust sentencelevel approximations. A logical progression is the extension of sequence-level NMT training objectives to include context from outside the sentence. Document-based NMT, by contrast, aims to use out-of-sentence context to improve translation. Recent research explores lexical consistency by providing additional sentences during training (Maruf et al., 2019; Voita et al., 2018, 2019) or inference (Voita et al., 2019; Stahlberg et al., 2019), potentially with adjustments to model architecture. However, to the best of our knowledge, no attempt has been made to extend sequence-level neural training objectives to include document-level reward functions. This is despite document-level BLEU being arguably the most common NMT metric, and being the function originally optimised by Minimum Error Rate Training (MERT) for Statistical Machine Translation (SMT) (Och, 2003). We propose merging lines of research on training objectives and document-level translation. We achieve this by presenting a document-level approach to sequence-level objectives which brings the training objective closer to the actual evaluation metric, using MRT as a representative example. 
We demonstrate MRT under document-level BLEU as well as Translation Edit Rate (TER) (Snover, 2006), which while decomposable to sentence level is less noisy when used over documents. We consider both pseudo-documents where sentences are assigned randomly to a mini-batch, and true document context where all sentences in the batch are from the same document. We finally apply our scheme to supervised Grammatical Error Correction, for which using neural models is becoming increasingly popular (Xie et al., 2016; Sakaguchi et al., 2017; Stahlberg et al., 2019). 7765 We show gains in GEC metrics GLEU (Napoles et al., 2015) and M2 (Dahlmeier and Ng, 2012). 1.1 Related Work Minimum Error Rate Training was introduced for phrase-based SMT with document-level BLEU (Och, 2003). Shen et al. (2016) extend these ideas to NMT, using expected minimum risk at the sequence level with an sBLEU cost for end-to-end NMT training. Edunov et al. (2018) explore random and beam sampling for NMT sequence-MRT, as well as other sequence-level training losses. Related developments in NMT include combined reinforcement-learning/cross-entropy approaches such as MIXER (Ranzato et al., 2016), which itself has origins in the REINFORCE algorithm described by Williams (1992). We do not explore such approaches, although our documentsampling and document-metric schemes could in principle be extended to them. Sequence-level MRT has seen success outside NMT. Ayana et al. (2016) use sequence MRT for summarization, while Shannon (2017) uses a related approach for speech recognition. MRT can be seen as a special case of neural reinforcement learning, which Sakaguchi et al. (2017) apply to GEC with sequence-level costs. Closest to our approach is the work of Jean and Cho (2019) on NMT with a minibatch-context-sensitive training procedure. However, they do not optimize on document metrics over those contexts. They also sample contexts randomly, while we find diverse context sampling is important for the success of document-MRT. 2 Background 2.1 Sequence-level MRT Sentence-level MRT for NMT aims to minimize the expected loss on training data with a loss function between sampled target sentences y and gold reference sentences y∗. For NMT a common sentencelevel cost function ∆(y, y∗) is 1 - sBLEU, where sBLEU is smoothed by setting initial n-gram counts to 1 (Edunov et al., 2018). We take N samples for each of the S sentences in a mini-batch. We write the cost function between the sth reference in a mini-batch, y(s)∗, and its nth sample, y(s) n , as ∆(s) n = ∆(y(s) n , y(s)∗). The risk gradient for end-to-end NMT with MRT as in Shen et al. (2016), with sample-count scaling, is then: ∇θR(θ) = 1 N S X s=1 N X n=1 ∆(s) n ∂ ∂θ log P(y(s) n |x(s); θ) (1) 2.2 Document-level MRT By analogy with sequence-level MRT, we consider MRT over batches of S sentence pairs, which we treat as a pseudo-document. In practice we experiment both with sentences chosen randomly from all training data, and with true context where all sentences per batch are from a single document. Let X = [x(1), . . . , x(S)] be the source document, Y = [y(1), . . . , y(S)] be a document of candidate translations, and Y ∗= [y(1)∗, . . . , y(S)∗] be the reference translations. Document-level metric D(Y, Y ∗), which may be non-differentiable, replaces the sequence-level metric ∆(y, y(s)∗). 
2.2 Document-level MRT

By analogy with sequence-level MRT, we consider MRT over batches of S sentence pairs, which we treat as a pseudo-document. In practice we experiment both with sentences chosen randomly from all training data, and with true context where all sentences per batch are from a single document. Let X = [x^{(1)}, ..., x^{(S)}] be the source document, Y = [y^{(1)}, ..., y^{(S)}] be a document of candidate translations, and Y* = [y^{(1)*}, ..., y^{(S)*}] be the reference translations. Document-level metric D(Y, Y*), which may be non-differentiable, replaces the sequence-level metric Δ(y, y^{(s)*}). We define the document-level risk:

R(\theta) = \sum_{Y} D(Y, Y^*) \, P(Y \mid X; \theta)

Using p_\theta \nabla_\theta \log p_\theta = \nabla_\theta p_\theta, and defining L(Y) = \log P(Y \mid X; \theta) for brevity:

\nabla_\theta R(\theta) = \sum_{Y} D(Y, Y^*) \, P(Y \mid X; \theta) \, \nabla_\theta L(Y) = \mathbb{E}\big[ D(Y, Y^*) \, \nabla_\theta L(Y) \mid X; \theta \big] \qquad (2)

Using simple Monte-Carlo, after Shannon (2017), we replace the expectation by an average taken over N sampled translation documents Y_n ∼ P(Y | X; θ):

\nabla_\theta R(\theta) \approx \frac{1}{N} \sum_{n=1}^{N} D(Y_n, Y^*) \, \nabla_\theta L(Y_n)

The nth sample for the sth sentence in the batch-level document, y_n^{(s)}, contributes the following term to the overall gradient:

\frac{1}{N} \sum_{Y \,:\, y^{(s)} = y_n^{(s)}} D(Y, Y^*) \, \nabla_\theta \log P\big(y_n^{(s)} \mid x^{(s)}; \theta\big)

In other words the gradient of each sample is weighted by the aggregated document-level scores for documents in which the sample appears.

Figure 1: Sample-ordering schemes for MRT with S = 2 sentences / batch and N = 3 samples / sentence, showing sample costs. In sequence-MRT each sample has its own cost (e.g. sBLEU). For doc-MRT (ordered), samples are ordered and sorted into N-wise ‘documents’, each with a combined cost (e.g. document BLEU). The ordered assignment enforces an extreme range of combined costs. In doc-MRT (random), samples are randomly assigned, making documents on average less diverse with less distinct scores, with a low likelihood of extreme distributions.

2.3 Mini-batch level document sampling

To generate sample documents we first sample sentences. Sentence sampling for NMT generates new tokens in a left-to-right manner (Shen et al., 2016). In left-to-right generation each token is sampled from a distribution conditioned on previously sampled tokens, minimizing exposure bias to gold references which the model is unlikely to see at inference time (Ranzato et al., 2016). Sampling can be via beam search, or random sampling from the model distribution given previously sampled tokens. Beam search produces more likely samples which may be less diverse compared to random sampling (Edunov et al., 2018). Here we only consider sampling during training. While samples can be more easily generated offline with respect to fixed model parameters, such samples are not representative of the current model.

With N sample translations for each of the S sentence pairs per batch we can construct N^S possible sample documents as sequences of S sentences. Considering all possible documents is intractable unless N and S are small. It also carries the risk that a single sentence will appear in multiple sampled documents, giving it undue weight. Instead we propose creating N documents by first ordering samples for each sentence (e.g. by sBLEU), then creating the nth sample document Y_n by concatenating the nth sample from each sentence. This gives a set of N diverse documents sampled from N^S possibilities. We expect the sampled documents to be diverse in contents, since a given sentence will only ever occur in a single document context, and diverse in score. We refer to this scheme as ordered document sampling. Figure 1 illustrates ordered document sampling by comparison to a scheme which randomly samples sentences to form documents.

3 Experiments

We report on English-German NMT. We initialize with a baseline trained on 17.5M sentence pairs from WMT19 news task datasets (Barrault et al., 2019), on which we learn a 32K-merge joint BPE vocabulary (Sennrich et al., 2016). We validate on newstest2017, and evaluate on newstest2018. We apply MRT only during fine-tuning, following previous work (Edunov et al., 2018; Shen et al., 2016).
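For concreteness, the ordered document sampling of Section 2.3 reduces to a few lines once the per-sentence samples and their sentence-level scores are available; a minimal sketch, in which the names and the doc_cost callable (e.g. 1 − document BLEU against the batch references) are our own assumptions:

def ordered_document_costs(samples, sentence_scores, references, doc_cost):
    # samples:         list over the S sentences in the batch, each a list of
    #                  N sampled translations.
    # sentence_scores: matching S x N scores (e.g. sBLEU), used only to order
    #                  the samples within each sentence.
    # doc_cost:        callable giving D(Y_n, Y*) for S hypotheses vs. the S
    #                  references, e.g. 1 - document BLEU.
    S, N = len(samples), len(samples[0])
    # Sort each sentence's samples best-first, so the n-th document is built
    # from the n-th ranked sample of every sentence.
    ordered = [[s for s, _ in sorted(zip(samples[i], sentence_scores[i]),
                                     key=lambda t: t[1], reverse=True)]
               for i in range(S)]
    documents = [[ordered[i][n] for i in range(S)] for n in range(N)]
    # One cost per sampled document; each sample appears in exactly one
    # document, and its log-probability gradient is weighted by that cost.
    return [doc_cost(doc, references) for doc in documents]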
In early experiments, we found that training from scratch with discriminative objectives (sequence- or document-based) is ineffective. We suspect samples produced early in training are so unlike the references that the model never receives a strong enough signal for effective training. We fine-tune on old WMT news task test sets (2008-2016) in two settings. With random batches, sentences from different documents are shuffled randomly into mini-batches; in this case doc-MRT metrics are over pseudo-documents. With document batches, each batch contains only sentences from one document, and doc-MRT uses true document context. We use the same sampling temperatures and the same risk sharpness factors for both forms of MRT in each experiment.

For Grammatical Error Correction (GEC) we train on sentences from NUCLE (Dahlmeier et al., 2013) and Lang-8 Learner English (Mizumoto et al., 2012) with at least one correction, a total of 660K sentences. We evaluate on the JFLEG (Napoles et al., 2017) and CoNLL 2014 (Ng et al., 2014) sets. For GEC experiments we use random batching only.

For all models we use a Transformer model (Vaswani et al., 2017) with the 'base' Tensor2Tensor parameters (Vaswani et al., 2018). We train to validation set BLEU convergence on a single GPU. The batch size for baselines and MLE is 4096 tokens. For MRT, where each sentence in the batch is sampled N times, we reduce the batch size by N while delaying gradient updates by the same factor to keep the effective batch size constant (Saunders et al., 2018). At inference time we decode using beam size 4. All BLEU scores are for cased, detokenized output, calculated using SacreBLEU (Post, 2018).

3.1 Computation and sample count

Our proposed document-MRT approach is more complex than sequence-MRT due to the additional score-aggregation and context-sampling steps. In practice we find that the extra computation of ordering and aggregating sequence scores is negligible when compared to the computational cost of sentence sampling, required for all forms of MRT.

Our MRT experiments use N = 8 random samples per sentence unless otherwise stated. In this we choose the highest N we can practically experiment with, since previous work finds MRT performance increasing steadily with more samples per sentence (Shen et al., 2016). That we see improvements with so few samples is in contrast to previous work, which finds BLEU gains only with 20 or more samples per sentence for sequence-MRT (Shen et al., 2016; Edunov et al., 2018). However, we find that document-MRT allows improvements with far fewer samples, perhaps because the aggregation of scores over sentences in a context increases robustness to variation in individual samples.

Relatedly, we find that add-one BLEU smoothing (Lin and Och, 2004) is required for sequence-MRT as in Shen et al. (2016). However, we find that doc-MRT can achieve good results without smoothing, perhaps because n-gram precisions are far less likely to be 0 when calculated over a document.

3.2 MRT for NMT

Model              | Random batches      | Document batches
Baseline           | 42.7                | 42.7
MLE                | 40.0                | 41.0
                   | N = 4    | N = 8    | N = 4    | N = 8
Seq-MRT            | 42.6     | 43.5     | 42.6     | 43.5
Doc-MRT (random)   | 41.7*    | 43.1*    | 43.1     | 43.0
Doc-MRT (ordered)  | 43.4     | 43.7     | 43.4     | 43.9

Table 1: BLEU on en-de after MLE and MRT under 1−sBLEU (seq-MRT) and 1−doc BLEU (doc-MRT). Results indicated by * are averages over 3 runs with the same settings, which all came within 0.2 BLEU.

In Table 1, we fine-tune an en-de baseline on documents from past news sets.
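The "reduce the batch size by N while delaying gradient updates by the same factor" recipe described above is standard gradient accumulation. Below is a hedged PyTorch-style sketch; `compute_mrt_loss` is a placeholder for the per-batch risk estimate of Eq. (2), and the exact loss scaling and scheduling in the authors' setup may differ.

```python
import torch

def fine_tune_with_delayed_updates(model, optimizer, batches, n_samples, compute_mrt_loss):
    """Accumulate gradients over n_samples reduced-size batches per optimizer
    step, so the effective batch size matches the 4096-token MLE setting."""
    optimizer.zero_grad()
    for step, batch in enumerate(batches, start=1):
        loss = compute_mrt_loss(model, batch)   # risk estimate for this small batch
        (loss / n_samples).backward()           # scale assuming a per-batch mean loss
        if step % n_samples == 0:               # delayed update
            optimizer.step()
            optimizer.zero_grad()
```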
We compare sentenceBLEU and document-BLEU MRT to fine-tuning with Maximum Likelihood Estimation (MLE). Model Random batches Document batches Baseline 39.2 39.2 MLE 41.2 40.0 Seq-MRT 39.4 40.5 Doc-MRT (ordered) 39.0 38.9 Table 2: TER on en-de after MLE and MRT under sentence-TER (seq-MRT) and doc-TER (doc-MRT). Lower TER is better. MLE fine-tuning degrades the baseline. This suggests the baseline is well-converged, as is desirable for applying MRT (Shen et al., 2016). The degradation is smaller with batches containing only sentences from the same document. We connect this to the idea that NMT batches with fewer sentence pairs have ‘noisier’ estimated gradients, harming training (Saunders et al., 2018). We expect batches of sentences from a single document to be similar and therefore give less noisy gradient estimates. Both seq-MRT and doc-MRT improve over the baseline with random sampling and N = 8. We also explore MRT at N = 4, with batch size adjusted as described in section 3 for the same effective batch size per update, and with fewer training steps such that the model ‘sees’ a similar proportion of the overall dataset. We do not report beam sampling results as early experiments indicate beam sampling gives similarly poor results for both seqMRT and doc-MRT. This may be because beam search produces insufficiently diverse samples for this task (Freitag and Al-Onaizan, 2017). Sequence-MRT gives a 0.8 BLEU gain over the baseline with both batching schemes using N = 8 samples, but starts to degrade the baseline with N = 4 samples. With document batches and N = 8 Doc-MRT (ordered) outperforms seq-MRT by a further 0.4 BLEU. With N = 4 doc-MRT (ordered) still achieves a 0.7 BLEU improvement over the baseline, or a 0.8 BLEU improvement over seq-MRT. We suggest therefore that doc-MRT (ordered) may be a computationally more efficient alternative to seq-MRT when large sample counts are not practical. For contrast with the ordered document sampling approach of Section 2.3, we give results for doc-MRT (random), which uses randomly sampled contexts. This approach falls significantly behind doc-MRT (ordered) with either batching scheme. Since doc-MRT (random) with random batches is exposed to randomness at the batch construction, sentence sampling and document sampling stages, 7768 Model JFLEG CONLL2014 P R M2 GLEU P R M2 GLEU Baseline 67.3 38.2 58.4 50.4 54.4 21.8 41.9 67.3 MLE 64.7 37.7 56.6 50.1 51.4 20.9 39.8 67.1 Seq-MRT 62.7 39.1 56.0 50.0 52.4 24.5 42.7 67.1 Doc-MRT (ordered) 64.4 41.0 57.8 51.4 53.2 24.6 43.2 67.5 Table 3: GEC Precision, Recall, M2, and GLEU after MLE and MRT. MRT is under 1−sentence-GLEU for seqMRT and 1−doc-GLEU for doc-MRT. Both MRT schemes uses random batches and random sentence sampling. Higher scores are better for all metrics. these results are averages over 3 experimental runs, which gave fairly consistent results (<0.2 BLEU range). In general we do find that results with random batches and random ordering are variable and sensitive to batch size and batching scheme. We interpret these results by considering the effect on the per-sentence cost for the different schemes. We find MRT works well when sample scores are different enough to be discriminated, but suffers if scores are too different. This is in line with the findings of Edunov et al. (2018) that including the gold reference causes the model to assign low relative probabilities to every other sample. Doc-MRT aggregates scores over many samples, while seq-MRT uses individual scores. 
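To make the contrast between individual and aggregated costs concrete, here is a toy illustration in the spirit of Figure 1, with S = 2 sentences and N = 3 samples per sentence; all numbers are invented.

```python
# seq-MRT: every sample keeps its own cost (1 - sBLEU).
seq_costs = [[0.2, 0.5, 0.8],   # samples of sentence 1, ranked best to worst
             [0.1, 0.4, 0.9]]   # samples of sentence 2, ranked best to worst

# doc-MRT (ordered): the n-th ranked samples of both sentences form document n
# and share one aggregated cost (1 - document BLEU).
doc_costs = [0.15, 0.45, 0.85]

def seq_weight(s, n):   # weight of sample n of sentence s under seq-MRT
    return seq_costs[s][n]

def doc_weight(s, n):   # under doc-MRT the weight is the document-level cost
    return doc_costs[n]
```

The ordered grouping keeps the document costs well separated, which is what lets doc-MRT discriminate between samples even with few samples per sentence.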
We believe this explains the stronger performance of doc-MRT for small values of N, especially for the ordered document scheme, which ensures scores are still different enough for MRT to discriminate. Our approach can also be used with documentlevel metrics that are not intended to be used with individual sentences. In Table 2 we demonstrate this with TER, which estimates the edit rate required to correct a set of translation hypotheses. Document-TER MRT improves over a strong baseline, although batching scheme has less of an impact here. Notably seq-level MRT does not improve TER over the baseline, indicating TER may be too noisy a metric for use at the sentence level. 3.3 MRT for GEC Finally, we apply our MRT approach to the GEC GLEU metric (Napoles et al., 2015), an n-gram edit measure typically used at the document level. Table 3 shows that document MRT fine-tuning improves GLEU over the baseline, MLE fine-tuning, and a sequence-GLEU MRT formulation. Also notable is the change in M2, which finds the phraselevel edit sequence achieving the highest overlap with the gold-standard (Dahlmeier and Ng, 2012). MLE and sequence-MRT improve recall at a detriment to precision, suggesting over-generation of spurious corrections. Document-MRT likewise improves recall, but with a precision score closer to the baseline for more balanced performance. There is clear indication of a tension between M2 and GLEU: a small increase in GLEU under doc-MRT on CONLL leads to a large increase in M2, while a large increase in GLEU under doc-MRT on JFLEG leads to a small decrease in M2. We note that our improvements on JFLEG are similar to the improvements shown by Sakaguchi et al. (2017) for neural reinforcement learning with a sequence-GLEU cost metric. However, their results involve N=20 samples and 600k updates, compared to N=8 and 3k updates with our approach. 4 Conclusions and future work We present a novel approach for structured loss training with document-level objective functions. Our approach relies on a procedure for sampling a set of diverse batch-level contexts using N-wise sample ordering. As well as randomly selecting training data, we assess training with mini-batches consisting only of single document contexts. While the scope of this work does not extend to sampling sentences given document context, this would be an interesting direction for future work. We demonstrate improvements covering three document-level evaluation metrics: BLEU and TER for NMT and GLEU for GEC. We finish by noting that the original MERT procedure developed for SMT optimised document-level BLEU and with our procedure we reintroduce this to NMT. Acknowledgments This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service1 funded by EPSRC Tier-2 capital grant EP/P020259/1. 1http://www.hpc.cam.ac.uk 7769 References Shiqi Shen Ayana, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with minimum risk training. arXiv preprint arXiv:1604.01904. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In Proceedings of the International Conference on Learning Representations (ICLR’15). Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568–572. Association for Computational Linguistics. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22–31. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, et al. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 355–364. Markus Freitag and Yaser Al-Onaizan. 2017. Beam search strategies for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 56–60, Vancouver. Association for Computational Linguistics. S´ebastien Jean and Kyunghyun Cho. 2019. Contextaware learning for neural machine translation. arXiv preprint arXiv:1903.04715. Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 501–507, Geneva, Switzerland. COLING. Sameen Maruf, Andr´e F. T. Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3092–3102, Minneapolis, Minnesota. Association for Computational Linguistics. Tomoya Mizumoto, Yuta Hayashibe, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2012. The effect of learner corpus size in grammatical error correction of ESL writings. In Proceedings of COLING 2012: Posters, pages 863–872. The COLING 2012 Organizing Committee. Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 588–593. Association for Computational Linguistics. Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 229–234. Association for Computational Linguistics. 
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 160–167. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR. Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2017. Grammatical error correction with neural reinforcement learning. In Proceedings of the Eighth International Joint Conference on 7770 Natural Language Processing (Volume 2: Short Papers), pages 366–372. Asian Federation of Natural Language Processing. Danielle Saunders, Felix Stahlberg, Adri`a de Gispert, and Bill Byrne. 2018. Multi-representation ensembles and delayed SGD updates improve syntaxbased NMT. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 319– 325, Melbourne, Australia. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Matt Shannon. 2017. Optimizing expected word error rate via sampling for speech recognition. arXiv preprint arXiv:1706.02776. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1683–1692. M Snover. 2006. A study of translation edit rate with targeted human annotation. Proc. Association for Machine Translation in the Americas (AMTA2006). Felix Stahlberg, Christopher Bryant, and Bill Byrne. 2019. Neural grammatical error correction with finite state transducers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4033–4039. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for neural machine translation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 193–199, Boston, MA. Association for Machine Translation in the Americas. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. Context-aware monolingual repair for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 876–885, Hong Kong, China. Association for Computational Linguistics. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y Ng. 2016. Neural language correction with character-based attention. arXiv preprint arXiv:1603.09727.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7771–7777 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7771 Variational Neural Machine Translation with Normalizing Flows Hendra Setiawan Matthias Sperber Udhay Nallasamy Matthias Paulik Apple { hendra,sperber,udhay,mpaulik }@apple.com Abstract Variational Neural Machine Translation (VNMT) is an attractive framework for modeling the generation of target translations, conditioned not only on the source sentence but also on some latent random variables. The latent variable modeling may introduce useful statistical dependencies that can improve translation accuracy. Unfortunately, learning informative latent variables is non-trivial, as the latent space can be prohibitively large, and the latent codes are prone to be ignored by many translation models at training time. Previous works impose strong assumptions on the distribution of the latent code and limit the choice of the NMT architecture. In this paper, we propose to apply the VNMT framework to the state-of-the-art Transformer and introduce a more flexible approximate posterior based on normalizing flows. We demonstrate the efficacy of our proposal under both in-domain and out-of-domain conditions, significantly outperforming strong baselines. 1 Introduction Translation is inherently ambiguous. For a given source sentence, there can be multiple plausible translations due to the author’s stylistic preference, domain, and other factors. On the one hand, the introduction of neural machine translation (NMT) has significantly advanced the field (Bahdanau et al., 2015), continually producing state-of-the-art translation accuracy. On the other hand, the existing framework provides no explicit mechanisms to account for translation ambiguity. Recently, there has been a growing interest in latent-variable NMT (LV-NMT) that seeks to incorporate latent random variables into NMT to account for the ambiguities mentioned above. For instance, Zhang et al. (2016) incorporated latent codes to capture underlying global semantics of source sentences into NMT, while Su et al. (2018) proposed fine-grained latent codes at the word level. The learned codes, while not straightforward to analyze linguistically, are shown empirically to improve accuracy. Nevertheless, the introduction of latent random variables complicates the parameter estimation of these models, as it now involves intractable inference. In practice, prior work resorted to imposing strong assumptions on the latent code distribution, potentially compromising accuracy. In this paper, we focus on improving Variational NMT (VNMT) (Zhang et al., 2016): a family of LV-NMT models that relies on the amortized variational method (Kingma and Welling, 2014) for inference. Our contributions are twofold. (1) We employ variational distributions based on normalizing flows (Rezende and Mohamed, 2015), instead of uni-modal Gaussian. Normalizing flows can yield complex distributions that may better match the latent code’s true posterior. (2) We employ the Transformer architecture (Vaswani et al., 2017), including Transformer-Big, as our VNMT’s generator network. We observed that the generator networks of most VNMT models belong to the RNN family that are relatively less powerful as a translation model than the Transformer. 
We demonstrate the efficacy of our proposal on the German-English IWSLT’14 and EnglishGerman WMT’18 tasks, giving considerable improvements over strong non-latent Transformer baselines, and moderate improvements over Gaussian models. We further show that gains generalize to an out-of-domain condition and a simulated bimodal data condition. 2 VNMT with Normalizing Flows Background Let x and y be a source sentence and its translation, drawn from a corpus D. Our model seeks to find parameters θ that maximize the marginal of a latent-variable model pθ(y, Z | x) 7772 where Z ∈RD is a sentence-level latent code similar to (Zhang et al., 2016). VNMT models sidestep the marginalization by introducing variational distributions and seek to minimize this function (i.e., the Evidence Lower Bound or ELBO): X (x,y)∈D Eq(Z|x,y) [log pθ(y | x, Z)] −KL (q(Z | x, y) || p(Z | x)) , (1) where q(Z | x, y), p(Z | x) are the variational posterior and prior distribution of the latent codes, while p(y | x, Z) is a generator that models the generation of the translation conditioned on the latent code1. The ELBO is improved when the model learns a posterior distribution of latent codes that minimizes the reconstruction loss (the first term) while incurring a smaller amount of KL divergence penalty between the variational posterior and the prior (the second term). The majority of VNMT models design their variational distributions to model unimodal distribution via isotropic Gaussians with diagonal covariance, which is the simplest form of prior and approximate posterior distribution. This assumption is computationally convenient because it permits a closed-form solution for computing the KL term and facilitates end-to-end gradient-based optimization via the re-parametrization trick (Rezende and Mohamed, 2015). However, such a simple distribution may not be expressive enough to approximate the true posterior distribution, which could be non-Gaussian, resulting in a loose gap between the ELBO and the true marginal likelihood. Therefore, we propose to employ more flexible posterior distributions in our VNMT model, while keeping the prior a Gaussian. Normalizing Flows-based Posterior Rezende and Mohamed (2015) proposed Normalizing Flows (NF) as a way to introduce a more flexible posterior to Variational Autoencoder (VAE). The basic idea is to draw a sample, Z0, from a simple (e.g., Gaussian) probability distribution and to apply K invertible parametric transformation functions (fk) called flows to transform the sample. The final latent code is given by ZK = fK(...f2(f1(Z0))...) whose probability density function, qλ(ZK | x, y), 1In VAE terms, the posterior and prior distributions are referred to as the encoders, while the generator is referred to as the decoder. As these terms have other specific meaning in NMT, we avoid to use them in this paper. is defined via the change of variable theorem as follows: q0(Z0 | x, y) K Y k=1 det ∂fk(Zk−1; λk(x, y)) ∂Zk−1 −1 , where λk refers to the parameters of the k-th flow with λ0 corresponds to the parameters of a base distribution. In practice, we can only consider transformations, whose determinants of Jacobians (the second term) are invertible and computationally tractable. For our model, we consider several NFs, namely planar flows (Rezende and Mohamed, 2015), Sylvester flows (van den Berg et al., 2018) and affine coupling layer (Dinh et al., 2017), which have been successfully applied in computer vision tasks. 
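The change-of-variables expression above translates directly into code: the log-density of the transformed code is the base log-density minus the accumulated log-determinants of the flow Jacobians. A minimal sketch, assuming each flow layer returns both its output and its log |det Jacobian|; the names are ours.

```python
import torch

def flow_log_prob(z0, base_log_prob, flows):
    """
    log q(Z_K) for Z_K = f_K(...f_1(Z_0)...):
        log q(Z_K) = log q_0(Z_0) - sum_k log |det dF_k / dZ_{k-1}|
    `flows` is a list of callables mapping z -> (z_new, log_det_jacobian).
    """
    z, log_q = z0, base_log_prob(z0)
    for flow in flows:
        z, log_det = flow(z)
        log_q = log_q - log_det
    return z, log_q
```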
Planar flows (PF) applies this function: fk(Z; λk(x, y)) = Z + u · tanh(wT Z + b), where λk = {u, w ∈RD, b ∈R}. Planar flows perform contraction or expansion to the direction perpendicular to the (wT Z + b) hyperplane. Sylvester flows (SF) applies this function: fk(Z; λk(x, y)) = Z + A · tanh(BZ + b), where λk = {A, B ∈RM×D, b ∈RM} and M is the number of hidden units. Planar flows are a special case of Sylvester flows where M = 1. In our experiments, we consider the orthogonal Sylvester flows (van den Berg et al., 2018), whose parameters are matrices with M orthogonal columns. Meanwhile, the affine coupling layer (CL) first splits Z into Zd1, Zd2 ∈RD/2 and applies the following function: fk(Zd1; λk(x, y)) = Zd1, fk(Zd2; λk(x, y, Zd1)) = Zd2 ⊙exp(sk) + tk, where it applies identity transform to Zd1 and applies a scale-shift transform to Zd2 according to λk = {sk, tk}, which are conditioned on Zd1, x and y. CL is less expressive than PF and SF, but both sampling and computing the probability of arbitrary samples are easier. In practice, we follow (Dinh et al., 2017) to switch Zd1 and Zd2 alternately for subsequent flows. As we adopt the amortized inference strategy, the parameters of these NFs are data-dependent. In our model, they are the output of 1-layer linear map 7773 with inputs that depend on x and y. Also, as the introduction of normalizing flows no longer offers a simple closed-form solution, we modify the KL term in Eq. 1 into: Eqλ(Z|x,y) [log qλ(Z | x, y) −log pψ(Z | x)] where we estimate the expectation w.r.t. q(ZK|x; λ) via L Monte-Carlo samples. We found that L = 1 is sufficient, similar to (Zhang et al., 2016). To address variable-length inputs, we use the average of the embeddings of the source and target tokens via a mean-pooling layer, i.e., meanpool(x) and meanpool(y) respectively. Transformer-based Generator We incorporate the latent code to the Transformer model by mixing the code into the output of the Transformer decoder’s last layer (hj) as follows: gj = δ([hj; Z]), hj = (1 −gj) ∗hj + gj ∗Z where gj controls the latent code’s contribution, and δ(·) is the sigmoid function. In the case of the dimension of the latent code (D) doesn’t match the dimension of hj, we apply a linear projection layer. Our preliminary experiments suggest that Transformer is less likely to ignore the latent code in this approach compared to other approaches we explored, e.g., incorporating the latent code as the first generated token as used in (Zhang et al., 2016). Prediction Ultimately, we search for the most probable translation (ˆy) given a source sentence (x) through the evidence lower bound. However, sampling latent codes from the posterior distribution is not straightforward, since the posterior is conditioned on the sentence being predicted. Zhang et al. (2016) suggests taking the prior’s mean as the latent code. Unfortunately, as our prior is a Gaussian distribution, this strategy can diminish the benefit of employing normalizing flows posterior. Eikema and Aziz (2018) explore two strategies, namely restricting the conditioning of the posterior to x alone (dropping y) and introducing an auxiliary distribution, r(Z|x), from which the latent codes are drawn. They found that the former is more accurate with the benefit of being simpler. This is confirmed by our preliminary experiments. We opt to adopt this strategy and use the mean of the posterior as the latent code at prediction time. 
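As one example of such a layer, here is a sketch of the planar flow defined above, including its log-determinant log|1 + uᵀψ(z)| with ψ(z) = (1 − tanh²(wᵀz + b))w; it plugs into the `flow_log_prob` sketch earlier. For brevity we hold u, w, b as module parameters, whereas in the paper the flow parameters are produced by a linear map from mean-pooled source and target embeddings, and we omit the reparameterisation that guarantees invertibility of u.

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar flow step f(z) = z + u * tanh(w^T z + b)."""

    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):                                  # z: (batch, dim)
        pre = z @ self.w + self.b                          # w^T z + b, shape (batch,)
        f_z = z + self.u * torch.tanh(pre).unsqueeze(-1)
        # psi(z) = (1 - tanh^2(w^T z + b)) w ; log|det J| = log|1 + u^T psi(z)|
        psi = (1.0 - torch.tanh(pre) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1.0 + psi @ self.u) + 1e-8)
        return f_z, log_det
```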
Mitigating Posterior Collapse As reported by previous work, VNMT models are prone to posterior collapse, where the training fails to learn informative latent code as indicated by the value of KL term that vanishes to 0. This phenomenon is often attributed to the strong generator (Alemi et al., 2018) employed by the models, in which case, the generator’s internal cells carry sufficient information to generate the translation. Significant research effort has been spent to weaken the generator network. Mitigating posterior collapse is crucial for our VNMT model as we employ the Transformer, an even stronger generator that comes with more direct connections between source and target sentences (Bahuleyan et al., 2018). To remedy these issues, we adopt the βC-VAE (Prokhorov et al., 2019) and compute the following modified KL term: β |KL −C| where β is the scaling factor while C is a rate to control the KL magnitude. When C > 0, the models are discouraged from ignoring the latent code. In our experiments, we set C = 0.1 and β = 1. Additionally, we apply the standard practice of word dropping in our experiments. Related Work VNMT comes in two flavors. The first variant models the conditional probability akin to a translation model, while the second one models the joint probability of the source and target sentences. Our model adopts the first variant similar to (Zhang et al., 2016; Su et al., 2018; Pagnoni et al., 2018), while (Eikema and Aziz, 2018; Shah and Barber, 2018) adopt the second variant. The majority of VNMT models employ RNN-based generators and assume isotropic Gaussian distribution, except for (McCarthy et al., 2019) and (Przystupa et al., 2019). The former employs the Transformer architecture but assumes a Gaussian posterior, while the latter employs the normalizing flows posterior (particularly planar flows) but uses an RNN-based generator. We combine more sophisticated normalizing flows and the more powerful Transformer architecture to produce state-of-the-art results. 3 Experimental Results Experimental Setup We integrate our proposal into the Fairseq toolkit (Ott et al., 2019; Gehring et al., 2017a,b). We report results on the IWSLT’14 German-English (De-En) and the 7774 WMT’18 English-German (En-De) tasks. For IWSLT’14, we replicate Wu et al. (2019); Edunov et al. (2018)’s setup with 160K training sentences and a 10K joint BPE vocabulary, while for WMT’18, we replicate Edunov et al. (2018)’s setup with 5.2M training sentences and a 32K joint BPE vocabulary. For WMT experiments, we report the accuracy using detokenized SacreBLEU (Post, 2018) to facilitate fair comparison with other published results. Note that tokenized BLEU score is often higher depending on the tokenizer, thus not comparable. We apply KL annealing schedule and token dropout similar to (Bowman et al., 2016), where we set the KL annealing to 80K updates and drop out 20% target tokens in the IWSLT and 10% in the WMT experiments. The encoder and decoder of our Transformer generator have 6 blocks each. The number of attention heads, embedding dimension, and inner-layer dimensions are 4, 512, 1024 for IWSLT; and 16, 1024, 4096 for WMT. The WMT setup is often referred to as the Transformer Big. To our knowledge, these architectures represent the best configurations for our tasks. We set the latent dimension to D = 128, which is projected using a 1-layer linear map to the embedding space. We report decoding results with beam=5. For WMT experiments, we set the length penalty to 0.6. 
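The βC-VAE penalty described above is a one-line modification of the single-sample Monte-Carlo KL estimate. The sketch below uses the paper's defaults β = 1 and C = 0.1; the function name is our own.

```python
import torch

def beta_c_kl(log_q_z, log_p_z, beta=1.0, c=0.1):
    """beta * |KL_hat - C|, where KL_hat = E_q[log q(Z|x,y) - log p(Z|x)] is
    estimated from the sampled latent codes. Setting C > 0 discourages the
    posterior from collapsing onto the prior."""
    kl_estimate = (log_q_z - log_p_z).mean()
    return beta * torch.abs(kl_estimate - c)
```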
For all experiments with NF-based posterior, we employ flows of length 4, following the results of our pilot study. In-Domain Results We present our IWSLT results in rows 1 to 6 of Table 1. The accuracy of the baseline Transformer model is reported in row (1), which matches the number reported by Wu et al. (2019). In row (2), we report a static Z experiment, where Z = meanpool(x). We design this experiment to isolate the benefits of token dropping and utilizing average source embedding as context. As shown, the static Z provides +0.8 BLEU point gain. In row (3), we report the accuracy of our VNMT baseline when the approximate posterior is a Gaussian, which is +1.3 BLEU point from baseline or +0.5 point from the static Z, suggesting the efficacy of latent-variable modeling. We then report the accuracy of different variants of our model in rows (4) to (6), where we replace the Gaussian posterior with a cascade of 4 PF, SF and CL, respectively. For SF, we report the result with M = 8 orthogonal columns in row (5). As shown, these flows modestly add +0.2 to +0.3 points. It is worth noticing that the improvement introduces only around 5% additional parameters. System #params BLEU 1 Transformer IWSLT 42.9M 34.5 2 + static Z 42.9M 35.3 3 + Z ∼Gaussian 43.6M 35.8 4 + Z ∼4 x PF 44.2M 36.1 5 + Z ∼4 x SF (M=8) 45.9M 36.0 6 + Z ∼4 x CL 44.3M 36.1 7 (1) + distilled 42.9M 34.9 8 (6) + distilled 44.3M 36.6 9 (Edunov et al., 2018) 29.0 10 Transformer Big 209.1M 28.9 11 + static Z 209.1M 29.0 12 + Z ∼Gaussian 210.5M 29.1 13 + Z ∼4 x PF 211.6M 29.3 14 +Z ∼4 x SF (M=8) 215.3M 29.5 15 +Z ∼4 x CL 210.6M 29.2 16 (10) + distilled 209.1M 29.2 17 (14) + distilled 215.3M 29.9 Table 1: The translation accuracy on the De-En IWSLT’14 task (rows 1-8), the En-De WMT’18 task (rows 10-17). Each task’s best results in the in-domain setting are italicized, while the results with added distilled data are in bold. We report our WMT results that use the Transformer Big architecture in rows (10) to (15). For comparison, we quote the state-of-the-art result for this dataset from Edunov et al. (2018) in row (9), where the SacreBLEU score is obtained from Edunov (2019). As shown, our baseline result (row 10) is on par with the state-of-the-art result. The WMT results are consistent with the IWSLT experiments, where our models (rows 13-15) significantly outperform the baseline, even though they differ in terms of which normalizing flows perform the best. The gain over the VNMT baseline is slightly higher, perhaps because NF is more effective in larger datasets. In particular, we found that SF and PF perform better than CL, perhaps due to their simpler architecture, i.e., their posteriors are conditioned only on the source sentence, and their priors are uninformed Gaussian. Row (11) shows that the static Z’s gain is minimal. In row (14), our best VNMT outperforms the state-of-the-art Transformer Big model by +0.6 BLEU while adding only 3% additional parameters. 7775 Simulated Bimodal Data We conjecture that the gain partly comes from NF’s ability to capture non-Gaussian distribution. To investigate this, we artificially increase the modality of our training data, i.e., forcing all source sentences to have multiple translations. We perform the sequence-level knowledge distillation (Kim and Rush, 2016) with baseline systems as the teachers, creating additional data referred to as distilled data. We then train systems on this augmented training data, i.e., original + distilled data. 
Rows (7) and (16) show that the baseline systems benefit from the distilled data. Rows (8) and (17) show that our VNMT models gain more benefit, resulting in +2.1 and +0.9 BLEU points over non-latent baselines on IWSLT and WMT tasks respectively. Simulated Out-of-Domain Condition We investigate whether the in-domain improvement carries to out-of-domain test sets. To simulate an out-of-domain condition, we utilize our existing setup where the domain of the De-En IWSLT task is TED talks while the domain of the En-De WMT task is news articles. In particular, we invert the IWSLT De-En test set, and decode the English sentences using our baseline and best WMT En-De systems of rows (10) and (14). For this inverted set, the accuracy of our baseline system is 27.9, while the accuracy of our best system is 28.8, which is +0.9 points better. For reference, the accuracy of the Gaussian system in row (11) is 28.2 BLEU. While more rigorous out-of-domain experiments are needed, this result gives a strong indication that our model is relatively robust for this out-of-domain test set. Translation Analysis To better understand the effect of normalizing flows, we manually inspect our WMT outputs and showcase a few examples in Table 2. We compare the outputs of our best model that employs normalizing flows (VNMT-NF, row 14) with the baseline non-latent Transformer (row 10) and the baseline VNMT that employs Gaussian posterior (VNMT-G, row 12). As shown, our VNMT model consistently improves upon gender consistency. In example 1, the translation of the interior decorator depends on the gender of its cataphora (her), which is feminine. While all systems translate the cataphora correctly to ihrem, the baseline and VNMT-G translate the Example 1 Source In her book , the interior decorator presents 17 housing models for independent living in old age . Reference In ihrem Buch stellt die Innenarchitektin 17 Wohnmodelle f¨ur ein selbstbestimmtes Wohnen im Alter vor . Non-latent Baseline In ihrem Buch pr¨asentiert der Innenarchitekt 17 Wohnmodelle f¨ur ein unabh¨angiges Leben im Alter . VNMT-G In ihrem Buch stellt die der Innenarchitekt 17 Wohnmodelle f¨ur ein selbstbestimmtes Wohnen im Alter vor . VNMT-NF In ihrem Buch pr¨asentiert die Innendekoratorin 17 Wohnmodelle f¨ur ein unabh¨angiges Leben im Alter . Example 2 Source Even though she earns S 3, 000( 2,400 ) a month as an administrator and her husband works as well , the monthly family income is insufficient , she says . Reference Obwohl sie jeden Monat 3.000 SingapurDollar (ca 1.730 Euro ) als Verwaltungsmitarbeiterin verdiene –truncated– Non-latent Baseline Obwohl sie pro Monat 3.000 S $ ( 2.400 $ ) als Verwalter verdient und auch ihr Mann arbeitet , ist das –truncated– VNMT-G Obwohl sie jeden Monat 3.000 Singapur Dollar ( ca 1.730 Euro ) als Verwaltungsmitarbeiterin –truncated– VNMT-NF Obwohl sie S $ 3.000 ( $ 2.400 ) pro Monat als Administratorin verdient und ihr Mann auch –trunctated– Table 2: Translation examples with different gender consistency. Inconsistent, consistent translations and source words are in red, orange, blue respectively. phrase to its masculine form. In contrast, the translation of our VNMT-NF produces the feminine translation, respecting the gender agreement. In example 2, only VNMT-NF and VNMT-G produce gender consistent translations. 4 Discussions and Conclusions We present a Variational NMT model that outperforms a strong state-of-the-art non-latent NMT model. 
We show that the gain modestly comes from the introduction of a family of flexible distribution based on normalizing flows. We also demonstrate the robustness of our proposed model in an increased multimodality condition and on a simulated out-of-domain test set. We plan to conduct a more in-depth investigation into actual multimodality condition with highcoverage sets of plausible translations. We conjecture that conditioning the posterior on the target sentences would be more beneficial. Also, we plan to consider more structured latent variables beyond modeling the sentence-level variation as well as to apply our VNMT model to more language pairs. 7776 References Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. 2018. Fixing a broken ELBO. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, pages 159–168. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Representation Learning (ICLR), San Diego, USA. Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, and Pascal Poupart. 2018. Variational attention for sequence-to-sequence models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1672–1682, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Rianne van den Berg, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. Sylvester normalizing flows for variational inference. In Conference on Uncertainty in Artificial Intelligence (UAI), Tel Aviv-Yafo, Israel. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. 2017. Density estimation using Real NVP. Sergey Edunov. 2019. https://github. com/pytorch/fairseq/issues/506# issuecomment-464411433. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 489–500, Brussels, Belgium. Bryan Eikema and Wilker Aziz. 2018. Auto-encoding variational neural machine translation. arXiv: 1807.10564v1. Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. 2017a. A convolutional encoder model for neural machine translation. In Conference of the Association for Computational Linguistics (ACL), Vancouver, Canada. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017b. Convolutional sequence to sequence learning. In International Conference on Machine Learning (ICML), Sydney, Australia. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, Texas. Association for Computational Linguistics. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Arya D. McCarthy, Xian Li, Jiatao Gu, and Ning Dong. 2019. Improved variational neural machine translation by promoting mutual information. 
https: //arxiv.org/abs/1909.09237. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. Artidoro Pagnoni, Kevin Liu, and Shangyan Li. 2018. Conditional variational autoencoder for neural machine translation. CoRR, abs/1812.04405. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mohammad Taher Pilehvar, and Nigel Collier. 2019. On the importance of the Kullback-Leibler divergence term in variational autoencoders for text generation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 118–127, Hong Kong. Association for Computational Linguistics. Michael Przystupa, Mark Schmidt, and Muhammad Abdul-Mageed. 2019. Investigating the impact of normalizing flows on latent variable machine translation. In Proceedings of The Workshop on Invertible Neural Nets and Normalizing Flows. International Conference of Machine Learning. Danilo Rezende and Shakir Mohamed. 2015. Variational inference with normalizing flows. In International Conference on Machine Learning (ICML), pages 1530–1538, Lille, France. Harshil Shah and David Barber. 2018. Generative neural machine translation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1346–1355. Curran Associates, Inc. Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications 7777 of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5488–5495. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Neural Information Processing Systems Conference (NIPS), pages 5998–6008, Long Beach, USA. Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations (ICLR), New Orleans, USA. Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 521–530, Austin, Texas. Association for Computational Linguistics. A Word dropout We investigate the effect of different dropout rate and summarize the results in Table 3. In particular, we take the VNMT baseline with Gaussian latent variable for IWSLT (row 3 in Table 1) and for WMT (row 12 in Table 1). As shown, word dropout is important for both setup but it is more so for IWSLT. It seems that tasks with low resources benefit more from word dropout. We also observe that above certain rate, word dropout hurts the performance. Dropout rate 0.0 0.1 0.2 0.3 IWSLT 34.4 35.7 35.8 35.6 WMT 29.0 29.1 28.8 28.7 Table 3: Results of different dropout rate for IWSLT and WMT setup. The best results are in bold. 
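Appendix A studies the target word-dropout rate; the mechanism itself, following Bowman et al. (2016), corrupts a fraction of decoder-input tokens so that the generator leans more on the latent code. Below is a sketch under the assumption, not stated explicitly in the paper, that dropped tokens are replaced by an <unk> id rather than deleted; the function name is ours.

```python
import torch

def word_dropout(target_ids, rate, unk_id, pad_id):
    """Replace a fraction `rate` of non-padding target tokens with <unk>."""
    if rate <= 0.0:
        return target_ids
    keep = torch.rand_like(target_ids, dtype=torch.float) >= rate
    dropped = torch.where(keep, target_ids, torch.full_like(target_ids, unk_id))
    return torch.where(target_ids == pad_id, target_ids, dropped)
```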
B Latent Dimension We report the results of varying the dimension of latent variable (D) in Table 4. For this study, we use the VNMT baseline with Gaussian latent variable in IWSLT condition (row 3 in Table 1) . Our experiments suggest that the latent dimension between 64 and 128 is optimal. The same conclusion holds for the WMT condition. D 8 16 32 64 128 256 BLEU 35.6 35.5 35.4 35.7 35.8 35.4 Table 4: Results of different dropout rate for IWSLT. The best results are in bold. C Normalizing Flow Configuration In the Experimental Results section, we report the accuracy for our models with 4 flows. In Table 5, we conduct experiments varying the number of flows for the IWSLT condition. Our baseline (num flows=0) is an NMT model with word dropout, which performs on par with the static Z experiment reported in Table 1’s row 3. These results suggest that increasing the number of flows improves accuracy, but the gain diminishes after 4 flows. The results are consistent for all normalizing flows that we considered. We also conduct experiments with employing more flows, but unfortunately, we observe either unstable training or lower accuracy. Num PF SF CL Flows (M=8) 0 35.3 1 35.8 35.6 35.8 2 35.7 35.5 35.8 3 36.0 35.9 35.7 4 36.1 36.0 36.1 5 35.9 36.1 35.9 6 35.8 36.0 35.9 Table 5: Translation accuracy of VNMT models employing various number of flows in the IWSLT condition. The best results are in bold. In Table 6, we conduct experiments varying the number of orthogonal columns (M) in our Sylvester normalizing flows (SF) experiments. As shown, increasing M improves the accuracy up to M = 24. We see no additional gain from employing more additional orthogonal columns beyond 24. In Table 1, we report M = 8, because it introduces the least number of additional parameters. M 2 4 8 16 24 32 BLEU 35.7 35.5 36.0 36.0 36.2 35.9 Table 6: Results of different number of orthogonal columns for SF. The best results are in bold.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7778–7790 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7778 The Paradigm Discovery Problem Alexander Erdmann@,D Micha Elsner@ Shijie WuZ Ryan CotterellQ,6 Nizar HabashD @Ohio State University DNew York University Abu Dhabi ZJohns Hopkins University 6University of Cambridge QETH Zürich {ae1541,nizar.habash}@nyu.edu [email protected] [email protected] [email protected] Abstract This work treats the paradigm discovery problem (PDP)—the task of learning an inflectional morphological system from unannotated sentences. We formalize the PDP and develop evaluation metrics for judging systems. Using currently available resources, we construct datasets for the task. We also devise a heuristic benchmark for the PDP and report empirical results on five diverse languages. Our benchmark system first makes use of word embeddings and string similarity to cluster forms by cell and by paradigm. Then, we bootstrap a neural transducer on top of the clustered data to predict words to realize the empty paradigm slots. An error analysis of our system suggests clustering by cell across different inflection classes is the most pressing challenge for future work. Our code and data are available at https://github.com/ alexerdmann/ParadigmDiscovery. 1 Introduction In childhood, we induce our native language’s morphological system from unannotated input. For instance, we learn that ring and rang belong to the same inflectional paradigm. We also learn that rings and bangs belong to the same cell, i.e., they realize the same morphosyntactic properties 3.SG.PRES, but in different paradigms. Acquiring such paradigmatic knowledge enables us to produce unseen inflectional variants of new vocabulary items, i.e. to complete morphological paradigms. Much work has addressed this task, which Ackerman et al. (2009) call the paradigm cell filling problem (PCFP),1 but few have discussed inducing paradigmatic knowledge from scratch, which we call the paradigm discovery problem (PDP).2 1In the NLP literature, this task is called morphological reinflection or morphological inflection generation (Cotterell et al., 2016a); this is only a difference in nomenclature. 2Elsner et al. (2019) call the task the paradigm cell discovery problem; we drop cell to distinguish our task from As an unsupervised task, the PDP poses challenges for modeling and evaluation and has yet to be attempted in its full form (Elsner et al., 2019). However, we contend there is much to be gained from formalizing and studying the PDP. There are insights for cognitive modeling to be won (Pinker, 2001; Goldwater, 2007) and intuitions on combating sparse data for language generation (King and White, 2018) to be accrued. Unsupervised language processing also has natural applications in the documentation of endangered languages (Zamaraeva et al., 2019) where a lot of annotated data is never likely to exist. Our formalization of the PDP offers a starting point for future work on unsupervised morphological paradigm completion. Our paper presents a concrete formalization of the PDP. Then, as a baseline for future work, we introduce a heuristic benchmark system. Our benchmark system takes an unannotated text corpus and a lexicon of words from the corpus to be analyzed. It first clusters the lexicon by cell and then by paradigm making use of distributional semantics and string similarity. 
Finally, it uses this clustering as silver-standard supervision to bootstrap a neural transducer (Vaswani et al., 2017) that generates the desired target inflections. That is, the model posits forms to realize unoccupied cell slots in each proposed paradigm. Even though our benchmark system models only one part of speech (POS) at a time, our framework extends to the full PDP to support future, more intricate systems. We propose two separate metrics to evaluate both the clustering of attested forms into paradigms and cells and the prediction of unseen inflected forms. Our metrics handle non-canonical morphological behavior discussed in theoretical literature (Corbett, 2005) and extend to the full PDP. For three of the five languages we consider, our benchmark system predicts unattested inflections one of its subtasks which Boyé and Schalchli (2019) call the paradigm cell finding problem (see §2.2). 7779 of lexicon forms with accuracy within 20% of a fully supervised system. However, our analysis suggests clustering forms into cells consistently across paradigms is still a very pressing challenge. 2 Previous Work in Morphology This section couches our work on the PDP in terms of previous trends in morphological modeling. 2.1 Unsupervised Morphology Much work on unsupervised morphological modeling focuses on segmentation (Gaussier, 1999; Goldsmith, 2001; Creutz and Lagus, 2005; Narasimhan et al., 2015; Bergmanis and Goldwater, 2017; Xu et al., 2018). While morphological segmenters can distinguish real from spurious affixes (e.g., bring ̸= br + ing) with high accuracy, they do not attempt to solve the PDP. They do, however, reveal which forms take the same affixes (e.g., walked, talked), not which forms occupy the same cell (e.g., walked, brought). Indeed, they explicitly struggle with irregular morphology. Segmenters also cannot easily model non-concatenative phenomena like ablaut, vowel harmony and templatic processes. Two works have proposed tasks which can be considered alternative formulations of the PDP, using either minimal or indirect supervision to bootstrap their models. We discuss each in turn. First, Dreyer and Eisner (2011) use a generative model to cluster forms into paradigms and cells with a Bayesian non-parametric mixture of weighted finite-state transducers. They present a PDP framework which, in principle, could be fully unsupervised, but their model requires a small seed of labeled data to get key information like the number of cells distinguished, making it less relevant cognitively. In contrast, our task is not directly supervised and focuses on distributional context. Second, contemporaneous to our work, Jin et al. (2020) propose a similar framework for SIGMORPHON 2020’s shared task on unsupervised morphological paradigm completion. Given only a small corpus and lexicon of verbal lemmata, participating systems must propose full paradigms for each lemma. By contrast, our framework does not reveal how many paradigms should be generated, nor do we privilege a specific form as the lemma, but we do use a larger lexicon of exclusively verbal or nominal forms. Their proposed baseline uses distributional context for POS tagging and features, but does not train embeddings as the corpus is small. 2.2 Subtasks of Paradigm Discovery A few works address subtasks of the PDP. Erdmann and Habash (2018) learn paradigm membership from raw text, but do not sort paradigms into cells. 
Boyé and Schalchli (2019) discuss the paradigm cell finding problem, identifying the cell (but not paradigm) realized by a given form. Lee (2015) clusters forms into cells across inflection classes. Beniamine et al. (2018) group paradigms into inflection classes, and Eskander et al. (2013) induce inflection classes and lemmata from cell labels. 2.3 The Paradigm Cell Filling Problem The PCFP is the task of predicting unseen inflected forms given morphologically labeled input. PCFP models can guess a word’s plural having only seen its singular, but the child must bootstrap morphological knowledge from scratch, first learning that singular–plural is a relevant distinction. Thus, the PDP must be at least partially solved before the PCFP can be attempted. Yet, as a supervised task, the PCFP is more easily studied, and has received much attention on its own, especially from the word-and-paradigm camp of morphological theory. Some cognitive works suggest the PCFP cannot be too difficult for any language (Dale et al., 1998; Ackerman and Malouf, 2013, 2015; Blevins et al., 2017; Cotterell et al., 2019). Neural models can test and extend such proposals (Cotterell et al., 2018a; Silfverberg and Hulden, 2018). A related vein of work discusses how speakers inflect nonce words (Berko, 1958; Plunkett and Juola, 1999; Yang, 2015), e.g., is the past tense of sping, spinged or spung? There is a long tradition of modeling past-tense generation with neural networks (Rumelhart and McClelland, 1986; Kirov and Cotterell, 2018; Corkery et al., 2019). On the engineering side, Durrett and DeNero (2013) inspired much recent work, which has since benefited from large inflectional datasets (Kirov et al., 2018) and advances in neural sequence modeling (Bahdanau et al., 2015). Shared tasks have drawn extra attention to the PCFP (Cotterell et al., 2016a, 2017, 2018c; McCarthy et al., 2019). 3 The Paradigm Discovery Problem Paradigm discovery is a natural next step in computational morphology, building on related minimally or indirectly supervised works (§2.2) to bridge the gap between unsupervised traditions (§2.1) and supervised work on the PCFP (§2.3). In the PCFP, 7780 Corpus The cat watched me watching it . I followed the show but she had n’t seen it . Let ’s see who follows your logic . Lexicon watching, seen, follows, watched, followed, see Gold Grid cell 1 cell 2 cell 3 cell 4 cell 5 paradigm 1 «watch» «watches» watching watched watched paradigm 2 «follow» follows «following» followed followed paradigm 3 see «sees» «seeing» «saw» seen Table 1: An example corpus, lexicon, and gold analyses. All lexicon entries appear in the corpus and, for our experiments, they will all share a POS, here, verb. The grid reflects all possible analyses of syncretic forms (e.g., walked, followed), even though these only occur in the corpus as PST realizations, like saw in Cell 4, not as PST.PTCP, like seen in Cell 5. Bracketed «forms» are paradigm mates of attested forms, not attested in the lexicon. each input form is labeled with its morphosyntactic property set, i.e., the cell in the paradigm which it realizes, and its lexeme, i.e., the paradigm of related forms to which it belongs. By contrast, to solve the PDP, unlabeled input forms must be assigned cells and paradigms. This task requires learning what syntactic and semantic factors distinguish cells, what combinations of cells can cooccur in a paradigm, and what aspects of a surface form reflect its paradigm and its cell, respectively. 
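To give a flavour of what the first step of such a system can look like in practice (clustering lexicon forms by cell using distributional and string cues, as in the benchmark sketched in Section 1), here is an illustrative simplification. It is not the benchmark system itself: the features, the clustering algorithm, and especially the assumption that the number of cells `k_cells` is known in advance are all ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_forms_by_cell(forms, embeddings, k_cells, suffix_len=3):
    """Toy cell clustering: concatenate each form's word embedding with a
    one-hot feature for its final character n-gram (a crude proxy for an
    inflectional exponent) and run k-means over the combined features."""
    suffixes = sorted({form[-suffix_len:] for form in forms})
    suffix_index = {s: i for i, s in enumerate(suffixes)}
    features = []
    for form in forms:
        one_hot = np.zeros(len(suffixes))
        one_hot[suffix_index[form[-suffix_len:]]] = 1.0
        features.append(np.concatenate([embeddings[form], one_hot]))
    labels = KMeans(n_clusters=k_cells, n_init=10).fit_predict(np.stack(features))
    return {form: int(label) for form, label in zip(forms, labels)}
```

A real system must additionally induce the number of cells and handle forms whose distributional and orthographic cues disagree, e.g. irregular inflections.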
3.1 Task Setup Table 1 provides an overview of our PDP setup. The first two rows show input data: an unannotated corpus and a lexicon of forms attested in that corpus. Given only these data, the task is to output a grid such that (i) all lexicon forms and all their (potentially unseen) inflectional variants appear in the grid, (ii) all forms appearing in the same column realize the same morphosyntactic cell, and (iii) all forms appearing in the same row belong to the same paradigm. Unattested «forms» to be generated are depicted in brackets in Table 1’s gold grid, which shows the ideal output of the system. Our setup permits multiple forms realizing the same slot, i.e., a specific cell in a specific paradigm, a single form realizing multiple slots, and unrealizable empty slots. This supports overabundance (Thornton, 2010, 2011), defectiveness (Sims, 2015), and syncretism (Blevins, 1995; Cotterell et al., 2018b). See Corbett (2005) for more on these phenomena. Experimentally, we constrain the PDP by limiting the lexicon to forms from one POS, but our formalization is more general. 3.2 Data for the PDP For a given language and POS, we create a corpus, lexicon, and gold grid based on a Universal Dependencies (UD) corpus (Nivre et al., 2016). At a high level, the corpus includes raw, non-UD sentences, and UD sentences stripped of annotations. The lexicon includes all forms occurring in the UD sentences with the specified POS (potentially including variant spellings and typographical errors). The gold grid consists of full paradigms for every word which co-occurs in UD and the UniMorph lexicon (Kirov et al., 2018) with a matching lemma–cell analysis; this is similar to the corpus created by Vylomova et al. (2019). As a system does not know which lexicon forms will be evaluated in the gold grid, it must model the entire lexicon, which should contain a realistic distribution over rare words and inflection classes having been directly extracted from distributional data (Bybee, 2003; Lignos and Yang, 2018). To ensure the gold grid is reasonably clean, we take all word–lemma–feature tuples from the UD portion of the corpus matching the specified POS and convert the features to a morphosyntactic cell identifier compatible with UniMorph representation as in McCarthy et al. (2018).3 Then we check which word–lemma–cell tuples also occur in UniMorph. For each unique lemma in this intersection, the full paradigm is added as a row to the gold grid. To filter typos and annotation discrepancies, we identify any overabundant slots, i.e., slots realized by multiple forms, and remove all but the most frequently attested realization in UD. While some languages permit overabundance (Thornton, 2010), it often indicates typographical or annotation errors 3Aligning UniMorph and UD requires removing diacritics in (Latin and Arabic) UniMorph corpora to match UD. This can obscure some morphosyntactic distinctions but is more consistent with natural orthography in distributional data. The use of orthographic data for morphological tasks is problematic, but standard in the field, due to scarcity of phonologically transcribed data (Malouf et al., 2020). 7781 Predictions cell 1 cell 2 cell 3 cell 4 paradigm 1 watched watching «watches» «watch» paradigm 2 followed «following» follows «follow» paradigm 3 «seed» «seeing» «sees» see paradigm 4 «seened» «seening» «seens» seen Table 2: Toy predictions made from the corpus and lexicon in Table 1, to be evaluated against the toy gold grid. 
Again, bracketed «forms» are those not occurring in the lexicon. in UD and UniMorph (Gorman et al., 2019; Malouf et al., 2020). Unlike the gold grid, the lexicon retains overabundant realizations, requiring systems to handle such phenomena. For each language, the raw sentences used to augment the corpus add over 1 million additional words. For German and Russian, we sample sentences from OpenSubtitles (Lison and Tiedemann, 2016), for Latin, the Latin Library (Johnson et al., 2016), and for English and Arabic, Gigaword (Parker et al., 2011a,b). Supplementary sentences are preprocessed via Moses (Koehn et al., 2007) to split punctuation, and, for supported languages, clitics. Table 3 shows corpus and lexicon sizes. 3.3 Metrics A system attemping the PDP is expected to output a morphologically organized grid in which rows and columns are arbitrarily ordered, but ideally, each row corresponds to a gold paradigm and each column to a gold cell. Aligning rows to paradigms and columns to cells is non-trivial, making it difficult to simply compute accuracy over gold grid slots. Furthermore, cluster-based metrics (Rosenberg and Hirschberg, 2007) are difficult to apply as forms can appear in multiple columns or rows. Thus, we propose novel metrics that are lexical, based on analogical relationships between forms. We propose a set of PDP metrics, to measure how well organized lexicon forms are in the grid, and a set of PCFP metrics, to measure how well the system anticipates unattested inflectional variants. All metrics support non-canonical phenomena such as defective paradigms and overabundant slots. 3.3.1 PDP Metrics A form f’s paradigm mates are all those forms that co-occur in at least one paradigm with f. f’s paradigm F-score is the harmonic mean of precision and recall of how well we predicted its paradigm mates when viewed as an information retrieval problem (Manning et al., 2008). We macroaverage all forms’ paradigm F-scores to compute Fpar. Qualitatively, Fpar tells us how well we cluster words that belong to the same paradigm. A form f’s cell mates are all those forms that cooccur in at least one cell with f. f’s cell F-score is the harmonic mean of precision and recall of how well we predicted its cell mates. As before, we macro-average all forms’ cell F-scores to compute Fcell. Qualitatively, Fcell tells us how well we cluster words that belong to the same cell. Finally, we propose the Fgrid metric as the harmonic mean of Fpar and Fcell. Fgrid is a single number that reflects a system’s ability to cluster forms into both paradigms and cells. Because we designate separate PCFP metrics to evaluate gold grid forms not in the lexicon, we restrict f’s mates to only include forms that occur in the lexicon. Consider the proposed grid in Table 2. There are 6 lexicon forms in the gold grid. Starting with watched, we correctly propose its only attested paradigm mate, watching. Thus, watched’s paradigm F-score is 100%. For see, we propose no attested paradigm mates, but we should have proposed seen. 0 correct out of 1 true paradigm mate from 0 predictions results in an F-score of 0% for seen. We continue like this for all 6 attested forms in the gold grid and average their scores to get Fpar. As for Fcell, we correctly predict that watched’s only cell mate is followed, yielding an F-score of 100%. However, we incorrectly predict that see has a cell mate, seen, yielding an F-score of 0%; we average each word’s F-score to get Fcell; the harmonic mean of Fpar and Fcell gives us Fgrid. 
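The worked example above can be reproduced with a short script. The sketch below assumes a grid is represented as a list of paradigms, each a dict from an arbitrary cell index to a set of forms; it is our own minimal illustration of Fpar, Fcell, and Fgrid, not the released evaluation code, and the treatment of forms with no mates in either grid (scored as 1.0) is an assumption the text does not spell out.

def mates(grid, form, axis):
    """Forms sharing at least one paradigm (axis='row') or cell (axis='col') with form."""
    out = set()
    for row in grid:
        for col, forms in row.items():
            if form in forms:
                if axis == "row":   # paradigm mates: everything else in this row
                    out |= set().union(*row.values())
                else:               # cell mates: everything else in this column
                    out |= set().union(*(r.get(col, set()) for r in grid))
    return out - {form}

def macro_f(pred_grid, gold_grid, lexicon, axis):
    """Macro-averaged F-score over the lexicon forms appearing in the gold grid.

    `lexicon` is the set of attested forms; mates are restricted to it."""
    scores = []
    for f in lexicon:
        pred = mates(pred_grid, f, axis) & lexicon
        gold = mates(gold_grid, f, axis) & lexicon
        if not pred and not gold:
            scores.append(1.0)      # assumption: nothing to predict counts as correct
            continue
        tp = len(pred & gold)
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold) if gold else 0.0
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(scores) / len(scores)

def f_grid(pred_grid, gold_grid, lexicon):
    f_par = macro_f(pred_grid, gold_grid, lexicon, "row")
    f_cell = macro_f(pred_grid, gold_grid, lexicon, "col")
    return 2 * f_par * f_cell / (f_par + f_cell) if f_par + f_cell else 0.0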
While Fgrid handles syncretism, overabundance, defectiveness and mismatched grid dimensions, it is exploitable by focusing exclusively on the best attested cells realized by the most unique forms, since attested cells tend to exhibit a Zipfian distribution (Blevins et al., 2017; Lignos and Yang, 2018). Exploiting Fgrid in this manner propagates errors when bootstrapping to predict unattested forms and, thus, will be punished by PCFP metrics. 3.3.2 PCFP Metrics We cannot evaluate the PCFP as in supervised settings (Cotterell et al., 2016a) because proposed 7782 Lexicon Corpus UD Types Tokens Tokens Arabic 8,732 1,050,336 223,881 German 19,481 1,270,650 263,804 English 3,330 1,212,986 204,608 Latin 6,903 1,218,377 171,928 Russian 36,321 1,885,302 871,548 Table 3: Statistics regarding the input corpus and lexicon. UD tokens refers to tokens in the corpus originally extracted from UD sentences. cells and paradigms cannot be trivially aligned to gold cells and paradigms. Instead, we create a test set by sampling 2,000 four-way analogies from the gold grid. The first and second forms must share a row, as must the third and fourth; the first three forms must be attested and the fourth unattested, e.g., watched : watching :: seen : «seeing». From this test set and a proposed grid, we compute a strict analogy accuracy (An) metric and a more lenient lexicon expansion accuracy (LE) metric. An counts predictions as correct if all analogy directions hold in the proposed grid (i.e., watched, watching and seen, «seeing» share rows and watched, seen and watching, «seeing» share columns). LE counts predictions as correct if the unattested fourth form appears anywhere in the grid. That is, LE asks, for each gold form, if it was predicted in any slot in any paradigm. Like the PDP metrics, our PCFP metrics support syncretism, overabundance, defectiveness, etc. One can, however, exploit them by proposing a gratuitous number of cells, paradigms, and syncretisms, increasing the likelihood of completing analogies by chance, though this will reduce Fgrid. As both PDP and PCFP metrics can be exploited independently but not jointly, we argue that both types of metrics should be considered when evaluating an unsupervised system. 4 Building a Benchmark This section presents a benchmark system for proposing a morphologically organized grid given a corpus and lexicon. First, we cluster lexicon forms into cells. Then we cluster forms into paradigms given their fixed cell membership. To maintain tractability, clustering assumes a one-to-one mapping of forms to slots. Following cell and paradigm clustering, we predict forms to realize empty slots given one of the lexicon forms assigned to a cell in the same paradigm. This allows forms to appear in multiple slots, but does not support overabundance, defectiveness, or multi-word inflections. 4.1 Clustering into Cells We use a heuristic method to determine the number of cells and what lexicon forms to assign to each. Inspired by work on inductive biases in word embeddings (Pennington et al., 2014; Trask et al., 2015; Goldberg, 2016; Avraham and Goldberg, 2017; Tu et al., 2017), we train morphosyntactically biased embeddings on the corpus and use them to k-means cluster lexicon forms into cells. Following Erdmann et al. (2018), we emphasize morphosyntactically salient dimensions in embedding space by manipulating hyperparameters in fastText (Bojanowski et al., 2017). 
Specifically, to encourage grouping of morphologically related words, fastText computes a word’s embedding as the sum of its subword embeddings for all subword sequences between 3 and 6 characters long (Schütze, 1993). We shorten this range to 2 to 4 to bias the grouping toward shared affixes rather than (usually longer) shared stems. This helps recognize that the same affix is likely to realize the same cell, e.g., watch +ed and follow +ed. We limit the context window size to 1; small windows encourage a morphosyntactic bias in embeddings (Erk, 2016). We determine the number of cells to cluster lexicon forms into, k, via the elbow method, which progressively considers adding clusters until the reduction in dispersion levels off (Kodinariya and Makwana, 2013; Bholowalia and Kumar, 2014).4 Since Tibshirani et al. (2001)’s popular formalism of the method does not converge on our data, we implement a simpler technique that works in our case. We incrementally increase k, each time recording clustering dispersion, dk (for consistency, we average dk over 25 iterations). Starting at k = 2, we calculate dispersion deceleration as the difference between the current and previous dispersions: decel(k) = dk−1 −2(dk) + dk+1 (1) Once decel(k) decreases below p decel(2), we take the kth clustering: the (k + 1)th cluster did not explain enough variation in the embedding space to justify an additional morphosyntactic distinction. 4Clustering dispersion is the squared distance of a point from its cluster’s centroid, summed over all points clustered. 7783 4.2 Clustering into Paradigms Given a clustering of lexicon forms into k cells, denoted as C1, . . . , Ck, we heuristically cluster each form f into a paradigm, π, as a function of f’s cell, c. For tractability, we assume paradigms are pairwise disjoint and no paradigm contains multiple forms from the same cell. Our algorithm greedily builds paradigms cell by cell. To gauge the quality of a candidate paradigm, we first identify its base and exponents. Following Beniamine et al. (2018), we define π’s base, bπ, as the longest common subsequence shared by all forms in π.56 For each form f in π, we define the exponent xf as the subsequences of f that remain after removing bπ, i.e., xf is a tuple of affixes. For example, if π contains words wxyxz and axx, bπ is xx and the exponents are (<w, y, z>) and (<a), respectively.7 Inspired by unsupervised maximum matching in greedy tokenization (Guo, 1997; Erdmann et al., 2019), we define the following paradigm score function: score(π) = X ⟨c,f⟩∈π  |bπ| −|xf|  (2) which scores a candidate paradigm according to the number of base characters minus the number of exponent characters; it can be negative. Algorithm 1 then details our heuristic clustering approach. We greedily select one or zero forms from each cell to add (via the list concatenation operator ◦) to each paradigm such that the paradigm’s score is maximized.8 After performing a first pass of paradigm clustering with Algorithm 1, we estimate an unsmoothed probability distribution p(x | c) as follows: we take the number of times each exponent (tuple of affixes) realizes a cell in the output of Algorithm 1 and divide by the number of occurrences of that cell. We use this distribution p(x | c) to construct an exponent penalty: 5The fact that we use a subsequence, instead of a substring, means that we can handle non-concatenative morphology. 
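A rough sketch of this cell-clustering step, using the official fastText Python bindings and scikit-learn's k-means, is given below. The subword range (2 to 4), window size of 1, 25 averaging runs, and the deceleration stopping rule follow the description above, and dispersion is implemented as k-means inertia, matching footnote 4; the corpus path, the cap on k, and the single-restart k-means are placeholders we introduce for illustration.

import fasttext                      # official fastText Python bindings
import numpy as np
from sklearn.cluster import KMeans

# Affix-biased embeddings: character n-grams of length 2-4 and a context
# window of 1, as described above; other hyperparameters keep their defaults.
# "corpus.txt" is a placeholder path.
ft = fasttext.train_unsupervised("corpus.txt", model="skipgram",
                                 minn=2, maxn=4, ws=1)

def choose_k_and_cluster(lexicon, max_k=60, runs=25):
    X = np.array([ft.get_word_vector(w) for w in lexicon])
    # d[k]: clustering dispersion (k-means inertia, cf. fn. 4), averaged over runs
    d = {k: np.mean([KMeans(n_clusters=k, n_init=1).fit(X).inertia_
                     for _ in range(runs)])
         for k in range(1, max_k + 2)}
    # decel(k) = d_{k-1} - 2 d_k + d_{k+1}   (eq. 1)
    decel = {k: d[k - 1] - 2 * d[k] + d[k + 1] for k in range(2, max_k + 1)}
    # take the first k whose deceleration falls below sqrt(decel(2))
    k = next((k for k in range(2, max_k + 1) if decel[k] < decel[2] ** 0.5),
             max_k)
    return k, KMeans(n_clusters=k).fit_predict(X)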
6We note that the longest common subsequence may be found with a polynomial-time dynamic program; however, there will not exist an algorithm whose runtime is polynomial in the number of strings unless P = NP (Maier, 1978). 7We use word start (<) and end (>) tokens to distinguish exponents; they do not count as exponent characters in eq. (2). 8Algorithm 1 has complexity O(|L|2) where |L| is lexicon size. In practice, to make Algorithm 1 tractable, we limit the candidates for f ′ j (line 8) to the n = 250 forms from cell j nearest to fi in pre-trained embedding space (trained via FastText with default parameters). This achieves a complexity upper bounded by O(|L|nk). Algorithm 1 Paradigm Clustering Algorithm 1: input C1, . . . , Ck 2: π ←[ ] 3: for Ci ∈{C1, . . . , Ck} do 4: for fi ∈Ci do 5: π ←[⟨i, fi⟩] 6: s ←score(π) 7: for Cj ∈{Ci+1, . . . , Ck} do 8: fj ←argmax f′ j∈Cj score(π ◦[⟨j, f′ j⟩]) 9: sfj ←score(π ◦[⟨j, fj⟩]) 10: if sfj > s then 11: π ←π ◦[⟨j, fj⟩] 12: s ←sfj 13: Cj.remove(fj) 14: π ←π ◦[π] 15: return π ω(xf, c) (3) =    0 if argmax x p(x | c) = xf 2 − p(xf|c) maxx p(x|c) otherwise Intuitively, if an exponent is the most likely exponent in the cell to which it belongs, the penalty weight is zero and its characters are not subtracted from the score. Otherwise, the weight is in the interval [1, 2] such that each exponent character is penalized at least as harshly but no more than twice as harshly than in the first pass, according to the exponent’s likelihood. We use this exponent penalty weight to define a penalized score function: scoreω(π) = X ⟨c,f⟩∈π  |bπ| −|xf| ω(xf, c)  (4) We then re-run Algorithm 1, swapping out score(·) for scoreω(·), to re-cluster forms into paradigms. Empirically, we find that harsher exponent penalties—i.e., forcing weights to be greater than 1 for suboptimal exponents—lead to higher paradigm precision in this second pass. For an example, consider candidate paradigm [«», watched, «», «», «»]. If we add nothing, each character of watched can be analyzed as part of the base, yielding a score of 7. What if we attempt to add watching—pre-determined to belong to column 5 during cell clustering? Candidate paradigm [«», watched, «», «», watching] increases the number of base characters to 10 (watch shared by 2 words), but yields a score of 5 after subtracting 7784 the characters from both exponents, (ed>) and (ing>). Hence, we do not get this paradigm right on our first pass, as 5 < 7. Yet, after the first pass, should (ed>) and (ing>) be the most frequent exponents in the second and fifth cells, the second pass will be different. Candidate paradigm [«», watched, «», «», watching] is not penalized for either exponent, yielding a score of 10, thereby allowing watching to be added to the paradigm. 4.3 Reinflection We now use the output of the clustering by cell and paradigm to bootstrap the PCFP. We use a Transformer (Vaswani et al., 2017) to predict the forms that realize empty slots. Transformer-based neural transducers constitute the state of the art for the PCFP. 9 In Cotterell et al. (2016b)’s terms, we reinflect the target from one of the non-empty source cells in the same paradigm. We select the source from which we can most reliably reinflect the target. We quantify this reliability by calculating the accuracy with which each target cell’s realizations were predicted from each source cell’s realizations in our development set. For each target cell, we rank our preferred source cells according to accuracy. 
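The scoring functions of §4.2 (eqs. 2-4) can be sketched concretely as follows. The base is approximated by folding pairwise longest common subsequences over the paradigm's forms (the exact multi-string LCS is intractable in general; see fn. 6), |xf| is taken to be len(f) minus len(bπ), and the second-pass penalty ω is passed in as a function estimated from the first pass; these simplifications are ours, not the authors' implementation.

from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two strings (quadratic DP)."""
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + ca if ca == cb
                                else max(dp[i + 1][j], dp[i][j + 1], key=len))
    return dp[-1][-1]

def base(paradigm):
    """Approximate b_pi: a common subsequence of all forms in the paradigm."""
    return reduce(lcs, [f for _, f in paradigm])

def score(paradigm, omega=None):
    """Eq. (2); if omega is given (a penalty derived from the form's exponent
    and cell), eq. (4). A paradigm is a list of (cell, form) pairs."""
    b = base(paradigm)
    total = 0
    for cell, f in paradigm:
        x_len = len(f) - len(b)              # number of exponent characters
        w = 1 if omega is None else omega(f, b, cell)
        total += len(b) - x_len * w
    return total

# e.g. score([(2, "watched")]) == 7, while
#      score([(2, "watched"), (5, "watching")]) == (5 - 2) + (5 - 3) == 5,
# reproducing the first-pass example above.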
To generate train and development sets, we create instances for every possible pair of realizations occurring in the same paradigm (90% train, 10% development). We pass these instances into the Transformer, flattening cells and characters into a single sequence. Neural models for reinflection often perform poorly when the training data are noisy. We mitigate this via the harsh exponent penalty weights (eq. (3)) which encourage high paradigm precision during clustering. 5 Results and Discussion Table 4 shows results for two versions of our benchmark system: BENCH, as described in §4, and GOLD k, with the number of cells oracularly set to the ground truth. For reference, we also report a supervised benchmark, SUP, which assumes a gold grid as input, then solves the PCFP exactly as the benchmark does. In terms of the PDP, clustering assigns lexicon forms to paradigms (46–82%) more accurately than to cells (26–80%). Results are high for English, which has the fewest gold cells, and 9We use the following hyperparameters: N = 4, dmodel = 128, dff = 512. Remaining hyperparameters retain their default values as specified in Vaswani et al. (2017). Our models are trained for 100 epochs in batches of 64. We stop early after 20 epochs without improvement on the development set. PDP PCFP Cells Paradigms Fcell Fpar Fgrid An LE Arabic nouns – 8,732 forms SUP 27 4,283 85.9 87.0 BENCH 12.8 5,279.3 39.9 48.5 43.7 16.8 49.5 GOLD k 27 4,930.3 25.9 46.4 33.1 16.1 57.2 German nouns – 19,481 forms SUP 8 17,018 72.2 74.9 BENCH 7.3 17,073.3 35.2 59.4 43.3 14.2 56.7 GOLD k 8 16,836.0 29.4 66.6 40.8 14.8 60.4 English verbs – 3,330 forms SUP 5 1,801 80.4 80.7 BENCH 7.5 1,949.5 64.0 80.1 71.1 52.0 67.5 GOLD k 5 1,977.3 79.6 82.1 80.8 54.7 69.4 Latin nouns – 6,903 forms SUP 12 3,013 80.0 88.0 BENCH 13.0 3,746.5 38.8 73.2 50.6 17.2 72.9 GOLD k 12 3,749.0 39.9 71.6 51.3 17.5 72.6 Russian nouns – 36,321 forms SUP 14 14,502 94.7 96.8 BENCH 16.5 19,792.0 44.5 72.2 55.0 31.9 86.2 GOLD k 14 20,944.0 45.7 69.1 55.0 31.6 84.3 Table 4: PDP and PCFP results for all languages and models, averaged over 4 runs. Metrics are defined in §3.3. An refers to analogy accuracy and LE to the lexicon expansion accuracy. lower elsewhere. In German, Latin, and Russian, our benchmark proposes nearly as many cells as GOLD k, thus performing similarly. For English, it overestimates the true number and performs worse. For Arabic, it severely underestimates k but performs better, likely due to the orthography: without diacritics, the three case distinctions become obscured in almost all instances. In general, fixing the true number of cells can be unhelpful because syncretism and the Zipfian distribution of cells creates situations where certain gold cells are too difficult to detect. Allowing the system to choose its own number of cells lets it focus on distinctions for which there is sufficient distributional evidence. As for the PCFP, our benchmark system does well on lexicon expansion accuracy and poorly on the analogy task. While lexicon expansion accuracy (50–86% compared to 72–97% for SUP) shows that the benchmark captures meaningful inflectional trends, analogy accuracy demonstrates vast room for improvement in terms of consistently organizing cell-realizations across paradigms. English is the only language where analogy accuracy is within half of SUP’s upper bound. 
A major reason for low analogy accuracy is that forms, despite being clustered into paradigms well, get assigned 7785 SG PL NOM GEN DAT ACC ABL NOM GEN DAT ACC ABL Gloss serv-us i o um o i orum is os is “slave.M” serv-a ae ae am a ae arum is as is “slave.F” frat-er ris ri rem re res rum ribus res ribus “brother” Table 5: Suffixal exponents for each cell in the paradigm of three Latin nouns from different inflection classes. Cell Interpretations Suffix 0 ACC.SG (.51), GEN.PL (.45) um 1 ACC.PL (.71), NOM.PL (.27) s 2 ACC.SG (.99) m 3 ABL.PL (.52), GEN.SG (.40) is 4 NOM.SG (.39), ABL.SG (.36) a 5 ABL.SG (.62), NOM.SG (.36) o 6 GEN.SG (.46), DAT.SG (.30) i 7 ABL.PL (.77), DAT.PL (.25) s 8 NOM.SG (.67), ABL.SG (.22) ∅ 9 ABL.SG (.936) e 10 ABL.SG (.5), GEN.SG (.28) e 11 NOM.SG (.87), ACC.PL (.16) us Table 6: System clustering of Latin nouns. to the wrong cell, or the same gold cell gets misaligned across paradigms from different inflection classes. We discuss this phenomenon in more detail below. 5.1 Latin Noun Error Analysis A detailed analysis of Latin nouns (also analyzed by Stump and Finkel (2015) and Beniamine et al. (2018)) reveals challenges for our system. Table 5 shows the inflectional paradigms for three Latin nouns exemplifying different inflection classes, which are mentioned throughout the analysis. In keeping with the UD standard, there are no diacritics for long vowels in the table. One major challenge for our system is that similar affixes can mark different cells in different inflection classes, e.g. the ACC.SG of servus “slave.M” ends in um, as does the GEN.PL of frater “brother”. Table 6 shows system-posited cells, the gold cells they best match to, and the longest suffix shared by 90% of their members. The system is often misled by shared affixes, e.g., cell 0 is evenly split between ACC.SG and GEN.PL, driven by the suffix um (cells 3 (is) and 4 (a) suffer from this as well). This kind of confusion could be resolved with better context modeling, as each distinct underlying cell, despite sharing a surface affix, occurs in distinct distributional contexts. We observe that the current system often fails to make use of context to handle some misleading suffixes. However, Cell 7 correctly groups ABL.PL forms marked with both is and ibus, excluding other suffixes ending in s. Similarly, cell 8 contains NOM.SG forms with heterogeneous endings, e.g., r, ix and ns. In some cases, the system misinterprets derivational processes as inflectional, combining gold paradigms. Derivational relatives servus and serva, male and female variants of “slave”, are grouped into one paradigm, as are philosophos “philosopher” and philosophia “philosophy.” In other cases, cell clustering errors due to shared suffixes create spurious paradigms. After falsely clustering gold paradigm mates servum (ACC.SG) and servorum (GEN.PL) into the same cell, we must assign each to separate paradigms during paradigm clustering. This suggests clustering cells and paradigms jointly might avoid error propagation in future work. We also find that clustering errors lead to PCFP errors. For servus/a, the neural reinflector predicts servibus in cell 8 with a suffix from the wrong inflection class, yet the slot should not be empty in the first place. The correct form, servis, is attested, but was mistakenly clustered into cell 3. 5.2 Benchmark Analysis Table 7 evaluates variants of the benchmark to determine the contribution of several system–task components in Arabic and Latin. 
We consider augmenting and shrinking the corpus. We also reset the fastText hyperparameters used to achieve a morphosyntactic inductive bias to their default values (no affix or window bias) and consider two constant exponent penalty weights (ω(xf, c) = 1 and ω(xf, c) = 0) instead of our heuristic weight defined in eq. (3). Finally, we consider selecting random sources for PCFP reinflection instead of identifying reliable sources. For all variants, the number of cells is fixed to the ground truth. Corpus Size We consider either using a smaller corpus containing only the UD subset, or using a larger corpus containing 15 (Latin) or 100 (Ara7786 PDP PCFP Paradigms Fcell Fpar Fgrid An LE Arabic nouns – 27 cells GOLD k 4,930.3 25.9 46.4 33.1 16.1 57.2 larger corpus 5,039.5 29.1 37.5 32.8 20.4 49.2 smaller corpus 5,004.0 18.8 37.7 24.9 9.5 42.1 no affix bias 4,860.3 21.5 47.7 29.7 16.3 43.5 no window bias 4,978.5 24.0 47.5 31.8 17.6 55.8 ω(x, c) = 1 3,685.0 34.4 28.8 5.2 35.5 ω(x, c) = 0 1,310.5 10.0 13.9 0.1 5.8 random sources 16.3 55.9 Latin nouns – 12 cells GOLD k 3,749.0 39.9 71.6 51.3 17.5 72.6 larger corpus 3,529.5 42.8 79.1 55.5 16.2 69.9 smaller corpus 4,381.5 30.7 49.1 37.8 14.6 51.1 no affix bias 3,906.8 37.1 68.2 48.1 22.7 66.6 no window bias 3,756.5 42.0 71.2 52.8 17.9 70.9 ω(x, c) = 1 3,262.5 67.1 49.6 11.0 52.9 ω(x, c) = 0 1,333.3 26.3 31.7 0.7 7.1 random sources 16.5 72.3 Table 7: Benchmark variations demonstrating the effects of various factors, averaged over 4 runs. bic) million words from additional supplementary sentences. As expected, performance decreases for smaller corpora, but it does not always increase for larger ones, potentially due to domain differences between UD and the supplemental sentences. Interestingly, Fcell always increases with larger corpora, yet this can lead to worse Fpar scores, more evidence of error propagation that might be avoided with joint cell–paradigm clustering. Embedding Morphosyntactic Biases Targeting affix embeddings by shrinking the default fastText character n-gram sizes seems to yield a much more significant effect than shrinking the context window. In Latin, small context windows can even hurt performance slightly, likely due to extremely flexible word order, where agreement is often realized over non-adjacent words. Exponent Penalties When clustering paradigms with the penalty weight ω(x, c) = 1, (which is equivalent to just running the first pass of paradigm clustering), we see a steep decline in performance as opposed to the proposed heuristic weighting. It is even more detrimental to not penalize exponents at all (i.e., ω(x, c) = 0), but maximize the base characters in paradigms without concern for size or likelihoods of exponents. Given allomorphic variation and multiple inflection classes, we ideally want a penalty weight which is lenient to more than just the single most likely exponent, but without supervised data, it is difficult to determine when to stop being lenient and start being harsh in a language agnostic manner. Our choice to be harsh by default proposes fewer false paradigm mates, yielding less noisy input to train the reinflection model. In a post-hoc study, we calculated GOLD k PCFP scores on pure analogies only, where the first three attested forms were assigned correctly during clustering. Pure analogy PCFP scores were still closer to GOLD k’s performance than SUP’s for all languages. 
This suggests most of the gap between GOLD k and SUP is due to noisy training on bad clustering assignments, not impossible test instances created by bad clustering assignments. This supports our choice of harsh penalties and suggests future work might reconsider clustering decisions given the reinflection model’s confidence. Reinflection Source Selection During reinflection, feeding the Transformer random sources instead of learning the most reliable source cell for each target cell slightly hurts performance. The margin is small, though, as most paradigms have only one attested form. In preliminary experiments, we also tried jointly encoding all available sources instead of just the most reliable, but this drastically lowers performance. 6 Conclusion We present a framework for the paradigm discovery problem, in which words attested in an unannotated corpus are analyzed according to the morphosyntactic property set they realize and the paradigm to which they belong. Additionally, unseen inflectional variants of seen forms are to be predicted. We discuss the data required to undertake this task, a benchmark for solving it, and multiple evaluation metrics. We believe our benchmark system represents a reasonable approach to solving the problem based on past work and highlights many directions for improvement, e.g. joint modeling and making better use of distributional semantic information. Acknowledgments The authors would like to thank the members of New York University Abu Dhabi’s CAMeL Lab, Marie-Catherine de Marneffe, Eleanor Chodroff, Katharina Kann, and Markus Dreyer. We acknowledge the support of the High Performance Computing Center at New York University Abu Dhabi. Finally, we wish to thank the anonymous reviewers at EMNLP 2019 and ACL 2020 for their feedback. 7787 References Farrell Ackerman, James P. Blevins, and Robert Malouf. 2009. Parts and wholes: Implicative patterns in inflectional paradigms. In Analogy in Grammar: Form and Acquisition, pages 54–82. Oxford University Press. Farrell Ackerman and Robert Malouf. 2013. Morphological organization: The low conditional entropy conjecture. Language, 89(3):429–464. Farrell Ackerman and Robert Malouf. 2015. The No Blur Principle effects as an emergent property of language systems. In Anna E. Jurgensen, Hannah Sande, Spencer Lamoureux, Kenny Baclawski, and Alison Zerbe, editors, Proceedings of the Forty-First Annual Meeting of the Berkeley Linguistics Society, pages 1–14. Berkeley Linguistics Society. Oded Avraham and Yoav Goldberg. 2017. The interplay of semantics and morphology in word embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 422–426, Valencia, Spain. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015. Sacha Beniamine, Olivier Bonami, and Benoît Sagot. 2018. Inferring inflection classes with description length. Journal of Language Modelling, 5(3):465– 525. Toms Bergmanis and Sharon Goldwater. 2017. From segmentation to analyses: a probabilistic model for unsupervised morphology induction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 337–346, Valencia, Spain. Association for Computational Linguistics. Jean Berko. 1958. 
The child’s learning of English morphology. Word, 14(2-3):150–177. Purnima Bholowalia and Arvind Kumar. 2014. EBKmeans: A clustering technique based on elbow method and k-means in WSN. International Journal of Computer Applications, 105(9). James P. Blevins. 1995. Syncretism and paradigmatic opposition. Linguistics and Philosophy, 18(2):113– 152. James P. Blevins, Petar Milin, and Michael Ramscar. 2017. The Zipfian paradigm cell filling problem. In Perspectives on Morphological Organization, pages 139–158. Brill. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Gilles Boyé and Gauvain Schalchli. 2019. Realistic data and paradigms: The paradigm cell finding problem. Morphology, 29(2):199–248. Joan Bybee. 2003. Mechanisms of change in grammaticization: The role of frequency. In Brian D. Joseph and Richard D. Janda, editors, The Handbook of Historical Linguistics, pages 602–623. Blackwell. Greville G. Corbett. 2005. The canonical approach in typology. Linguistic Diversity and Language Theories, 25:49. Maria Corkery, Yevgen Matusevych, and Sharon Goldwater. 2019. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868–3877, Florence, Italy. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2018a. On the diachronic stability of irregularity in inflectional morphology. arXiv preprint arXiv:1804.08262. Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2019. On the complexity and typology of inflectional morphological systems. Transactions of the Association for Computational Linguistics, 7:327–342. Ryan Cotterell, Christo Kirov, Sabrina J. Mielke, and Jason Eisner. 2018b. Unsupervised disambiguation of syncretism in inflected lexicons. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 548–553, New Orleans, Louisiana. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018c. The CoNLL–SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proceedings of the CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1–27, Brussels. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 1–30, Vancouver. Association for Computational Linguistics. 7788 Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared Task— Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22, Berlin, Germany. Association for Computational Linguistics. 
Ryan Cotterell, Hinrich Schütze, and Jason Eisner. 2016b. Morphological smoothing and extrapolation of word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1651– 1660, Berlin, Germany. Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology. Robert Dale, Barbara Di Eugenio, and Donia Scott. 1998. Introduction to the special issue on natural language generation. Computational Linguistics, 24(3):345–353. Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 616–627, Edinburgh, Scotland, UK. Association for Computational Linguistics. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1185–1195, Atlanta, Georgia. Association for Computational Linguistics. Micha Elsner, Andrea D. Sims, Alexander Erdmann, Antonio Hernandez, Evan Jaffe, Lifeng Jin, Martha Booker Johnson, Shuan Karim, David L. King, Luana Lamberti Nunes, et al. 2019. Modeling morphological learning, typology, and change: What can the neural sequence-to-sequence framework contribute? Journal of Language Modelling, 7(1):53–98. Alexander Erdmann and Nizar Habash. 2018. Complementary strategies for low resourced morphological modeling. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 54–65, Brussels, Belgium. Association for Computational Linguistics. Alexander Erdmann, Salam Khalifa, Mai Oudah, Nizar Habash, and Houda Bouamor. 2019. A little linguistics goes a long way: Unsupervised segmentation with limited language specific guidance. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 113–124, Florence, Italy. Association for Computational Linguistics. Alexander Erdmann, Nasser Zalmout, and Nizar Habash. 2018. Addressing noise in multidialectal word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 558– 565, Melbourne, Australia. Association for Computational Linguistics. Katrin Erk. 2016. What do you know about an alligator when you know the company it keeps? Semantics and Pragmatics, 9:17–1. Ramy Eskander, Nizar Habash, and Owen Rambow. 2013. Automatic extraction of morphological lexicons from morphologically annotated corpora. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1032–1043, Seattle, Washington, USA. Association for Computational Linguistics. Eric Gaussier. 1999. Unsupervised learning of derivational morphology from inflectional lexicons. In Unsupervised Learning in Natural Language Processing. Yoav Goldberg. 2016. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research, 57:345–420. John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153–198. Sharon Goldwater. 2007. Nonparametric Bayesian Models of Lexican Acquisition. Ph.D. thesis. 
Kyle Gorman, Arya D. McCarthy, Ryan Cotterell, Ekaterina Vylomova, Miikka Silfverberg, and Magdalena Markowska. 2019. Weird inflects but OK: Making sense of morphological generation errors. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 140–151, Hong Kong, China. Association for Computational Linguistics. Jin Guo. 1997. Critical tokenization and its properties. Computational Linguistics, 23(4):569–596. Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya McCarthy, and Katharina Kann. 2020. Unsupervised morphological paradigm completion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL). Kyle P. Johnson, P. J. Burns, L. Hollis, M. Pozzi, A. Shilo, S. Margheim, G. Badger, and E. Bell. 2016. CLTK: The classical languages toolkit. David King and Michael White. 2018. The OSU realizer for SRST ‘18: Neural sequence-to-sequence inflection and incremental locality-based linearization. 7789 In Proceedings of the First Workshop on Multilingual Surface Realisation, pages 39–48, Melbourne, Australia. Association for Computational Linguistics. Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651–665. Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sabrina J. Mielke, Arya McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Trupti M. Kodinariya and Prashant R. Makwana. 2013. Review on determining number of cluster in kmeans clustering. International Journal of Advance Research in Computer Science and Management Studies, 1(6):90–95. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Jackson Lee. 2015. Morphological paradigms: Computational structure and unsupervised learning. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 161– 167, Denver, Colorado. Association for Computational Linguistics. Constantine Lignos and Charles Yang. 2018. Morphology and language acquisition. In Andrew Hippisley and Gregory T. Stump, editors, Cambridge Handbook of Morphology, pages 765–791. Cambridge University Press. Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 923–929, Portorož, Slovenia. European Language Resources Association (ELRA). David Maier. 1978. The complexity of some problems on subsequences and supersequences. Journal of the ACM, 25(2):322–336. Robert Malouf, Farrell Ackerman, and Artrus Semenuks. 2020. 
Lexical databases for computational analyses: A linguistic perspective. Proceedings of the Society for Computation in Linguistics, 3(1):297–307. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press. Arya D. McCarthy, Miikka Silfverberg, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2018. Marrying universal dependencies and universal morphology. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 91–101, Brussels, Belgium. Association for Computational Linguistics. Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sabrina J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229– 244, Florence, Italy. Association for Computational Linguistics. Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. 2015. An unsupervised method for uncovering morphological chains. Transactions of the Association for Computational Linguistics, 3:157–167. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 1659–1666, Portorož, Slovenia. European Language Resources Association (ELRA). Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2011a. Arabic Gigaword Fifth Edition. LDC catalog number No. LDC2011T11, ISBN 1-58563-595-2. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011b. English gigaword. Linguistic Data Consortium. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Steven Pinker. 2001. Four decades of rules and associations, or whatever happened to the past tense debate. Language, the Brain, and Cognitive Development: Papers in Honor of Jacques Mehler, pages 157–179. 7790 Kim Plunkett and Patrick Juola. 1999. A connectionist model of English past tense and plural morphology. Cognitive Science, 23(4):463–490. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410– 420, Prague, Czech Republic. Association for Computational Linguistics. David E. Rumelhart and James L. McClelland. 1986. On learning the past tenses of English verbs. Hinrich Schütze. 1993. Word space. In Advances in Neural Information Processing Systems, pages 895– 902. Miikka Silfverberg and Mans Hulden. 2018. An encoder-decoder approach to the paradigm cell filling problem. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2883–2889, Brussels, Belgium. Association for Computational Linguistics. Andrea D. Sims. 2015. 
Inflectional Defectiveness, volume 148. Cambridge University Press. Gregory Stump and Raphael A. Finkel. 2015. The complexity of inflectional systems. Linguistics Vanguard, 1(1):101–117. Anna Thornton. 2011. Overabundance (multiple forms realizing the same cell): A non-canonical phenomenon in Italian verb morphology. Anna M. Thornton. 2010. Towards a typology of overabundance. In Décembrettes 7: International Conference on Morphology, University of Toulouse, pages 2–3. Robert Tibshirani, Guenther Walther, and Trevor Hastie. 2001. Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(2):411–423. Andrew Trask, David Gilmore, and Matthew Russell. 2015. Modeling order in neural word embeddings at scale. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2266–2275, Lille, France. PMLR. Lifu Tu, Kevin Gimpel, and Karen Livescu. 2017. Learning to embed words in context for syntactic tasks. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 265–275, Vancouver, Canada. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Ekaterina Vylomova, Ryan Cotterell, Trevor Cohn, Timothy Baldwin, and Jason Eisner. 2019. Contextualization of morphological inflection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2018–2024, Minneapolis, Minnesota. Association for Computational Linguistics. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465–2474, Brussels, Belgium. Association for Computational Linguistics. Charles Yang. 2015. For and against frequencies. Journal of Child Language, 42(2):287–293. Olga Zamaraeva, Kristen Howell, and Emily M. Bender. 2019. Handling cross-cutting properties in automatic inference of lexical classes: A case study of Chintang. In Proceedings of the 3rd Workshop on the Use of Computational Methods in the Study of Endangered Languages Volume 1 (Papers), pages 28–38, Honolulu. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7791–7795 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7791 Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in Hindi and Punjabi Aryaman Arora*, † Luke Gessler † Nathan Schneider † *School Without Walls High School †Georgetown University {aa2190, lg876, nathan.schneider}@georgetown.edu Abstract Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification. 1 Introduction Hindi is written in the Devanagari script, which is an abugida, an orthographic system where the basic unit consists of a consonant and an optional vowel diacritic or a single vowel. Devanagari is fairly regular, but a Hindi word’s actual pronunciation can differ from what is literally written in the Devanagari script.1 For instance, in the Hindi word ppr ⟨pep@R@⟩‘paper’, there are three units p ⟨pe⟩, p ⟨p@⟩, and r ⟨R@⟩, corresponding to the pronounced forms [pe], [p@], and [r]. The second unit’s inherent schwa is retained in the pronounced form, but the third unit’s inherent schwa is deleted. Predicting whether a schwa will be deleted from a word’s orthographic form is generally difficult. Some reliable rules can be stated, e.g. ‘delete any schwa at the end of the word’, but these do not perform well enough for use in an application that requires schwa deletion, like a text-to-speech synthesis system. 1Throughout this paper, we will adopt the convention of using ⟨angle brackets⟩to describe how a word is literally spelled, and [square brackets] to describe how a word is actually pronounced. This work approaches the problem of predicting schwa deletion in Hindi with machine learning techniques, achieving high accuracy with minimal human intervention. We also successfully apply our Hindi schwa deletion model to a related language, Punjabi. Our scripts for obtaining machinereadable versions of the Hindi and Punjabi pronunciation datasets are published to facilitate future comparisons.2 2 Previous Work Previous approaches to schwa deletion in Hindi broadly fall into two classes. The first class is characterized by its use of rules given in the formalism of The Sound Pattern of English (Chomsky and Halle, 1968). Looking to analyses of schwa deletion produced by linguists (e.g., Ohala, 1983) in this framework, others built schwa deletion systems by implementing their rules. For example, this is a rule used by Narasimhan et al. 
(2004), describing schwa deletion for words like जंगली ⟨dZ@Ng@li:⟩:

C  V  C  C  a  C  V   →   C  V  C  C  C  V
dZ @  N  g  @  l  i:  →   dZ @  N  g  l  i:

Paraphrasing, this rule could be read, "if a schwa occurs with a vowel and two consonants to its left, and a consonant and a vowel to its right, it should be deleted." A typical system of this class would apply many of these rules to reach a word's output form, sometimes along with other information, like the set of allowable consonant clusters in Hindi. These systems were able to achieve fair accuracy (Narasimhan et al. achieve 89%), but were ill-equipped to deal with cases that seemed to rely on detailed facts about Hindi morphology and prosody.

2All of the code, models, and datasets for this research are publicly available at https://github.com/aryamanarora/schwa-deletion.

Figure 1: A representative example of the linguistic representations used by Tyson and Nagar (2009). Proceeding from top to bottom, a prosodic word (PrWd) consists of feet, syllables (which have weights), and syllable templates.

Systems of the second class make use of linguistically richer representations of words. Typical of this class is the system of Tyson and Nagar (2009), which analyzes each word into a hierarchical phonological representation (see figure 1). These same representations had been used in linguistic analyses: Pandey (1990), for instance, as noted by Tyson and Nagar (2009), "claimed that schwas in Hindi cannot appear between a strong and weak rhyme3 within a prosodic foot." Systems using prosodic representations perform fairly well, with Tyson and Nagar's (2009) system achieving performance ranging from 86% to 94%, but prosody proved not to be a silver bullet; Tyson and Nagar (2009) remark, "it appears that schwa deletion is a phenomenon governed by not only prosodic information but by the observance of the phonotactics of consonant clusters." There are other approaches to subsets of the schwa-deletion problem. One is the diachronic analysis applied by Choudhury et al. (2004), which achieved 99.80% word-level accuracy on native Sanskrit-derived terms. Machine learning has not been applied to schwa deletion in Hindi prior to our work.
Johny and Jansche (2018) used neural networks to model schwa deletion in Bengali (which is not a binary classification problem as in Hindi) and achieved great advances in accuracy. We employ a similar approach to Hindi, but go further by applying gradient-boosting decision trees to the problem, which are more easily interpreted in a linguistic format. 3The rhyme in Hindi (not pictured in figure 1), is the part of the syllable that begins with the vowel and includes any consonants that come after the vowel. Its weight is determined by vowel length and whether any consonants appear in it. Devanagari a kwAhV Orthographic a ˜ k a rr aa h a tt a Phonemic a ˜ k rr aa h a tt Table 1: An example entry from the Hindi training dataset. Similar research has been undertaken in other Indo-Aryan languages that undergo schwa-deletion, albeit to a lesser extent than Hindi. Wasala et al. (2006), for example, proposed a rigorous rulebased G2P system for Sinhala. 3 Methodology We frame schwa deletion as a binary classification problem: orthographic schwas are either fully retained or fully deleted when spoken. Previous work has shown that even with rich linguistic representations of words, it is difficult to discover categorical rules that can predict schwa deletion. This led us to approach the problem with machine learning, which we felt would stand a better chance at attaining high performance. We obtained training data from digitized dictionaries hosted by the University of Chicago Digital Dictionaries of South Asia project. The Hindi data, comprised of the original Devanagari orthography and the phonemic transcription, was parsed out of McGregor (1993) and Bahri (1989) and transcribed into an ASCII format. The Punjabi data was similarly processed from Singh (1895). Table 1 gives an example entry from the McGregor Hindi dataset. To find all instances of schwa retention and schwa deletion, we force-aligned orthographic and phonemic representations of each dictionary entry using a linear-time algorithm. In cases where force-alignment failed due to idiosyncrasies in the source data (typos, OCR errors, etc.) we discarded the entire word. We provide statistics about our datasets in table 2. We primarily used the dataset from McGregor in training our Hindi models due to its comprehensiveness and high quality. Each schwa instance was an input in our training set. The output was a boolean value indicating whether the schwa was retained. Our features in the input column were a one-hot encoding of a variable window of phones to the left (c−n,...,c−1) and right (c+1,...,c+m) of the schwa instance (c0) under consideration. The length of the window on either side was treated as a hyperparamater and tuned. We also tested whether including phonological features (for vowels: height, backness, rounded7793 Hindi Dict. Entries Schwas Deletion Rate McGregor 34,952 36,183 52.94% Bahri 9,769 14,082 49.41% Google 847 1,098 56.28% Punjabi Dict. Entries Schwas Deletion Rate Singh 28,324 34,576 52.25% Table 2: Statistics about the datasets used. The deletion rate is the percentage of schwas that are deleted in their phonemic representation. The Google dataset, taken from Johny and Jansche (2018), was not considered in our final results due to its small size and overrepresentation of proper nouns. 
We trained three models on each dataset: logistic regression from scikit-learn, MLPClassifier (a multilayer perceptron neural network) from scikit-learn, and XGBClassifier (gradient-boosting decision trees) from XGBoost. We varied the size of the window of adjacent phonemes and trained with and without the phonological feature data.

4 Results

Model         A        P        R        Word A
Hindi
  XGBoost     98.00%   98.04%   97.60%   97.78%
  Neural      97.83%   97.86%   97.42%   97.62%
  Logistic    97.19%   97.19%   96.70%   96.86%
  Wiktionary  94.18%   92.89%   94.29%   94.18%
Punjabi
  XGBoost     94.66%   92.79%   95.90%   94.18%
  Neural      94.66%   93.25%   95.47%   94.07%
  Logistic    93.77%   91.73%   95.04%   93.14%

Table 3: Results for our models on the McGregor (Hindi) and Singh (Punjabi) datasets: per-schwa accuracy (A), precision (P), and recall (R), as well as word-level accuracy (Word A; all schwas in the word must be correctly classified).

Table 3 tabulates the performance of our various models. We obtained a maximum of 98.00% accuracy over all schwa instances in our test set from the McGregor dataset with gradient-boosted decision trees from XGBoost. We used a window of 5 phonemes to the left and right of the schwa instance, phonological features, 200 estimators, and a maximum tree depth of 11. Any model with at least 200 estimators and a depth of at least 5 obtains comparable accuracy, but accuracy gradually degrades with increasing estimators due to overfitting. Without phonological feature data, the model consistently achieves a slightly lower accuracy of 97.93%. Logistic regression with the same features achieved 97.19% accuracy. An MLP classifier with a single hidden layer of 250 neurons and a learning rate of 10^-4 achieved 97.83% accuracy.

On the Singh dataset for Punjabi, the same XGBoost model (except without phonological features) achieved 94.66% accuracy. This shows the extensibility of our system to other Indo-Aryan languages that undergo schwa deletion.

We were unable to obtain evaluation datasets or code from previous work (Narasimhan et al. 2004; Tyson and Nagar 2009) for a direct comparison of our system with previous ones.4 However, we were able to port and test the Hindi transliteration code written in Lua utilized by Wiktionary (2018), an online freely-editable dictionary operated by the Wikimedia Foundation, the parent of Wikipedia. That system obtains 94.94% word-level accuracy on the McGregor dataset, which we outperform consistently.

4We were able to obtain code from Roy (2017) but were unable to run it on our machines.

5 Discussion

Our system achieved higher performance than any other. The schwa instances which our model did not correctly predict tended to fall into two classes: borrowings from Persian, Arabic, or European languages, and compounds of native or Sanskrit-borrowed morphemes. Of the 150 Hindi words from our McGregor test set for which our best model incorrectly predicted schwa deletion, we sampled 20 instances and tabulated their source languages: 10 were native Indo-Aryan terms descended through the direct ancestors of Hindi, 4 were learned Sanskrit borrowings, 5 were Perso-Arabic borrowings, and 1 was a Dravidian borrowing. 9 were composed of multiple morphemes. Borrowings are overrepresented relative to the baseline rate for Hindi; in one frequency list, only 8 of the 1,000 top words in Hindi were of Perso-Arabic origin (Ghatage 1964).
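The model comparison reported in Table 3, as well as the decision-tree inspection described in the remainder of this section, can be reproduced with a short script along the following lines (again a sketch of our own rather than the released training code; it reuses the X and y from the feature-extraction sketch above, and settings such as max_iter are assumptions; the train/test split, hyperparameter tuning, and evaluation are omitted):

```python
# Sketch: the three classifiers compared in Table 3, with the best-performing
# XGBoost configuration reported in Section 4. In practice X and y would
# cover all schwa instances in the training split, not the toy example above.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

models = {
    "Logistic": LogisticRegression(max_iter=1000),
    "Neural": MLPClassifier(hidden_layer_sizes=(250,), learning_rate_init=1e-4),
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=11),
}
for name, model in models.items():
    model.fit(X, y)

# The fitted booster can be dumped as human-readable trees, whose splits are
# the one-hot window features; this is what makes the rule paraphrases
# discussed below possible.
for tree in models["XGBoost"].get_booster().get_dump()[:3]:
    print(tree)
```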
Notably, some of the Perso-Arabic borrowings that the model failed on actually reflected colloquial pronunciation; e.g., amn ⟨@m@n@⟩ is [@mn] in McGregor, yet our model predicts [@m@n], which is standard in most speech.

We qualitatively analyzed our system to investigate what kind of linguistic representations it seemed to be learning. To do this, we inspected several decision trees generated in our model, and found that our system was learning both prosodic and phonetic patterns.

Some trees very clearly encoded phonotactic information. One tree we examined had a subtree that could be paraphrased like so, where cn indicates the phone n characters away from the schwa being considered: “If c+1 is beyond the end of the word, and c−2 is not beyond the beginning of the word, and c−2 is a ⟨t⟩, then if c−1 is a ⟨j⟩, penalize deleting this schwa;5 otherwise, if c−1 is not a ⟨j⟩, prefer deleting this schwa.” Put another way, this subtree penalizes deleting a schwa if it comes at the end of a word, the preceding two characters are exactly ⟨tj⟩, and the word extends beyond the preceding two characters. This is just the kind of phonetic rule that systems like Narasimhan et al. (2004) were using.

The extent to which our system encodes prosodic information was less clear. Our features were phonetic, not prosodic, but some prosodic information can be captured, to a degree, in terms of phonetics. Take, for instance, this subtree that we found in our model, paraphrasing as before: “If c−3 is beyond the beginning of the word, and c−2 is ⟨a:⟩, then if c+2 is ⟨@⟩, prefer deletion; otherwise, if c+2 is not ⟨@⟩, penalize deletion.” Consider this rule as it would apply to the first schwa in the Hindi word aAmdnF ⟨a:m@d@ni:⟩:

position   -3   -2   -1    0   +1   +2   +3   +4   +5
phone            a:    m    @    d    @    n   i:

The rule decides that deleting the first schwa should be penalized, and it decided this by using criteria that entail that the preceding rhyme is heavy and the following rhyme is light.6 Obviously, though, this same rule would not work for other heavy and light syllables: if any of the vowels had been different, or at different offsets, a non-deletion rather than a deletion would have been preferred, which is not what it ought to do if it is emulating the prosodic rule. It is expected that our model is only able to capture ungeneralized, low-level patterns like this, since it lacks the symbolic vocabulary to capture elegant linguistic generalizations, and it is perhaps surprising that our system is able to achieve the performance it does even with this limitation. In future work, it would be interesting to give our system more directly prosodic representations, like the moraic weights of the surrounding syllables and syllabic stress.

5Penalize deleting and not delete, because this tree is only contributing towards the final decision, along with all the other trees.

6Actually, this is not exactly true, since if the following syllable had any consonants in the rhyme, it would become heavy, even if there were a schwa present. But this is an error that could be corrected by other decision trees.

Another limitation of our system is that it assumes all schwas are phonologically alike, which may not be the case. While most schwas are at all times either pronounced or deleted, there are less determinate cases where a schwa might or might not be deleted according to sociolinguistic and other factors.
McGregor (1993, p. xi) calls these “weakened schwas”, describing them as “weakened by Hindi speakers in many phonetic contexts, and dropped in others” and orthographically indicating them with a breve; for example, s(y is transcribed saty. Our best model correctly classified 80.4% of the weakened schwas present in our test set taken from McGregor. Improving our performance for this class of schwas may require us to treat them differently from other schwas. Further research is needed on the nature of weakened schwas.

6 Conclusion

We have presented the first statistical schwa deletion classifier for Hindi, which achieves state-of-the-art performance. Our system requires no hard-coded phonological rules, instead relying solely on pairs of orthographic and phonetic forms for Hindi words at training time. Furthermore, this research presents the first schwa-deletion model for Punjabi, and has contributed several freely accessible scripts for scraping Hindi and Punjabi pronunciation data from online sources.

References

Hardev Bahri. 1989. Learners’ Hindi-English dictionary. Rajpal & Sons.

Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper & Row.

Monojit Choudhury, Anupam Basu, and Sudeshna Sarkar. 2004. A diachronic approach for schwa deletion in Indo Aryan languages. In Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology, pages 20–26, Barcelona, Spain.

Amrit Madhav Ghatage. 1964. Phonemic and morphemic frequencies in Hindi. Poona: Deccan College Postgraduate and Research Institute.

Cibu C. Johny and Martin Jansche. 2018. Brahmic schwa-deletion with neural classifiers: Experiments with Bengali. In Proc. of the 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages, pages 259–263.

R. S. McGregor. 1993. The Oxford Hindi-English dictionary. Oxford University Press.

Bhuvana Narasimhan, Richard Sproat, and George Kiraz. 2004. Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4):319–333.

Manjari Ohala. 1983. Aspects of Hindi Phonology. Motilal Banarsidass Publishers Pvt. Ltd.

P. K. Pandey. 1990. Hindi schwa deletion. Lingua, 82:277–311.

Somnath Roy. 2017. A finite state and rule-based akshara to prosodeme (A2P) converter in Hindi. CoRR, abs/1705.01833.

Maya Singh. 1895. The Panjabi dictionary. Munshi Gulab Singh & Sons.

Naim R. Tyson and Ila Nagar. 2009. Prosodic rules for schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 12(1):15.

Asanka Wasala, Ruvan Weerasinghe, and Kumudu Gamage. 2006. Sinhala grapheme-to-phoneme conversion and rules for schwa epenthesis. In Proc. of ACL-COLING, pages 890–897, Sydney, Australia.

Wiktionary. 2018. Module:hi-translit — Wiktionary, The Free Dictionary. [Online; accessed 6 October 2019].
2020
696
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7796–7810 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7796 Automated Evaluation of Writing – 50 Years and Counting Beata Beigman Klebanov and Nitin Madnani Educational Testing Service, Princeton, NJ, USA {bbeigmanklebanov,nmadnani}@ets.org Abstract In this theme paper, we reflect on the progress of Automated Writing Evaluation (AWE), using Ellis Page’s seminal 1966 paper to frame the presentation. We discuss some of the current frontiers in the field, and offer some thoughts on the emergent uses of this technology. 1 A Minimal Case for AWE In a seminal paper on the imminence of automated grading of essays, Page (1966) showed that a high correlation between holistic machine and human scores is possible. He demonstrated automated scoring of 276 essays written by high school students by a system with 32 features, resulting in a multiple R = 0.65 between machine and average human score, after adjustment. He also provided a thoughtful discussion of his ambitions for automated scoring and of the possible objections. Page made the case that automated evaluation of student writing is needed to take some of the evaluation load off the teachers and to provide students evaluations of their (potentially multiple) drafts with a fast turnaround. He then appealed to the then-burgeoning interest and fascination with machine learning to argue for the feasibility of such an enterprise, namely, that machines can learn how to give the right grades to essays, if trained on an expert-scored sample. As part of the feasibility argument, Page emphasized the need to carefully define the goal so that success can be judged appropriately. The goal is not a “real” master analysis of the essay the way a human reader would do but merely an imitation that would produce a correlated result (using what Page called proxes – approximations). Page considered this goal to be both useful and achievable. 2 Report Card: Where are We Now? 2.1 Accomplishments Page’s minimal desiderata have certainly been achieved – AWE systems today can score in agreement with the average human rater, at least in some contexts.1 For example, Pearson’s Intelligent Essay Assessor™(IEA) scores essays written for the Pearson Test of English (PTE) as well as for other contexts: “IEA was developed more than a decade ago and has been used to evaluate millions of essays, from scoring student writing at elementary, secondary and university level, to assessing military leadership skills.”2 Besides sole automated scoring as for PTE, there are additional contexts where the automated score is used in addition to a human score, such as for essays written for the Graduate Record Examination (GRE®)3 or for the Test of English as a Foreign Language (TOEFL®).4 Does this mean that the problem of AWE is solved? Well, not exactly. 2.2 Needs Improvement Page did anticipate some difficulties for AWE systems. It is instructive to see where we are with those. 2.2.1 Originality What about the gifted student who is offbeat and original? Won’t he be overlooked by the computer? (Page, 1966) Page’s argument is that the original student is not going to be much worse off with a com1It is not our goal to survey in detail techniques that underlie this success. See Ke and Ng (2019) for a recent review. 
2https://pearsonpte.com/the-test/ about-our-scores/how-is-the-test-scored/ 3https://www.ets.org/gre/revised_general/ scores/how/ 4https://www.ets.org/toefl/ibt/scores/ understand/ 7797 puter than with an (average) human reader, because originality is a subjective construct. Thus, once research uncovers objective and measurable aspects of “original” writing, relevant features can be added into an AWE system; finding such aspects, as well as measuring them, is still work in progress. While no current operational scoring system we are aware of is specifically looking for originality, research into aspects of writing that are often considered original is taking place. For example, using data from different tests, Beigman Klebanov and Flor (2013a) and Beigman Klebanov et al. (2018) found that the extent of metaphor use (proportion of metaphorically used words in an essay) correlates with essay quality; Littlemore et al. (2014) likewise found that more skilled writers use metaphor more often. Song et al. (2016) observed a positive correlation between use of parallelism – syntactically similar and semantically related constructors, often used for emphasis or to enhance memorability – in student essays. Some pioneering work has been done on comparing writing that is recognized as outstanding (through receiving prestigious prizes) vs writing that is “merely” good in the domain of scientific journalism (Louis and Nenkova, 2013). Once various indicators of originality can be successfully measured, additional work may be necessary to incorporate these measurements into scoring ecosystems since such indicators may only occur infrequently. One way to achieve this would be to compute a “macro” feature that aggregates multiple such indicators, another would be to direct such essays to a human rater for review. 2.2.2 Gaming Won’t this grading system be easy to con? Can’t the shrewd student just put in the proxies which will get a good grade? (Page, 1966) Certainly, students can and do employ gaming strategies to discover and exploit weaknesses of AWE systems. Such strategies can involve repeating the same paragraphs over and over, varying sentence structure, replacing words with more sophisticated variants, re-using words from the prompt, using general academic words, plagiarizing from other responses or from material found on the Internet, inserting unnecessary shell language – linguistic scaffolding for organizing claims and arguments, and automated generation of essays (Powers et al., 2001; Bejar et al., 2013, 2014; Higgins and Heilman, 2014; Sobel et al., 2014). Such strategies are generally handled by building in filters or flags for aberrant responses (Higgins et al., 2006; Zhang et al., 2016; Yoon et al., 2018; Cahill et al., 2018). However, developers of AWE systems can never anticipate all possible strategies and may have to react quickly as new ones are discovered in use, by developing new AWE methods to identify them. This cat-andmouse game is particularly rampant in the context of standardized testing (§3.2). This is one of the reasons standardized tests are often not scored solely by an AWE system but also by a human rater. 2.2.3 Content We are talking awfully casually about grading subject matter like history. Isn’t this a wholly different sort of problem? Aren’t we supposed to see that what the students are saying makes sense, above and beyond their using commas in the right places? 
(Page, 1966) Indeed, work has been done over the last decade on automated evaluation of written responses for their content and not their general writing quality (Sukkarieh and Bolge, 2008; Mohler et al., 2011; Ziai et al., 2012; Basu et al., 2013; Madnani et al., 2013; Ramachandran et al., 2015; Burrows et al., 2015; Sakaguchi et al., 2015; Madnani et al., 2016; Padó, 2016; Madnani et al., 2017a; Riordan et al., 2017; Kumar et al., 2017; Horbach et al., 2018; Riordan et al., 2019). Scoring for content focuses primarily on what students know, have learned, or can do in a specific subject area such as Computer Science, Biology, or Music, with the fluency of the response being secondary. For example, some spelling or grammar errors are acceptable as long as the desired specific information (e.g., scientific principles, trends in a graph, or details from a reading passage) is included in the response. Note that most current content scoring systems ascertain the “correctness" of a response based on its similarity to other responses that humans have deemed to be correct or, at least, high-scoring; they do not employ explicit fact-checking or reasoning for this purpose. Concerns about specific content extends to other cases where the scoring system needs to pay 7798 attention to details of genre and task – not all essays are five-paragraph persuasive essays; the specific task might require assessing whether the student has appropriately used specific source materials (Beigman Klebanov et al., 2014; Rahimi et al., 2017; Zhang and Litman, 2018) or assessing narrative (Somasundaran et al., 2018) or reflective (Beigman Klebanov et al., 2016a; Luo and Litman, 2016), rather than persuasive, writing. 2.2.4 Feedback Page emphasized the importance of feedback, and considered the following to be “the sort of feedback that can almost be programmed right now” (original italics): John [.. .], please correct the following misspellings: believe, receive. Note the ie/ei problem. You overuse the words interesting, good, nice; then was repeated six times. Check trite expressions. All of your sentences are of the subject-verb variety and all are declarative. Reconstruct. Check subject-verb agreement in second paragraph. You had trouble with this in your last paper. Title lacking. Do the following related assignments for tomorrow . .. (Page, 1966) Today a substantial amount of writing feedback, particularly about spelling and grammar, is incorporated into widely used text editors such as Microsoft Word, Google Docs, and Overleaf. 
Dedicated writing assistance software such as ETS’s Writing Mentor®5 (Burstein et al., 2018), ASU’s Writing Pal6 (Roscoe and McNamara, 2013; Allen et al., 2014), ETS’ Criterion®7 (Burstein et al., 2004), Grammarly’s Writing Assistant,8 CambridgeEnglish’s Write & Improve,9 Ginger’s Essay Checker,10 TurnItIn’s Revision Assistant,11 Vantage Learning’s MY Access!,12 Pearson’s My Writing Lab Writing Practice Module and WriteToLearn™13,14 typically go beyond grammar 5https://mentormywriting.org/ 6http://www.adaptiveliteracy.com/writing-pal 7http://www.ets.org/criterion 8https://www.grammarly.com/ 9https://writeandimprove.com/ 10https://www.gingersoftware.com/essay-checker 11https://www.turnitin.com/products/ revision-assistant 12http://www.vantagelearning.com/products/ my-access-school-edition/ 13https://www.pearsonmylabandmastering.com 14http://wtl.pearsonkt.com and spelling.15 Such tools provide feedback on discourse structure (Criterion), topic development and coherence (Writing Mentor), tone (Writing Assistant, Rao and Tetreault (2018)), thesis relevance (Writing Pal), sentence “spicing” through suggestions of synonyms and idioms (Ginger’s Sentence Rephraser), and style & argumentationrelated feedback (Revision Assistant). Can we then put a green check-mark against Page’s agenda for automated feedback, which “may magnify and disseminate the best human capacities to criticize, evaluate, and correct”? Alas, not yet; research on effectiveness of automated feedback on writing is inconclusive (Englert et al., 2007; Shermis et al., 2008; Grimes and Warschauer, 2010; Choi, 2010; Roscoe and McNamara, 2013; Wilson and Czik, 2016; Wilson, 2017; Bai and Hu, 2017; Ranalli et al., 2017). One potential reason for the different outcomes is difference in user populations – feedback that works for L1 writers might not work for L2 writers; differences in ages, skill levels, presence or absence of learning disabilities could all play a role. Adjustment of the evaluation methodology to the specific purpose of the writing assistance tool is another issue for consideration; we will return to this issue in §4. 3 Going off the Page So far, Page’s outline of the promises and challenges of AWE have provided a good framework for surveying the field. There are also a number of developments that were not mapped on Page’s chart; we turn to reviewing those next. 3.1 Assessing writing in multiple languages In order to advance the work on understanding and assessing writing quality, there is clearly a need for a multi-lingual perspective, since methods developed for one language or dialect may not work for another. This consideration does not appear in Page (1966), yet it is an active line of subsequent work. While most of the research we cited so far has been on English, various aspects of writing evaluation, e.g., annotation, detection of various types of errors, and building AWE systems, have been researched for a variety of languages: Song et al. (2016), Rao et al. (2017), Shiue et al. (2017) worked with data in Chinese, 15Writing Pal does not provide specific grammar and spelling feedback. 7799 Lorenzen et al. (2019) in Danish, Berggren et al. (2019) in Norwegian, Amorim and Veloso (2017) in Portuguese, Stymne et al. (2017) in Swedish, Berkling (2018) and Weiss and Meurers (2019) in German, Mezher and Omar (2016) in Arabic, Kakkonen et al. (2005) in Finnish, Loraksa and Peachavanish (2007) in Thai, Lemaire and Dessus (2001) in French, and Ishioka and Kameda (2006) in Japanese. 
The list is by no means exhaustive; see Flor and Cahill (2020) for a recent review. 3.2 Standardized Testing The use of automated evaluation technology envisioned by Page was as a service to reduce a teacher’s burden; to eventually “lift from the shoulders of the English teacher, that brave and harried soul, his perpetual pressure of unassigned papers, or his unassuaged guilt.” While such use has certainly been made (Burstein et al., 2004; Grimes and Warschauer, 2010), the most visible use case for AWE technology has arguably evolved to be in the context of standardized testing, be it for a test of English such as TOEFL® or PTE, a broader, more advanced psychometric examination such as the GRE® or GMAT, or for professional licensure such as AICPA or PRAXIS®. This development of often high-stakes usage has led to somewhat different challenges from those that Page had anticipated. These challenges generally fall under the purview of the field of educational measurement (Bennett and Bejar, 1998; Clauser et al., 2002; Williamson et al., 2012): How to ensure that the automatic scores assigned to test takers are (1) valid, i.e., they actually measure the skill that the test developer designed the test to measure, (2) defensible, i.e., there is a reasonably clear explanation of why test takers received the particular scores they did, and (3) fair to all the test takers. We address each of these challenges separately below. Note that an additional challenge of high-stakes usage, not elaborated on here, is how to architect scoring systems for large-scale, low-latency use which requires them to be reliable, scalable, flexible, and attentive to the choice of software and application frameworks (Madnani et al., 2018). 3.2.1 Construct Validity Page declares that he is not after “generating measures of what the true characteristics of the essays are, as ordinarily discussed by human raters” but rather is content “to settle for the correlates of these true characteristics.” Page seems to do away rather quickly with trying to measure the actual thing – the set of all and only “true characteristics of essays”, or trins. Why is that? He explains: Notwithstanding the wonders of the computer, we have to develop a strategy in order to tell the computer what to do. The difficult part is the development of this strategy. It is difficult because we do not really understand what the psychological components are in the judgment of essays. It is easy enough to get persons to expound authoritatively on such judgment, but the fuzziness and inutility of their thinking becomes at once evident when the effort is made to translate it into a computer program. (Page, 1966) Page’s argument is that we do not know precisely enough what the human raters are doing to try and implement that. Some work on rater cognition has already been done in the early 1950s and 1960s, e.g., in the context of the College Entrance Examination Board’s development of the General Composition Test. Diederich et al. (1961) had 53 distinguished individuals from various academic disciplines and beyond (English, Social Science, Natural Science, Law, Writers and Editors, Business Executives) sort student essays “in order of merit”, with no definition thereof, instructing readers as follows: Use your own judgment as to what constitutes “writing ability.” Do not assume that we want you to do this or that. We want you to use whatever hunches, intuitions, or preferences you normally use in deciding that one piece of writing is better than another. 
You need not even act as a representative of your field, since individuals in any field have varying tastes and standards. Readers were also asked to a write brief comments on anything that they liked or disliked about the essay, on as many essays as possible. For the study, a sample of U.S. college freshmen were asked to write essays in response to four topics as part of homework. A total of 300 essays addressing two topics were chosen for the analyses, sampled so as to make sure that the full range of abilities is represented (approximated via SAT Verbal 7800 scores). The researchers performed a factor analysis on the matrix of pairwise correlations among the readers, and identified groups of readers (factors) that represent five “schools of thought” about writing quality. Analyzing the comments made by readers who belong to the different “schools of thought”, they identified five categories that were each prioritized by one of the groups of readers: 1. Ideas (including relevance, clarity, quantity, development, persuasiveness) 2. Form (including spelling, organization, analysis, coherence) 3. Flavor (including style, originality, quality of ideas, interest, sincerity) 4. Mechanics (including punctuation, grammar, sentence structure, phrasing) 5. Wording (including felicity of expression, comments on specific word choices, cliches) It is based on such findings above that general scoring criteria have emerged (Deane, 2013) and morphed into scoring rubrics. These are explicit criteria set by and for human raters for evaluating essays. For example, to score highly on the GRE® Issue essay-writing task,16 one typically: • articulates a clear and insightful position on the issue in accordance with the assigned task • develops the position fully with compelling reasons and/or persuasive examples • sustains a well-focused, well-organized analysis, connecting ideas logically • conveys ideas fluently and precisely, using effective vocabulary and sentence variety • demonstrates superior facility with the conventions of standard written English (i.e., grammar, usage and mechanics), but may have minor errors In the current practice of automated scoring of standardized tests, developers of a scoring engine often need to provide a construct validity argument in order to show that what the system is measuring is actually aligned with the “writing construct” – the actual set of writing skills that the test is supposed to measure. 16https://www.ets.org/gre/revised_general/ prepare/analytical_writing/issue/scoring_guide Some of the items in a human-oriented scoring rubrics are amenable to reasonably direct implementation, often with the help of human-annotated gold standard data such as misspellings (Flor, 2012; Flor and Futagi, 2013) and specific grammar errors (Rozovskaya and Roth, 2010; Leacock et al., 2014). It might be the case that the system would miss some grammar errors and declare an error where there is none, but a grammar assessment system can be built for identifying specific, observable instances of errors that a human reader focused on Mechanics would likely pick upon. For other items in a rubric, one might need to drill down, articulate a reliable guideline for humans to assess that particular aspect of the essay, annotate a substantial enough number of essays using the guidelines to make machine learning possible, and then find automatically measurable properties of essays that would provide information relevant to that particular aspect of essay quality. 
This would be a mix between what Page called a prox and a trin, in that a particular, intrinsically interesting, aspect of an essay can be identified reliably by humans, and an automated system can learn how to approximate that particular construct. Such approaches have been developed for organization (well-organized) (Burstein et al., 2003), coherence (well-focused, conveys ideas fluently) (Burstein et al., 2010; Somasundaran et al., 2014), grammaticality (facility with conventions) (Heilman et al., 2014), thesis clarity (clarity) (Persing and Ng, 2013) as well as aspects of scoring rubrics that are more task-specific, e.g., argumentation (clear position, with compelling reasons) (Stab and Gurevych, 2014; Ghosh et al., 2016; Beigman Klebanov et al., 2017; Stab and Gurevych, 2017; Carlile et al., 2018), use of evidence in the context of source-based writing (Rahimi et al., 2017). Finally, for some rubric items, it is not clear exactly how to reliably translate the relevant aspect of the writing construct into annotations guidelines, and so proxes might be employed. For example, consider Page’s argument for capturing “diction” (appropriate word choice) through word frequency – a writer who can use many different words, including rarer and often semantically nuanced ones, is likelier to make precise word choices than a writer who uses a more limited vocabulary. Attempts to capture topicality (Beigman Klebanov et al., 2016b) or development 7801 (Beigman Klebanov and Flor, 2013b; Somasundaran et al., 2016) through properties of vocabulary distribution without human annotation of topicality and development exemplify such approaches. 3.2.2 Model Interpretability Recent research has shown that more sophisticated machine learning models might perform better than simple regression-based models when it comes to predictive accuracy (Chen and He, 2013; Cummins et al., 2016; Taghipour and Ng, 2016; Alikaniotis et al., 2016; Dong et al., 2017; Dasgupta et al., 2018; Jin et al., 2018). However, unlike linear regression where stakeholders can understand how much each feature used in the model contributed to the predicted score, many of the more complex models are essentially “black boxes” and do not really lend themselves to post-hoc interpretability (Lipton, 2016). Although interpretability is an active area of research in the machine learning literature (Ribeiro et al., 2016; Koh and Liang, 2017; Doshi-Velez and Kim, 2017), it currently lags behind the research on machine learning methods. For this reason, some automated scoring systems used for high-stakes standardized testing – like ETS’s eRater (Attali and Burstein, 2006) – still use some variant of least squares linear regression as the machine learning model to predict test taker scores. 3.3 Increased Attention to Fairness It would probably not be an overstatement to say that fairness in AI is quickly becoming its own sub-field, with a new annual ACM conference on Fairness, Accountability, and Transparency having been inaugurated in 201817 and relevant research appearing at many impactful publication venues, such as Science (Caliskan et al., 2017), NIPS (Pleiss et al., 2017; Kim et al., 2018), ICML (Kearns et al., 2018), ACL (Hovy and Spruit, 2016; Sun et al., 2019; Sap et al., 2019), KDD (Speicher et al., 2018), AAAI (Zhang and Bareinboim, 2018), and others (Dwork et al., 2012; Hajian and Domingo-Ferrer, 2013). 
There is also recent work that examines fairness and ethical considerations when using AI in an education (Mayfield et al., 2019; Gardner et al., 2019). In the context of assessment, fairness considerations dictate that the test reflects the same construct(s) for the entire test taking population, that 17https://facctconference.org/ scores from the test have the same meaning for all the test taking population, and that a fair test does not offer undue advantages (or disadvantages) to some individuals because of their characteristics – such as those associated with race, ethnicity, gender, age, socioeconomic status, or linguistic or cultural background – or the test characteristics itself, e.g., the different prompts shown to different testtakers at test time. The educational measurement community has long been studying fairness in automated scoring (Williamson et al., 2012; Ramineni and Williamson, 2013; AERA, 2014) and recent progress made by the NLP community towards enhancing the usual accuracy-based evaluations with some of these psychometric analyses – from computing indicators of potential biases in automatic scores across various demographic sub-groups to computing new metrics that incorporate measurement theory to produce more reliable indicators of system performance – is quite promising (Madnani et al., 2017b; Loukina et al., 2019). 3.4 Pervasiveness of Technology Page’s gedankenexperiment on the potential of automated essay evaluation in a classroom context no doubt appeared audacious in 1966 but nothing back then could have prepared his readers to the pervasiveness of technology we are experiencing today. Today you can very literally carry your AWE system in your pocket; you can even carry several. You can use them (almost) at any time and at any place – not only in classrooms, but at home, at work, and even while texting with a friend. This is perhaps the biggest issue that Page’s vision did not address: the possibility of universal availability and the concomitant co-optation of a tool beyond its original intended purpose. Much like the calculator – invented by Blaise Pascal to help his father with the tedious arithmetic of tax collection – ended up “freeing” people from the burden of figuring out their intended tip at a restaurant through mental arithmetic, a future writing aid meant to help a student improve his argument writing assignment for a class could end up being used by a lawyer for composing his closing argument. Since such usages are on the horizon, we should consider the implications now. 7802 4 Discussion Once an invention is out in the open, it is difficult to predict what specific uses people would put it to. How do we go about evaluating the tool if we don’t know what the user’s goal is? While it isn’t possible to anticipate all specific uses, it is possible, we believe, to consider the types of uses that suggest different evaluation strategies. From the current vantage point, we see three types of uses. 4.1 Support Consequential Decision Making The first use is where a consequential decision about the writer or a related entity (such as a class or a school) is being made based on the written product. 
This use is exemplified by the application of automated scoring in a standardized testing context to decide on admissions to an institution of higher education or the granting of a professional license; other cases such as course placement decisions, coursework grading, or even extension of a job offer (where the submission of a writing sample is a part of the job application process) would belong to this type of use. In all such cases, the automated system needs to provide valid and fair scores (or other types of feedback), since the livelihood or professional trajectory of people might depend on the outcome. We have dealt with the particulars of this case in detail in §3.2.

4.2 Create a Better Written Product

The second type of use is one where the focus is on the final product, namely, the actual piece of writing produced following the writer’s use of AWE technology. In this context, it does not much matter exactly what part of the final product is due to the human and which part is due to the machine – perhaps the machine only corrected misspellings, or suggested improvements for the human to vet, or maybe the human only contributed the very first ideation, and the machine has done the rest. Perhaps all the human writer contributed was the thesis (‘I think school should start at 8 rather than 7’) and then clicked ‘submit’ to get back an essay making a cogent and convincing case in support of the thesis. Mining large textual databases for arguments and evaluating them are feasible today, as recently demonstrated by IBM’s Debater technology18 (Rinott et al., 2015; Levy et al., 2017; Gretz et al., 2019); introduce some figuration to make it more appealing (Veale et al., 2017; Veale, 2018) and storify it (Riegl and Veale, 2018; Radford et al., 2019), et voilà!

18https://www.research.ibm.com/artificial-intelligence/project-debater/

This type of use is essentially a machine’s augmentation of human ability, and is hinted at, for example, in a customer testimonial for Grammarly: “Grammarly allows me to get those communications out and feel confident that I’m putting my best foot forward. Grammarly is like a little superpower, especially when I need to be at 110%.” The human presumably remains at the same level of ability, but the product of the machine-human collaboration is superior to what the human alone could have produced. In this context, the primary evaluation criterion for AWE is the fitness of the resulting communication to its purpose, or, at least, some evidence of improvement of the product over the human’s first draft. Indeed, measurements of improvement across drafts and evidence of students’ making corrections following feedback are often used for evaluation (Attali, 2004; Lipnevich and Smith, 2008; Foltz et al., 2014; Chapelle et al., 2015). Within the product-centered evaluation paradigm, there could be various specific objectives other than the improvement of the holistic quality of the piece of writing; it could be an increase in the speed of production, or the maximization of click-through rate in an advertisement text, for example.

4.3 Help the User Learn to Write Better

The third type of use for AWE software is to help the writer improve his or her writing skill. Scores or other types of feedback are designed, in this context, to provide tutoring or guidance, not for fixing specific problems in the current piece of writing but to help the user learn more general skills that would make the first draft of their next essay better than the first draft of their current essay.
Evaluation of a tool through a demonstration of skill improvement – the efficacy of the tool – is a complicated endeavor. To demonstrate that the observed improvement in skill is specifically due to the use of the writing tool, and not due to something else happening in students’ lives and education at the same time, requires a research design that can take other potential sources of variation in outcomes into account, such as the one used in randomized controlled studies often used to assess interventions, including in education (Connolly et al., 2018); some such studies have been performed with respect to AWE tools (Rock, 2007; Wilson and Roscoe, 2020). A tool that allows for monitoring of improvement in skill (even if the improvement is due to other factors such as school instruction or participation in some activity or community) could also be useful in the broader context of skill-oriented use, as the learner and the teacher would be able to tell that improvement is happening, even if we do not know exactly why. Improvement in important aspects of learning such as motivation and self-efficacy could also provide value to the learner (Grimes and Warschauer, 2010; Wilson and Roscoe, 2020).

4.4 Relationships between Types of Use

One could argue that an ideal automated writing assistant would support all the different goals at once – help one produce better writing, help one learn, and do both in a psychometrically responsible fashion, where benefits are not restricted to certain types of users more than others – so that decision-making based on the outcome of the usage of the tool can also be supported. Indeed, the uses are not necessarily mutually exclusive. For example, the human augmentation and consequential decision use cases could apply at the same time. It is possible that, at some future point in time, spelling will be deemed to lie outside of the construct targeted by the consequential assessment of writing and spell-correction software will be made available to test-takers. However, this would require a careful examination of the impact of correction on the distributions and interpretations of the scores. In particular, Choi and Cho (2018) found that manually-vetted correction of spelling errors yielded a significant increase in scores assigned to the essays by trained raters, and that, even after controlling for the error quantity and quality predictors, the magnitude of the average gain in the score was smaller for responses with higher original scores. Add to the mix the finding that automated spelling correction is more accurate on essays that are of better quality to begin with (Flor, 2012), and it’s likely that the automated assessment of an automatically spell-corrected version of an essay might show an unexpected relationship with original scores that would need to be closely examined for bias or for an increase in construct-irrelevant variance. It is also possible that the effect of using a tool optimized for one use case could be the opposite of what another use case requires. If ‘use it or lose it’ has any truth to it, a potential consequence of extensive, consistent, and pervasive human augmentation for producing superior written products is an adverse impact on the skill of the human in the human-machine team.
If the near universal adoption of calculators is any guide, once a skill (long division) can be reliably outsourced to a machine, humans stop valuing it in daily practice and, therefore, might set out to lose it in the long run.19 Spelling is a likely candidate writing skill where reliable access to high quality correction software could make humans stop worrying about it rather than invest effort in improving it. Many of the tools mentioned in §2.2.4 seem to position themselves somewhere between the skillimprovement and the product-improvement use cases, perhaps assuming that quantity will eventually turn into quality, namely, extensive work on improving the written product might lead to internalization and generalization of the skill to new contexts. This might or might not be true. Feedback that helps the user fix an error quickly by pointing it out and by suggesting a correction might be good in a product-oriented context, but not in a skill-oriented context; letting the user pinpoint and fix the error himself or herself might be a better skill-development strategy (Hyland and Hyland, 2006). According to Graham and Perin (2007) meta-analysis of writing interventions for adolescents, explicit grammar instruction tended to be ineffective; this finding is cited by the developers for Writing Pal to support their decision to forgo giving explicit feedback on grammar (McNamara et al., 2013), in contrast to most other AWE systems that do provide such feedback. 5 Summary & Conclusion In his visionary paper from 1966, Ellis Page provided a proof-of-concept demonstration of the possibility of automated grading of essays, as well 191989 Curriculum and Evaluation Standards for School Mathematics from the National Council of Teachers of Mathematics recommend in the Summary of Changes to Content and Emphasis in K-4 Mathematics (p.21) decreasing the attention devoted to long division specifically and to “complex paper-and-pencil computations” in general; the recommendation for grades 5-8 is likewise to decrease emphasis on “tedious paper-and-pencil computations” (p.71). https: //archive.org/details/curriculumevalua00nati. The document has sparked substantial controversy, including with regards to long division (Klein and Milgram, 2000). 7804 as outlined some potential challenges to its adoption. Subsequent research and practice have delivered on Page’s minimum desiderata for an AWE system; current research is working to address the outstanding challenges dealing with a variety of languages, content domains, and writing tasks. The field of AWE has thus progressed according to the trajectory charted by Page to a large extent, though not completely. In particular, while Page imagined the main use case of AWE to be in the service of a harried English teacher and his feedback-thirsty students, in reality, the most visible use case has arguably evolved to be automated scoring of essays for standardized testing, which, in turn, has led to new challenges, such as ensuring the validity and fairness of scores. The other development that Page could not anticipate is the sheer pervasiveness of technology in people’s daily lives; AWE software can be made available not only in classrooms to be used under the watchful eye of the English teacher, but (almost) anywhere and at any time, including on mobile devices. 
While it is difficult to predict specific uses people would find for such software, we outlined a number of types of use, depending on the goal: (a) consequential decision making about the user; (b) delivery of the best possible written product in partnership with the user; and (c) assisting the user in improving her writing skills. We believe that we, as researchers, can help users find value in our technology by considering the goals, engaging partners from other relevant disciplines, and designing the tools as well as their evaluations to focus on specific types of use. Acknowledgements We would like to thank our colleagues Anastassia Loukina, Jill Burstein, Aoife Cahill, and Isaac Bejar, as well as ACL reviewers and area chair, for their thoughtful comments on earlier drafts of this paper. References AERA. 2014. Standards for Educational and Psychological Testing. American Educational Research Association. Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 715–725. Laura Allen, Scott Crossley, Erica Snow, and Danielle McNamara. 2014. L2 writing practice: Game enjoyment as a key to engagement. Language Learning & Technology, 18(2):124–150. Evelin Amorim and Adriano Veloso. 2017. A multiaspect analysis of automatic essay scoring for Brazilian Portuguese. In Proceedings of the Student Research Workshop at the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 94–102. Yigal Attali. 2004. Exploring the feedback and revision features of criterion. Journal of Second Language Writing, 14:191–205. Yigal Attali and Jill Burstein. 2006. Automated Essay Scoring with e-rater˝o V. 2. The Journal of Technology, Learning and Assessment, 4(3):1–30. Lifang Bai and Guangwei Hu. 2017. In the face of fallible awe feedback: how do students respond? Educational Psychology, 37(1):67–81. Sumit Basu, Chuck Jacobs, and Lucy Vanderwende. 2013. Powergrading: a Clustering Aapproach to Amplify Human Effort for Short Answer Grading. Transactions of the Association for Computational Linguistics, 1:391–402. Beata Beigman Klebanov, Jill Burstein, Judith Harackiewicz, Stacy Priniski, and Matthew Mulholland. 2016a. Enhancing STEM motivation through personal and communal values: NLP for assessment of utility value in student writing. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 199–205. Beata Beigman Klebanov and Michael Flor. 2013a. Argumentation-relevant metaphors in test-taker essays. In Proceedings of the First Workshop on Metaphor in NLP, pages 11–20. Beata Beigman Klebanov and Michael Flor. 2013b. Word association profiles and their use for automated scoring of essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1148–1158. Beata Beigman Klebanov, Michael Flor, and Binod Gyawali. 2016b. Topicality-Based Indices for Essay Scoring. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 63–72. Beata Beigman Klebanov, Binod Gyawali, and Yi Song. 2017. Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 244–249. Beata Beigman Klebanov, Chee Wee (Ben) Leong, and Michael Flor. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7811 Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly Nora Kassner, Hinrich Sch¨utze Center for Information and Language Processing (CIS) LMU Munich, Germany [email protected] Abstract Building on Petroni et al. (2019), we propose two new probing tasks analyzing factual knowledge stored in Pretrained Language Models (PLMs). (1) Negation. We find that PLMs do not distinguish between negated (“Birds cannot [MASK]”) and non-negated (“Birds can [MASK]”) cloze questions. (2) Mispriming. Inspired by priming methods in human psychology, we add “misprimes” to cloze questions (“Talk? Birds can [MASK]”). We find that PLMs are easily distracted by misprimes. These results suggest that PLMs still have a long way to go to adequately learn human-like factual knowledge. 1 Introduction PLMs like Transformer-XL (Dai et al., 2019), ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have emerged as universal tools that capture a diverse range of linguistic and factual knowledge. Recently, Petroni et al. (2019) introduced LAMA (LAnguage Model Analysis) to investigate whether PLMs can recall factual knowledge that is part of their training corpus. Since the PLM training objective is to predict masked tokens, question answering (QA) tasks can be reformulated as cloze questions. For example, “Who wrote ‘Dubliners’?” is reformulated as “[MASK] wrote ‘Dubliners’.” In this setup, Petroni et al. (2019) show that PLMs outperform automatically extracted knowledge bases on QA. In this paper, we investigate this capability of PLMs in the context of (1) negation and what we call (2) mispriming. (1) Negation. To study the effect of negation on PLMs, we introduce the negated LAMA dataset. We insert negation elements (e.g., “not”) in LAMA cloze questions (e.g., “The theory of relativity was not developed by [MASK].”) – this gives us positive/negative pairs of cloze questions. Querying PLMs with these pairs and comparing the predictions, we find that the predicted fillers have high overlap. Models are equally prone to generate facts (“Birds can fly”) and their incorrect negation (“Birds cannot fly”). We find that BERT handles negation best among PLMs, but it still fails badly on most negated probes. In a second experiment, we show that BERT can in principle memorize both positive and negative facts correctly if they occur in training, but that it poorly generalizes to unseen sentences (positive and negative). However, after finetuning, BERT does learn to correctly classify unseen facts as true/false. (2) Mispriming. We use priming, a standard experimental method in human psychology (Tulving and Schacter, 1990) where a first stimulus (e.g., “dog”) can influence the response to a second stimulus (e.g., “wolf” in response to “name an animal”). Our novel idea is to use priming for probing PLMs, specifically mispriming: we give automatically generated misprimes to PLMs that would not mislead humans. For example, we add “Talk? Birds can [MASK]” to LAMA where “Talk?” is the misprime. A human would ignore the misprime, stick to what she knows and produce a filler like “fly”. We show that, in contrast, PLMs are misled and fill in “talk” for the mask. We could have manually generated more natural misprimes. 
For example, misprime “regent of Antioch” in “Tancred, regent of Antioch, played a role in the conquest of [MASK]” tricks BERT into chosing the filler “Antioch” (instead of “Jerusalem”). Our automatic misprimes are less natural, but automatic generation allows us to create a large misprime dataset for this initial study. Contribution. We show that PLMs’ ability to learn factual knowledge is – in contrast to human capabilities – extremely brittle for negated sentences and for sentences preceded by distracting material (i.e., misprimes). Data and code will be 7812 published.1 2 Data and Models LAMA’s cloze questions are generated from subject-relation-object triples from knowledge bases (KBs) and question-answer pairs. For KB triples, cloze questions are generated, for each relation, by a templatic statement that contains variables X and Y for subject and object (e.g, “X was born in Y”). We then substitute the subject for X and MASK for Y. In a question-answer pair, we MASK the answer. LAMA is based on several sources: (i) GoogleRE. 3 relations: “place of birth”, “date of birth”, “place of death”. (ii) T-REx (Elsahar et al., 2018). Subset of Wikidata triples. 41 relations. (iii) ConceptNet (Li et al., 2016). 16 commonsense relations. The underlying corpus provides matching statements to query PLMs. (iv) SQuAD (Rajpurkar et al., 2016). Subset of 305 context-insensitive questions, reworded as cloze questions. We use the source code provided by Petroni et al. (2019) and Wolf et al. (2019) to evaluate Transformer-XL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERTlarge (Bl). Negated LAMA. We created negated LAMA by manually inserting a negation element in each template or question. For ConceptNet we only consider an easy-to-negate subset (see appendix). Misprimed LAMA. We misprime LAMA by inserting an incorrect word and a question mark at the beginning of a statement; e.g., “Talk?” in “Talk? Birds can [MASK].” We only misprime questions that are answered correctly by BERTlarge. To make sure the misprime is misleading, we manually remove correct primes for SQuAD and ConceptNet and automatically remove primes that are the correct filler for a different instance of the same relation for T-REx and ConceptNet. We create four versions of misprimed LAMA (A, B, C, D) as described in the caption of Table 3; Table 1 gives examples. 3 Results Negated LAMA. Table 2 gives spearman rank correlation ρ and % overlap in rank 1 predictions between original and negated LAMA. Our assumption is that the correct answers for a pair of positive question and negative question 1https://github.com/norakassner/LAMA primed negated Version Query A Dinosaurs? Munich is located in [MASK] . B Somalia? Munich is located in [MASK] . C Prussia? Munich is located in [MASK] . D Prussia? “This is great”. . . . “What a surprise.” “Good to know.” . . . Munich is located in [MASK] . Table 1: Examples for different versions of misprimes: (A) are randomly chosen, (B) are randomly chosen from correct fillers of different instances of the relation, (C) were top-ranked fillers for the original cloze question but have at least a 30% lower prediction probability than the correct object. (D) is like (C) except that 20 short neutral sentences are inserted between misprime and MASK sentence. should not overlap, so high values indicate lack of understanding of negation. The two measures are complementary and yet agree very well. 
The correlation measure is sensitive in distinguishing cases where negation has a small effect from those where it has a larger effect.2 % overlap is a measure that is direct and easy to interpret. In most cases, ρ > 85%; overlap in rank 1 predictions is also high. ConcepNet results are most strongly correlated but TREx 1-1 results are less overlapping. Table 4 gives examples (lines marked “N”). BERT has slightly better results. Google-RE date of birth is an outlier because the pattern “X (not born in [MASK])” rarely occurs in corpora and predictions are often nonsensical. In summary, PLMs poorly distinguish positive and negative sentences. We give two examples of the few cases where PLMs make correct predictions, i.e., they solve the cloze task as human subjects would. For “The capital of X is not Y” (TREX, 1-1) top ranked predictions are “listed”, “known”, “mentioned” (vs. cities for “The capital of X is Y”). This is appropriate since the predicted sentences are more common than sentences like “The capital of X is not Paris”. For “X was born in Y”, cities are predicted, but 2A reviewer observes that spearman correlation is generally high and wonders whether high spearman correlation is really a reliable indicator of negation not changing the answer of the model. As a sanity check, we also randomly sampled, for each query correctly answered by BERT-large (e.g., “Einstein born in [MASK]”), another query with a different answer, but the same template relation (e.g., “Newton born in [MASK]”) and computed the spearman correlation between the predictions for the two queries. In general, these positive-positive spearman correlations were significantly lower than those between positive (“Einstein born in [MASK]”) and negative (“Einstein not born in [MASK]”) queries (t-test, p < 0.01). There were two exceptions (not significantly lower): T-REx 1-1 and Google-RE birth-date. 7813 Facts Rels Txl Eb E5b Bb Bl ρ % ρ % ρ % ρ % ρ % Google-RE birth-place 2937 1 92.8 47.1 97.1 28.5 96.0 22.9 89.3 11.2 88.3 20.1 birth-date 1825 1 87.8 21.9 92.5 1.5 90.7 7.5 70.4 0.1 56.8 0.3 death-place 765 1 85.8 1.4 94.3 57.8 95.9 80.7 89.8 21.7 87.0 13.2 T-REx 1-1 937 2 89.7 88.7 95.0 28.6 93.0 56.5 71.5 35.7 47.2 22.7 N-1 20006 23 90.6 46.6 96.2 78.6 96.3 89.4 87.4 52.1 84.8 45.0 N-M 13096 16 92.4 44.2 95.5 71.1 96.2 80.5 91.9 58.8 88.9 54.2 ConceptNet 2996 16 91.1 32.0 96.8 63.5 96.2 53.5 89.9 34.9 88.6 31.3 SQuAD 305 91.8 46.9 97.1 62.0 96.4 53.1 89.5 42.9 86.5 41.9 Table 2: PLMs do not distinguish positive and negative sentences. Mean spearman rank correlation (ρ) and mean percentage of overlap in first ranked predictions (%) between the original and the negated queries for TransformerXL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERT-large (Bl). for “X was not born in Y”, sometimes countries are predicted. This also seems natural: for the positive sentence, cities are more informative, for the negative, countries. Balanced corpus. Investigating this further, we train BERT-base from scratch on a synthetic corpus. Hyperparameters are listed in the appendix. The corpus contains as many positive sentences of form “xj is an” as negative sentences of form “xj is not an” where xj is drawn from a set of 200 subjects S and an from a set of 20 adjectives A. The 20 adjectives form 10 pairs of antonyms (e.g., “good”/”bad”). S is divided into 10 groups gm of 20. Finally, there is an underlying KB that defines valid adjectives for groups. For example, assume that g1 has property am = “good”. 
Then for each xi ∈g1, the sentences “xi is good” and “xi is not bad” are true. The training set is generated to contain all positive and negative sentences for 70% of the subjects. It also contains either only the positive sentences for the other 30% of subjects (in that case the negative sentences are added to test) or vice versa. Cloze questions are generated in the format “xj is [MASK]”/“xj is not [MASK]”. We test whether (i) BERT memorizes positive and negative sentences seen during training, (ii) it generalizes to the test set. As an example, a correct generalization would be “xi is not bad” if “xi is good” was part of the training set. The question is: does BERT learn, based on the patterns of positive/negative sentences and within-group regularities, to distinguish facts from non-facts. Table 5 (“pretrained BERT”) shows that BERT memorizes positive and negative sentences, but poorly generalizes to the test set for both positive and negative. The learning curves (see appendix) show that this is not due to overfitting the training data. While the training loss rises, the test precision fluctuates around a plateau. However, if we Corpus Relation Facts A B C D Google-RE birth-place 386 11.7 44.7 99.5 98.4 birth-date 25 72.0 91.7 100.0 88.0 death-place 88 14.8 47.1 98.9 98.9 T-REx 1-1 661 12.7 20.6 30.1 28.1 N-1 7034 22.1 48.3 59.9 41.2 N-M 2774 26.6 55.3 58.7 43.9 ConceptNet 146 52.1 59.6 82.9 70.6 SQuAD 51 33.3 68.6 60.8 Table 3: Absolute precision drop (from 100%, lower better) when mispriming BERT-large for the LAMA subset that was answered correctly in its original form. We insert objects that (A) are randomly chosen, (B) are randomly chosen from correct fillers of different instances of the relation (not done for SQuAD as it is not organized in relations), (C) were top-ranked fillers for the original cloze question but have at least a 30% lower prediction probability than the correct object. (D) investigates the effect of distance, manipulating (C) further by inserting a concatenation of 20 neutral sentences (e.g., “Good to know.”, see appendix) between misprime and cloze question. finetune BERT (“finetuned BERT”) on the task of classifying sentences as true/false, its test accuracy is 100%. (Recall that false sentences simply correspond to true sentence with a “not” inserted or removed.) So BERT easily learns negation if supervision is available, but fails without it. This experiment demonstrates the difficulty of learning negation through unsupervised pretraining. We suggest that the inability of pretrained BERT to distinguish true from false is a serious impediment to accurately handling factual knowledge. Misprimed LAMA. Table 3 shows the effect of mispriming on BERT-large for questions answered correctly in original LAMA; recall that Table 1 gives examples of sentences constructed in modes A, B, C and D. In most cases, mispriming with a highly ranked incorrect object causes a precision drop of over 60% (C). Example predictions can be found in Table 4 (lines marked “M”). This sensi7814 cloze question true top 3 words generated with log probs Google RE O Marcel Oopa died in the city of [MASK]. Paris Paris (-2.3), Lausanne (-3.3), Brussels (-3.3) N Marcel Oopa did not die in the city of [MASK]. Paris (-2.4), Helsinki (-3.5), Warsaw (-3.5) M Yokohama? Marcel Oopa died in the city of [MASK]. Yokohama (-1.0), Tokyo (-2.5), Paris (-3.0) O Anatoly Alexine was born in the city of [MASK]. 
Moscow Moscow (-1.2), Kiev (-1.6), Odessa (-2.5) N Anatoly Alexine was not born in the city of [MASK]. Moscow (-1.2), Kiev (-1.5), Novgorod (-2.5) M Kiev? Anatoly Alexine was born in the city of [MASK]. Kiev (-0.0), Moscow (-6.1), Vilnius (-7.0) TERx O Platonism is named after [MASK] . Plato Plato (-1.5), Aristotle (-3.5), Locke (-5.8) N Platonism is not named after [MASK]. Plato (-0.24), Aristotle (-2.5), Locke (-5.7) M Cicero? Platonism is named after [MASK]. Cicero (-2.3), Plato ( -3.5), Aristotle (-5.1) O Lexus is owned by [MASK] . Toyota Toyota (-1.4), Renault (-2.0), Nissan (-2.4) N Lexus is not owned by [MASK]. Ferrari (-1.0), Fiat (-1.4), BMW (-3.7) M Microsoft? Lexus is owned by [MASK] . Microsoft (-1.2), Google ( -2.1), Toyota (-2.6) Concept Net O Birds can [MASK]. fly fly (-0.5), sing (-2.3), talk (-2.8) N Birds cannot [MASK]. fly (-0.3), sing ( -3.6), speak (-4.1) M Talk? Birds can [MASK]. talk (-0.2), fly ( -2.5), speak (-3.9) O A beagle is a type of [MASK]. dog dog (-0.1), animal (-3.7), pigeon (-4.1) N A beagle is not a type of [MASK]. dog (-0.2), horse ( -3.8), animal (-4.1) M Pigeon? A beagle is a type of [MASK]. dog (-1.3), pigeon ( -1.4), bird (-2.2) SQuAD O Quran is a [MASK] text. religious religious (-1.0), sacred (-1.8), Muslim (-3.2) N Quran is not a [MASK] text. religious (-1.1), sacred ( -2.3), complete (-3.3) M Secular? Quran is a [MASK] text. religious (-1.5), banned ( -2.8), secular (-3.0) O Isaac’s chains are made out of [MASK]. silver silver (-1.9), gold (-2.1), iron (-2.2) N Isaac’s chains are not made out of [MASK]. iron (-1.2), metal ( -2.1), gold (-2.1) M Iron? Isaac’s chains are made out of [MASK]. iron (-0.4), steel ( -2.8), metal (-2.8) Table 4: BERT-large examples for (O) original , (N) negated and (M) misprimed (Table 3 C) LAMA. train test pos neg pos neg pretrained BERT 0.9 0.9 0.2 0.2 finetuned BERT 1.0 1.0 1.0 1.0 Table 5: Accuracy of BERT on balanced corpus. Pretrained BERT does not model negation well, but finetuned BERT classifies sentences as true/false correctly. tivity to misprimes still exists when the distance between misprime and cloze question is increased: the drop persists when 20 sentences are inserted (D). Striking are the results for Google-RE where the model recalls almost no facts (C). Table 4 (lines marked “M”) shows predicted fillers for these misprimed sentences. BERT is less but still badly affected by misprimes that match selectional restrictions (B). The model is more robust against priming with random words (A): the precision drop is on average more than 35% lower than for (D). We included the baseline (A) as a sanity check for the precision drop measure. These baseline results show that the presence of a misprime per se does not confuse the model; a less distracting misprime (different type of entity or a completely implausible answer) often results in a correct answer by BERT. 4 Discussion Whereas Petroni et al. (2019)’s results suggest that PLMs are able to memorize facts, our results indicate that PLMs largely do not learn the meaning of negation. They mostly seem to predict fillers based on co-occurrence of subject (e.g., “Quran”) and filler (“religious”) and to ignore negation. A key problem is that in the LAMA setup, not answering (i.e., admitting ignorance) is not an option. While the prediction probability generally is somewhat lower in the negated compared to the positive answer, there is no threshold across cloze questions that could be used to distinguish valid positive from invalid negative answers (cf. 
Table 4). We suspect that a possible explanation for PLMs’ poor performance is that negated sentences occur much less frequently in training corpora. Our synthetic corpus study (Table 5) shows that BERT is able to memorize negative facts that occur in the corpus. However, the PLM objective encourages the model to predict fillers based on similar sentences in the training corpus – and if the most similar statement to a negative sentence is positive, then the filler is generally incorrect. However, after finetuning, BERT is able to classify truth/falseness correctly, demonstrating that negation can be learned through supervised training. The mispriming experiment shows that BERT often handles random misprimes correctly (Table 3 A). There are also cases where BERT does the right thing for difficult misprimes, e.g., it robustly attributes “religious” to Quran (Table 4). In general, however, BERT is highly sensitive to misleading context (Table 3 C) that would not change human 7815 behavior in QA. It is especially striking that a single word suffices to distract BERT. This may suggest that it is not knowledge that is learned by BERT, but that its performance is mainly based on similarity matching between the current context on the one hand and sentences in its training corpus and/or recent context on the other hand. Poerner et al. (2019) present a similar analysis. Our work is a new way of analyzing differences between PLMs and human-level natural language understanding. We should aspire to develop PLMs that – like humans – can handle negation and are not easily distracted by misprimes. 5 Related Work PLMs are top performers for many tasks, including QA (Kwiatkowski et al., 2019; Alberti et al., 2019). PLMs are usually finetuned (Liu et al., 2019; Devlin et al., 2019), but recent work has applied models without finetuning (Radford et al., 2019; Petroni et al., 2019). Bosselut et al. (2019) investigate PLMs’ common sense knowledge, but do not consider negation explicitly or priming. A wide range of literature analyzes linguistic knowledge stored in pretrained embeddings (Jumelet and Hupkes, 2018; Gulordava et al., 2018; Giulianelli et al., 2018; McCoy et al., 2019; Dasgupta et al., 2018; Marvin and Linzen, 2018; Warstadt and Bowman, 2019; Kann et al., 2019). Our work analyzes factual knowledge. McCoy et al. (2019) show that BERT finetuned to perform natural language inference heavily relies on syntactic heuristics, also suggesting that it is not able to adequately acquire common sense. Warstadt et al. (2019) investigate BERT’s understanding of how negative polarity items are licensed. Our work, focusing on factual knowledge stored in negated sentences, is complementary since grammaticality and factuality are mostly orthogonal properties. Kim et al. (2019) investigate understanding of negation particles when PLMs are finetuned. In contrast, our focus is on the interaction of negation and factual knowledge learned in pretraining. Ettinger (2019) defines and applies psycho-linguistic diagnostics for PLMs. Our use of priming is complementary. Their data consists of two sets of 72 and 16 sentences whereas we create 42,867 negated sentences covering a wide range of topics and relations. Ribeiro et al. (2018) test for comprehension of minimally modified sentences in an adversarial setup while trying to keep the overall semantics the same. In contrast, we investigate large changes of meaning (negation) and context (mispriming). 
In contrast to adversarial work (e.g., (Wallace et al., 2019)), we do not focus on adversarial examples for a specific task, but on pretrained models’ ability to robustly store factual knowledge. 6 Conclusion Our results suggest that pretrained language models address open domain QA in datasets like LAMA by mechanisms that are more akin to relatively shallow pattern matching than the recall of learned factual knowledge and inference. Implications for future work on pretrained language models. (i) Both factual knowledge and logic are discrete phenomena in the sense that sentences with similar representations in current pretrained language models differ sharply in factuality and truth value (e.g., “Newton was born in 1641” vs. “Newton was born in 1642”). Further architectural innovations in deep learning seem necessary to deal with such discrete phenomena. (ii) We found that PLMs have difficulty distinguishing “informed” best guesses (based on information extracted from training corpora) from “random” best guesses (made in the absence of any evidence in the training corpora). This implies that better confidence assessment of PLM predictions is needed. (iii) Our premise was that we should emulate human language processing and that therefore tasks that are easy for humans are good tests for NLP models. To the extent this is true, the two phenomena we have investigated in this paper – that PLMs seem to ignore negation in many cases and that they are easily confused by simple distractors – seem to be good vehicles for encouraging the development of PLMs whose performance on NLP tasks is closer to humans. Acknowledgements. We thank the reviewers for their constructive criticism. This work was funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A and by the European Research Council (Grant No. 740516). The authors of this work take full responsibility for its content. 7816 References Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT baseline for the natural questions. ArXiv, abs/1901.08634. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics. Ishita Dasgupta, Demi Guo, Andreas Stuhlm¨uller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embeddings. arXiv preprint arXiv:1802.04302. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. 
T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Allyson Ettinger. 2019. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240–248, Brussels, Belgium. Association for Computational Linguistics. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, New Orleans, Louisiana. Association for Computational Linguistics. Jaap Jumelet and Dieuwke Hupkes. 2018. Do language models understand anything? on the ability of LSTMs to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222–231, Brussels, Belgium. Association for Computational Linguistics. Katharina Kann, Alex Warstadt, Adina Williams, and Samuel R. Bowman. 2019. Verb argument structure alternations in word and sentence embeddings. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 287–297. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235–249, Minneapolis, Minnesota. Association for Computational Linguistics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466. Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1445–1455, Berlin, Germany. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. 
Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association 7817 for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Fabio Petroni, Tim Rockt¨aschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Nina Poerner, Ulli Waltinger, and Hinrich Sch¨utze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. ArXiv, abs/1911.03681. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865, Melbourne, Australia. Association for Computational Linguistics. Endel Tulving and Daniel Schacter. 1990. Priming and human memory systems. Science, 247(4940):301– 306. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. Alex Warstadt and Samuel R. Bowman. 2019. Grammatical analysis of pretrained sentence encoders with acceptability judgments. ArXiv, abs/1901.03438. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT’s knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877–2887, Hong Kong, China. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 
A Appendix

A.1 Details on LAMA

We use the source code provided by Petroni et al. (2019) (github.com/facebookresearch/LAMA). T-REx, parts of ConceptNet, and SQuAD allow multiple true answers (N-M). To ensure single true objects for Google-RE, we reformulate the templates asking for location to specifically ask for cities (e.g., "born in [MASK]" to "born in the city of [MASK]"). We do not change any other templates; T-REx still queries for "died in [MASK]".

A.1.1 Details on negated LAMA

For ConceptNet we extract an easy-to-negate subset. The final subset includes 2,996 of the 11,458 samples. We proceed as follows:
1. We only negate sentences of maximal token sequence length 4, or if we find a match with one of the following patterns: "is a type of", "is made of", "is part of", "are made of", "can be made of", "are a type of", "are a part of".
2. The selected subset is automatically negated by a manually created verb negation dictionary.

A.1.2 Details on misprimed LAMA

To investigate the effect of distance between the prime and the cloze question, we insert a concatenation of up to 20 "neutral" sentences. The longest sequence has 89 byte pair encodings. Even the distance created by concatenating all 20 sentences did not lessen the effect of the prime much. The sentences used are: "This is great.", "This is interesting.", "Hold this thought.", "What a surprise.", "Good to know.", "Pretty awesome stuff.", "Nice seeing you.", "Let's meet again soon.", "This is nice.", "Have a nice time.", "That is okay.", "Long time no see.", "What a day.", "Wonderful story.", "That's new to me.", "Very cool.", "Till next time.", "That's enough.", "This is amazing.", "I will think about it."

A.2 Details on the balanced corpus

We pretrain BERT-base from scratch on a corpus of equally many negative and positive sentences. We concatenate multiple copies of the same training data into one training file to compensate for the small amount of data. Hyper-parameters for pretraining are listed in Table 6. The full vocabulary is 349 tokens long. Figure 1 shows that training loss and test accuracy are uncorrelated: test accuracy stagnates around 0.5, which is no better than random guessing, since for each relation half of the adjectives hold.

Figure 1: Training loss and test accuracy when pretraining BERT-base on a balanced corpus. The model is able to memorize positive and negative sentences seen during training but is not able to generalize to an unseen test set for both positive and negative sentences.

batch size: 512
learning rate: 6e-5
number of epochs: 2000
max. sequence length: 13
Table 6: Hyper-parameters for pretraining BERT-base on a balanced corpus of negative and positive sentences.

We finetune on the task of classifying sentences as true/false, again concatenating multiple copies of the same training data into one training file. Hyper-parameters for finetuning are listed in Table 7. We use the source code provided by Wolf et al. (2019) (github.com/huggingface/transformers).

batch size: 32
learning rate: 4e-5
number of epochs: 20
max. sequence length: 7
Table 7: Hyper-parameters for finetuning on the task of classifying sentences as true/false.
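To make the construction of the balanced corpus concrete, the following is a small sketch of how such a corpus can be generated. The subject names, the particular antonym pairs, the one-pair-per-group assignment, the exact 70/15/15 split, and the random seed are illustrative assumptions of ours; the authors' actual corpus may differ in these details.

import random

random.seed(0)

# 10 antonym pairs -> 20 adjectives (our illustrative choices, not the authors' list)
ANTONYMS = [("good", "bad"), ("big", "small"), ("hot", "cold"), ("fast", "slow"),
            ("old", "young"), ("rich", "poor"), ("tall", "short"), ("light", "dark"),
            ("clean", "dirty"), ("happy", "sad")]

# 200 subjects, divided into 10 groups of 20
SUBJECTS = [f"x{i}" for i in range(200)]
GROUPS = [SUBJECTS[i * 20:(i + 1) * 20] for i in range(10)]

train, test = [], []
for (adjective, opposite), group in zip(ANTONYMS, GROUPS):
    # Simplified underlying KB: group m has the m-th adjective as a valid property,
    # so "x is <adjective>" and "x is not <opposite>" are the true sentences for x.
    for subject in group:
        positive = f"{subject} is {adjective} ."
        negated = f"{subject} is not {opposite} ."
        r = random.random()
        if r < 0.70:              # 70% of subjects: both forms occur in training
            train += [positive, negated]
        elif r < 0.85:            # only the positive form is seen ...
            train.append(positive)
            test.append(negated)  # ... and its negation must be inferred at test time
        else:                     # only the negated form is seen
            train.append(negated)
            test.append(positive)

# Cloze questions for probing then take the form "x42 is [MASK] ." / "x42 is not [MASK] ."

BERT-base is then pretrained from scratch on the resulting training file with the hyper-parameters in Table 6, and evaluated on the held-out positive and negative sentences.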
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7819–7827 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7819 On Forgetting to Cite Older Papers: An Analysis of the ACL Anthology Marcel Bollmann Department of Computer Science University of Copenhagen [email protected] Desmond Elliott Department of Computer Science University of Copenhagen [email protected] Abstract The field of natural language processing is experiencing a period of unprecedented growth, and with it a surge of published papers. This represents an opportunity for us to take stock of how we cite the work of other researchers, and whether this growth comes at the expense of “forgetting” about older literature. In this paper, we address this question through bibliographic analysis. We analyze the age of outgoing citations in papers published at selected ACL venues between 2010 and 2019, finding that there is indeed a tendency for recent papers to cite more recent work, but the rate at which papers older than 15 years are cited has remained relatively stable. 1 Introduction “This paper does not cite any literature from before the neural network era.” Scientific progress benefits from researchers “standing on the shoulders of giants” and one way for researchers to recognise those shoulders is by citing articles that influence and inform their work. The nature of citations in NLP publications has previously been analysed with regards to topic areas (Anderson et al., 2012; Gollapalli and Li, 2015; Mariani et al., 2019b), semantic relations (G´abor et al., 2016), gender issues (Vogel and Jurafsky, 2012; Schluter, 2018), the role of sharing software (Wieling et al., 2018), and citation and collaboration networks (Radev et al., 2016; Mariani et al., 2019a). Mohammad (2019) provides the most recent analysis of the ACL Anthology, looking at demographics, topic areas, and research impact via citation analysis. In this paper, we conduct a corpus analysis of papers published in recent ACL venues to determine whether the community is collectively forgetting about older papers as it experiences a period 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 250 500 750 1000 1250 1500 1750 Number of publications Venue ACL EACL EMNLP NAACL CL TACL Figure 1: The distribution of the number of articles published between 2010–2019 in the ACL Anthology. of rapid growth (see Figure 1). The Association of Computational Linguistics (ACL) is one of the largest publishers of articles in natural language processing research: it maintains the open-access ACL Anthology1 of articles that date back to the 1960s, offering a rich resource for studying NLP publications. While the aforementioned analyses have mainly focused on incoming citations, our work targets outgoing citations. We focus on the age of citations in the References section of articles published at ACL venues between 2010 and 2019 (Sec. 2), with a view to studying three questions: 1. Do recently published papers have a tendency to cite more recently published papers, and less older literature? 2. Are older papers being cited less frequently in 2019 than they were in 2010? 3. Is there a difference between publication venues with regard to the age of citations? We find that the mean age of the papers cited does indeed decrease from 2010–2019, and that this de1https://www.aclweb.org/anthology/ 7820 crease is statistically significant, with a larger effect size in recent years (Sec. 3.1). 
We also find that there is no significant difference in the rate at which older papers are cited during this period (Sec. 3.2), and that there are marked differences between the citations in journal articles and conference proceedings (Sec. 3.3). Our findings show that, at a time of rapid growth, an increasing proportion of citations are going to recently published papers, but that researchers still acknowledge that they are standing on the shoulders of their peers. 2 Data The analysis in this paper is based on a subset of articles from the ACL Anthology. While corpora of NLP publications, including the ACL Anthology, already exist (Bird et al., 2008; Radev et al., 2009; Mariani et al., 2019a), none of them include publications newer than 2015. We compiled our own dataset because we are mostly interested in the papers published in recent years. The dataset is drawn from ACL venues: conference proceedings from meetings of the ACL, EACL (European Chapter of the ACL), NAACL (North American Chapter of the ACL), and EMNLP (Empirical Methods in NLP) as well as articles from the CL (Computational Linguistics) and TACL (Transactions of the ACL) journals. Anthology statistics Figure 1 shows the distribution of the articles in the corpus: the number of articles published in these venues steadily increases from 2010–2019. The CL and TACL journals publish articles at a steady rate; the ACL conference fluctuates in size, depending on whether it is co-located with NAACL; and the EACL conference nearly doubles in size each time it takes place. In terms of whether the field is rapidly growing, we note that there was a year-on-year increase of 42% between in 2017–2018 due to the increase in the number of papers published at NAACL and EMNLP, and a 34% increase between 2018–2019. Extracting citations To extract a list of references from an article, we first extract the text stream from the PDF file via pdftotext,2 then feed it into ParsCit (Councill et al., 2008) to obtain the references.3 For each reference in this list, we 2https://gitlab.freedesktop.org/ poppler/poppler 3We note that the ParsCit maintainers recommend a newer iteration of the tool, Neural-ParsCit (Prasad et al., 2018), but we could not easily replicate the same pipeline with it. then extract and keep the parsed “date”, “author”, and “title” entries. For 1.4% of the input files, this pipeline fails to extract any references; spotchecking reveals that many of those are not regular papers (but, e.g., book reviews or front matter), some PDFs have no embedded text, and others silently fail to parse. Citation age For each publication in our dataset, we want to consider how recently each paper in its reference list was published. We calculate the age of a cited paper by subtracting its year of publication from that of the citing paper. We only keep citations in the age range [0, 50] as values outside of this range typically appeared to be parsing errors.4 As only 0.95% of parsed reference dates fall outside of this range, the effect of excluding potentially valid citations is minimal. Identifying cited papers We use authors and titles of cited papers in order to identify which individual papers are being cited. 
We find that these entries are rather noisy in our ParsCit output; therefore, we use a heuristic based on fuzzy string matching to identify citations that are likely to refer to the same paper, despite differences in their author and/or title fields.5 Dataset6 The resulting dataset covers 8,722 papers published within 2010–2019 with a total of 264,957 extracted citations;7 for conference proceedings, we only include volumes that are marked as containing either full papers or short papers.8 3 Analysis 3.1 Are more recently published papers citing more recently published papers? Figure 2 shows the distribution of the age of cited articles with respect to the year in which the source article was published; Table 1 gives some complementary statistics. The mean age of a cited paper has steadily decreased since 2013, from 7.69 years to 5.53 years in 2019; the median has dropped from 6 to 3 years in the same period. 4For example, ParsCit mistakes the journal number for the year of publication, resulting in a ∼1,900 years old citation. 5The full algorithm is described in Appendix A. 6Datasets and code are available at: https://github. com/coastalcph/acl-citations 7This includes papers that were published on the ACL Anthology before November 6, 2019. 8In particular, this excludes papers from system demonstration, student research workshop, and industry tracks. 7821 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 10 20 30 40 50 Age of citations Figure 2: Letter-value plot (Hofmann et al., 2017) showing the distribution of citation ages in the corpus, grouped by year of publication. The solid black lines denote the median, boxes correspond to quantiles. Significance and effect size To determine if the distribution of citation ages significantly differs between years, we perform Mann-Whitney U tests with p < 0.005 and Bonferroni correction on each pair of years. We calculate rank-biserial correlation scores to determine the effect size of these differences and convert them into common language effect size (CLES; McGraw and Wong, 1992) for easier interpretability.9 Results are shown in Figure 3: numbers correspond to (rounded) CLES values and can be interpreted as the probability that a randomly drawn citation from the column year will be older than a randomly drawn citation from the row year. For example, if we were to randomly draw a citation from a paper published in 2012 and one from a paper published in 2019, the former citation has a 59% probability of being strictly older than the latter (row “2019”, column “2012”). Greyed-out cells were not statistically significantly different according to the Mann-Whitney U test. The CLES scores show that a randomly drawn citation from more recent years (e.g. 2017–2019) has a significantly lower probability of being older than a randomly drawn citation from earlier years (e.g. 2010–2014). This can be seen by inspecting the columns and rows in the bottom right of Figure 3. 3.2 Are older papers cited less frequently in more recently published papers? While the previous section showed a downwards trend for average citation age in more recent pub9If r is the rank-biserial correlation coefficient, CLES is defined as r+1 2 . 
2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 49 49 49 44 42 39 36 48 48 48 43 41 38 34 44 45 44 42 40 37 33 44 45 44 42 40 37 33 44 45 44 42 40 37 33 49 49 49 44 43 39 36 49 50 51 52 52 49 45 42 38 50 51 53 53 53 50 47 42 39 53 54 56 56 56 53 50 49 42 56 57 59 59 59 56 53 51 48 47 46 45 45 47 47 46 47 46 47 47 48 Figure 3: Common language effect size (CLES) scores for the distribution of citation age (cf. Sec. 3.1 for interpretation); greyed-out cells indicate pairs where the difference in distribution was not statistically significant. Citation Age Year Count Median Mean SE 2010 12,919 5 7.27 .068 2011 12,662 5 7.38 .068 2012 14,679 5 7.63 .063 2013 21,363 6 7.69 .052 2014 21,208 6 7.66 .051 2015 25,616 5 7.21 .046 2016 26,465 4 7.00 .047 2017 30,511 4 6.69 .043 2018 42,962 3 6.26 .036 2019 56,572 3 5.53 .029 Table 1: Number of citations for each year of publication, along with median age, mean age, and standard error (SE) of the mean. lications, this does not imply that older papers are cited less frequently in absolute terms. Indeed, as there are more publications available to cite from recent years, it seems natural that they would constitute a larger relative share of cited papers, but this does not necessarily need to come at the cost of citing older papers less frequently. Figure 4 visualizes the average number of citations per paper, broken down by the age of the citation. We observe that this number steadily increases between 2010 and 2019, showing that publications in 2019 do indeed cite more papers than publications in 2010, on average. We also see that this increase is mostly due to citations of papers between 0 and 3 years old, while papers that were published 15 or more years ago are still cited at approximately the same rate now as in 2010. 7822 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 5 10 15 20 25 30 Number of citations 0 5 10 15+ Citation age Figure 4: Average number of citations per paper with a given age. Bottom (darkest) area includes all citations of age 15 or older; each area above that represents citations of the next lower age. Tracking citations to individual papers While the citation rate for “old” papers has not changed, the distribution of papers being cited may have. To investigate this, we now also consider the author and title fields of citations to track which papers are being cited. This way, we can analyze e.g. to what extent “old” papers cited in 2010 overlap with those cited in 2019. Figure 5 shows the average number of citations to papers published 15 or more years ago—corresponding to the bottom area of Fig. 4—and additionally indicates which share of these papers have already been cited in 2010. We can see that in all the other years, more than half of these “old” citations are to papers that were not cited in 2010. Table 2 shows the most frequently cited “old” papers in 2019, additionally indicating in which year we can find the earliest citation to this paper in our dataset. Perhaps unsurprisingly, the most cited papers describe very broadly applicable resources or methods. Furthermore, two of these papers— introducing the bidirectional RNN and the LSTM, respectively—have only gathered citations from 2014 onwards, while another classic reinforcement learning paper was not cited before 2016. This suggests that in recent years, a substantial part of older citations is made up of deep learning papers that have not yet been (widely) cited in 2010. 
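To make the statistics behind Figure 3 concrete, the sketch below runs the pairwise Mann-Whitney U tests with a Bonferroni correction and converts the results to rank-biserial correlation and CLES. The data structure, variable names, and the exact way the 0.005 threshold interacts with the correction are assumptions; the authors' released code (footnote 6) is the authoritative reference.

```python
# Rough sketch of the pairwise comparison behind Figure 3: Mann-Whitney U tests
# with Bonferroni correction, rank-biserial correlation, and CLES.
# `ages_by_year` is an assumed structure mapping a publication year to the list
# of citation ages extracted from papers published in that year.
from itertools import combinations
from scipy.stats import mannwhitneyu

def pairwise_cles(ages_by_year, alpha=0.005):
    years = sorted(ages_by_year)
    pairs = list(combinations(years, 2))
    results = {}
    for y1, y2 in pairs:
        x, y = ages_by_year[y1], ages_by_year[y2]
        u_stat, p_value = mannwhitneyu(x, y, alternative="two-sided")
        # In recent SciPy versions, u_stat is the U statistic of the first sample,
        # so U / (n_x * n_y) estimates P(age drawn from y1 > age drawn from y2),
        # i.e. the common language effect size (ties contribute one half each).
        cles = u_stat / (len(x) * len(y))
        r_rb = 2 * cles - 1  # rank-biserial correlation; CLES = (r + 1) / 2
        significant = p_value < alpha / len(pairs)  # Bonferroni over all pairs
        results[(y1, y2)] = {"cles": cles, "r": r_rb, "significant": significant}
    return results
```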
Ratio of papers to citations Figure 6 looks at the ratio of unique “old” papers being cited compared to the total number of citations. We observe that this ratio has steadily decreased since 2013, indicating that the stable number of citations goes 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 500 1000 1500 2000 Number of citations Cited in 2010 Not cited in 2010 Figure 5: Average number of citations per papers with age 15 or older, distinguished by whether or not they (already) have been cited in 2010. 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0.56 0.58 0.60 0.62 0.64 0.66 0.68 Ratio of papers vs. citations Figure 6: Ratio of unique citations (i.e., papers) and total citations of age 15 or older. to a continuously decreasing pool of papers. In other words, there is a reduction in the variety of older papers being cited. 3.3 Do publication venues differ in how frequently older papers are cited? Journals invite submissions that are more substantial than conference papers; it is conceivable that this is reflected in the papers they cite. Figure 7 takes a closer look at citations 15 years or older by venue of publication. The four conference venues in our dataset behave very similarly, showing around 2–4 “old” citations on average. For CL papers, on the other hand, this figure is considerably larger (up to 17 such citations on average in 2017). TACL papers also show a trend towards more older citations, but not as strong as for CL. Overall, there is a clear difference in the average number of older citations in journal articles compared to conference proceedings. 7823 Citations First cited Paper 250 2010 Papineni et al. (2002). BLEU: a method for automatic evaluation of machine translation. 117 2010 Lin (2004). ROUGE: A package for automatic evaluation of summaries. 91 2014 Hochreiter & Schmidhuber (1997). Long short-term memory. 83 2016 Williams (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. 62 2010 Lafferty et al. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 60 2010 Marcus et al. (1993). Building a large annotated corpus of English: The Penn Treebank. 53 2010 Miller (1995). WordNet: a lexical database for English. 47 2010 Blei et al. (2003). Latent dirichlet allocation. 40 2014 Schuster & Paliwal (1997). Bidirectional recurrent neural networks. 39 2010 Hu & Liu (2004). Mining and summarizing customer reviews. Table 2: The most frequently cited papers in 2019 with citation age 15 or older (i.e., published before 2005). “First cited” is the year of the earliest extracted citation to this paper in our dataset. 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2 4 6 8 10 12 14 16 18 Number of citations Venue ACL EACL EMNLP NAACL CL TACL Figure 7: Average number of citations per paper that are 15 years or older, by venue of publication. 4 Conclusions We presented an analysis of citations in publications from major ACL venues between 2010 and 2019, focusing on the distribution of the age of cited papers. We found that recently published papers (0–3 years old) are cited significantly more often in publications from recent years (ca. 2015– 2019), while papers published 15 or more years ago are being cited at a stable rate. There is also a marked difference between journal and conference publications in the distribution of citation age: journal articles feature more citations to older papers. 
These findings could be due to the increasing difficulty of keeping up with the literature, given that many more papers are being published now, in addition to the deluge of papers that appear on preprint servers. Some areas of NLP research did also not exist 15 years ago, e.g. social media analysis, potentially making it challenging to cite older related work. Finally, since several influential neural network papers have been published in the 1990s (cf. Tab. 2), a mostly quantitative analysis is limited in its ability to determine, e.g., to what extent we still engage with older literature outside of this domain. A potential confound in our analysis is that some proceedings imposed a page limit for references; e.g., the ACL conference gave unlimited space for references in 2010, 2012, and from 2016 onwards, but imposed a page limit in 2011 and 2013–2015. We can still observe an increase in the average number of citations per paper during this latter period, so it seems unlikely that this had an effect. In addition, our analysis is limited to studying the age of the papers cited in the ACL Anthology – it does not make any claims about the complex network effects involved in researchers from particular institutions, countries, or sub-fields, and it does not study other venues that also publish NLP papers. Future work includes a deeper qualitative analysis of which (type of) papers are being cited; a more fine-grained analysis of different research topics in NLP to determine whether changes are more prevalent within certain areas than others; or extending the analysis to a larger set of the papers in the ACL Anthology. Acknowledgments We would like to thank the reviewers for their helpful comments and suggestions for further analyses. Marcel Bollmann was funded from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 845995. 7824 References Ashton Anderson, Dan Jurafsky, and Daniel A. McFarland. 2012. Towards a computational history of the ACL: 1980-2008. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 13–21, Jeju Island, Korea. Association for Computational Linguistics. Steven Bird, Robert Dale, Bonnie Dorr, Bryan Gibson, Mark Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir Radev, and Yee Fan Tan. 2008. The ACL Anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco. European Language Resources Association (ELRA). Isaac Councill, C. Lee Giles, and Min-Yen Kan. 2008. ParsCit: an open-source CRF reference string parsing package. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), pages 661–667, Marrakech, Morocco. European Language Resources Association (ELRA). Kata G´abor, Ha¨ıfa Zargayouna, Davide Buscaldi, Isabelle Tellier, and Thierry Charnois. 2016. Semantic annotation of the ACL anthology corpus for the automatic analysis of scientific literature. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3694–3701, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Sujatha Das Gollapalli and Xiaoli Li. 2015. EMNLP versus ACL: Analyzing NLP research over time. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2002–2006, Lisbon, Portugal. 
Association for Computational Linguistics. Heike Hofmann, Hadley Wickham, and Karen Kafadar. 2017. Letter-value plots: Boxplots for large data. Journal of Computational and Graphical Statistics, 26(3):469–477. Joseph Mariani, Gil Francopoulo, and Patrick Paroubek. 2019a. The NLP4NLP corpus (I): 50 years of publication, collaboration and citation in speech and language processing. Frontiers in Research Metrics and Analytics, 3:36. Joseph Mariani, Gil Francopoulo, Patrick Paroubek, and Fr´ed´eric Vernier. 2019b. The NLP4NLP corpus (II): 50 years of research in speech and language processing. Frontiers in Research Metrics and Analytics, 3:37. Kenneth O. McGraw and S. P. Wong. 1992. A common language effect size statistic. Psychological Bulletin, 111(2):361–365. Saif M. Mohammad. 2019. The state of NLP literature: A diachronic analysis of the ACL Anthology. arXiv preprint arXiv:1911.03562. Animesh Prasad, Manpreet Kaur, and Min-Yen Kan. 2018. Neural ParsCit: a deep learning-based reference string parser. International Journal on Digital Libraries, 19(4):323–337. Dragomir R. Radev, Mark Thomas Joseph, Bryan Gibson, and Pradeep Muthukrishnan. 2016. A bibliometric and network analysis of the field of computational linguistics. Journal of the Association for Information Science and Technology, 67(3):683–706. Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. 2009. The ACL Anthology Network Corpus. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (NLPIR4DL), pages 54–61, Suntec City, Singapore. Association for Computational Linguistics. Natalie Schluter. 2018. The glass ceiling in NLP. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2793–2798, Brussels, Belgium. Association for Computational Linguistics. Adam Vogel and Dan Jurafsky. 2012. He said, she said: Gender in the ACL anthology. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 33–41, Jeju Island, Korea. Association for Computational Linguistics. Martijn Wieling, Josine Rawee, and Gertjan van Noord. 2018. Squib: Reproducibility in computational linguistics: Are we willing to share? Computational Linguistics, 44(4):641–649. 7825 A Fuzzy paper matching In Section 3.2, we track citations to individual papers, which requires identifying authors and titles of cited papers in addition to their year. Since this information is rather noisy in the output we obtain from ParsCit, we employ a simple matching algorithm. This algorithm heuristically matches citations with non-identical author and/or title fields which are likely to refer to the same paper. Concretely, we first preprocess the author and title fields as follows: 1. We convert strings to a pure ASCII representation.10 2. We cut off the title field after a dot–space (. ) sequence, as we found this to almost always indicate the start of the journal/proceedings/booktitle field (which was incorrectly interpreted as part of the title by ParsCit). We then treat two citations as referring to the same paper if all of the following criteria hold: 1. Their year of publication is identical. 2. They have the same number of authors. 3. All author last names can be fuzzy-matched. 4. All author first names can be fuzzy-matched or they start with the same character.11 5. Their titles can be fuzzy-matched. Two strings can be fuzzy-matched if their distance ratio12 is ≤95%. 
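A minimal sketch of the matching heuristic above, using the two libraries referenced in the footnotes (python-slugify and fuzzywuzzy). The 95% threshold is interpreted here as a minimum similarity ratio, and the citation record layout is an assumed structure rather than the actual ParsCit output format.

```python
# Illustrative implementation of the fuzzy paper-matching criteria in Appendix A.
# Assumes a citation is a dict with "year", "authors" (list of (first, last)
# name pairs), and "title" fields -- an assumed layout, not ParsCit's format.
from fuzzywuzzy import fuzz
from slugify import slugify

THRESHOLD = 95  # interpreted as a minimum fuzz.ratio similarity

def preprocess_title(title: str) -> str:
    # Convert to ASCII and cut off everything after a ". " sequence,
    # which usually marks the start of the venue field.
    return slugify(title.split(". ")[0], separator=" ")

def fuzzy_match(a: str, b: str) -> bool:
    return fuzz.ratio(a, b) >= THRESHOLD

def same_paper(c1, c2) -> bool:
    if c1["year"] != c2["year"]:
        return False
    if len(c1["authors"]) != len(c2["authors"]):
        return False
    for (f1, l1), (f2, l2) in zip(c1["authors"], c2["authors"]):
        if not fuzzy_match(slugify(l1), slugify(l2)):
            return False
        # First names must fuzzy-match or at least share the initial,
        # since some citation styles only give initials.
        if not (fuzzy_match(slugify(f1), slugify(f2)) or
                (f1 and f2 and f1[0].lower() == f2[0].lower())):
            return False
    return fuzzy_match(preprocess_title(c1["title"]), preprocess_title(c2["title"]))
```

Deduplication would then amount to grouping together all citations for which `same_paper` returns True.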
Quality of paper matching We found the described approach to work reasonably well on our citation data, though it unfortunately still results in many false negatives (i.e., papers that should have been matched but were not). Common problems include: • Papers that are cited with inconsistent author lists; e.g., the paper that introduced the Penn Treebank is cited as “Marcus, Santorini, Marcinkiewicz”, “Marcus, Marcinkiewicz, Santorini”, or “Marcus & Marcinkiewicz”. 10We achieve this by using https://github.com/ un33k/python-slugify. 11The motivation here is that some citation styles use full first names, while others only give initials. 12As implemented by https://github.com/ seatgeek/fuzzywuzzy. • Papers with both pre-print and peer-reviewed versions that were not published in the same year. • Parsing or text extraction errors. B Supplementary figures B.1 Oldest citation per paper Figure 8 shows the distribution of the oldest citation per paper in our dataset. This is motivated by the idea that while the average number of “old” citations per paper is stable (cf. Sec. 3.2), they might be distributed in an unbalanced way. In other words, there might be a subset of publications that does not cite any “older” work. Figure 8 shows that this is not really the case: the majority of papers in our dataset include a citation of age 15 or older. There are a few outliers, however: there are 15 papers in total which, according to our processing pipeline (cf. Sec. 2), do not include any citation older than 3 years. We manually check their original PDFs and find that one of these is a book review, three are extraction errors, and 11 actually do not contain any citation older than 3 years. 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 10 20 30 40 50 Age of citations Figure 8: Letter-value plot (Hofmann et al., 2017) considering only the oldest citation per paper among all papers published in a given year. The solid black lines denote the median, boxes correspond to quantiles. B.2 Extended versions of previous figures Figure 9 shows the distribution of citation ages, analogous to Figure 2, but separately for each publication venue. Figure 10 shows the average number of citations per paper, analogous to Figure 7, but for a larger number of citation ages. 7826 0 10 20 30 40 50 Age of citations ACL EACL 0 10 20 30 40 50 Age of citations EMNLP NAACL 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 10 20 30 40 50 Age of citations CL 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 TACL Figure 9: Letter-value plot (Hofmann et al., 2017) showing the distribution of citation ages by publication venue, grouped by year of publication. The solid black lines denote the median, boxes correspond to quantiles. 7827 0 20 40 60 80 Number of citations Age ≥0 Age ≥5 0 20 40 60 80 Number of citations Age ≥10 Age ≥15 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 0 20 40 60 80 Number of citations Age ≥20 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 Age ≥25 Venue ACL EACL EMNLP NAACL CL TACL Figure 10: Average number of citations per paper, separately by venue of publication, for a number of different citation ages.
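As a pointer for readers who want to reproduce plots in this style, the sketch below computes the oldest citation per paper (as in Figure 8) and draws a letter-value plot with seaborn's boxenplot. The DataFrame columns are assumed names, not the released dataset's schema.

```python
# Sketch: oldest citation age per citing paper, visualized as a letter-value plot.
# Assumes a DataFrame `citations` with one row per extracted citation and the
# (hypothetical) columns "paper_id", "pub_year", and "citation_age".
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def plot_oldest_citation(citations: pd.DataFrame) -> None:
    oldest = (
        citations.groupby(["paper_id", "pub_year"])["citation_age"]
        .max()
        .reset_index(name="oldest_citation_age")
    )
    ax = sns.boxenplot(data=oldest, x="pub_year", y="oldest_citation_age")
    ax.set_xlabel("Year of publication")
    ax.set_ylabel("Age of oldest citation")
    plt.tight_layout()
    plt.show()
```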
2020
699
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 53–65 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 53 Guiding Variational Response Generator to Exploit Persona Bowen Wu1, Mengyuan Li2∗, Zongsheng Wang1, Yifu Chen3∗, Derek F. Wong4, Qihang Feng1, Junhong Huang1, Baoxun Wang1 1Platform and Content Group, Tencent 2Peking University, Beijing, China 3University of Chinese Academy of Sciences 4NLP2CT Lab / Department of Computer and Information Science, University of Macau {jasonbwwu,jasoawang,careyfeng,vincenthuang,asulewang}@tencent.com [email protected], [email protected], [email protected] Abstract Leveraging persona information of users in Neural Response Generators (NRG) to perform personalized conversations has been considered as an attractive and important topic in the research of conversational agents over the past few years. Despite of the promising progress achieved by recent studies in this field, persona information tends to be incorporated into neural networks in the form of user embeddings, with the expectation that the persona can be involved via End-to-End learning. This paper proposes to adopt the personalityrelated characteristics of human conversations into variational response generators, by designing a specific conditional variational autoencoder based deep model with two new regularization terms employed to the loss function, so as to guide the optimization towards the direction of generating both persona-aware and relevant responses. Besides, to reasonably evaluate the performances of various persona modeling approaches, this paper further presents three direct persona-oriented metrics from different perspectives. The experimental results have shown that our proposed methodology can notably improve the performance of persona-aware response generation, and the metrics are reasonable to evaluate the results. 1 Introduction As an essential research topic in generative conversational agents (a.k.a., chat-bots), Persona Modeling is of great importance for such deep neural network based intelligent interactive systems (Li et al., 2016b; Kottur et al., 2017; Wang et al., 2017). Apparently, user-personalitydependent responses provided by a chat-bot are able to significantly improve the consistency of its conversations, meanwhile, it is possible for users ∗* Contribution during the internship at Tencent. to flexibly customize the persona of a chat-bot based on some existent dialogues. As for the studies on this topic, with no doubt, incorporating persona factors into End-to-End generative models is an attractive topic with great challenges. The current studies mainly focus on adopting the explicit meta-data of user profiles (Qian et al., 2018; Chu et al., 2018) or character descriptions (Zhang et al., 2018; Mazare et al., 2018; Song et al., 2019) to generate persona-aware responses. However, on one hand, user profiles are usually highly privacy-related and thus it is difficult to obtain such information from users practically. On the other hand, little correlation can be explicitly observed between such meta-data profiles and persona characteristics of users. Especially, those character descriptions, tailor-made for the persona-aware response generation with the great cost of manual work, are only a variant of user profile innately in terms of different natural language forms. 
One of the reasonable and practically executable methodologies for introducing persona factors into conversation models is to adopt the real-valued user representation as a medium (Li et al., 2016b; Kottur et al., 2017; Liu et al., 2018; Al-Rfou et al., 2016). In particular, such user representations can be derived from users’ historical dialog utterances with rich linguistic and personality information involved. Taking persona representations as the guidance for generating customized responses becomes a widely accepted methodology due to the recent development of deep latent variable models (Zhao et al., 2017; Shen et al., 2017; Zhou and Wang, 2018). However, for current models, without the explicit learning objectives or constraints, the user representation is adopted in a passive way to reduce the model loss and KL divergence via end-to-end learning. In this case, it is highly possible that the 54 Figure 1: The architecture of the Persona-Aware Variational Response Generator (PAGenerator) described in this paper. ⊕represents the concatenation of inputs and CE denotes the cross-entropy of predictions. The dotted arrow line indicates the connection is optional, and the default model named PAGenerator decodes with the user embedding. employed embeddings will not work as effectively as expected. Consequently, it is necessary to employ explicit guidance to help variational response generators sense persona. From observations upon personacontained dialogs, there exist intuitive characteristics for directing the optimization of the personaaware variational response generation. Obviously, for a given user, the appropriately modeled and leveraged persona information can help to generate hidden variables semantically relevant with corresponding responses. Besides, since users may have their own linguistic style, the adoption of personal information in NRG aims to have direct influence on the degree of linguistic (e.g. lexical and syntactic) convergence for a specific user. This paper aims at exploring the explicit guidance to help the variational response generator exploit persona information hidden in the nonstructured contents produced by the users, by utilizing intuitive characteristics of personalized conversations for model training. The contributions of this paper can be summarized as follows: • A persona-aware variational response generator is proposed to exploit persona while modeling the conversations. • Based on the model, two regularization terms are presented to guide the model in encoding user information into the latent variables and converging to user-specific responses. • Three discriminative metrics are further introduced to evaluate the capabilities of persona-aware response generators. 2 Approach Based on the current progress on the development of latent variable models, we propose a personaaware variational response generator to automatically exploit persona from conversations, and utilize such personal information to model the future conversation. Besides, given that personal information can be exploited as optimization guidance to better modeling persona, we further introduce two regularization terms to guide the model learning. In the following section, we first describe the general structure of PAGenerator, and then explain the two additional regularization terms. 2.1 Persona-Aware Variational Response Generator Utilizing latent variables in response generation has become a widely accepted methodology in NRG due to their Bayesian essence. 
It helps to deal with external knowledge efficiently, e.g. Persona. Therefore, our proposed model is built based on the generation model with latent variables. The overall architecture of the single turn personaaware variational response generator proposed in this paper is illustrated in Figure 1. Let q, r, u stand for the query, the reply and the corresponding user of r, respectively, and eu stands for the embedding of user u. A bidirectional LSTM is first employed to encode the query and reply into fixed size vectors hq and hr. After that, the prior network (parametrized by θ) takes ue, hq as inputs to generate the distribution pθ(z|q, u) of latent variable z. Meanwhile, hq, hr are fed into a posterior network (parameterized by φ) to compute qφ(z|q, r). As we adopt the assumption that z follows isotropic Gaussian distribution, pθ(z|q, u) and qφ(z|q, r) are also normally 55 distributed, such that: pθ(z|q, u) ∼N(µp, σ2 pI) qφ(z|q, r) ∼N(µq, σ2 qI) (1) where the means and variances are computed as follows:  µp log(σ2 p)  = Wp  q u  + bp (2)  µq log(σ2 q)  = Wq  q r  + bq (3) where Wp, Wq, bp and bq are the trainable parameters. A sample of z using the reparametrization trick (Kingma and Welling, 2013) is then fed into the decoder as a part of input at each time step. In addition, the bag-of-word (BOW) loss (Zhao et al., 2017) is employed to tackle the latent variable vanishing problem, and PAGenerator is trained to maximize the variational lowerbound (Chung et al., 2015; Serban et al., 2017): L(θ, φ; q, r,u) = Eqφ(z|q,r)[log pθ(r|z, q, u)] −KL(qφ(z|q, r)∥pθ(z|q, u)) +Eqφ(z|q,r)[log p(rbow|z, q, u)] (4) 2.2 User Information Enhancing Regularization Ideally, we expect that the introduction of user embedding is fully utilized during model training. However, due to the KL vanishing problem, the training of PAGenerator suffers from the hazard that the rapid decrease of L in Equation 4 might be attributed to the strong fitting capability of the decoder on the training data, rather than the involvement of user embedding. Thus, we introduce a regularization term to promote the usage of user’s hidden information in latent variables. At the beginning, as illustrated in Figure 1, a general unk u is introduced to represent the case for user unspecified. Subsequently, taking the default user embedding eunk u as input, we obtain the KL divergence as KL(qφ(z|q, r)∥pθ(z|q, unk u)) from the network. In this case, once the real user u is introduced, a regularization term R1(θ, φ; q, r, u) can be constructed as follows: R1(θ,φ; q, r, u) = max(−γ1, KL(qφ(z|q, r)∥pθ(z|q, u)) −KL(qφ(z|q, r)∥pθ(z|q, unk u))) (5) where γ1 ∈R, γ1 > 0, and pθ(z|q, unk u) ∼ N(µ′ p, σ′2 p I). It should be noted that, according to the equation above, the two prior distributions are generated from the same network with partially different inputs (u VS. unk u), and the regularization constrains the prior distribution with specified user to be closer to the posterior distribution. Thus, the optimization encourages the utilization of user information and correspondingly inhibits the generated results from ignoring the user information. Meanwhile, R1 in our proposed model also alleviates the KL vanishing problem. 2.3 Variance Controlling Regularization The BOW loss forces the latent variables to predict the bag-of-words in the response. Therefore, the semantic distribution of z is required to be capable of representing the topics and wording of the target response. 
Besides, for a given query, the possible replies from a specific user should be more convergent to each other than those from an unknown user, due to each user’s unique preference on the topics and wording. Correspondingly, under the assumption that the distribution of z represents the user’s language preference, the specification of user information is expected to reduce the entropy of the isotropic Gaussian distribution of z, reflected by a lower standard deviation σp. On this basis, we introduce another regularization term R2(θ, φ; q, r, u) to control the variance: R2(θ,φ; q, r, u) = max(−γ2, σ2 p −σ′2 p ) (6) where γ2 ∈R and γ2 > 0. R2 prefers those z with decrease ≥γ2 in standard deviation σp after specifying users, and such decrease indicates the latent variables are more semantically convergent. On this basis, we update the new training objective of PAGenerator as follows: L′(θ,φ; q, r, u) = L(θ, φ; q, r, u) −R1(θ, φ; q, r, u) −R2(θ, φ; q, r, u) (7) By employing the two regularization terms to constrain the model training, L′(θ, φ; q, r, u) now also pays attention to the utilization of user information and language preference. 3 Specified Evaluation Metrics of Persona NRG In the previous section, two regularization terms are proposed to guide the model in the persona exploration. However, we still lack effective persona-focused metrics to quantify how well one 56 model is on learning persona. The currently applied metrics for persona-aware NRG evaluation, such as perplexity and BLEU, are used to evaluate the plain NRG models (Li et al., 2016b; Kottur et al., 2017). Apparently, such metrics are inadequate to evaluate the capacity of a response generator on capturing persona. Innately, an effective persona-aware response generator should be able to successfully identify and generate responses for users according to their language styles. Besides, the generated responses from different users should be diversified to each other in wording. Considering these properties, we propose the following metrics to measure the level of persona-aware in response generators. 3.1 Language Style Detection It is important for a persona-aware response generator to identify a user’s response from other userirrelevant ones, by detecting the user’s language style in responses. In this subsection, we propose User-Relative-Rank (uRank) to measure such capability. Given a query-response-user triple {q, r, u}, a pre-trained seq2seq model S2S and a model M to be evaluated, we first generate n user-irrelevant responses {r′ i|i ∈[1, n]} from S2S using beam search. For a desired persona-aware model M, it is expected to assign the ground truth response r with a higher probability than other user-irrelevant ones {r′ i|i ∈[1, n]}. Thus, taking S2S as reference, we set uRank to be 1 if M scores r a higher ranking position among r′ i than S2S, specifically: rankM = |{i|PM(r′ i) > PM(r)}| rankS2S = |{i|PS2S(r′ i) > PS2S(r)}| uRank = ( 1 if rankM < rankS2S 0 otherwise (8) where Pm(r) and Ps2s(r) are the probabilities of {q, r, u} given by M and s2s respectively, |X| presents the cardinal number of a set X, and the lower score of either rankM or rankS2S indicates a better ranking position. Overall, for model M, its average uRank for different queries denotes the rate of rank-promoted ground-truth replies. 3.2 Language Style Imitation Apart from perceiving users’ language styles, an effective persona-aware model should also be able to imitate language styles by generating responses satisfying users’ language behaviors. 
User-Language-Perplexity (uPPL) is proposed to measure this property. Given a user ui, to conduct such metric, a statistical language model LMi is first trained using the user’s utterances. After that, for a generated response r′, its corresponding uPPL is defined as the perplexity of r′ given by LMi. uPPL quantifies the power of a persona-aware model on generating responses similar to users’ history utterances. 3.3 Diversity between Users Finally yet importantly, due to the introduction of user information, given a query, we expect that responses for different users from a personaaware model should be also diversified. Therefore, Users-Distinct (uDistinct) is proposed in this paper to capture such property. Given a query qi and m different users {uj |j ∈[1, m]}, we generate different responses {r′ j |j ∈[1, m]} for each user using M. On this basis, Distinct-1 and Distinct2 (Li et al., 2016a) of the response set {r′ j |j ∈ [1, m]} are utilized to measure the in-group diversity of responses generated by M within users. Li et al. (2016b) also compare models through the case studies from the similar perspective. 4 Experiments 4.1 Datasets To evaluate the performance of our proposed method, we implement experiments on a Chinese Social Networking Service (SNS) corpus and the Cornell Movie Dialogues corpus (DanescuNiculescu-Mizil and Lee, 2011). The Chinese SNS corpus is crawled from a Chinese social network service Douban,1 containing totally 1,022,592 single-turn dialogues from 12,857 users; while the Cornell Movie Dialogues corpus consists of conversations from movie scrips. By cleaning up the Cornell corpus with the opensource script,2 we obtain 109,952 single-turn dialogues from 9,035 movie characters. The training/test ratios for the two corpora are around 200:1 and 50:1, respectively. Besides, for the Douban corpus, the mean, maximum, minimum, and the standard deviation values of the number of utterances for each user are 80, 1190, 33, and 49, respectively. Meanwhile, these statistics values are 14, 237, 4, and 22, correspondingly. 1https://www.douban.com/group 2https://github.com/suriyadeepan/datasets/ 57 There are two main differences between the two datasets: 1) The scenes of conversations are different. The dialogues in Douban are crawled from an open domain social media. By contrast, since the characters in Cornell movie corpus are assigned with fixed personas, the language styles and habits of users are more templatized. Besides, the language style in Cornell is more oral-like, with many personal pronouns. 2) The average number of utterances for each user of the Douban corpus is around 10 times more than that of Cornell. 4.2 Model Variations S2SA Vanilla sequence-to-sequence model with attention (Sordoni et al., 2015). fact bias S2SA with fact bias for persona modeling (Michel and Neubig, 2018). fact bias is originally proposed in NMT, it models user information as an additional bias vector learned through a factored model in the softmax layer. Speaker Model Framework proposed by Li et al. (2016b). This model is similar to S2SA + fact bias, except that the user information is added as a part of decoder input rather than bias in the softmax layer. VAE Standard Variational AutoEncoder for response generation (Serban et al., 2017). In our experiment, we replace the utterance with the query only and apply the auxiliary BOW loss (Zhao et al., 2017) in training. CVAE Conditional Variational AutoEncoder with user information as prior knowledge for modeling persona (Zhao et al., 2017). 
Similar to VAE, bagof-words loss is applied in CVAE. For a fair comparison, we use the same configuration for all models. The size of word embedding and user embedding are respectively set to 300 and 128. All the user embeddings, including that of the unknown user, are initialized randomly and trained during the optimizing. We employ a bi-directional LSTM of hidden size = 256 for encoding, and a LSTM of hidden size = 512 for decoding. For latent models, the dimension of z is set as 128. All models are optimized using Adam (Kingma and Ba, 2014) with learning rate = 2e−4 and batch size = 128. For latent models, we also use KL annealing (Bowman et al., 2016) (400,000 batches for Douban corpus and 100,000 batches for Cornell Movie corpus) to achieve better performance. 4.3 Automatic Evaluation Metrics To thoroughly evaluate our systems, both standard and persona-focused metrics are employed in our experiments. For standard metrics, we adopt unigram BLEU (BLEU-1) (Papineni et al., 2002) and Word Embedding metrics (Liu et al., 2016) including Embedding Average (Average), Vector Extrema (Extrema) and Greedy Matching (Greedy) to evaluate the semantics of generated responses with regards to ground truths. We use the pretrained word embeddings from (Song et al., 2018) for the Douban corpus and embeddings from (Pennington et al., 2014) for the Cornell movie corpus. The three proposed metrics (uRank, uPPL and uDistinct) are adopted to measure the performance of capturing persona. For uPPL, we use a bi-gram language model for perplexity computation. Since the effectiveness of uPPL relies on the quality of constructed user language models, we pretrain the SLM with the whole training data and afterwards finetune it using each user’s utterances. Besides, we drop the users with utterances less than 100 in Douban and 30 in Cornell. The value of uRank, which depends on the rankings of predicted probabilities of responses, is not stable for latent models due to the randomness on sampling z. Therefore, uRank for each latent model is computed by running 10 rounds, so that we obtain 10 ranking results and their corresponding uRank. Then we average the obtained 10 uRank as the final uRank for each latent enhanced model. The later experimental results show that uRank for any latent model varies slightly around ±0.005 for each round. 4.4 The Human Evaluation Criterion For further comparisons, we also use the crowdsourcing labeling resources of our organization to manually evaluate the relevance and the persona of generated responses. Since the degree of persona reflected in the response is even more difficult to be judged by humans, we simplify the annotation into a “yes or no” task, that is, annotators are only asked to decide whether the response can reflect persona for the given user. Before that, the annotators have to read all the utterances of each user to learn the persona for judging. Moreover, in practice, we limit the number of each user’s sample utterances to 100. However, the judgment is inevitably much more subjective. Thus, for each sample, we recruit 11 annotators to label and make the final determination by voting. The evaluation 58 of relevance is relatively easy. For the evaluation of relevance, each query-response pair is crossevaluated by 3 annotators, following the labeling criterion used in (Xing et al., 2017; Wang et al., 2018). The details of data sampling and labeling are given in the Supplementary Material. 
5 Results & Analysis 5.1 Results on the Douban Corpus We first report the performance on the Douban corpus. The results of automatic evaluating metrics are illustrated in Table 1, numbers in bold mean that the improvement on that metric is statistically significant over other methods (p-value ≤0.01). It is observed that the BLEU-1 scores of various models are relatively low and close to each other. We attribute this to the fact that the semantics of possible responses for one query is highly diversified in terms of speaking styles and topics, there might be the situation that only a small portion of words share among the responses except those of high-frequency words (Mou et al., 2016; Liu et al., 2016). However, user enhanced models achieve higher BLEU-1 scores due to their capability in considering the preference of a user. Furthermore, by comparing the performances on embedding metrics, we find that all models obtain decent scores, but none of the models outperform the others significantly. Such phenomenons can also be observed in previous studies (Serban et al., 2017; Wang et al., 2019), since all the models generate responses semantically similar to the ground truths. Despite this, PAGenerator achieves the highest score on average, which suggests the responses generated by PAGenerator are more semantically relevant to the ground truths. While all models perform more or less the same on standard metrics, their experimental results on persona metrics are quite different. All personaaware NRG models outperform S2SA and VAE which contain no user information on the uRank, while the two variational models with user information significantly exceed the rest models. It shows that persona-aware response generators, especially those exploiting user embeddings to generate latent variables, are more sensitive on identifying users’ language styles. Among all models with user modeling, our proposed PAGenerator achieves the highest uRank. The advantage of introducing persona information into NRG is also reflected by uPPL. The replies given by the three models employing user embeddings are more consistent with the user’s language style, which indicates that user embedding is useful in learning language style automatically in an End-to-End NRG model. By contrast, since S2SA with fact bias focuses on learning user’s bias based on only unigrams, it struggles from achieving a high uPPL which scores from bigram perspective. Moreover, comparing the performance of CVAE to Speaker Model, it appears that utilizing latent variables in standard method cannot further improve uPPL. By contrast, the two new regularizations proposed for persona modeling can help PAGenerator generating replies with more specific persona, the uPPL of which is reduced by 21.2 points compared to CVAE. As mentioned in previous sections, uDistinct measures the diversity of the generated responses between different users. In general, latent models achieve higher uDistinct than non-latent ones as the randomness brought by the latent variables. Within latent models, the adoption of user information in CVAE only slightly improves its uDistinct compared to VAE without user specification. It indicates that user embeddings are ineffectively utilized in CVAE, and this is the motivation for us to propose new methods for variational response generator. The notable improvement in uDistinct can verify their effectiveness in exploiting persona. The cases can further demonstrate such improvements in Supplementary Material. 
Besides, the comparison among baseline models is consistent with the experiments in previous studies (Li et al., 2016b; Zhou and Wang, 2018), which indicates the proposed metrics are apposite for evaluating the capability of NRG models on capturing persona. 5.2 Human Evaluation To further evaluate the quality of generated responses from each model more subjectively, we also implement human labeling. As shown in Table 2, adjusting unigram distributions for users by fact bias reduces the quality of generated responses. By contrast, all other models produce more high-quality replies comparing with S2SA. Moreover, responses from PAGenerator achieve the best human evaluation result, which indicates that the improvement of persona capturing of PAGenerator does not reduce correlation. Meanwhile, in the last column, the trend of eval59 Methods BLEU Embedding Persona Metrics Average Extreme Greedy uRank uPPL uDist-1 uDist-2 S2SA (Sordoni et al., 2015) 0.29 0.834 0.615 0.666 0 200.4 0.115 0.113 fact bias (Michel and Neubig, 2018) 0.29 0.840 0.618 0.671 0.022 202.3 0.091 0.101 Speaker Model (Liu et al., 2016) 0.31 0.837 0.621 0.674 0.023 163.6 0.183 0.199 VAE (Serban et al., 2017) 0.30 0.830 0.609 0.659 0.017 225.9 0.367 0.467 CVAE (Zhao et al., 2017) 0.31 0.836 0.616 0.668 0.039 174.5 0.377 0.486 PAGenerator 0.31 0.845 0.622 0.670 0.044 153.3 0.406 0.524 Table 1: Evaluation results on Douban corpus. uDist is the abbreviation for uDistinct in the table. Methods Human Evaluation 0 1 2 Persona S2SA 60.0% 35.0% 5.0% 1.6% fact bias 70.0% 26.7% 3.3% 7.8% Speaker Model 53.2% 41.6% 5.2% 9.6% VAE 58.3% 35.0% 6.7% 3.8% CVAE 55.0% 38.8% 7.2% 12.2% PAGenerator 51.7% 38.3% 10.0% 13.4% Table 2: Human labeled results upon generated responses of models trained on the Douban corpus, with the beam width of 10. The Fleiss’ kappa (Fleiss and Cohen, 1973) on all annotations is around 0.65, which can be considered as “substantial agreement”. uated results on persona almost consists to those evaluated by proposed automatic evaluation metrics. The PAGenerator outperforms other models, and some particular parts of replies generated by persona-aware models can reflect the personality. Besides, due to the randomness, some responses given by S2SA and VAE are also labeled as persona-aware. However, fewer high-quality responses generated by S2SA compared to VAE, and thus, the proportion of S2SA is even lower. 5.3 Results on the Cornell Corpus As shown in Table 3, the overall trend of the experimental results on Cornell corpus is consistent with that on Douban corpus. The models that are aware of the specified user outperform others slightly on BLEU and Embedding metrics. Regards to persona metrics, the experimental results on Cornell corpus shows two main differences: a) The Speaker Model does not perform that well on user language style detection and generation, mainly because the training data of each user is less than that in Douban corpus. It is hard to automatically model the informative user embedding via target oriented learning without guidance. By contrast, utilizing the KL divergence as the guidance in CVAE effectively improves the experimental results. b) Due to the individual characteristics of movie characters, the user-embeddingenhanced models generate more diverse responses for different users, specially PAGenerator. 5.4 Human Evaluation Results on the Cornell Corpus As shown in Table 5, on the English dataset, the comparison results are almost consistent with that in Section 5.2. 
According to the judgment of annotators, our proposed model outperforms the others from both relevance and persona perspective. However, influenced by insufficient training conversations, the overall quality of generated responses for the Cornell queries is not as good as the ones given for the Douban corpus. We attribute this to the difference in the corpus size and the word distribution, which is described in Section 4.1. In detail, the quality of Cornell is influenced by insufficient training conversations. By contrast, the persona is reflected more obviously with the help of more templatized language styles and habits of Cornell. 5.5 Ablation Study To get a better intuition about how our proposed method works, we implement the ablation tests to analyze the contribution of each component of PAGenerator in persona exploitation. As illustrated in Table 4, adding the user embeddings as a part of decoder inputs brings positive improvements on all the persona-focused metrics. Without UE, the parameter size of PAGenerator reduces considerably, which is harmful to the model on fitting target data. Besides, without direct constraints from the decoder, user embeddings mainly act on reducing KL divergence rather than providing more informative latent variables. Besides, without UE, PAGenerator also significantly outperforms VAE in all metrics, which demonstrates that R1 and R2 are indeed useful for guiding the latent variables 60 Methods BLEU Embedding Persona Metrics Average Extreme Greedy uRank uPPL uDist-1 uDist-2 S2SA (Sordoni et al., 2015) 0.32 0.787 0.503 0.679 0 44.8 0.115 0.079 fact bias (Michel and Neubig, 2018) 0.30 0.785 0.501 0.676 0.044 39.3 0.127 0.095 Speaker Model (Liu et al., 2016) 0.33 0.796 0.510 0.681 0.056 41.7 0.228 0.225 VAE (Serban et al., 2017) 0.25 0.780 0.490 0.670 0.058 45.6 0.122 0.114 CVAE(Zhao et al., 2017) 0.28 0.800 0.502 0.689 0.085 37.0 0.223 0.251 PAGenerator 0.33 0.814 0.514 0.687 0.114 32.2 0.251 0.304 Table 3: Comparison of different approaches on the Cornell Movie Dialogues corpus. Methods uRank uPPL uDist-1/2 PAGenerator 0.114 32.2 0.251 / 0.304 w/o R1 0.117 29.6 0.209 / 0.246 w/o R2 0.118 37.2 0.251 / 0.319 w/o UE 0.063 43.5 0.149 / 0.139 Table 4: Ablation tests of PAGenerator on Cornell Movie Dialogue Corpus. ”w/o” denotes PAGenerator does not contain the specific component, for example, ”w/o UE” means the decoder of PAGenerator does not utilize the user embedding as input. to model the semantics under the query and users. Comparing the ablation results of w/o R1 with w/o R2, we can conclude that both regularizations promote uRank values. However, PAGenerator w/o R2 only achieves a mediocre result on uPPL, while only utilizing R2 damages the model’s ability in generating diverse responses for different users. We attribute this divergence to the trade-off between a) shared movie-style language between users and b) different language preferences among actors in the movie scripts. Since R1 promotes the divergence of z between the specified and unspecified users, removing R1 raises the difficulty for the model to generate diverse responses toward different users, reflected by the low uDistinct of w/o R1. However, promoting diversity will more or less sacrifice the model’s learning on the common shared movie-style patterns, which is vital in evaluating the language cohesion. Therefore, the performance of PAGenerator only with R1 on uPPL is less-than-ideal. 
In contrast, since R2 emphasizes those patterns often used by a given user, it encourages the distribution of user information to be more aggregate. These differences explain the opposite results of w/o R1 and w/o R2. In conclusion, the user embedding is an important constraint for the PAGenerator, and R1, R2 can be considered to deploy for different purposes. Furthermore, utilizing all components of PAGenerator described in Figure 1 guarantees a more balMethods Human Evaluation 0 1 2 Persona S2SA 70.6% 27.5% 1.9% 1.4% fact bias 72.2% 26.0% 1.8% 14.9% Speaker Model 62.2% 35.6% 2.2% 16.9% VAE 65.0% 31.6% 3.4% 1.1% CVAE 61.7% 34.0% 4.3% 21.6% PAGenerator 61.5% 33.8% 4.7% 22.8% Table 5: Human evaluation results on the Cornell Corpus. anced and relatively best performance in all three evaluated persona exploiting abilities. 6 Related Work 6.1 Persona-based Neural Models Persona-based neural conversation models can be categorized into two major research directions. One is to directly train a model from conversational data by considering the persona information (Li et al., 2016b; Kottur et al., 2017; Wang et al., 2017; Madotto et al., 2019), while the other approach makes use of the profiles or sideinformation of users to generate the aligned responses (Chu et al., 2018; Qian et al., 2018; Zhang et al., 2018; Mazare et al., 2018; Song et al., 2019). The work described in this paper belongs to the first research direction. Li et al. (2016b) and Kottur et al. (2017) enrich the models by training persona vectors directly and incorporating them into the decoder. Wang et al. (2017) propose three strategies to learn the language style instead of introducing new models. Apart from the development of the Personabased NRG models, recent researches also attempt to incorporate persona into neural machine translators. Michel and Neubig (2018) propose to learn speaker-specific parameters for the bias term in the output to promote user preferring unigrams, and Wuebker et al. (2018) introduce offset tensors to perform fine-tuning for each user. 61 6.2 Variational Response Generator The variational response generators have drawn much attention recently, due to the observation that it can be flexible to include the effect from conditions based on its Bayesian architecture (Zhao et al., 2017; Shen et al., 2017) and naturally promote diversity by involving sampling in the generate stage (Serban et al., 2017; Du et al., 2018; Shen et al., 2018). Zhao et al. (2017) and Shen et al. (2017) introduce frameworks taking various conditions to influence the model learning. Afterwards, Zhou and Wang (2018) include the emoji into the variational NRG model to generate responses with particular emotions. Actually, these models (Zhao et al., 2017; Shen et al., 2017; Zhou and Wang, 2018) can also be deployed to the persona-aware response generation scenario. The main difference is that the speaker of the response is unpredictable based on the query. Thus, we have introduced the architecture proposed by Zhao et al. (2017) and modified it to adapt to the persona-aware generation, for the meaningful comparison. Especially, Song et al. (2019) have utilized persona information into the CVAE architecture, except they focus on modeling and copying users’ explicit profiles. 7 Conclusions In this paper, we proposed a variational neural network to model the conversation as well as the persona of users. 
On the basis of the network, two regularization terms are designed to guide the model in emphasizing the importance of the hidden user information. In addition, to better reflect the persona characteristics of the response generation model, three metrics have been introduced to quantify the level of persona of the generated responses. Experimental results show that our approach significantly outperforms other baseline models and the proposed metrics are effective in evaluating the capabilities of models on generating persona-aware responses. Acknowledgments This work was supported in part by the National Natural Science Foundation of China (Grant No. 61672555), and the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. References Rami Al-Rfou, Marc Pickett, Javier Snaider, Yunhsuan Sung, Brian Strope, and Ray Kurzweil. 2016. Conversational contextual cues: The case of personalization and history for response ranking. arXiv preprint arXiv:1606.00372. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. CoNLL 2016, page 10. Eric Chu, Prashanth Vijayaraghavan, and Deb Roy. 2018. Learning personas from dialogue with attentive memory networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2638–2646. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pages 2980–2988. Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011. Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, and Xuan Wang. 2018. Variational autoregressive decoder for neural response generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3154–3163. Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613– 619. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Satwik Kottur, Xiaoyu Wang, and V´ıtor Carvalho. 2017. Exploring personalized neural conversational models. In IJCAI, pages 3728–3734. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. 62 Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003. Association for Computational Linguistics. Bingquan Liu, Zhen Xu, Chengjie Sun, Baoxun Wang, Xiaolong Wang, Derek F Wong, and Min Zhang. 2018. 
Content-oriented user modeling for personalized response ranking in chatbots. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 26(1):122–133. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023. Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. 2019. Personalizing dialogue agents via meta-learning. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 5454–5459. Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779. Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 312–318. Association for Computational Linguistics. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Assigning personality/profile to a chatting machine for coherent conversation generation. In IJCAI, pages 4279–4285. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Xiaoyu Shen, Hui Su, Vera Demberg, and Shuzi Niu. 2018. Improving variational encoder-decoders in dialogue generation. national conference on artificial intelligence, pages 5456–5463. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 504–509. Haoyu Song, Weinan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5190–5196. Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175–180. 
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205. Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Nyberg. 2017. Steering output style and topic in neural response generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2140–2150. Zongsheng Wang, Yunzhi Bai, Bowen Wu, Zhen Xu, Zhuoran Wang, and Baoxun Wang. 2018. A prospective-performance network to alleviate myopia in beam search for response generation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3608–3618. Zongsheng Wang, Zhuoran Wang, Yinong Long, Jianan Wang, Zhen Xu, and Baoxun Wang. 2019. Enhancing generative conversational service agents with dialog history and external knowledge. Computer Speech & Language, 54:71–85. 63 Joern Wuebker, Patrick Simianer, and John DeNero. 2018. Compact personalized models for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 881–886. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Thirty-First AAAI Conference on Artificial Intelligence. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213. Association for Computational Linguistics. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1128–1137. Association for Computational Linguistics. A Details of Human Evaluation A.1 Labeling Dataset Preparation For each model with a given query set, three generated responses for each query are randomly sampled from the results given by the beam search with a beam size of 10. Then, a total of 3,000 query-response pairs are prepared for labeling. A.2 Labeling Criterion of Relevance The labeling criterion for judging the relevance between the response and the given query is described as follows: 0: the quality of response is poor, it is either irrelevant to the query, or grammatically incorrect. 1: although the response itself is acceptable as a reply, its content is not informative and dull. 2: the response is not only relevant and grammatically correct, but also informative or interesting. B Case Studies As shown in Figure 2, we have selected three users whose utterances can reflect their implicit personal features. For example, the gender of user U3 in the first case is probably female. The user U4 in the second case is very possible to be an animation fun. 
According to the conversation history of user U3 in the last case, it can be inferred that the user is struggling with losing weight. Correspondingly, from the responses generated by PAGenerator, we can observe that such implicit information is picked up by our proposed model to produce persona-aware results. Figure 3 gives additional cases generated by PAGenerator, CVAE and VAE for the same given query. Since every individual user has his/her own linguistic and personality characteristics, the results generated for different users are expected to maintain sufficient diversity. According to the cases in Figure 3, the results of PAGenerator show clear diversity across different individuals, indicating its better capability of capturing the persona of users.

[Figure 2 (image): example queries with the responses generated by CVAE and PAGenerator for users U1-U5, together with excerpts from the conversation histories of selected users; the original Chinese samples and their English translations are shown side by side.]

Figure 2: Comparisons of cases generated by CVAE and PAGenerator. We also give utterances from the conversation histories of some users (Conversation history of Uk, where Uk denotes different users from different cases). The translated English versions of the samples are listed on the right.
[Figure 3 (image): three additional queries with the responses generated by PAGenerator, CVAE and VAE for users U1-U5; the original Chinese samples and their English translations are shown side by side.]

Figure 3: Cases for comparing PAGenerator, CVAE and VAE. It should be noted that VAE does not adopt user information. The translated English versions of the samples are listed on the right.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 766–776 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 766 An Online Semantic-enhanced Dirichlet Model for Short Text Stream Clustering Jay Kumar and Junming Shao∗and Salah ud din Data Mining Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China [email protected], [email protected] Wazir Ali SMILE Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China Abstract Clustering short text streams is a challenging task due to its unique properties: infinite length, sparse data representation and cluster evolution. Existing approaches often exploit short text streams in a batch way. However, determine the optimal batch size is usually a difficult task since we have no prior knowledge when the topics evolve. In addition, traditional independent word representation in the graphical model tends to cause “term ambiguity” problem in short text clustering. Therefore, in this paper, we propose an Online Semantic-enhanced Dirichlet Model for short text stream clustering, called OSDM, which integrates the word-occurrence semantic information (i.e., context) into a new graphical model and clusters for each arriving short text automatically in an online way. Extensive results have demonstrated that OSDM gives better performance compared to many state-ofthe-art algorithms on both synthetic and realworld data sets. 1 Introduction A massive amount of short text data is constantly generated with online social platforms such as microblogs, Twitter and Facebook. Clustering of such short text streams has thus gained increasing attention in recent years due to many real-world applications like event tracking, hot topic detection, and news recommendation (Hadifar et al., 2019). However, due to the unique properties of short text streams such as infinite length, evolving patterns and sparse data representation, short text stream clustering is still a big challenge (Aggarwal et al., 2003; Mahdiraji, 2009). ∗*Corresponding author: Junming Shao During the past decade, many approaches have been proposed to address the text stream clustering problem from different points of view, and each method comes with specific advantages and drawbacks. Initially, traditional clustering algorithms for static data were enhanced and transformed for text streams (Zhong, 2005). Very soon, they are replaced by model-based algorithms such as LDA (Blei et al., 2003), DTM (Blei and Lafferty, 2006), TDPM (Ahmed and Xing, 2008), GSDMM(Yin and Wang, 2016b), DPMFP (Huang et al., 2013), TM-LDA (Wang et al., 2012), NPMM (Chen et al., 2019) and MStream (Yin et al., 2018), to mention a few. However, for most established approaches, they often work in a batch way, and assume the instances within a batch are interchangeable. This assumption usually cannot hold for topic-evolving text data corpus. Determining an optimal batch size is also a non-trivial task for different text streams (Howard and Ruder, 2018). Additionally, unlike long text documents, short text clustering further suffers from the lack of supportive term occurrence to capture semantics (Gong et al., 2018). 
For most existing short text clustering algorithms like Sumblr (Shou et al., 2013), DCT (Liang et al., 2016) and MStreamF (Yin et al., 2018), exploiting independent word representation in their cluster models tends to cause ambiguity. Let us show the following four tweets, for example: T1: “A regular intake of an Apple can improve your health and muscle stamina.” T1: “A glass of fresh apple juice is recommended for breakfast.” T2: “New Apple Watch can monitor your health.” 767 T2: “Apple will launch new smartphone iPhoneX this december.” Tweets of these two topics share few common terms, i.e., ’health’ or ’apple’. It creates an ambiguity if the model deals with only single term representation to calculate the similarity. However, the co-occurring terms representation (i.e., context) helps a model to identify the topic1 correctly. To solve these aforementioned issues, we propose an online semantic-enhanced dirichlet model for short text stream clustering. Compared to existing approaches, it has following advantages. (1) It allows processing each arriving short text in an online way. The online model is not only free of determining the optimal batch size, but also lends itself to handling large-scale data streams efficiently; (2) To the best of our knowledge, it is the first work to integrate semantic information for model-based online clustering, which is able to handle “term ambiguity" problem effectively and finally support high-quality clustering; (3) Equipped with Poly Urn Scheme, the number of clusters (topics) are determined automatically in our cluster model. 2 Related Work During the past decade, many text stream clustering algorithms have been proposed. Here, due to the space limitation, we only report some model-based approaches which are highly related to our work. For more details, please refer to comprehensive surveys, e.g., (Mahdiraji, 2009; Silva et al., 2013; Nguyen et al., 2015; Aggarwal, 2018). The early classical attempt for text clustering is Latent Dirichlet Allocation (LDA) (Blei et al., 2003). However, it cannot handle the temporal data for text streams. For this purpose, many LDA variants have been proposed to consider the text streams such as dynamic topic model (DTM) (Blei and Lafferty, 2006), dynamic mixture model (DMM) (Wei et al., 2007), temporal LDA (TLDA) (Wang et al., 2012), streaming LDA (S-LDA) (Amoualian et al., 2016), and dirichlet mixture model with feature partition (DPMFP) (Zhao et al., 2016). These models assume that each document contains rich content, and thus they are not suitable for dealing with the short text streams. Later, Dirichlet multinomial mixture model-based dynamic clustering topic (DCT) model was designed to deal with short text streams by assigning each 1Topic and cluster will be interchangeably used in this paper document with single topic (Liang et al., 2016). Very soon, GSDMM was proposed to extend DMM with collapsed gibbs sampling to infer the number of clusters (Yin and Wang, 2014). However, most of these models did not investigate the evolving topics (clusters) in text streams where the number of topics usually evolves over time. To automatically detecting the number of clusters, (Ahmed and Xing, 2008) proposed a temporal dirichlet process mixture model (TDMP). It divides the text stream into many chunks (batches), and assumes that the documents inside each batch are interchangeable. Later, GSDPMM was proposed with collapsed gibbs sampling to infer the number of clusters in each batch. 
In contrast to LDA, GSDPMM not only converges faster but also dynamically adjusts the number of clusters over time (Yin and Wang, 2016a). However, both the TDMP and GSDPMM models do not examine evolving topics, and they process the text stream multiple times. Thereafter, MStreamF (Yin et al., 2018) was proposed, incorporating a forgetting mechanism to cope with cluster evolution while processing each batch only once. The NPMM model (Chen et al., 2019) was recently introduced, using word embeddings to eliminate a cluster-generating parameter of the model. In summary, most existing approaches work in a batch way, yet determining optimal batch sizes for different text streams is usually a difficult task. More importantly, due to the intrinsically sparse data representation of short texts, semantics are little investigated in established approaches, although they need to be carefully considered to decrease term ambiguity in short text clustering.

3 Preliminaries

Here, the problem statement is first given, followed by a brief introduction to the Dirichlet process and the Pólya urn scheme.

3.1 Problem Formulation

Formally, a text stream is a continuous arrival of text documents over time: S_t = \{d_t\}_{t=1}^{\infty}, where d_t denotes a document arriving at time t. Each document contains specific words d_t = \{w_1, w_2, \ldots, w_n\} and may have a different length. The key objective of the clustering task is to group similar documents into clusters Z = \{z_t\}_{t=1}^{\infty}, where each cluster z_t contains documents represented as z_t = \{d^{z_t}_1, d^{z_t}_2, \ldots, d^{z_t}_n\}. For short text clustering, each document is a member of only one topic, so z_i \cap z_j = \emptyset for i \neq j.

3.2 Dirichlet Process

The Dirichlet Process (DP) is a non-parametric stochastic process to model the data (Teh et al., 2006). It is a process of drawing samples from a (base) distribution, where each sample is itself a distribution, denoted as N \sim DP(\alpha, N_0). Here, N is the sample drawn from the base distribution N_0, and the drawing procedure is controlled by a concentration parameter \alpha.

3.3 Pólya Urn Scheme (PUS)

The procedure of drawing sequential samples N_1, N_2, \ldots from a distribution is described by the Pólya urn scheme (Blackwell et al., 1973). It can be summarized as:

N_n \mid N_{1:n-1} \sim \frac{\alpha}{\alpha + n - 1} N_0 + \frac{1}{\alpha + n - 1} \sum_{k=1}^{n-1} \delta(N_n - N_k)

Here, \delta(x) = 1 if x = 0 and \delta(x) = 0 otherwise. Initially, the urn is empty, so we draw a color from the base distribution, i.e., N_1 \sim N_0, and put a ball of the drawn color into the urn. In each subsequent turn, we either draw a color that has already been drawn, with probability (n-1)/(\alpha+n-1), or draw a new color from N_0, with probability \alpha/(\alpha+n-1). Since drawing samples from the distribution is repeated, the same color may appear more than once, so that after n draws we have K distinct colors. This behaviour is captured by the well-known Chinese restaurant process (CRP) (Ferguson, 1973). In the CRP, we suppose that there is an infinite number of tables in a restaurant, and each table is surrounded by an infinite number of empty chairs. The first customer sits at the first table, and each subsequent customer either chooses to sit at an occupied table with probability n_k/(\alpha+n-1) or chooses an empty table with probability \alpha/(\alpha+n-1). Here, n_k is the number of customers sitting at a specific table. A new customer thus tends to be attracted towards a highly crowded table.
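To make this rich-get-richer behaviour tangible, the following is a small, illustration-only simulation of sequential CRP draws (it is not code from the paper): a customer joins an occupied table k with probability n_k/(α+n−1) and opens a new table with probability α/(α+n−1).

```python
# Minimal sketch (illustration only): sequential draws from a Chinese restaurant
# process, matching the probabilities described above. `alpha` is the concentration
# parameter; larger values open new tables (clusters) more often.
import random

def crp(num_customers, alpha, seed=0):
    rng = random.Random(seed)
    table_counts = []            # table_counts[k] = customers seated at table k
    assignments = []
    for n in range(1, num_customers + 1):
        if n == 1:
            table_counts.append(1)           # first customer opens the first table
            assignments.append(0)
            continue
        denom = alpha + n - 1
        # probability of each occupied table, plus a new table at the end
        probs = [count / denom for count in table_counts] + [alpha / denom]
        r, cumulative = rng.random(), 0.0
        for k, p in enumerate(probs):
            cumulative += p
            if r < cumulative:
                break
        if k == len(table_counts):           # the new-table option was chosen
            table_counts.append(1)
        else:
            table_counts[k] += 1
        assignments.append(k)
    return table_counts, assignments

# With alpha = 1.0 and 1000 customers, a handful of crowded tables typically
# dominate, illustrating the rich-get-richer behaviour described above.
print(crp(1000, alpha=1.0)[0][:10])
```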
This phenomenon corresponds to one part of our model and explains how clusters are created over time. The CRP describes the draws from the distribution G, while the stick-breaking process shows the property of G explicitly:

G(N) = \sum_{k=1}^{\infty} \theta_k \, \delta(N - N_k), \quad N_k \sim N_0   (1)

The mixture weights \theta = \{\theta_k\}_{k=1}^{\infty} can be formalized as \theta \sim GEM(\gamma) (Neal, 2000). We exploit Equation (1) for the generative process of the Dirichlet process multinomial mixture model (DPMM) as follows:

z_d \mid \theta \sim \mathrm{Mult}(\theta), \quad d = 1, \ldots, \infty
N_k \mid \beta \sim \mathrm{Dir}(\beta), \quad k = 1, \ldots, \infty
d \mid z_d, \{N_k\}_{k=1}^{\infty} \sim p(d \mid N_{z_d})

Here, z_d is the cluster assignment of document d, which is multinomially distributed. The probability of document d being generated by topic z is summarized as:

p(d \mid N_z) = \prod_{w \in d} \mathrm{Mult}(w \mid N_z)   (2)

Here, the naive Bayes assumption is made: the words in a document are generated independently by the topic. The sequential draw of samples can then be derived by following the CRP. It is also assumed that the positions of words in a document are not considered when calculating the probability.

4 Proposed Approach

This section gives a brief discussion of the representation and formulation of the proposed algorithm.

4.1 Model Representation

We build our model upon the DPMM (Yin and Wang, 2016a), which is an extension of the DMM model that deals with evolving clusters. We call our model OSDM (Online Semantic-enhanced Dirichlet Model), aiming to incorporate semantic information and cluster evolution simultaneously for short text stream clustering in an online way. The graphical model of OSDM is given in Figure 1a. There are two major differences in our model that highlight its novelty. First, for the word-topic distribution, we embed semantic information by capturing the ratio of word co-occurrence, so that both the independent word generating process and the word co-occurrence weight are considered in topic generation. Second, our model works in an instance-by-instance fashion to cluster the documents, instead of batch by batch.

[Figure 1 (plate diagrams): (a) OSDM, (b) MStreamF.]

Figure 1: The graphical representation of OSDM and MStream. Here MStream works in a batch way while OSDM works in an online way.

For comparison, Figure 1b further shows the MStreamF (Yin et al., 2018) model. At the initial stage, before clustering the documents of a batch, MStreamF updates the vocabulary set (active terms) from all the documents in the batch and then starts clustering each document of the batch. OSDM, however, does not require a fixed number of documents to create the vocabulary set; instead, the vocabulary is updated incrementally with each arriving document.

4.2 Model Formulation

Defining the relationship between documents and clusters is the most crucial task in the text stream clustering problem. Threshold-based methodologies (Nguyen et al., 2015) adapt similarity measures to define a homogeneity threshold between a cluster and a document: if the dissimilarity between the existing clusters and a newly arriving document is above the threshold, a new cluster is created. However, due to the dynamic nature of the stream, it is very hard to define the similarity threshold manually. In contrast, we assume that documents are generated by a DPMM (see Section 3). The most recent algorithm, MStreamF, improved the DPMM to cluster short text documents in a stream. As a further step, we integrate a semantic component into the DPMM model. Additionally, we integrate term importance based on cluster frequency.
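Since the homogeneity term of the OSDM probability introduced next builds on the naive-Bayes document likelihood of Equation (2), a minimal illustration of that likelihood is sketched below, reusing the "Apple" example from the introduction. The additive smoothing used here is an assumption of the sketch, not something prescribed by the paper.

```python
# Minimal sketch (illustration only) of Equation (2): the probability of a document
# under a topic's multinomial word distribution, with words treated as independent
# given the topic (naive Bayes) and word positions ignored.
import math

def log_doc_likelihood(doc_tokens, topic_word_counts, total_topic_words,
                       vocab_size, beta=0.03):
    """log p(d | N_z) with simple additive (pseudo-count) smoothing `beta`;
    the smoothing choice is an assumption of this sketch."""
    log_p = 0.0
    for w in doc_tokens:
        p_w = (topic_word_counts.get(w, 0) + beta) / (total_topic_words + beta * vocab_size)
        log_p += math.log(p_w)
    return log_p

# Example: "apple juice health" is far more likely under a fruit/health topic
# than under a tech topic, which is exactly what single-term matching obscures.
fruit = {"apple": 12, "juice": 7, "health": 5}
tech = {"apple": 3, "iphone": 9, "watch": 6}
doc = ["apple", "juice", "health"]
print(log_doc_likelihood(doc, fruit, sum(fruit.values()), vocab_size=1000),
      log_doc_likelihood(doc, tech, sum(tech.values()), vocab_size=1000))
```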
The derived equation for calculating the probability of a document d choosing an existing cluster z is given in Equation (3):

p(z_d = z \mid \vec{z}, \vec{d}, \alpha, \beta) = \frac{m_z}{D - 1 + \alpha D} \cdot \frac{\prod_{w \in d} \prod_{j=1}^{N_d^w} \big( (n_z^w \cdot ICF_w) + \beta + j - 1 \big)}{\prod_{i=1}^{N_d} \big( n_z + V\beta + i - 1 \big)} \cdot \Big( 1 + \sum_{w_i \in d \,\wedge\, w_j \in d} c_{w_{ij}} \Big)   (3)

The first term of this equation, m_z / (D - 1 + \alpha D), represents the completeness of the cluster. Here, m_z is the number of documents contained in cluster z and D is the number of current documents in active clusters (active clusters are those that have not yet been deleted from the model), while \alpha is the concentration parameter of the model. The middle term, based on the multinomial distribution (see Equation (2)) with pseudo word weight \beta, defines the homogeneity between a cluster and a document. N_d and N_d^w represent the total number of words and the term frequency of word w in document d, respectively. The symbol n_z^w is the term frequency of word w in cluster z, V is the current vocabulary size of the model, and n_z is the number of words in cluster z. ICF_w measures the term importance over the active clusters in the model, defined as:

ICF(w \in d) = \log \frac{|Z|}{|\{ z \in Z : w \in z \}|}   (4)

Here, |Z| is the number of active clusters in the model, and the denominator of Equation (4) is the number of clusters that contain the word w. The term \big( 1 + \sum_{w_i \in d \wedge w_j \in d} c_{w_{ij}} \big) defines the semantic weight of term co-occurrence between the cluster and the document. Formally, an entry c_{w_{ij}} of the co-occurrence matrix is defined as:

c_{w_{ij}} = \frac{\sum_{d' \subseteq z} n_{d'}^{w_i}}{\sum_{d' \subseteq z} n_{d'}^{w_i} + \sum_{d' \subseteq z} n_{d'}^{w_j}}, \quad (w_i, w_j) \in d'   (5)

Here, n_{d'}^{w_i} is the frequency count of word w_i in document d'. The ratio between w_i and w_j satisfies the property c_{w_{ij}} + c_{w_{ji}} = 1. We calculate the term co-occurrence weight only for those terms that are common to cluster z and document d, and the co-occurrence matrix only holds entries for term pairs that co-occur in a single document. Therefore, if the size of the cluster feature set (discussed in Section 4.3) is |V_z|, the co-occurrence matrix is not necessarily of size |V_z| \times |V_z|.

So far, we have defined the probability of a document choosing an existing cluster; we still need the probability of a document creating a new cluster. Following the DPMM with an infinite number of clusters, we transform \theta \sim GEM(\gamma) into \theta \sim GEM(\alpha D), because the hyper-parameter of the mixture model should change dynamically over time. Therefore, the probability of creating a new cluster is:

p(z_d = z \mid \vec{z}_{\neg d}, \vec{d}, \alpha, \beta) = \frac{\alpha D}{D - 1 + \alpha D} \cdot \frac{\prod_{w \in d} \prod_{j=1}^{N_d^w} (\beta + j - 1)}{\prod_{i=1}^{N_d} (V\beta + i - 1)}   (6)

Here, \alpha D represents the pseudo number of cluster-related documents in the model, and \beta is the pseudo term frequency of each word of the document in the new cluster.

4.3 The cluster feature (CF) set

Similarity-based text clustering approaches usually follow the vector space model (VSM) to represent the cluster feature space (Din and Shao, 2020). However, a topic needs to be represented as a subspace of the global feature space. Here, we use a micro-cluster feature set to represent each cluster; namely, a cluster is represented by the summary statistics of the words of its related documents.
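To make the computation above concrete, the following is a simplified sketch (not the authors' released implementation) of how the cluster-choice probabilities of Equations (3) and (6) could be evaluated for an arriving document, assuming each cluster carries the kind of summary statistics just described (document count, word frequencies, total word count and co-occurrence weights). All variable names are placeholders.

```python
# Simplified sketch of Equations (3) and (6); not the released OSDM code.
# Each cluster is assumed to be a dict with the summary statistics described above:
#   {"m": #documents, "word_freq": {w: n_z^w}, "n_words": n_z, "cooc": {(wi, wj): c_wij}}
import math
from collections import Counter

def icf(word, active_clusters):
    """Equation (4): log(|Z| / number of active clusters containing `word`)."""
    containing = sum(1 for c in active_clusters if c["word_freq"].get(word, 0) > 0)
    return math.log(len(active_clusters) / containing) if containing else 0.0

def prob_existing(doc_tokens, cluster, active_clusters, D, V, alpha, beta):
    """Equation (3): probability of the document choosing an existing cluster."""
    completeness = cluster["m"] / (D - 1 + alpha * D)
    numerator, denominator = 1.0, 1.0
    for w, freq in Counter(doc_tokens).items():
        n_zw = cluster["word_freq"].get(w, 0)
        for j in range(1, freq + 1):
            numerator *= n_zw * icf(w, active_clusters) + beta + j - 1
    for i in range(1, len(doc_tokens) + 1):
        denominator *= cluster["n_words"] + V * beta + i - 1
    # semantic weight: co-occurrence entries shared between the document and the cluster
    words = set(doc_tokens)
    semantic = 1.0 + sum(cluster["cooc"].get((wi, wj), 0.0)
                         for wi in words for wj in words if wi != wj)
    return completeness * (numerator / denominator) * semantic

def prob_new(doc_tokens, D, V, alpha, beta):
    """Equation (6): probability of the document creating a new cluster."""
    completeness = (alpha * D) / (D - 1 + alpha * D)
    numerator, denominator = 1.0, 1.0
    for _, freq in Counter(doc_tokens).items():
        for j in range(1, freq + 1):
            numerator *= beta + j - 1
    for i in range(1, len(doc_tokens) + 1):
        denominator *= V * beta + i - 1
    return completeness * (numerator / denominator)
```

As described by Algorithm 1 later in this section, an arriving document would then be assigned to the argmax over prob_existing for all active clusters and prob_new, and the winning cluster's statistics would be updated (or a new cluster created) accordingly.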
In our model, a cluster feature (CF) set is defined as a 6-tuple {mz, nw z , cwz, lenz, lz, uz}, where mz is the number of documents in the cluster z, nw z is the number of frequency of the word w in the cluster, cwz is the word to word co-occurrence matrix, lenz is the number of words in the cluster z which is sum of all frequencies of words, lz is the cluster weight, and uz is the last updated time stamp. The desirable addition property of cluster feature allows updating each micro-cluster in an online way. Definition 1: A document d can be added to a cluster z by using the addition property. mz = mz + 1 nw z = nw z + Nw d ∀w ∈d cwz = cwz ∪cwd Algorithm 1: OSDM Input: St : {dt}∞ t=1 , α : concentration parameter, β : pseudo weight of term in cluster, λ : decay factor Output: Cluster assignments zd 1 K = φ 2 while dt in St do 3 t = t + 1 4 K = removeOldZi(K) 5 K = reduceClusterWeight(λ, K) 6 foreach zi ∈K do 7 PZi = prob(zi, dt) using Eq. (3) 8 end 9 i = arg max i (PZi) 10 PZn = calculate the probability of new cluster using Eq. (6) 11 if PZi < PZn then 12 mzn = 1 13 nw zn = Nw dt 14 cwzn = cwdt 15 lenzn = lendt 16 lzn = 1, uzn = t 17 K = K ∪zn 18 else 19 mzi = mzi + 1 20 nw zi = nw zi + Nw dt 21 cwzi = cwzi ∪cwdt 22 lenzi = lenzi + lendt 23 lzi = 1, uzi = t 24 end 25 end lenz = lenz + lend Here, cwd is word to word co-occurrence of the document, and lend represents the number of total words in the document. The complexity of updating a cluster by adding a document is O(L), where L is the average length of the document. This property is useful to update evolving micro-clusters in the text stream clustering procedure. 4.4 OSDM Algorithm We propose a semantic-enhanced non-parametric dirichlet model to cluster the short text streams in an online way, called OSDM. The proposed algorithm allows processing each instance incrementally and updates the model accordingly. The procedure of OSDM is given in Algorithm 1. Initially, it creates a new cluster for the first doc771 ument and the document is assigned to the newly created CF set. Afterward, each arriving document in the stream either choose an existing cluster or generate a new cluster. The corresponding probability for choosing either of an existing cluster or a new cluster is computed using Equation (6) and (3), respectively. The CF vector with the highest probability is updated using the addition property. To deal with the cluster evolution (i.e., evolving topics) in text streams, many existing approaches often delete the old clusters by using some of the forgetting mechanisms (e.g., decay rate) (Zhong, 2005; Aggarwal and Yu, 2010; Islam et al., 2019). Instead of deleting old clusters, MStreamF (Yin et al., 2018) deletes old batches. In this study, we investigate the importance of each micro-cluster to handle the cluster evolution problem. Specifically, the importance of each micro-cluster is decreased over time if it is not updated. lz in CF stores weight of each cluster. If the weight is approximately equals to zero, then the cluster is removed from the model, i.e., it cannot capture recent topics in the text stream. For this purpose, we applied the exponential decay function, lz = lz × 2−λ×(△t). Here, △t is the elapsed time from the last update, and λ is the decay rate. The decay rate must be adjusted depending upon the applications at hand. The initial value of lz (See Line 16 of Algorithm 1) is set to 1. Afterward, the importance of microcluster is exponentially decreases over time. 
We can also store the deleted clusters in a permanent disk for offline analysis. Complexity Analysis. The OSDM algorithm always maintains the average ¯K number of current topics (CF sets). Every CF set store average ¯V number of words in nw z and at most | ¯Vz| × | ¯Vz| in cwz. Thus the space complexity of OSDM is O( ¯K( ¯V + ¯V 2) + V D), where V is the size of active vocabulary and D is the number of active documents. On other side, OSDM calculates the probability of arriving document with each cluster (see Line 6 of Algorithm 1). Therefore, the time complexity of OSDM is O( ¯K(L ¯V )), where L is the average size of arriving document. 5 Experimental Study 5.1 Datasets and evaluation metrics To evaluate the performance of the proposed algorithm, we conduct experiments on three real and two synthetic datasets. These datasets were also used in (Yin and Wang, 2016a; Liang et al., 2016; Qiang et al., 2018; Yin et al., 2018; Jia et al., 2018; Chen et al., 2019) to evaluate short text clustering models. In the preprocessing step, we removed stop words, converted all text into lowercase, and stemming. The description of the datasets is as follows. • News (Ns): This dataset is collected by (Yin and Wang, 2014), which contains 11,109 news title belong to 152 topics. • Reuters (Rs): Similar to (Yin and Wang, 2016b) we skip the documents with more than one class and obtained the dataset consists of 9,447 documents from 66 topics. • Tweets (Ts): This dataset contain 30,322 tweets which are relevant to 269 topics in the TREC 3 microblog. • News-T (Ns-T) and Reuters-T (Rs-T): Naturally, we may find a situation where topics in social media appear only for a certain time period and then disappear. However, the documents of each topic in original dataset is observed for long period of time. Therefore, to construct synthetic dataset we sorted documents datasets by topic in two datasets including Reuters and News. After sorting, we then divide each dataset into sixteen equal chunks and shuffled them. We adopted five different evaluation metrics for deep analysis of all algorithms, which include Normalized Mutual Information (NMI), Homogeneity (Ho.), V-Measure (VM), Accuracy (Acc.) and cluster Purity (Pur.). We utilized sklearn4 API to implement these metrics. We compute the measures on overall clustering results (Yin and Wang, 2014). Homogeneity measures that each cluster should have only members of a single class. Whereas, Vmeasure calculates how successfully the criteria of completeness and homogeneity are satisfied. Cluster purity measures the true positive instances in each cluster. The typical NMI measure calculates the overall clustering quality. 5.2 Baselines We have selected four state-of-the-art representative algorithms for stream text clustering to com3http://trec.nist.gov/data/microblog. html 4http://scikit-learn.org 772 pare OSDM (Os). A brief description of these algorithms are given as follows. (1) DTM (Blei and Lafferty, 2006) is an extension of Latent Dirichlet Allocation which traces the evolution of hidden topics from corpus over time. It was designed to deal with the sequential documents. (2) Sumblr (Sb) (Shou et al., 2013) is an online stream clustering algorithm for tweets. With only one pass, it enables the model to cluster the tweets efficiently while maintaining cluster statistics. (3) DMM (Yin and Wang, 2014) is a Dirichlet multinomial mixture model for short text clustering, which does not consider temporal dependency of instances. 
(4) MStreamF (Yin et al., 2018) is the latest model to deal with infinite number of latent topics in short text while processing one batch at a time. Two models of MStreamF were proposed, one with one-pass clustering process, and another with gibbs sampling. We refer to the former algorithm as MStreamF-O (MF-O) and the latter as MStreamF-G (MF-G). We try to find the optimal parameter values of all baseline algorithms with grid search. Finally, we set α = 0.01 for DTM, β = 0.02 for Sumblr. For MStreamF-O and MStreamF-G, we set α = 0.03 and β = 0.03. As defined in (Yin et al., 2018), we set the number of iterations to 10 and saved-batches = 2 for MStreamF-G. We set α = 0.3 and β = 0.3 for DMM. The DTM, DMM and Sumblr needs fixed number of cluster as input therefore we set K = 300, K = 170 and K = 80 for Tweets, News and Reuters datasets, respectively. We set α = 2e−3, β = 4e−5 and λ = 6e−6 for OSDM. The source code of OSDM is publicly available at: https://github.com/ JayKumarr/OSDM. 5.3 Comparison with state-of-the-art methods In this section, we provide a detailed comparative analysis of OSDM with state-of-the-art algorithms. The overall results are summarized in Table 1. We report NMI, Homogeneity, v-measure, purity and accuracy of each algorithm. Additionally, we also evaluate the performance of each algorithm over different time-stamps of the stream (see Figure 2).                 (a) News                 (b) News-T                    (c) Reuters                    (d) Reuters-T                     (e) Tweets Figure 2: The performance of different text steam clustering algorithm over time (in thousand points) in terms of NMI measure. Further, we studied the parameter sensitivity and runtime of OSDM, respectively. From Table 1, we can see that OSDM outperformed all baseline algorithms on almost every dataset in terms of all measures. Here, MStreamFG yielded much better results on the Ns-T data in terms of NMI measure. The reason behind might be the multiple iterations of each batch in the stream. However, MStreamF-G requires more execution time to process the data. In contrast, our proposed algorithm OSDM processes the data only once. And we can also observe that OSDM achieves the highest NMI in other data sets. In addition, the crucial part of evaluating the cluster similarity is measured by the homogeneity measure. We can see that OSDM outperformed all previous algorithms. It also shows the same statistics except for v-measure of DTM. Likewise, our model generates more pure clusters. Furthermore, to investigate the performance over time, we plot the performance of 773               (a) α               (b) β               (c) λ Figure 3: The sensitivity analysis with different parameters, including α, β and λ. Alg. Eva. Datasets Ns Ts Rs Ns-T Rs-T OSDM NMI 0.815 0.836 0.552 0.858 0.554 MF-O 0.685 0.746 0.361 0.803 0.381 MF-G 0.780 0.795 0.364 0.888 0.405 Sb 0.575 0.698 0.464 0.723 0.494 DTM 0.808 0.800 0.537 0.810 0.537 DMM 0.586 0.636 0.448 0.582 0.476 OSDM Ho. 0.951 0.936 0.954 0.900 0.964 MF-O 0.654 0.695 0.374 0.778 0.385 MF-G 0.751 0.738 0.319 0.900 0.343 Sb 0.547 0.758 0.402 0.747 0.574 DTM 0.833 0.822 0.659 0.837 0.657 DMM 0.588 0.622 0.466 0.565 0.497 OSDM VM 0.805 0.831 0.479 0.857 0.478 MF-O 0.684 0.744 0.361 0.803 0.380 MF-G 0.779 0.793 0.361 0.888 0.400 Sb 0.575 0.696 0.458 0.723 0.436 DTM 0.808 0.800 0.526 0.810 0.527 DMM 0.586 0.636 0.448 0.582 0.476 OSDM Pur. 
0.907 0.890 0.962 0.851 0.972 MF-O 0.552 0.529 0.602 0.636 0.608 MF-G 0.653 0.801 0.530 0.835 0.606 Sb 0.414 0.609 0.609 0.580 0.770 DTM 0.767 0.749 0.793 0.765 0.795 DMM 0.456 0.473 0.673 0.398 0.694 OSDM Acc. 0.880 0.665 0.927 0.769 0.952 MF-O 0.420 0.246 0.577 0.584 0.447 MF-G 0.517 0.707 0.452 0.606 0.461 Sb 0.606 0.539 0.652 0.653 0.620 DTM 0.647 0.246 0.669 0.294 0.644 DMM 0.334 0.150 0.649 0.073 0.500 Table 1: The performance of different algorithms on five data sets in terms of different measures including Mutual Information (NMI), Homogeneity (Ho.), VMeasure (VM), Accuracy (Acc.) and cluster Purity (Pur.). all algorithms over time in Figure 2. 5.4 Sensitivity Analysis We perform sensitivity analysis for OSDM with respects to three input parameters: concentration parameter α, β, and decay function parameter λ on the Tweets dataset. From Figure 3a, we can observe the effect of α, which ranges from 9e−3 to 9e−1. The performance in terms of all evaluation measures is stable over the different values of parameters. The α parameter is responsible for finer clustering, that is why we can observe a little fluctuation in initial values. Figure 3b shows the performance on different values of β, which ranges from 1e−4 to 1e−2. As we already defined that we modified homogeneity part of the clustering model (see Equation (3)), and β is the related hyper-parameter. We can observe that after a certain range, the values of all the evaluation measure become stable. The crucial point to be observed is the stability of homogeneity on different values of β. Figure 3c shows effect of λ ranges from 9e−4 to 9e−6. Our model follows the forgetting mechanism on decay factor λ and the clusters are deleted from model when the value is approximately equals to zero. We can observe the performance of OSDM                     Figure 4: The runtime of different text stream clustering algorithms. 774 on different decay factors. It can be observed that the behavior of a given evaluation measure is stable over time. 5.5 Runtime To compare the runtime of different algorithms, we performed all experiments on a PC with core i53470 and 8GB memory. Figure 4 shows the runtime of all algorithms on the tweets dataset. We can observe that Sumblr required the highest execution time to cluster the instances. Whereas, the runtime of other algorithms are comparable. Due to simple execution process of each instance MStreamF-O took least time because it does not need to maintain semantic similarity. Comparatively, MStreamFG required much higher time than OSDM. The reason is that it needs to execute each batch data multiple times. Due to online nature, the overall speed of OSDM is more efficient than most existing algorithms, and the benefit is strengthened with more and more arriving instances. 6 Conclusion In this paper, we propose a new online semanticenhanced dirichlet model for short text stream clustering. In contrast to existing approaches, OSDM does not require to specify the batch size and the dynamic number evolving clusters. It dynamically assigns each arriving document into an existing cluster or generating a new cluster based on the poly urn scheme. More importantly, OSDM tried to incorporate semantic information in the proposed graphical representation model to remove the term ambiguity problem in short-text clustering. Building upon the semantic embedding and online learning, our method allows finding high-quality evolving clusters. 
Extensive results further demonstrate that OSDM has better performance compared to many state-of-the-art algorithms. Acknowledgments This work is supported by the National Natural Science Foundation of China (61976044), Fundamental Research Funds for the Central Universities (ZYGX2019Z014), Fok Ying-Tong Education Foundation for Young Teachers in the Higher Education Institutions of China (161062), National key research and development program (2016YFB0502300). References Charu C. Aggarwal. 2018. A survey of stream clustering algorithms. In Data Clustering, pages 231–258. Charu C Aggarwal, Jiawei Han, Jianyong Wang, Philip S. Yu, Jianyong Wang Jiawei Han, Philip S. Yu, Jiawei Han, Jianyong Wang, and Philip S. Yu. 2003. A Framework for Clustering Evolving Data Streams. In International conference on Very large data bases, pages 81–92. Charu C Aggarwal and Philip S. Yu. 2010. On Clustering Massive Text and Categorical Data Streams. Knowledge and Information Systems, 24(2):171– 196. Amr Ahmed and Eric P Xing. 2008. Dynamic NonParametric Mixture Models and The Recurrent Chinese Restaurant Process: with Applications to Evolutionary Clustering. In Proceedings of SIAM International Conference on Data Mining, pages 219– 230. Hesam Amoualian, Marianne Clausel, Eric Gaussier, Massih-Reza Amini, Marianne Clausel, MassihReza Amini, Éric Gaussier, and Massih-Reza Amini. 2016. Streaming-LDA: A Copula-based Approach to Modeling Topic Dependencies in Document Streams. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 695–704. ACM. David Blackwell, James B MacQueen, Others, and David R. Brillinger. 1973. Ferguson distributions via Pólya urn schemes. The annals of statistics, 1(2):353–355. David M. Blei and John D. Lafferty. 2006. Dynamic topic models. ACM International Conference Proceeding Series, 148:113–120. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3:993–1022. Junyang Chen, Zhiguo Gong, and Weiwen Liu. 2019. A nonparametric model for online topic discovery with word embeddings. Information Sciences, 504:32–47. Salah Ud Din and Junming Shao. 2020. Exploiting evolving micro-clusters for data stream classification with emerging class detection. Information Sciences, 507:404–420. Salah Ud Din, Junming Shao, Jay Kumar, Waqar Ali, Jiaming Liu, and Yu Ye. 2020. Online Reliable Semisupervised Learning on Evolving Data Streams. Information Sciences, 507. Thomas S Ferguson and Thomas S Ferguson. 1973. A Bayesian analysis of some nonparametric problems. The annals of statistics, 1(2):209–230. 775 Hongyu Gong, Tarek Sakakini, Suma Bhat, and Jinjun Xiong. 2018. Document similarity for texts of varying lengths via hidden topics. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 2341–2351. Amir Hadifar, Lucas Sterckx, Thomas Demeester, and Chris Develder. 2019. A Self-Training Approach for Short Text Clustering. In Proceedings of the 4th Workshop on Representation Learning for NLP (ACL), pages 194–199. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. Association for Computational Linguistics, 1:328– 339. Ruizhang Rui-zhang Huang, Guan Yu, Zhaojun Wang, Jun Zhang, and Liangxing Shi. 2013. Dirichlet process mixture model for document clustering with feature partition. IEEE Transactions on Knowledge and Data Engineering, 25(8):1748–1759. Md. Kamrul Islam, Md. 
Manjur Ahmed, and Kamal Z. Zamli. 2019. A buffer-based online clustering for evolving data stream. Information Sciences, 489:113–135. Caiyan Jia, Matthew B. Carson, Xiaoyang Wang, and Jian Yu. 2018. Concept decompositions for short text clustering by identifying word communities. Pattern Recognition, 76:691–703. Shangsong Liang, Emine Yilmaz, and Evangelos Kanoulas. 2016. Dynamic Clustering of Streaming Short Documents. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 995–1004. ACM. Alireza Rezaei Mahdiraji. 2009. Clustering data stream: A survey of algorithms. International Journal of Knowledge-Based and Intelligent Engineering Systems, 13(2):39–44. Radford M Neal. 2000. Markov chain sampling methods for Dirichlet process mixture models. Journal of computational and graphical statistics, 9(2):249– 265. Hai Long Nguyen, Yew Kwong Woon, and Wee Keong Ng. 2015. A survey on data stream clustering and classification. Knowledge and Information Systems, 45(3):535–569. Jipeng Qiang, Yun Li, Yunhao Yuan, and Xindong Wu. 2018. Short text clustering based on PitmanYor process mixture model. Applied Intelligence, 48(7):1802–1812. Junming Shao, Yue Tan, Lianli Gao, Qinli Yang, Claudia Plant, and Ira Assent. 2019. Synchronizationbased clustering on evolving data stream. Inf. Sci., 501:573–587. Lidan Shou, Zhenhua Wang, Ke Chen, and Gang Chen. 2013. Sumblr Continuous Summarization of Evolving Tweet Streams. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, pages 533– 542. ACM. Jonathan A. Silva, Elaine R. Faria, Rodrigo C. Barros, Eduardo R. Hruschka, André C. P. L. F. de Carvalho, and João Gama. 2013. Data stream clustering: A Survey. ACM Computing Surveys, 46(1):1–31. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Yu Wang, Eugene Agichtein, and Michele Benzi. 2012. TM-LDA: Efficient Online Modeling of Latent Topic Transitions in Social Media. In International conference on Knowledge discovery and data mining, pages 123–131. ACM. Xing Wei, Jimeng Sun, and Xuerui Wang. 2007. Dynamic mixture models for multiple time series. In International Joint Conference on Artificial Intelligence, pages 2909–2914. Jianhua Yin, Daren Chao, Zhongkun Liu, Wei Zhang, Xiaohui Yu, and Jianyong Wang. 2018. Modelbased Clustering of Short Text Streams. In ACM International Conference on Knowledge Discovery and Data Mining, pages 2634–2642. Jianhua Yin and Jianyong Wang. 2014. A dirichlet multinomial mixture model-based approach for short text clustering. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 233–242. ACM. Jianhua Yin and Jianyong Wang. 2016a. A modelbased approach for text clustering with outlier detection. In IEEE International Conference on Data Engineering, pages 625–636. Jianhua Yin and Jianyong Wang. 2016b. A Text Clustering Algorithm Using an Online Clustering Scheme for Initialization. In ACM International Conference on Knowledge Discovery and Data Mining, pages 1995–2004. Zhong Zhang, Chongming Gao, Chongzhi Liu, Qinli Yang, and Junming Shao. 2019. Towards robust arbitrarily oriented subspace clustering. 
In Database Systems for Advanced Applications - 24th International Conference, DASFAA 2019, Chiang Mai, Thailand, April 22-25, 2019, Proceedings, Part I, volume 11446 of Lecture Notes in Computer Science, pages 276–291. Springer. Yukun Zhao, Shangsong Liang, Zhaochun Ren, Jun Ma, Emine Yilmaz, and Maarten de Rijke. 2016. Explainable User Clustering in Short Text Streams. In International ACM conference on Research and Development in Information Retrieval, pages 155–164. 776 Shi Zhong. 2005. Efficient streaming text clustering. Neural Networks, 18(5-6):790–798.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7828–7838 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7828 Returning the N to NLP: Towards Contextually Personalized Classification Models Lucie Flek Mainz University of Applied Sciences Germany [email protected] Abstract Most NLP models today treat language as universal, even though socio- and psycholingustic research shows that the communicated message is influenced by the characteristics of the speaker as well as the target audience. This paper surveys the landscape of personalization in natural language processing and related fields, and offers a path forward to mitigate the decades of deviation of the NLP tools from sociolingustic findings, allowing to flexibly process the “natural” language of each user rather than enforcing a uniform NLP treatment. It outlines a possible direction to incorporate these aspects into neural NLP models by means of socially contextual personalization, and proposes to shift the focus of our evaluation strategies accordingly. 1 Introduction Our language is influenced by one’s individual characteristics as well as by the affinity to various sociodemographic groups (Bucholtz and Hall, 2005; McPherson et al., 2001; Eckert and McConnellGinet, 2013). Yet the majority of NLP models today treats language as universal, acknowledging that words have different meanings in different semantic context, but typically assuming that this context has the same meaning for everyone. In this paper, I propose that our focus shifts towards interpreting the language together with its userdependent, contextual personal and social aspects, in order to truly process the “natural” language of a user. I outline a possible direction to incorporate these aspects into neural NLP models, and suggest to adjust our evaluation strategies. The paper is structured with the following aims in mind: Sec. 2 provides historical context, seeking evidence on personalization needs. Sec. 3 reviews existing personalization work, as the personalization efforts and success stories are scattered across contributions to various applied tasks. Sec. 4 contemplates on how NLP personalization could be adopted as a process of several stages. Sec. 5 outlines an implementation proposal on contextually personalized classification models, building upon flexible, socially conditioned user representations. Sec. 6 proposes novel evaluation approaches reflecting the benefit of personalized models. Finally, Sec. 7 opens the discussion on ethical aspects, nonpersonalizable NLP tasks, and the role of industry in personal data collection and protection. 2 Historical context Since 1990s, with the rise of so-called empirical or statistical NLP area (Manning et al., 1999; Brill and Mooney, 1997), the focus on frequently appearing phenomena in large textual data sets unavoidably led to NLP tools supporting “standard English” for generic needs of an anonymous user. An NLP tool whether e.g. a POS tagger, dependency parser, machine translation model or a topic classifier - was typically provided as one trained model for one language (Toutanova et al., 2003; Klein and Manning, 2003; Morton et al., 2005), or, later on, for major underperforming domains, such as Twitter (Gimpel et al., 2011). However, enforcing artificial domain boundaries is suboptimal (Eisenstein, 2013). 
Neglecting the variety of users and use cases doesn’t make the tools universally applicable with the same performance - it only makes our community blind to the built-in bias towards the specifics of user profiles in training data (Hovy, 2015; Tatman, 2017). Meanwhile, in the information retrieval area, personalization has been incorporated from the early days - it is a long-accepted paradigm that different users with different information needs might search for that need using the same query (Verhoeff et al., 1961) and that individual information needs evolve (Taylor, 1968). With the rising popularity of search engines in the 1990s, the need for personalization in the interpretation of the query became obvious (Wilson, 1999). Exploiting logs of user search interactions allowed personalization at scale (Carbonell and Goldstein, 1998; Sanderson and Croft, 2012). In the 2000s, it became acceptable to personalize search results using implicit information about a user’s interests and activities, e.g. leveraging browsing history or even e-mail conversations (Teevan et al., 2005; Dou et al., 2007; Matthijs and Radlinski, 2011). Today, hardly any of us can imagine that searching e.g. for a pizzeria from our cell phone would return the same list of results for everyone, regardless of our location. The area of recommendation systems has followed the IR trends, with more emphasis on the social than the personal component. Already the early GroupLens Usenet experiments (Miller et al., 1997; Resnick et al., 1994) showed the effectiveness of personalized article recommendations via collaborative filtering. Acknowledging the potential of personalizing via similar or related users, the focus moved towards exploiting information from users’ social networks (Guy et al., 2010; De Francisci Morales et al., 2012; Guy et al., 2009). Similar developments are emerging for example in the area of personalized language models (Ji et al., 2019; Wen et al., 2012; Yoon et al., 2017; McMahan et al., 2017), which are largely used e.g. in predictive writing, and in natural language generation (Oraby et al., 2018; Harrison et al., 2019), aiming e.g. at selecting and preserving a consistent personality and style within a discourse. Drawing inspiration from these areas, I argue it is natural for users to expect personalized approaches when an NLP system attempts to interpret their language, i.e., attempts to assign any label to a provided text segment, whether it is, e.g., a sentiment of their sentence, a part-of-speech of a word they used, a sense definition from a knowledge base, or even a translation. As I discuss in the following section, even basic personal information has been shown to be relevant for system accuracy. 3 User traits and NLP models Inferring user traits We adjust our language with respect to the sociodemographic group we feel related to (McPherson et al., 2001; Bucholtz and Hall, 2005; Holmes and Meyerhoff, 2008; Eckert, 2012). This language adjustment can, in turn, be used in NLP algorithms to infer a range of individual user traits.
Experiments have been conducted with estimating variables such as age (Rao et al., 2010; Nguyen et al., 2011), gender (Burger et al., 2011; Bamman et al., 2014; Sap et al., 2014), geolocation (Eisenstein et al., 2010), political preferences (Volkova et al., 2014), socio-economic status (Preoțiuc-Pietro et al., 2015), impact (Lampos et al., 2014), and a range of psychological traits and issues (Schwartz et al., 2013; Park et al., 2015; Sumner et al., 2012; Guntuku et al., 2017; Coppersmith et al., 2014). While most of the above-listed experiments have been conducted on Twitter, a variety of other datasets have been used, including phone conversations (Mairesse et al., 2007; Ivanov et al., 2011), blogs (Mukherjee and Liu, 2010; Schler et al., 2006), Facebook (Markovikj et al., 2013), and YouTube (Filippova, 2012). Human judges show surprisingly poor performance on user profiling tasks, grounding their judgement in topical stereotypes (Carpenter et al., 2017). However, albeit more accurate thanks to capturing elements of stylistic variation, statistical models are prone to stereotype propagation as well (Costa-jussà et al., 2019; Koolen and van Cranenburgh, 2017). While many experiments have been conducted using discrete variables for demographics and personality, real-valued continuous representations are preferable (Lynn et al., 2017). Numerous researchers have pointed out that it would be more meaningful to create models building on recent developments in sociolinguistics, i.e. treating demographic variables as fluid and social, e.g. modeling what influences speakers to show more or less of their identity through language, or jointly modeling variation between and within speakers (Eckert and McConnell-Ginet, 2013; Nguyen et al., 2014; Bamman et al., 2014; Eisenstein, 2013). Improving NLP tasks with user traits Actively accounting for sociodemographic factors in text classification models leads to improved performance across NLP applications. So far, such studies have been conducted most prominently for English, using age and gender variables, with the most focus on sentiment analysis tasks (Volkova et al., 2013; Hovy, 2015; Lynn et al., 2017; Yang and Eisenstein, 2017). Other explored tasks include topic detection, part-of-speech tagging (Hovy, 2015), prepositional phrase attachment, sarcasm detection (Lynn et al., 2017), fake news detection (Long et al., 2017; Potthast et al., 2018), and detection of mental health issues (Benton et al., 2016). Apart from demographic variables, personality traits play a role as well - e.g. in stance detection (Lynn et al., 2017), sarcasm detection, opinion change prediction (Lukin et al., 2017), and prediction of regional life satisfaction or mortality rate (Zamani et al., 2018). NLP models can also improve by exploiting a user’s past context and prior beliefs, e.g. for sarcasm (Bamman and Smith, 2015), stance prediction (Sasaki et al., 2018), persuasion (Durmus and Cardie, 2018) or conversation re-entry (Zeng et al., 2019). Methods used to incorporate these social and psychological variables into models are discussed in Sec. 5. Improving NLP tasks with social graphs An emerging line of research makes use of social interactions to derive information about the user, representing each user as a node in a social graph and creating low-dimensional user embeddings induced by neural architectures (Grover and Leskovec, 2016; Qiu et al., 2018).
Including network information improves performance on profiling tasks such as predicting user gender (Farnadi et al., 2018) or occupation (Pan et al., 2019), as well as on detecting online behavior such as cyberbullying (Mathur et al., 2018), abusive language use (Qian et al., 2018; Mishra et al., 2018) or suicide ideation (Mishra et al., 2019). 4 NLP personalization as a process From the user experience perspective, personalization of NLP tools could be divided into three steps. Explicit input. In the first step, the user is allowed to provide personal information for the NLP components explicitly. The depth of information provided can vary from specifying one’s own age to taking personality questionnaires. This user behavior is somewhat similar to subscribing to topics of interest for personalized newsletters - the user has full control over the level of customization. However, the results of increasing the burden on the user can be inferior to implicit inference (Teevan et al., 2005). Implicit inference. More conveniently, personal information about the user can be inferred implicitly by the system, as demonstrated e.g. by the models discussed in Sec. 3. The result of such inference can be either a set of explicit labels, or a latent user representation capturing similar information in a larger number of data-driven dimensions. For the user, such personalization might currently feel intrusive in the context of an NLP system; however, in many related research areas, user expectations have already shifted (cf. Sec. 2). Contextualized implicit inference. In the third step, personalization also includes intra-user modeling of different individual contexts based on the user’s communication goals. This reflects the social science argument that an identity is the product rather than the source of linguistic and other semiotic practices, and that identities are relationally constructed through several, often overlapping, aspects of the relationship between self and other, including similarity/difference, genuineness/artifice and authority/delegitimacy (Bucholtz and Hall, 2005). This approach is also aligned with NLP findings on social power in dialogue (Bracewell et al., 2012; Bramsen et al., 2011; Prabhakaran et al., 2012). Such a solution can be perceived as less invasive by users, as the contextual adaptation may diminish the otherwise built-in stereotypes of language use (e.g. some users may prefer to use more emotionally charged words in private social contexts, but not necessarily in professional conversations). 5 Methods of incorporating psychosocial profiles into NLP models Early experiments used basic demographic variables directly as input features in the model (Volkova et al., 2013). Hovy (2015) uses age and gender as modifying factors for the input word embeddings. In a similar manner, Lynn et al. (2017) use a multiplicative compositional function to combine continuous user trait scores, inferred via factor analysis, with original feature values, augmenting the feature set so that each feature exists with and without the trait information integrated. Benton et al. (2017) use age and gender as auxiliary tasks in a multitask learning setup for psychological labeling of users. Zamani and Schwartz (2017) apply a residualized control approach for their task, training a language model over the prediction errors of the model trained on sociodemographic variables only. Later they combine it with the factor analysis approach (Zamani et al., 2018).
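To make the flavor of these feature-augmentation approaches concrete, a minimal sketch in the spirit of the multiplicative composition described above might look as follows; this is an illustrative simplification, not the exact formulation of any of the cited papers, and the function name and array shapes are assumptions:

```python
import numpy as np

def augment_with_traits(features: np.ndarray, traits: np.ndarray) -> np.ndarray:
    """Multiplicative user-factor adaptation (illustrative sketch).

    features: (n_features,) original feature vector for one document
    traits:   (n_traits,) continuous user trait scores, e.g. from factor analysis

    Returns the augmented vector [features, traits, features x traits], so that
    every original feature exists both with and without trait information.
    """
    # Outer product crosses each trait score with each original feature.
    crossed = np.outer(traits, features).ravel()
    return np.concatenate([features, traits, crossed])

# Usage: a 3-dimensional feature vector and 2 continuous trait scores.
x = np.array([0.2, 1.0, 0.5])
u = np.array([0.8, -0.3])
print(augment_with_traits(x, u).shape)  # (3 + 2 + 6,) = (11,)
```

The augmented vector can then be fed to any downstream classifier, which is what makes this family of methods easy to retrofit onto existing feature-based models.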
Benton et al. (2016) learn user representations by encoding a user’s social network as a vector, where users with similar social networks have similar vector representations. A commonly used technique is to define the “context” for each node, for example by random walks, and train a predictive model to perform context prediction. Similar network-based learning is employed in node2vec (Grover and Leskovec, 2016). Yang and Eisenstein (2017) propose to use neural attention mechanisms in a social graph over followers, mentions and retweets, to leverage linguistic homophily. However, the user modeling approaches discussed so far focus on finding one representation for one user. A modern, personalized NLP system shall be able to capture not only the inherent semantic aspects of the analyzed discourse together with the latent vectorial representations of user characteristics, but also contextual user profiles based on an identity sought in their current social microenvironment. A strengthened industry-academia cooperation is crucial in such data collection (more on this in Sec. 7). Assuming access to a larger online history for each user, we could draw a parallel to the design of contextual word embeddings (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019), which train neural networks as language models, then use the context vectors provided for each word token as pretrained word vectors. With an increasing number of online corpora containing user metadata, we can use recurrent or attentive neural networks to create large-scale social representations of users in a similar manner, allowing multiple pretrained “senses” of each user identity - vector representations of user conversational styles, opinions, interests, etc., treating those representations as dynamically changing in different social contexts. These representations can then be matched to new users based on the sparse linguistic, sociodemographic, psychological, and network information available, and fine-tuned on the context of a given task in a given social microenvironment, e.g. based on the stable part of the personal vectorial representation of the other users present in the conversation. 6 Evaluation Currently, most of the NLP ground truth exists in a vacuum, “for everyone”. Our systems typically use labels obtained as an average or majority vote provided by a number of impersonated annotators, even for tasks where they highly disagree (Waseem, 2016; Stab and Gurevych, 2014). As pointed out in Bender and Friedman (2018), we rarely get to know anything about the people other than if they were “expert” (read: undergrad students vs. lab colleagues). If we truly aim at personalizing NLP systems, the first step is understanding who the recipients of our system decisions are. In contrast to IR, where the user of the interpreted result is normally the author of the query, in NLP the use cases vary. For example, rather than merely labeling a piece of text as “sarcasm”, we shall ask (A) Did the author mean this statement as sarcasm? (B) Was this understood by others as sarcasm? What kinds of users interpret this statement as sarcasm? In the tasks of type A, it is sensible to ask the authors themselves about the intended label (e.g. Are we correct that this was a joke / positive review / supportive argument?). We shall further assess the value of system personalization. E.g. a user may prefer a model that correctly interprets her sarcasm even when most annotators typically don’t recognize it.
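As a minimal illustration of such author-centered evaluation, one could report a model’s agreement with the labels intended by the authors alongside its agreement with the usual aggregated annotator labels. The sketch below is hypothetical; the data layout and field names are assumptions, not part of any existing benchmark:

```python
from typing import Dict, List

def author_vs_majority_accuracy(records: List[Dict]) -> Dict[str, float]:
    """Compare predictions against author-intended vs. majority-vote labels.

    Each record is assumed to look like:
      {"prediction": "sarcasm", "author_label": "sarcasm", "majority_label": "literal"}
    """
    author_hits = sum(r["prediction"] == r["author_label"] for r in records)
    majority_hits = sum(r["prediction"] == r["majority_label"] for r in records)
    n = len(records)
    return {"author_accuracy": author_hits / n, "majority_accuracy": majority_hits / n}

# A model that recovers the author's intent can look weak under majority labels.
records = [
    {"prediction": "sarcasm", "author_label": "sarcasm", "majority_label": "literal"},
    {"prediction": "literal", "author_label": "literal", "majority_label": "literal"},
]
print(author_vs_majority_accuracy(records))
# {'author_accuracy': 1.0, 'majority_accuracy': 0.5}
```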
We can take inspiration from subjective measures used in evaluating spoken dialogue systems, such as A/B testing (Kohavi et al., 2014), customer satisfaction (Kelly et al., 2009; Kiseleva et al., 2016) or interestingness (Harrison et al., 2019; Oraby et al., 2018). Yet most of the tasks are of type B, where we implicitly try to label how a piece of text is perceived by others (e.g. hate speech, assertiveness, persuasiveness, hyperpartisan argumentation). Given that these “others” vary in their judgments (Kenny and Albright, 1987) and this variation is informative for NLP models (Plank et al., 2014; Chklovski and Mihalcea, 2003), I suggest we start caring in NLP explicitly about who these “others” are, and evaluate our models with respect to labels assigned by defined target groups of users (e.g. with regard to sociodemographics, personality, or expertise in the task) rather than one objective truth. Initial exploration of this area has started e.g. for perceived demographics (Volkova and Bachrach, 2016; Carpenter et al., 2017) and natural language inference (Pavlick and Kwiatkowski, 2019). 7 Ethical considerations The ability to automatically approximate personal characteristics of online users in order to improve language understanding algorithms requires us to consider a range of ethical concerns. Unfair use prevention It is almost impossible to prevent abuse of once-released technology, even when it was developed with good intentions (Jonas, 1983). Hence it may be more constructive to strive for an informed public, addressing the dual-use danger with a preemptive disclosure (Rogaway, 2015; Hovy and Spruit, 2016) - letting potential abusers know that certain illegal and unethical purposes of using personalized models are not supported, and letting potential users know about the risk. For example, the European Ethics Guidelines for Trustworthy AI foresee that “Digital records of human behaviour may allow AI systems to infer not only individuals’ preferences, but also their sexual orientation, age, gender, religious or political views.” and claim that “it must be ensured that data collected about them will not be used to unlawfully or unfairly discriminate against them.” Incorrect and stereotypical profiling Sociodemographic classification efforts risk invoking stereotyping and essentialism. Such stereotypes can cause harm even if they are accurate on average differences (Rudman and Glick, 2012). These can be emphasized by the semblance of objectivity created by the use of a computer algorithm (Koolen and van Cranenburgh, 2017). It is important that we control for variables in the corpus as well as for our own interpretation biases. Privacy protection Use of any data for personalization shall be transparent. Even public social media data shall be used with consent and in an aggregated manner; no individual posts shall be republished (Hewson and Buchanan, 2013). Regarding explicit consent, research shall take account of users’ expectations (Williams et al., 2017; Shilton and Sayles, 2016; Townsend and Wallace, 2016). A similar issue is discussed by Smiley et al. (2017) regarding NLG ethics, as NLG systems can incorporate the background and context of a user to increase the communication effectiveness of the text, but as a result may be missing alternative views. They suggest addressing this limitation by making users aware of the use of personalization, similar to addressing provenance.
Role of industry and academia in user data collection Privacy and controllability are auxiliary tasks to personalization and adaptation (Torre, 2009). Strictly protecting user privacy when collecting user data for model personalization is of utmost importance for preserving user trust, which is why, perhaps counter-intuitively, I encourage stronger industry-academia collaborations to facilitate a less intrusive data treatment. Inspiration can be taken from the concept of differential privacy (Dwork, 2008), applied e.g. in differentially private language models (McMahan et al., 2017), which allow customization for the user without incorporating her private vocabulary information into the public cloud model. Similarly, doing academic research on personalized NLP classification tasks directly within industry applications such as mobile apps, with explicit user consent, would enable transparent experiments at scale, being potentially more secure than gathering and manipulating one-time academic data collections offline. It may also contribute to better generalizability of the conclusions than strictly academic case studies that are typically limited in scale. Personalization as a harmful ambiguity layer Given the field’s bias towards reporting personalization results only when they are successful, no “unpersonalizable” tasks have been defined so far. With that, one question remains open - can we benefit from personalization everywhere across NLP, or are there cases where subjective treatment of language is not desired, or even harmful? E.g., a legal text shall remain unambiguous in its interpretation. On the other hand, the ability to understand it is subjective, and some users may appreciate lexical simplification (Xu et al., 2015). Are there objective NLP tasks as such, or can we segment all of those into an objective and a subjective part of the application? 8 Conclusion Building upon Eisenstein (2013), Lynn et al. (2017), and Hovy (2018), I argue that, following the historical development in areas related to NLP, users are ready also for the personalization of text classification models, enabling more flexible adaptation to truly processing their “natural” language rather than enforcing a uniform NLP treatment for everyone. Reflecting the current possibilities with available web and mobile data, I propose to expand the existing user modeling approaches in deep learning models with contextual personalization, mirroring different facets of one user in dynamic, socially conditioned vector representations. Modeling demographic and personal variables as dynamic and social will allow us to reflect the variety of ways individuals construct their identity through language, and to conduct novel sociolinguistic experiments to better understand the development in online communities. I also suggest shifting the focus of our evaluation strategies towards the individual aims and characteristics of the end users of our labeling models, rather than aggregating all variations into objective truths, which will allow us to pay more attention to the social biases present in our models. References David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014. Gender identity and lexical variation in social media. Journal of Sociolinguistics, 18(2):135–160. David Bamman and Noah A Smith. 2015. Contextualized sarcasm detection on Twitter. In Ninth International AAAI Conference on Web and Social Media. Emily M. Bender and Batya Friedman. 2018.
Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. Adrian Benton, Raman Arora, and Mark Dredze. 2016. Learning multiview embeddings of twitter users. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 14–19, Berlin, Germany. Association for Computational Linguistics. Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multitask learning for mental health conditions with limited social media data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 152–162, Valencia, Spain. Association for Computational Linguistics. David Bracewell, Marc Tomlinson, and Hui Wang. 2012. Identification of social acts in dialogue. In Proceedings of COLING 2012, pages 375–390, Mumbai, India. The COLING 2012 Organizing Committee. Philip Bramsen, Martha Escobar-Molano, Ami Patel, and Rafael Alonso. 2011. Extracting social power relationships from natural language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 773–782, Portland, Oregon, USA. Association for Computational Linguistics. Eric Brill and Raymond J Mooney. 1997. An overview of empirical natural language processing. AI magazine, 18(4):13–13. Mary Bucholtz and Kira Hall. 2005. Identity and interaction: A sociocultural linguistic approach. Discourse studies, 7(4-5):585–614. John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of the conference on empirical methods in natural language processing, pages 1301–1309. Association for Computational Linguistics. Jaime G Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR, volume 98, pages 335–336. Jordan Carpenter, Daniel Preotiuc-Pietro, Lucie Flekova, Salvatore Giorgi, Courtney Hagan, Margaret L Kern, Anneke EK Buffone, Lyle Ungar, and Martin EP Seligman. 2017. Real men don’t say “cute” using automatic language analysis to isolate inaccurate aspects of stereotypes. Social Psychological and Personality Science, 8(3):310–322. Timothy Chklovski and Rada Mihalcea. 2003. Exploiting agreement and disagreement of human annotators for word sense disambiguation. In Proceedings of RANLP 2003. Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying mental health signals in Twitter. In Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, pages 51–60. Marta R Costa-juss`a, Christian Hardmeier, Will Radford, and Kellie Webster. 2019. Proceedings of the first workshop on gender bias in natural language processing. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. Gianmarco De Francisci Morales, Aristides Gionis, and Claudio Lucchese. 2012. From chatter to headlines: harnessing the real-time web for personalized news recommendation. In Proceedings of the fifth ACM international conference on Web search and data mining, pages 153–162. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zhicheng Dou, Ruihua Song, and Ji-Rong Wen. 2007. A large-scale evaluation and analysis of personalized search strategies. In Proceedings of the 16th international conference on World Wide Web, pages 581– 590. ACM. Esin Durmus and Claire Cardie. 2018. Exploring the role of prior beliefs for argument persuasion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1035–1045, New Orleans, Louisiana. Association for Computational Linguistics. Cynthia Dwork. 2008. Differential privacy: A survey of results. In International conference on theory and applications of models of computation, pages 1–19. Springer. Penelope Eckert. 2012. Three waves of variation study: The emergence of meaning in the study of sociolinguistic variation. Annual review of Anthropology, 41:87–100. 7834 Penelope Eckert and Sally McConnell-Ginet. 2013. Language and gender. Cambridge University Press. Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 359–369, Atlanta, Georgia. Association for Computational Linguistics. Jacob Eisenstein, Brendan O’Connor, Noah A Smith, and Eric P Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 1277–1287. Association for Computational Linguistics. Golnoosh Farnadi, Jie Tang, Martine De Cock, and Marie-Francine Moens. 2018. User profiling through deep multimodal fusion. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM ’18, pages 171– 179, New York, NY, USA. ACM. Katja Filippova. 2012. User demographics and language in an implicit social network. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1478–1488, Stroudsburg, PA, USA. Association for Computational Linguistics. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 42–47, Portland, Oregon, USA. Association for Computational Linguistics. Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 855–864, New York, NY, USA. ACM. Sharath Chandra Guntuku, David B Yaden, Margaret L Kern, Lyle H Ungar, and Johannes C Eichstaedt. 2017. Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences, 18:43–49. Ido Guy, Naama Zwerdling, David Carmel, Inbal Ronen, Erel Uziel, Sivan Yogev, and Shila OfekKoifman. 2009. Personalized recommendation of social software items based on social relations. 
In Proceedings of the third ACM conference on Recommender systems, pages 53–60. ACM. Ido Guy, Naama Zwerdling, Inbal Ronen, David Carmel, and Erel Uziel. 2010. Social media recommendation based on people and tags. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 194–201. ACM. Vrindavan Harrison, Lena Reed, Shereen Oraby, and Marilyn Walker. 2019. Maximizing stylistic control and semantic accuracy in NLG: Personality variation and discourse contrast. arXiv preprint arXiv:1907.09527. Claire Hewson and Tom Buchanan. 2013. Ethics guidelines for internet-mediated research. The British Psychological Society. J Holmes and M Meyerhoff. 2008. The handbook of language and gender (vol. 25). Hoboken, NJ: Wiley. Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 752–762, Beijing, China. Association for Computational Linguistics. Dirk Hovy. 2018. The social and the neural network: How to make natural language processing about people again. In Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pages 42–49. Dirk Hovy and Shannon L Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Alexei V Ivanov, Giuseppe Riccardi, Adam J Sporka, and Jakub Franc. 2011. Recognition of personality traits from human spoken conversations. In Twelfth Annual Conference of the International Speech Communication Association. Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, and Zi Huang. 2019. Learning private neural language modeling with attentive aggregation. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Hans Jonas. 1983. Das prinzip verantwortung. versuch einer ethik f¨ur die technologische zivilisation. Zeitschrift f¨ur Philosophische Forschung, 37(1):144–147. Diane Kelly et al. 2009. Methods for evaluating interactive information retrieval systems with users. Foundations and Trends R⃝in Information Retrieval, 3(1– 2):1–224. 7835 David A Kenny and Linda Albright. 1987. Accuracy in interpersonal perception: a social relations analysis. Psychological bulletin, 102(3):390. Julia Kiseleva, Kyle Williams, Jiepu Jiang, Ahmed Hassan Awadallah, Aidan C Crook, Imed Zitouni, and Tasos Anastasakos. 2016. Understanding user satisfaction with intelligent assistants. In Proceedings of the 2016 ACM on Conference on Human Information Interaction and Retrieval, pages 121–130. ACM. Dan Klein and Christopher D Manning. 2003. Fast exact inference with a factored model for natural language parsing. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 3–10. MIT Press. Ron Kohavi, Alex Deng, Roger Longbotham, and Ya Xu. 2014. Seven rules of thumb for web site experimenters. 
In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1857–1866. ACM. Corina Koolen and Andreas van Cranenburgh. 2017. These are not the stereotypes you are looking for: Bias and fairness in authorial gender attribution. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 12–22, Valencia, Spain. Association for Computational Linguistics. Vasileios Lampos, Nikolaos Aletras, Daniel Preot¸iucPietro, and Trevor Cohn. 2014. Predicting and characterising user impact on Twitter. In 14th conference of the European chapter of the Association for Computational Linguistics 2014, EACL 2014, pages 405–413. Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, and Chu-Ren Huang. 2017. Fake news detection through multi-perspective speaker profiles. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 252–256, Taipei, Taiwan. Asian Federation of Natural Language Processing. Stephanie Lukin, Pranav Anand, Marilyn Walker, and Steve Whittaker. 2017. Argument strength is in the eye of the beholder: Audience effects in persuasion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 742–753. Veronica Lynn, Youngseo Son, Vivek Kulkarni, Niranjan Balasubramanian, and H Andrew Schwartz. 2017. Human centered nlp with user-factor adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1146–1155. Franc¸ois Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of artificial intelligence research, 30:457–500. Christopher D Manning, Christopher D Manning, and Hinrich Sch¨utze. 1999. Foundations of statistical natural language processing. MIT press. Dejan Markovikj, Sonja Gievska, Michal Kosinski, and David J Stillwell. 2013. Mining facebook data for predictive personality modeling. In Seventh International AAAI Conference on Weblogs and Social Media. Puneet Mathur, Rajiv Shah, Ramit Sawhney, and Debanjan Mahata. 2018. Detecting offensive tweets in Hindi-English code-switched language. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 18– 26, Melbourne, Australia. Association for Computational Linguistics. Nicolaas Matthijs and Filip Radlinski. 2011. Personalizing web search using long term browsing history. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 25– 34. ACM. H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2017. Learning differentially private recurrent language models. arXiv preprint arXiv:1710.06963. Miller McPherson, Lynn Smith-Lovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Annual review of sociology, 27(1):415– 444. Bradley N. Miller, John T. Riedl, and Joseph A. Konstan. 1997. Experiences with grouplens: Marking usenet useful again. In Proceedings of the Annual Conference on USENIX Annual Technical Conference, ATEC ’97, pages 17–17, Berkeley, CA, USA. USENIX Association. Pushkar Mishra, Marco Del Tredici, Helen Yannakoudakis, and Ekaterina Shutova. 2018. Author profiling for abuse detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1088–1098, Santa Fe, New Mexico, USA. 
Association for Computational Linguistics. Rohan Mishra, Pradyumn Prakhar Sinha, Ramit Sawhney, Debanjan Mahata, Puneet Mathur, and Rajiv Ratn Shah. 2019. SNAP-BATNET: Cascading author profiling and social network graphs for suicide ideation detection on social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 147–156, Minneapolis, Minnesota. Association for Computational Linguistics. Thomas Morton, Joern Kottmann, Jason Baldridge, and Gann Bierner. 2005. Opennlp: A java-based nlp toolkit. In Proc. EACL. 7836 Arjun Mukherjee and Bing Liu. 2010. Improving gender classification of blog authors. In Proceedings of the 2010 conference on Empirical Methods in natural Language Processing, pages 207–217. Association for Computational Linguistics. Dong Nguyen, Noah A Smith, and Carolyn P Ros´e. 2011. Author age prediction from text using linear regression. In Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 115– 123. Association for Computational Linguistics. Dong Nguyen, Dolf Trieschnigg, A. Seza Do˘gru¨oz, Rilana Gravel, Mari¨et Theune, Theo Meder, and Franciska de Jong. 2014. Why gender and age prediction from tweets is hard: Lessons from a crowdsourcing experiment. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1950– 1961, Dublin, Ireland. Dublin City University and Association for Computational Linguistics. Shereen Oraby, Lena Reed, Shubhangi Tandon, TS Sharath, Stephanie Lukin, and Marilyn Walker. 2018. Controlling personality-based stylistic variation with neural natural language generators. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 180–190. Jiaqi Pan, Rishabh Bhardwaj, Wei Lu, Hai Leong Chieu, Xinghao Pan, and Ni Yi Puay. 2019. Twitter homophily: Network based prediction of user’s occupation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2633–2638, Florence, Italy. Association for Computational Linguistics. Gregory Park, H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Michal Kosinski, David J Stillwell, Lyle H Ungar, and Martin EP Seligman. 2015. Automatic personality assessment through social media language. Journal of personality and social psychology, 108(6):934. Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677–694. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 742–751, Gothenburg, Sweden. Association for Computational Linguistics. Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A stylometric inquiry into hyperpartisan and fake news. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 231–240. Vinodkumar Prabhakaran, Owen Rambow, and Mona Diab. 2012. Who’s (really) the boss? perception of situational power in written interactions. In Proceedings of COLING 2012, pages 2259–2274, Mumbai, India. The COLING 2012 Organizing Committee. Daniel Preot¸iuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015. An analysis of the user occupational class through twitter content. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1754–1764, Beijing, China. Association for Computational Linguistics. Jing Qian, Mai ElSherief, Elizabeth Belding, and William Yang Wang. 2018. Leveraging intra-user and inter-user representation learning for automated hate speech detection. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 118–123, New Orleans, Louisiana. Association for Computational Linguistics. Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. 2018. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM ’18, pages 459–467, New York, NY, USA. ACM. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in Twitter. In Proceedings of the 2nd international workshop on Search and mining usergenerated contents, pages 37–44. ACM. Paul Resnick, Neophytos Iacovou, Mitesh Suchak, Peter Bergstrom, and John Riedl. 1994. Grouplens: an open architecture for collaborative filtering of netnews. In Proceedings of the 1994 ACM conference on Computer supported cooperative work, pages 175–186. ACM. Phillip Rogaway. 2015. The moral character of cryptographic work. IACR Cryptology ePrint Archive, 2015:1162. Laurie A Rudman and Peter Glick. 2012. The social psychology of gender: How power and intimacy shape gender relations. Guilford Press. 7837 Mark Sanderson and W Bruce Croft. 2012. The history of information retrieval research. Proceedings of the IEEE, 100(Special Centennial Issue):1444–1451. Maarten Sap, Gregory Park, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Lyle Ungar, and Hansen Andrew Schwartz. 2014. Developing age and gender predictive lexica over social media. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1146–1151, Doha, Qatar. Association for Computational Linguistics. Akira Sasaki, Kazuaki Hanawa, Naoaki Okazaki, and Kentaro Inui. 2018. Predicting stances from social media posts using factorization machines. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3381–3390, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199–205. H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin EP Seligman, et al. 2013. 
Personality, gender, and age in the language of social media: The open-vocabulary approach. PloS one, 8(9):e73791. Katie Shilton and Sheridan Sayles. 2016. ” we aren’t all going to be on the same page about ethics”: Ethical practices and challenges in research on digital and social media. In 2016 49th Hawaii International Conference on System Sciences (HICSS), pages 1909–1918. IEEE. Charese Smiley, Frank Schilder, Vassilis Plachouras, and Jochen L. Leidner. 2017. Say the right thing right: Ethics issues in natural language generation systems. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 103–108, Valencia, Spain. Association for Computational Linguistics. Christian Stab and Iryna Gurevych. 2014. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1501–1510. Chris Sumner, Alison Byers, Rachel Boochever, and Gregory J Park. 2012. Predicting dark triad personality traits from Twitter usage and a linguistic analysis of tweets. In 2012 11th International Conference on Machine Learning and Applications, volume 2, pages 386–393. IEEE. Rachael Tatman. 2017. Gender and dialect bias in youtube’s automatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 53–59. RS Taylor. 1968. Question-negotiation and information-seeking in libraries (vol. 29): College and research libraries. Jaime Teevan, Susan T Dumais, and Eric Horvitz. 2005. Personalizing search via automated analysis of interests and activities. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 449–456. ACM. Ilaria Torre. 2009. Adaptive systems in the era of the semantic and social web, a survey. User Modeling and User-Adapted Interaction, 19(5):433–486. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252–259. Leanne Townsend and Claire Wallace. 2016. Social media research: A guide to ethics. Aberdeen: University of Aberdeen. J Verhoeff, William Goffman, and Jack Belzer. 1961. Inefficiency of the use of boolean functions for information retrieval systems. Communications of the ACM, 4(12):557–558. Svitlana Volkova and Yoram Bachrach. 2016. Inferring perceived demographics from user emotional tone and user-environment emotional contrast. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1567–1578, Berlin, Germany. Association for Computational Linguistics. Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 186–196, Baltimore, Maryland. Association for Computational Linguistics. Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring sentiment in social media: Bootstrapping subjectivity clues from multilingual twitter streams. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 505–510, Sofia, Bulgaria. Association for Computational Linguistics. 
Zeerak Waseem. 2016. Are you a racist or am I seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138– 142, Austin, Texas. Association for Computational Linguistics. 7838 Tsung-Hsien Wen, Hung-Yi Lee, Tai-Yuan Chen, and Lin-Shan Lee. 2012. Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 188–193. IEEE. Matthew L Williams, Pete Burnap, and Luke Sloan. 2017. Towards an ethical framework for publishing Twitter data in social research: Taking into account users’ views, online context and algorithmic estimation. Sociology, 51(6):1149–1168. Tom D Wilson. 1999. Models in information behaviour research. Journal of documentation, 55(3):249–270. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283–297. Yi Yang and Jacob Eisenstein. 2017. Overcoming language variation in sentiment analysis with social attention. Transactions of the Association for Computational Linguistics, 5:295–307. Seunghyun Yoon, Hyeongu Yun, Yuna Kim, Gyu-tae Park, and Kyomin Jung. 2017. Efficient transfer learning schemes for personalized language modeling using recurrent neural network. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence. Mohammadzaman Zamani and H. Andrew Schwartz. 2017. Using twitter language to predict the real estate market. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 28–33, Valencia, Spain. Association for Computational Linguistics. Mohammadzaman Zamani, H. Andrew Schwartz, Veronica Lynn, Salvatore Giorgi, and Niranjan Balasubramanian. 2018. Residualized factor adaptation for community social media prediction tasks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3560– 3569, Brussels, Belgium. Association for Computational Linguistics. Xingshan Zeng, Jing Li, Lu Wang, and Kam-Fai Wong. 2019. Joint effects of context and user history for predicting online conversation re-entries. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2809– 2818, Florence, Italy. Association for Computational Linguistics.
2020
700
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7839–7859 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7839 To Test Machine Comprehension, Start by Defining Comprehension Jesse Dunietz∗, Gregory Burnham∗, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, and David Ferrucci Elemental Cognition {jessed,gregb,akashb,owenr,jenniferc,davef} @elementalcognition.com Abstract Many tasks aim to measure MACHINE READING COMPREHENSION (MRC), often focusing on question types presumed to be difficult. Rarely, however, do task designers start by considering what systems should in fact comprehend. In this paper we make two key contributions. First, we argue that existing approaches do not adequately define comprehension; they are too unsystematic about what content is tested. Second, we present a detailed definition of comprehension—a TEMPLATE OF UNDERSTANDING—for a widely useful class of texts, namely short narratives. We then conduct an experiment that strongly suggests existing systems are not up to the task of narrative understanding as we define it. 1 Introduction Over the past few years, neural models (e.g., Chen et al., 2016; Devlin et al., 2019; Liu et al., 2019) have begun to match or even exceed human performance on MACHINE READING COMPREHENSION (MRC) benchmarks. In these tasks, systems demonstrate their comprehension of a passage by answering questions about it. Yet despite recent successes, MRC appears far from solved: systems continue to make basic, sometimes baffling mistakes, and they fail to generalize to new data. Such shortcomings have motivated a flurry of new MRC tasks, each designed to confront systems with questions deemed challenging for current methods. For example, tasks may ask questions requiring commonsense reasoning (Huang et al., 2019), multihop reasoning (Welbl et al., 2018), or inferences based on a second passage (Lin et al., 2019). This line of research assumes that ever-more“difficult” question-answering tasks will ultimately lead to more robust and useful reading comprehension. We argue that, while the question-answering *Equal contributions. format can be a fine choice for how to test comprehension, using difficulty as the basis for what to test is fundamentally flawed. To put it provocatively, the dominant MRC research paradigm is like trying to become a professional sprinter by glancing around the gym and adopting any exercises that look hard. The training may end up exercising some relevant muscles, but it is far too haphazard to achieve the ultimate goal. Like athletic training, MRC tasks are not an end in themselves; ultimately, they are meant to lead to real-world applications. Current tasks may suffice for sufficiently similar applications—e.g., chatbots that look up customer questions in product documentation. But many proposed NLP applications hinge on deeper comprehension. Early work (e.g., Dyer, 1982) pointed to examples like assistance with legal disputes and service contracts; more recent work suggests applications such as summarizing a patient’s clinical timeline (Jung et al., 2011). For such complex applications, machines will need to manipulate rich models of the world evoked by the text—e.g., to compare a claimant’s narrative to legal standards, or to build a causal model of a patient’s condition. From this broader perspective, the current paradigm falls short. 
Specifically, we claim that in the quest for difficulty, task designers overlook the issue of what content—what information expressed, implied, or relied on by the passage—systems should comprehend. MRC datasets are usually constructed by having humans cast about for supposedly tricky questions, most often questions based on reasoning. But the questions that result are scattershot, offering little assurance that even a high-scoring system has achieved a useful and robust understanding. We advocate for a different approach. We propose that the first step in defining MRC tasks should be specifying what content a system would likely need to understand for a given class of applications. Only then can tasks systematically compile questions to probe for the internal model that the machine ought to have constructed. This paper demonstrates such an approach for applications that involve understanding narratives.[1] After reviewing existing approaches to constructing MRC datasets (§2), we argue for narratives as a valuable MRC testbed (§3.1). Then, inspired by cognitive science research on reading comprehension, we propose a “template of understanding” (ToU) for stories—an account of what an internal model of a story should minimally contain (§3.2). We also suggest ways to operationalize our ToU as a story comprehension task (§4). Finally, we show evidence from a pilot ToU-based task that current MRC models are not up to the challenge (§5). 2 Existing MRC dataset designs This paper addresses how MRC tests can be made more systematic. Accordingly, we review existing tasks grouped by their data collection methods. We argue that each category falls short of testing a useful body of content in a satisfying way. 2.1 Manually written questions By far the most popular strategy for generating MRC questions is to have humans—usually crowd workers, but sometimes trained annotators—think of questions about each passage. The most straightforward version of this method gives annotators little to no guidance regarding what questions to ask. One early example is the TREC-8 dataset (Voorhees and Tice, 2000). In the more recent SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) entailment tasks, the only constraint on crowd workers was that they produce one entailed, one contradicted, and one neutral hypothesis for each premise sentence.[2] Similarly, the workers who assembled NewsQA (Trischler et al., 2017) were told only that the questions had to be answerable with short phrases, and workers for SQuAD (Rajpurkar et al., 2016) were simply given a “good” and a “bad” example and encouraged to use original wording. [1] We will use “narrative” and “story” interchangeably, roughly following the Wikipedia definition: “A narrative or story is an account of a series of related events, experiences, or the like, whether true...or fictitious.” [2] Parts of the original RTE datasets (Dagan et al., 2006, etc.) were generated more systematically, but only in the sense that the outputs of NLP tools (e.g., translation or information extraction systems) were recorded as correct/incorrect examples of entailment. Little attention was paid to subject matter. The problem with such an open-ended generation process is that, absent stronger guidance, people tend to write simple questions that can be answered using lexical cues. (See, e.g., the dataset analysis in Rajpurkar et al., 2016.) This makes the tasks questionable measures of comprehension. The dominant solution is to incorporate trickier twists.
NarrativeQA (Kočiský et al., 2018) and DuoRC (Saha et al., 2018) reduce lexical similarity between questions and passages by showing annotators only a second passage about the same events. Other datasets emphasize reasoning presumed to be difficult, such as incorporating information from multiple parts of the text. MCTest (Richardson et al., 2013) and MultiRC (Khashabi et al., 2018) ask for questions that rely on multiple sentences; ROPES (Lin et al., 2019) has annotators apply information from one passage to write questions on a second; and HotpotQA (Yang et al., 2018b) and QASC (Khot et al., 2019) require multi-hop reasoning. Other forms of reasoning tested include coreference resolution (Quoref, Dasigi et al., 2019; Winograd Schema Challenge, Levesque et al., 2012), numerical reasoning (DROP, Dua et al., 2019), and commonsense reasoning (Cosmos QA, Huang et al., 2019). Tasks can also be made harder with devices such as unanswerable questions (SQuADRUn, Rajpurkar et al., 2018; NewsQA; CosmosQA) and filtering questions with an adversarial baseline (DROP; Quoref; QASC). These twists do make MRC harder. But to pursue hard questions is to overlook why easy questions seemed inadequate in the first place: MRC tasks are a means to an end, namely useful applications, and easy questions—e.g., questions that depend only on lexical cues—do not suffice for that end. The techniques above may help by guiding annotators to a different space of questions: intuition suggests that some of these harder questions are indeed useful ones. But such techniques are an incomplete solution, as difficulty is a weak proxy for utility. What matters is not the system’s sophistication per se; it is the alignment between the questions the system can answer and the ones a given application needs it to. Designing for difficulty still gives little assurance of such alignment. Perhaps a truly random walk through question space would eventually cover a representative set of useful questions, but annotators are biased toward questions that humans find interesting (see Gordon and Van Durme, 2013; Misra et al., 2016; Zhang et al., 2017). They do not think to ask questions whose answers seem obvious, even when those answers are essential to comprehension. If we do not delineate such facts and evaluate systems’ ability to manipulate them, we will never be satisfied that the systems have adequately understood the text. 2.2 Naturally occurring questions A second approach is to find questions “in the wild,” then retrospectively collect documents containing the answers. This is the approach of BoolQ (Clark et al., 2019) and MS MARCO (Nguyen et al., 2016), which compile search engine queries, and of ELI5 (Fan et al., 2019), which harvests questions from Reddit’s “Explain Like I’m Five” forum. Such datasets are clearly useful for answering common queries, a valuable application class in its own right. For more complex applications, however, common queries are, if anything, less thorough than annotators at probing important elements of understanding (particularly aspects humans find obvious). The mismatch between questions and passage content is exacerbated by finding the passages retrospectively: the questions do not even attempt to test most of what each passage discusses, making them an insufficient measure of MRC. 2.3 Questions from tests designed for humans The third strategy is to pull questions from tests written for humans.
Examples include the early “Deep Read” corpus (Hirschman et al., 1999); the more recent TriviaQA (Joshi et al., 2017) and SearchQA (Dunn et al., 2017) datasets, which mine collections of trivia questions; the AI2 Reasoning Challenge (ARC; Clark et al., 2018), which asks questions from standardized science tests; and RACE (Lai et al., 2017), which draws from English learning materials for Chinese school students. Our chief concern about this approach echoes our concerns from §2.1: tests designed for humans rarely bother to test content that most humans find obvious. Accordingly, they gloss over vast swaths of understanding that machines do not yet have but which may be critical to applications. In addition, SearchQA, TriviaQA, and ARC find passages retrospectively, so again, the questions they ask only tangentially graze the content of each passage. 2.4 Automatically generated questions Several projects generate questions algorithmically. The CNN/Daily Mail datasets (Hermann et al., 2015) and ReCoRD (Zhang et al., 2018) produce cloze-style questions over news passages by masking out entities from summaries and below-the-fold sentences. ComplexWebQuestions (CWQ; Talmor and Berant, 2018) and WikiHop (Welbl et al., 2018) test for multi-hop reasoning by walking a structured knowledge base. Finally, bAbI (Weston et al., 2016) generates short texts and questions from a simple simulation of characters moving around. Each algorithm encodes assumptions about what is worth asking. In theory, then, the algorithmic approach could produce a satisfying MRC test: given appropriate inputs, the algorithm could aim to generate questions that cover important content. Indeed, our proposal in §4.1 can be seen as a question generation algorithm to be run by humans. In practice, however, algorithmic approaches have de-emphasized content. CNN/Daily Mail and ReCoRD capture explicit assertions about maskable entities, which do not amount to a principled body of content. The algorithms behind CWQ and WikiHop at least take as input some body of content, namely knowledge graphs. But the graphs include only a fraction—again, not a principled one—of the associated documents’ content, and the questions are further restricted to rely on multihop reasoning. Multi-hop reasoning is no doubt a major error source for MRC, but applications are driven by what propositions must be extracted; whether each proposition takes zero inference steps or seven is immaterial. Accordingly, multi-hop questions are worth investigating, but they are not a sufficiently well-motivated body of content to constitute a measure of reading comprehension. Similar remarks can be made about most of bAbI’s 20 “tasks”: grounded in simulations, their question generation algorithms start from known content, but target forms of reasoning. However, the tasks concerning time, positions, sizes, pathfinding, and motivations are closer to our content-first question generation strategy. These tasks are not driven by applications, and their synthetic passages are unrealistically simple, but among existing datasets, they are closest to our proposal. 2.5 Summary: What is missing The most clear-cut way to test reading comprehension would be to select passages, describe what should be comprehended from them, and design tests for that understanding. Yet few MRC datasets have even approximated this approach. 
Many impose little structure on what content is tested; the rest pick some "difficult" form(s) of analysis or linguistic phenomena, but rarely consider downstream goals to determine what the questions should be about. Metrics for difficult reasoning and linguistic phenomena (see, e.g., Gardner et al., 2019) are useful, but only as tools for error analysis and mitigation; they are not top-line performance metrics. In addition, many datasets to date suffer from two other problems: 1) they select passages after the questions are asked, meaning the questions test comprehension of only small portions of the passages; and/or 2) they ask very few questions whose answers are obvious to humans.

These issues of content scope also intersect with issues of format. Many tasks have adopted a span extraction format, including TREC QA, NewsQA, and (most notably) SQuAD and its successors. This format immediately rules out questions about inferred events or entities, which may be essential to a complete interpretation. The main alternative is multiple choice (MC), used in tasks such as Cosmos QA, RACE, ARC, WikiHop, and every task in GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a). But MC has its own problem: the answer choices themselves provide extra hints. We will return to the format issue in §4. But first, we propose a more systematic approach to constructing MRC datasets.

3 Defining deep story understanding

Our approach starts from the content of a passage, which we define as the information it expresses, implies, or relies on. Specifically, we propose that task designers lay out a minimal body of content that MRC systems should demonstrate they understand. Exactly what that content is will vary from passage to passage, of course, but the key is to define a TEMPLATE OF UNDERSTANDING (ToU): a set of question templates that can be filled in with specific events and entities for any given passage. The answers to the fleshed-out questions will constitute a floor of understanding for the passage—a plausible lower bound on what content machines ought to comprehend.

The natural next question is what content the ToU should cover. System needs will vary by application. To advance MRC writ large without limiting ourselves to a single application, we propose selecting a class of texts where one could reasonably predict a priori what content would be useful for applications. In the rest of this section, we endorse fictional narratives as a particularly promising class of texts and propose a ToU for them.3

3.1 The case for stories

Stories have several convenient properties that recommend them as a testbed for MRC. Most importantly, applications that involve comprehending stories are numerous and diverse. Consider a legal aid tool: to assess whether a lawsuit may be warranted, it would have to comprehend an account of the events in question. Likewise, a tool that finds candidates for medical trials would need to read each patient history. (Appendix A fleshes out these scenarios.) These examples are not exceptional; applications in other domains will depend on stories in customer complaints, intelligence dispatches, financial news, and many other document types. Humans tend to think and communicate in terms of stories (see, e.g., Haidt, 2013; Mateas and Sengers, 1999; Bruner, 1991; Eck, 2006), so it is unsurprising that stories are ubiquitous in the content we want NLU tools to help us with.
Additionally, stories come with a strong prior from cognitive science about what elements of understanding will be useful. Research on human reading comprehension (e.g., Graesser et al., 1994; Zwaan et al., 1995) suggests that humans attend primarily to the timeline of events, to the locations of entities and events, and to the causes and motivations of events and actions. For applications that involve story comprehension, we can expect that machines will need to understand these same dimensions. We can thus design a principled ToU for stories even without specifying an application.

Stories' content also makes them a particularly compelling demonstration of understanding, for two reasons. First, cognitive science suggests that humans make more inferences when reading narrative text than expository text (Graesser et al., 1994). In particular, a story entails a highly structured network of relations (timelines, causality, etc.). Thus, stories do exercise abilities beyond simple factoid extraction. Second, stories rely on a large body of implicit world knowledge. If a system is able to use and express that knowledge when reading stories, it will likely be able to apply the same knowledge even when comprehending other kinds of texts.

Among stories, fictional ones offer the strongest test of comprehension: their contents cannot be found in corpora, so systems must rely on comprehending the text (Richardson et al., 2013). Accordingly, we suggest using fictional narratives as the basis for developing a ToU and evaluating MRC.

3To be clear, we are not claiming that fictional narratives are themselves an application; only that they are a class of texts that are useful for many applications.

3.2 A ToU for stories

We propose four overlapping clusters of questions for story comprehension, corresponding to the four elements identified by Zwaan et al. (1995) as the ones humans attend to when reading stories. Further support for these questions, particularly the last two, comes from early work in computational story understanding: Schank and Abelson (1977) identify causal chains, plans, and goals as crucial elements of understanding multi-sentence stories.

1. Spatial: Where are entities positioned over time, relative to landmarks and each other? How are they physically oriented? And where do events take place?
2. Temporal: What events and sub-events occur, and in what order? Also, for what blocks of that timeline do entities' states hold true?
3. Causal: How do events and states lead mechanistically to the events and states described or implied by the text?
4. Motivational: How do agents' beliefs, desires, and emotions lead to their actions?

These question templates form the ToU. Systems should ideally be able to answer them about all entities and events that the story mentions or implies (though of course some entities/events are more important than others; see §4.1). We do not have a separate category for "who did what to whom" information, but we expect strong performance on the ToU to hinge on such analysis. In particular, much of this information is captured in the characterization of events for temporal questions.

Of course, these four facets do not cover everything one might comprehend. They include nothing about the story's message, or how it resembles other stories, or even most counting questions. The ToU merely provides a lower bound on what is needed. That said, many forms of reasoning (e.g., counting) can be reduced to deterministically manipulating the answers to multiple ToU questions.
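To make the notion of a template concrete, the sketch below shows one way the four question clusters above could be mechanically instantiated for a story, assuming its salient entities and events have already been listed. Everything here is illustrative only: the class, the function, and the toy entity/event lists are hypothetical names of our own, not part of any existing dataset or codebase, and in practice we expect the instantiation to be done by trained annotators (§4.1).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ToUQuestion:
    facet: str    # "spatial", "temporal", "causal", or "motivational"
    text: str

def instantiate_tou(entities: List[str], events: List[str]) -> List[ToUQuestion]:
    """Flesh out the four ToU question clusters for one story's entities and events."""
    questions: List[ToUQuestion] = []
    for event in events:
        questions.append(ToUQuestion("spatial", f"Where does '{event}' take place?"))
        questions.append(ToUQuestion("causal",
            f"What earlier events or states lead mechanistically to '{event}'?"))
        questions.append(ToUQuestion("motivational",
            f"What beliefs, desires, or emotions (if any) lead an agent to '{event}'?"))
    for entity in entities:
        for event in events:
            questions.append(ToUQuestion("spatial",
                f"Where is '{entity}' positioned when '{event}' occurs?"))
    for i, e1 in enumerate(events):
        for e2 in events[i + 1:]:
            questions.append(ToUQuestion("temporal",
                f"Does '{e1}' occur before, during, or after '{e2}'?"))
    return questions

# Toy example: a fragment of the Rover story used in Figure 1.
entities = ["Rover", "the rain"]
events = ["Rover runs out the door", "it starts raining", "Rover runs back inside"]
for q in instantiate_tou(entities, events):
    print(f"[{q.facet:12s}] {q.text}")
```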
4 Towards a story understanding task

Our ToU provides a conceptual framework for stating what a machine should understand from a story. However, there remains the challenge of operationalizing the framework—i.e., of rigorously assessing whether a machine has that understanding. We do not claim to have solved this problem, but in this section we discuss two broad directions for further development: evaluating based on annotated answers to ToU questions and asking untrained humans to rank different answers. These approaches might even be combined to offer complementary perspectives on system performance.

Spatial (sample entries):
• Rover is in the yard from when he runs out the door until he runs inside.
• Rover is in the house from when he runs inside until the end of the story.
Temporal (sample entries):
• Allie arrives just before Rover runs outside.
• Rover barks just before he runs inside.
• It is still raining at the end of the story.
Motivational (sample entry):
• Rover runs inside, rather than staying put, because:
  – If he runs inside, he will be inside, whereas if he does not he will be outside, because:
    * Rover is outside.
    * Running to a place results in being there.
  – If Rover is inside, he will not get rained on, whereas if he is outside he will, because:
    * It is raining.
    * When it is raining, things that are outside tend to get rained on, whereas things inside do not.
  – Rover would prefer not getting rained on to getting rained on, because:
    * Most dogs prefer not to get rained on.
Figure 1: A partial RoU for the following simple story fragment: ...One day, it was raining. When Allie arrived, Rover ran out the door. He barked when he felt the rain. He ran right back inside.

4.1 Approach 1: Annotating ToU answers

One class of approaches starts with trained annotators writing plain-English answers to each ToU question. The annotators are given guidelines for instantiating the ToU on new stories and for making answers detailed and thorough. We call an annotator's answer document a RECORD OF UNDERSTANDING (RoU); see Figure 1 for an example.

Conceptually, answering temporal and spatial questions is straightforward, but the causal and motivational questions require more definition. People accept many kinds of answers to such questions. It is therefore important to clarify what a good answer should include—i.e., what causal or motivational facts an MRC system should comprehend.

We base our account of these questions on the philosophical literature on causality (see Schaffer, 2016) and on the social science literature on what explanations people seek (see Miller, 2019). Following this scholarship, we conceptualize a causal or motivational question as asking what root cause led the event or state from the story to happen rather than some alternative outcome. For example, in a story about Rover the dog, the question of why Rover came inside is taken to mean: Why did Rover come inside, rather than remaining where he was?4

The answer to such a question is a CAUSAL CHAIN tracing from the root cause to the event or state described in the story (see Figure 2 for examples). The links in the chain walk in lockstep through two parallel worlds: the REALIZED WORLD, where the root cause held true and led to the observed outcome; and an ALTERNATIVE WORLD, where the root cause would have been changed and led to some alternative outcome. For mechanistic causation, each link in the chain ends in an event that helped bring about the outcome described in the story.
For example, two mechanistic links from Figure 2a are the plant looks brown (rather than green) because it is unhealthy (rather than healthy) and the plant is unhealthy because it has little light (rather than lots of light). For motivations, the structure is slightly different. Rather than the final link being an event that happened in the story, it is a statement of the agent's preferences (in Figure 2b, Rover would prefer not being rained on to being rained on). The links leading to it are the future causes and effects that the agent imagines will lead from their action to their preferred outcome (e.g., going inside leading to being inside leading to not getting rained on).

The causal chain provides the backbone of an explanation for an event or action, but the full explanation should recursively explain each link (e.g., Rover would prefer not being rained on to being rained on). Recursive explanations appeal to some combination of general knowledge about the world (e.g., Most dogs prefer not to get rained on) and story-specific SUPPORTING FACTS—e.g., the fact that Rover is outside. Supporting facts generally need to be recursively explained, as well.

4Causality as contrast may seem unintuitive, particularly since "why" questions tend not to state a contrasting outcome. But the audience generally just infers a reasonable default. Beyond its support in the literature, contrast offers several advantages. It makes it far easier to match intuitions about what should factor into a causal explanation. It also naturally handles relative preferences, and allows explaining multiple aspects of an event—e.g., John walking carefully can be explained in contrast to both staying put and walking normally.

Even with guidelines, different annotators may give substantively different answers. In particular, they may drill down to different levels of detail in a causal chain before bottoming out in general knowledge—e.g., rather than stopping at dogs disliking rain, one annotator might explain that Rover disprefers rain because he dislikes getting wet, which in turn is because dogs often dislike getting wet. To handle such disagreements, we can adopt the pyramid method (Nenkova and Passonneau, 2004) from abstractive summarization, another task where annotators may provide different but equally sensible ground truths. Under this method, a reconciler merges RoUs into a single rubric by identifying shared content "nuggets" (e.g., that it is raining) and weighting each by how many annotators cited it. (See Voorhees [2004] for more on nuggets.)

4.1.1 Preliminary notes on RoU agreement

We conducted a small pilot study on RoU annotation: with the help of 5 annotators, we iteratively crafted guidelines and tested them on 12 stories. Here we share some initial qualitative observations.

For spatial annotations, agreement improved when annotators first drew a simple sketch of each scene, then translated their sketches into statements. This process seemed to help annotators notice implicit spatial facts. Some annotators also reported that sketches lowered the cognitive burden.

For temporal annotations, annotators generally agreed on what events took place and the temporal relations between them. Disagreements stemmed mainly from choices of which implicit occurrences to annotate. We are exploring ways to promote consistency, including having annotators draw timelines to call attention to missing events. We are also looking to incorporate prior art (e.g., TimeML; Pustejovsky et al., 2003) into our guidelines.
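Before turning to the causal and motivational annotations, it may help to picture concretely the structure the guidelines ask for there. The sketch below encodes the motivational causal chain of Figure 2b in a simple data structure, with each link pairing a realized-world statement with its alternative-world contrast and carrying the general rules and supporting facts that justify it. The class and field names are hypothetical, chosen purely for illustration; actual RoUs are written in plain English rather than in any such format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CausalLink:
    """One step of a causal chain, contrasting the realized and alternative worlds."""
    realized: str        # what held or happened in the story
    alternative: str     # what would have held under the contrasting root cause
    general_rules: List[str] = field(default_factory=list)    # world knowledge supporting the step
    supporting_facts: List[str] = field(default_factory=list)  # story-specific facts supporting the step

@dataclass
class CausalChain:
    """An answer to a causal or motivational ToU question, as in Figure 2."""
    question: str
    links: List[CausalLink]

# The motivational chain of Figure 2b, expressed in this hypothetical structure.
rover_chain = CausalChain(
    question="Why did Rover run back inside (rather than staying put)?",
    links=[
        CausalLink("Rover runs in", "Rover stays put"),
        CausalLink("Rover is inside", "Rover is outside",
                   general_rules=["Running to a place results in being there."],
                   supporting_facts=["Rover is outside."]),
        CausalLink("Rover does not get rained on", "Rover gets rained on",
                   general_rules=["When it is raining, things that are outside tend to "
                                  "get rained on, whereas things inside do not."],
                   supporting_facts=["It is raining."]),
        CausalLink("Rover is more satisfied", "Rover is less satisfied",
                   general_rules=["Most dogs prefer not to get rained on."]),
    ],
)

for link in rover_chain.links:
    print(f"{link.realized}  vs.  {link.alternative}")
```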
On causal and motivational questions, we were pleasantly surprised by the conceptual consistency between annotators. Annotators appealed to similar causal assertions, even bottoming out in similarly detailed general rules. What was less consistent was structure—how causal chains were carved into links and how bullets were nested. Annotators also occasionally omitted self-evident general rules or supporting facts. We are optimistic that both issues can be improved by more examples and training. As expected, annotators occasionally differed on which causal contrasts to include. Such borderline judgments of salience may be inevitable, and seem to warrant use of the pyramid method.

(a) A mechanistic causal chain for the question, "Why did the plant turn brown?"
    Realized world:    the plant is in the bedroom → the plant has insufficient light → the plant is unhealthy → the plant is brown
    Alternative world: the plant is somewhere well-lit → the plant has sufficient light → the plant is healthy → the plant is green
(b) A motivational causal chain for the question, "Why did Rover the dog run back inside when it started raining?"
    Realized world:    Rover runs in → Rover is inside → Rover does not get rained on → Rover is more satisfied
    Alternative world: Rover stays put → Rover is outside → Rover gets rained on → Rover is less satisfied
Figure 2: Example causal chains answering causal (above) and motivational (below) ToU questions.

4.1.2 Free-text evaluation

It is difficult to evaluate a system directly on an RoU or a rubric, as they are written in plain English. One option is to pose broad ToU questions (e.g., "What events happened and in what order?") and then to automatically compare systems' full free-text answers to annotators'. But this would require an automated comparison metric, and existing metrics such as ROUGE and BLEU are concerned only with lexical similarity. Their correlation with humans' quality judgments is substantial but not stellar (Callison-Burch et al., 2006), and high scores do not always indicate good answers in MRC (see Yang et al., 2018a; Nema and Khapra, 2018). Superficial similarity measures may prove particularly weak given how open-ended ToU questions are.

Alternatively, human evaluators could read both the RoU-derived rubric and the system output and decide whether the output adequately covers each nugget from the rubric. This is how the pyramid method is typically applied in summarization.

Still a third possibility is to have human evaluators ask targeted questions about each nugget from the rubric. The evaluators could then judge whether the system's shorter free-text answers reflect a consistent understanding of that nugget. Such evaluation would be especially powerful if the evaluators knew the NLP systems' typical shortcuts and could reword a given question accordingly: a suspicious evaluator could query for the same fact in multiple ways to verify that the system consistently gets it right. This would make results more satisfying than many MRC evaluations, as systems couldn't rely on terse answers being interpreted charitably.

Of course, using humans for the final evaluation is expensive, even if automated metrics are used during model development. Human evaluators also add variability and subjectivity, as they may probe differently for the same knowledge or find a given answer more or less convincing.
Still, new tasks often start with human evaluation while the community fine-tunes what is worth measuring, and only later progress to automated metrics that approximate human judgment. Such were the trajectories of topic model coherence (see Lau et al., 2014), summarization (see Yang et al., 2016), and machine translation (see Papineni et al., 2002), so it is a plausible pathway for RoU evaluation, too.

4.1.3 Thorough multiple-choice evaluation

Free-response is a compelling format that is tricky to evaluate. Multiple-choice inverts the trade-off: it is less compelling, but much easier to evaluate. With the help of the ToU, a multiple-choice (MC) test can be fairly comprehensive. Question writers would first write out RoUs for a story, and perhaps reconcile them into a weighted rubric. They would then write MC questions targeting each nugget in the rubric: What goal is Rover pursuing by running inside rather than staying put? Where was Rover after he ran through the door? How were Rover, the house, and the rain positioned at the end of the story? Etc.

Such a thorough MC test based on RoUs would be a step up from current tasks. The downside of an MC task is that, though easy to evaluate, it would be questionable as a measure of comprehension. All MC tasks suffer from the same lack of naturalness: questions do not normally come with candidate answers, and ranking candidates is simply easier than the tasks MRC should ultimately support. Furthermore, systems learn to exploit incidental surface features in the question, sometimes performing well even without seeing the passage (Kaushik and Lipton, 2018). When humans take MC tests, we can make strong assumptions about what they must know or do to succeed; an NLP system offers no such assurances. In the long run, then, we do not see multiple choice as an adequate format for demonstrating MRC. Still, such tests offer some leverage for progress in the short term.

4.2 Approach 2: Competing to satisfy judges

The RoU guidelines put a stake in the ground as to how ToU questions should be answered. But as noted above, ToU questions, particularly "why" questions, admit many good answers. The ones canonicalized by the guidelines and by annotators following them may not always be the most useful. Consequently, it may prove beneficial to appeal directly to human intuition about what understanding entails.

We have assumed that what lets humans perform story-related tasks is that they possess some internal answers to the ToU. If we further assume that humans can be led to favor machine answers that resemble their own internal ones, then humans should make good judges of answer quality even without the guidance of RoUs. Accordingly, we could let humans judge systems' full free-text answers based only on intuitive preferences. Evaluators could still be guided to ask ToU questions thoroughly, but extensive guidelines would not be needed: neither asking questions nor recognizing good answers demands nearly as much specification as stating canonical answers.

Whereas the approaches in §4.1 must strive for replicability in humans' answers, this approach seeks replicability only in humans' judgments of answers. We suggest two ways to achieve this. First, in the absence of a rubric, we suspect that answers would best be judged via pairwise comparisons.
For free-text writing, humans generally find comparative assessment easier than absolute scoring (Pollitt, 2012), and comparison is already used to evaluate natural-language generation (see, e.g., Yatskar et al., 2014). Comparisons also mitigate the difficulty of spotting errors of omission: when evaluators see an incomplete answer in isolation, they may gloss over or mentally fill in what was left unsaid. Comparing against a more complete competing answer makes it easier to notice gaps.

Second, evaluators can be guided to tease apart their judgments into several desirable dimensions of explanations—e.g., accuracy, depth, and coherence—just as is often done for natural language generation. Pilot studies would be required to refine the dimensions and their specifications.

5 Current MRC systems do not comprehend stories

If current systems performed well on the ToU, our argument would be moot. This section presents evidence that they do not.

5.1 Data and experimental setup

To test existing systems, the questions must be presented in a form the systems can handle. Many systems were designed for span extraction, but the ToU does not lend itself to answering with text spans. Instead, we report on experiments with a pilot version of the MC task described in §4.1.3.

To construct the test, we selected the first two narrative stories in the dev set of RACE (Lai et al., 2017). Based on our preliminary annotation guidelines, one annotator read both stories, drafted an RoU for each, and wrote a question for each statement in the rough RoUs. The annotator then collaborated with several others to write distractor answers, each characterized by one or more of the following: small surface variations on the correct answer that change the meaning; language from the passage, especially words that appear near words from the question; and language that might plausibly collocate with words from the question.

As an additional test for robustness, questions came in "variant groups": each question was paired with a variant, or occasionally more than one, that asks for the same information in a different way (see Figure 3). The distractors were often altered as well. We then evaluated accuracy in two ways: counting each question independently and counting each variant group as one unit. In the latter method, the group is marked correct only if all of its variants were answered correctly. This simulates a suspicious evaluator re-asking the question and deducting points if the model does not consistently exhibit the desired understanding.
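A minimal sketch of these two scoring schemes appears below, assuming predictions and gold answers keyed by question ID plus a mapping from each question to its variant group; the function and variable names are hypothetical, for illustration only.

```python
from collections import defaultdict
from typing import Dict

def accuracy_scores(
    predictions: Dict[str, str],    # question id -> predicted option
    gold: Dict[str, str],           # question id -> correct option
    variant_group: Dict[str, str],  # question id -> variant-group id
) -> Dict[str, float]:
    """Compute per-question accuracy and per-variant-group accuracy.

    A variant group counts as correct only if every question in it is answered
    correctly, simulating a suspicious evaluator who re-asks the same question
    in a different form and deducts points for inconsistency.
    """
    per_question_correct = {qid: predictions.get(qid) == answer for qid, answer in gold.items()}

    group_correct = defaultdict(lambda: True)
    for qid, correct in per_question_correct.items():
        group_correct[variant_group[qid]] &= correct

    return {
        "per_question": sum(per_question_correct.values()) / len(gold),
        "per_variant_group": sum(group_correct.values()) / len(group_correct),
    }

# Toy example with two variant groups of two questions each.
gold = {"q1": "A", "q1v": "B", "q2": "C", "q2v": "D"}
groups = {"q1": "g1", "q1v": "g1", "q2": "g2", "q2v": "g2"}
preds = {"q1": "A", "q1v": "B", "q2": "C", "q2v": "A"}  # inconsistent on group g2
print(accuracy_scores(preds, gold, groups))
# {'per_question': 0.75, 'per_variant_group': 0.5}
```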
The resulting dataset contains a total of 201 questions (98 variant groups). 29% are spatial or temporal; the remaining 71% are causal or motivational. The questions average 5.1 options, with a minimum of 4. (Including many distractors somewhat mitigates the weaknesses of the MC format.) All questions are included in the supplementary materials; Appendix B shows many examples.

Q) What actually happened when Mr. Green and the man drove together?
A) They came to a small house.
B) They came to a hotel.
C) They traveled around the country.
D) They stopped several times at the side of the road.
Q') How did the man's directions actually turn out?
A) The directions the man gave led to where the man wanted to go.
B) The directions the man gave led to where Mr. Green wanted to go.
C) The directions Mr. Green gave led to where the man wanted to go.
D) The directions Mr. Green gave led to where Mr. Green wanted to go.
Figure 3: An example variant group from our ToU-based questions; correct answers in italics. In the associated RACE story, a man tricks Mr. Green into driving him home under the pretense of guiding Mr. Green to a hotel. See Appendix B for the full story text.

For validation, the questions were presented to two colleagues with non-technical degrees. They scored 96% and 91% (measured on variant groups), suggesting that motivated, well-educated humans have little trouble with our questions.

Finally, we put the questions to XLNet (Yang et al., 2019),5 a large, transformer-based language model trained with generalized autoregression on BooksCorpus and English Wikipedia. After fine-tuning, the model achieves 81.75% on the original RACE task (within 5 points of the best non-ensemble model at the time of the experiments).

5For questions with more than four answers, we split the answers across multiple sub-questions, all of whose answer sets contained the correct answer. We counted the question correct only if that answer was chosen across all answer sets. Chance performance was adjusted accordingly.

5.2 Results and Discussion

Our results (Table 1) show that XLNet performs poorly. On individual questions, it scores just 37%, closing less than a third of the gap between chance and human performance. This strongly suggests that whatever XLNet is doing, it is not learning the ToU's crucial elements of world understanding. Furthermore, the system's performance is brittle, with many correct answers attributable to luck and/or unreliable cues: when moving from questions to variant groups, human performance falls just 3 points. XLNet's performance, on the other hand, falls 17 points, which leaves the system closing just 18% of the chance-vs.-human gap.

                     All    Spatial + Temporal    Causal + Motivational
Per question         37%    33%                   38%
  Chance             15%    20%                   13%
  Human (avg.)       96%    93%                   97%
Per variant group    20%    14%                   23%
  Chance             4%     5%                    5%
  Human (avg.)       93%    90%                   95%
Table 1: XLNet accuracy on our ToU-based questions, overall and by question type.

Although we tested only XLNet, all the other models that currently dominate the leaderboards are similar pre-trained language models; none has any distinguishing characteristic that might be expected to produce dramatically better results on our dataset. Likewise, no existing dataset is so much more systematic than RACE that fine-tuning on it should dramatically improve results on our dataset. Especially given that multiple-choice tests are artificially easy for systems (see §4.1.3), our pilot experiment offers strong evidence that existing MRC systems do not succeed on the ToU.

6 Taking the ToU idea forward

Our ToU for stories is a first attempt at defining what MRC systems should comprehend in a principled, systematic way. Drawing on work in psychology, philosophy, and pedagogy, we have argued for the ToU as a minimal standard and a valuable target for MRC. We have also shown it to be beyond the reach of current systems.

We therefore suggest that the NLP community further build on our ToU. This includes refining and perhaps expanding the questions; better defining the answers and evaluation procedures; building MRC corpora based on the ToU; and developing better-performing systems. We ourselves are working on all four, and we welcome collaboration.

But even beyond our ToU, the broader point stands: existing MRC approaches are not satisfactorily testing for a systematic set of content. Our efforts demonstrate that it is possible, with a sufficiently interdisciplinary approach, to define a plausible floor for comprehension for a given class of applications.
If MRC is to achieve its ultimate goals, we—the NLP community—owe it to ourselves to ensure that our reading comprehension tests actually test for the comprehension we desire. 7848 References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Jerome Bruner. 1991. The narrative construction of reality. Critical Inquiry, 18(1):1–21. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/ Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358–2367, Berlin, Germany. Association for Computational Linguistics. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, arXiv:1803.05457v1 [cs.AI]. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW’05, pages 177–190, Berlin, Heidelberg. Springer-Verlag. Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5927–5934, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. 
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G¨uney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. CoRR, arXiv:1704.05179v3 [cs.CL]. Michael George Dyer. 1982. In-depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. Ph.D. thesis, Yale University, New Haven, CT, USA. AAI8220836. Jill Eck. 2006. An analysis of the effectiveness of storytelling with adult learners in supervisory management. Master’s thesis, University of Wisconsin– Stout, Menomonie, WI. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. 2019. On making reading comprehension more comprehensive. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 105–112, Hong Kong, China. Association for Computational Linguistics. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC ’13, pages 25–30, New York, NY, USA. ACM. Arthur C Graesser, Murray Singer, and Tom Trabasso. 1994. Constructing inferences during narrative text comprehension. Psychological Review, 101(3):371. Jonathan Haidt. 2013. The Righteous Mind: Why Good People are Divided by Politics and Religion, chapter 12. Vintage Books. 7849 Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 1693–1701, Cambridge, MA, USA. MIT Press. Lynette Hirschman, Marc Light, Eric Breck, and John D. Burger. 1999. Deep Read: A reading comprehension system. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 325–332, College Park, Maryland, USA. Association for Computational Linguistics. Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391–2401, Hong Kong, China. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Hyuckchul Jung, James Allen, Nate Blaylock, Will de Beaumont, Lucian Galescu, and Mary Swift. 2011. Building timelines from narrative clinical records: Initial results based on deep natural language understanding. In Proceedings of BioNLP 2011 Workshop, BioNLP ’11, pages 146– 154, Stroudsburg, PA, USA. Association for Computational Linguistics. Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? A critical investigation of popular benchmarks. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010–5015, Brussels, Belgium. Association for Computational Linguistics. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262, New Orleans, Louisiana. Association for Computational Linguistics. Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2019. QASC: A dataset for question answering via sentence composition. CoRR, arXiv:1910.11473 [cs.CL]. Tom´aˇs Koˇcisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317– 328. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785–794, Copenhagen, Denmark. Association for Computational Linguistics. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539, Gothenburg, Sweden. Association for Computational Linguistics. Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd Schema Challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR’12, pages 552–561. AAAI Press. Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 58– 62, Hong Kong, China. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, arXiv:1907.11692v1 [cs.CL]. Michael Mateas and Phoebe Sengers. 1999. Narrative intelligence. In Narrative Intelligence: Papers from the 1999 Fall Symposium, pages 1–10, Stanford, CA, USA. Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1–38. Ishan Misra, C Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing through the human reporting bias: Visual classifiers from noisy humancentric labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2930–2939. Preksha Nema and Mitesh M. Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3950–3959, Brussels, Belgium. Association for Computational Linguistics. 7850 Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. 
In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145–152, Boston, Massachusetts, USA. Association for Computational Linguistics. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR, arXiv:1611.09268v3 [cs.CL]. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Alastair Pollitt. 2012. Comparative judgement for assessment. International Journal of Technology and Design Education, 22(2):157–170. James Pustejovsky, Jos´e M Casta˜no, Robert Ingria, Roser Saur´ı, Robert J Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R Radev. 2003. TimeML: Robust specification of event and temporal expressions in text. In New directions in question answering: Papers from the AAAI Spring Symposium, volume 3, pages 28–34, Stanford, CA, USA. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203, Seattle, Washington, USA. Association for Computational Linguistics. Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan. 2018. DuoRC: Towards complex language understanding with paraphrased reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683–1693, Melbourne, Australia. Association for Computational Linguistics. Jonathan Schaffer. 2016. The metaphysics of causation. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, fall 2016 edition. Metaphysics Research Lab, Stanford University. Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, NJ. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641–651, New Orleans, Louisiana. Association for Computational Linguistics. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics. Ellen M. Voorhees. 2004. 
Overview of the TREC 2003 question answering track. In Proceedings of the Twelfth Text REtrieval Conference (TREC 2003), pages 54–68. National Institute of Standards and Technology. Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’00, pages 200–207, New York, NY, USA. ACM. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 3266–3280. Curran Associates, Inc. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, USA. OpenReview.net. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287–302. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards AI-complete question answering: A set of prerequisite toy tasks. In 7851 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. An Yang, Kai Liu, Jing Liu, Yajuan Lyu, and Sujian Li. 2018a. Adaptations of ROUGE and BLEU to better evaluate machine reading comprehension task. In Proceedings of the Workshop on Machine Reading for Question Answering, pages 98–104, Melbourne, Australia. Association for Computational Linguistics. Qian Yang, Rebecca J Passonneau, and Gerard De Melo. 2016. PEAK: Pyramid evaluation via automated knowledge extraction. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2673–2680. AAAI Press. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753– 5763. Curran Associates, Inc. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018b. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics. Mark Yatskar, Michel Galley, Lucy Vanderwende, and Luke Zettlemoyer. 2014. See no evil, say no evil: Description generation from densely labeled images. In Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014), pages 110–120, Dublin, Ireland. 
Association for Computational Linguistics and Dublin City University. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. CoRR, arXiv:1810.12885v1 [cs.CL]. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Transactions of the Association for Computational Linguistics, 5:379–395. Rolf A. Zwaan, Mark C. Langston, and Arthur C. Graesser. 1995. The construction of situation models in narrative comprehension: An event-indexing model. Psychological Science, 6(5):292–297. 7852 A Examples of applying the ToU to stories for applications In the main text (§3.1), we suggested that many advanced applications hinge on understanding the elements captured by our ToU for stories. Here we offer several examples from two domains. A.1 Law For the foreseeable future, legal decision-making will be the province of lawyers, not AI. However, one plausible use for MRC in a legal setting is as a screening tool for helping non-lawyers determine whether a case has enough merit to bother bringing in a lawyer. For example, consider the first-person narrative below (fictional, but based on an amalgam of several real news stories): My property borders on public lands where hunting is allowed. Last month, a hunter tracked a buck onto my property. He claims he didn’t see my boundary sign. He ended up stepping up onto the remains of an old stone wall, which crumbled, and he broke his wrist. Now he’s saying I can give him $10K now and he’ll walk away, or else he’s going to sue me for much more. Before contracting a lawyer, the property owner may want to assess whether there is any merit to the threat. On the other side of the deal, a law firm that offers free initial consultations may wish to avoid wasting time on cases that are clear non-starters. A second legal application for NLU tools might be helping a lawyer search for precedents. For instance, a tool could help with the narrative above (or perhaps a third-person version of it) by looking for cases with similar elements—e.g., an accidental trespass resulting in injury. To assist in such application scenarios, a system would of course need information about legal codes. But it would also have to understand what happened in the cases it is trying to analyze. To that end, the answers to ToU questions would be essential, as demonstrated in Table 2. The table shows ToU questions and answers that would be key to understanding the landowner’s situation. (These questions are ones the system would answer for itself while reading, not necessarily questions it would be asked by a user.) A.2 Medicine Medicine also offers ample opportunity for an MRC system competent in the narrative ToU to assist doctors and researchers. Narratives pervade electronic health records in the form of doctors’ notes, which record information ranging from patient history to detailed descriptions of surgical procedures. One narrative-based medical application is helping doctors understand a prior doctor’s rationale. Currently, doctors often spend time sifting through a patient’s records to understand why a prior doctor made a certain decision. The reasoning is often explained, but many documents must be searched to find the relevant note. For example, consider the real medical note below,6 recorded after a routine follow-up appointment following breast cancer treatment: She underwent radiation treatment ending in May 2008. 
She then started on Arimidex, but unfortunately she did not tolerate the Arimidex and I changed her to Femara. She also did not tolerate the Femara and I changed it to tamoxifen. She did not tolerate the tamoxifen and therefore when I saw her on 11/23/09, she decided that she would take no further antiestrogen therapy. She met with me again on 02/22/10, and decided she wants to rechallenge herself with tamoxifen. When I saw her on 04/28/10, she was really doing quite well with tamoxifen. She tells me 2 weeks after that visit, she developed toxicity from the tamoxifen and therefore stopped it herself. She is not going take to any further tamoxifen. A future doctor may wonder why the patient is not on hormone therapy, which would be standard procedure. This explanatory note may be hard to find amongst the many notes in the patient’s record. A second medical application is finding patients who qualify for medical trials. For instance, a pharmaceutical company might develop a new anti-estrogen drug that they believe has milder side effects. They would then want to find patients who had already tried several anti-estrogen drugs, 6Quoted from https://www.mtsamples.com/ site/pages/sample.asp?Type=96-&Sample= 1939-Breast%20Cancer%20Followup%20-%201 7853 Question type ToU question Example (partial) answer to ToU question Significance to legal application Spatial Where was the hunter when he broke his wrist? On the landowner’s property. The locations of events are legally relevant in many ways. For one, property owners may be held liable for injuries that occur on their property. Additionally, however, property owners may not be liable for injuries suffered by trespassers. Spatial Where was the boundary sign? On the boundary between the public lands and the writer’s property. The presence of a sign may shield the landowner from responsibility, but recognizing that means understanding that it would mark the boundary between the two properties. Temporal When did the stone wall fall into disrepair? Sometime before the story started. How long the wall has been in disrepair may be legally relevant. Since the exact timing was not given, the system might flag this question for further clarification. Temporal Has the hunter sued? No, although he may do so in the future. If the hunter had already sued, the landowner might need representation whether or not the suit had merit. Causal Why did the hunter break his wrist (rather than his wrist remaining intact)? Because he stepped onto the wall (rather than stepping elsewhere), which led to him falling (rather than remaining upright, because the wall was in disrepair rather than better condition), which led to him breaking his wrist (rather than his wrist remaining intact). The wall’s disrepair was allegedly an important causal factor in the injury, making it more plausible that the landowner could be held responsible. Motivational Why did the hunter claim he didn’t see a sign (rather than saying nothing of signs)? He would prefer that others believe that he entered the property unwittingly (rather than deliberately), either because he in fact enter unwittingly or because he would like to deny his deliberate violation. He believes that if he says he did not see a sign, others will be more likely to believe this (whereas if he says nothing, they may assume he saw the sign). 
The hunter’s claim of unwitting entry could be motivated either by true innocence or by deception, which affects whether it should be believed—and unwitting entry may be treated differently by the law. The system may want to flag this claim for follow-up questions about its plausibility. Causal Why did the hunter enter the private property (rather than stopping at the boundary)? Possibly because the hunter didn’t see the sign (rather than seeing it), so he remained unaware he was crossing the boundary (rather than realizing he was). There may be a mechanistic (non-motivational) explanation for why the hunter did not stop at the boundary, and again, unintentional entry may be legally different. Also, the landowner may have been responsible for posting signs that would keep people away from his property if there were any hazards. Motivational (recursive explanation for the end of the previous causal chain) Why might being aware of the boundary have made the hunter stop, whereas being unaware of it (may have) led him to cross it? The hunter likely prefers staying within the law to violating it. If he had known he was at the boundary of private property, he would have known that continuing past the boundary would be illegal trespass, but not knowing about the boundary meant he did not know continuing could be trespassing. The hunter suggested that missing the sign led to accidentally entering the property, but that claim hinges on the assumption that had he known about the property line, he would have respected it. That may be a challengeable assumption. Motivational Why did the hunter threaten to sue, rather than suing immediately? The hunter would prefer to get less money than to possibly get more money but experience the hassle of a lawsuit and risk getting nothing. He believed that if he threatened, the property owner might be afraid of losing more money and give him the $10,000 (whereas if the hunter sued immediately he would have no chance to avoid the hassle and risk). It is possible that the very act of extorting money via a threat of a lawsuit has legal implications. Also, this action by the hunter may indicate that he considers the risk of losing the case high or that he is otherwise reluctant to pursue a lawsuit, which may affect what course of action the landowner ultimately wants to take. Table 2: Example ToU questions and answers for a legal application. 7854 Question type ToU question Example (partial) answer to ToU question Significance to medical application Temporal When did the patient start and stop taking tamoxifen? Multiple times: She started taking it sometime after May 2008 and stopped taking it by 11/23/09. Then, she started taking it again on 02/22/10, and stopped taking it by mid-May 2010. A clinical trial may be seeking patients who kept stopping and starting a specific drug. It may also be important how long the side effects took to develop. Also note that if the question of interest is really a counting question (“how many times”), this relies most of all on an underlying temporal understanding like the one captured by the ToU. Causal/ Motivational Why is the patient not taking an antiestrogen drug (rather than taking one)? She was taking Arimidex, and it caused strong side effects (rather than her having mild or no side effects). Preferring fewer side effects, she therefore tried Femara (rather than continuing with Arimidex). 
Femara also caused side effects, so for the same reasons as before, she tried switching to tamoxifen (rather than continuing the Femara), but it also caused side effects. The patient preferred not experiencing the side effects to having the medical benefits of the drugs, so she decided not to take any such drug (rather than continuing with one of the above). A future doctor may expect the patient to be on an anti-estrogen drug, as that is standard for someone with her history of breast cancer. Understanding that the patient has tried many drugs and decided to stop them may inform the doctor’s course of action. The doctor might proceed differently if he determined that she had stopped for some other reason—e.g., that she simply lapsed in a prescription. Also, a clinical trial may be seeking patients who stopped taking a drug because of side effects. Furthermore, the trial might be seeking specifically patients who stopped taking the drug at the advice of the doctor. Table 3: Example ToU questions and answers for a medical application. perhaps multiple times, and had toxicity problems with all of them. Currently, research hospitals find patients for a given clinical trial by employing humans to read through the hospital’s database of medical notes and determine which patients meet the trial’s criteria. To assist in such application scenarios, an automated system would have to understand medical notes like the one above. In the rationale-finding application, it would have to interpret the note well enough to recognize that it explains the current medical regimen; in the patient-finding application, the system would have to recognize that this patient went on and off of several anti-estrogen drugs because of side effects. Again, understanding the answers to ToU questions would be essential, as demonstrated in Table 3. B Example ToU-based multiple-choice questions on a RACE story B.1 The story Mr. Green was traveling around the country in his car. One evening he was driving along a road and looking for a small hotel when he saw an old man at the side of the road. He stopped his car and said to the old man, “I want to go to the Sun Hotel. Do you know it?” “Yes.” The old man answered. “I’ll show you the way.” He got into Mr. Green’s car and they drove for about twelve miles. When they came to a small house, the old man said, “Stop here.” Mr. Green stopped and looked at the house. “But this isn’t a hotel.” He said to the old man. “No,” the old man answered, “This is my house. And now I’ll show you the way to the Sun Hotel. Turn around and go back nine miles. Then you’ll see the Sun Hotel on the left.” B.2 RACE’s original questions Answers marked correct by RACE are italicized. Q1. Where did Mr. Green want to sleep that night? A) In his car. B) In his own house. C) In a hotel. D) In the old man’s house. Q2. Why did Mr. Green stop his car? A) Because he found a hotel. B) Because the lights were red. C) Because he saw an old man. D) Because he saw a friend. 7855 Q3. Where did the old man promise to take Mr. Green? A) To Mr. Green’s house. B) To the old man’s house. C) To the SunHotel. [sic] D) To the country. Q4. Why didn’t the old man stop Mr. Green when they passed the hotel? A) Because he wanted Mr. Green to sleep in his house. B) Because he wanted to get home. C) Because he didn’t see the hotel. D) Because he didn’t know the hotel. Q5. How far was it from the place where Mr. Green met the old man to the Sun Hotel? A) About nine miles. B) About three miles. C) About twenty-one miles. 
D) About twelve miles. B.3 A sampling of our ToU-based questions Correct answers are italicized. Questions are numbered with the IDs used in our dataset, which is available in this paper’s supplementary data. The first number in each question ID indicates the variant group; the second number is a groupindependent question index. B.3.1 Causal chains The questions below target different parts of causal chains explaining why the agents in the story took the actions that they did. The first five ask about why Mr. Green stopped his car (vs. continuing to drive); the next five ask about why the old man said he would show Mr. Green the way (vs. just giving him directions). Q1-1. Why did Mr. Green stop his car the first time? A) Because if he stopped his car, he could ask the man something. B) Because if he stopped his car, he could make a new friend. C) Because if he stopped his car, the old man could get in. D) Because the directions he asked for said to stop the car. E) Because if he stopped his car, he could drive for about twelve more miles. F) Because he got a flat tire. G) Because he was driving along a road. H) Because he was traveling around the country. I) Because he said, “I want to go to the Sun Hotel”. Q2-3. Why did Mr. Green want to ask the man something? A) Because there was something he didn’t know. B) Because he liked to ask questions. C) Because there was a chance to make a friend. D) Because he didn’t want to drive past the man without helping him. E) Because if he stopped his car, he could drive for about twelve miles. F) Because he got a flat tire. G) Because he was driving along a road. H) Because he said, “I want to go to the Sun Hotel”. Q3-7. Before they spoke at all, what did Mr. Green hope the man would be able to do? A) Tell him where the hotel was. B) Tell him where the small house was. C) Get in his car. D) Drive for about twelve miles. E) Take him to his house. F) Take him to the hotel. G) See an old man. Q4-9. What did Mr. Green hope the conversation with the old man would enable him to do? A) Get where he was going B) Travel around the country C) See what he was seeing D) Stop and look at a house E) Drive with the old man F) Come to a small house G) Turn around and go back 7856 Q5-11. What was Mr. Green trying to do throughout the story? A) To stay at the small hotel B) To drive along a road C) To pass the small hotel D) To come to a small house E) To see the old man F) To stop at the side of the road G) To speak with the old man Q6-12. Why did the old man make his initial offer to Mr. Green? A) The old man was appearing to help Mr. Green while actually tricking him. B) The old man was appearing to trick Mr. Green while actually helping him. C) Mr. Green was appearing to help the old man while actually tricking him. D) Mr. Green was appearing to trick the old man while actually helping him. Q7-14. Why did the old man say he would show Mr. Green the way instead of just giving directions? A) So Mr. Green would let him into his car. B) So Mr. Green would stop his car. C) So Mr. Green would say something to the old man. D) So he could answer Mr. Green. E) So they could go to the hotel. F) So Mr. Green would take him to the hotel. Q10-20. Where did the old man expect he and Mr. Green would drive together to? A) The house B) The Sun Hotel C) The side of the road D) Back nine miles Q11-22. Why did the man want to ride with Mr. Green? A) He wanted to get home. B) He wanted to get to the hotel. C) He wanted to stand at the side of the road. D) He wanted to answer Mr. Green. 
E) He wanted to get into Mr. Green’s car. Q13-26. What is one reason the man’s plan worked? A) Mr. Green wouldn’t know where they were really going. B) Mr. Green wouldn’t know what his name really was. C) Mr. Green wouldn’t know how old he really was. D) He wanted to see the hotel on the left. E) He showed Mr. Green the way to the hotel. B.3.2 General knowledge For causal and motivational questions, an RoU often includes abstract general knowledge. To interrogate these components of understanding, we wrote questions where the answer choices do not mention any of the entities in the story. Below are general knowledge questions that target the same two events as the questions immediately above. While we thought these questions might be especially difficult, XLNet handled them about as well as the causal/motivational questions whose answer choices explicitly mentioned story entities. Q21-44. What is part of the reason why Mr. Green stopped driving when he first saw the man? A) In order to ask someone a question, you have to be close to them. B) In order to get where you’re going, you need to stop your car. C) When you travel around the country, you stop your car. D) When the evening arrives, you drive your car home. E) When you’re looking for a hotel, you often stop your car. F) People often pick up hitchhikers. G) People often stop to help others. Q22-47. Why did Mr. Green think the man on the side of the road might be able to help him? A) Often a person in a given area is familiar with the geography of that area. B) Often a person in a given area gives out useful items. C) Often one person can give a ride to another person. D) Often a person on the side of the road needs help. Q23-48. Why did Mr. Green want to know where the hotel was? A) Getting to a place usually requires knowing where the place is. B) Driving around the country usually requires knowing where you are. C) Talking with a person usually requires seeing where they are. D) Getting into a car usually requires knowing where the car is. Q24-51. Why was Mr. Green seeking the old man’s help in the first place? A) People like to sleep comfortably at night. B) People like to travel in a leisurely manner around the country. C) People like to talk amiably with each other. D) People like to see interesting sights on the road. E) People like to be driven directly to their homes. Q25-52. Why did the old man say he would show Mr. Green the way, the first time? A) People sometimes trick others for their own gain. B) People sometimes trick others in order to help them. C) People sometimes help others for selfless reasons. D) People sometimes help others for selfish reasons. Q26-54. Why did the old man first say he would show Mr. Green the way instead of just giving directions? A) To show someone the way means going along with them whereas giving directions means just telling them information. B) To show someone the way means just giving them information whereas giving directions means going along with them. C) Giving directions is more effective than showing someone the way. D) Giving directions is less effective than showing someone the way. E) Giving directions is more friendly than showing someone the way. F) Giving directions is less friendly than showing someone the way. Q28-58. Why did the old man expect to be able to control the route as he rode with Mr. Green? A) When taking directions, people generally go where they are told to go. B) When taking directions, people usually go somewhere other than where they are told to go.
C) When on vacation, people generally follow their itineraries. D) When driving with strangers, people are generally very careful. E) When going to a small house, people generally ride together. Q29-60. What helps explain why the man wanted to accompany Mr. Green on his drive? A) People usually want to go home at night. B) People usually want to go to a hotel at night. C) People usually want to travel around the country. D) People usually want to drive with each other. Q30-62. Why did the old man trick Mr. Green? A) Being driven home by someone is nice and convenient. B) Traveling around the country with someone is fun and exciting. C) Stopping and looking at someone’s house is interesting and enjoyable. D) Answering someone’s questions is fulfilling and helpful. Q31-64. What is one reason the man’s plan worked? A) If someone is unfamiliar with an area, they won’t realize if they’re going the wrong way. B) If someone is familiar with an area, they won’t realize if they’re going the wrong way. C) If someone is unfamiliar with an area, they will realize if they’re going the wrong way. D) If someone is traveling around the country by car, they will drive an old man’s home. E) If someone wants to go to a hotel, they will go to a small house first. B.3.3 Spatio-temporal questions The questions below target the spatial and temporal information in the story, asking how things were physically arranged at different points in time. 7858 Q37-76. Who was in the car at first? A) Mr. Green B) Both Mr. Green and the old man C) The old man D) Neither Mr. Green nor the old man Q38-78. Who was in the car when Mr. Green drove to the small house? A) Both Mr. Green and the old man B) Mr. Green C) The old man D) Neither Mr. Green nor the old man Q39-80. Who was probably in the car when Mr. Green drove away from the small house? A) Mr. Green B) Both Mr. Green and the old man C) The old man D) Neither Mr. Green nor the old man Q40-82. Who was at the small house at first? A) Neither Mr. Green nor the old man B) Mr. Green C) Both Mr. Green and the old man D) The old man Q41-84. Who was at the small house when Mr. Green arrived there? A) Both Mr. Green and the old man B) Mr. Green C) The old man D) Neither Mr. Green nor the old man Q42-86. Who was likely at the small house a short while after the story ends? A) The old man B) Mr. Green C) Both Mr. Green and the old man D) Neither Mr. Green nor the old man Q53-109. When driving to the old man’s, on which side did they pass the hotel? A) The car passed the hotel on the right side of the road B) The car passed the hotel on the left side of the road C) The car passed the house on the left side of the road D) The car passed the house on the right side of the road Q54-111. How were Mr. Green, the car, the old man, and the window probably situated when Mr. Green stopped to ask the man a question? A) Mr. Green in the car, the window down, the man on the side of the road B) Mr. Green in the car, the window down, the man in the car C) Mr. Green in the car, the window up, the man on the side of the road D) Mr. Green in the car, the window up, the man in the car E) Mr. Green out of the car, the window down, the man in the car F) Mr. Green out of the car, the window up, the man in the car Q55-113. While the two men drove to the old man’s house, how was the scene likely arranged? A) Mr. Green and the man next to each other, in the car B) The man next to Mr. Green next to the car C) The car in the man and Mr. Green D) Mr. 
Green next to the man next to the car E) The man at his house and Mr. Green in the car F) Mr. Green at the hotel and the man at his house G) Mr. Green at his house and the man at the hotel Q56-115. When Mr. Green was actually going the right way at the end, how was the scene likely arranged? A) The man at his house and Mr. Green in the car B) Mr. Green and the man next to each other, in the car C) The man next to Mr. Green next to the car D) The car in the man and Mr. Green E) Mr. Green next to the man next to the car F) Mr. Green at the hotel and the man at his house G) Mr. Green at his house and the man at the hotel B.3.4 More variant groups As described in the paper, for each question we wrote a second version that targeted essentially the same information in a different way. Below are additional examples of such variant groups. 7859 Q19-39. Why could the man still help Mr. Green by showing him the way at the end of the story? A) Mr. Green still didn’t know how to get to the hotel. B) Mr. Green still didn’t know that he was at the man’s house. C) Mr. Green was still looking at the house. D) The old man knew where Mr. Green’s car was. Q19-40. What information was Mr. Green missing that the man provided when he showed him the way the second time? A) Mr. Green didn’t know how to get to the hotel. B) Mr. Green didn’t know that he was at the old man’s house. C) Mr. Green didn’t know who the old man was. D) The old man knew where Mr. Green’s car was. Q46-94. Who was in the car just before Mr. Green met the old man? A) Mr. Green B) Both Mr. Green and the old man C) The old man D) Neither Mr. Green nor the old man Q46-95. Who was in the car when Mr. Green approached the spot where he met the old man? A) Mr. Green B) Both Mr. Green and the old man C) The old man D) Neither Mr. Green nor the old man Q22-45. Why did Mr. Green want to speak to the old man? A) People ask questions when they lack information. B) People are interested in the places they travel. C) People are often very curious. D) Old men at the side of the road sometimes know the future. E) People ask questions before letting people into their cars. F) People interrogate hitchhikers before picking them up. Q22-46. Why did Mr. Green think the old man might be able to help him? A) Sometimes one person has information another person doesn’t. B) Sometimes one person trades a car for another person’s house. C) Sometimes one person gives a ride to another person. D) Sometimes one person on the side of the road gets in another person’s car.
Gender Gap in Natural Language Processing Research: Disparities in Authorship and Citations Saif M. Mohammad National Research Council Canada Ottawa, Canada [email protected] Abstract Disparities in authorship and citations across gender can have substantial adverse consequences not just on the disadvantaged genders, but also on the field of study as a whole. Measuring gender gaps is a crucial step towards addressing them. In this work, we examine female first author percentages and the citations to their papers in Natural Language Processing (1965 to 2019). We determine aggregatelevel statistics using an existing manually curated author–gender list as well as first names strongly associated with a gender. We find that only about 29% of first authors are female and only about 25% of last authors are female. Notably, this percentage has not improved since the mid 2000s. We also show that, on average, female first authors are cited less than male first authors, even when controlling for experience and area of research. Finally, we discuss the ethical considerations involved in automatic demographic analysis. 1 Introduction Gender gaps are quantitative measures of the disparities in social, political, intellectual, cultural, or economic success due to one’s gender. Gender gaps can also refer to disparities in access to resources (such as healthcare and education), which in turn lead to disparities in success. We need to pay attention to gender gaps not only because they are inherently unfair but also because better gender balance leads to higher productivity, better health and well-being, greater economic benefits, better decision making, as well as political and economic stability (Skjelsboek and Smith, 2001; Woetzel et al., 2015; Hakura et al., 2016; Mehta et al., 2017; Gallego and Guti´errez, 2018). Historically, gender has often been considered binary (male and female), immutable (cannot change), and physiological (mapped to biological sex). However, those views have been discredited (Hyde et al., 2019; Richards et al., 2017; Darwin, 2017; Lindsey, 2015; Kessler and McKenna, 1978). Gender is complex, and does not necessarily fall into binary male or female categories (e.g. nonbinary people), and also does not necessarily correspond to one’s assigned gender at birth. Society has often viewed different gender groups differently, imposing unequal social and power structures (Lindsey, 2015). The World Economic Forum’s 2018 Global Gender Gap Report (which examined data from more than 144 countries) highlighted the gender gap between men and women in Artificial Intelligence as particularly alarming (WEF, 2018). It indicated that only 22% of the professionals in AI are women and that this low representation in a transformative field requires urgent action—otherwise, the AI gap has the potential to widen other gender gaps. Other studies have identified substantial gender gaps in science (H˚akanson, 2005; Larivi`ere et al., 2013; King et al., 2017; Andersen and Nielsen, 2018). Perez (2019) discusses, through numerous examples, how there is a considerable lack of disaggregated data for women and how that is directly leading to negative outcomes in all spheres of their lives, including health, income, safety, and the degree to which they succeed in their endeavors. This holds true even more for transgender people. Our work obtains disaggregated data for female Natural Language Processing (NLP) researchers and determines the degree of gender gap between female and male NLP researchers. 
(NLP is an interdisciplinary field that includes scholarly work on language and computation with influences from Artificial Intelligence, Computer Science, Linguistics, Psychology, and Social Sciences to name a few.) We hope future work will explore other gender gaps (e.g., between trans and cis people). Measuring gender gaps is a crucial step towards addressing them. We examine tens of thousands of articles in the ACL Anthology (AA) (a digital repository of public domain NLP articles) for disparities in female authorship.1 We also conduct experiments to determine whether female first authors are cited more or less than male first authors, using citation counts extracted from Google Scholar (GS). We extracted and aligned information from the ACL Anthology and Google Scholar to create a dataset of tens of thousands of NLP papers and their citations as part of a broader project on analyzing NLP Literature.2 We refer to this dataset as the NLP Scholar Dataset. We determined aggregatelevel statistics for female and male researchers in the NLP Scholar dataset using an existing manually curated author–gender list as well as first names that are strongly associated with a gender. Note that attempts to automatically infer gender of individuals can lead to harm (Hamidi et al., 2018). Our work does not aim to infer gender of individual authors. We use name–gender association information to determine aggregate-level statistics for male and female researchers. Further, one may not know most researchers they cite, other than from reading their work. Thus perceived gender (from the name) can lead to unconscious effects, e.g., Dion et al. (2018) show that all male and mixed author teams cite fewer papers by female authors than all female teams. Further, seeing only a small number of female authors cited can demoralize young researchers entering the field. We do not explore the reasons behind gender gaps. However, we will note that the reasons are often complex, intersectional, and difficult to disentangle. We hope that this work will increase awareness of gender gaps and inspire concrete steps to improve inclusiveness and fairness in research. It should also be noted that even though this paper focuses on female–male disparities, there are many aspects to demographic diversity including: representation from transgender people; representation from various nationalities and race; representation by people who speak a diverse set of languages; 1https://www.aclweb.org/anthology/ 2Mohammad (2019) presents an overview of the many research directions pursued, using this data. Notably, Mohammad (2020a) explores questions such as: how well cited are papers of different types (journal articles, conference papers, demo papers, etc.)? how well cited are papers published in different time spans? how well cited are papers from different areas of research within NLP? etc. Mohammad (2020c) presents an interactive visualization tool that allows users to search for relevant related work in the ACL Anthology. diversity by income, age, physical abilities, etc. All of these factors impact the breadth of technologies we create, how useful they are, and whether they reach those that need it most. Resources for the NLP Scholar project can be accessed through the project homepage.3 2 Related Work Pilcher (2017) shows that names function not only to identify individuals, but also to manage gender throughout one’s life. 
There is a strong cultural norm in various parts of the world to assign a first name to newborns as per their category of sex (Pilcher, 2017; Barry III and Harper, 2014; Lieberson et al., 2000; Alford, 1987). Pilcher (2017) argues that, throughout their life, the first name plays a role in repeatedly categorizing a person as being male or female. People may change their name and appearance to manage their gender (Connell, 2010; Pilcher, 2017). People may choose a name that is not associated with male or female categorizations (Connell, 2010). The strong normative tendency to use names to signal gender has led to a large body of work on automatically determining gender by one’s first name, not just for scientometric analysis discussed below, but also for language studies, social sciences, public health, and commerce. However, this can also lead to misgendering, which can cause significant pain and harm (Hamidi et al., 2018). (Misgendering is when a person—or in this case, a machine— associates someone with a gender with which they do not identify.) Further, work that does not explicitly consider gender to be inclusive of trans people can reinforce stereotypes such as the dichotomy of gender. We expect gender disparities to be different depending on the groups being compared: female– male, trans–cis, and so on. Our work does not aim to infer gender of individual authors. We obtain disaggregated statistics for women, specifically, to study the disparities between female and male NLP researchers. We discuss ethical considerations further in Section 6. See also Mihaljevi´c et al. (2019) for a discussion on ethical considerations in using author name to estimate gender statistics in the Gender Gap in Science Project—a large ongoing project tracking gender gaps in Mathematical and Natural Sciences.4 3http://saifmohammad.com/WebPages/nlpscholar.html 4https://gender-gap-in-science.org Most studies on gender and authorship have found substantial gender disparities in favor of male researchers. They include work on ∼1700 articles from journals of library and information science (H˚akanson, 2005), on ∼12 million articles from the Web of Science (for Sociology, Political Science, Economics, Cardiology and Chemistry) (Ghiasi et al., 2016; Andersen and Nielsen, 2018), on ∼2 million mathematics articles (Mihaljevi´c-Brandt et al., 2016), on ∼1.6 million articles from PubMed life science and biomedical research (Mishra et al., 2018), on ∼1.5 million articles from fifty disciplines published in JSTOR (King et al., 2017), and on ∼0.5 million publications from US research universities (Duch et al., 2012). There also exists some work that shows that in fields such as linguistics (LSA, 2017) and psychology (Willyard, 2011), female and male participation is either close to parity or tilted in favor of women. In NLP research, Schluter (2018) showed that there are barriers in the paths of women researchers, delaying their attainment of mentorship status (as estimated through last author position in papers). Anderson et al. (2012) examine papers from 1980 to 2008 to track the ebb and flow of topics within NLP, and the influence of researchers from outside NLP on NLP. Vogel and Jurafsky (2012) examined about 13,000 papers from 1980 to 2008 to determine basic authorship statistics by women and men. Gender statistics were determined by a combination of automatic and manual means. The automatic method relied on lists of baby names from various languages. 
They found that female authorship has been steadily increasing from 1980 to 2008. Our work examines a much larger set of NLP papers (1965–2019), re-examines some of the questions raised in Vogel and Jurafsky (2012), and explores several new questions, especially on first author gender and disparities in citation. 3 Data We extracted and aligned information from the ACL Anthology (AA) and Google Scholar (GS) to create a dataset of tens of thousands of NLP papers and their citations. We aligned the information across AA and GS using the paper title, year of publication, and first author last name. Details about the dataset, as well as an analysis of the volume of research in NLP over the years, are available in Mohammad (2020b). We summarize key information below. 3.1 ACL Anthology Data The ACL Anthology is available through its website and a github repository.5 We extracted paper title, names of authors, year of publication, and venue of publication from the repository.6 As of June 2019, AA had ∼50K entries; however, this includes some entries that are not truly research publications (for example, forewords, prefaces, programs, schedules, indexes, invited talks, appendices, session information, newsletters, lists of proceedings, etc.). After discarding them, we are left with 44,894 papers.7 Inferring Aggregate Gender Statistics: The ACL Anthology does not record author demographic information. To infer aggregate statistics for male and female authors, we create two bins of authors: A-Mname (authors that have self-reported as males or with names commonly associated with males) and A-Fnames (authors that have self-reported as females or with names commonly associated with females).8 We made use of three resources to populate A-Mname and A-Fname: 1. A manually curated list of 11,932 AA authors and their genders provided by Vogel and Jurafsky (2012) (VJ-AA list) (3,359 female and 8,573 male).9 2. A list of 55,924 first names that are strongly associated with females and 30,982 first names that are strongly associated with males, that we generated from the US Social Security Administration’s (USSA) published database of names and genders of newborns.10 3. A list of 26,847 first names that are strongly associated with females and 23,614 first names that are strongly associated with males, that we generated from a list of 9,300,182 5https://www.aclweb.org/anthology/ https://github.com/acl-org/acl-anthology 6Multiple authors can have the same name and the same authors may use multiple variants of their names in papers. The AA volunteer team handles such ambiguities using both semi-automatic and manual approaches (fixing some instances on a case-by-case basis). Additionally, AA keeps a file that includes canonical forms of author names. 7We used simple keyword searches for terms such as foreword, invited talk, program, appendix and session in the title to pull out entries that were likely to not be research publications. These were then manually examined to verify that they did not contain any false positives. 8Note that a person may have a name commonly associated with one gender but belong to a different gender. 9https://nlp.stanford.edu/projects/gender.shtml 10https://www.ssa.gov/oact/babynames/limits.html firstname–gender association list P R F from USSA data 98.4 69.8 81.7 from PUBMED data 98.3 81.4 89.1 Table 1: Precision (P), Recall (R), and F-score (F) of how well the first name and gender association matches information in the VJ-AA list. 
PUBMED authors and their genders (Torvik and Smalheiser, 2009; Smith et al., 2013).11 We acknowledge that despite a large expatriate population, the US census information is not representative of the names from around the world. Further, Chinese origin names tend not to be as strongly associated with gender as names from other parts of the world. However, it should be noted that Vogel and Jurafsky (2012) made special effort to include information from a large number of Asian AA authors in their list. The PUBMED list is also noted for having a substantial coverage of Asian names (Torvik and Smalheiser, 2009).
We determined first name–gender association by calculating the percentages of first names corresponding to male and female genders as per each of the PUBMED and USSA fullname–gender lists. We consider a first name to be strongly associated with a gender if the percentage is ≥95%.12 Table 1 shows how well the first name and gender association matches with the VJ-AA list. Given the high precision (over 98%) of the USSA and PUBMED lists of gender-associated first names, we use them (in addition to the VJ-AA list) to populate the M-names and F-names bins. Eventually, the A-Mname and A-Fname bins together had 28,682 (76%) of the 37,733 AA authors. Similarly, we created bins for Papers whose First Author is from A-Mname (P-FA-Mname), Papers whose First Author is from A-Fname (P-FA-Fname), Papers whose Last Author is from A-Mname (P-LA-Mname), and Papers whose Last Author is from A-Fname (P-LA-Fname) to estimate aggregate-level statistics for papers with male and female first and last authors. P-FA-Mname and P-FA-Fname together have 37,297 (83%) AA papers (we will refer to this subset as AA*), and P-LA-Mname and P-LA-Fname together have 39,368 (88%) AA papers (we will refer to this subset as AA**).
11https://experts.illinois.edu/en/datasets/genni-ethnea-forthe-author-ity-2009-dataset-2
12A choice of other percentages such as 90% or 99% would also have been reasonable.
NLP Academic Age as a Proxy for Experience in NLP: First author percentage may vary due to experience, area of research within NLP, venue of publication, etc. To gauge experience, we use the number of years one has been publishing in AA; we will refer to this as the NLP Academic Age. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Note that NLP academic age is not always an accurate reflection of one’s research experience. Also, one can publish NLP papers outside of AA.
3.2 Google Scholar Data
Google Scholar allows researchers to create and edit public author profiles called Google Scholar Profiles. Authors can include their papers (along with their citation information) on this page. We extracted citation information from Google Scholar profiles of authors who published at least three papers in the ACL Anthology.13 This yielded citation information for 1.1 million papers in total. We will refer to this dataset as the NLP Subset of the Google Scholar Dataset, or GScholar-NLP for short. Note that GScholar-NLP includes citation counts not just for NLP papers, but also for non-NLP papers published by authors who have at least three papers in AA. GScholar-NLP includes 32,985 of the 44,894 papers in AA (about 75%). We will refer to this subset of the ACL Anthology papers as AA′. The citation analyses presented in this paper are on AA′.
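To make the two bookkeeping steps from Section 3.1 concrete, the sketch below shows one way to compute (a) the first names strongly associated with a gender under the ≥95% threshold and (b) an author's NLP academic age. The record formats, field values, and function names are illustrative assumptions for this sketch, not the actual NLP Scholar code.

```python
from collections import defaultdict

def strongly_associated_names(records, threshold=0.95):
    """records: iterable of (first_name, gender) pairs, e.g. drawn from the
    USSA or PUBMED name lists (assumed format). Returns {first_name: gender}
    for names where at least `threshold` of the occurrences carry the same
    gender label."""
    counts = defaultdict(lambda: defaultdict(int))
    for first_name, gender in records:
        counts[first_name.lower()][gender] += 1
    associations = {}
    for name, by_gender in counts.items():
        total = sum(by_gender.values())
        top_gender, top_count = max(by_gender.items(), key=lambda kv: kv[1])
        if top_count / total >= threshold:
            associations[name] = top_gender
    return associations

def nlp_academic_age(pub_years, as_of_year):
    """NLP academic age: 1 in the year of an author's first AA paper."""
    return as_of_year - min(pub_years) + 1

# Toy usage with made-up data:
assoc = strongly_associated_names([("Alice", "F"), ("Alice", "F"), ("Sam", "F"), ("Sam", "M")])
# -> {'alice': 'F'}; 'sam' is excluded because neither gender reaches 95%
print(nlp_academic_age([2001, 2010, 2018], as_of_year=2018))  # -> 18, as in the example above
```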
4 Gender Gap in Authorship
We use the datasets described in §3.1 (especially AA* and AA**) to answer a series of questions on female authorship. Since we do not have full self-reported information for all authors, these should be treated as estimates. First author is a privileged position in the author list that is usually reserved for the researcher who has done the most work and writing.14 In NLP, first authors are also often students. Thus we are especially interested in gender gaps that affect them. The last author position is often reserved for the most senior or mentoring researcher. We explore last author disparities only briefly (in Q1).
13This is allowed by GS’s robots exclusion standard.
14A small number of papers have more than one first author. This work did not track that.
Q1. What percentage of the authors in AA are female? What percentage of the AA papers have female first authors (FFA)? What percentage of the AA papers have female last authors (FLA)? How have these percentages changed since 1965?
A. Overall, we estimate that about 29.7% of the 28,682 authors are female; about 29.2% of the first authors in 37,297 AA* papers are female; and about 25.5% of the last authors in 39,368 AA** papers are female. Figure 1 shows how these percentages have changed over the years.
Discussion: Across the years, the percentage of female authors overall is close to the percentage of papers with female first authors. (These percentages are around 28% and 29%, respectively, in 2018.) However, the percentage of female last authors is markedly lower (hovering at about 25% in 2018). These numbers indicate that, as a community, we are far from obtaining male–female parity. A further striking (and concerning) observation is that the female author percentages have not improved since the year 2006. To put these numbers in context, the percentage of female scientists worldwide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower.15 The percentages are much higher for certain other fields such as psychology (Willyard, 2011) and linguistics (LSA, 2017).
15https://unesdoc.unesco.org/ark:/48223/pf0000235155
Q2. How does FFA vary by paper type and venue?
A. Figure 2 shows FFA percentages by paper type and venue.
Discussion: Observe that FFA percentages are lowest for CoNLL, EMNLP, IJCNLP, and system demonstration papers (21% to 24%). FFA percentages for journals, other top-tier conferences, SemEval, shared task papers, and tutorials are the next lowest (24% to 28%). The percentages are markedly higher for LREC, *Sem, and RANLP (33% to 36%), as well as for workshops (31.7%).
Q3. How does female first author percentage change with NLP academic age?
A. In order to determine these numbers, every paper in AA* was placed in a bin corresponding to NLP academic age: if the paper’s first author had an academic age of 1 in the year when the paper was published, then the paper is placed in bin 1; if the paper’s first author had an academic age of 2 in the year when the paper was published, then the paper is placed in bin 2; and so on. The bins for later years contained fewer papers. This is expected as senior authors in NLP often work with students, and students are encouraged to be first authors. Thus, we combine some of the bins in later years: one bin for academic ages between 10 and 14; one for 15 to 19; one for 20 to 34; and one for 35 to 50. Once the papers are assigned to the bins, we calculate the percentage of papers in each bin that have a female first author. Figure 3 shows the results.
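A minimal sketch of this binning and of the per-bin FFA percentage is given below. The paper representation (a dict holding the first author's gender bin and academic age at publication time) is an assumption made purely for illustration; it is not the paper's actual data pipeline.

```python
def age_bin(age):
    """Map an NLP academic age to the bins used here: 1-9 individually,
    then 10-14, 15-19, 20-34, 35-50."""
    if age <= 9:
        return str(age)
    for upper, label in [(14, "10-14"), (19, "15-19"), (34, "20-34")]:
        if age <= upper:
            return label
    return "35-50"

def ffa_percent_by_bin(papers):
    """papers: iterable of dicts with 'fa_gender' ('F' or 'M') and
    'fa_academic_age' (the first author's academic age in the year the
    paper was published). Returns {bin_label: % of papers in that bin
    with a female first author}."""
    totals, female = {}, {}
    for p in papers:
        b = age_bin(p["fa_academic_age"])
        totals[b] = totals.get(b, 0) + 1
        female[b] = female.get(b, 0) + (p["fa_gender"] == "F")
    return {b: 100.0 * female[b] / totals[b] for b in totals}
```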
Discussion: Observe that, with the exception of the 35 to 50 academic age bin, FFA% is highest (30%) at age 1 (first year of publication). There is a period of decline in FFA% until year 6 (27.4%)—this difference is statistically significant (t-test, p < 0.01). This might be a potential indicator that graduate school has a progressively greater negative impact on the productivity of women than of men. (Academic age 1 to 6 often correspond to the period when the first author is in graduate school or in a temporary post-doctoral position.) After year 6, we see a recovery back to 29.4% by year 8, followed by a period of decline once again.
Figure 1: Female authorship percentages in AA over the years: overall, as first author, and as last author.
Figure 2: FFA percentage by venue and paper type. The number of FFA papers is shown in parenthesis.
Figure 3: FFA percentage by academic age. The number of FFA papers is shown in parenthesis.
Q4. How does female first author percentage vary by area of research (within NLP)? Which areas have higher-than-average FFA%? Which areas have lower-than-average FFA%? How does FFA% correlate with popularity of an area—that is, does FFA% tend to be higher- or lower-than-average in areas where lots of authors are publishing?
A. We use word bigrams in the titles of papers to sample papers from various areas.16 The title has a privileged position in a paper. Primarily, it conveys what the paper is about. For example, a paper with machine translation in the title is likely about machine translation. Figure 4 shows the list of top 66 bigrams that occur in the titles of more than 100 AA* papers (in decreasing order of the bigram frequency). For each bigram, the figure also shows the percentage of papers with a female first author. In order to determine whether there is a correlation between the number of papers corresponding to a bigram and FFA%, we calculated the Spearman’s rank correlation between the rank of a bigram by number of papers and the rank of a bigram by FFA%. This correlation was found to be only 0.16. This correlation is not statistically significant at p < 0.01 (two-sided p-value = 0.2). Experiments with lower thresholds (174 bigrams occurring in 50 or more papers and 1408 bigrams occurring in 10 or more papers) also resulted in very low and non-significant correlation numbers (0.11 and 0.03, respectively).
16Other approaches such as clustering are also reasonable; however, results with those might not be easily reproducible.
Discussion: Observe that FFA% varies substantially depending on the bigram. It is particularly low for title bigrams such as dependency parsing, language models, finite state, context free, and neural models; and markedly higher than average for domain specific, semantic relations, dialogue system, spoken dialogue, document summarization, and language resources. However, the rank correlation experiments show that there is no correlation between the popularity of an area (number of papers that have a bigram in the title) and the percentage of female first authors. To obtain further insights, we also repeat some of the experiments described above for unigrams in paper titles. We found that FFA rates are relatively high in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA rates are also relatively high for work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA rates are particularly low for papers on theoretical aspects of statistical modelling, and for terms such as WMT, parsing, markov, recurrent, and discriminative.
Figure 4: Top 66 bigrams in AA* titles and FFA% (30%: light grey; <30%: blue; >30%: green).
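For concreteness, the sketch below reproduces the shape of the Q4 analysis: count title bigrams, compute FFA% per bigram, and test the rank correlation between the two with SciPy's spearmanr. Lowercased whitespace tokenization and the 'title'/'fa_is_female' fields are assumptions of this sketch, not the paper's exact preprocessing.

```python
from collections import defaultdict
from scipy.stats import spearmanr

def bigram_stats(papers, min_count=100):
    """papers: iterable of dicts with 'title' and 'fa_is_female' (bool).
    Returns {bigram: (paper_count, ffa_percent)} for bigrams appearing in
    the titles of more than `min_count` papers."""
    total, female = defaultdict(int), defaultdict(int)
    for p in papers:
        toks = p["title"].lower().split()
        # count each bigram at most once per paper
        for bg in {" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)}:
            total[bg] += 1
            female[bg] += p["fa_is_female"]
    return {bg: (n, 100.0 * female[bg] / n) for bg, n in total.items() if n > min_count}

def popularity_vs_ffa_correlation(stats):
    """Spearman rank correlation between a bigram's paper count and its FFA%."""
    counts = [n for n, _ in stats.values()]
    ffa = [f for _, f in stats.values()]
    return spearmanr(counts, ffa)  # (correlation, two-sided p-value)
```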
5 Gender Gap in Citations
Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, etc. However, individual measures of research impact are limited in scope—they measure only some kinds of contributions. The most commonly used metrics of research impact are derived from citations including: number of citations, average citations, h-index, and impact factor (Bornmann and Daniel, 2009). Despite their limitations, citation metrics have substantial impact on a researcher’s scientific career; often through a combination of funding, the ability to attract talented students and collaborators, job prospects, and other opportunities in the wider research community. Thus, disparities in citations (citation gaps) across demographic attributes such as gender, race, and location have direct real-world adverse implications. This often also results in the demoralization of researchers and marginalization of their work—thus negatively impacting the whole field. Therefore, we examine gender disparities in citations in NLP. We use a subset of the 32,985 AA′ papers (§3.2) that were published from 1965 to 2016 for the analysis (to allow for at least 2.5 years for the papers to collect citations). There are 26,949 such papers.
Q5. How well cited are women and men?
A. For all three classes (females, males, and gender unknown), Figure 5 shows: a bar graph of number of papers, a bar graph of total citations received, and box and whisker plots for citations received by individuals. The whiskers are at a distance of 1.5 times the inter-quartile length. Number of citations pertaining to key points such as 25th percentile, median, and 75th percentile are indicated on the left of the corresponding horizontal bars.
Figure 5: #papers, total citations, box plot of citations per paper: for female, male, gender-unknown first authors. The orange dashed lines mark averages.
Discussion: On average, female first author papers have received markedly fewer citations than male first author papers (37.6 compared to 50.4). The difference in median is smaller (11 compared to 13). The difference in the distributions of males and females is statistically significant (Kolmogorov–Smirnov test, p < 0.01).17 The large difference in averages and smaller difference in medians suggests that there are markedly more very heavily cited male first-author papers than female first-author papers. The difference in citations, or citation gap, across genders may itself vary: (1) by period of time; (2) due to confounding factors such as academic age and areas of research. We explore these next.
17Kolmogorov–Smirnov (KS) test is a non-parametric test that can be applied to compare any two distributions without making assumptions about the nature of the distributions.
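The significance test used in the discussion above can be run with SciPy's two-sample Kolmogorov–Smirnov test, as in this sketch; the citation arrays are toy placeholders rather than the actual NLP Scholar data.

```python
from scipy.stats import ks_2samp

# Citations per paper for female and male first authors (toy values only).
female_fa_citations = [0, 3, 7, 11, 25, 48, 120]
male_fa_citations = [1, 5, 9, 13, 60, 150, 900]

stat, p_value = ks_2samp(female_fa_citations, male_fa_citations)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p-value indicates that the two citation distributions differ,
# without assuming any particular distributional form.
```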
Q6. How has the citation gap across genders changed over the years?
A. Figure 6 (left side) shows the citation statistics across four time periods.
Figure 6: Citation gap across genders for papers: published in different time spans (left); by academic age (right).
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s when male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further reduction. It remains to be seen whether the citation gap for these 2010–2016 papers widens in the coming years. It is also interesting to note that the gender-unknown category has almost bridged the gap with the male category in terms of average citations. Further, the proportion of the gender-unknown authors has steadily increased over the years—arguably, an indication of better representation of authors from around the world in recent years.18
18Our method is expected to have a lower coverage of names from outside North America and Europe because USSA and PUBMED databases historically have had fewer names from outside North America and Europe.
Q7. How have citations varied by gender and academic age? Is the citation gap a side effect of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors?
A. Figure 6 (right side) shows citation statistics broken down by gender and academic age.
Discussion: The graphs show that female first authors consistently receive fewer citations than male first authors across the spans of their academic age. (The gap is highest at academic age 4 and lowest at academic age 7.) Thus, the citation gap is likely due to factors beyond differences in average academic age between men and women.
Q8. How prevalent is the citation gap across areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)?
A. On average, male first authors are cited more than female first authors in 54 of the 66 areas (82% of the areas) discussed earlier in Q4 and Figure 4. Female first authors are cited more in the sets of papers whose titles have: word sense, sentiment analysis, information extraction, neural networks, neural network, semeval 2016, language model, document summarization, multi document, spoken dialogue, dialogue systems, and speech tagging. If women chose to work in areas that happen to attract fewer citations by virtue of the area, then we would not expect to see citation gaps in so many areas. Recall also that we already showed that FFA% is not correlated with rank of popularity of an area (Q4). Thus it is unlikely that the choice of area of research is behind the gender gap.
6 Limitations and Ethical Considerations
Q9. What are the limitations and ethical considerations involved with this work?
A. Data is often a representation of people (Zook et al., 2017). This is certainly the case here and we acknowledge that the use of such data has the potential to harm individuals. Further, while the methods used are not new, their use merits reflection. Analysis focused on women and men leaves out non-binary people.19 Additionally, not disaggregating cis and trans people means that the statistics are largely reflective of the more populous cis class. We hope future work will obtain disaggregated information for various genders.
However, careful attention must be paid as some gender classes might include too few NLP researchers to ensure anonymity even with aggregate-level analysis. The use of female- and male-gender associated names to infer population level statistics for women and men excludes people that do not have such names and people from some cultures where names are not as strongly associated with gender. Names are not immutable (they can be changed to indicate or not indicate gender) and people can choose to keep their birth name or change it (providing autonomy). However, changing names can be quite difficult. Also, names do not capture gender fluidity or contextual gender. A more inclusive way of obtaining gender information is through self-reported surveys. However, challenges persist in terms of how to design effective and inclusive questionnaires (Bauer et al., 2017; Group, 2014). Further, even with self-report textboxes that give the respondent the primacy and autonomy to express gender, downstream research often ignores such data or combines information in ways beyond the control of the respondent.20 Also, as is the case here, it is not easy to obtain self-reported historical information. Social category detection can potentially lead to harm, for example, depriving people of opportunities simply because of their race or gender. However, one can also see the benefits of NLP techniques and social category detection in public health (e.g., developing targeted initiatives to improve health outcomes of vulnerable populations), 19Note that as per widely cited definitions, nonbinary people are considered transgender, but most transgender people are not non-binary. Also, trans people often use a name that is more associated with their gender identity. 20https://reallifemag.com/counting-the-countless/ as well as in psychology and social science (e.g., to better understand the unique challenges of belonging to a social category). A larger list of ethical considerations associated with the NLP Scholar project is available through the project webpage.21 Mihaljevi´c et al. (2019) also discusses the ethical considerations in using author names to infer gender statistics in the Gender Gap in Science Project.22 7 Conclusions We analyzed the ACL Anthology to show that only ∼30% have female authors, ∼29% have female first authors, and ∼25% have female last authors. Strikingly, even though some gains were made in the early years of NLP, overall FFA% has not improved since the mid 2000s. Even though there are some areas where FFA% is close to parity with male first authorship, most areas have a substantial gap in the numbers for male and female authorship. We found no correlation between popularity of research area and FFA%. We also showed how FFA% varied by paper type, venue, academic age, and area of research. We used citation counts extracted from Google Scholar to show that, on average, male first authors are cited markedly more than female first authors, even when controlling for experience and area of work. Thus, in NLP, gender gaps exist both in authorship and citations. This paper did not explore the reasons behind the gender gaps. However, the inequities that impact the number of women pursuing scientific research (Roos, 2008; Foschi, 2004; Buchmann, 2009) and biases that impact citation patterns unfairly (Brouns, 2007; Feller, 2004; Gupta et al., 2005) are well-documented. 
These factors play a substantial role in creating the gender gap, as opposed to differences in innate ability or differences in quality of work produced by these two genders. If anything, past research has shown that self-selection in the face of inequities and adversity leads to more competitive, capable, and confident cohorts (Nekby et al., 2008; Hardies et al., 2013). Acknowledgments Many thanks to Rebecca Knowles, Ellen Riloff, Tara Small, Isar Nejadgholi, Dan Jurafsky, Rada Mihalcea, Isabelle Augenstein, Eric Joanis, Michael Strube, Shubhanshu Mishra, and Cyril 21http://saifmohammad.com/WebPages/nlpscholar.html 22https://gender-gap-in-science.org Goutte for the tremendously helpful discussions. Many thanks to Cassidy Rae Henry, Luca Soldaini, Su Lin Blodgett, Graeme Hirst, Brendan T. O’Connor, Luc´ıa Santamar´ıa, Lyle Ungar, Emma Manning, and Peter Turney for discussions on the ethical considerations involved with this work. References Richard Alford. 1987. Naming and identity: A crosscultural study of personal naming practices. Hraf Press. Jens Peter Andersen and Mathias Wullum Nielsen. 2018. Google Scholar and Web of Science: Examining gender differences in citation coverage across five scientific disciplines. Journal of Informetrics, 12(3):950–959. Ashton Anderson, Dan McFarland, and Dan Jurafsky. 2012. Towards a computational history of the ACL: 1980–2008. In Proceedings of the Workshop on Rediscovering 50 Years of Discoveries, pages 13–21. Herbert Barry III and Aylene S Harper. 2014. Unisex names for babies born in pennsylvania 1990–2010. Names, 62(1):13–22. Greta R Bauer, Jessica Braimoh, Ayden I Scheim, and Christoffer Dharma. 2017. Transgender-inclusive measures of sex/gender for population surveys: Mixed-methods evaluation and recommendations. PloS one, 12(5):e0178043. Lutz Bornmann and Hans-Dieter Daniel. 2009. The state of h index research. EMBO reports, 10(1):2–6. Margo Brouns. 2007. The making of excellence: Gender bias in academia. In Exzellenz in Wissenschaft und Forschung - neue Wege in der Gleichstellungspolitik, pages 23–42. Wissenshaftsrat. Claudia Buchmann. 2009. Gender inequalities in the transition to college. Teachers College Record, 111(10):2320–2345. Catherine Connell. 2010. Doing, undoing, or redoing gender? Learning from the workplace experiences of transpeople. Gender & Society, 24(1):31–55. Helana Darwin. 2017. Doing gender beyond the binary: A virtual ethnography. Symbolic Interaction, 40(3):317–334. Michelle L Dion, Jane Sumner, and Sara McLaughlin Mitchell. 2018. Gendered citation patterns across political science and social science methodology fields. Political Analysis, 26(3):312–327. Jordi Duch, Xiao Han T Zeng, Marta Sales-Pardo, Filippo Radicchi, Shayna Otis, Teresa K Woodruff, and Lu´ıs A Nunes Amaral. 2012. The possible role of resource requirements and academic career-choice risk on gender differences in publication rate and impact. PloS one, 7(12):e51332. Irwin Feller. 2004. Measurement of scientific performance and gender bias. In Gender and Excellence in the Making, pages 35–39. Luxembourg: Office for Official Publications of the European Communities. Marta Foschi. 2004. Blocking the use of gender-based double standards for competence. In Gender and Excellence in the Making, pages 51–55. Luxembourg: Office for Official Publications of the European Communities. Juan Miguel Gallego and Luis H Guti´errez. 2018. An integrated analysis of the impact of gender diversity on innovation and productivity in manufacturing firms. 
Technical report, Inter-American Development Bank. Gita Ghiasi, Vincent Larivi`ere, and Cassidy Sugimoto. 2016. Gender differences in synchronous and diachronous self-citations. In 21st International Conference on Science and Technology Indicators-STI 2016. Book of Proceedings. GenIUSS Group. 2014. Best practices for asking questions to identify transgender and other gender minority respondents on population-based surveys. eScholarship, University of California. Namrata Gupta, Carol Kemelgor, Stefan Fuchs, and Henry Etzkowitz. 2005. Triple burden on women in science: A cross-cultural analysis. Current science, pages 1382–1386. Malin H˚akanson. 2005. The impact of gender on citations: An analysis of college & research libraries, journal of academic librarianship, and library quarterly. College & Research Libraries, 66(4):312– 323. Dalia S Hakura, Mumtaz Hussain, Monique Newiak, Vimal Thakoor, and Fan Yang. 2016. Inequality, gender gaps and economic growth: Comparative evidence for sub-Saharan Africa. International Monetary Fund. Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M Branham. 2018. Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. In Proceedings of the 2018 CHI conference on human factors in computing systems, pages 1–13. Kris Hardies, Diane Breesch, and Jo¨el Branson. 2013. Gender differences in overconfidence and risk taking: Do self-selection and socialization matter? Economics Letters, 118(3):442–444. Janet Shibley Hyde, Rebecca S Bigler, Daphna Joel, Charlotte Chucky Tate, and Sari M van Anders. 2019. The future of sex and gender in psychology: Five challenges to the gender binary. American Psychologist, 74(2):171. Suzanne J Kessler and Wendy McKenna. 1978. Gender: An ethnomethodological approach. IL: The University of Chicago Press. Molly M King, Carl T Bergstrom, Shelley J Correll, Jennifer Jacquet, and Jevin D West. 2017. Men set their own cites high: Gender and self-citation across fields and over time. Socius, 3:2378023117738903. Vincent Larivi`ere, Chaoqun Ni, Yves Gingras, Blaise Cronin, and Cassidy R Sugimoto. 2013. Bibliometrics: Global gender disparities in science. Nature News, 504(7479):211. Stanley Lieberson, Susan Dumais, and Shyon Baumann. 2000. The instability of androgynous names: The symbolic maintenance of gender boundaries. American Journal of Sociology, 105(5):1249–1287. Linda L Lindsey. 2015. The sociology of gender theoretical perspectives and feminist frameworks. In Gender roles, pages 23–48. Routledge. The Linguistic Society of America LSA. 2017. The state of linguistics in higher education annual report 2017. Technical report, The Linguistic Society of America. Sangeeta Mehta, Karen EA Burns, Flavia R Machado, Alison E Fox-Robichaud, Deborah J Cook, Carolyn S Calfee, Lorraine B Ware, Ellen L Burnham, Niranjan Kissoon, John C Marshall, et al. 2017. Gender parity in critical care medicine. American journal of respiratory and critical care medicine, 196(4):425–429. Helena Mihaljevi´c, Marco Tullney, Luc´ıa Santamar´ıa, and Christian Steinfeldt. 2019. Reflections on gender analyses of bibliographic corpora. Frontiers in Big Data, 2:29. Helena Mihaljevi´c-Brandt, Luc´ıa Santamar´ıa, and Marco Tullney. 2016. The effect of gender in the publication patterns in mathematics. PLoS One, 11(10):e0165367. Shubhanshu Mishra, Brent D Fegley, Jana Diesner, and Vetle I Torvik. 2018. Self-citation is the hallmark of productive authors, of any gender. PloS one, 13(9):e0195773. Saif M. Mohammad. 2019. 
The state of NLP literature: A diachronic analysis of the ACL Anthology. arXiv preprint arXiv:1911.03562. Saif M. Mohammad. 2020a. Examining citations of natural language processing literature. In Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics, Seattle, USA. Saif M. Mohammad. 2020b. NLP Scholar: A dataset for examining the state of NLP research. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC-2020), Marseille, France. Saif M. Mohammad. 2020c. NLP Scholar: An interactive visual explorer for Natural Language Processing literature. In Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics, Seattle, USA. Lena Nekby, Peter Thoursie, and Lars Vahtrik. 2008. Gender and self-selection into a competitive environment: Are women more overconfident than men? Economics Letters, 100(3):405–407. Caroline Criado Perez. 2019. Invisible women: Exposing data bias in a world designed for men. Random House. Jane Pilcher. 2017. Names and “doing gender”: How forenames and surnames contribute to gender identities, difference, and inequalities. Sex roles, 77(1112):812–822. Cristina Richards, Walter Pierre Bouman, and M Barker. 2017. Non-binary genders. London: Pal grave Macmillan. Patricia A Roos. 2008. Together but unequal: Combating gender inequity in the academy. Journal of Workplace Rights, 13(2):185–199. Natalie Schluter. 2018. The glass ceiling in NLP. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2793–2798, Brussels, Belgium. Inger Skjelsboek and Dan Smith. 2001. Gender, peace and conflict. Sage. Brittany N Smith, Mamta Singh, and Vetle I Torvik. 2013. A search engine approach to estimating temporal changes in gender orientation of first names. In Proceedings of the 13th ACM/IEEE-CS joint conference on Digital libraries, pages 199–208. ACM. Vetle I Torvik and Neil R Smalheiser. 2009. Author name disambiguation in medline. ACM Transactions on Knowledge Discovery from Data (TKDD), 3(3):1–29. Adam Vogel and Dan Jurafsky. 2012. He said, she said: Gender in the ACL Anthology. In Proceedings of the Special Workshop on Rediscovering 50 Years of Discoveries, pages 33–41, Jeju Island, Korea. World Economic Forum WEF. 2018. The global gender gap report 2018. Technical report, World Economic Forum, Geneva, Switzerland. Cassandra Willyard. 2011. Men: A growing minority. GradPSYCH Magazine, 9(1):40. Jonathan Woetzel et al. 2015. The power of parity: How advancing women’s equality can add $12 trillion to global growth. Technical report, McKinsey Global Institute. Matthew Zook, Solon Barocas, Danah Boyd, Kate Crawford, Emily Keller, Seeta Pena Gangadharan, Alyssa Goodman, Rachelle Hollander, Barbara A. Koenig, Jacob Metcalf, Arvind Narayanan, Alondra Nelson, and Frank Pasquale. 2017. Ten simple rules for responsible big data research. PLOS Computational Biology, 13(3):1–10.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7871 BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension Mike Lewis*, Yinhan Liu*, Naman Goyal*, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer Facebook AI [email protected],[email protected],[email protected] Abstract We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new stateof-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.1 1 Introduction Self-supervised methods have achieved remarkable success in a wide range of NLP tasks (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019; Joshi et al., 2019; Yang et al., 2019; Liu et al., 2019). The most successful approaches have been variants of masked language models, which are denoising autoencoders that are trained to reconstruct text where a random subset of the words has been masked out. Recent work has shown gains by improving the distribution of 1Code and pre-trained models for BART are available at https://github.com/pytorch/fairseq and https://huggingface.co/transformers masked tokens (Joshi et al., 2019), the order in which masked tokens are predicted (Yang et al., 2019), and the available context for replacing masked tokens (Dong et al., 2019). However, these methods typically focus on particular types of end tasks (e.g. span prediction, generation, etc.), limiting their applicability. In this paper, we present BART, which pre-trains a model combining Bidirectional and Auto-Regressive Transformers. BART is a denoising autoencoder built with a sequence-to-sequence model that is applicable to a very wide range of end tasks. Pretraining has two stages (1) text is corrupted with an arbitrary noising function, and (2) a sequence-to-sequence model is learned to reconstruct the original text. BART uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes (see Figure 1). A key advantage of this setup is the noising flexibility; arbitrary transformations can be applied to the original text, including changing its length. 
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where arbitrary length spans of text (including zero length) are replaced with a single mask token. This approach generalizes the original word masking and next sentence prediction objectives in BERT by forcing the model to reason more about overall sentence length and make longer range transformations to the input. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa (Liu et al., 2019) with comparable training resources on GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016), and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks. For example, it improves performance by 3.5 ROUGE over previous work on XSum (Narayan et al., 2018). BART also opens up new ways of thinking about fine tuning. We present a new scheme for machine translation where a BART model is stacked above a few additional transformer layers. These layers are trained 7872 Bidirectional Encoder A _ C _ E B D (a) BERT: Random tokens are replaced with masks, and the document is encoded bidirectionally. Missing tokens are predicted independently, so BERT cannot easily be used for generation. Autoregressive Decoder A B C D E <s> A B C D (b) GPT: Tokens are predicted auto-regressively, meaning GPT can be used for generation. However words can only condition on leftward context, so it cannot learn bidirectional interactions. Autoregressive Decoder Bidirectional Encoder A B C D E A _ B _ E <s> A B C D (c) BART: Inputs to the encoder need not be aligned with decoder outputs, allowing arbitary noise transformations. Here, a document has been corrupted by replacing spans of text with a mask symbols. The corrupted document (left) is encoded with a bidirectional model, and then the likelihood of the original document (right) is calculated with an autoregressive decoder. For fine-tuning, an uncorrupted document is input to both the encoder and decoder, and we use representations from the final hidden state of the decoder. Figure 1: A schematic comparison of BART with BERT (Devlin et al., 2019) and GPT (Radford et al., 2018). to essentially translate the foreign language to noised English, by propagation through BART, thereby using BART as a pre-trained target-side language model. This approach improves performance over a strong back-translation MT baseline by 1.1 BLEU on the WMT Romanian-English benchmark. To better understand these effects, we also report an ablation analysis that replicates other recently proposed training objectives. This study allows us to carefully control for a number of factors, including data and optimization parameters, which have been shown to be as important for overall performance as the selection of training objectives (Liu et al., 2019). We find that BART exhibits the most consistently strong performance across the full range of tasks we consider. 2 Model BART is a denoising autoencoder that maps a corrupted document to the original document it was derived from. It is implemented as a sequence-to-sequence model with a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder. For pre-training, we optimize the negative log likelihood of the original document. 
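A minimal sketch of this corrupt-and-reconstruct objective is shown below in PyTorch. It is an illustrative approximation rather than the authors' code: the tiny model, toy vocabulary, and the simple mask-based corrupt function are stand-ins (the actual corruptions appear in Section 2.2 below), and positional embeddings are omitted for brevity. Only the loss structure mirrors the description above: the negative log likelihood of the original document given its corrupted copy, computed with a bidirectional encoder and an autoregressive decoder.

```python
# Minimal sketch of a corrupt-and-reconstruct denoising objective (illustrative only).
# Model sizes, vocabulary, and the corruption are hypothetical stand-ins; positional
# embeddings are omitted for brevity.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, BOS, MASK = 1000, 1, 2  # toy vocabulary ids

class TinyDenoiser(nn.Module):
    def __init__(self, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.transformer = nn.Transformer(d_model, nhead, layers, layers, batch_first=True)
        self.proj = nn.Linear(d_model, VOCAB)

    def forward(self, corrupted, original):
        # Decoder input is the original document shifted right (teacher forcing).
        dec_in = torch.cat([torch.full_like(original[:, :1], BOS), original[:, :-1]], dim=1)
        causal = self.transformer.generate_square_subsequent_mask(dec_in.size(1))
        hidden = self.transformer(self.embed(corrupted), self.embed(dec_in), tgt_mask=causal)
        return self.proj(hidden)

def corrupt(tokens, mask_prob=0.3):
    # Hypothetical noising: replace a fraction of tokens with a mask id.
    return [MASK if random.random() < mask_prob else t for t in tokens]

original = torch.randint(3, VOCAB, (2, 12))                       # fake "documents"
corrupted = torch.tensor([corrupt(row.tolist()) for row in original])
model = TinyDenoiser()
logits = model(corrupted, original)
# Cross-entropy against the uncorrupted tokens = NLL of the original document.
loss = F.cross_entropy(logits.reshape(-1, VOCAB), original.reshape(-1))
loss.backward()
print(float(loss))
```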
2.1 Architecture BART uses the standard sequence-to-sequence Transformer architecture from (Vaswani et al., 2017), except, following GPT, that we modify ReLU activation functions to GeLUs (Hendrycks & Gimpel, 2016) and initialise parameters from N(0, 0.02). For our base model, we use 6 layers in the encoder and decoder, and for our large model we use 12 layers in each. The architecture is closely related to that used in BERT, with the following differences: (1) each layer of the decoder additionally performs cross-attention over the final hidden layer of the encoder (as in the transformer sequence-to-sequence model); and (2) BERT uses an additional feed-forward network before wordprediction, which BART does not. In total, BART contains roughly 10% more parameters than the equivalently sized BERT model. 2.2 Pre-training BART BART is trained by corrupting documents and then optimizing a reconstruction loss—the cross-entropy between the decoder’s output and the original document. Unlike existing denoising autoencoders, which are tailored to specific noising schemes, BART allows us to apply any type of document corruption. In the extreme case, where all information about the source is lost, BART is equivalent to a language model. We experiment with several previously proposed and novel transformations, but we believe there is a significant potential for development of other new alternatives. The transformations we used are summarized below, and examples are shown in Figure 2. Token Masking Following BERT (Devlin et al., 2019), random tokens are sampled and replaced with [MASK] elements. Token Deletion Random tokens are deleted from the input. In contrast to token masking, the model must 7873 A B C . D E . A . C . E . A _ . D _ E . A _C . _ E . C . D E . A B Document Rotation Token Masking Token Deletion Text Infilling D E . A B C . Sentence Permutation Figure 2: Transformations for noising the input that we experiment with. These transformations can be composed. decide which positions are missing inputs. Text Infilling A number of text spans are sampled, with span lengths drawn from a Poisson distribution (λ = 3). Each span is replaced with a single [MASK] token. 0-length spans correspond to the insertion of [MASK] tokens. Text infilling is inspired by SpanBERT (Joshi et al., 2019), but SpanBERT samples span lengths from a different (clamped geometric) distribution, and replaces each span with a sequence of [MASK] tokens of exactly the same length. Text infilling teaches the model to predict how many tokens are missing from a span. Sentence Permutation A document is divided into sentences based on full stops, and these sentences are shuffled in a random order. Document Rotation A token is chosen uniformly at random, and the document is rotated so that it begins with that token. This task trains the model to identify the start of the document. 3 Fine-tuning BART The representations produced by BART can be used in several ways for downstream applications. 3.1 Sequence Classification Tasks For sequence classification tasks, the same input is fed into the encoder and decoder, and the final hidden state of the final decoder token is fed into new multi-class linear classifier. This approach is related to the CLS token in BERT; however we add the additional token to the end so that representation for the token in the decoder can attend to decoder states from the complete input (Figure 3a). 
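As a rough illustration of the corruptions summarized in Figure 2, the sketch below implements simplified, whitespace-token versions of token deletion, text infilling, sentence permutation, and document rotation. It is not the authors' implementation: BART operates on subword tokens, and details such as the masking budget, span sampling, and the trigger probability used here are made up for the example.

```python
# Rough sketches of the Figure 2 noising transformations (illustrative only).
# Assumes whitespace tokens and a literal "[MASK]" symbol; the paper's implementation
# works on subwords and differs in details such as the masking budget and span sampling.
import random
import numpy as np

def token_deletion(tokens, p=0.15):
    return [t for t in tokens if random.random() > p]

def text_infilling(tokens, mask_ratio=0.3, lam=3.0):
    # Replace spans with a single [MASK]; span lengths drawn from Poisson(lambda=3).
    out, i, budget = [], 0, int(mask_ratio * len(tokens))
    while i < len(tokens):
        if budget > 0 and random.random() < 0.2:   # 0.2 trigger probability is arbitrary
            span = np.random.poisson(lam)          # may be 0: inserts a bare [MASK]
            out.append("[MASK]")
            i += span
            budget -= max(span, 1)
        else:
            out.append(tokens[i])
            i += 1
    return out

def sentence_permutation(document):
    sentences = [s for s in document.split(". ") if s]
    random.shuffle(sentences)
    return ". ".join(sentences)

def document_rotation(tokens):
    start = random.randrange(len(tokens))
    return tokens[start:] + tokens[:start]

doc = "The cat sat on the mat. It was warm. The dog watched."
print(text_infilling(doc.split()))
print(sentence_permutation(doc))
print(document_rotation(doc.split()))
```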
3.2 Token Classification Tasks For token classification tasks, such as answer endpoint classification for SQuAD, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word. This representation is used to classify the token. 3.3 Sequence Generation Tasks Because BART has an autoregressive decoder, it can be directly fine tuned for sequence generation tasks such as abstractive question answering and summarization. In both of these tasks, information is copied from the input but manipulated, which is closely related to the denoising pre-training objective. Here, the encoder input is the input sequence, and the decoder generates outputs autoregressively. 3.4 Machine Translation We also explore using BART to improve machine translation decoders for translating into English. Previous work Edunov et al. (2019) has shown that models can be improved by incorporating pre-trained encoders, but gains from using pre-trained language models in decoders have been limited. We show that it is possible to use the entire BART model (both encoder and decoder) as a single pretrained decoder for machine translation, by adding a new set of encoder parameters that are learned from bitext (see Figure 3b). More precisely, we replace BART’s encoder embedding layer with a new randomly initialized encoder. The model is trained end-to-end, which trains the new encoder to map foreign words into an input that BART can de-noise to English. The new encoder can use a separate vocabulary from the original BART model. We train the source encoder in two steps, in both cases backpropagating the cross-entropy loss from the output of the BART model. In the first step, we freeze most of BART parameters and only update the randomly initialized source encoder, the BART positional embeddings, and the self-attention input projection matrix of BART’s encoder first layer. In the second step, we train all model parameters for a small number of iterations. 4 Comparing Pre-training Objectives BART supports a much wider range of noising schemes during pre-training than previous work. We compare a range of options using base-size models (6 encoder and 6 decoder layers, with a hidden size of 768), evaluated on a representative subset of the tasks we will consider for the full large scale experiments in §5. 4.1 Comparison Objectives While many pre-training objectives have been proposed, fair comparisons between these have been difficult to perform, at least in part due to differences in training data, training resources, architectural differ7874 Pre-trained Decoder Pre-trained Encoder label A B C D E <s> A B C D E (a) To use BART for classification problems, the same input is fed into the encoder and decoder, and the representation from the final output is used. Randomly Initialized Encoder α β γ δ ε Pre-trained Decoder Pre-trained Encoder A B C D E <s> A B C D (b) For machine translation, we learn a small additional encoder that replaces the word embeddings in BART. The new encoder can use a disjoint vocabulary. Figure 3: Fine tuning BART for classification and translation. ences between models, and fine-tuning procedures. We re-implement strong pre-training approaches recently proposed for discriminative and generation tasks. We aim, as much as possible, to control for differences unrelated to the pre-training objective. 
However, we do make minor changes to the learning rate and usage of layer normalisation in order to improve performance (tuning these separately for each objective). For reference, we compare our implementations with published numbers from BERT, which was also trained for 1M steps on a combination of books and Wikipedia data. We compare the following approaches: Language Model Similarly to GPT (Radford et al., 2018), we train a left-to-right Transformer language model. This model is equivalent to the BART decoder, without cross-attention. Permuted Language Model Based on XLNet (Yang et al., 2019), we sample 1/6 of the tokens, and generate them in a random order autoregressively. For consistency with other models, we do not implement the relative positional embeddings or attention across segments from XLNet. Masked Language Model Following BERT (Devlin et al., 2019), we replace 15% of tokens with [MASK] symbols, and train the model to independently predict the original tokens. Multitask Masked Language Model As in UniLM (Dong et al., 2019), we train a Masked Language Model with additional self-attention masks. Self-attention masks are chosen randomly with the following proportions: 1/6 left-to-right, 1/6 right-to-left, 1/3 unmasked, and 1/3 with the first 50% of tokens unmasked and a left-to-right mask for the remainder. Masked Seq-to-Seq Inspired by MASS (Song et al., 2019), we mask a span containing 50% of tokens, and train a sequence-to-sequence model to predict the masked tokens. For the Permuted LM, Masked LM and Multitask Masked LM, we use two-stream attention (Yang et al., 2019) to efficiently compute likelihoods of the output part of the sequence (using a diagonal self-attention mask on the output to predict words left-to-right). We experiment with (1) treating the task as a standard sequence-to-sequence problem, where the source is the input to the encoder and the target is the decoder output, or (2) adding the source as a prefix to the target in the decoder, with a loss only on the target part of the sequence. We find the former works better for BART models, and the latter for other models. To most directly compare our models on their ability to model their fine-tuning objective (the log likelihood of the human text), we report perplexity in Table 1. 4.2 Tasks SQuAD (Rajpurkar et al., 2016), an extractive question answering task on Wikipedia paragraphs. Answers are text spans extracted from a given document context. Similar to BERT (Devlin et al., 2019), we use concatenated question and context as input to the encoder of BART, and additionally pass them to the decoder. The model includes classifiers to predict the start and end indices of each token. MNLI (Williams et al., 2017), a bitext classification task to predict whether one sentence entails another. The fine-tuned model concatenates the two sentences with an appended EOS token, and passes them to both the BART encoder and decoder. In contrast to BERT, the representation of the EOS token is used to classify the relation between the sentences. ELI5 (Fan et al., 2019), a long-form abstractive question answering dataset. Models generate answers conditioned on the concatenation of a question and supporting documents. XSum (Narayan et al., 2018), a news summarization dataset with highly abstractive summaries. ConvAI2 (Dinan et al., 2019), a dialogue response generation task, conditioned on context and a persona. CNN/DM (Hermann et al., 2015), a news summarization dataset. Summaries here are typically closely related to source sentences.
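As a concrete illustration of the Multitask Masked Language Model configuration described in Section 4.1 above, the sketch below samples self-attention masks with the stated proportions (1/6 left-to-right, 1/6 right-to-left, 1/3 unmasked, 1/3 with the first half visible and left-to-right over the remainder). This is a hypothetical sketch, not the code used in the paper.

```python
# Illustration of the self-attention mask mixture used for the Multitask Masked
# Language Model baseline (Section 4.1). Sketch only; not the authors' implementation.
import numpy as np

def sample_self_attention_mask(seq_len: int, rng: np.random.Generator) -> np.ndarray:
    """Return a boolean (seq_len, seq_len) matrix; True = position may be attended."""
    full = np.ones((seq_len, seq_len), dtype=bool)
    left_to_right = np.tril(full)                 # token i sees tokens <= i
    right_to_left = np.triu(full)                 # token i sees tokens >= i
    prefix = left_to_right.copy()
    prefix[:, : seq_len // 2] = True              # first 50% of tokens fully visible
    choices = [left_to_right, right_to_left, full, prefix]
    probs = [1 / 6, 1 / 6, 1 / 3, 1 / 3]
    idx = rng.choice(len(choices), p=probs)
    return choices[idx]

rng = np.random.default_rng(0)
mask = sample_self_attention_mask(8, rng)
print(mask.astype(int))
```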
7875 Model SQuAD 1.1 MNLI ELI5 XSum ConvAI2 CNN/DM F1 Acc PPL PPL PPL PPL BERT Base (Devlin et al., 2019) 88.5 84.3 Masked Language Model 90.0 83.5 24.77 7.87 12.59 7.06 Masked Seq2seq 87.0 82.1 23.40 6.80 11.43 6.19 Language Model 76.7 80.1 21.40 7.00 11.51 6.56 Permuted Language Model 89.1 83.7 24.03 7.69 12.23 6.96 Multitask Masked Language Model 89.2 82.4 23.73 7.50 12.39 6.74 BART Base w/ Token Masking 90.4 84.1 25.05 7.08 11.73 6.10 w/ Token Deletion 90.4 84.1 24.61 6.90 11.46 5.87 w/ Text Infilling 90.8 84.0 24.26 6.61 11.05 5.83 w/ Document Rotation 77.2 75.3 53.69 17.14 19.87 10.59 w/ Sentence Shuffling 85.4 81.5 41.87 10.93 16.67 7.89 w/ Text Infilling + Sentence Shuffling 90.8 83.8 24.17 6.62 11.12 5.41 Table 1: Comparison of pre-training objectives, including approaches inspired by BERT, MASS, GPT, XLNet and UniLM. All models are a similar size to BERT Base and are trained for 1M steps on the same data. Entries in the bottom two blocks are trained on identical data using the same code-base, and fine-tuned with the same procedures. Entries in the second block are inspired by pre-training objectives proposed in previous work, but have been simplified to focus on evaluation objectives (see §4.1). Performance varies considerably across tasks, but the BART models with text infilling demonstrate the most consistently strong performance. 4.3 Results Results are shown in Table 1. Several trends are clear: Performance of pre-training methods varies significantly across tasks The effectiveness of pre-training methods is highly dependent on the task. For example, a simple language model achieves the best ELI5 performance, but the worst SQUAD results. Token masking is crucial Pre-training objectives based on rotating documents or permuting sentences perform poorly in isolation. The successful methods either use token deletion or masking, or self-attention masks. Deletion appears to outperform masking on generation tasks. Left-to-right pre-training improves generation The Masked Language Model and the Permuted Language Model perform less well than others on generation, and are the only models we consider that do not include left-to-right auto-regressive language modelling during pre-training. Bidirectional encoders are crucial for SQuAD As noted in previous work (Devlin et al., 2019), just left-to-right decoder performs poorly on SQuAD, because future context is crucial in classification decisions. However, BART achieves similar performance with only half the number of bidirectional layers. The pre-training objective is not the only important factor Our Permuted Language Model performs less well than XLNet (Yang et al., 2019). Some of this difference is likely due to not including other architectural improvements, such as relative-position embeddings or segment-level recurrence. Pure language models perform best on ELI5 The ELI5 dataset is an outlier, with much higher perplexities than other tasks, and is the only generation task where other models outperform BART. A pure language model performs best, suggesting that BART is less effective when the output is only loosely constrained by the input. BART achieves the most consistently strong performance. With the exception of ELI5, BART models using text-infilling perform well on all tasks. 5 Large-scale Pre-training Experiments Recent work has shown that downstream performance can dramatically improve when pre-training is scaled to large batch sizes (Yang et al., 2019; Liu et al., 2019) and corpora. 
To test how well BART performs in this regime, and to create a useful model for downstream tasks, we trained BART using the same scale as the RoBERTa model. 5.1 Experimental Setup We pre-train a large model with 12 layers in each of the encoder and decoder, and a hidden size of 1024. Following RoBERTa (Liu et al., 2019), we use a batch size of 8000, and train the model for 500000 steps. Documents are tokenized with the same byte-pair encoding as GPT-2 (Radford et al., 2019). Based on the results in Section §4, we use a combination of text infilling and sentence permutation. We mask 30% of tokens in each 7876 MNLI SST QQP QNLI STS-B RTE MRPC CoLA m/mm Acc Acc Acc Acc Acc Acc Mcc BERT 86.6/93.2 91.3 92.3 90.0 70.4 88.0 60.6 UniLM 87.0/85.9 94.5 92.7 70.9 61.1 XLNet 89.8/95.6 91.8 93.9 91.8 83.8 89.2 63.6 RoBERTa 90.2/90.2 96.4 92.2 94.7 92.4 86.6 90.9 68.0 BART 89.9/90.1 96.6 92.5 94.9 91.2 87.0 90.4 62.8 Table 2: Results for large models on GLUE tasks. BART performs comparably to RoBERTa and XLNet, suggesting that BART’s uni-directional decoder layers do not reduce performance on discriminative tasks. SQuAD 1.1 SQuAD 2.0 EM/F1 EM/F1 BERT 84.1/90.9 79.0/81.8 UniLM -/80.5/83.4 XLNet 89.0/94.5 86.1/88.8 RoBERTa 88.9/94.6 86.5/89.4 BART 88.8/94.6 86.1/89.2 Table 3: BART gives similar results to XLNet and RoBERTa on question answering. document, and permute all sentences. Although sentence permutation only shows significant additive gains on the CNN/DM summarization dataset, we hypothesised that larger pre-trained models may be better able to learn from this task. To help the model better fit the data, we disabled dropout for the final 10% of training steps. We use the same pre-training data as Liu et al. (2019), consisting of 160Gb of news, books, stories, and web text. 5.2 Discriminative Tasks Tables 3 and 2 compares the performance of BART with several recent approaches on the well-studied SQuAD and GLUE tasks (Warstadt et al., 2018; Socher et al., 2013; Dolan & Brockett, 2005; Agirre et al., 2007; Williams et al., 2017; Dagan et al., 2006; Levesque et al., 2011). The most directly comparable baseline is RoBERTa, which was pre-trained with the same resources, but a different objective. Overall, BART performs similarly, with only small differences between the models on most tasks. suggesting that BART’s improvements on generation tasks do not come at the expense of classification performance. 5.3 Generation Tasks We also experiment with several text generation tasks. BART is fine-tuned as a standard sequence-to-sequence model from the input to the output text. During finetuning we use a label smoothed cross entropy loss (Pereyra et al., 2017), with the smoothing parameter set to 0.1. During generation, we set beam size as 5, remove duplicated trigrams in beam search, and tuned the model with min-len, max-len, length penalty on the validation set (Fan et al., 2017). Summarization To provide a comparison with the state-of-the-art in summarization, we present results on two summarization datasets, CNN/DailyMail and XSum, which have distinct properties (Table 4). Summaries in the CNN/DailyMail tend to resemble source sentences. Extractive models do well here, and even the baseline of the first-three source sentences is highly competitive. Nevertheless, BART outperforms all existing work. In contrast, XSum is highly abstractive, and extractive models perform poorly. 
BART outperforms the best previous work, based on RoBERTa, by roughly 3.5 points on all ROUGE metrics—representing a significant advance in performance on this problem. Qualitatively, sample quality is high (see §6). We also conduct human evaluation (Table 5). Annotators were asked to choose the better of two summaries for a passage. One summary was from BART, and the other was either a human reference or publicly available output from the BERTSUMEXTABS model. As with automated metrics, BART significantly outperforms prior work. However, it has not reach human performance on this task. Dialogue We evaluate dialogue response generation on CONVAI2 (Dinan et al., 2019), in which agents must generate responses conditioned on both the previous context and a textually-specified persona. BART outperforms previous work on two automated metrics. Abstractive QA We use the recently proposed ELI5 dataset to test the model’s ability to generate long freeform answers. We find BART outperforms the best previous work by 1.2 ROUGE-L, but the dataset remains a challenging, because answers are only weakly specified by the question. 5.4 Translation We also evaluated performance on WMT16 RomanianEnglish, augmented with back-translation data from Sennrich et al. (2016). We use a 6-layer transformer source encoder to map Romanian into a representation that BART is able to de-noise into English, following the approach introduced in §3.4. 7877 CNN/DailyMail XSum R1 R2 RL R1 R2 RL Lead-3 40.42 17.62 36.67 16.30 1.60 11.95 PTGEN (See et al., 2017) 36.44 15.66 33.42 29.70 9.21 23.24 PTGEN+COV (See et al., 2017) 39.53 17.28 36.38 28.10 8.02 21.72 UniLM 43.33 20.21 40.51 BERTSUMABS (Liu & Lapata, 2019) 41.72 19.39 38.76 38.76 16.33 31.15 BERTSUMEXTABS (Liu & Lapata, 2019) 42.13 19.60 39.18 38.81 16.50 31.27 ROBERTASHARE (Rothe et al., 2019) 40.31 18.91 37.62 41.45 18.79 33.90 BART 44.16 21.28 40.90 45.14 22.27 37.25 Table 4: Results on two standard summarization datasets. BART outperforms previous work on summarization on both tasks and all metrics, including those based on large-scale pre-training. XSum Prefer BART Prefer Baseline vs. BERTSUMEXTABS 73% 27% vs. Reference 33% 67% Table 5: Human Evaluation on XSum: BART summaries are preferred to those from previous work, but not to human-written reference summaries. ConvAI2 Valid F1 Valid PPL Seq2Seq + Attention 16.02 35.07 Best System 2 19.09 17.51 BART 20.72 11.85 Table 6: BART outperforms previous work on conversational response generation. Perplexities are renormalized based on official tokenizer for ConvAI2. Experiment results are presented in Table 8. We compare our results against a baseline Transformer architecture (Vaswani et al., 2017) with Transformerlarge settings (the baseline row). We show the performance of both steps of our model in the fixed BART and tuned BART rows. For each row we experiment on the original WMT16 Romanian-English augmented with back-translation data. We use a beam width of 5 and a length penalty of α = 1. Preliminary results suggested that our approach was less effective without back-translation data, and prone to overfitting—future work should explore additional regularization techniques. 6 Qualitative Analysis BART shows large improvements on summarization metrics, of up to 3.5 points over the prior state-of-theart. To understand BART’s performance beyond automated metrics, we analyse its generations qualitatively. 
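The decoding configuration described in Section 5.3 (beam size 5, duplicated trigrams removed, and length settings tuned on validation data) corresponds roughly to the sketch below, shown with the Hugging Face port of BART. The checkpoint name and the specific length and penalty values are placeholders rather than the values tuned in the paper.

```python
# Illustrative decoding setup: beam search with trigram blocking and length controls.
# Checkpoint name and min/max length / length penalty are placeholders, not the tuned values.
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-large-cnn"           # public summarization checkpoint
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

article = "PG&E scheduled the blackouts in response to forecasts for high winds."
inputs = tokenizer(article, return_tensors="pt", truncation=True)

summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=5,                 # beam size used in the paper
    no_repeat_ngram_size=3,      # remove duplicated trigrams during beam search
    min_length=10,               # placeholder; tuned on the validation set in the paper
    max_length=60,               # placeholder; tuned on the validation set in the paper
    length_penalty=2.0,          # placeholder; tuned on the validation set in the paper
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```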
Table 9 shows representative example summaries generated by BART, illustrating its main strengths and ELI5 R1 R2 RL Best Extractive 23.5 3.1 17.5 Language Model 27.8 4.7 23.1 Seq2Seq 28.3 5.1 22.8 Seq2Seq Multitask 28.9 5.4 23.1 BART 30.6 6.2 24.3 Table 7: BART achieves state-of-the-art results on the challenging ELI5 abstractive question answering dataset. Comparison models are from Fan et al. (2019). RO-EN Baseline 36.80 Fixed BART 36.29 Tuned BART 37.96 Table 8: BLEU scores of the baseline and BART on WMT’16 RO-EN augmented with back-translation data. BART improves over a strong back-translation baseline by using monolingual English pre-training. weaknesses. Examples are taken from WikiNews articles published after the creation of the pre-training corpus, to eliminate the possibility of the events described being present in the model’s training data. Following Narayan et al. (2018), we remove the first sentence of the article prior to summarizing it, so there is no easy extractive summary of the document. Unsurprisingly, model output is fluent and grammatical English. However, outputs are also highly abstractive, with few copied phrases. Summaries are generally factually accurate, and integrate supporting evidence from across the input document with background knowledge (for example, correctly completing names, or inferring that PG&E operates in California). In the first example, inferring that fish are protecting reefs from some effects of global warming requires nontrivial inference. However, the claim that the work was published in Science is not supported by the source— 7878 Source Document (abbreviated) BART Summary The researchers examined three types of coral in reefs off the coast of Fiji ... The researchers found when fish were plentiful, they would eat algae and seaweed off the corals, which appeared to leave them more resistant to the bacterium Vibrio coralliilyticus, a bacterium associated with bleaching. The researchers suggested the algae, like warming temperatures, might render the corals’ chemical defenses less effective, and the fish were protecting the coral by removing the algae. Fisheries off the coast of Fiji are protecting coral reefs from the effects of global warming, according to a study in the journal Science. Sacoolas, who has immunity as a diplomat’s wife, was involved in a traffic collision ... Prime Minister Johnson was questioned about the case while speaking to the press at a hospital in Watford. He said, “I hope that Anne Sacoolas will come back ... if we can’t resolve it then of course I will be raising it myself personally with the White House.” Boris Johnson has said he will raise the issue of US diplomat Anne Sacoolas’ diplomatic immunity with the White House. According to Syrian state media, government forces began deploying into previously SDF controlled territory yesterday. ... On October 6, US President Donald Trump and Turkish President Recep Tayyip Erdoan spoke on the phone. Then both nations issued statements speaking of an imminent incursion into northeast Syria ... . On Wednesday, Turkey began a military offensive with airstrikes followed by a ground invasion. Syrian government forces have entered territory held by the US-backed Syrian Democratic Forces (SDF) in response to Turkey’s incursion into the region. This is the first time anyone has been recorded to run a full marathon of 42.195 kilometers (approximately 26 miles) under this pursued landmark time. 
It was not, however, an officially sanctioned world record, as it was not an ”open race” of the IAAF. His time was 1 hour 59 minutes 40.2 seconds. Kipchoge ran in Vienna, Austria. It was an event specifically designed to help Kipchoge break the two hour barrier. Kenyan runner Eliud Kipchoge has run a marathon in less than two hours. PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow. Power has been turned off to millions of customers in California as part of a power shutoff plan. Table 9: Example summaries from the XSum-tuned BART model on WikiNews articles. For clarity, only relevant excerpts of the source are shown. Summaries combine information from across the article and prior knowledge. and, in general, the main limitation of the model is a tendency to hallucinate unsupported information. These samples demonstrate that the BART pretraining has learned a strong combination of natural language understanding and generation. 7 Related Work Early methods for pretraining were based on language models. GPT (Radford et al., 2018) only models leftward context, which is problematic for some tasks. ELMo (Peters et al., 2018) concatenates left-only and right-only representations, but does not pre-train interactions between these features. Radford et al. (2019) demonstrated that very large language models can act as unsupervised multitask models. BERT (Devlin et al., 2019) introduced masked language modelling, which allows pre-training to learn interactions between left and right context words. Recent work has shown that very strong performance can be achieved by training for longer (Liu et al., 2019), by tying parameters across layers (Lan et al., 2019), and by masking spans instead of words (Joshi et al., 2019). Predictions are not made auto-regressively, reducing the effectiveness of BERT for generation tasks. UniLM (Dong et al., 2019) fine-tunes BERT with an ensemble of masks, some of which allow only leftward context. Like BART, this allows UniLM to be used for both generative and discriminative tasks. A difference is that UniLM predictions are conditionally independent, whereas BART’s are autoregressive. BART reduces the mismatch between pre-training and generation tasks, because the decoder is always trained on uncorrupted context. MASS (Song et al., 2019) is perhaps the most similar 7879 model to BART. An input sequence where a contiguous span of tokens is masked is mapped to a sequence consisting of the missing tokens. BART differs in masking more but shorter spans from the input, and in always predicting the complete output. Table 1 shows that in a controlled comparison, BART’s pre-training objective outperforms MASS on five out of six tasks. XL-Net (Yang et al., 2019) extends BERT by predicting masked tokens auto-regressively in a permuted order. This objective allows predictions to condition on both left and right context. In contrast, the BART decoder works left-to-right during pre-training, matching the setting during generation. Concurrently, Raffel et al. (2019) pre-trained a denoising sequence-to-sequence model named T5, experimenting with a similar range of noising tasks. BART uses a slightly different objective, in which spans are masked from the input but the complete output is predicted, to improve the decoder’s language modelling ability. 
BART achieves higher performance with similar model sizes, particularly on summarization. T5 demonstrates that by scaling to very large models sizes, denoising sequence-to-sequence pre-training can achieve new state-of-the-art results on many tasks. Several papers have explored using pre-trained representations to improve machine translation. The largest improvements have come from pre-training on both source and target languages (Song et al., 2019; Lample & Conneau, 2019), but this requires pretraining on all languages of interest. Other work has shown that encoders can be improved using pre-trained representations (Edunov et al., 2019), but gains in decoders are more limited. We show how BART can be used to improve machine translation decoders. 8 Conclusions We introduced BART, a pre-training approach that learns to map corrupted documents to the original. BART performs comparably to RoBERTa on discriminative tasks, and achieves new state-of-the-art results on several text generation tasks. Future work should explore new methods for corrupting documents for pretraining, perhaps tailoring them to specific end tasks. References Eneko Agirre, Llu’is M‘arquez, and Richard Wicentowski (eds.). Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval2007). Association for Computational Linguistics, Prague, Czech Republic, June 2007. Ido Dagan, Oren Glickman, and Bernardo Magnini. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pp. 177– 190. Springer, 2006. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171– 4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/ v1/N19-1423. URL https://www.aclweb. org/anthology/N19-1423. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098, 2019. William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing, 2005. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model pretraining for natural language understanding and generation. arXiv preprint arXiv:1905.03197, 2019. Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019. Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217, 2017. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. 
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in neural information processing systems, pp. 1693–1701, 2015. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529, 2019. Guillaume Lample and Alexis Conneau. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019. 7880 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. Hector J Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, pp. 47, 2011. Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. Shashi Narayan, Shay B Cohen, and Mirella Lapata. Don’t give me the details, just the summary! topicaware convolutional neural networks for extreme summarization. arXiv preprint arXiv:1808.08745, 2018. Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. Leveraging pre-trained checkpoints for sequence generation tasks. arXiv preprint arXiv:1907.12461, 2019. Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368, 2017. Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, 2016. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 
Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pp. 1631–1642, 2013. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. Mass: Masked sequence to sequence pretraining for language generation. In International Conference on Machine Learning, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008, 2017. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural network acceptability judgments. arXiv preprint 1805.12471, 2018. Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7881 BLEURT: Learning Robust Metrics for Text Generation Thibault Sellam Dipanjan Das Ankur P. Parikh Google Research New York, NY {tsellam, dipanjand, aparikh }@google.com Abstract Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-ofthe-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution. 1 Introduction In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm (Sutskever et al., 2014; Bahdanau et al., 2015) which can tackle a wide array of tasks including translation (Koehn, 2009), summarization (Mani, 1999; Chopra et al., 2016), structureddata-to-text generation (McKeown, 1992; Kukich, 1983; Wiseman et al., 2017) dialog (Smith and Hipp, 1994; Vinyals and Le, 2015) and image captioning (Fang et al., 2015). However, progress is increasingly impeded by the shortcomings of existing metrics (Wiseman et al., 2017; Ma et al., 2019; Tian et al., 2019). Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, referencebased metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment. The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018). Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM (Ma et al., 2018, 2019). Current approaches largely fall into two categories. 
Fully learned metrics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., as token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Fur7882 thermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammar, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed. And indeed, the IID assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate. Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce BLEURT,1 a text generation metric based on BERT (Devlin et al., 2019). A key ingredient of BLEURT is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals. To demonstrate our approach, we train BLEURT for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 (Gardent et al., 2017). Ablations show that our synthetic pretraining scheme increases performance in the IID setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain. The code and pre-trained models are available online2. 1Bilingual Evaluation Understudy with Representations from Transformers. We refer the intrigued reader to Papineni et al. 2002 for a justification of the term understudy. 2http://github.com/google-research/ bleurt 2 Preliminaries Define x = (x1, .., xr) to be the reference sentence of length r where each xi is a token and let ˜x = (˜x1, .., ˜xp) be a prediction sentence of length p. Let {(xi, ˜xi, yi)}N n=1 be a training dataset of size N where yi ∈R is the human rating that indicates how good ˜xi is with respect to xi. Given the training data, our goal is to learn a function f : (x, ˜x) →y that predicts the human rating. 3 Fine-Tuning BERT for Quality Evaluation Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. 
In our model, we use BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019), which is an unsupervised technique that learns contextualized representations of sequences of text. Given $x$ and $\tilde{x}$, BERT is a Transformer (Vaswani et al., 2017) that returns a sequence of contextualized vectors: $v_{[CLS]}, v_{x_1}, \ldots, v_{x_r}, v_{\tilde{x}_1}, \ldots, v_{\tilde{x}_p} = \mathrm{BERT}(x, \tilde{x})$, where $v_{[CLS]}$ is the representation for the special [CLS] token. As described by Devlin et al. (2019), we add a linear layer on top of the [CLS] vector to predict the rating: $\hat{y} = f(x, \tilde{x}) = W \tilde{v}_{[CLS]} + b$, where $W$ and $b$ are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e., fine-tuned) on the supervised data, which typically numbers in a few thousand examples. We use the regression loss $\ell_{\mathrm{supervised}} = \frac{1}{N} \sum_{n=1}^{N} \| y_n - \hat{y}_n \|^2$. Although this approach is quite straightforward, we will show in Section 5 that it gives state-of-the-art results on the WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of IID data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.
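A minimal sketch of this fine-tuning setup is shown below: BERT encodes the (reference, candidate) pair, and a linear layer on the [CLS] vector predicts the rating with a squared-error loss. It assumes the Hugging Face BERT implementation with an illustrative checkpoint and toy batch; it is not the released BLEURT code.

```python
# Minimal sketch of the BERT-based rating regressor of Section 3 (not the BLEURT release).
# Checkpoint name and batch contents are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertRater(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)   # W, b in the paper

    def forward(self, **encoded):
        v_cls = self.bert(**encoded).last_hidden_state[:, 0]     # [CLS] vector
        return self.head(v_cls).squeeze(-1)                      # predicted rating

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertRater()

references = ["the cat sat on the mat"]
candidates = ["a cat was sitting on the mat"]
ratings = torch.tensor([0.8])                                    # human score y

encoded = tokenizer(references, candidates, padding=True, return_tensors="pt")
pred = model(**encoded)
loss = nn.functional.mse_loss(pred, ratings)                     # squared-error regression loss
loss.backward()
print(float(loss))
```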
4 Pre-Training on Synthetic Data The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data.3 (3To clarify, our pre-training scheme is an addition to, not a replacement of, BERT’s initial training (Devlin et al., 2019), and happens after it.) We generate a large number of synthetic reference-candidate pairs $(z, \tilde{z})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, BLEURT generalizes much better after this phase, especially with incomplete training data. Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that BLEURT can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that BLEURT can learn to identify them. The following sections present our approach. 4.1 Generating Sentence Pairs One way to expose BLEURT to a wide variety of sentence differences is to use existing sentence pairs datasets (Bowman et al., 2015; Williams et al., 2018; Wang et al., 2019). These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(z, \tilde{z})$ by randomly perturbing 1.8 million segments $z$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{z}$. Let us describe those techniques. Mask-filling with BERT: BERT’s initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model. Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix. Backtranslation: We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model (Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2013; Sennrich et al., 2016). Our primary aim is to create variants of the reference sentence that preserve semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations. Dropping words: We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares BLEURT for “pathological” behaviors of NLG systems, e.g., void predictions, or sentence truncation. 4.2 Pre-Training Signals The next step is to augment each sentence pair $(z, \tilde{z})$ with a set of pre-training signals $\{\tau_k\}$, where $\tau_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table 1. Additional implementation details are in the Appendix.
Task Type | Pre-training Signals | Loss Type
BLEU | τBLEU | Regression
ROUGE | τROUGE = (τROUGE-P, τROUGE-R, τROUGE-F) | Regression
BERTscore | τBERTscore = (τBERTscore-P, τBERTscore-R, τBERTscore-F) | Regression
Backtrans. likelihood | τen-fr,z|˜z, τen-fr,˜z|z, τen-de,z|˜z, τen-de,˜z|z | Regression
Entailment | τentail = (τEntail, τContradict, τNeutral) | Multiclass
Backtrans. flag | τbacktran flag | Multiclass
Table 1: Our pre-training signals.
Automatic Metrics: We create three signals τBLEU, τROUGE, and τBERTscore with sentence BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTscore (Zhang et al., 2020) respectively (we use precision, recall and F-score for the latter two).
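As a sketch of how a synthetic pair $(z, \tilde{z})$ could be annotated with the signals of Table 1, the snippet below computes sentence BLEU with NLTK (an assumed implementation choice, since the paper does not specify one) and leaves placeholders for the signals that require additional models; the helper name and example sentences are made up.

```python
# Sketch of annotating a synthetic pair with (some of) the Table 1 signals.
# Only sentence BLEU is computed (via NLTK); the ROUGE, BERTscore, backtranslation,
# and entailment signals would be filled in analogously with the corresponding models.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def pretraining_signals(z: str, z_tilde: str, from_backtranslation: bool) -> dict:
    ref, hyp = z.split(), z_tilde.split()
    smooth = SmoothingFunction().method1
    return {
        "tau_bleu": sentence_bleu([ref], hyp, smoothing_function=smooth),
        "tau_rouge": None,                 # (P, R, F) from a ROUGE implementation
        "tau_bertscore": None,             # (P, R, F) from BERTscore
        "tau_backtrans_likelihood": None,  # length-normalized log P, four signals
        "tau_entail": None,                # (Entail, Contradict, Neutral) from an MNLI model
        "tau_backtran_flag": float(from_backtranslation),
    }

print(pretraining_signals(
    "the festival was held in spite of heavy rain",
    "the festival took place despite the heavy rain",
    from_backtranslation=True,
))
```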
If $|\tilde{z}|$ is the number of tokens in z̃, we define our score as $\tau_{en\text{-}fr,\tilde{z}|z} = \frac{\log P(\tilde{z}|z)}{|\tilde{z}|}$, with:

$P(\tilde{z}|z) = \sum_{z_{fr}} P_{fr\rightarrow en}(\tilde{z}|z_{fr}) \, P_{en\rightarrow fr}(z_{fr}|z)$

Because computing the summation over all possible French sentences is intractable, we approximate the sum using $z^*_{fr} = \arg\max_{z_{fr}} P_{en\rightarrow fr}(z_{fr}|z)$ and we assume that $P_{en\rightarrow fr}(z^*_{fr}|z) \approx 1$:

$P(\tilde{z}|z) \approx P_{fr\rightarrow en}(\tilde{z}|z^*_{fr})$

We can trivially reverse the procedure to compute P(z|z̃); thus we create 4 pre-training signals $\tau_{en\text{-}fr,z|\tilde{z}}$, $\tau_{en\text{-}fr,\tilde{z}|z}$, $\tau_{en\text{-}de,z|\tilde{z}}$, $\tau_{en\text{-}de,\tilde{z}|z}$ with two pairs of languages (en↔de and en↔fr) in both directions.

Textual Entailment: The signal τ_entail expresses whether z entails or contradicts z̃ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI (Devlin et al., 2019; Williams et al., 2018).

Backtranslation flag: The signal τ_backtran_flag is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling.

4.3 Modeling

For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum. Let τ_k describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, Neutral, or the precision, recall, and F-score for ROUGE. If τ_k is a regression task, then the loss used is the ℓ2 loss, i.e., $\ell_k = \lVert \tau_k - \hat{\tau}_k \rVert_2^2 / |\tau_k|$, where $|\tau_k|$ is the dimension of τ_k and $\hat{\tau}_k$ is computed by using a task-specific linear layer on top of the [CLS] embedding: $\hat{\tau}_k = W_{\tau_k} \tilde{v}_{[\mathrm{CLS}]} + b_{\tau_k}$. If τ_k is a classification task, we use a separate linear layer to predict a logit for each class c: $\hat{\tau}_{kc} = W_{\tau_{kc}} \tilde{v}_{[\mathrm{CLS}]} + b_{\tau_{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as follows:

$\ell_{\text{pre-training}} = \frac{1}{M} \sum_{m=1}^{M} \sum_{k=1}^{K} \gamma_k \, \ell_k(\tau_k^m, \hat{\tau}_k^m) \quad (1)$

where $\tau_k^m$ is the target vector for example m, M is the number of synthetic examples, and $\gamma_k$ are hyperparameter weights obtained with grid search (more details in the Appendix).

5 Experiments

In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark BLEURT against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task (Bojar et al., 2017). We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test BLEURT’s ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset (Gardent et al., 2017). Finally, we measure the contribution of each pre-training task with ablation experiments.

Our Models: Unless specified otherwise, all BLEURT models are trained in three steps: regular BERT pre-training (Devlin et al., 2019), pre-training on synthetic data (as explained in Section 4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of BLEURT, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) (Devlin et al., 2019), both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix.
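The aggregate objective in Eq. (1) is a weighted sum of per-task losses computed from task-specific linear heads on the [CLS] vector. The sketch below illustrates that structure; the task names, output dimensions, and weights are placeholders rather than the tuned values described in the Appendix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder task inventory; dimensions roughly follow Table 1 (3 ROUGE scores,
# 4 backtranslation likelihoods, 3 entailment classes). The gamma weights are
# not the grid-searched values.
REGRESSION_TASKS = {"bleu": 1, "rouge": 3, "bertscore": 3, "backtrans_likelihood": 4}
CLASSIFICATION_TASKS = {"entailment": 3, "backtrans_flag": 2}

class PretrainingHeads(nn.Module):
    def __init__(self, hidden_size=768):
        super().__init__()
        dims = {**REGRESSION_TASKS, **CLASSIFICATION_TASKS}
        self.heads = nn.ModuleDict({k: nn.Linear(hidden_size, d) for k, d in dims.items()})

    def forward(self, v_cls):
        # One task-specific linear layer per signal, all on top of the [CLS] vector.
        return {k: head(v_cls) for k, head in self.heads.items()}

def pretraining_loss(predictions, targets, gamma):
    total = torch.zeros(())
    for k in REGRESSION_TASKS:
        # l_k = ||tau_k - tau_hat_k||^2 / |tau_k|, averaged over the batch
        total = total + gamma[k] * torch.mean((predictions[k] - targets[k]) ** 2)
    for k in CLASSIFICATION_TASKS:
        # cross-entropy against the target class distribution (soft or one-hot)
        log_probs = F.log_softmax(predictions[k], dim=-1)
        total = total + gamma[k] * (-(targets[k] * log_probs).sum(-1)).mean()
    return total
```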
7885 model cs-en de-en fi-en lv-en ru-en tr-en zh-en avg τ / r τ / r τ / r τ / r τ / r τ / r τ / r τ / r sentBLEU 29.6 / 43.2 28.9 / 42.2 38.6 / 56.0 23.9 / 38.2 34.3 / 47.7 34.3 / 54.0 37.4 / 51.3 32.4 / 47.5 MoverScore 47.6 / 67.0 51.2 / 70.8 NA NA 53.4 / 73.8 56.1 / 76.2 53.1 / 74.4 52.3 / 72.4 BERTscore w/ BERT 48.0 / 66.6 50.3 / 70.1 61.4 / 81.4 51.6 / 72.3 53.7 / 73.0 55.6 / 76.0 52.2 / 73.1 53.3 / 73.2 BERTscore w/ roBERTa 54.2 / 72.6 56.9 / 76.0 64.8 / 83.2 56.2 / 75.7 57.2 / 75.2 57.9 / 76.1 58.8 / 78.9 58.0 / 76.8 chrF++ 35.0 / 52.3 36.5 / 53.4 47.5 / 67.8 33.3 / 52.0 41.5 / 58.8 43.2 / 61.4 40.5 / 59.3 39.6 / 57.9 BEER 34.0 / 51.1 36.1 / 53.0 48.3 / 68.1 32.8 / 51.5 40.2 / 57.7 42.8 / 60.0 39.5 / 58.2 39.1 / 57.1 BLEURTbase -pre 51.5 / 68.2 52.0 / 70.7 66.6 / 85.1 60.8 / 80.5 57.5 / 77.7 56.9 / 76.0 52.1 / 72.1 56.8 / 75.8 BLEURTbase 55.7 / 73.4 56.3 / 75.7 68.0 / 86.8 64.7 / 83.3 60.1 / 80.1 62.4 / 81.7 59.5 / 80.5 61.0 / 80.2 BLEURT -pre 56.0 / 74.7 57.1 / 75.7 67.2 / 86.1 62.3 / 81.7 58.4 / 78.3 61.6 / 81.4 55.9 / 76.5 59.8 / 79.2 BLEURT 59.3 / 77.3 59.9 / 79.2 69.5 / 87.8 64.4 / 83.5 61.3 / 81.1 62.9 / 82.4 60.2 / 81.4 62.5 / 81.8 Table 2: Agreement with human ratings on the WMT17 Metrics Shared Task. The metrics are Kendall Tau (τ) and the Pearson correlation (r, the official metric of the shared task), divided by 100. model cs-en de-en et-en fi-en ru-en tr-en zh-en avg τ / DA τ / DA τ / DA τ / DA τ / DA τ / DA τ / DA τ / DA sentBLEU 20.0 / 22.5 31.6 / 41.5 26.0 / 28.2 17.1 / 15.6 20.5 / 22.4 22.9 / 13.6 21.6 / 17.6 22.8 / 23.2 BERTscore w/ BERT 29.5 / 40.0 39.9 / 53.8 34.7 / 39.0 26.0 / 29.7 27.8 / 34.7 31.7 / 27.5 27.5 / 25.2 31.0 / 35.7 BERTscore w/ roBERTa 31.2 / 41.1 42.2 / 55.5 37.0 / 40.3 27.8 / 30.8 30.2 / 35.4 32.8 / 30.2 29.2 / 26.3 32.9 / 37.1 Meteor++ 22.4 / 26.8 34.7 / 45.7 29.7 / 32.9 21.6 / 20.6 22.8 / 25.3 27.3 / 20.4 23.6 / 17.5* 26.0 / 27.0 RUSE 27.0 / 34.5 36.1 / 49.8 32.9 / 36.8 25.5 / 27.5 25.0 / 31.1 29.1 / 25.9 24.6 / 21.5* 28.6 / 32.4 YiSi1 23.5 / 31.7 35.5 / 48.8 30.2 / 35.1 21.5 / 23.1 23.3 / 30.0 26.8 / 23.4 23.1 / 20.9 26.3 / 30.4 YiSi1 SRL 18 23.3 / 31.5 34.3 / 48.3 29.8 / 34.5 21.2 / 23.7 22.6 / 30.6 26.1 / 23.3 22.9 / 20.7 25.7 / 30.4 BLEURTbase -pre 33.0 / 39.0 41.5 / 54.6 38.2 / 39.6 30.7 / 31.1 30.7 / 34.9 32.9 / 29.8 28.3 / 25.6 33.6 / 36.4 BLEURTbase 34.5 / 42.9 43.5 / 55.6 39.2 / 40.5 31.5 / 30.9 31.0 / 35.7 35.0 / 29.4 29.6 / 26.9 34.9 / 37.4 BLEURT -pre 34.5 / 42.1 42.7 / 55.4 39.2 / 40.6 31.4 / 31.6 31.4 / 34.2 33.4 / 29.3 28.9 / 25.6 34.5 / 37.0 BLEURT 35.6 / 42.3 44.2 / 56.7 40.0 / 41.4 32.1 / 32.5 31.9 / 36.0 35.5 / 31.5 29.7 / 26.0 35.6 / 38.1 Table 3: Agreement with human ratings on the WMT18 Metrics Shared Task. The metrics are Kendall Tau (τ) and WMT’s Direct Assessment metrics divided by 100. The star * indicates results that are more than 0.2 percentage points away from the official WMT results (up to 0.4 percentage points away). . 5.1 WMT Metrics Shared Task Datasets and Metrics: We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations. We evaluate the agreement between the automatic metrics and the human ratings. 
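For reference, segment-level agreement of the kind reported in Tables 2-4 can be computed with off-the-shelf statistics routines. The snippet below is purely illustrative and is not the official WMT evaluation pipeline.

```python
from scipy.stats import kendalltau, pearsonr

def agreement(metric_scores, human_ratings):
    """Kendall's Tau and Pearson's r between a candidate metric and human ratings."""
    tau, _ = kendalltau(metric_scores, human_ratings)
    r, _ = pearsonr(metric_scores, human_ratings)
    return tau, r

# Example with dummy values
print(agreement([0.1, 0.4, 0.35, 0.8], [1.0, 2.0, 1.5, 3.0]))
```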
For each year, we report two metrics: Kendall’s Tau τ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson’s correlation or a robust variant of Kendall’s Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark.4 Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables. 4The official scripts are public but they suffer from documentation and dependency issues, as shown by a README file in the 2019 edition which explicitly discourages using them. Models: We experiment with four versions of BLEURT: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare BLEURT to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL (Mathur et al., 2019). All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore (Zhang et al., 2020), and MoverScore (Zhao et al., 2019). For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness (Liu et al., 2019). We run MoverScore on WMT 2017 using the scripts published by the authors. Results: Tables 2, 3, 4 show the results. For years 2017 and 2018, a BLEURT-based metric 7886 model de-en fi-en gu-en kk-en lt-en ru-en zh-en avg τ / DA τ / DA τ / DA τ / DA τ / DA τ / DA τ / DA τ / DA sentBLEU 19.4 / 5.4 20.6 / 23.3 17.3 / 18.9 30.0 / 37.6 23.8 / 26.2 19.4 / 12.4 28.7 / 32.2 22.7 / 22.3 BERTscore w/ BERT 26.2 / 17.3 27.6 / 34.7 25.8 / 29.3 36.9 / 44.0 30.8 / 37.4 25.2 / 20.6 37.5 / 41.4 30.0 / 32.1 BERTscore w/ roBERTa 29.1 / 19.3 29.7 / 35.3 27.7 / 32.4 37.1 / 43.1 32.6 / 38.2 26.3 / 22.7 41.4 / 43.8 32.0 / 33.6 ESIM 28.4 / 16.6 28.9 / 33.7 27.1 / 30.4 38.4 / 43.3 33.2 / 35.9 26.6 / 19.9 38.7 / 39.6 31.6 / 31.3 YiSi1 SRL 19 26.3 / 19.8 27.8 / 34.6 26.6 / 30.6 36.9 / 44.1 30.9 / 38.0 25.3 / 22.0 38.9 / 43.1 30.4 / 33.2 BLEURTbase -pre 30.1 / 15.8 30.4 / 35.4 26.8 / 29.7 37.8 / 41.8 34.2 / 39.0 27.0 / 20.7 40.1 / 39.8 32.3 / 31.7 BLEURTbase 31.0 / 16.6 31.3 / 36.2 27.9 / 30.6 39.5 / 44.6 35.2 / 39.4 28.5 / 21.5 41.7 / 41.6 33.6 / 32.9 BLEURT -pre 31.1 / 16.9 31.3 / 36.5 27.6 / 31.3 38.4 / 42.8 35.0 / 40.0 27.5 / 21.4 41.6 / 41.4 33.2 / 32.9 BLEURT 31.2 / 16.9 31.7 / 36.3 28.3 / 31.9 39.5 / 44.6 35.2 / 40.6 28.3 / 22.3 42.7 / 42.4 33.8 / 33.6 Table 4: Agreement with human ratings on the WMT19 Metrics Shared Task. The metrics are Kendall Tau (τ) and WMT’s Direct Assessment metrics divided by 100. All the values reported for Yisi1 SRL and ESIM fall within 0.2 percentage of the official WMT results. 0.00 0.25 0.50 0.75 1.00 −2 −1 0 1 Ratings Density (rescaled) Dataset Test Train/Validation Skew factor 0 0.5 1.0 1.5 3.0 Figure 1: Distribution of the human ratings in the train/validation and test datasets for different skew factors. dominates the benchmark for each language pair (Tables 2 and 3). 
BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for all language pairs on Kendall’s Tau, and they come first for 3 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general, pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without.

Takeaways: Pre-training delivers consistent improvements, especially for BLEURT-base. BLEURT yields state-of-the-art performance for all years of the WMT Metrics Shared Task.

5.2 Robustness to Quality Drift

We assess our claim that pre-training makes BLEURT robust to quality drifts by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.5

Figure 2: Agreement between BLEURT and human ratings for different skew factors in train and test.

Methodology: We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics Shared Task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor α, which measures how much the training data is left-skewed and the test data is right-skewed. Figure 1 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as α increases: in the most extreme case (α = 3.0), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.

We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore.

5The organizers managed to collect 15 adequacy scores for each translation, and thus the ratings are almost perfectly repeatable (Bojar et al., 2017).

Figure 3: Absolute Kendall Tau of BLEU, Meteor, and BLEURT with human judgements on the WebNLG dataset, varying the size of the data used for training and validation.

Results: Figure 2 presents BLEURT’s performance as we vary the train and test skew independently. Our first observation is that the agreements fall for all metrics as we increase the test skew.
This effect was already described is the 2019 WMT Metrics report (Ma et al., 2019). A common explanation is that the task gets more difficult as the ratings get closer—it is easier to discriminate between “good” and “bad” systems than to rank “good” systems. Training skew has a disastrous effect on BLEURT without pre-training: it is below BERTscore for α = 1.0, and it falls under sentBLEU for α ≥1.5. Pre-trained BLEURT is much more robust: the only case in which it falls under the baselines is α = 3.0, the most extreme drift, for which incorrect translations are used for train while excellent ones for test. Takeaways: Pre-training makes BLEURT significantly more robust to quality drifts. 5.3 WebNLG Experiments In this section, we evaluate BLEURT’s performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 (Shimorina et al., 2019). The aim is to assess BLEURT’s capacity to adapt to new tasks with limited training data. Dataset and Evaluation Tasks: The WebNLG challenge benchmarks systems that produce natural language description of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split on both the evaluated systems or the RDF inputs in order to test different generalization regimes. Systems and Baselines: BLEURT -pre -wmt, is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmtwas first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value (Zhang et al., 2020). We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERTlarge uncased for a fair comparison. Results: Figure 3 presents the correlation of the metrics with human assessments as we vary the share of data allocated to training. The more pretrained BLEURT is, the quicker it adapts. The vanilla BERT approach BLEURT -pre -wmt requires about a third of the WebNLG data to dominate the baselines on the majority of tasks, and it still lags behind on semantics (split by system). In 7888 1 task 0%: no pre−training N−1 tasks 0%: all pre−training tasks BERTscore entail backtrans method_flag BLEU ROUGE −BERTscore −entail −backtrans −method_flag −BLEU −ROUGE −15 −10 −5 0 5 Pretraining Task Relative Improv./Degradation (%) BLEURT BLEURTbase Figure 4: Improvement in Kendall Tau on WMT 17 varying the pre-training tasks. contrast, BLEURT -wmt is competitive with as little as 836 records, and BLEURT is comparable with BERTscore with zero fine-tuning. Takeaways: Thanks to pre-training, BLEURT can quickly adapt to the new tasks. BLEURT finetuned twice (first on synthetic data, then on WMT data) provides acceptable results on all tasks without training data. 
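The multi-reference rule used in these experiments (run the metric on each available reference and report the highest value) is simple to express; a minimal sketch, with score_fn standing in for any segment-level metric such as BLEURT or BERTscore:

```python
def multi_reference_score(score_fn, candidate, references):
    # Report the highest value across references (following Zhang et al., 2020).
    return max(score_fn(reference, candidate) for reference in references)
```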
5.4 Ablation Experiments Figure 4 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare BLEURT pre-trained on a single task to BLEURT without pre-training. On the right side, we compare full BLEURT to BLEURT pretrained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yield improvements (symmetrically, ablating them degrades BLEURT). Oppositely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.6 6 Related Work The WMT shared metrics competition (Bojar et al., 2016; Ma et al., 2018, 2019) has inspired 6Do those results imply that BLEU and ROUGE should be removed from future versions of BLEURT? Doing so may indeed yield slight improvements on the WMT Metrics 2017 shared task. On the other hand the removal may hurt future tasks in which BLEU or ROUGE actually correlate with human assessments. We therefore leave the question open. the creation of many learned metrics, some of which use regression or deep learning (Stanojevic and Sima’an, 2014; Ma et al., 2017; Shimanaka et al., 2018; Chen et al., 2017; Mathur et al., 2019). Other metrics have been introduced, such as the recent MoverScore (Zhao et al., 2019) which combines contextual embeddings and Earth Mover’s Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy (Wiseman et al., 2017; Goodrich et al., 2019; Eyal et al., 2019). Those are complementary to our work. There has been recent work that uses BERT for evaluation. BERTScore (Zhang et al., 2020) proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr (Mathur et al., 2019) and YiSi (Mathur et al., 2019) also make use of BERT embeddings to capture similarity. SumQE (Xenouleas et al., 2019) fine-tunes BERT for quality estimation as we describe in Section 3. Our focus is different—we train metrics that are not only state-of-the-art in conventional IID experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pretraining and extrapolation in the context of NLG. Previous studies have used noising for referenceless evaluation (Duˇsek et al., 2019). Noisy pre-training has also been proposed before for other tasks such as paraphrasing (Wieting et al., 2016; Tomar et al., 2017) but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples (Jia and Liang, 2017; Iyyer et al., 2018; Belinkov and Bisk, 2018; Ribeiro et al., 2018), an orthogonal line of research. 7 Conclusion We presented BLEURT, a reference-based text generation metric for English. Because the metric is trained end-to-end, BLEURT can model human assessment with superior accuracy. Furthermore, pre-training makes the metrics robust particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers. 
7889 Acknowledgments Thanks to Eunsol Choi, Nicholas FitzGerald, Jacob Devlin, and to the members of the Google AI Language team for the proof-reading, feedback, and suggestions. We also thank Madhavan Kidambi and Ming-Wei Chang, who implemented blank-filling with BERT. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of ACL. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In Proceedings of ICLR. Ondˇrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the wmt17 metrics shared task. In Proceedings of WMT. Ondˇrej Bojar, Yvette Graham, Amir Kamran, and Miloˇs Stanojevi´c. 2016. Results of the wmt16 metrics shared task. In Proceedings of WMT. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. Proceedings of EMNLP. Arun Tejasvi Chaganty, Stephen Mussman, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. Proceedings of ACL. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. Proceedings of ACL. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of NAACL HLT. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL HLT. Ondˇrej Duˇsek, Karin Sevegnani, Ioannis Konstas, and Verena Rieser. 2019. Automatic quality estimation for natural language generation: Ranting (jointly rating and ranking). Proceedings of INLG. Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of NAACL HLT. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of CVPR. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings NAACL HLT. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of INLG. Ben Goodrich, Mohammad Ahmad Saleh, Peter Liu, and Vinay Rao. 2019. Assessing the factual accuracy of text generation. In Proceedings of ACM SIGKDD. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. Proceedings of NAACL HLT. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. Proceedings of EMNLP. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. Karen Kukich. 1983. Design of a knowledge-based report generator. In Proceedings of ACL. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. 
How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. Proceedings of EMNLP. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692. Qingsong Ma, Ondˇrej Bojar, and Yvette Graham. 2018. Results of the wmt18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of WMT. Qingsong Ma, Yvette Graham, Shugen Wang, and Qun Liu. 2017. Blend: a novel combined mt metric based on direct assessment–casict-dcu submission to wmt17 metrics task. In Proceedings of WMT. Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proceedings of WMT. 7890 Inderjeet Mani. 1999. Advances in automatic text summarization. MIT press. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting evaluation in context: Contextual embeddings improve machine translation evaluation. In Proceedings of ACL. Kathleen McKeown. 1992. Text generation. Cambridge University Press. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. Proceedings of EMNLP. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. Proceedings of ACL. Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. Ruse: Regressor using sentence embeddings for automatic machine translation evaluation. In Proceedings of WMT. Anastasia Shimorina, Claire Gardent, Shashi Narayan, and Laura Perez-Beltrachini. 2019. Webnlg challenge: Human evaluation results. Technical report. Ronnie W Smith and D Richard Hipp. 1994. Spoken natural language dialog systems: A practical approach. Oxford University Press. Milos Stanojevic and Khalil Sima’an. 2014. Beer: Better evaluation as ranking. In Proceedings of WMT. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Confident decoding for faithful data-to-text generation. arXiv:1910.08684. Gaurav Singh Tomar, Thyago Duque, Oscar T¨ackstr¨om, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural paraphrase identification of questions with noisy pretraining. Proceedings of the First Workshop on Subword and Character Level Models in NLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. Proceedings of ICML. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. Proceedings of ICLR. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. Proceedings of ICLR. 
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. Proceedings of NAACL HLT. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document generation. Proceedings of EMNLP. Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, and Ion Androutsopoulos. 2019. Sumqe: a bert-based summary quality estimation model supplementary material. In Proceedings of EMNLP. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. Proceedings of ICLR. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance. Proceedings of EMNLP. A Implementation Details of the Pre-Training Phase This section provides implementation details for some of the pre-training techniques described in the main paper. A.1 Data Generation Random Masking: We use two masking strategies. The first strategy samples random words in the sentence and it replaces them with masks (one for each token). Thus, the masks are scattered across the sentence. The second strategy creates contiguous sequences: it samples a start position s, a length l (uniformly distributed), and it masks all the tokens spanned by words between positions s and s + l. In both cases, we use up to 15 masks per sentence. Instead of running the language model once and picking the most likely token at each position, we use beam search (the beam size 8 by default). This enforces consistency and avoids repeated sequences, e.g., “,,,”. 7891 Backtranslation: Consider English and French. Given a forward translation model Pen→fr(zfr|zen) and backward translation model Pfr→en(zen|zfr), we generate ˜z as follows: ˜z = arg max zen (Pfr→en(zen|z∗ fr)) where z∗ fr = arg maxzfr (Pfr→en(zfr|z)). For the translations, we use a Transformer model (Vaswani et al., 2017), trained on EnglishGerman with the tensor2tensor framework.7 Word dropping: Given a synthetic example (z, ˜z) we generate a pair (z, ˜z′), by randomly dropping words from ˜z. We draw the number of words to drop uniformly, up to the length of the sentence. We apply this transformation on about 30% of the data generated with the previous method. A.2 Pre-Training Tasks We now provide additional details on the signals we used for pre-training. Automatic Metrics: As shown in the table, we use three types of signals: BLEU, ROUGE, and BERTscore. For BLEU, we used the original Moses SENTENCEBLEU8 implementation, using the Moses tokenizer and the default parameters. For ROUGE, we used the seq2seq implementation of ROUGE-N.9 We used a custom implementation of BERTSCORE, based on BERT-large uncased. ROUGE and BERTscore return three scores: precision, recall, and F-score. We use all three quantities. Backtranslation Likelihood: We compute all the losses using custom Transformer model (Vaswani et al., 2017), trained on two language pairs (English-French and EnglishGerman) with the tensor2tensor framework. Normalization: All the regression labels are normalized before training. 
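A minimal sketch of the perturbation operations described in A.1: scattered masks, contiguous masks, and word dropping. The mask-filling step itself (running the masked language model with beam search) is omitted; the 15-mask cap and the uniform drop count follow the description above, and everything else is an assumption of this snippet.

```python
import random

MASK = "[MASK]"

def scatter_masks(tokens, max_masks=15):
    # First strategy: replace randomly sampled positions with [MASK].
    k = random.randint(1, min(max_masks, len(tokens)))
    positions = set(random.sample(range(len(tokens)), k))
    return [MASK if i in positions else t for i, t in enumerate(tokens)]

def contiguous_masks(tokens, max_masks=15):
    # Second strategy: mask a contiguous span starting at a random position.
    length = random.randint(1, min(max_masks, len(tokens)))
    start = random.randint(0, len(tokens) - length)
    return [MASK if start <= i < start + length else t for i, t in enumerate(tokens)]

def drop_words(tokens):
    # Word dropping: the number of dropped tokens is uniform up to the sentence length.
    n_drop = random.randint(0, len(tokens))
    keep = set(random.sample(range(len(tokens)), len(tokens) - n_drop))
    return [t for i, t in enumerate(tokens) if i in keep]
```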
A.3 Modeling Setting the weights of the pre-training tasks: We set the weights γk with grid search, optimizing BLEURT’s performance on WMT 17’s 7https://github.com/tensorflow/ tensor2tensor 8https://github.com/moses-smt/ mosesdecoder/blob/master/mert/ sentence-bleu.cpp 9https://github.com/google/seq2seq/ blob/master/seq2seq/metrics/rouge.py validation set. To reduce the size of the grid, we make groups of pre-training tasks that share the same weights: (τBLEU, τROUGE, τBERTscore), (τen-fr,z|˜z, τen-fr,˜z|z, τen-de,z|˜z, τen-de,˜z|z), and (τentail, τbacktran flag). B Experiments–Supplementary Material B.1 Training Setup for All Experiments We user BERT’s public checkpoints10 with Adam (the default optimizer), learning rate 1e-5, and batch size 32. Unless specified otherwise, we use 800,00 training steps for pre-training and 40,000 steps for fine-tuning. We run training and evaluation in parallel: we run the evaluation every 1,500 steps and store the checkpoint that performs best on a held-out validation set (more details on the data splits and our choice of metrics in the following sections). We use Google Cloud TPUs v2 for learning, and Nvidia Tesla V100 accelerators for evaluation and test. Our code uses Tensorflow 1.15 and Python 2.7. B.2 WMT Metric Shared Task Metrics. The metrics used to compare the evaluation systems vary across the years. The organizers use Pearson’s correlation on standardized human judgments across all segments in 2017, and a custom variant of Kendall’s Tau named “DARR” on raw human judgments in 2018 and 2019. The latter metrics operates as follows. The organizers gather all the translations for the same reference segment, they enumerate all the possible pairs (translation1, translation2), and they discard all the pairs which have a “similar” score (less than 25 points away on a 100 points scale). For each remaining pair, they then determine which translation is the best according both human judgment and the candidate metric. Let |Concordant| be the number of pairs on which the NLG metrics agree and |Discordant| be those on which they disagree, then the score is computed as follows: |Concordant| −|Discordant| |Concordant| + |Discordant| The idea behind the 25 points filter is to make the evaluation more robust, since the judgments collected for WMT 2018 and 2019 are noisy. Kendall’s Tau is identical, but it does not use the filter. 10https://github.com/google-research/ bert 7892 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0 2 4 6 0 200 400 600 800 Number of Pretraining Steps (*1,000) Rel. Kendall Tau Improvement (%) ● ● BLEURT BLEURTbase Figure 5: Improvement in Kendall Tau accuracy on all language pairs of the WMT Metrics Shared Task 2017, varying the number of pre-training steps. 0 steps corresponds to 0.555 Kendall Tau for BLEURTbase and 0.580 for BLEURT. Training setup. To separate training and validation data, we set aside a fixed ratio of records in such a way that there is no “leak” between the datasets (i.e., train and validation records that share the same source). We use 10% of the data for validation for years 2017 and 2018, and 5% for year 2019. We report results for the models that yield the highest Kendall Tau across all records on validation data. The weights associated to each pretraining task (see our Modeling section) are set with grid search, using the train/validation setup of WMT 2017. Baselines. we use three metrics: the Moses implementation of sentenceBLEU,11 BERTscore,12 and MoverScore,13 which are all available online. 
We run the Moses tokenizer on the reference and candidate segments before computing sentenceBLEU. B.3 Robustness to Quality Drift Data Re-sampling Methodology: We sample the training and test separately, as follows. We split the data in 10 bins of equal size. We then sample each record in the dataset with probabilities 1 Bα and 1 (11−B)α for train and test respectively, where B is the bin index of the record between 1 and 10, and α is a predefined skew factor. The skew factor α controls the drift: a value of 0 has no effect (the ratings are centered around 0), and value of 3.0 yields extreme differences. Note that 11https://github.com/moses-smt/ mosesdecoder/blob/master/mert/ sentence-bleu.cpp 12https://github.com/Tiiiger/bert_score 13https://github.com/AIPHES/ emnlp19-moverscore the sizes of the datasets decrease as α increases: we use 50.7%, 30.3%, 20.4%, and 11.9% of the original 5,344 training records for α = 0.5, 1.0, 1.5, and 3.0 respectively. B.4 Ablation Experiment–How Much Pre-Training Time is Necessary? To understand the relationship between pretraining time and downstream accuracy, we pretrain several versions of BLEURT and we fine-tune them on WMT17 data, varying the number of pretraining steps. Figure 5 presents the results. Most gains are obtained during the first 400,000 steps, that is, after about 2 epochs over our synthetic dataset.
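A sketch of the re-sampling scheme in B.3: records are binned by rating and kept with probability 1/B^α for training and 1/(11−B)^α for test, where B is the bin index between 1 and 10. Binning by sorted rating and drawing both splits from a single pool are simplifying assumptions of this snippet; the paper samples the training and test sides separately.

```python
import random

def skewed_split(records, ratings, alpha, n_bins=10, seed=0):
    rng = random.Random(seed)
    order = sorted(range(len(records)), key=lambda i: ratings[i])
    bin_size = max(1, len(records) // n_bins)
    train, test = [], []
    for rank, i in enumerate(order):
        b = min(rank // bin_size, n_bins - 1) + 1                 # bin index B in 1..10
        if rng.random() < 1.0 / (b ** alpha):                     # low-rated bins favored for train
            train.append(records[i])
        if rng.random() < 1.0 / ((n_bins + 1 - b) ** alpha):      # high-rated bins favored for test
            test.append(records[i])
    return train, test
```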
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7893–7905 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7893 Distilling Knowledge Learned in BERT for Text Generation Yen-Chun Chen1, Zhe Gan1, Yu Cheng1, Jingzhou Liu2, Jingjing Liu1 1Microsoft Dynamics 365 AI Research 2Carnegie Mellon University {yen-chun.chen,zhe.gan,yu.cheng,jinjl}@microsoft.com; [email protected] Abstract Large-scale pre-trained language model such as BERT has achieved great success in language understanding tasks. However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT’s idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and EnglishVietnamese MT datasets.1 1 Introduction Large-scale pre-trained language model, such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2019), has become the de facto first encoding step for many natural language processing (NLP) tasks. For example, BERT, pre-trained with deep bidirectional Transformer (Vaswani et al., 2017) via masked language modeling and next sentence prediction, has revolutionized the state of the art in many language understanding tasks, such as natural language inference (Bowman et al., 2015) and question answering (Rajpurkar et al., 2016). 1Code is available at https://github.com/ChenRocks/DistillBERT-Textgen. However, beyond common practice of finetuning BERT for language understanding (Wang et al., 2019), applying BERT to language generation still remains an open question. Text generation aims to generate natural language sentences conditioned on certain input, with applications ranging from machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), text summarization (Nallapati et al., 2016; Gehring et al., 2017; Chen and Bansal, 2018), to image captioning (Vinyals et al., 2015; Xu et al., 2015; Gan et al., 2017). In this work, we study how to use BERT for better text generation, which is still a relatively unexplored territory. Intuitively, as BERT is learned with a generative objective via Masked Language Modeling (MLM) during the pre-training stage, a natural assumption is that this training objective should have learned essential, bidirectional, contextual knowledge that can help enhance text generation. Unfortunately, this MLM objective is not auto-regressive, which encumbers its direct application to auto-regressive text generation in practice. We tackle this challenge by proposing a novel and generalizable approach to distilling knowledge learned in BERT for text generation tasks. We first propose a new Conditional Masked Language Modeling (C-MLM) task, inspired by MLM but requiring additional conditional input, which enables finetuning pre-trained BERT on a target dataset. 
In order to extract knowledge from the finetuned BERT and apply it to a text generation model, we leverage the finetuned BERT as a teacher model that generates sequences of word probability logits for the training samples, and treat the text generation model as a student network, which can effectively learn from the teacher’s outputs for imitation. The proposed approach improves text generation by providing a good estimation on word probability distribution for each token in a sentence, consum7894 ing both the left and the right context, the exploitation of which encourages conventional text generation models to plan ahead. At inference time, the teacher model (BERT) is not required thus the decoding speed is as fast as the underlying student model. Text generation models are usually trained via Maximum Likelihood Estimation (MLE), or teacher forcing (Bengio et al., 2015): at each time step, it maximizes the likelihood of the next word conditioned on its previous ground-truth words. This corresponds to optimizing one-step-ahead prediction. As there is no explicit signal towards global planning in the training objective, the generation model may incline to focusing on local structure rather than global coherence. With our proposed approach, BERT’s looking into the future ability can act as an effective regularization method, capturing subtle long-term dependencies that ensure global coherence and in consequence boost model performance on text generation. An alternative way to leverage BERT for text generation is to initialize the parameters of the encoder or decoder of Seq2Seq with pretrained BERT, and then finetuning on the target dataset. However, this approach requires the encoder/decoder to be identical to BERT, inevitably making the final text generation model too large. Our approach, on the other hand, is modular and compatible to any text-generation model, and has no restriction on model size or model architecture (e.g., LSTM or Transformer). The main contributions of this work are threefold: (i) We present a novel approach to utilizing BERT for text generation. The proposed method induces sequence-level knowledge into the conventional one-step-ahead and teacher-forcing training paradigm, by introducing an effective regularization term to MLE training loss. (ii) We conduct comprehensive evaluation on multiple text generation tasks, including machine translation and text summarization. Experiments show that our proposed approach significantly outperforms strong Transformer baselines and is generalizable to different tasks. (iii) The proposed model achieves new state of the art on both IWSLT14 German-English and IWSLT15 English-Vietnamese datasets. 2 Related Work Pre-trained Language Models Prior to largescale pre-trained language model, word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) were widely used for NLP tasks. Recently, CoVe (McCann et al., 2017) introduced (conditional) language models pre-trained on paired machine translation corpus. ELMo (Peters et al., 2018) learned a contextual language model on a large corpus with bidirectional RNN. GPT (Radford et al., 2018) used unidirectional Transformer to achieve better contextualized word representation. By fine-tuning pre-trained language models, ULMFit (Howard and Ruder, 2018) also achieved promising results on text classification. In our study, we focus on BERT due to its superior performance on multiple language understanding tasks. 
However, different from previous work exploiting BERT for language understanding tasks, here we aim to apply BERT to text generation. To the best of our knowledge, this is still a relatively unexplored space. The proposed approach is also model-agnostic and can be applied to other pretrained language models as well. BERT for Text Generation There has been some recent attempt on applying BERT to text generation. Specifically, Lample and Conneau (2019) trained cross-lingual MLM and demonstrated promising results for cross-lingual natural language inference (Conneau et al., 2018) and unsupervised neural machine translation (NMT) (Lample et al., 2018). Wang and Cho (2019) formulated BERT as a Markov Random Field LM and showed preliminary results on unsupervised text generation with improved diversity. Zhang et al. (2019a) utilized an encoder with BERT and a two-stage decoder for text summarization. Song et al. (2019) proposed Masked Seq2Seq (MASS) pre-training, demonstrating promising results on unsupervised NMT, text summarization and conversational response generation. Concurrent with our work, Ghazvininejad et al. (2019) proposed a similar conditional MLM for constant-time translation, and Yang et al. (2019) studied how to fine-tune BERT for NMT. Our approach is novel in the sense that we do not directly use the parameters of BERT in the Seq2Seq model. Instead, BERT acts as an effective regularization to the MLE training loss, by proactively injecting future information for predicting the present. Right-to-Left Generation Our work also shares a high-level intuition with those approaches that try to regularize left-to-right generative models with 7895 Conditional MLM [SEP] [SEP] [MASK] [CLS] Encoder Decoder Attention Knowledge Distillation Input Sequence Partial Output Sequence Input Sequence Masked Output Sequence BERT as Teacher Seq2Seq as Student Figure 1: Illustration of distilling knowledge from BERT for text generation. See Section 3.2 and 3.3 for details. a right-to-left counterpart. Specifically, Liu et al. (2016) trained a separate reverse NMT and performed joint decoding at inference time to enforce agreement between forward and reverse models. Twin Networks (Serdyuk et al., 2018) used a backward RNN jointly trained with a forward RNN decoder by matching their hidden states. Zhang et al. (2019b) further extended the idea to Transformer with joint training, so that the forward and the backward models iteratively improve each other. Our proposed approach stems from a similar intuition. However, we focus on using pre-trained language model such as BERT to regularize an auto-regressive generation model. Knowledge Distillation Our method shares the same loss formulation as Knowledge Distillation (KD) proposed in Buciluˇa et al. (2006); Hinton et al. (2015); Kim and Rush (2016), where a smaller student model is trained on soft labels provided by a larger teacher model. More recently, Tan et al. (2019) applied KD to multilingual NMT, and Sun et al. (2019) proposed patient KD for BERT model compression. Compared with these previous studies, where both the teacher and the student are trained on the same task, our approach is different in the sense that the BERT teacher is not designed to perform the student’s generation task. We focus on using KD to leverage the learned knowledge in BERT for text generation, while previous work mostly focused on model compression. 
3 Approach In this section, we present our proposed approach to distilling the knowledge in BERT for text generation in generic sequence-to-sequence (Seq2Seq) setting. We first review Seq2Seq learning in Section 3.1, and then describe the proposed approach in Section 3.2 and 3.3. 3.1 Sequence-to-Sequence Learning Seq2Seq learning (Sutskever et al., 2014) aims to generate a sequence of discrete output Y = (y1, . . . , yN) of length N, conditioned on a sequence of discrete input X = (x1, . . . , xM) of length M. A Seq2Seq model learns parameters θ to estimate the conditional likelihood Pθ(Y |X), typically trained via Maximum Likelihood Estimation (MLE), or equivalently, minimizing the crossentropy loss: Lxe(θ) = −log Pθ(Y |X) (1) = − N X t=1 log Pθ(yt|y1:t−1, X) , where each conditional probability can be calculated via an attention-based recurrent neural network (RNN) (Bahdanau et al., 2015; Luong et al., 2015), Transformer (Vaswani et al., 2017), or any other neural sequence-generation models. 3.2 Finetune BERT with Conditional MLM This generic Seq2Seq learning framework is the state of the art on a wide range of text generation tasks. Using modern deep neural networks, the conditional probabilities can be readily modeled as a sequence of classifications over the word vocabulary. However, during training, in order to generate the t-th token yt, the model only sees a partial sentence y1:t−1 from the ground-truth training data. Intuitively, it is reasonable to assume that a bidirectional model can be more informative than a left7896 to-right generation model, since additional context from the right (or future) is also incorporated to predict the current word. Unfortunately, this additional information is not utilized in a standard Seq2Seq model, since it can only be trained in a left-to-right manner, where the future context is masked out to prevent each word from indirectly “seeing itself”. To compensate this single-directional limitation of Seq2Seq setting, we propose a new conditional language model (C-MLM) to enable the finetuning of BERT on target generation task, in hope that the finetuned bidirectional BERT can be utilized for better text generation. BERT (Devlin et al., 2019) is a deep bidirectional Transformer trained via Masked Language Modeling (MLM).2 In a similar setting, where the input is a sequence pair (X, Y ),3 15% of the tokens are randomly masked. Formally, we denote the masked token sets as Xm and Y m, and the disjoint counterpart (i.e., the unmasked tokens) as Xu and Y u, respectively. The trained BERT model aims to estimate the joint probability: P(xm 1 , . . . , xm i , ym 1 , . . . , ym j |Xu, Y u) , (2) where i and j denote the number of masked tokens in X and Y , respectively. Each xm ⋆∈Xm, and each ym ⋆∈Y m. Eqn. (2) can be trained with the standard word-level cross-entropy loss. We aim to marry MLM pre-training with Seq2Seq learning, to leverage bidirectional language model for text generation. To this end, we propose a conditional-MLM, a variant of MLM that allows further finetuning of pre-trained BERT on target dataset. For example, for machine translation, X and Y represent the source and the target sentence, respectively. We first concatenate them together and randomly mask 15% of the tokens only in Y , then train the network to model the joint probability: P(ym 1 , . . . , ym j |X, Y u) . (3) The above C-MLM objective is similar to the conditional language modeling (LM) objective in Eqn. 
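The C-MLM input construction just described, together with the soft-target objective that the next sub-section details (Eq. 4-5), can be sketched as follows. The 15% masking rate on target tokens follows the text; the label convention, tensor shapes, and function names are illustrative, not the released implementation. Top-K truncation of the teacher distribution and label smoothing, used later in the experiments, are omitted here.

```python
import random
import torch.nn.functional as F

def make_cmlm_example(src_ids, tgt_ids, mask_id, ignore_index=-100, mask_prob=0.15):
    # Concatenate the source X and target Y as one sequence pair ([CLS]/[SEP]
    # handling is assumed to be done by the caller) and mask ~15% of the tokens
    # of Y only; the masked originals become the prediction targets.
    input_ids = list(src_ids) + list(tgt_ids)
    labels = [ignore_index] * len(input_ids)
    for j in range(len(src_ids), len(input_ids)):
        if random.random() < mask_prob:
            labels[j] = input_ids[j]
            input_ids[j] = mask_id
    return input_ids, labels

def distillation_loss(student_logits, teacher_probs, gold_ids, alpha):
    # L = alpha * L_bidi + (1 - alpha) * L_xe: cross entropy against the frozen
    # teacher's soft targets, blended with the usual MLE loss on gold tokens.
    log_p = F.log_softmax(student_logits, dim=-1)            # (batch, T, |V|)
    l_bidi = -(teacher_probs * log_p).sum(dim=-1).mean()
    l_xe = F.cross_entropy(student_logits.reshape(-1, student_logits.size(-1)),
                           gold_ids.reshape(-1))
    return alpha * l_bidi + (1.0 - alpha) * l_xe
```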
(1), but conditional LM only permits predicting a word based on its left context. C-MLM is also related to Masked Seq2Seq (MASS) pretraining (Song et al., 2019). However, in MASS, 2Besides MLM, Devlin et al. (2019) also introduced the next sentence prediction task for training BERT. We omit this task since it is unrelated to our work. 3The two sequences are consecutive paragraphs sampled from a very large corpus such as Wikipedia. the encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and the decoder tries to predict this masked fragment, which is different from our model design. The final goal is also different: MASS focuses on Seq2Seq pre-training, while we focus on leveraging BERT for text generation. In our experiments, we observe that the C-MLM task can obtain high accuracy and good generalization on word prediction. However, it is not feasible to generate sequential output directly from C-MLM. Instead, we use knowledge distillation to distill the knowledge learned from the finetuned BERT into a Seq2Seq model for direct text generation, which will be explained in the next sub-section. 3.3 Knowledge Distillation for Generation Our inspiration springs from the observation that the probability distribution of the masked word ym t is estimated using both yu 1:t−1 and yu t+1:N from Y u. In other words, the distribution for a given word P(ym t |X, Y u) contains information from both backward and forward contexts, which is a desirable benefit for providing sequence-level global guidance. This probability distribution can be considered as soft targets for a text generation model to mimic from, which potentially contains more useful and fine-grained information than the usual hard-assigned, one-hot label, therefore enhancing conventional left-to-right generation models to look into the future. In a knowledge distillation setting, the BERT model can be considered as a teacher, while the Seq2Seq model acts as a student. Specifically, the Seq2Seq model can be trained with the following objective function: Lbidi(θ) = − X w∈V h Pφ(yt = w|Y u, X)· (4) log Pθ(yt = w|y1:t−1, X) i , where Pφ(yt) is the soft target estimated by the finetuned BERT with learned parameters φ, and V denotes the output vocabulary. Note that φ is fixed during the distillation process. An illustration of this learning process is provided in Figure 1, which aims to match the word probability distribution Pθ(yt) provided by the student with Pφ(yt) provided by the teacher (i.e., distillation). To further improve the Seq2Seq student model, hard-assigned labels are also utilized. The final 7897 model is trained with the following compound objective: L(θ) = αLbidi(θ) + (1 −α)Lxe(θ) , (5) where α is a hyper-parameter for tuning the relative importance of the two training targets: soft estimation from finetuned BERT, and ground-truth hard label. Note that our proposed approach only has a minimal requirement on the architecture of the incorporated Seq2Seq model. As long as the model is trained to estimate word-level probability as in Eqn. (1), it can be trained jointly with the proposed objective function Eqn. (5). At a higher level, the additional loss term Lbidi can be interpreted as a sequence-level objective function. Our auto-regressive (or causal) model θ tries to predict the probability distribution that matches the estimation the bidirectional teacher model predicts, hence encouraging the planning of future (right context) for generation. 
4 Experiments In this section, we describe our experiments on two well-studied text generation tasks: machine translation, and abstractive text summarization. 4.1 Datasets Machine Translation We consider two relatively small-scale datasets, IWSLT15 EnglishVietnamese (En-Vi, 113k training samples) and IWSLT14 German-English (De-En, 160k training samples), and one medium-scale dataset, WMT14 English-German (En-De, 4.5M training samples). For IWSLT15 En-Vi, we use the pre-processed dataset provided by Luong and Manning (2015). We use tst2012 as dev set and test on tst2013. For IWSLT14 De-En, we follow the pre-processing steps and the same train/dev/test split as in Wu et al. (2019). For WMT14 En-De, we follow the preprocessing steps in Vaswani et al. (2017) for fair comparison. We use newstest2013 as the dev set and newstest2014 as the test set. We report BLEU scores (Papineni et al., 2002) for evaluation of MT performance following the Moses script.4 Abstractive Summarization For summarization, we conduct experiments on the Gigaword summarization dataset (Rush et al., 2015). Note that 4For fair comparison to previous work, we report tokenized BLEU scores using https://github.com/mosessmt/mosesdecoder/blob/master/scripts/generic/multibleu.perl, and for WMT14 En-De, we further split the compound words after tokenization. the original train/valid/test split of Gigaword is 3.8M/190k/2k. In our experiments, we observed severe distribution mismatch between the validation and test data. See Table 4, 5, and Sec. 4.4 for detailed discussion. Therefore, we further sampled 5k/5k dev/test-dev splits from the validation set and tuned hyper-parameters on the dev set only. We report ROUGE scores (Lin, 2004) on test-dev for the evaluation of our proposed approach, and include results on the standard test split for the comparison with prior work. 4.2 Implementation Details Our implementation is based on the PyTorch (Paszke et al., 2017) version of OpenNMT (Klein et al., 2018) seq2seq toolkit. We use the ‘base’ model of 6-layer Transformer with 512hidden 8-head attention blocks and 2048-hidden feed-forward layer for all experiments, with label smoothing regularization (LSR) (Szegedy et al., 2016) of 0.1.5 We batch examples with similar sequence length, and count batch size by the number of tokens. For MT we use the pre-trained BERT-base-multilingual-cased model, and for summarization we use BERT-base-uncased as the starting point of BERT finetuning.6 We use the corresponding pre-trained byte-pair-encoding (Sennrich et al., 2016) shipped together with the BERT model for tokenization. For all training methods of all Transformer models, the learning rate schedule is set to lr = η · d−0.5 model ·min(step−0.5, step·warmup steps−1.5), where dmodel = 512 is the attention representation size (Vaswani et al., 2017). For all BERT finetuning, we follow Devlin et al. (2019) and use a triangular learning rate schedule with maximum learning rate η. The parameters are updated with the Adam optimizer (Kingma and Ba, 2015). In the distillation stage, we pre-compute BERT’s prediction logits of the training data7 and use top-K distillation (Tan et al., 2019) to reduce computation overhead and memory footprint, where K is set to 8 across all the experiments.8 5Our method can also be viewed as a ‘learned LSR’. The results reported of our proposed method are trained together with regular LSR, showing the effectiveness of our teacher. 6BERT pre-trained models are available at https://github.com/google-research/bert. 
Our finetuning implementation is modified from code available at https://github.com/huggingface/pytorch-pretrained-BERT. 7The masking strategy is described in the supplementary. 8We also tune the temperature T for the softmax applied at the teacher’s logits. Different from the original KD, we 7898 De-En Models dev test Our Implementations Transformer (base) 35.27 34.09 + BERT teacher 36.93 35.63 Other Reported Results ConvS2S + MRT‡ 33.91 32.85 Transformer (big)⋄ 34.4† Lightweight Conv⋄ 34.8† Dyn. Convolution⋄ 35.2† Table 1: BLEU scores for IWSLT14 German-English translation. (†) tuned with checkpoint averaging. (‡) from Edunov et al. (2018). (⋄) from Wu et al. (2019). En-Vi Models tst2012 tst2013 Our Implementations RNN 23.37 26.80 + BERT teacher 25.14 27.59 Transformer (base) 27.03 30.76 + BERT teacher 27.85 31.51 Other Reported Results RNN† 26.1 Seq2Seq-OT⋆ 24.5 26.9 ELMo⋄ 29.3 CVT⋄ 29.6 Table 2: BLEU scores for IWSLT15 EnglishVietnamese translation. (†) from Luong et al. (2017). (⋆) from Chen et al. (2019). (⋄) from Clark et al. (2018). For the detailed values of the hyper-parameters for each experiment, please refer to the supplementary material. We found it necessary to train longer with Lbidi, since it is still improving after the step at which the baseline Transformer starts to plateau. At inference time, we use beam search with beam size 4 and length penalty (Wu et al., 2016) of 0.6 across all the models. All the hyper-parameters are tuned on the development set. Note that our Transformer baselines achieve higher scores than the reference implementation on each dataset (in most cases comparable to the state-of-the-art). 4.3 Results on Machine Translation We first validate our proposed text generation approach on machine translation task. Experimental results are summarized in Table 1, 2 and 3, which show that our model significantly improves over the strong Transformer baseline across all three do not apply the same T on the student. In preliminary experiment we found high T of Seq2Seq results in much worse performance. We hypothesize the low-entropy nature of conditioned text generation is not suitable for temperature scaling. En-De Models NT2013 NT2014 Our Implementations Transformer (base) 25.95 26.94 + BERT teacher 26.22 27.53 Other Reported Results Transformer (base)⋄ 25.8 27.3† Transformer (big)⋆‡ 26.5 29.3† Dyn. Convolution•‡ 26.9±0.2 29.7† Table 3: BLEU scores for WMT14 English-German translation. (†) tuned with checkpoint averaging. (‡) trained on WMT16, a slightly different version of training data. (⋄) from Vaswani et al. (2017). (⋆) from Ott et al. (2018). (•) from Wu et al. (2019). datasets. Note that our baseline is the ‘base’ model of Transformer, which has 44M trainable parameters, and the reference implementation by Wu et al. (2019) of the ‘big’ model with 176M parameters.9 For IWSLT German-English translation, our method improves over the Transformer baseline by 1.54 BLEU points, and achieves new state of the art. Our approach outperforms previously-reported results such as ConvS2S+MRT, a convolutionalbased model (Gehring et al., 2017) with minimum risk training (Edunov et al., 2018), and Lightweight and Dynamic Convolution (Wu et al., 2019). Note that Wu et al. (2019) also tuned checkpoint averaging, which creates a soft ensemble effect. And their model has roughly the same amount of parameters as Transformer (big). For IWSLT English-Vietnamese translation, since most prior work experimented with RNN models, we also report RNN-based results here. 
This also suggests that our method is modelagnostic. Our best model outperforms Seq2SeqOT (Chen et al., 2019) that utilizes optimal transport for sequence-level training, as well as the ELMo and CVT results reported in Clark et al. (2018).10 For WMT14 English-German translation, our method still improves over the well-tuned Transformer baseline. We also report the scores of Transformer (big) and state-of-the-art Dynamic Convolution model (Wu et al., 2019) for reference. 4.4 Results on Abstractive Summarization Table 4 and Table 5 show the results of our approach on abstractive summarization task, where 9Parameter counts exclude word embedding and final linear projection, which mostly depends on the vocabulary size. BERT-base has 86M trainable parameters. 10The CVT results used a much larger RNN and CNNbased character embedding, as well as a customized structure. Therefore, we did not try to use RNN to match their results. 7899 GW Models R-1 R-2 R-L Dev Transformer (base) 46.64 24.37 43.17 + BERT teacher 47.35 25.11 44.04 Test-Dev Transformer (base) 46.84 24.80 43.58 + BERT teacher 47.90 25.75 44.53 Table 4: ROUGE F1 scores for Gigaword abstractive summarization on our internal test-dev split. GW Models R-1 R-2 R-L Seq2Seq† 36.40 17.77 33.71 CGU‡ 36.3 18.0 33.8 FTSumg⋆ 37.27 17.65 34.24 E2Tcnn⋄ 37.04 16.66 34.93 Re3Sum• 37.04 19.03 34.46 Trm + BERT teacher 37.57 18.59 34.82 Table 5: ROUGE F1 scores for Gigaword abstractive summarization on the official test set (Trm: Transformer). (†) from Nallapati et al. (2016). (‡) from Lin et al. (2018). (⋆) from Cao et al. (2018b). (⋄) from Amplayo et al. (2018). (•) from Cao et al. (2018a). R-1, R-2, and R-L denote F1 scores of ROUGE1, ROUGE-2, and ROUGE-L, respectively. Our method shows improvement on all the metrics, as shown in Table 4. We observe a large gap between dev and test scores, which suggests that the data in the test set is very different from that in the validation set, as mentioned in Section 4.1. Given the fact that the official test split contains only 1,951 noisy examples,11 we believe that our results on the dev/test-dev sets further strengthens our claim. On the test split, our best model is comparable to state-of-the-art models that use much more complex architectures specifically designed for summarization. CGU (Lin et al., 2018) augmented convolutional gating units. FTSumg (Cao et al., 2018b) leveraged extra information extraction and dependency parsing features. E2Tcnn (Amplayo et al., 2018) utilized entities provided by an external entity linking system. Re3Sum (Cao et al., 2018a) carefully designed a retrieve-and-rerank pipeline with human-written soft templates. Despite that our model has no summarization-specific model design, we still achieve comparable performance to these models on all the metrics. 11When we manually inspected the test set data, we found many corrupted examples such as extremely short input articles, meaningless summary, and dominating unknown words. Methods De-En En-Vi (dev) (tst2012) Transformer (base) 35.27 27.03 Trm + BERTl2r 35.20 26.99 Trm + BERTsm 36.32 27.68 Trm + BERT 36.93 27.85 Table 6: Ablation study. (Trm: Transformer) 4.5 Ablation Study There are several possible factors that could contribute to the performance gain: additional parameters of BERT, extra data (pretraining corpus) of BERT, and the bidirectional nature. To better understand the key contributions of our method, we conduct an ablation study described in the following. We finetune 2 extra teachers: BERTsm and BERTl2r. 
For BERTsm, we use a smaller BERT (6 layers) for C-MLM finetuning, which has approximately the same number of parameters as Transformer-base.12 For BERTl2r, we use the full BERT model but finetune it using left-to-right LM as in the conventional Seq2Seq model. Next, we apply the proposed KD method to train the Transformer on En-Vi and De-En MT tasks. Results are shown in Table 6. BERTsm still works well though the full BERT provides further improvement. On the other hand, BERTl2r slightly hurts the performance. We hypothesize that it generates noisy learning targets for the student, hence the performance drop. Empirically, we show that the bidirectional knowledge could be more important than the extra parameters, while the pre-trained weights remain useful for more stable C-MLM training. 4.6 Generation for Different Lengths We next analyze the effect of our proposed approach on different output lengths. We plot the BLEU scores on MT w.r.t. different output generation lengths N on the development set.13 Results are provided in Figure 2 and Figure 3. For IWSLT German-English dataset (Figure 2: Left), we can see a shared trend that the proposed Lbidi objective gains higher BLEU points on longer translation pairs. For WMT English-German (Figure 3), we can see that although the proposed method performs much worse when the output sentences 12We still use the pretrained weights of BERT, otherwise the C-MLM does not converge very well. 13For Gigaword summarization, almost all summaries are short sentences (less than 0.5% of the summaries contain more than 16 words), so we omit the analysis. 7900 Figure 2: BLEU scores on IWSLT German-English and English-Vietnamese for different output lengths. Reference my mother says that i started reading at the age of two , although i think four is probably close to the truth . Transformer my mother says that i started reading with two years , but i think that four of them probably correspond to the truth . (39.6) Ours my mother says that i started reading at the age of two , but i think four is more likely to be the truth . (65.2) Reference we already have the data showing that it reduces the duration of your flu by a few hours . Transformer we ’ve already got the data showing that it ’s going to crash the duration of your flu by a few hours . (56.6) Ours we already have the data showing that it reduces the duration of your flu by a few hours . (100.0) Reference we now know that at gombe alone , there are nine different ways in which chimpanzees use different objects for different purposes . Transformer we know today that alone in gombe , there are nine different ways that chimpanzees use different objects in different ways . (35.8) Ours we now know that in gombe alone , there are nine different ways that chimpanzees use different objects for different purposes . (71.5) Table 7: Qualitative examples from IWSLT German-English translation. Numbers inside the parenthesis are sentence-level BLEU scores. Red word is where the baseline Transformer makes a mistake without considering the possible future phrase and fails to recover. On the other hand, our model makes the right decision at the blue word, hence generates more coherent sentence. Please refer to Section 4.7 for detailed explanation. Figure 3: BLEU scores on WMT English-German for different output lengths. are very short, it achieves relatively consistent improvement on longer cases, hence resulting in overall BLEU improvement. 
For IWSLT EnglishVietnamese (Figure 2: Right), we see a similar trend when the length N > 24. 4.7 Qualitative Examples In Table 7, we show some translation examples on IWSLT German-English dataset. In the first example, the baseline Transformer cannot recover from ‘with’ and ‘of’, which renders the full sentence not making much sense. “I started reading with...” would make sense from the left context; however, if the model also considers the right context “the age of two”, the word ‘with’ would be assigned with lower probability by the soft labels provided by the BERT teacher. Even though at test-time the model cannot ‘look ahead’, the soft-targets at trainingtime prevents the over-confidence of the model on one-hot label; hence the better generalization at the test-time. Similarly, other examples show that our model can generate text more coherently w.r.t. the context on the right (underlined in Table 7), thus making more accurate and natural translation. 5 Conclusion In this work, we propose a novel and generic approach to utilizing pre-trained language models to 7901 improve text generation without explicit parameter sharing, feature extraction, or augmenting with auxiliary tasks. Our proposed Conditional MLM mechanism leverages unsupervised language models pre-trained on large corpus, and then adapts to supervised sequence-to-sequence tasks. Our distillation approach indirectly influences the text generation model by providing soft-label distributions only, hence is model-agnostic. Experiments show that our model improves over strong Transformer baselines on multiple text generation tasks such as machine translation and abstractive summarization, and achieves new state-of-the-art on some of the translation tasks. For future work, we will explore the extension of Conditional MLM to multimodal input such as image captioning. References Reinald Kim Amplayo, Seonjae Lim, and Seung-won Hwang. 2018. Entity commonsense representation for neural abstractive summarization. In NAACL. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Cristian Buciluˇa, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In KDD. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018a. Retrieve, rerank and rewrite: Soft template based neural summarization. In ACL. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018b. Faithful to the original: Fact aware neural abstractive summarization. In AAAI. Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Improving sequence-to-sequence learning via optimal transport. In ICLR. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In ACL. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP. 
Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In EMNLP. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In NAACL. Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. 2017. Semantic compositional networks for visual captioning. In CVPR. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In ICML. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Constant-time machine translation with conditional masked language models. arXiv preprint arXiv:1904.09324. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart, and Alexander Rush. 2018. OpenNMT: Neural machine translation toolkit. In AMTA. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In ICLR. 7902 Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In ACL Text Summarization Branches Out Workshop. Junyang Lin, Xu Sun, Shuming Ma, and Qi Su. 2018. Global encoding for abstractive summarization. In ACL. Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Agreement on targetbidirectional neural machine translation. In NAACL. Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorflow/nmt. Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In IWSLT. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NIPS. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In CoNLL. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In WMT. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. 
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS Autodiff Workshop. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In EMNLP. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Dmitriy Serdyuk, Nan Rosemary Ke, Alessandro Sordoni, Adam Trischler, Chris Pal, and Yoshua Bengio. 2018. Twin networks: Matching the future for sequence generation. In ICLR. Kaitao Song Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. In ICML. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. arXiv preprint arXiv:1908.09355. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In CVPR. Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and TieYan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In ICLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In CVPR. Alex Wang and Kyunghyun Cho. 2019. Bert has a mouth, and it must speak: Bert as a markov random field language model. arXiv preprint arXiv:1902.04094. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In ICLR. 7903 Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. 
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Weinan Zhang, and Lei Li. 2019. Towards making the most of bert in neural machine translation. arXiv preprint arXiv:1908.05672. Haoyu Zhang, Jianjun Xu, and Ji Wang. 2019a. Pretraining-based natural language generation for text summarization. arXiv preprint arXiv:1902.09243. Zhirui Zhang, Shuangzhi Wu, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2019b. Regularizing neural machine translation by target-bidirectional agreement. In AAAI. 7904 A Implementaion Details and Hyper-parameter Values We run all experiments on single GPU of NVIDIA Titan RTX or V100 except for WMT En-De we use 4 V100s for training. Note that for large batch sizes that do not fit in GPU memory, we use the gradient accumulation tricks as in Ott et al. (2018). Batch sizes are counted in number of tokens. Note that all the hyper-parameters are tuned on the development set only. To compute the logits (soft labels) from teacher, we repeat a training pair for 7 times and create a circular mask as illustrated in Figure 4. This mask approximates the 15% masking rate of the BERT training. From the masked positions we can obtain soft probabilities predicted by the BERT teacher for each output tokens y. These logits are precomputed once for the training set so that we do not have to repeatedly sample random masks and run forward pass of BERT while training. IWSLT De-En For C-MLM fine-tuning, we train for 100k steps with 5k warmup steps, η = 5 · 10−5, and batch size of 16k tokens. For baseline model, we train for 50k steps with 4k warmup steps and batch size of 6k tokens. The learning rate η is set to 1. For the proposed model, we train for 100k steps with 8k warmup steps and batch size of 6k tokens. The learning rate η is set to 2, α = 0.5, and T = 10. Seq2Seq model uses dropout (Srivastava et al., 2014) of 0.3 in both cases. IWSLT En-Vi For C-MLM fine-tuning and baseline Transformer, the hyper-parameters are identical to that of IWSLT De-En. For the proposed model, we train for 100k steps with 8k warmup steps and batch size of 6k tokens. The learning rate η is set to 2, α = 0.1, and T = 5. Dropout is still 0.1. WMT En-De For C-MLM fine-tuning, we train for 100k steps with 5k warmup steps, η = 5 · 10−5, and batch size of 512k tokens. For baseline model, we train for 30k steps with 4k warmup steps and batch size of 384k tokens. The learning rate η is set to 4. Since this is our largest dataset and training is slow, for the proposed model we use the baseline Transformer to initialize the Seq2Seq student. For the proposed model, we continue training for 50k steps with 4k warmup steps and batch size of 64k tokens. The learning rate η is Figure 4: Illustration of the masking strategy for computing the teacher soft labels. Gray slashed boxes denote the [MASK] positions. set to 2, α = 0.1, and T = 5. Seq2Seq model uses dropout of 0.1 in both cases. Gigaword For C-MLM fine-tuning, we train for 100k steps with 5k warmup steps, η = 5 · 10−5, and batch size of 64k tokens. For baseline model, we train for 50k steps with 4k warmup steps and batch size of 40k tokens. The learning rate η is set to 1. For the proposed model, we train for 70k steps with 4k warmup steps and batch size of 36k tokens. The learning rate η is set to 2, α = 0.1, and T = 10. 
Seq2Seq model uses dropout of 0.1 in both cases. B Additional Generation Examples We show Gigaword summarization examples in Table 9 and extra En-DE generation examples in Table 8. Qualitatively, our Transformer + BERT Teacher outperforms baseline Transformer and generate more coherent sentences. 7905 Reference the political climate in the u.s. at the time was tense , and there were debates going on about immigration . Transformer the political climate in the u.s. was back then , and there was constant disasters . (29.5) Ours the political climate in the united states at the time was tense , and there were ongoing shifting debates . (57.3) Reference it would be immoral to leave these young people with a climate system spiraling out of control . Transformer it would be immoral to let these young people leave a climate system that was out of control . (44.6) Ours it would be immoral to leave these young people with a climate system out of control . (84.3) Reference the tahltan have called for the creation of a tribal heritage reserve which will set aside the largest protected area in british columbia . Transformer tahltan demands the institution of a tribe in british columbia that should make the largest protection area in british columbia . (19.9) Ours the tahltan demands to build a tribe reserve that should be the largest protected area in british columbia . (32.2) Table 8: Qualitative examples from IWSLT German-English translation. Numbers inside the parenthesis are sentence-level BLEU scores. Red word is where the baseline Transformer makes a mistake without considering the possible future phrase and fails to recover. On the other hand, our model makes the right decision at the blue word, hence generates more coherent sentence. Please refer to Section 4.6 in the main paper for detailed explanation. Reference china offers tax exemptions for laid-off workers Transformer china encourages laid-off workers to seek employment Ours china offers tax exemptions to laid-off workers Reference swiss police arrest britons who allegedly ran rental car racket Transformer three britons arrested in swiss luxury hotel Ours swiss police arrest three britons in rental car racket case Reference south korea stocks extend declines as kia concerns intensify Transformer south korean stocks fall for #th time in # days ; kia leads Ours south korean stocks fall as kia troubles intensify Table 9: Qualitative examples from the Gigaword summarization dataset. Baseline model suffers from early mistakes. Our model generates more coherent summaries.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7906–7917 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7906 ESPRIT: Explaining Solutions to Physical Reasoning Tasks Nazneen Fatema Rajani1∗ Rui Zhang2∗ Yi Chern Tan2 Stephan Zheng1 Jeremy Weiss2 Aadit Vyas2 Abhijit Gupta2 Caiming Xiong1 Richard Socher1 Dragomir Radev1,2 ∗Equal contribution. 1 Salesforce Research 2 Yale University {nazneen.rajani, stephan.zheng, dradev}@salesforce.com {r.zhang, yichern.tan, dragomir.radev}@yale.edu Abstract Neural networks lack the ability to reason about qualitative physics and so cannot generalize to scenarios and tasks unseen during training. We propose ESPRIT, a framework for commonsense reasoning about qualitative physics in natural language that generates interpretable descriptions of physical events. We use a two-step approach of first identifying the pivotal physical events in an environment and then generating natural language descriptions of those events using a data-to-text approach. Our framework learns to generate explanations of how the physical simulation will causally evolve so that an agent or a human can easily reason about a solution using those interpretable descriptions. Human evaluations indicate that ESPRIT produces crucial fine-grained details and has high coverage of physical concepts compared to even human annotations. Dataset, code and documentation are available at https://github.com/ salesforce/esprit. 1 Introduction Humans learn to understand and reason about physical laws just by living in this world and doing everyday things. AI models, on the other hand, lack this ability and so are unable to generalize to new scenarios that require reasoning about abstract physical concepts like gravity, mass, inertia, friction, and collisions (Bakhtin et al., 2019). We propose Explaining Solutions to Physical ReasonIng Tasks (ESPRIT), a framework for explaining qualitative physics reasoning using natural language. Neural networks with knowledge of qualitative physics would have commonsense reasoning abilities about the way the world works (Forbus, 1988). In turn, this could, for example, improve performance on tasks that involve interacting with humans and make human-robot interactions more efficient and trustworthy. Figure 1: An example from the PHYRE dataset (Bakhtin et al., 2019) consisting of a goal, an initial scene, a solution – the action of adding a red ball, and the resulting simulation rollout. Each object color corresponds to an object type. Red: user-added dynamic object; Green and Blue: dynamic goal object; Gray: dynamic scene object; Black: static scene object. Ideally, AI systems would reason about and generate natural language commonsense explanations of physical concepts that are relevant to their behavior and prediction. A key intuition is that natural language can provide an efficient low-dimensional representation of complicated physical concepts. To equip AI systems with this ability, we collected a set of open-ended natural language human explanations of qualitative physics simulations. The explanations include descriptions of the initial scene, i.e., before any physics is at play, and a sequence of identified pivotal events in a physics simulation. Three physical concepts are crucial for our simulation to reach a specified goal state: gravity, collision, and friction. 
Our work attempts to build an interpretable framework for qualitative physics reasoning with strong generalization abilities mirroring those of humans. ESPRIT is the first-ever framework that unifies commonsense physical reasoning and interpretability using natural language explanations. Our framework consists of two phases: (1) identifying the pivotal physical events in tasks, and (2) generating natural language descriptions for the initial scene and the pivotal events. In the first phase, 7907 Figure 2: The end-to-end ESPRIT framework for identifying pivotal physical events, extracting the features from pivotal events in a table, and explaining solutions using a table-to-text model for natural language generation. The purple bar is a static goal object. our model learns to classify key physical events that are crucial to achieving a specified goal whereas in the second phase, our model generates natural language descriptions of physical laws for the events selected in the first phase. We demonstrate ESPRIT on the PHYsical REasoning (PHYRE) benchmark (Bakhtin et al., 2019). PHYRE provides a set of physics simulation puzzles where each puzzle has an initial state and a goal state. The task is to predict the action of placing one or two bodies (specifically, red balls of variable diameters) in the simulator to achieve a given goal. Figure 1 shows an example of a task with a specified goal. The input to ESPRIT is a sequence of frames from a physics simulation and the output is a natural language narrative that reflects the locations of the objects in the initial scene and a description of the sequence of physical events that would lead to the desired goal state, as shown in Figure 2. The first phase of the framework uses a neural network classifier to identify salient frames from the simulation. For the second phase we experimented with table-to-text models (Puduppully et al., 2019a,b) as well as pre-trained language models (Radford et al., 2018). We evaluated our framework for natural language generated reasoning using several automated and human evaluations with a focus on the understanding of qualitative physics and the ordering of a natural sequence of physical events. We found that our model achieves very high performance for phase one (identifying frames with salient physical events) and that, for phase two, the table-to-text models outperform pre-trained language models on qualitative physics reasoning. 2 Dataset 2.1 PHYRE Benchmark We build our dataset by extending PHYRE (Bakhtin et al., 2019), a recent benchmark dataset for PHYsical REasoning.1 PHYRE consists of a set of physics puzzles in a simulated 2D environment. This environment follows simple deterministic Newtonian physics with a constant downward gravitational force and a small amount of friction. All objects (balls, bars, standing sticks, and jars) are non-deformable, and each object color corresponds to an object type: red is the user-added dynamic object; green and blue are used for dynamic objects that are part of the goal state; purple is for static goal objects; gray is for dynamic scene objects; black is for static scene objects. Each task starts with an initial scene and has a goal state, described in natural language. The task can be solved by placing one or two red balls in the simulation environment and choosing their sizes in a way that when the simulation runs according to the laws of physics the goal state is achieved. No further action can be taken after the simulation starts. 
In this paper, we focus on the 25 task templates in the PHYRE dataset that involve the placement 1https://phyre.ai/ 7908 Templates 25 Tasks 2441 Train/Val/Test 1950/245/246 Objects / Task 14 Frames / Task 658 Events / Task 54 Salient Events / Task 7 Tokens / Initial State Description 36 Tokens / Simulation Description 45 Vocabulary Size 2172 Table 1: Statistics for the ESPRIT Dataset. of a single ball to reach the goal state. Each template defines a set of 100 similar tasks generated by using different parameters for a template such as positions and sizes of objects. All tasks within the same template have the same goal (e.g., “make the blue ball touch the green ball”) but somewhat different initial configurations. 2.2 Representing Frames as Structured Tables We represent the simulation frames as structured tables by extracting information using the simulator module in the PHYRE API.2 The simulations consist of 60 frames per second. For each object, we collect its id, type (boundary, bar, jar, circle), color (red, green, blue, purple, gray, black), state (dynamic, static), and (x, y) coordinates. Jars also have an angle of rotation, width, base length, and side length (referred to as just length). Bars have length, width, and angle of rotation while circles have a radius. For each collision between two objects, we collect the (x, y) coordinates, velocity as a (vx, vy) vector, and the angle of rotation in radians for each object involved in the collision. Extracting data from the PHYRE simulator. To track the motion of objects through a simulation, we intercepted PHYRE’s built-in simulator. First, we created a dictionary of objects and their attributes in the simulation’s initial scene (including the predicted action that was performed). It is important to note that the dictionary contains properties of both static and dynamic objects. But because static objects such as the simulation boundary are not affected by the physics in the simulation and their properties never change. So, unless a static object is involved in a collision, we did not 2https://phyre.ai/docs/simulator.html collect any other data about that object during the simulation. Once this initial pass was made, we extracted the images of frames generated for the 2500 singleball simulations. Each simulation was run for a maximum of 1000 time steps or approximately 16 seconds. After the initial action is taken, a simulation is considered successful if it reaches the goal state and remains in that state for at least 180 consecutive time steps, the equivalent of three seconds. If a simulation does not satisfy this goal condition, it is considered unsuccessful. In this way, we found solution simulations for 2441 out of 2500 tasks. The remaining 59 task simulations seem more complex and would possibly require a prohibitive number of trials (> 10000) to reach the goal successfully and so we excluded those from our dataset. Finally, we mapped the dictionary of objects and attributes in the initial state to the frames derived from the simulator so that we could track how the object’s properties change from one frame to another. Generating tables. The three physical concepts at play in the simulations – friction, collision, and gravity are either a cause or an effect of some collision. Therefore, collisions were the most common physical event in the simulations (average = 54 per task) and so we decided to only record collisions. 
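As a rough illustration of the record layout described in Section 2.2, the sketch below flattens per-object initial-state attributes and per-collision measurements into table rows. The `initial_scene` and `collisions` containers and their field names are hypothetical stand-ins for the values intercepted from the PHYRE simulator, not its actual API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ObjectRecord:
    # Attributes read once from the initial scene; shape-specific values such as
    # radius, bar length, or jar base length go into `extras`.
    obj_id: int
    obj_type: str          # boundary, bar, jar, or circle
    color: str             # red, green, blue, purple, gray, or black
    state: str             # dynamic or static
    x: float
    y: float
    extras: Dict[str, float]

@dataclass
class CollisionRecord:
    # One row per object involved in a collision event.
    timestep: int
    obj_id: int
    x: float
    y: float
    vx: float
    vy: float
    angle: float           # rotation in radians

def build_records(initial_scene: List[dict],
                  collisions: List[dict]) -> Tuple[List[ObjectRecord], List[CollisionRecord]]:
    """Flatten a solution rollout into the structured tables used by ESPRIT."""
    object_rows = [
        ObjectRecord(o["id"], o["type"], o["color"], o["state"],
                     o["x"], o["y"], o.get("extras", {}))
        for o in initial_scene
    ]
    collision_rows = [
        CollisionRecord(c["timestep"], o["id"], o["x"], o["y"],
                        o["vx"], o["vy"], o["angle"])
        for c in collisions          # each collision involves two objects
        for o in c["objects"]
    ]
    return object_rows, collision_rows
```

Each collision contributes one row per participating object; these rows later supply the positions, velocities, and rotation angles used by the pivotal event classifier in Section 3.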
For every collision extracted, we applied a window of size 3 to fetch frames before and after the collisions to remove any noise and get the more precise timestamp of the collision. Because pivotal events in a solution simulation only occur when two objects collide or separate, like a ball falling onto another or a ball rolling off of an elevated bar, we treat both cases identically. 2.3 Two-stage Annotation Procedure Based on the simulation screenshots of the initial state and the collision, we employed a two-stage annotation procedure using Amazon MTurk. In the first stage, we showed the goal, the initial state, and all collisions during the simulation. We asked annotators to pick pivotal or salient events by selecting all and only the collisions that are causally related to the placement of the red ball and are necessary for the completion of the goal. In the second stage, we collected human annotations of natural language descriptions for the initial scene and explanations for the sequence of salient colli7909 sions annotated during the first stage. We showed the annotators the goal, the initial state with the red ball added, an animated GIF of the simulation, and the frames of salient collisions. We asked them to include descriptions of the shape, color, and position of the objects involved. The annotations for the initial scene and salient collisions are collected in separate text boxes. 2.4 Data Statistics Our data statistics are summarized in Table 1. We generated solutions for 2441 tasks, covering 25 different templates. These tasks have an average of 14 objects, 658 total frames, and 54 collision events. We split the tasks randomly into 1950 train, 245 validation, and 246 test. On average, each task has 7 events marked as salient by the annotators. Also, on average the description of the initial state and simulation each have about 40 tokens, with a vocabulary size of 2172. 3 Tasks and Methods ESPRIT includes the following components: Pivotal event detection. Given all the collision events in the simulation, select collisions that are crucial to achieving the goal state. Pivotal or salient collisions are collisions that fulfill the following two criteria: (i) causally related to the placement of the red ball, and (ii) necessary for the completion of the given goal. To train a classifier to detect salient events, we use the following features from the table representation: collision time step, each object’s shape, position (x, y), velocity (vx, vy), and angle of rotation. This totals 13 input features. The first object is often static, such as the boundary, while the second is often dynamic, such as the user-placed red circle. We experimented with a decision tree and a neural network MLP classifier to compare with a baseline that classifies every frame as salient. The MLP has three layers with 128, 128, and 32 nodes. There is a 15% dropout to avoid overfitting and batch normalization between each layer. Finally, a sigmoid node converts the output into a probability from 0 to 1 (anything above 50% is classified as salient). The models are trained on 59179 collisions (52066 negative, 7113 positive) and tested on 6893 collisions (6000 negative, 893 positive). Natural language description of initial states. Given a list of objects and their attributes (color, position, type) in the initial frames, generate a corresponding natural language description of the initial scene. The generated text should faithfully describe all the objects in the corresponding input frame. 
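Before turning to the remaining generation component below, here is a minimal PyTorch sketch of the pivotal event classifier just described. The 13-dimensional input, the 128/128/32 layer sizes, the 15% dropout, the batch normalization, and the sigmoid output with a 0.5 threshold follow the text; the ordering of normalization and dropout inside each block is our assumption.

```python
import torch
import torch.nn as nn

class PivotalEventClassifier(nn.Module):
    """MLP over the 13 collision features (time step plus each object's shape,
    (x, y) position, (vx, vy) velocity, and angle of rotation)."""

    def __init__(self, num_features: int = 13, dropout: float = 0.15):
        super().__init__()

        def block(n_in: int, n_out: int) -> nn.Sequential:
            # Placement of batch norm and dropout within the block is assumed.
            return nn.Sequential(nn.Linear(n_in, n_out), nn.ReLU(),
                                 nn.BatchNorm1d(n_out), nn.Dropout(dropout))

        self.net = nn.Sequential(block(num_features, 128),
                                 block(128, 128),
                                 block(128, 32),
                                 nn.Linear(32, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Probability that a collision is pivotal; > 0.5 is labeled salient.
        return torch.sigmoid(self.net(features)).squeeze(-1)
```

For a batch of collisions, `PivotalEventClassifier()(x)` with `x` of shape `(batch, 13)` returns per-collision probabilities, and thresholding at 0.5 reproduces the decision rule above.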
Natural language explanations for sequences of pivotal events. Given a sequence of pivotal events for a simulation and the goal, generate a natural language description to explain the solution simulation. The generated text should faithfully summarize the simulation by explaining the causal sequence of salient events in it. The goal of natural language generation for our task is to explain the pivotal physical events in the simulation so that an end user can solve the task more efficiently and reliably. Hence, we experimented with treating physical event description generation as (1) Table-to-Text Generation and as (2) Language Modeling. The salient event detection component of our system serves as the content selection component of the natural language generation pipeline. We describe the two approaches in the following sections. 3.1 Table-to-Text Generation For the initial state description, the input is the structured table representation of the initial state, and the model generates a textual description conditioned on the input table. Similarly, for the salient events explanation, the model produces the description given the structured table representation of all the salient events as the input. Effective table-to-text generation can be leveraged to teach AI agents to solve tasks in natural language and to output explanations for the steps in the task solution. For both generation tasks, we use the model from Puduppully et al. (2019b), which is a neural model for table-to-text generation that explicitly models entities.3 Since our desired generations are “entity coherent”, in that their coherence depends on the introduction and discussion of entities in discourse (Karamanis et al., 2004), the entity-based table-to-text generation model is an appropriate method for our task. Unlike previous neural models that treat entities as ordinary tokens, following Puduppully et al. (2019b), we explicitly create entity representations for the objects in the physical environment and update their representations as the text is generated. 3We also tried to use Puduppully et al. (2019a), but it requires a domain-specific relation extraction model to generate a specialized input, so we could not use it. The model input is a list of table records $\{r_{j,l}\}_{l=1}^{L}$, $j = 1, \ldots, |r|$, where $|r|$ is the number of records for this example and $L$ is the number of features for each record. For example, $r_{j,1}$ are values and $r_{j,2}$ are entities. The output $y$ is a description with words $y = [y_1, \ldots, y_{|y|}]$, where $|y|$ is the length of the description. Encoder. We first create embeddings $\mathbf{r}_{j,l}$ of the features $r_{j,l}$, and then use a feed-forward layer to obtain the record embeddings $\mathbf{r}_j$: $\mathbf{r}_j = \mathrm{ReLU}(W_r[\mathbf{r}_{j,1}, \ldots, \mathbf{r}_{j,L}] + b_r)$, where $W_r$ and $b_r$ are model parameters. From the record embeddings, we then use two methods to create the encoder outputs $\{\mathbf{e}_j\}_{j=1}^{|r|}$: • AVG. We use $\mathbf{e}_j = \mathbf{r}_j$, and the first hidden state of the decoder is the average of the record representations: $\mathrm{avg}(\{\mathbf{e}_j\}_{j=1}^{|r|})$. • BiLSTM. To account for the chronological order in the physical simulation, we use a BiLSTM over $[\mathbf{r}_1, \ldots, \mathbf{r}_{|r|}]$, whose hidden states are extracted as $\{\mathbf{e}_j\}_{j=1}^{|r|}$. The first hidden state of the decoder is initialized with the concatenation of the final-step hidden states of the BiLSTM. Entity memory. For each unique entity $k$ (i.e., one of the $r_{j,2}$ values), we compute $\mathbf{x}_k$ as the average of the embeddings of all records which satisfy $r_{j,2} = k$.
During each decoding step $t$, we maintain an entity memory representation $\mathbf{u}_{t,k}$, and initialize it at $t = -1$ as $\mathbf{u}_{t=-1,k} = W_i \mathbf{x}_k$, where $W_i$ is a model parameter. Denote the hidden state of the decoder at $t$ as $\mathbf{d}_t$. We update the entity representation $\mathbf{u}_{t,k}$ at each $t$ with a gating mechanism as follows: $\gamma_t = \sigma(W_d \mathbf{d}_t + b_d)$, $\delta_{t,k} = \gamma_t \odot \sigma(W_e \mathbf{d}_t + b_e + W_f \mathbf{u}_{t-1,k} + b_f)$, $\tilde{\mathbf{u}}_{t,k} = W_g \mathbf{d}_t$, $\mathbf{u}_{t,k} = (1 - \delta_{t,k}) \odot \mathbf{u}_{t-1,k} + \delta_{t,k} \odot \tilde{\mathbf{u}}_{t,k}$, where $W_{d,e,f,g}$ and $b_{d,e,f}$ are model parameters, and $\odot$ is the element-wise product. $\gamma_t$ indicates if there should be an update at $t$, and $\delta_{t,k}$ controls the update by interpolating between the previous $\mathbf{u}_{t-1,k}$ and the candidate entity memory $\tilde{\mathbf{u}}_{t,k}$. Hierarchical attention. We then use a hierarchical attention mechanism such that the decoder can first focus on entities and then on the records of those entities. We can rearrange the encoder outputs $\mathbf{e}_j$ into a two-dimensional $\mathbf{g}_{k,z}$, where $k$ is the index for entities and $z$ is the index for records of the corresponding entities. For each entity, we compute the attention over its records along $z$, and compute the entity context vector $\mathbf{s}_{t,k}$: $\alpha_{t,k,z} \propto \exp(\mathbf{d}_t^\top W_a \mathbf{g}_{k,z})$, $\sum_z \alpha_{t,k,z} = 1$, $\mathbf{s}_{t,k} = \sum_z \alpha_{t,k,z} \mathbf{g}_{k,z}$. Then we compute the higher-level attention over entities along $k$, and compute the encoder context vector $\mathbf{q}_t$: $\phi_{t,k} \propto \exp(\mathbf{d}_t^\top W_h \mathbf{u}_{t,k})$, $\sum_k \phi_{t,k} = 1$, $\mathbf{q}_t = \sum_k \phi_{t,k} \mathbf{s}_{t,k}$. Decoder. The encoder context vector $\mathbf{q}_t$ is then used in the decoder to compute a probability for each output token $y_t$: $\mathbf{d}_t^{\text{att}} = \tanh(W_c[\mathbf{d}_t; \mathbf{q}_t])$, $p_{\text{gen}}(y_t \mid y_{<t}, r) = \mathrm{softmax}_{y_t}(W_y \mathbf{d}_t^{\text{att}} + b_y)$. In both generation tasks, we fine-tune the entity model provided by Puduppully et al. (2019b) for 125 epochs. We use the same training hyperparameters and select the best model using token-match accuracy, following Puduppully et al. (2019b). 3.2 Language Modeling We fine-tune a language model (LM) to generate descriptions of the initial state and explanations for sequences of pivotal physical events using the training split of our dataset. We use the pre-trained GPT-large (Radford et al., 2018) LM, which is a multi-layer transformer-based (Vaswani et al., 2017) model. For the generation of initial state descriptions, the LM is fine-tuned conditioned on the objects (such as ball, jar, etc.) and their attributes (such as dynamic, static, color, size, etc.) extracted from the simulator described in Section 2.2 and the human-written descriptions. So, the input context during training is defined as follows: $C_{\text{init}} = o_1, o_2, \ldots, o_n, \text{“In the physical simulation ”}$, where $o_1, o_2, \ldots, o_n$ is the list of extracted objects with their attributes, e.g., “small red dynamic ball”. The model is trained to generate the initial scene description $s$ according to a conditional language modeling objective. The objective is to maximize $\sum_i \log P(s_i \mid s_{i-k}, \ldots, s_{i-1}, C_{\text{init}}; \Theta)$, where $k$ is the size of the context window (in our case $k$ is always greater than the length of $s$ so that the entire explanation is within the context). The conditional probability $P$ is modeled by a neural network with parameters $\Theta$ conditioned on $C_{\text{init}}$ and previous tokens. For explanations of the salient physical events in the simulation, the LM is fine-tuned conditioned on the initial state descriptions and the human-generated reasoning. So, the input context during training is defined as follows: $C_{\text{sim}} = \text{“init scene. The red ball is placed and ”}$. The model is trained to generate the physical reasoning $r$ by maximizing the following objective: $\sum_i \log P(r_i \mid r_{i-k}, \ldots, r_{i-1}, C_{\text{sim}}; \Theta)$.
We generate sequences of maximum length 40, use a batch size of 12, train for a maximum of 50 epochs, selecting the best model based on validation BLEU and perplexity scores. The learning rate was set to 10−6, warmed up linearly with proportion 0.01 and weight decay 0.01. We experimented both with temperature 1.0 and lower temperatures (0.1, 0.2) to restrict generation to the physics domain and avoid diversity. For word sampling, we tried top k as 3 and 5 as well as greedy (k = 1). We found that the temperature of 0.1 with k = 3 worked best. We note that it is not fair to compare the generated text by the table-to-text model and the LM because the input to the table-to-text model is structured with fine-grained details while the input to the LM is an unstructured prompt. A promising approach would be one that uses a table encoder with a pre-trained language model that is more robust and generalizable. 4 Evaluation Metrics We evaluate our models using both automatic metrics and human evaluations. 4.1 Automatic Metrics We use precision, recall, and F1 for the pivotal event classification task which can be formulated as a binary classification problem. For the natural language description of initial frames and solution simulations, we use automatic metrics including BLEU-1, BLEU-2, ROUGE L, and METEOR using the implementation from Sharma et al. (2017). 4.2 Human Evaluations The automated metrics for generation evaluation are very crude and do not measure the correctness and coverage of actual physical concepts or even the natural ordering in which physical events occur in a given simulation. For example, an object first falls and then it hits the ground or an object first falls on some other object which then causes the second object to be set in motion. So, we deployed human evaluations to measure the quality of the physical concepts captured by our language generation models in terms of validity and coverage. To measure the validity of initial scene descriptions, we showed humans the generated description for a task, the initial frames from that task, and three random distractor initial scenes from other tasks which may or may not be from the same template. Then, we asked them to select the frame that belongs to the task being described. This evaluates how faithful and accurate the generated description is to the input initial state. If the generated text does not include a detailed description of the objects, their attributes, and their positions, it would be difficult for humans to map them to the correct initial scene. For evaluating the validity of pivotal events descriptions, we showed humans the generated text for a task, the initial state of that task, and three distractor initial states generated from the same task but with positions of the red ball that do not solve the task. Then, we asked them to select the correct initial state with the red ball that would eventually reach the task goal. A good simulation description should give higher accuracy for humans to choose the correct solution. Note that we also evaluated the human generated initial state description and pivotal events description by asking annotators to match the human natural language descriptions that we collected and found the average accuracy to only be 70.2% for the initial scene description and 44.7% for the pivotal events description (Ta7912 Precision Recall F1 Positive 0.01 0.11 0.02 Decision Tree 0.87 0.86 0.87 MLP 0.90 0.91 0.90 Table 2: Results on pivotal events classification. ble 4). 
This is because of reporting bias, i.e., humans rarely state events that are obvious (Forbes and Choi, 2017). For example, a falling ball would bounce multiple times or an object pushed off an elevated bar by another object would have a projectile motion. Lack of such fine-grained explanations is what makes the human evaluation of human generated descriptions especially for the sequence of pivotal events have poor accuracy. The PHYRE tasks incorporate three physical concepts in every simulation — gravity, collision, friction. So, to measure coverage, we show humans just the natural language description of the simulation and ask them to select words that would imply any of the three concepts. For example, “rolling” or “slipping” would imply friction, “falling” would imply gravity, “hit” would imply collision, etc. We note that many physical concepts are very abstract and even difficult to be noticed visually, let alone describe in natural language. For example, moving objects slow down due to friction, but this physical concept is so innate that humans would not generally use words that imply friction to describe what they see. This metric gives us an overview of what degree of coverage the text generation models have for each of the three physical concepts. For all our human evaluations we used MTurk and collected 3 annotations per instance and report the majority. We paid Turkers 50 cents per instance for the validity evaluation and 50 cents per instance for the coverage evaluation. 5 Experimental Results and Discussion Table 2 summarizes the performance of the pivotal events classifiers. The decision tree and MLP classifiers get very high performance with 0.87 and 0.9 F1 scores respectively. The baseline classifies every event as pivotal and thus performs very poorly. From the decision tree, we extract feature importance values for each of the 13 input variables described in Section 3. The most important variable is the time step of the collision, with a weight of 0.178. The most important features for classification were an object’s collision position, its velocity, and then its angle of rotation. Given such strong results for identifying pivotal events, we were able to predict the salient events of previously unseen simulations and that helped in the next step of generation descriptions of salient events. Table 3 shows the performance of the three text generation models using automatic metrics. The table-to-text models perform better than the language model on most of the metrics. The AVG model performs slightly better than the BiLSTM on both generation tasks. However, these metrics are a very crude measure of physical reasoning performance and are not intuitive. The human evaluations, on the other hand, are more informative and insightful. Human evaluation – validity. While the GPT model can achieve scores comparable to the datato-text models using automated metrics, its performance using human evaluation is as good as chance, as shown in Table 4. We found that the GPT LM generation was very high-level and is not useful for humans to decide which tasks (among the correct and distractor choices) the generated solution explanation of the initial state and pivotal events match. By contrast, AVG and BiLSTM have significantly higher accuracy, mainly because their output is more fine-grained and so gives a more thorough explanation of the solution. 
Surprisingly, the human annotations of the descriptions that we collected as ground truth are not perfect either, indicating that humans tend to produce sentences that are not sufficiently discriminate and even sometimes skip obvious details such as whether the ball rolls to the left vs. right. Human evaluation – coverage. Table 5 shows the results for coverage of physical concepts. The outputs of the GPT model are repetitive and not grammatical, containing little explanation of physical concepts. AVG and BiLSTM, on the other hand, can generate text that contains fine-grained descriptions of physical concepts even sometimes better than those generated by humans. This is because humans don’t describe everyday commonsense concepts using fine-grained language, while the AVG and BiLSTM models tend to generate long detailed descriptions containing various words for gravity (e.g., falls, drop, slope, land), friction (e.g., roll, slide, trap, travel, stuck, remain), and collision (e.g., hit, collide, impact, land, pin, bounce). 7913 Initial state description Pivotal events description BLEU-1 BLEU-2 ROUGE L METEOR BLEU-1 BLEU-2 ROUGE L METEOR GPT (Radford et al., 2018) 15.37 2.25 20.08 9.93 24.32 3.89 26.82 12.14 AVG (Puduppully et al., 2019b) 15.37 11.38 22.53 24.09 20.53 15.89 29.11 27.38 BiLSTM (Puduppully et al., 2019b) 14.74 10.59 21.35 23.00 20.36 15.48 27.93 26.91 Table 3: Automatic evaluation of initial state and pivotal events descriptions on the test set. Initial state Pivotal events Random classifier 25.0 25.0 GPT (Radford et al., 2018) 23.8 26.8 AVG (Puduppully et al., 2019b) 50.8 36.6 BiLSTM (Puduppully et al., 2019b) 58.1 40.2 Human annotation 70.2 44.7 Table 4: Human evaluation for validity accuracy of initial state and simulation descriptions on test set. Gravity Friction Collision GPT (Radford et al., 2018) 89.3 2.0 16.0 AVG (Puduppully et al., 2019b) 100.4 61.6 71.8 BiLSTM (Puduppully et al., 2019b) 99.2 70.7 71.1 Human annotation 96.7 43.0 57.0 Table 5: Human evaluation for coverage accuracy of physical concepts in simulation descriptions on test set. Input records ... green|green circle 0|OBJ COLOR| INITIAL STATE circle|green circle 0|OBJ TYPE| INITIAL STATE dynamic|green circle 0|OBJ STATE| INITIAL STATE 76|green circle 0|X| INITIAL STATE 162|green circle 0|Y| INITIAL STATE... Gold annotation The red and green balls fall. The red ball lands on the ground and the green ball lands on the red ball and rolls to the right over the black vertical bar. Generation (AVG) The red ball lands in the cubby and the green ball lands on top and a little to the right, sending the green ball right. It rolls over the short black wall of the cage and onto the floor, where it keeps rolling right towards the purple goal... Generation (BiLSTM) The red ball falls and knocks the green ball off of its curved black platform and to the left. It rolls leftwards and continues falling until it lands on the purple floor... Table 6: Example input records, gold annotation, and generated simulation description from the AVG and BiLSTM models, taken from example 00014:394. We show only a short segment of the actual input records. Qualitative analysis. An example of the model inputs and outputs is in Table 6 and taken from simulation id 00014:394. Here we make two observations. First, the generated descriptions are not as succinct as the gold annotations, because our model is obtained from fine-tuning an entity-based model pre-trained on generating long Rotowire game summaries (Wiseman et al., 2017). 
Second, the output generated by the BiLSTM model predicts the incorrect direction of motion for the green ball, an error that is occasionally seen across generation descriptions of both models. This indicates that a table-to-text paradigm for generating such solution explanations is not adequate for learning the direction of motion for the physical reasoning required for these explanations. 6 Related Work Qualitative physics and Visual reasoning. Qualitative physics aims to represent and reason about the behavior of physical systems (Forbus, 1988). McCloskey and Kohl (1983); McCloskey et al. (1983) suggests that people use simplified intuitive theories to understand the physical world in day-to-day life. Earlier works explored using probabilistic simulations to train physical inference through physical simulations (Battaglia et al., 2013; Zhang et al., 2016). Recent papers use neural networks over visual inputs to predict future pixels (Finn et al., 2016; Lerer et al., 2016; Mirza et al., 2016; Du and Narasimhan, 2019) or make qualitative predictions (Groth et al., 2018; Li et al., 2016, 2017; Janner et al., 2019; Wu et al., 2015; Mao et al., 2019). Furthermore, several frameworks and benchmarks have been introduced to test visual reasoning such as PHYRE (Bakhtin et al., 2019), Mujoco (Todorov et al., 2012), and Intphys (Riochet et al., 2018), some of which are combined with natural language for question answering such as NLVR (Suhr et al., 2017), CLEVR (Johnson et al., 2017), and VQA (Antol et al., 2015). In a parallel work, Yi et al. (2020) introduced the CLEVRER dataset for reasoning about collision events from videos with different types of questions. In contrast, we develop the ability to reason and explain the behavior of dynamic physical systems by generating natural language. Natural language explanations and Commonsense reasoning. Several recent papers propose 7914 to use natural language for explanation and commonsense reasoning (Lei et al., 2016; Camburu et al., 2018; Forbes and Choi, 2017; Chai et al., 2018; Forbes et al., 2019; Rajani et al., 2019; DeYoung et al., 2020). Lei et al. (2016), for example, generate textual rationales for sentiment analysis by highlighting phrases in the input. Forbes and Choi (2017) learn the physical knowledge of actions and objects from natural language. Camburu et al. (2018) propose e-SNLI by generating explanations for the natural language inference problem at a cost of performance. Rajani et al. (2019) propose to use LMs to generate explanations that can be used during training and inference in a classifier and significantly improve CommonsenseQA performance. Bisk et al. (2020) propose to use a question answering task to test the model’s physical commonsense and reasoning ability. In contrast to the previous work, we focus on identifying pivotal physical events and then generating natural language explanations for them. We find that this two-step approach works more effectively. Table-to-text generation. Table-to-text generation aims to produce natural language output from structured input. 
Applications include generating sports commentaries from game records (Tanaka-Ishii et al., 1998; Chen and Mooney, 2008; Taniguchi et al., 2019), weather forecasts (Liang et al., 2009; Konstas and Lapata, 2012; Mei et al., 2016), biographical texts from Wikipedia infoboxes (Lebret et al., 2016; Sha et al., 2018; Liu et al., 2018; Perez-Beltrachini and Lapata, 2018), descriptions of knowledge bases (ODonnell et al., 2000; Trisedya et al., 2018; Zhu et al., 2019; Yu et al., 2019) and source code (Iyer et al., 2016), and dialog response generation from slot-value pairs (Wen et al., 2015). Recently, neural encoder-decoder models (Sutskever et al., 2014; Cho et al., 2014) based on attention (Bahdanau et al., 2015; Luong et al., 2015) and copy mechanisms (Gu et al., 2016; Gulcehre et al., 2016) have shown promising results on table-to-text tasks (Wiseman et al., 2017; Gehrmann et al., 2018; Puduppully et al., 2019a,b; Iso et al., 2019; Castro Ferreira et al., 2019; Zhao et al., 2020; Chen et al., 2020a). While traditional methods use different modules for each generation stage in a pipeline (Reiter and Dale, 2000), neural table-to-text models are trained on large-scale datasets, relying on representation learning for generating coherent and grammatical texts. Puduppully et al. (2019a) propose a neural network approach that first selects data records to be mentioned and then generates a summary from the selected data, in an end-to-end fashion. Chen et al. (2020b) use pre-trained language models to generate descriptions for tabular data in a few shot setting. 7 Conclusions and Future Directions ESPRIT uses a two-step approach for qualitative physical reasoning. To train models that can describe physical tasks, we collected open-ended natural language text descriptions of initial states and pivotal physical events in a 2D simulation from human annotators. We then trained a model to identify these pivotal events and then fine-tuned on pre-trained table-to-text generation and language models without using the image representations of the actual simulation frames. Our results indicate that table-to-text models perform better than language models on generating valid explanations of physical events but there is a lot more room for improvement compared to human annotations. We hope that the dataset we collected will facilitate research in using natural language for physical reasoning. Reinforcement Learning (RL) agents may be able to solve physical tasks much more efficiently by leveraging natural language reasoning as opposed to model-free approaches that are often highly sample-inefficient. An RL agent that leverages natural language descriptions of physical events to reason about the solution for a given goal (similar to Zhong et al. (2020)) or for reward shaping (similar to Goyal et al. (2019)) could be a compelling line of future research. More importantly, having a model that can meaningfully reason about commonsense qualitative physics could be interpretable and more robust, as they might focus on the parts of physical dynamics that are relevant for generalization to new scenarios. Such systems are widely applicable to self-driving cars or tasks that involve human-AI interactions, such as robots performing everyday human tasks like making coffee or even collaboratively helping with rescue operations. Acknowledgments We would like to thank Abhinand Sivaprasad for his helpful discussions and annotations. We also thank the anonymous reviewers for their feedback. 
7915 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In ICCV. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Anton Bakhtin, Laurens van der Maaten, Justin Johnson, Laura Gustafson, and Ross Girshick. 2019. PHYRE: A new benchmark for physical reasoning. In NeurIPS. Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. 2013. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327–18332. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: Reasoning about physical commonsense in natural language. In AAAI. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: natural language inference with natural language explanations. In NeurIPS. Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural datato-text generation: A comparison between pipeline and end-to-end architectures. In EMNLP-IJCNLP. Joyce Y Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, and Guangyue Xu. 2018. Language to action: Towards interactive task learning with physical agents. In AAMAS. David L Chen and Raymond J Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In ICML. Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In ACL. Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020b. Few-shot NLG with pre-trained language model. In ACL. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2020. ERASER: A benchmark to evaluate rationalized nlp models. In ACL. Yilun Du and Karthik Narasimhan. 2019. Taskagnostic dynamics priors for deep reinforcement learning. In ICML. Chelsea Finn, Ian Goodfellow, and Sergey Levine. 2016. Unsupervised learning for physical interaction through video prediction. In NeurIPS. Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of actions and objects. In ACL. Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do neural language representations learn physical commonsense? In CogSci. Kenneth D Forbus. 1988. Qualitative physics: Past, present, and future. In Exploring artificial intelligence, pages 239–296. Elsevier. Sebastian Gehrmann, Falcon Dai, Henry Elder, and Alexander Rush. 2018. End-to-end content and plan selection for data-to-text generation. In INLG. Prasoon Goyal, Scott Niekum, and Raymond J. Mooney. 2019. Using natural language for reward shaping in reinforcement learning. In IJCAI. Oliver Groth, Fabian B Fuchs, Ingmar Posner, and Andrea Vedaldi. 2018. ShapeStacks: Learning visionbased physical intuition for generalised object stacking. In ECCV. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In ACL. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In ACL. 
Hayate Iso, Yui Uehara, Tatsuya Ishigaki, Hiroshi Noji, Eiji Aramaki, Ichiro Kobayashi, Yusuke Miyao, Naoaki Okazaki, and Hiroya Takamura. 2019. Learning to select, track, and generate for data-to-text. In ACL. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In ACL. Michael Janner, Sergey Levine, William T Freeman, Joshua B Tenenbaum, Chelsea Finn, and Jiajun Wu. 2019. Reasoning about physical interactions with object-oriented prediction and planning. In ICLR. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In CVPR. Nikiforos Karamanis, Massimo Poesio, Chris Mellish, and Jon Oberlander. 2004. Evaluating centeringbased metrics of coherence. In ACL. Ioannis Konstas and Mirella Lapata. 2012. Unsupervised concept-to-text generation with hypergraphs. In NAACL. 7916 R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In EMNLP. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In EMNLP. Adam Lerer, Sam Gross, and Rob Fergus. 2016. Learning physical intuition of block towers by example. In ICML. Wenbin Li, Seyedmajid Azimi, Aleˇs Leonardis, and Mario Fritz. 2016. To fall or not to fall: A visual approach to physical stability prediction. arXiv preprint arXiv:1604.00066. Wenbin Li, Ales Leonardis, and Mario Fritz. 2017. Visual stability prediction and its application to manipulation. In 2017 AAAI Spring Symposium Series. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In ACL-IJCNLP. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In AAAI. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In ICLR. Michael McCloskey and Deborah Kohl. 1983. Naive physics: The curvilinear impetus principle and its role in interactions with moving objects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(1):146. Michael McCloskey, Allyson Washburn, and Linda Felch. 1983. Intuitive physics: the straight-down belief and its origin. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(4):636. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using LSTMs with coarse-to-fine alignment. In NAACL. Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2016. Generalizable features from unsupervised learning. arXiv preprint arXiv:1612.03809. Michael ODonnell, Alistair Knott, Jon Oberlander, and Chris Mellish. 2000. Optimising text quality in generation from relational databases. In INLG. Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In NAACL. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019a. Data-to-text generation with content selection and planning. In AAAI. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019b. Data-to-text generation with entity modeling. In ACL. 
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In ACL. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press. Ronan Riochet, Mario Ynocente Castro, Mathieu Bernard, Adam Lerer, Rob Fergus, V´eronique Izard, and Emmanuel Dupoux. 2018. IntPhys: A framework and benchmark for visual intuitive physics reasoning. arXiv preprint arXiv:1803.07616. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Orderplanning neural text generation from structured data. In AAAI. Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. arXiv preprint arXiv:1706.09799. Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual reasoning. In ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NeurIPS. Kumiko Tanaka-Ishii, Kˆoiti Hasida, and Itsuki Noda. 1998. Reactive content selection in the generation of real-time soccer commentary. In COLING. Yasufumi Taniguchi, Yukun Feng, Hiroya Takamura, and Manabu Okumura. 2019. Generating live soccer-match commentary from play data. In AAAI. Emanuel Todorov, Tom Erez, and Yuval Tassa. 2012. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE. Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. GTR-LSTM: A triple encoder for sentence generation from RDF data. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. 7917 Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In EMNLP. Jiajun Wu, Ilker Yildirim, Joseph J Lim, Bill Freeman, and Josh Tenenbaum. 2015. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NeurIPS. Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenenbaum. 2020. CLEVRER: Collision events for video representation and reasoning. In ICLR. Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In EMNLP-IJCNLP. Renqiao Zhang, Jiajun Wu, Chengkai Zhang, William T Freeman, and Joshua B Tenenbaum. 2016. A comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding. In CogSci. Chao Zhao, Marilyn Walker, and Snigdha Chaturvedi. 2020. 
Bridging the structural gap between encoding and decoding for data-to-text generation. In ACL. Victor Zhong, Tim Rockt¨aschel, and Edward Grefenstette. 2020. RTFM: Generalising to new environment dynamics via reading. In ICLR. Yaoming Zhu, Juncheng Wan, Zhiming Zhou, Liheng Chen, Lin Qiu, Weinan Zhang, Xin Jiang, and Yong Yu. 2019. Triple-to-text: Converting rdf triples into high-quality natural languages via optimizing an inverse kl divergence. In SIGIR.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7918–7928 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7918 Iterative Edit-Based Unsupervised Sentence Simplification Dhruv Kumar,1 Lili Mou,2 Lukasz Golab,1 Olga Vechtomova1 1University of Waterloo 2Department of Computing Science, University of Alberta Alberta Machine Intelligence Institute (Amii) {d35kumar,lgolab,ovechtomova}@uwaterloo.ca [email protected] Abstract We present a novel iterative, edit-based approach to unsupervised sentence simplification. Our model is guided by a scoring function involving fluency, simplicity, and meaning preservation. Then, we iteratively perform word and phrase-level edits on the complex sentence. Compared with previous approaches, our model does not require a parallel training set, but is more controllable and interpretable. Experiments on Newsela and WikiLarge datasets show that our approach is nearly as effective as state-of-the-art supervised approaches.1 1 Introduction Sentence simplification is the task of rewriting text to make it easier to read, while preserving its main meaning and important information. Sentence simplification is relevant in various real-world and downstream applications. For instance, it can benefit people with autism (Evans et al., 2014), dyslexia (Rello et al., 2013), and low-literacy skills (Watanabe et al., 2009). It can also serve as a preprocessing step to improve parsers (Chandrasekar et al., 1996) and summarization systems (Klebanov et al., 2004). Recent efforts in sentence simplification have been influenced by the success of machine translation. In fact, the simplification task is often treated as monolingual translation, where a complex sentence is translated to a simple one. Such simplification systems are typically trained in a supervised way by either phrase-based machine translation (PBMT, Wubben et al., 2012; Narayan and Gardent, 2014; Xu et al., 2016) or neural machine translation (NMT, Zhang and Lapata, 2017; Guo et al., 2018; Kriz et al., 2019). Recently, sequence-to-sequence 1Code is released at https://github.com/ ddhruvkr/Edit-Unsup-TS (Seq2Seq)-based NMT systems are shown to be more successful and serve as the state of the art. However, supervised Seq2Seq models have two shortcomings. First, they give little insight into the simplification operations, and provide little control or adaptability to different aspects of simplification (e.g., lexical vs. syntactical simplification). Second, they require a large number of complexsimple aligned sentence pairs, which in turn require considerable human effort to obtain. In previous work, researchers have addressed some of the above issues. For example, AlvaManchego et al. (2017) and Dong et al. (2019) explicitly model simplification operators such as word insertion and deletion. Although these approaches are more controllable and interpretable than standard Seq2Seq models, they still require large volumes of aligned data to learn these operations. To deal with the second issue, Surya et al. (2019) recently proposed an unsupervised neural text simplification approach based on the paradigm of style transfer. However, their model is hard to interpret and control, like other neural network-based models. Narayan and Gardent (2016) attempted to address both issues using a pipeline of lexical substitution, sentence splitting, and word/phrase deletion. However, these operations can only be executed in a fixed order. 
In this paper, we propose an iterative, editbased unsupervised sentence simplification approach, motivated by the shortcomings of existing work. We first design a scoring function that measures the quality of a candidate sentence based on the key characteristics of the simplification task, namely, fluency, simplicity, and meaning preservation. Then, we generate simplified candidate sentences by iteratively editing the given complex sentence using three simplification operations (lexical simplification, phrase extraction, deletion and reordering). Our model seeks the best simplified 7919 Figure 1: An example of three edit operations on a given sentence. Note that dropping clauses or phrases is common in text simplification datasets. candidate sentence according to the scoring function. Compared with Narayan and Gardent (2016), the order of our simplification operations is not fixed and is decided by the model. Figure 1 illustrates an example in which our model first chooses to delete a sentence fragment, followed by reordering the remaining fragments and replacing a word with a simpler synonym. We evaluate our approach on the Newsela (Xu et al., 2015) and WikiLarge (Zhang and Lapata, 2017) corpora. Experiments show that our approach outperforms previous unsupervised methods and even performs competitively with state-ofthe-art supervised ones, in both automatic metrics and human evaluations. We also demonstrate the interpretability and controllability of our approach, even without parallel training data. 2 Related Work Early work used handcrafted rules for text simplification, at both the syntactic level (Siddharthan, 2002) and the lexicon level (Carroll et al., 1999). Later, researchers adopted machine learning methods for text simplification, modeling it as monolingual phrase-based machine translation (Wubben et al., 2012; Xu et al., 2016). Further, syntactic information was also considered in the PBMT framework, for example, constituency trees (Zhu et al., 2010) and dependency trees (Bingel and Søgaard, 2016). Narayan and Gardent (2014) performed probabilistic sentence splitting and deletion, followed by MT-based paraphrasing. Nisioi et al. (2017) employed neural machine translation (NMT) for text simplification, using a sequence-to-sequence (Seq2Seq) model (Sutskever et al., 2014). Zhang and Lapata (2017) used reinforcement learning to optimize a reward based on simplicity, fluency, and relevance. Zhao et al. (2018a) integrated the transformer architecture and paraphrasing rules to guide simplification learning. Kriz et al. (2019) produced diverse simplifications by generating and re-ranking candidates by fluency, adequacy, and simplicity. Guo et al. (2018) showed that simplification benefits from multi-task learning with paraphrase and entailment generation. Martin et al. (2019) enhanced the transformer architecture with conditioning parameters such as length, lexical and syntactic complexity. Recently, edit-based techniques have been developed for text simplification. Alva-Manchego et al. (2017) trained a model to predict three simplification operators (keep, replace, and delete) from aligned pairs. Dong et al. (2019) employed a similar approach but in an end-to-end trainable manner with neural networks. However, these approaches are supervised and require large volumes of parallel training data; also, their edits are only at the word level. By contrast, our method works at both word and phrase levels in an unsupervised manner. For unsupervised sentence simplification, Surya et al. 
(2019) adopted style-transfer techniques, using adversarial and denoising auxiliary losses for content reduction and lexical simplification. However, their model is based on a Seq2Seq network, which is less interpretable and controllable. They cannot perform syntactic simplification since syntax typically does not change in style-transfer tasks. Narayan and Gardent (2016) built a pipeline-based unsupervised framework with lexical simplification, sentence splitting, and phrase deletion. However, these operations are separate components in the pipeline, and can only be executed in a fixed order. Unsupervised edit-based approaches have recently been explored for natural language generation tasks, such as style transfer, paraphrasing, and sentence error correction. Li et al. (2018) proposed edit-based style transfer without parallel supervision. They replaced style-specific phrases with those in the target style, which are retrieved from the training corpus. Miao et al. (2019) used Metropolis–Hastings sampling for constrained sentence generation. In this paper, we model text generation as a search algorithm, and design search objective and search actions specifically for text simplification. Concurrent work further shows the success of search-based unsupervised text generation for paraphrasing (Liu et al., 2020) and summa7920 rization (Schumann et al., 2020). 3 Model In this section, we first provide an overview of our approach, followed by a detailed description of each component, namely, the scoring function, the edit operations, and the stopping criteria. 3.1 Overview We first define a scoring function as our search objective. It allows us to impose both hard and soft constraints, balancing the fluency, simplicity, and adequacy of candidate simplified sentences (Section 3.2). Our approach iteratively generates multiple candidate sentences by performing a sequence of lexical and syntactic operations. It starts from the input sentence; in each iteration, it performs phrase and word edits to generate simplified candidate sentences (Section 3.3). Then, a candidate sentence is selected according to certain criteria. This process is repeated until none of the candidates improve the score of the source sentence by a threshold value. The last candidate is returned as the simplified sentence (Section 3.4). 3.2 Scoring Function Our scoring function is the product of several individual scores that evaluate various aspects of a candidate simplified sentence. This is also known as the product-of-experts model (Hinton, 2002). SLOR score from a syntax-aware language model (feslor). This measures the language fluency and structural simplicity of a candidate sentence. A probabilistic language model (LM) is often used as an estimate of sentence fluency (Miao et al., 2019). In our work, we make two important modifications to a plain LM. First, we replace an LM’s estimated sentence probability with the syntactic log-odds ratio (SLOR, Pauls and Klein, 2012), to better measure fluency and human acceptability. According to Lau et al. (2017), SLOR shows the best correlation to human acceptability of a sentence, among many sentence probability-based scoring functions. SLOR was also shown to be effective in unsupervised text compression (Kann et al., 2018). 
Given a trained language model (LM) and a sentence s, SLOR is defined as

SLOR(s) = (1 / |s|) (ln P_LM(s) − ln P_U(s))    (1)

where P_LM(s) is the sentence probability given by the language model, P_U(s) = ∏_{w∈s} P(w) is the product of the unigram probabilities of the words w in the sentence, and |s| is the sentence length. SLOR essentially penalizes a plain LM's probability by unigram likelihood and by length. It ensures that the fluency score of a sentence is not penalized by the presence of rare words. Consider two sentences, "I went to England for vacation" and "I went to Senegal for vacation." Even though both sentences are equally fluent, a standard LM will give a higher score to the former, since the word "England" is more likely to occur than "Senegal." In simplification, SLOR is preferred for preserving rare words such as named entities.[2]

Second, we use a syntax-aware LM, i.e., in addition to words, we use part-of-speech (POS) and dependency tags as inputs to the LM (Zhao et al., 2018b). For a word w_i, the input to the syntax-aware LM is [e(w_i); p(w_i); d(w_i)], where e(w_i) is the word embedding, p(w_i) is the POS tag embedding, and d(w_i) is the dependency tag embedding. Note that our LM is trained on simple sentences. Thus, the syntax-aware LM prefers a syntactically simple sentence. It also helps to identify sentences that are structurally ungrammatical.

Cosine Similarity (fcos). Cosine similarity is an important measure of meaning preservation. We compute the cosine value between sentence embeddings of the original complex sentence (c) and the generated candidate sentence (s), where our sentence embeddings are calculated as the idf-weighted average of individual word embeddings. Our sentence similarity measure acts as a hard filter, i.e., fcos(s) = 1 if cos(c, s) > τ, and fcos(s) = 0 otherwise, for some threshold τ.

Entity Score (fentity). Entities help identify the key information of a sentence and therefore are also useful in measuring meaning preservation. Thus, we count the number of entities in the sentence as part of the scoring function, where entities are detected by a third-party tagger.

Length (flen). This score is proportional to the inverse of the sentence length. It forces the model to generate shorter and simpler sentences. However, we reject sentences shorter than a specified length (≤6 tokens) to prevent over-shortening.

[2] Note that we do not use SLOR to evaluate lexical simplicity, which will later be evaluated by the Flesch reading ease (FRE) score. The SLOR score, in fact, preserves rare words, so that we can better design dictionary-based word substitution for lexical simplification (Section 3.3).

Figure 2: Constituency parse tree is used for detecting phrases.

FRE (ffre). The Flesch Reading Ease (FRE) score (Kincaid et al., 1975) measures the ease of readability of text. It is based on text features such as the average sentence length and the average number of syllables per word. Higher scores indicate that the text is simpler to read. We compute the overall scoring function as the product of the individual scores:

f(s) = feslor(s)^α · ffre(s)^β · (1 / flen(s))^γ · fentity(s)^δ · fcos(s)    (2)

where the weights α, β, γ, and δ balance the relative importance of the different scores. Recall that the cosine similarity measure does not require a weight since it is a hard indicator function. In Section 4.5, we will experimentally show that the weights defined for different scores affect different characteristics of simplification and thus provide more adaptability and controllability.
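To make Equations (1) and (2) concrete, here is a minimal sketch that computes SLOR from a sentence-level LM log-probability and a unigram table, and combines the individual scores as a weighted product. The scorer interface, the 0.7 cosine threshold, and the assumption that every soft score (including the entity count) has been mapped to a strictly positive range are illustrative placeholders, not the authors' released implementation.

```python
def slor(tokens, lm_logprob, unigram_logprob):
    """Eq. (1): SLOR(s) = (ln P_LM(s) - ln P_U(s)) / |s|.

    lm_logprob: ln P_LM(s) from the (syntax-aware) language model.
    unigram_logprob: dict mapping a token to its unigram log-probability ln P(w).
    """
    ln_p_u = sum(unigram_logprob[w] for w in tokens)  # ln P_U(s)
    return (lm_logprob - ln_p_u) / len(tokens)


def overall_score(candidate, source, scorers, weights, cos_threshold=0.7):
    """Eq. (2): product-of-experts score f(s).

    `scorers` is a dict of callables (slor, fre, length, entities, cosine).
    Assumes each soft score has been rescaled to be strictly positive; the exact
    normalization used by the authors may differ.
    """
    if scorers["cosine"](source, candidate) <= cos_threshold:    # hard filter: f_cos = 0
        return 0.0
    if scorers["length"](candidate) <= 6:                        # reject over-shortened sentences
        return 0.0
    return (scorers["slor"](candidate)             ** weights["alpha"]
            * scorers["fre"](candidate)            ** weights["beta"]
            * (1.0 / scorers["length"](candidate)) ** weights["gamma"]
            * scorers["entities"](candidate)       ** weights["delta"])
```

With the WikiLarge weights reported later in Section 4.2 (α=0.5, β=1.0, γ=0.25, δ=1.0), `overall_score` would reproduce that relative weighting, up to the normalization caveat above.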
3.3 Generating Candidate Sentences

We generate candidate sentences by editing words and phrases. We use a third-party parser to obtain the constituency tree of a source sentence. Each clause- and phrase-level constituent (e.g., S, VP, and NP) is considered as a phrase. Since a constituent can occur at any depth in the parse tree, we can deal with both long and short phrases at different granularities. In Figure 2, for example, both "good" (ADJP) and "tasted good" (VP) are constituents and thus considered as phrases, whereas "tasted" is considered as a single word. For each phrase, we generate a candidate sentence using the edit operations explained below, with Figure 1 being a running example.

Removal. For each phrase detected by the parser, this operation generates a new candidate sentence by removing that phrase from the source sentence. In Figure 1, our algorithm can drop the phrase "according to a Seattle based reporter," which is not the main clause of the sentence. The removal operation allows us to remove peripheral information in a sentence for content reduction.

Extraction. This operation simply extracts a selected phrase (including a clause) as the candidate sentence. This allows us to select the main clause in a sentence and remove remaining peripheral information.

Reordering. For each phrase in a sentence, we generate candidate sentences by moving the phrase before or after another phrase (identified by clause- and phrase-level constituent tags). In the running example, the phrase "In 2016 alone" is moved between the phrases "12 billion dollars" and "on constructing theme parks." As seen, the reordering operation is able to perform syntactic simplification.

Substitution. In each phrase, we identify the most complex word as the rarest one according to the idf score. For the selected complex word, we generate possible substitutes using a two-step strategy. First, we obtain candidate synonyms by taking the union of the WordNet synonym set (Miller, 1995) and the closest words from GloVe (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013) embeddings (where embedding closeness is measured by Euclidean distance). Second, a candidate synonym is determined to be an appropriate simple substitute if it satisfies the following conditions: a) it has a lower idf score than the complex word, where the scores are computed from the target simple sentences; b) it is not a morphological inflection of the complex word; c) its word embedding exceeds a cosine similarity threshold to the complex word; and d) it has the same part-of-speech and dependency tags in the sentence as the complex word. We then generate candidate sentences by replacing the complex word with all qualified lexical substitutes. Notably, we do not replace entity words identified by entity taggers. In our example sentence, consider the phrase "constructing theme parks." The word "constructing" is chosen as the word to be simplified, and is replaced with "building." As seen, this operation performs lexical simplification.

3.4 The Iterative Algorithm

Given an input complex sentence, our algorithm iteratively performs edits to search for a higher-
We choose the highest-scoring sentence from those that are not filtered out. Our algorithm terminates if no edit passes the threshold, and the final candidate is our generated simplified sentence. Our algorithm includes a filtering step for each operation. We only keep a candidate sentence if it is better than the previous one by a multiplicative factor, i.e., f(c)/f(s) > rop (3) where s is the sentence given by the previous iteration, and c is a candidate generated by operator op from s. Notably, we allow different thresholds for each operation. This provides control over different aspects of simplification, namely, lexicon simplification, syntactic simplification, and content reduction. A lower threshold for substitution, for example, encourages the model to perform more lexical simplification. 4 Experiments 4.1 Data We use the Newsela (Xu et al., 2015) and the WikiLarge datasets (Zhang and Lapata, 2017) for evaluating our model. Newsela is a collection of 1,840 news articles written by professional editors at 5 reading levels for children. We use the standard split and exclude simple-complex sentence pairs that are one reading level apart, following Zhang and Lapata (2017). This gives 95,208 training, 1,129 validation, and 1,077 test sentences. The WikiLarge dataset is currently the largest text simplification corpus. It contains 296,402, 2,000, and 359 complex-simple sentence pairs for training, validation, and testing, respectively. The training set of WikiLarge consists of automatically aligned sentence pairs from the normal and simple Wikipedia versions. The validation and test sets contain multiple human-written references, against which we evaluate our algorithm. For each corpus, we only use its training set to learn a language model of simplified sentences. For the WikiLarge dataset, we also train a Word2Vec embedding model from scratch on its source and target training sentences. These embeddings are used to obtain candidate synonyms in the substitution operation. 4.2 Training Details For the LM, we use a two-layer, 256-dimensional recurrent neural network (RNN) with the gated recurrent unit (GRU, Chung et al., 2014). We initialize word embeddings using 300-dimensional GloVe (Pennington et al., 2014); out-of-vocabulary words are treated as UNK, initialized uniformly in the range of ±0.05. Embeddings for POS tags and dependency tags are 150-dimensional, also initialized randomly. We fine-tune all embeddings during training. We use the Averaged Stochastic Gradient Descent (ASGD) algorithm (Polyak and Juditsky, 1992) to train the LM, with 0.4 as the dropout and 32 as the batch size. For the Newsela dataset, the thresholds rop in the scoring function are set to 1.25 for all the edit operations. All the weights in our scoring function (α, β, γ, δ) are set to 1. For the WikiLarge dataset, the thresholds are set as 1.25 for the removal and reordering operations, 0.8 for substitution, and 5.0 for extraction. The weights in the scoring function (α, β, γ, δ) are set to 0.5, 1.0, 0.25 and 1.0, respectively. We use CoreNLP (Manning et al., 2014) to construct the constituency tree and Spacy3 to generate part-of-speech and dependency tags. 4.3 Competing Methods We first consider the reference to obtain an upperbound for a given evaluation metric. We also consider the complex sentence itself as a trivial baseline, denoted by Complex. Next, we develop a simple heuristic that removes rare words occurring ≤250 times in the simple sentences of the training corpus, denoted by Reduce-250. 
As discussed in Section 4.4, this simple heuristic demonstrates the importance of balancing different automatic evaluation metrics. For unsupervised competing methods, we compare with Surya et al. (2019), which is inspired by unsupervised neural machine translation. They proposed two variants, UNMT and UNTS, but their results are only available for WikiLarge. 3https://spacy.io/ 7923 Method SARI↑ Add↑ Delete↑ Keep↑ BLEU↑ GM↑ FKGL↓ Len Reference 70.13 100 83.74 3.20 12.75 Baselines Complex 2.82 21.30 7.75 8.62 23.06 Reduce-250 28.39 11.79 18.29 -0.23 14.48 Supervised Methods PBMT-R 15.77 3.07 38.34 5.90 18.1 16.89 7.59 23.06 Hybrid 28.61* 0.95* 78.86* 6.01* 14.46 20.34 4.03 12.41 EncDecA 24.12 2.73 62.66 6.98 21.68 22.87 5.11 16.96 Dress 27.37 3.08 71.61 7.43 23.2 25.2 4.11 14.2 Dress-Ls 26.63 3.21 69.28 7.4 24.25 25.41 4.21 14.37 DMass 31.06 1.25 84.12 7.82 11.92 19.24 3.60 15.07 S2S-All-FA 30.73 2.64 81.6 7.97 19.55 24.51 2.60 10.81 Edit-NTS 30.27* 2.71* 80.34* 7.76* 19.85 24.51 3.41 10.92 EncDecP 28.31 23.72 25.91 EntPar 33.22 2.42 89.32 7.92 11.14 19.24 1.34 7.88 Unsupervised Methods (Ours) RM+EX 26.07 2.35 68.35 7.5 27.22 26.64 2.95 12.9 RM+EX+LS 26.26 2.28 68.94 7.57 27.17 26.71 2.93 12.88 RM+EX+RO 26.99 2.47 70.88 7.63 26.31 26.64 3.14 12.81 RM+EX+LS+RO 27.11 2.40 71.26 7.67 26.21 26.66 3.12 12.81 RM+EX+LS+RO† 30.44 2.05 81.77 7.49 17.36 22.99 2.24 9.61 Table 1: Results on the Newsela dataset. † denotes the model with parameters tuned by SARI; other variants are tuned by the geometric mean (GM). ↑The higher, the better. ↓The lower, the better. * indicates a number that is different from that reported in the original paper. This is due to a mistreatment of capitalization in the previous work (confirmed by personal correspondence). We also compare our model with supervised methods. First, we consider non-neural phrasebased machine translation (PBMT) methods: PBMT-R (Wubben et al., 2012), which re-ranks sentences generated by PBMT for diverse simplifications; SBMT-SARI (Xu et al., 2016), which uses an external paraphrasing database; and Hybrid (Narayan and Gardent, 2014), which uses a combination of PBMT and discourse representation structures. Next, we compare our method with neural machine translation (NMT) systems: EncDecA, which is a vanilla Seq2Seq model with attention (Nisioi et al., 2017); Dress and Dress-Ls, which are based on deep reinforcement learning (Zhang and Lapata, 2017); DMass (Zhao et al., 2018a), which is a transformer-based model with external simplification rules; EncDecP, which is an encoder-decoder model with a pointermechanism; EntPar, which is based on multi-task learning (Guo et al., 2018); S2S-All-FA, which a reranking based model focussing on lexical simplification (Kriz et al., 2019); and Access, which is based on the transformer architecture (Martin et al., 2019). Finally, we compare with a supervised edit-based neural model, Edit-NTS (Dong et al., 2019). We evaluate our model with a different subset of operations, i.e., removal (RM), extraction (EX), reordering (RO), and lexical substitution (LS). In our experiments, we test the following variants: RM+EX, RM+EX+LS, RM+EX+RO, and RM+EX+LS+RO. 4.4 Automatic Evaluation Tables 1 and 2 present the results of the automatic evaluation on the Newsela and WikiLarge datasets, respectively. We use the SARI metric (Xu et al., 2016) to measure the simplicity of the generated sentences. SARI computes the arithmetic mean of the n-gram F1 scores of three rewrite operations: adding, deleting, and keeping. 
The individual F1-scores of these operations are reported in the columns “Add,” “Delete,” and “Keep.” We also compute the BLEU score (Papineni et al., 2002) to measure the closeness between a candidate and a reference. Xu et al. (2016) and Sulem et al. (2018) show that BLEU correlates with human judgement on fluency and meaning preservation for text simplification.4 4This does not hold when sentence splitting is involved. In our datasets, however, sentence splitting is rare, for example, 0.18% in the Newsela validation set). 7924 Method SARI↑ Add↑ Delete↑ Keep↑ BLEU↑ FKGL↓ Len Baselines Complex 27.87 99.39 22.61 Supervised Methods PBMT-R 38.56 5.73 36.93 73.02 81.09 8.33 22.35 Hybrid 31.40 1.84 45.48 46.87 48.67 4.56 13.38 EncDecA 35.66 2.99 28.96 75.02 89.03 8.42 21.26 Dress 37.08 2.94 43.15 65.15 77.41 6.59 16.14 Dress-Ls 37.27 2.81 42.22 66.77 80.44 6.62 16.39 Edit-NTS 38.23 3.36 39.15 72.13 86.69 7.30 18.87 EntPar 37.45 81.49 7.41 Access 41.87 7.28 45.79 72.53 75.46 7.22 22.27 Models using external knowledge base SBMT-SARI 39.96 5.96 41.42 72.52 73.03 7.29 23.44 DMass 40.45 5.72 42.23 73.41 7.79 Unsupervised Methods UNMT 35.89 1.94 37.68 68.04 70.61 8.23 21.85 UNTS 37.20 1.50 41.27 68.81 74.02 7.84 19.05 RM+EX 36.46 1.68 35.17 72.54 88.90 6.47 18.62 RM+EX+LS 37.85 2.31 43.65 67.59 73.62 6.30 18.45 RM+EX+RO 36.54 1.73 36.10 71.79 85.07 6.89 19.24 RM+EX+LS+RO 37.58 2.30 43.97 66.46 70.15 6.69 19.54 Table 2: Results on the WikiLarge dataset. ↑The higher, the better. ↓The lower, the better. In addition, we include a few intrinsic measures (without reference) to evaluate the quality of a candidate sentence: the Flesch–Kincaid grade level (FKGL) evaluating the ease of reading, as well as the average length of the sentence. A few recent text simplification studies (Dong et al., 2019; Kriz et al., 2019) did not use BLEU for evaluation, noticing that the complex sentence itself achieves a high BLEU score (albeit a low SARI score), since the complex sentence is indeed fluent and preserves meaning. This is also shown by our Complex baseline. For the Newsela dataset, however, we notice that the major contribution to the SARI score is from the deletion operation. By analyzing previous work such as EntPar, we find that it reduces the sentence length to a large extent, and achieves high SARI due to the extremely high F1 score of “Delete.” However, its BLEU score is low, showing the lack of fluency and meaning. This is also seen from the high SARI of (Reduce-250) in Table 1. Ideally, we want both high SARI and high BLEU, and thus, we calculate the geometric mean (GM) of them as the main evaluation metric for the Newsela dataset. On the other hand, this is not the case for WikiLarge, since none of the models can achieve high SARI by using only one operation among “Add,” “Delete,” and “Keep.” Moreover, the complex sentence itself yields an almost perfect BLEU score (partially due to the multi-reference nature of WikiLarge). Thus, we do not use GM, and for this dataset, SARI is our main evaluation metric. Overall results on Newsela. Table 1 shows the results on Newsela. By default (without †), validation is performed using the GM score. Still, our unsupervised text simplification achieves a SARI score around 26–27, outperforming quite a few supervised methods. Further, we experiment with SARI-based validation (denoted by †), following the setting of most previous work (Dong et al., 2019; Guo et al., 2018). We achieve 30.44 SARI, which is competitive with state-of-the-art supervised methods. 
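As an aside on the geometric-mean (GM) criterion used for validation here, the sketch below shows one way the aggregation could be computed. It assumes tokenized outputs and references, uses NLTK's corpus BLEU with an arbitrary smoothing choice, and treats the SARI computation as an external callable (e.g., from a toolkit such as EASSE); none of this is necessarily the authors' exact evaluation script.

```python
import math
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction


def gm_metric(hypotheses, references, corpus_sari):
    """Geometric mean of SARI and BLEU, as used for model selection on Newsela.

    hypotheses: list of tokenized system outputs.
    references: list of reference sets, one per sentence (each reference tokenized).
    corpus_sari: external callable returning SARI on a 0-100 scale (not defined here).
    """
    bleu = 100 * corpus_bleu(references, hypotheses,
                             smoothing_function=SmoothingFunction().method3)
    sari = corpus_sari(hypotheses, references)
    return math.sqrt(sari * bleu), sari, bleu
```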
Our model also achieves high BLEU scores. As seen, all our variants, if validated by GM (without †), outperform competing methods in BLEU. One of the reasons is that our model performs text simplification by making edits on the original sentence instead of rewriting it from scratch. In terms of the geometric mean (GM), our unsupervised approach outperforms all previous work, showing a good balance between simplicity and content preservation. The readability of our generated sentences is further confirmed by the intrinsic FKGL score. 7925 Method SARI↑ Add↑ Delete↑ Keep↑ BLEU↑ GM↑ FKGL↓ Len RM+EX+LS+RO 27.11 2.40 71.26 7.67 26.21 26.66 3.12 12.81 −SLOR 27.63 2.22 73.20 7.49 24.14 25.83 2.61 12.37 −syntax-awareness 26.91 2.16 71.19 7.39 24.98 25.93 3.65 12.76 Table 3: Ablation test of the SLOR score based on syntax-aware language modeling. Value SARI↑ BLEU↑ GM↑ FRE↑ Len Effect of threshold rop 1.0 29.20 21.69 25.17 83.75 11.75 1.1 28.38 23.59 25.87 82.83 12.17 1.2 27.45 25.54 26.48 81.98 12.62 1.3 26.60 26.47 26.53 81.47 13.07 Effect of weight α for feslor 0.75 27.04 25.75 26.39 83.46 12.46 1.25 26.91 25.96 26.43 81.26 12.96 1.50 26.74 25.20 25.96 80.94 13.06 2.0 26.83 24.29 25.53 80.11 13.15 Effect of weight β for ffre 0.5 26.42 25.53 25.97 78.61 13.20 1.5 27.38 26.04 26.70 84.31 12.58 2.0 27.83 25.27 26.52 87.03 12.26 3.0 28.29 23.69 26.52 90.34 11.91 Effect of weight γ for 1/flen 0.5 24.54 25.06 24.80 80.49 14.55 2.0 29.00 21.65 25.06 82.69 10.93 3.0 29.93 19.05 23.88 82.20 10.09 4.0 30.44 17.36 22.99 80.86 9.61 Effect of weight δ for fentity 0.5 27.81 24.68 26.20 83.6 12.01 2.0 25.44 24.63 25.03 79.36 14.28 Table 4: Analysis of the threshold value of the stopping criteria and relative weights in the scoring function. Overall results on WikiLarge. For the Wikilarge experiments in Table 2, we perform validation on SARI, which is the main metric in this experiment. Our model outperforms existing unsupervised methods, and is also competitive with state-of-the-art supervised methods. We observe that lexical simplification (LS) is important in this dataset, as its improvement is large compared with the Newsela experiment in Table 1. Additionally, reordering (RO) does not improve performance, as it is known that WikiLarge does not focus on syntactic simplification (Xu et al., 2016). The best performance for this experiment is obtained by the RM+EX+LS model. 4.5 Controllability We now perform a detailed analysis of the scoring function described in Section 3.2 to understand the effect on different aspects of simplification. We use the RM+EX+LS+RO variant and the Newsela corpus as the testbed. The SLOR score with syntax-aware LM. We analyze our syntax-aware SLOR score in the search objective. First, we remove the SLOR score and use the standard sentence probability. We observe that SLOR helps preserve rare words, which may be entities. As a result, the readability score (FKGL) becomes better (i.e., lower), but the BLEU score decreases. We then evaluate the importance of using a structural LM instead of a standard LM. We see a decrease in both SARI and BLEU scores. In both cases, the GM score decreases. Threshold values and relative weights. Table 4 analyzes the effect of the hyperparameters of our model, namely, the threshold in the stopping criteria and the relative weights in the scoring function. As discussed in Section 3.4, we use a threshold as the stopping criteria for our iterative search algorithm. 
For each operation, we require that a new candidate should be better than the previous iteration by a multiplicative threshold r_op in Equation (3). In this analysis, we set the same threshold for all operations for simplicity. As seen in Table 4, increasing the threshold leads to better meaning preservation since the model is more conservative (making fewer edits). This is shown by the higher BLEU and lower SARI scores. Regarding the weights for each individual scoring function, we find that increasing the weight β for the FRE readability score makes sentences shorter, more readable, and thus simpler. This is also indicated by higher SARI values. When sentences are rewarded for being short (with large γ), SARI increases but BLEU decreases, showing less meaning preservation. The readability scores initially increase with the reduction in length, but then decrease. Finally, if we increase the weight δ for the entity score, the sentences become longer and more complex, since the model is penalized more for deleting entities. In summary, the above analysis shows the controllability of our approach in terms of different simplification aspects, such as simplicity, meaning preservation, and readability.

4.6 Human Evaluation

We conducted a human evaluation on the Newsela dataset since automated metrics may be insufficient for evaluating text generation. We chose 30 sentences from the test set for annotation and considered a subset of baselines. For our model variants, we chose RM+EX+LS+RO, considering both validation settings (GM and SARI). We followed the evaluation setup in Dong et al. (2019), and measured adequacy (How much meaning from the original sentence is preserved?), simplicity (Is the output simpler than the original sentence?), and fluency (Is the output grammatical?) on a five-point Likert scale. We recruited three volunteers, one native English speaker and two non-native fluent English speakers. Each volunteer was given 30 sentences from different models (and references) in a randomized order. Additionally, we asked the volunteers to measure the number of instances where models produce incorrect details or generate text that is not implied by the original sentence. We did this because neural models are known to hallucinate information (Rohrbach et al., 2018). We report the average count of false information per sentence, denoted as FI. We observe that our model RM+EX+LS+RO (when validated by GM) performs better than Hybrid, a combination of PBMT and discourse representation structures, in all aspects. It also performs competitively with the remaining supervised NMT models. For adequacy and fluency, Dress-Ls performs the best since it produces relatively longer sentences. For simplicity, S2S-All-FA performs the best since it produces shorter sentences. Thus, a balance is needed between these three measures. As seen, RM+EX+LS+RO ranks second in terms of the average score in the list (reference excluded). The human evaluation confirms the effectiveness of our unsupervised text simplification, even when compared with supervised methods. We also compare our model variants RM+EX+LS+RO (validated by GM) and RM+EX+LS+RO† (validated by SARI). As expected, the latter generates shorter sentences, performing better in simplicity but worse in adequacy and fluency.
Regarding false information (FI), we observe that previous neural models tend to generate more false information, possibly due to the vagueness in Method A↑ S↑ F↑ Avg↑ FI↓ Hybrid 2.63 2.74 2.39 2.59 0.03 Dress-Ls 3.29 3.05 4.11 3.48 0.2 EntPar 1.92 2.97 3.16 2.68 0.47 S2S-All-FA 2.25 3.24 3.90 3.13 0.3 Edit-NTS 2.37 3.17 3.73 3.09 0.23 RM+EX+LS+RO 2.97 3.09 3.78 3.28 0.03 RM+EX+LS+RO† 2.58 3.21 3.33 3.04 0.07 Reference 2.91 3.49 4.46 3.62 0.77 Table 5: Human evaluation on Newsela, where we measure adequacy (A), simplicity (S), fluency (F), and their average score (Avg), based on 1–5 Likert scale. We also count average instances of false information per sentence (FI). the continuous space. By contrast, our approach only uses neural networks in the scoring function, but performs discrete edits of words and phrases. Thus, we achieve high fidelity (low FI) similar to the non-neural Hybrid model, which also performs editing on discourse parsing structures with PBMT. In summary, our model takes advantage of both neural networks (achieving high adequacy, simplicity, and fluency) and traditional phrase-based approaches (achieving high fidelity). Interestingly, the reference of Newsela has a poor (high) FI score, because the editors wrote simplifications at the document level, rather than the sentence level. 5 Conclusion We proposed an iterative, edit-based approach to text simplification. Our approach works in an unsupervised manner that does not require a parallel corpus for training. In future work, we plan to add paraphrase generation to generate diverse simple sentences. Acknowledgments We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), under grant Nos. RGPIN-2019-04897, RGPIN-2020-04465, and the Canada Research Chair Program. Lili Mou is also supported by AltaML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program. This research was supported in part by Compute Canada (www. computecanada.ca). 7927 References Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP), pages 295–305. Joachim Bingel and Anders Søgaard. 2016. Text simplification as tree labeling. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Short Papers, pages 337– 343. John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplifying text for language-impaired readers. In Ninth Conference of the European Chapter of the Association for Computational Linguistics (EACL). R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3393–3402. Richard Evans, Constantin Or˘asan, and Iustin Dornescu. 2014. 
An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 131–140. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Dynamic multi-level multi-task learning for sentence simplification. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 462–476. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800. Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-level fluency evaluation: References help, but can be spared! In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL), pages 313–323. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Beata Beigman Klebanov, Kevin Knight, and Daniel Marcu. 2004. Text simplification for informationseeking applications. In OTM Confederated International Conferences” On the Move to Meaningful Internet Systems”, pages 735–747. Springer. Reno Kriz, Jo˜ao Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki, and Chris Callison-Burch. 2019. Complexity-weighted loss and diverse reranking for sentence simplification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 3137–3147. Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41(5):1202–1241. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies NAACL-HLT, pages 1865– 1874. Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, and Sen Song. 2020. Unsupervised paraphrasing by simulated annealing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics (ACL): System Demonstrations, pages 55– 60. Louis Martin, Benoˆıt Sagot, ´Eric de la Clergerie, and Antoine Bordes. 2019. Controllable sentence simplification. arXiv preprint arXiv:1910.02677. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6834–6842. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. George A. Miller. 1995. Wordnet: a lexical database for english. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 435–445. 7928 Shashi Narayan and Claire Gardent. 2016. 
Unsupervised sentence simplification using deep semantics. In Proceedings of the 9th International Natural Language Generation conference (INLG), pages 111– 120. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Short Papers, pages 85–91. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318. Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 959–968. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Boris T Polyak and Anatoli B Juditsky. 1992. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855. Luz Rello, Clara Bayarri, Azuki G`orriz, Ricardo BaezaYates, Saurabh Gupta, Gaurang Kanvinde, Horacio Saggion, Stefan Bott, Roberto Carlini, and Vasile Topac. 2013. Dyswebxia 2.0!: more accessible text for people with dyslexia. In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, page 25. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4035–4045. Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, and Katja Markert. 2020. Discrete optimization for unsupervised sentence summarization with word-level extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Advaith Siddharthan. 2002. An architecture for a text simplification system. In Language Engineering Conference, 2002. Proceedings, pages 64–71. IEEE. Elior Sulem, Omri Abend, and Ari Rappoport. 2018. BLEU is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 738–744. Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain, and Karthik Sankaranarayanan. 2019. Unsupervised neural text simplification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2058–2068. I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. Advances in NIPS. Willian Massami Watanabe, Arnaldo Candido Junior, Vin´ıcius Rodriguez Uzˆeda, Renata Pontin de Mattos Fortes, Thiago Alexandre Salgueiro Pardo, and Sandra Maria Alu´ısio. 2009. Facilita: reading assistance for low-literacy readers. In Proceedings of the 27th ACM international conference on Design of communication, pages 29–36. Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1015–1024. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. 
Transactions of the Association for Computational Linguistics (TACL), 3:283–297. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 584–594. Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018a. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3164–3173. Yang Zhao, Zhiyuan Luo, and Akiko Aizawa. 2018b. A language model based evaluator for sentence compression. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Short Papers, pages 170–175. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1353–1361.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7929–7942 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7929 Logical Natural Language Generation from Open-Domain Tables Wenhu Chen1, Jianshu Chen2, Yu Su3, Zhiyu Chen1 and William Yang Wang1 University of California, Santa Barbara, CA, USA1 Tencent AI Lab, Bellevue, WA, USA2 The Ohio State University, Columbus, Ohio, USA3 {wenhuchen, zhiyuchen, william}@cs.ucsb.edu [email protected], [email protected] Abstract Neural natural language generation (NLG) models have recently shown remarkable progress in fluency and coherence. However, existing studies on neural NLG are primarily focused on surface-level realizations with limited emphasis on logical inference, an important aspect of human thinking and language. In this paper, we suggest a new NLG task where a model is tasked with generating natural language statements that can be logically entailed by the facts in an open-domain semi-structured table. To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019) featured with a wide range of logical/symbolic inferences as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference. The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order. In our experiments, we comprehensively survey different generation architectures (LSTM, Transformer, Pre-Trained LM) trained with different algorithms (RL, Adversarial Training, Coarse-toFine) on the dataset and made following observations: 1) Pre-Trained LM can significantly boost both the fluency and logical fidelity metrics, 2) RL and Adversarial Training are trading fluency for fidelity, 3) Coarse-to-Fine generation can help partially alleviate the fidelity issue while maintaining high language fluency. The code and data are available at https: //github.com/wenhuchen/LogicNLG. 1 Introduction Neural network models, especially the recent wave of massive models like BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), have shown the ability to generate natural language text at an astonishing level of fluency and coherence. For the generated text to fulfill its purpose, however, a critNation Gold Medal Silver Medal Bronze Medal Sports Canada 3 1 2 Ice Hockey Mexico 2 3 1 Baseball Colombia 1 3 0 Roller Skating Sentence: Canada obtained 1 more gold medal than Mexico. Sentence: Canada obtained the most gold medals in the game. Medal Table from Tournament Sentence: Canada has got 3 gold medals in the tournament. Sentence: Mexico got 3 silver medals and 1 bronze medal. Surface-level Generation Logical Natural Language Generation Figure 1: Table-to-text generation examples with and without implicit logical inference. Logical NLG requires a generation model to generate natural language statements that can be logically entailed by the facts in the table instead of simply restating certain superficial facts in natural language. ical property that is necessary but often overlooked is fidelity, i.e., what is generated should be faithful to the underlying data, knowledge, or meaning representation. 
A line of recent work has started to address the surface-level fidelity issue of natural language generation (NLG) by encouraging the model to learn to reuse the verbatim of certain inputs through copy mechanism (See et al., 2017; Gu et al., 2016; Wiseman et al., 2017; Liu et al., 2018), structured attention (Liu et al., 2018), or planning and selection/entity modeling (Puduppully et al., 2019a,b). While shown to be effective, most such methods so far are primarily focused on surfacelevel realization and simply restate the facts in the underlying data (Figure 1). However, humans have the ability to generalize beyond superficial facts (e.g., “Canada has got 3 gold medals.”) by inferring and communicating with new statements that can be entailed from these facts (e.g., “Canada obtained the most gold medals.”). We believe it is important for NLG models to be able to generalize beyond the superficla facts given to them as well. Therefore, we propose a new task, logical NLG, where a model is tasked 7930 Colombia has 4 medals in total. 5 ? ? ? 2 more silver medals than Canada.[Logic: Diff] [Logic: Total] [Wrong ] ? ? ? Figure 2: When making the decision at the third step, the model needs to foresee the future tokens to ensure logical consistency. There is no back-tracking once the model makes a wrong decision like “5”. with generating natural language statements that can be logically entailed by the given data (i.e., the premises). The new task requires a model to jointly reason and generate sentences that are consistent both linguistically and logically. Since there are a variety of reasoning/inference tasks such as natural language inference (Bowman et al., 2015) and commonsense reasoning (Talmor et al., 2019), to avoid confusion, this paper is specifically focused on inferences involving symbolic operations over the given table (Pasupat and Liang, 2015). To empower research in this direction, we collect a new corpus LOGICNLG based on the existing TabFact (Chen et al., 2019), which brings two major renovations to the existing NLG paradigm: 1) the text involves diversified types of logical inferences including math operations like max/min/sum/add, comparison operations like same/different, and counting operations like total/only. A more detailed description of logical inference is listed in the Appendix. 2) while existing datasets are often restricted to a specific domain such as weather (Liang et al., 2009), restaurant (Duˇsek et al., 2019), NBA (Wiseman et al., 2017), etc, LOGICNLG uses open-domain tables without prior knowledge about their schema. As such, existing methods based on surface-level copying (See et al., 2017; Gu et al., 2016; Puduppully et al., 2019a) becomes insufficient, so are the existing fidelity evaluation based on the surfacelevel information extraction (Wiseman et al., 2017; Rohrbach et al., 2018; Dhingra et al., 2019), which extracts surface triples in a certain pre-defined form (i.e. subj-pred-obj, n-gram) and compare them with the surface content given in the knowledge. Most neural generation models follow a monotonic generation schema from left to right with the current prediction only depending on the preceding words. Logical NLG poses unique challenges to the traditional generation scheme due to the mismatch between sequence order and logical order. 
As illustrated in Figure 2, the word “2” is derived from the logical inference of ‘diff(Silver medal of Colombia, Silver medal of Canada)) →2.’ In other words, the logical order of word “2” should be after “more”, “silver”, and “Canada”, while the sequence order of “2” is before those words. Since the monotonic generation scheme is purely based on sequence order while agnostic to logical order, existing NLG models struggle to maintain the fidelity as they cannot model the logical dependency on future tokens. To alleviate such an order mismatch, an NLG model must have the capability to plan ahead for the next few steps before generation. In this context, we believe LOGICNLG to be an important testbed to study such a planing/inference ability in generation models (Ford et al., 2018; Welleck et al., 2019). In this paper, we further propose a non-monotonic coarse-to-fine generation model and show that it is able to alleviate the order mismatch problem and achieve better performance. The contribution of this work is three-fold: i) We propose a new research problem of logical natural language generation, and provide novel metrics to approximately evaluate the logical fidelity of generation models. ii) We justify the mismatch problem between sequence order and logical order of the traditional monotonic generation scheme in logical NLG. iii) We conduct comprehensive experiments with state-of-the-art neural generation models under both automatic and human evaluation, which demonstrates the challenges and opportunities for future research on logic NLG. 2 Dataset and Problem Definition Existing NLG datasets (Chen and Mooney, 2008; Duˇsek et al., 2019; Lebret et al., 2016; Liang et al., 2009) are mainly composed of surface-level description over the given records. Though ROTOWIRE (Wiseman et al., 2017) involves sporadic inference in the long document, and the inference is restricted to domain-specific knowledge (e.g. double-double, smash, triple-double and other NBA-related terms). Hence, we need a better testbed for studying the proposed problem. Statistics We construct a dataset based on TabFact (Chen et al., 2019), which is a table-based factchecking dataset with rich logical inferences in the annotated statements. Specifically, we took their positive statements (the sentences which are en7931 Vocab Examples Vocab/Sent Tables Domain Source Inference Schema WEATHERGOV 394 22.1K 0.01 22.1K Weather Crawled No Known WikiBIO 400K 728K 0.54 728K Biography Crawled No Limited ROTOWIRE 11.3K 4.9K 0.72 4.9K NBA Annotated Few Known LOGICNLG 122K 37.0K 3.31 7.3K Open Annotated Rich Unlimited Table 1: Comparison of LOGICNLG against existing NLG datasets in different aspects. Nation Gold Medal Silver Medal Canada 3 1 Mexico 2 3 Colombia 1 3 Canada obtained 3 gold medals during the tournament. Canada obtained 1 more gold medal than Mexico. Canada obtained the most gold medals. Colombia has 4 medals in total. (Canada,Gold,3) Fail to extract triple Fail to extract triple (Colombia, Medal, 4) Logical Inference Surface Level IE Verify: Supported Verify: Refuted Figure 3: Evaluation of surface-level generation vs. logical natural language generation. It suffices to use IE-based evaluation (Wiseman et al., 2017; Rohrbach et al., 2018) to verify surface-level generation, but it causes either “empty triple” or “false negative” problems to verify logical NLG. tailed by the knowledge in the table) collected from “complex channel” (required to annotate sentences with logical inference) as our target text. 
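To make concrete what it means for a statement to be logically entailed by a table (e.g., the diff example in Figure 2), the following sketch executes a few symbolic operations over the medal table of Figure 1. This is our own illustration, not the parser used in this paper; the function names filter_rows, hop, diff and eq are informal stand-ins for the Filter/Hop/Diff/Eq operators mentioned above.

# Illustrative sketch: checking whether "Canada obtained 1 more gold medal
# than Mexico" is supported by the table via symbolic operations.
table = [
    {"Nation": "Canada",   "Gold Medal": 3, "Silver Medal": 1},
    {"Nation": "Mexico",   "Gold Medal": 2, "Silver Medal": 3},
    {"Nation": "Colombia", "Gold Medal": 1, "Silver Medal": 3},
]

def filter_rows(rows, column, value):
    # Filter(column == value): keep rows whose cell matches the value.
    return [r for r in rows if r[column] == value]

def hop(rows, column):
    # Hop(rows, column): project the single selected row onto a column.
    assert len(rows) == 1
    return rows[0][column]

def diff(a, b):
    return a - b

def eq(a, b):
    return a == b

canada_gold = hop(filter_rows(table, "Nation", "Canada"), "Gold Medal")
mexico_gold = hop(filter_rows(table, "Nation", "Mexico"), "Gold Medal")
print(eq(diff(canada_gold, mexico_gold), 1))  # True -> statement is entailed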
To prevent confusion with the original dataset, we name this table-to-text dataset as LOGICNLG, which contains 28,450 training, 4,260 validation and 4,305 test examples based on 7,392 open-domain tables crawled from Wikipedia. Each table has 5 different examples covering diverse types of logical inference. More detailed statistics and comparisons are listed in Table 1. LOGICNLG is distinguished from the existing datasets due to: i) It involves very rich logical inference, every annotated sentence involves certain types of inference with minimum domain-specific knowledge. The open-domain characteristic simulates a realistic setting, where we cannot enumerate the possible inference based on the scheme, which poses great challenges to the model’s generalization capability. ii) It is mainly composed of short sentences with an average length of 11 and a simple syntactic structure, which isolates from other linguistic complexity to focus on the problem of logical inference. The dataset contains tables with open schema crawled from diversified domains Figure 4. The major categories are sports, politics, and entertainment. The schema diversity of the tables make the rule-based system infeasible to apply. Besides, most of the tables have very rich numeral records, which provide a great testbed for logical inference. Problem Definition Here, we formally define our proposed table-to-text generation task. The input is a table T with its title denoted as a natural language sequence W. The table T = {Ti,j|i ≤ RT , j ≤CT } has RT rows and CT columns with 0% 10% 20% 30% 40% Domain Distribution of Tables Team/Player (Sports) Compeition (Sports) Politics Entertaiment Celebrity Science Figure 4: The domain distribution of LOGICNLG. the Tij being the content in the (i, j)-th cell. Tij could be a word, a number, a phrase or even a natural language sentence. The annotated statement is a sentence Y = y1, y2, · · · , yn, we aim to train a neural generation model p(Y |T) to generate statement ˆY which are both fluent and logically (numerically) supported by the given table T. 3 Automatic Evaluation In this section, we discuss the evaluation of our proposed NLG task. The fluency evaluation is simply based on the standard metrics like Perplexity (Bengio et al., 2003) and BLEU-1,2,3 (Papineni et al., 2002) based on NLTK (Bird, 2006). The most challenging problem is to evaluate the logical fidelity of the generated sentences, which is also the core problem of our paper. The existing IE-based extractive evaluation (Wiseman et al., 2017) leads to two issues as shown in Figure 3: 1) Empty Extraction: the sentence can not be formulated as (subject, predicate, object) structure, thus the IE system fail to extract triples for verification. 2) False Negative: the sentence is a logical composition (instead of surface form) of the fact from the table, the IE system cannot match it against the table. For these reasons, we test two approximate automatic metrics: 7932 Sentence: Canada obtained 1 more gold medal than Mexico Eq(Hop(Filter(Nation==Canada), Gold Medal)… 1) Parsing [Link->Search] True False Sentence: Canada obtained 1 more gold medal than Mexico Table: In the first row …. In the second row, …. Linearize NLI Orig: Canada obtained 1 more gold medal than Mexico Adv: Canada obtained 1 less gold medal than Mexico Model 𝑝(𝑌|𝑇) 𝑝(𝑌!"#|𝑇) > 𝑝$%&(𝑌|𝑇) Execute ✓ ✕ ✓ ✕ ✓ ✕ Figure 5: The parsing-based and adversarial evaluation to measure model’s correctness in logical reasoning. 
Parsing-based Evaluation We first propose a model-based evaluation method, which aims to directly extract the meaning representation from the generated sentence and execute it against the table to verify its correctness. Our evaluation is based on weakly-supervised semantic parsing (Liang et al., 2009, 2013): we first link entities and predicates in the sentence, then use the linked entities to perform a breadth-first search that synthesizes candidate logical forms, and finally apply a scorer to re-rank these logical forms and filter out spurious ones. A logical form returns a binary value of True to indicate whether its logic is supported by the knowledge. The basic idea is shown in the upper part of Figure 5; the implementation details are in the Appendix. We pre-train the semantic parser $f_\gamma$ on the training set $(T, Y) \in D_{train}$ with a weakly supervised algorithm. At test time, we use it to parse a sentence $\hat{Y}$ into a set of logical forms, which are re-ranked to obtain the highest-scoring logical form $P_{best}$. We compute the ratio of $P_{best}$ returning "True" on $D_{test}$ to approximate the model's fidelity:

$$\text{SP-Acc} = \mathbb{E}_{(T, \hat{Y}) \in D_{test}} \big[ \mathbb{I}\big(P_{best} \rightarrow \text{True} \mid P_{best} = f_\gamma(\hat{Y})\big) \big]$$

where $\mathbb{I}$ is the indicator function.

NLI-based Evaluation We then propose another model-based evaluation method to complement the parsing-based evaluation, which is sensitive to semantic variation. The basic idea follows Kryściński et al. (2019) and evaluates the entailment score between the table and the generated sentence. The NLI model is based on TableBERT (Chen et al., 2019), which linearizes the table into textual form and uses it as the evidence for natural language inference. The model is trained on the TabFact (Chen et al., 2019) dataset, which contains both positive and negative samples. During evaluation, we use this NLI model to predict the entailment relationship based on the likelihood $p_{NLI}(\hat{Y}|T)$. Finally, we compute the ratio of "entailed" sentences to approximate the model's fidelity:

$$\text{NLI-Acc} = \mathbb{E}_{(T, \hat{Y}) \in D_{test}} \big[ \mathbb{I}\big(p_{NLI}(\hat{Y}|T) > 0.5\big) \big]$$

where $\mathbb{I}$ is the indicator function.

Adversarial Evaluation Adversarial evaluation (Goodfellow et al., 2014; Kannan and Vinyals, 2017) is used to study the generation model's robustness in logical reasoning. Specifically, we hire human workers from Amazon Mechanical Turk1 to annotate adversarial examples for the test/validation set by changing a minimum number of words to revert the logic of the sentence. Such adversarial examples preserve linguistic properties like length and style except for the logic-related words, which specifically isolates the generation model's reasoning skill. As shown in the lower part of Figure 5, the adversarial example changes the word "more" in the original sentence into "less". Workers need to follow two principles for their annotations to be accepted: 1) the modified words/phrases should be roughly equally frequent to balance the language prior; for example, the number "1" is better swapped with "2" or "3" rather than "9999", which rarely appears in the corpus; 2) the perturbations should be diverse enough to cover different aspects of logical reasoning skills. We use the generation model $p(Y|T; \beta)$ to score the original sentence $Y$ and the adversarial sentence $Y_{adv}$. If the confidence of the original example is higher than that of its adversarial counterpart, we count it as a successful defense, otherwise as a failed defense. We use the success rate to approximate the model's logical reasoning capability:

$$\text{Adv-Acc} = \mathbb{E}_{(T, Y, Y_{adv}) \in D_{test}} \big[ \mathbb{I}\big(p(Y|T) > p(Y_{adv}|T)\big) \big]$$

where $\mathbb{I}$ is the indicator function.
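Operationally, all three fidelity metrics reduce to averaging an indicator function over the test set. The following sketch is ours, not the released evaluation code; the callables parse_and_execute, nli_entail_prob and lm_log_prob are hypothetical stand-ins for the semantic parser f_gamma, the TableBERT NLI model, and the generation model p(Y|T).

from statistics import mean

def sp_acc(test_set, parse_and_execute):
    # SP-Acc: fraction of generated sentences whose best logical form
    # executes to True against the table.
    return mean(1.0 if parse_and_execute(y_hat, table) else 0.0
                for table, y_hat in test_set)

def nli_acc(test_set, nli_entail_prob):
    # NLI-Acc: fraction of generated sentences the NLI model judges
    # entailed by the linearized table (entailment probability > 0.5).
    return mean(1.0 if nli_entail_prob(y_hat, table) > 0.5 else 0.0
                for table, y_hat in test_set)

def adv_acc(adv_set, lm_log_prob):
    # Adv-Acc: fraction of pairs where the model scores the original
    # sentence higher than its logic-flipped adversarial counterpart.
    return mean(1.0 if lm_log_prob(y, table) > lm_log_prob(y_adv, table)
                else 0.0
                for table, y, y_adv in adv_set)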
1https://www.mturk.com/ 7933 Discussion Both types of metrics have pros and cons, the SP-Acc and NLI-Acc are two metrics unbiased as it measures the peak samples in the model’s likelihood, however, both metrics are based on imperfect models and thus their evaluation scores are inaccurate. SP-Acc is more sensitive to number/calculation errors, and NLI-Acc is more sensitive to semantic errors, therefore, we report both of them to help increase the metrics’ robustness. In contrast, the adversarial evaluation score is accurate in terms of reflecting the model’s reasoning capability on the given samples. However, as the provided samples might not lie in the high-confidence area of the model’s distribution, it is biased in reflecting the model’s general reasoning capability. Though these fidelity metric models are prone to errors, in section 6, we show their consistency with human judgment, which reveals their potential to assist human evaluation. 4 Baselines In this section, we design comprehensive baseline models to perform logical NLG. Specifically, we consider the following two cases: non-pretrained models (LSTM/Transformer) with copy mechanism and pre-trained models (GPT-2 and BERT) with sub-word unit. We train these models with three different algorithms: Maximum Likelihood, Adversarial Training, and Reinforcement Learning. 4.1 Non-pretrained Models Here we mainly consider two table encoding methods, namely field-infusing and field-gating. These two methods differ in their strategies to coalesce the field information into cells. After the table is represented as a sequence of vectors, a decoder based on LSTM (Hochreiter and Schmidhuber, 1997) or Transformer (Vaswani et al., 2017) is applied to generate text token by token. The two methods are depicted in the upper part of Figure 6: Field-Infusing This strategy is inspired by Lebret et al. (2016). We first use an LSTM (Hochreiter and Schmidhuber, 1997) to encode the table field text word by word and then use the last output zi as field representation. This representation is concatenated with the embedding of row index #j and word embedding at each cell to obtain a position-aware cell embedding ek for each word inside the cell. We stack transformers layers on top of the cell embedding to obtain the table representation as hi ∈RD with D as the dimension. Field-Gating This strategy is inspired by by Liu et al. (2018). Like the previous strategy, we first use an LSTM (Hochreiter and Schmidhuber, 1997) to obtain field representation zi. The field representation is concatenated with ending distance information as the input to an additional field gate built inside the LSTM as suggested in Liu et al. (2018), such a field gate is used to control whether the current cell is already encoded. Such a mechanism can help LSTM to identify the boundary between different cells to grasp local information. 4.2 Pre-trained Models To further enhance the fluency and resolve the out-of-vocabulary problem, we use pre-trained language models and finetune them on LOGICNLG. Specifically, we consider two models based on GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2019), respectively, and name them as GPTTableGen and BERT-TableGen. 
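As a minimal sketch (ours, not the released code), the two pre-trained backbones can be loaded with Huggingface's transformers library, on which the experiments in Section 6.1 are built; the specific checkpoint names "gpt2" and "bert-base-uncased" are assumptions for illustration.

from transformers import (GPT2LMHeadModel, GPT2Tokenizer,
                          BertForMaskedLM, BertTokenizer)

# GPT backbone: a left-to-right LM that conditions on the linearized
# table and generates the statement token by token.
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2_lm = GPT2LMHeadModel.from_pretrained("gpt2")

# BERT backbone: a bidirectional masked LM, so generation requires
# re-encoding the partially masked target at every decoding step.
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert_mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")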
Table Linearization We follow previous work on linearizing knowledge base as natural language (Liu et al., 2019; Zhang et al., 2019) to propose “table linearization”, which uses template to flatten the table T as a document PT = w1, · · · , w|T| fed into pre-trained language models to generate statement Y , where we use wi to denote the i-th word in the generated paragraph PT and |T| to denote the length of the paragraph (the word wi is either a table entry or a functional word in the template). As depicted in the left bottom part of Figure 6, the original table T is transformed into a paragraph by horizontally scanning each cell T11 →T1,CT →TRT ,CT in the table. GPT-TabGen we directly feed the paragraph PT as the input to the pre-trained GPT-2 model and generate the output sentence Y . We finetune the model on LOGICNLG by maximizing the likelihood of p(Y |PT ; β), with β denoting the parameters of GPT-2 model (Radford et al., 2019). BERT-TabGen 1) we encode the linearized paragraph PT using the pre-trained BERT model into the source representation h1, · · · , h|T|. 2) at the i-th time step, we replace all the words in the groundtruth statement Y after i-th time step by <MASK> token and use BERT to encode the partially masked Y i as gi 1, · · · , gi n. 3) we use an attention layer fθ to obtain the output hidden states ˆgi 1, · · · , ˆgi n, where ˆgi i is used to predict the word ˆyi. We jointly optimize β of BERT and θ to maximize 7934 𝑧" 𝑧#," 𝑧#," 𝑧%," Nation Gold Medal Sports 𝑧# 𝑧% 𝑧" 𝑒" Canada 𝑒# #1 #1 3 #1 Ice Hockey 𝑒% 𝑒' #1 ℎ" ℎ# ℎ% ℎ' Field-Infusing Encoder 𝑧" 𝑧#," 𝑧#," 𝑧%," Nation,0 Gold, 0 Medal, 0 Sports, 1 𝑧# 𝑧% 𝑧" 𝑒" Canada 𝑒# 3 Ice 𝑒% 𝑒' ℎ" ℎ# ℎ% ℎ' 𝑧%," Sports, 0 𝑧% 𝑒) ℎ) Hockey First row Cell Embed X X X X #1 #1 #1 Pre-trained GPT-2 Columbia has 4 medals in total. Prefix Pre-trained BERT Columbia has [MASK] [MASK] [MASK]. Pre-trained BERT Multi-Layered Transformer Attention Given the table of “Tournament Medal Table”. In the 1st row, the nation is Canada, Gold Medal is 1, Silver Medal is 1, Sports is Ice Hockey. In the 2nd row, the nation is Mexico, Gold Medal is 2, Silver Medal 3, Sports is Baseball, … Roller Skating. Table Templatization 𝑃+ GPT-TabGen BERT-TabGen Non-Pretrained Model Pretrained Model #1 Decoder (LSTM/Transformer) ℎ, 𝑔, 𝑔., 𝑦.% = 4 𝑌% Field-Gated Encoder Decoder (LSTM/Transformer) Field words Figure 6: The Non-pretrained and Pre-trained generation models, the detailed table is shown in Figure 1. the likelihood of generating text Y conditioned on the table and the masked partial sentence. As BERT is a bidirectional model, we need to re-encode the target sentence at each step to get gi 1:n. Therefore, the generation is finished with n passes. 4.3 Training Except for the standard maximum likelihood training, we also use the following training algorithms: Adversarial Regularization To encourage the model to ground on the table rather than relying on artificial language priors (Ramakrishnan et al., 2018), we use an adversarial regularization to enhance the maximum likelihood training. Specifically, we first perform entity resolution to locate all the numbers, count, entities in the sentence and then randomly replace them with entities or numbers appearing in the table T. These perturbed samples Yadv are used as adversarial examples to regularize the model’s behavior. Formally, we optimize β to maximize the objective: argmax β log p(Y |T; β) −λ log p(Yadv|T; β) where λ is the controlling hyper-parameter. 
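The following sketch (ours) illustrates the two pieces above: a table linearization function whose template wording paraphrases the example shown in Figure 6, and the adversarially regularized objective written as a loss to minimize; log_prob is a hypothetical callable standing in for the GPT-2/BERT scorer.

def ordinal(i):
    return {1: "1st", 2: "2nd", 3: "3rd"}.get(i, f"{i}th")

def linearize_table(title, header, rows):
    # Flatten the table T into the template paragraph P_T that is fed to
    # the pre-trained LM, scanning cells row by row.
    parts = [f'Given the table of "{title}".']
    for i, row in enumerate(rows, start=1):
        cells = ", ".join(f"the {h} is {v}" for h, v in zip(header, row))
        parts.append(f"In the {ordinal(i)} row, {cells}.")
    return " ".join(parts)

def adv_regularized_loss(log_prob, table_paragraph, y, y_adv, lam=0.5):
    # Negative of the objective above, to be minimized: push the model up
    # on the true statement Y and down on the perturbed sample Y_adv.
    return -(log_prob(y, table_paragraph)
             - lam * log_prob(y_adv, table_paragraph))

p_t = linearize_table("Tournament Medal Table",
                      ["nation", "Gold Medal", "Silver Medal"],
                      [["Canada", "3", "1"], ["Mexico", "2", "3"]])
print(p_t)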
Reinforcement Learning The maximum likelihood training is a fluency-driven objective, which is inconsistent with the goal of logical consistency. To bridge the gap, we view the generation problem from the reinforcement learning perspective to optimize the long-term fidelity. We use the trained semantic parser to assign reward to the policy p(yi|y1:i−1; β). At i-th step, the generator will sample different actions yi and roll-out from i + 1th step to produce a full sequence starting from yi using greedy search. The full sentence receives a binary score r(Y, T) from the semantic parser as reward. Formally, we optimize the objective: argmax β E yi∼p(yi|y1:i−1)[ E yi+1:n[r(y1:n, T)]] log p(yi|y1:i−1; β) where we only use one trajectory to approximate the inner roll-out expectation for efficiency. 5 Coarse-to-Fine Generation As discussed before, the baseline models follow the monotonic generation scheme and suffer from the mismatch between sequence order and logical order (Figure 2). In this section, we propose an imperfect remedy for such a situation based on the coarse-to-fine generation paradigm. Before plunging into technical details, it is helpful to first realize the resemblance between logical NLG and semantic parsing (Dong and Lapata, 2018). Compared to traditional NLG tasks like machine translation and summarization, logical NLG is closer to semantic parsing in the sense that a model may make catastrophic errors that are impossible to be corrected at later steps (Figure 2). Therefore, we take inspiration from semantic parsing models (Dong and Lapata, 2018) that have proven effective in mitigating such errors and propose a coarse-to-fine generation scheme. We break down generation into two phases. In the first phase, 7935 𝑃! [ENT] GPT-2 Canada obtained 1 more gold medal than Mexico. obtained [ENT]. more [ENT] than [ENT] Figure 7: Coarse-to-fine generation scheme: first generates a template, and then realize the surface form. It exposes more context to the surface realization model for better capturing logical dependency. the model only generates a template which determines the global logical structure, while in the second phase the model generates the final, grounded sentence conditioned on the template generated in the first phase. As depicted in Figure 7, we use the entity linker (Section 3) to identify the entities and numbers in the original sentence Y and replace them with placeholder “[ENT]”, which we call as the template YT . During the generation of GPT-TabGen, instead of directly predicting the final sentence Y , we first predict the template YT and then Y . The process is simply realized by maximizing the overall likelihood of p( ˜Y |T; β), where ˜Y = [YT ; [SEP]; Y ]. Unlike template-based or delexicalized generation (Reiter and Dale, 1997; Wen et al., 2015), which uses rigid slot filling prone to grammatic errors, our fine-grained generation has the flexibility to modify the surface form of non-slot words, which alleviates the linguistic coherence problem (Sharma et al., 2017). By decoupling sentence structure generation and entity grounding, our proposed coarse-to-fine scheme could partially alleviate the mismatch problem. For example, the generation of “Canada” is now aware of “more than” in the latter part of the sentence, which exposes the model to more context than standard monotonic models to help make logically consistent decisions though the dependency on the “1” and “Mexico” is still not captured. 
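A minimal sketch (ours) of the coarse-to-fine input construction: in the paper the entity and number spans come from the entity linker of Section 3, whereas here they are passed in directly, and the naive string replacement plus the special tokens are illustrative choices only.

def make_coarse_to_fine_target(sentence, entity_spans,
                               ent_token="[ENT]", sep_token="[SEP]"):
    """Return the concatenated sequence ~Y = [Y_T ; [SEP] ; Y], where the
    template Y_T replaces every linked entity/number with a placeholder."""
    template = sentence
    for span in entity_spans:
        # Naive replacement; a real system would use the linker's offsets.
        template = template.replace(span, ent_token)
    return f"{template} {sep_token} {sentence}"

y = "Canada obtained 1 more gold medal than Mexico."
print(make_coarse_to_fine_target(y, ["Canada", "1", "Mexico"]))
# -> "[ENT] obtained [ENT] more gold medal than [ENT]. [SEP] Canada ..."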
The proposed two-step generation could be viewed as the first step towards a fully non-monotonic generation model to solve such mismatch problem. 6 Experiments In this section, we explain the experimental details and then comprehensively report the automatic evaluation of different generation models and training algorithms. Finally, we will conduct detailed human evaluation and error analysis. 6.1 Experiment Setup For the non-pretrained models, we fix the hidden size of both LSTM and transformer to be 256, the transformer is 3-layered with 4 heads, while LSTM is also 3-layered. We use Adam optimizer (Kingma 0 0.2 0.4 0.6 0.8 Non-Sense Wrong Partial Correct Correct Human Evaluation Results on Different Models Transoformer GPT-2 Adv-Reg RL Coarse-to-Fine 0 0.1 0.2 0.3 0.4 0.5 0.6 Superlative Only Before/After Count Comparison Both/Neither Sum/Diff Average Unique Generation Accuracy of different logic types Figure 8: The human evaluation results of different models on the sampled sentences. and Ba, 2015) with a learning rate of 2e-4 to jointly optimize the parameters and keep the model with the best perplexity on the validation set. During test time, we use a greedy search to generate text and calculate the BLEU-1,2,3 scores with the 5 references from the table. For the pre-trained models, we base our implementation on Huggingface’s Transformer (Wolf et al., 2019) for both BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) with subword unit vocabulary of 30K. During linearization, we found that using the whole table compromises the performance greatly, partly due to 1) over-length issue with pre-trained LM, 2) too much irrelevant information input. Therefore, we propose to use partial table as input, specifically, we run entity linking over the sentences to detect the linked columns of the table and only linearize the partial table as input PT . Both are finetuned using Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-6. In both adversarial training and reinforcement learning algorithms, we add maximum likelihood objective to stabilize the training, we select the appropriate balancing factor based on the validation Adv-Acc socre. For coarse-to-fine training, we first warm up the model to generate the template sequence and then finetune it on the concatenated full sequence. Model selection is based on the bleu-3 score on validation split. 7936 Model Training PPL BLEU-1 BLEU-2 BLEU-3 SP-Acc NLI-Acc Adv-Acc Field-Gating + LSTM MLE 27.7 42.3 19.5 6.9 38.0 56.8 56.2 Field-Gating + Trans MLE 26.8 44.1 20.9 8.3 38.5 57.3 58.1 Field-Infusing + LSTM MLE 27.9 43.1 19.7 7.1 38.6 57.1 56.9 Field-Infusing + Trans MLE 26.9 43.7 20.9 8.4 38.9 57.3 58.2 BERT-TabGen (sm) MLE 7.5 47.8 26.3 11.9 42.2 68.1 62.4 GPT-TabGen (sm) MLE 8.8 48.8 27.1 12.6 42.1 68.7 62.3 GPT-TabGen (sm) Adv-Reg 12.1 45.8 23.1 9.6 40.9 68.5 64.7 GPT-TabGen (sm) RL 11.3 45.1 23.6 9.1 43.1 67.7 61.9 GPT-Coarse-to-Fine (sm) MLE 46.6 26.8 13.3 42.7 72.2 64.9 BERT-TabGen (lg) MLE 6.3 49.1 27.7 13.5 44.4 73.9 64.0 GPT-TabGen (med) MLE 6.8 49.6 28.2 14.2 44.7 74.6 64.3 GPT-TabGen (med) Adv-Reg 10.1 47.2 24.0 10.8 44.1 73.0 65.4 GPT-TabGen (med) RL 10.0 46.4 24.1 10.0 45.5 73.3 63.7 GPT-Coarse-to-Fine (med) MLE 49.0 28.3 14.6 45.3 76.4 66.0 Table 2: The experimental results of different models on the test split of LOGICNLG, where we split the table into non-pretrained LSTM/Transformer, small pre-trained LM (sm) and medium/large pre-trained LM (med/lg). 
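For the fluency side of the evaluation described in Section 6.1, the sketch below (ours) computes BLEU-1/2/3 with NLTK against the five reference statements per table; the use of cumulative n-gram weights and this particular smoothing function are assumptions, since the exact configuration is not specified here.

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_1_2_3(hypotheses, references):
    """hypotheses: list of token lists; references: list of lists of token
    lists (the 5 reference statements for each table)."""
    smooth = SmoothingFunction().method1
    scores = {}
    for n, weights in [(1, (1.0, 0, 0, 0)),
                       (2, (0.5, 0.5, 0, 0)),
                       (3, (1/3, 1/3, 1/3, 0))]:
        scores[f"BLEU-{n}"] = corpus_bleu(references, hypotheses,
                                          weights=weights,
                                          smoothing_function=smooth)
    return scores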
6.2 Experimental Results We first perform an automatic evaluation to approximately measure the performance of different models and then conduct an in-depth human evaluation to have a better understanding. Automatic Evaluation: The experimental results are summarized in Table 2, where we comprehensively survey different architectures and training algorithms. For the non-pretrained models, we observe that Transformer is slightly better than LSTM and two different table encoding strategies achieve similar results. In contrast, pre-trained models are much better at lowering the perplexity, besides the generated sentences significantly outperform the non-pretrained models in terms of both fluency and fidelity score with GPT-TabGen and BERT-TabGen achieving similar performance. As the BERT-TabGen runs much slower due to multiple passes of decoding, we favor GPT-TabGen in the following experiments. With the adversarial regularization and reinforcement training, the model can only improve the optimized fidelity metric, with the fluency scores dropping significantly. Such phenomena confirm our assumption about the caveats of the monotonic generation paradigm. For the proposed coarse-to-fine generation scheme, as the “[ENT]” tokens are replaced by entity names, which normally contain a phrase like “Feb 2nd”. Such n-gram phrase substitution preserves the completeness of entity names and thus leads to higher 2/3/4-gram matches, which translates to higher BLEU-3 and lower BLEU-1 in Table 2. The proposed coarse-to-fine generation can yield reasonable improvement over NLI-Acc and Adv-Acc, which demonstrates its advantages of in capturing logical dependency. Human Evaluation To further investigate the quality of the generated text, we propose to perform human evaluation. Specifically, we sample 200 sentences from different models and distribute them independently to human experts (graduate students from the computer science department) to verify their quality. Specifically, the quality measure is categorized into categories: 1) non-sense: the sentence does not make much sense, which is mainly due to disfluency or repetition problem. 2) wrong: a fluent sentence with wrong logic. 3) partial-correct: the sentence contains more than one fact, at least one of them is correct 4) correct: the high-quality in both fluency and logic correctness. We demonstrate the results in Figure 8, from which we observe that pre-training significantly decreases the non-sense proportion. However, the RL and Adv-Reg both harm the fluency and lead to more non-sense sentences. In contrast, the coarse-to-fine model can maintain the non-sense proportion while significantly increasing correct/partial-correct sentences. From human evaluation, even the best performing model can get slightly over 20% of its prediction logically correct, which reflects the challenges of LOGICNLG for existing paradigm. Evaluation Metrics We here analyze the effectiveness of the defined automatic evaluation metrics for fidelity evaluation. For the Parsing-based evaluation and NLI-based evaluation, we use the adversarial set (containing positive/negative sample pairs) to evaluate their consistency with human judges. Parsing-based model only achieves an ac7937 curacy of 60%, while NLI-based model achieves a higher accuracy of 65%. It indicates that the fidelity measurement model is itself a very challenging problem and the existing models are still in a premature stage. 
Therefore, the exact number of SP-Acc or NLI-Acc cannot reliably reflect the exact proportion of sentences logically entailed by the table. However, we still believe they are informative for model development based on the following reasons: 1) the automatic fidelity scores are quite stable, not sensitive to random initialization or different configurations, 2) when comparing different models (Transformer vs. GPT-2 vs. RL/Adv-Reg vs. Coarse-to-Fine), the trends of different automatic scores are consistent with human evaluation, which indicates its potential in assisting the development of new models. Fine-grained Analysis To better understand the generation model’s reasoning capability in regarding different logical operations, we pick the most frequent 9 operations (definition in the Appendix) and analyze the best model’s capability in expressing these different logic. We demonstrate our human evaluation in Figure 8 to make the following inspections: 1) the model performs best in justifying the order of different entities (before/after) and relating two entities (both/neither/comparison). 2) the model performs reasonably well at superlative and count operation. 3) the generation model performs much worse in operations like “only, unique”. 4) the model is not able to perform mathematical aggregation like average, sum, etc. Overall, the string-based operations are easier than numericbased operations, how to infuse the numeric knowledge is an open research question to move forward. 7 Related Work Natural Language Generation Natural language generation is a long-standing problem (Kukich, 1983; Holmes-Higgin, 1994; Reiter and Dale, 1997), which involves generating text from records or data. Recently, many neural-based generation models have been proposed (Puduppully et al., 2019a,b; Lebret et al., 2016; Wiseman et al., 2018) to achieve impressive performance on the existing datasets (Chen and Mooney, 2008; Liang et al., 2009; Lebret et al., 2016; Duˇsek et al., 2019; Wiseman et al., 2017) since the annotated text are mostly surface-level annotation without logical inference. Unlike them, LOGICNLG has rich inference, which poses great challenges to existing models and evaluations. Non-monotonic Generation There have been attempts recently to study the problem of nonmonotonic text generation, which aims to teach the generation model to learn the generation order without external supervision (Ford et al., 2018; Welleck et al., 2019; Gu et al., 2019; Mansimov et al., 2019). These models have shown to learn rational generation order to approach similar performance as the left-to-right case. These approaches are useful at capturing more sophisticated dependency within the sentence, which provides a plausible direction to pursue in LOGICNLG. Factualness Evaluation Fidelity is an important research topic in generation, In ROTOWIRE (Wiseman et al., 2017) and MSCOCO (Lin et al., 2014), IE-based extractive evaluation (Rohrbach et al., 2018; Dhingra et al., 2019) are adopted for surfacelevel matching to replace costly human evaluation. In abstractive summarization, Goodrich et al. (2019) proposes NER + Relation Classification method to investigate fidelity in generated summarization while Kry´sci´nski et al. (2019) proposes to use NLI models to understand the entailment between generated text with the given document. These evaluations are beyond surface-level to study more sophisticated linguistic phenomena like paraphrasing, compression, entailment, inclusion, etc, which are common in summarization tasks. 
8 Conclusion In this paper, we propose logical NLG to study the logical inference problem in generation. We conduct comprehensive experiments to show the existing NLG models are restricted by its monotonic nature and conclude this to be a proper nextstep problem to study NLG systems. There are still some unsolved problems for Logical NLG, e.g. how to improve the quality of automatic metrics to better help human automatically judge models’ performances. To promote the research in this direction, we host a LogicNLG challenge2 to help better benchmark the current progress. 9 Acknowledgement The authors would like to thank the anonymous reviewers for their thoughtful comments. 2https://competitions.codalab.org/ competitions/24527 7938 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Steven Bird. 2006. Nltk: the natural language toolkit. In Proceedings of the COLING/ACL on Interactive presentation sessions, pages 69–72. Association for Computational Linguistics. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. David L Chen and Raymond J Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of the 25th international conference on Machine learning, pages 128–135. ACM. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A largescale dataset for table-based fact verification. arXiv preprint arXiv:1909.02164. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884–4895. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In ACL. Ondˇrej Duˇsek, Jekaterina Novikova, and Verena Rieser. 2019. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge. arXiv preprint arXiv:1901.11528. Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, and George Dahl. 2018. The importance of generation order in language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2942–2946. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Ben Goodrich, Vinay Rao, Peter J Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 166–175. ACM. Jiatao Gu, Qi Liu, and Kyunghyun Cho. 2019. Insertion-based decoding with automatically inferred generation order. TACL. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. 
Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Paul Holmes-Higgin. 1994. Text generationusing discourse strategies and focus constraints to generate natural language text by kathleen r. mckeown, cambridge university press, 1992, pp 246,£ 13.95, isbn 0521-43802-0. The Knowledge Engineering Review, 9(4):421–422. Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. arXiv preprint arXiv:1701.08198. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR. Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840. Karen Kukich. 1983. Design of a knowledge-based report generator. In Proceedings of the 21st annual meeting on Association for Computational Linguistics, pages 145–150. Association for Computational Linguistics. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 91–99. Association for Computational Linguistics. Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. 7939 Angli Liu, Jingfei Du, and Veselin Stoyanov. 2019. Knowledge-augmented language model and its application to unsupervised named-entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1142– 1150. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Elman Mansimov, Alex Wang, and Kyunghyun Cho. 2019. A generalized framework of sequence generation with application to undirected sequence models. arXiv preprint arXiv:1905.12790. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470– 1480. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019a. 
Data-to-text generation with content selection and planning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6908–6915. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019b. Data-to-text generation with entity modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2023–2035, Florence, Italy. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. In Advances in Neural Information Processing Systems, pages 1541–1551. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035–4045. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Shikhar Sharma, Jing He, Kaheer Suleman, Hannes Schulz, and Philip Bachman. 2017. Natural language generation in dialogue using lexicalized and delexicalized data. ICLR Workshop. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Sean Welleck, Kiant´e Brantley, Hal Daum´e Iii, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In International Conference on Machine Learning, pages 6716–6726. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174–3187. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoayan Chen, Wei Zhang, and Huajun Chen. 2019. Relation adversarial network for low resource knowledgegraph completion. arXiv preprint arXiv:1911.03091. 
7940 A Dataset Examples In order to give readers a better sense of the statements in LOGICNLG, we demonstrate some typical examples below as Figure 9 and Figure 10. Each table in the dataset is associated with five different examples covering diversified inference skills. For example, Figure 9 requires ‘all’ operation to identify multiple rows having the same value on certain properties. Figure 10 requires the model to perform superlative, or count operation to identify the numerically highest number. B Logical Operation Distribution The dataset consists of the most common types of logical inference in our daily communication, to help the readers understand the semantic meaning of these inference, we list their definition and some examples below: • superlative: operations involving max,min or other comparison operation to get the lowest or highest value. Sentence: xxx is the tallest player in xxx team. • only: operation to identify the single entity which has a unique property the other entries do not have. Sentence: xxx is the only person to win all the games. • before/after: operations to compare time or spatial order. Sentence: xxx is born before xxx. • count: operations to enumerate the amount of entries meeting certain criterion. Sentence: there are two people from the central united states. • comparison: operations to compare two or given number of entities. Sentence: xxx has better income than xxx. • both/neither: operations to summarize the common properties of two entries. Sentence: xxx and xxx are both from the same country. • sum/diff: operations to perform numeric summation or difference between numbers. Sentence: xxx gives 1 more dollars than xxxx in the donation. • average: the average number of people attending the game is 500. • unique: the uniq operation in sql to assemble summarize different entities. Sentence: from the table, players are from 4 unique countries. C Semantic Parser Specifically, the scorer is realized by a matching model fγ, which takes a logic form P and the statement Y to output a consistency score fγ(P, Y ) between range of [0,1] with higher value indicating better consistency. As no groundtruth logical forms are provided, we utilize weakly supervised training. The set of logical forms generated is denoted as P, the logical forms returning binary value of True is viewed as pseudo positive example P+ and the logical forms returning False is treated as pseudo negative example P−. We propose to optimize the following objective to discriminate two sets: argmax γ E (T,Y )∈Dtrain [ E P ∈P+[fγ(P, Y )] − E P ∈P−[fγ(P, Y )]] As demonstrated in Figure 11, our semantic parser is comprised of three different parts, namely a resolution model, a breadth-first search model and a ranker model. The resolution model will try to figure out what are the entities appearing in the table and what are the numbers it needs to infer. These results are pushed to a buffer as the initial point, then the BFS search will try to compose plausible logical forms based on the values from the buffer. However, most of the synthesized logical forms are not relevant to the semantics the sentence is aimed to express. In the end, we need to train a ranker, which can learn to identify the most consistent logical form and use that to represent the formal semantics of given sentence. D Qualitative Example Next, we demonstrate some generated samples in Figure 12, which are generated from a table crawled from Wikipedia page3. 
Though most of the text generated by the model is coherent and reasonable, we do observe some disfluency like repetition, contradiction, erroneous sentences like the sentence 5. For the other sentences, three of them are logically correct, the first sentence contains quite complex logic with three different symbolic operations “argmax, argmin, after”. The fourth and sixth sentences involve operations like “filter, count”. In contrast, the second and third examples 3https://en.wikipedia.org/wiki/2007% E2%80%9308_Golden_State_Warriors_season 7941 larry nelson , jack nicklaus , and lee trevino all shot 8 strokes over par larry nelson , lee trevino , and dave stockton each won two pga championships in the 1970s - 1980s jack nicklaus had more pga championship wins than larry nelson and lee tevino combined dave stockton shot five strokes worse than larry nelson , jack nicklaus , and lee trevino three golfers shot worse than 8 strokes over par Figure 9: Example from LOGICNLG. the lowest attendance when fullham won was 7563 fullham fc only played one time at venue h fullham fc played three times in the month of january fullham fc faced the wycombe wanderers two times in the month of january the only defeat of fullham for the 4 first months of 2002 fc was when they face chelsea Figure 10: Example from LOGICNLG. Canada obtained 1 more gold medal than Mexico. Entity /Number Resolution Canada obtained 1 more gold medal than Mexico. Function Trigger Greater Diff Less more Breadth -First Search Canada, …, Mexico Filter … Filter(Nation==Canada) Filter(Nation==Mexico) Hop(?, Gold Medal) Hop(?, Gold Medal) …….. Filter(Nation== Mexico) Filter(Nation==Canada) Hop(?, Gold Medal) Hop(?, Gold Medal) Greater(?,?) ROOT Filter(Gold Medal== 1) Hop(?,Nation) Eq(?, Canada) …….. Semantic-Parsing Evaluation Figure 11: The BFS-based parser used in our evaluation. are factually incorrect as the team only competes with “Seattle” once and the 3 games are not in a row. We can see that the errors are quite diversified, it is difficult to debug what is the source of these errors by simply looking into the deep generation model. In the future, more interpretable generation model need to be built to make the inference process more transparent. 7942 Date Visitor Score Home Attendance Leading Player Record 12 / 2 golden state warriors 109 - 96 seattle supersonics 11461 stephen jackson 9 - 7 12 / 3 orlando magic 117 - 123 golden state warriors 18527 stephen jackson 9 - 8 12 / 7 miami heat 120 - 113 golden state warriors 19596 stephen jackson 11 - 8 12 / 28 denver nuggets 120 - 124 golden state warriors 20001 stephen jackson 17 - 13 12 / 16 golden state warriors 87 - 109 detroit pistons 22076 matt barnes 13 - 11 12 / 17 golden state warriors 125 - 117 memphis grizzlies 10549 stephen jackson 14 - 11 ✓1. The game with the lowest in Attendance took place after the game with the highest in Attendance. ✕2. The Golden State Warrior played against the Seattle Supersonics 2 time. ✕3. The Warrior won 3 game in a row during the 2007 - 08 Season. ✓4. The Golden State Warrior lost 2 game when playing at Home. ✕5. There were 4 time that was a Leading Scorer, and 4 time that was a Leading Scorer. ✓6. Stephen Jackson was the leading scorer 5 different times during the 2007 - 08 Season. Title: Golden State Warrior: NBA Season 2007-2008 Figure 12: The statements generated by GPT-TabGen model with random sampling.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7943–7960 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7943 Neural CRF Model for Sentence Alignment in Text Simplification Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu Department of Computer Science and Engineering The Ohio State University {jiang.1530, maddela.4, lan.105,zhong.536, xu.1265}@osu.edu Abstract The success of a text simplification system heavily depends on the quality and quantity of complex-simple sentence pairs in the training corpus, which are extracted by aligning sentences between parallel articles. To evaluate and improve sentence alignment quality, we create two manually annotated sentence-aligned datasets from two commonly used text simplification corpora, Newsela and Wikipedia. We propose a novel neural CRF alignment model which not only leverages the sequential nature of sentences in parallel documents but also utilizes a neural sentence pair model to capture semantic similarity. Experiments demonstrate that our proposed approach outperforms all the previous work on monolingual sentence alignment task by more than 5 points in F1. We apply our CRF aligner to construct two new text simplification datasets, NEWSELA-AUTO and WIKI-AUTO, which are much larger and of better quality compared to the existing datasets. A Transformer-based seq2seq model trained on our datasets establishes a new state-of-the-art for text simplification in both automatic and human evaluation.1 1 Introduction Text simplification aims to rewrite complex text into simpler language while retaining its original meaning (Saggion, 2017). Text simplification can provide reading assistance for children (Kajiwara et al., 2013), non-native speakers (Petersen and Ostendorf, 2007; Pellow and Eskenazi, 2014), nonexpert readers (Elhadad and Sutaria, 2007; Siddharthan and Katsos, 2010), and people with language disorders (Rello et al., 2013). As a preprocessing step, text simplification can also improve 1Code and data are available at: https://github. com/chaojiang06/wiki-auto. Newsela data need to be requested at: https://newsela.com/data/. the performance of many natural language processing (NLP) tasks, such as parsing (Chandrasekar et al., 1996), semantic role labelling (Vickrey and Koller, 2008), information extraction (Miwa et al., 2010) , summarization (Vanderwende et al., 2007; Xu and Grishman, 2009), and machine translation (Chen et al., 2012; ˇStajner and Popovic, 2016). Automatic text simplification is primarily addressed by sequence-to-sequence (seq2seq) models whose success largely depends on the quality and quantity of the training corpus, which consists of pairs of complex-simple sentences. Two widely used corpora, NEWSELA (Xu et al., 2015) and WIKILARGE (Zhang and Lapata, 2017), were created by automatically aligning sentences between comparable articles. However, due to the lack of reliable annotated data,2 sentence pairs are often aligned using surface-level similarity metrics, such as Jaccard coefficient (Xu et al., 2015) or cosine distance of TF-IDF vectors (Paetzold et al., 2017), which fails to capture paraphrases and the context of surrounding sentences. A common drawback of text simplification models trained on such datasets is that they behave conservatively, performing mostly deletion, and rarely paraphrase (Alva-Manchego et al., 2017). 
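For concreteness, the surface-level similarity measures mentioned above can be sketched as follows (our illustration; the exact tokenization and weighting choices in the cited systems differ), which also shows why such measures under-rate paraphrases with little lexical overlap.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard(sent_a, sent_b):
    # Jaccard coefficient over lower-cased word sets.
    a, b = set(sent_a.lower().split()), set(sent_b.lower().split())
    return len(a & b) / len(a | b)

def tfidf_cosine(sent_a, sent_b):
    # Cosine distance of TF-IDF vectors, fit on just this sentence pair.
    vecs = TfidfVectorizer().fit_transform([sent_a, sent_b])
    return cosine_similarity(vecs[0], vecs[1])[0, 0]

complex_s = "The legislation was approved despite fierce opposition."
simple_s = "Lawmakers passed the bill even though many were against it."
print(jaccard(complex_s, simple_s))       # low, despite equivalent meaning
print(tfidf_cosine(complex_s, simple_s))  # also low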
Moreover, WIKILARGE is the concatenation of three early datasets (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) that are extracted from Wikipedia dumps and are known to contain many errors (Xu et al., 2015). To address these problems, we create the first high-quality manually annotated sentence-aligned datasets: NEWSELA-MANUAL with 50 article sets, and WIKI-MANUAL with 500 article pairs. We design a novel neural CRF alignment model, which utilizes fine-tuned BERT to measure semantic similarity and leverages the similar order of content between parallel documents, combined with an effective paragraph alignment algorithm.

2 Hwang et al. (2015) annotated 46 article pairs from the Simple-Normal Wikipedia corpus; however, its annotation is noisy, and it contains many sentence splitting errors.

Figure 1: An example of sentence alignment between an original news article (right) and its simplified version (left) in Newsela. The label $a_i$ for each simple sentence $s_i$ is the index of the complex sentence $c_{a_i}$ it aligns to.

Experiments show that our proposed method outperforms all the previous monolingual sentence alignment approaches (Štajner et al., 2018; Paetzold et al., 2017; Xu et al., 2015) by more than 5 points in F1. By applying our alignment model to all the 1,882 article sets in Newsela and 138,095 article pairs in the Wikipedia dump, we then construct two new simplification datasets, NEWSELA-AUTO (666,645 sentence pairs) and WIKI-AUTO (488,332 sentence pairs). Our new datasets with improved quantity and quality facilitate the training of complex seq2seq models. A BERT-initialized Transformer model trained on our datasets outperforms the state-of-the-art by 3.4% in terms of SARI, the main automatic metric for text simplification. Our simplification model produces 25% more rephrasing than those trained on the existing datasets. Our contributions include:
1. Two manually annotated datasets that enable the first systematic study for training and evaluating monolingual sentence alignment;
2. A neural CRF sentence aligner and a paragraph alignment algorithm that employ fine-tuned BERT to capture semantic similarity and take advantage of the sequential nature of parallel documents;
3. Two automatically constructed text simplification datasets which are of higher quality and 4.7 and 1.6 times larger than the existing datasets in their respective domains;
4. A BERT-initialized Transformer model for automatic text simplification, trained on our datasets, which establishes a new state-of-the-art in both automatic and human evaluation.

2 Neural CRF Sentence Aligner

We propose a neural CRF sentence alignment model, which leverages the similar order of content presented in parallel documents and captures editing operations across multiple sentences, such as splitting and elaboration (see Figure 1 for an example). To further improve the accuracy, we first align paragraphs based on semantic similarity and vicinity information, and then extract sentence pairs from these aligned paragraphs. In this section, we describe the task setup and our approach.

2.1 Problem Formulation

Given a simple article (or paragraph) $S$ of $m$ sentences and a complex article (or paragraph) $C$ of $n$ sentences, for each sentence $s_i$ ($i \in [1, m]$) in the simple article, we aim to find its corresponding sentence $c_{a_i}$ ($a_i \in [0, n]$) in the complex article. We use $a_i$ to denote the index of the aligned sentence, where $a_i = 0$ indicates that sentence $s_i$ is not aligned to any sentence in the complex article.
The full alignment $\mathbf{a}$ between an article (or paragraph) pair $S$ and $C$ can then be represented by a sequence of alignment labels $\mathbf{a} = (a_1, a_2, \ldots, a_m)$. Figure 1 shows an example of alignment labels. One specific aspect of our CRF model is that it uses a varied number of labels for each article (or paragraph) pair rather than a fixed set of labels.

2.2 Neural CRF Sentence Alignment Model

We learn $P(\mathbf{a}|S, C)$, the conditional probability of alignment $\mathbf{a}$ given an article pair $(S, C)$, using a linear-chain conditional random field:

$$P(\mathbf{a}|S, C) = \frac{\exp(\Psi(\mathbf{a}, S, C))}{\sum_{\mathbf{a} \in \mathcal{A}} \exp(\Psi(\mathbf{a}, S, C))} = \frac{\exp\left(\sum_{i=1}^{|S|} \psi(a_i, a_{i-1}, S, C)\right)}{\sum_{\mathbf{a} \in \mathcal{A}} \exp\left(\sum_{i=1}^{|S|} \psi(a_i, a_{i-1}, S, C)\right)} \quad (1)$$

where $|S| = m$ denotes the number of sentences in article $S$. The score $\sum_{i=1}^{|S|} \psi(a_i, a_{i-1}, S, C)$ sums over the sequence of alignment labels $\mathbf{a} = (a_1, a_2, \ldots, a_m)$ between the simple article $S$ and the complex article $C$, and can be decomposed into two factors as follows:

$$\psi(a_i, a_{i-1}, S, C) = \mathrm{sim}(s_i, c_{a_i}) + T(a_i, a_{i-1}) \quad (2)$$

where $\mathrm{sim}(s_i, c_{a_i})$ is the semantic similarity score between the two sentences, and $T(a_i, a_{i-1})$ is a pairwise score for the transition from alignment label $a_{i-1}$ to $a_i$.

Semantic Similarity A fundamental problem in sentence alignment is to measure the semantic similarity between two sentences $s_i$ and $c_j$. Prior work used lexical similarity measures, such as Jaccard similarity (Xu et al., 2015), TF-IDF (Paetzold et al., 2017), and continuous n-gram features (Štajner et al., 2018). In this paper, we fine-tune BERT (Devlin et al., 2019) on our manually labeled dataset (details in §3) to capture semantic similarity.

Alignment Label Transition In parallel documents, the contents of the articles are often presented in a similar order. The complex sentence $c_{a_i}$ that is aligned to $s_i$ is often related to the complex sentences $c_{a_{i-1}}$ and $c_{a_{i+1}}$, which are aligned to $s_{i-1}$ and $s_{i+1}$, respectively. To incorporate this intuition, we propose a scoring function to model the transition between alignment labels using the following features:

$$g_1 = |a_i - a_{i-1}|,\quad g_2 = \mathbb{1}(a_i = 0, a_{i-1} \neq 0),\quad g_3 = \mathbb{1}(a_i \neq 0, a_{i-1} = 0),\quad g_4 = \mathbb{1}(a_i = 0, a_{i-1} = 0) \quad (3)$$

where $g_1$ is the absolute distance between $a_i$ and $a_{i-1}$, $g_2$ and $g_3$ denote whether the current or prior sentence is not aligned to any sentence, and $g_4$ indicates whether both $s_i$ and $s_{i-1}$ are not aligned to any sentences. The score is computed as follows:

$$T(a_i, a_{i-1}) = \mathrm{FFNN}([g_1, g_2, g_3, g_4]) \quad (4)$$

where $[\,,\,]$ represents the concatenation operation and FFNN is a 2-layer feedforward neural network. We provide more implementation details of the model in Appendix A.1.

2.3 Inference and Learning

During inference, we find the optimal alignment $\hat{\mathbf{a}}$:

$$\hat{\mathbf{a}} = \operatorname*{argmax}_{\mathbf{a}} P(\mathbf{a}|S, C) \quad (5)$$

using the Viterbi algorithm in $O(mn^2)$ time. During training, we maximize the conditional probability of the gold alignment label $\mathbf{a}^*$:

$$\log P(\mathbf{a}^*|S, C) = \Psi(\mathbf{a}^*, S, C) - \log \sum_{\mathbf{a} \in \mathcal{A}} \exp(\Psi(\mathbf{a}, S, C)) \quad (6)$$

The second term sums the scores of all possible alignments and can be computed using the forward algorithm in $O(mn^2)$ time as well.

2.4 Paragraph Alignment

Both accuracy and computing efficiency can be improved if we align paragraphs before aligning sentences. In fact, our empirical analysis revealed that sentence-level alignments mostly reside within the corresponding aligned paragraphs (details in §4.4 and Table 3). Moreover, aligning paragraphs first provides more training instances and reduces the label space for our neural CRF model. We propose Algorithms 1 and 2 for paragraph alignment. Given a simple article S with k paragraphs S = (S1, S2, . . .
, Sk) and a complex article C with l paragraphs C = (C1, C2, . . . , Cl), we first apply Algorithm 1 to calculate the semantic similarity matrix simP between paragraphs by averaging or maximizing over the sentence-level similarities (§2.2). Then, we use Algorithm 2 to generate the paragraph alignment matrix alignP. We align paragraph pairs if they satisfy one of the two conditions: (a) having high semantic similarity and appearing in similar positions in the article pair (e.g., both at the beginning), or (b) two continuous paragraphs in the complex article having relatively high semantic similarity with one paragraph in the simple side, (e.g., paragraph splitting or fusion). The difference of relative position in documents 7946 Algorithm 1: Pairwise Paragraph Similarity Initialize: simP ∈R2×k×l to 02×k×l for i ←1 to k do for j ←1 to l do simP[1, i, j] = avg sp∈Si  max cq∈Cj simSent(sp, cq)  simP[2, i, j] = max sp∈Si,cq∈Cj simSent(sp, cq) end end return simP Algorithm 2: Paragraph Alignment Algorithm Input :simP ∈R2×k×l Initialize: alignP ∈Ik×l to 0k×l for i ←1 to k do jmax = argmax j simP[1, i, j] if simP[1, i, jmax] > τ1 and d(i, jmax) < τ2 then alignP[i, jmax] = 1 end for j ←1 to l do if simP[2, i, j] > τ3 then alignP[i, j] = 1 end if j > 1 & simP[2, i, j] > τ4 & simP[2, i, j −1] > τ4 & d(i, j) < τ5 & d(i, j −1) < τ5 then alignP[i, j] = 1 alignP[i, j −1] = 1 end end end return alignP is defined as d(i, j) = | i k −j l |, and the thresholds τ1 - τ5 in Algorithm 2 are selected using the dev set. Finally, we merge the neighbouring paragraphs which are aligned to the same paragraph in the simple article before feeding them into our neural CRF aligner. We provide more details in Appendix A.1. 3 Constructing Alignment Datasets To address the lack of reliable sentence alignment for Newsela (Xu et al., 2015) and Wikipedia (Zhu et al., 2010; Woodsend and Lapata, 2011), we designed an efficient annotation methodology to first manually align sentences between a few complex and simple article pairs. Then, we automatically aligned the rest using our alignment model trained on the human annotated data. We created two sentence-aligned parallel corpora (details in §5), which are the largest to date for text simplification. 3.1 Sentence Aligned Newsela Corpus Newsela corpus (Xu et al., 2015) consists of 1,932 English news articles where each article (level 0) is Newsela Newsela -Manual -Auto Article level # of original articles 50 1,882 # of article pairs 500 18,820 Sentence level # of original sent. (level 0) 2,190 59,752 # of sentence pairs 1.01M† 666,645 # of unique complex sent. 7,001 195,566 # of unique simple sent. 8,008 246,420 avg. length of simple sent. 13.9 14.8 avg. length of complex sent. 21.3 24.9 Labels of sentence pairs # of aligned (not identical) 5,182 666,645 # of partially-aligned 14,023 # of not-aligned 0.99M – Text simplification phenomenon # of sent. rephrasing (1-to-1) 8,216 307,450 # of sent. copying (1-to-1) 3,842 147,327 # of sent. splitting (1-to-n) 4,237 160,300 # of sent. merging (n-to-1) 232 – # of sent. fusion (m-to-n) 252 – # of sent. deletion (1-to-0) 6,247 – Table 1: Statistics of our manually and automatically created sentence alignment annotations on Newsela. † This number includes all complex-simple sentence pairs (including aligned, partially-aligned, or notaligned) across all 10 combinations of 5 readability levels (level 0-4), of which 20,343 sentence pairs between adjacent readability levels were manually annotated and the rest of labels were derived. 
re-written by professional editors into four simpler versions at different readability levels (level 1-4). We annotate sentence alignments for article pairs at adjacent readability levels (e.g., 0-1, 1-2) as the alignments between non-adjacent levels (e.g., 02) can be then derived automatically. To ensure efficiency and quality, we designed the following three-step annotation procedure: 1. Align paragraphs using CATS toolkit (ˇStajner et al., 2018), and then correct the automatic paragraph alignment errors by two in-house annotators.3 Performing paragraph alignment as the first step significantly reduces the number of sentence pairs to be annotated from every possible sentence pair to the ones within the aligned paragraphs. We design an efficient visualization toolkit for this step, for which a screenshot can be found in Appendix E.2. 2. For each sentence pair within the aligned paragraphs, we ask five annotators on the Figure 3We consider any sentence pair not in the aligned paragraph pairs as not-aligned. This assumption leads to a small number of missing sentence alignments, which are manually corrected in Step 3. 7947 Figure 2: Manual inspection of 100 random sentence pairs from our corpora (NEWSELA-AUTO and WIKIAUTO) and the existing Newsela (Xu et al., 2015) and Wikipedia (Zhang and Lapata, 2017) corpora. Our corpora contain at least 44% more complex rewrites (Deletion + Paraphrase or Splitting + Paraphrase) and 27% less defective pairs (Not Aligned or Not Simpler). Eight4 crowdsourcing platform to classify into one of the three categories: aligned, partiallyaligned, or not-aligned. We provide the annotation instructions and interface in Appendix E.1. We require annotators to spend at least ten seconds per question and embed one test question in every five questions. Any worker whose accuracy drops below 85% on test questions is removed. The inter-annotator agreement is 0.807 measured by Cohen’s kappa (Artstein and Poesio, 2008). 3. We have four in-house annotators (not authors) verify the crowdsourced labels. We manually aligned 50 article groups to create the NEWSELA-MANUAL dataset with a 35/5/10 split for train/dev/test, respectively. We trained our aligner on this dataset (details in §4), then automatically aligned sentences in the remaining 1,882 article groups in Newsela (Table 1) to create a new sentence-aligned dataset, NEWSELA-AUTO, which consists of 666k sentence pairs predicted as aligned and partially-aligned. NEWSELA-AUTO is considerably larger than the previous NEWSELA (Xu et al., 2015) dataset of 141,582 pairs, and contains 44% more interesting rewrites (i.e., rephrasing and splitting cases) as shown in Figure 2. 4https://www.figure-eight.com/ 3.2 Sentence Aligned Wikipedia Corpus We also create a new version of Wikipedia corpus by aligning sentences between English Wikipedia and Simple English Wikipedia. Previous work (Xu et al., 2015) has shown that Wikipedia is much noisier than the Newsela corpus. We provide this dataset in addition to facilitate future research. We first extract article pairs from English and Simple English Wikipedia by leveraging Wikidata, a well-maintained database that indexes named entities (and events etc.) and their Wikipedia pages in different languages. We found this method to be more reliable than using page titles (Coster and Kauchak, 2011) or cross-lingual links (Zhu et al., 2010; Woodsend and Lapata, 2011), as titles can be ambiguous and cross-lingual links may direct to a disambiguation or mismatched page (more details in Appendix B). 
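To illustrate the Wikidata-based matching described above, here is a minimal sketch that pairs English and Simple English Wikipedia pages by scanning the sitelinks of each item in a Wikidata JSON dump; the dump path and streaming details are assumptions, and the real pipeline also extracts and cleans the article text.

```python
import bz2
import json

def extract_article_pairs(dump_path="wikidata-latest-all.json.bz2"):
    """Yield (English title, Simple English title) pairs for Wikidata items
    that have sitelinks to both English and Simple English Wikipedia."""
    with bz2.open(dump_path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue                      # skip the enclosing JSON array markers
            item = json.loads(line)
            sitelinks = item.get("sitelinks", {})
            if "enwiki" in sitelinks and "simplewiki" in sitelinks:
                yield sitelinks["enwiki"]["title"], sitelinks["simplewiki"]["title"]

# Usage (the dump path above is a placeholder):
# for en_title, simple_title in extract_article_pairs():
#     print(en_title, "<->", simple_title)
```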
In total, we extracted 138,095 article pairs from the 2019/09 Wikipedia dump, which is two times larger than the previous datasets (Coster and Kauchak, 2011; Zhu et al., 2010) of only 60∼65k article pairs, using an improved version of the WikiExtractor library.5 Then, we crowdsourced the sentence alignment annotations for 500 randomly sampled document pairs (10,123 sentence pairs total). As document length in English and Simple English Wikipedia articles vary greatly,6 we designed the following annotation strategy that is slightly different from Newsela. For each sentence in the simple article, we select the sentences with the highest similarity scores from the complex article for manual annotation, based on four similarity measures: lexical similarity from CATS (ˇStajner et al., 2018), cosine similarity using TF-IDF (Paetzold et al., 2017), cosine similarity between BERT sentence embeddings, and alignment probability by a BERT model fine-tuned on our NEWSELA-MANUAL data (§3.1). As these four metrics may rank the same sentence at the top, on an average, we collected 2.13 complex sentences for every simple sentence and annotated the alignment label for each sentence pair. Our pilot study showed that this method captured 93.6% of the aligned sentence pairs. We named this manually labeled dataset WIKI-MANUAL with a train/dev/test split of 350/50/100 article pairs. Finally, we trained our alignment model on this 5https://github.com/attardi/wikiextractor 6The average number of sentences in an article is 9.2 ± 16.5 for Simple English Wikipedia and 74.8 ± 94.4 for English Wikipedia. 7948 Task 1 (aligned&partial vs. others) Task 2 (aligned vs. others) Precision Recall F1 Precision Recall F1 Similarity-based models Jaccard (Xu et al., 2015) 94.93 76.69 84.84 73.43 75.61 74.51 TF-IDF (Paetzold et al., 2017) 96.24 83.05 89.16 66.78 69.69 68.20 LR (ˇStajner et al., 2018) 93.11 84.96 88.85 73.21 74.74 73.97 Similarity-based models w/ alignment strategy (previous SOTA) JaccardAlign (Xu et al., 2015) 98.66 67.58 80.22† 51.34 86.76 64.51† MASSAlign (Paetzold et al., 2017) 95.49 82.27 88.39† 40.98 87.11 55.74† CATS (ˇStajner et al., 2018) 88.56 91.31 89.92† 38.29 97.39 54.97† Our CRF Aligner 97.86 93.43 95.59 87.56 89.55 88.54 Table 2: Performance of different sentence alignment methods on the NEWSELA-MANUAL test set. † Previous work was designed only for Task 1 and used alignment strategy (greedy algorithm or dynamic programming) to improve either precision or recall. Task 1 Task 2 P R F1 P R F1 Neural sentence pair models Infersent 92.8 69.7 79.6 87.8 74.0 80.3 ESIM 91.5 71.2 80.0 82.5 73.7 77.8 BERTScore 90.6 76.5 83.0 83.2 74.3 78.5 BERTembedding 84.7 53.0 65.2 77.0 74.7 75.8 BERTfinetune 93.3 84.3 88.6 90.2 80.0 84.8 + ParaAlign 98.4 84.2 90.7 91.9 79.0 85.0 Neural CRF aligner Our CRF Aligner 96.5 90.1 93.2 88.6 87.7 88.1 + gold ParaAlign 97.3 91.1 94.1 88.9 88.0 88.4 Table 3: Ablation study of our aligner on dev set. annotated dataset to automatically align sentences for all the 138,095 document pairs (details in Appendix B). In total, we yielded 604k non-identical aligned and partially-aligned sentence pairs to create the WIKI-AUTO dataset. Figure 2 illustrates that WIKI-AUTO contains 75% less defective sentence pairs than the old WIKILARGE (Zhang and Lapata, 2017) dataset. 4 Evaluation of Sentence Alignment In this section, we present experiments that compare our neural sentence alignment against the stateof-the-art approaches on NEWSELA-MANUAL (§3.1) and WIKI-MANUAL (§3.2) datasets. 
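Referring back to the candidate-selection strategy in §3.2, a rough sketch of pooling the top-ranked complex sentences per simple sentence across several similarity scorers might look as follows; the score matrices here are hypothetical placeholders for the four measures used in the paper (CATS lexical similarity, TF-IDF cosine, BERT embedding cosine, and the fine-tuned BERT alignment probability).

```python
import numpy as np

def select_candidates(score_matrices, top_k=1):
    """Union of the top-k complex sentences per simple sentence under each scorer.

    score_matrices: list of arrays of shape (num_simple, num_complex),
    one array per similarity measure."""
    num_simple = score_matrices[0].shape[0]
    candidates = {i: set() for i in range(num_simple)}
    for scores in score_matrices:
        top = np.argsort(-scores, axis=1)[:, :top_k]   # best complex indices per row
        for i in range(num_simple):
            candidates[i].update(int(j) for j in top[i])
    return candidates

# Two hypothetical scorers over 2 simple x 3 complex sentences
m1 = np.array([[0.1, 0.9, 0.3], [0.2, 0.1, 0.8]])
m2 = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.9]])
print(select_candidates([m1, m2]))   # {0: {0, 1}, 1: {2}}
```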
4.1 Existing Methods We compare our neural CRF aligner with the following baselines and state-of-the-art approaches: 1. Three similarity-based methods: Jaccard similarity (Xu et al., 2015), TF-IDF cosine similarity (Paetzold et al., 2017) and a logistic regression classifier trained on our data with lexical features from ˇStajner et al. (2018). 2. JaccardAlign (Xu et al., 2015), which uses Jaccard coefficient for sentence similarity and a greedy approach for alignment. 3. MASSAlign (Paetzold et al., 2017), which combines TF-IDF cosine similarity with a vicinity-driven dynamic programming algorithm for alignment. 4. CATS toolkit (ˇStajner et al., 2018), which uses character n-gram features for sentence similarity and a greedy alignment algorithm. 4.2 Evaluation Metrics We report Precision, Recall and F1 on two binary classification tasks: aligned + partially-aligned vs. not-aligned (Task 1) and aligned vs. partiallyaligned + not-aligned (Task 2). It should be noted that we excluded identical sentence pairs in the evaluation as they are trivial to classify. 4.3 Results Table 2 shows the results on NEWSELA-MANUAL test set. For similarity-based methods, we choose a threshold based on the maximum F1 on the dev set. Our neural CRF aligner outperforms the stateof-the-art approaches by more than 5 points in F1. In particular, our method performs better than the previous work on partial alignments, which contain many interesting simplification operations, such as sentence splitting and paraphrasing with deletion. Similarly, our CRF alignment model achieves 85.1 F1 for Task 1 (aligned + partially-aligned vs. not-aligned) on the WIKI-MANUAL test set. It outperforms one of the previous SOTA approaches CATS (ˇStajner et al., 2018) by 15.1 points in F1. We provide more details in Appendix C. 4.4 Ablation Study We analyze the design choices crucial for the good performance of our alignment model, namely CRF component, the paragraph alignment and the BERTbased semantic similarity measure. Table 3 shows the importance of each component with a series of ablation experiments on the dev set. 7949 Newsela Wikipedia Auto Old Auto Old # of article pairs 13k 7.9k 138k 65k # of sent. pairs (train) 394k 94k 488k 298k # of sent. pairs (dev) 43k 1.1k 2k 2k # of sent. pairs (test) 44k 1k 359 359 avg. sent. len (complex) 25.4 25.8 26.6 25.2 avg. sent. len (simple) 13.8 15.7 18.7 18.5 Table 4: Statistics of our newly constructed parallel corpora for sentence simplification compared to the old datasets (Xu et al., 2015; Zhang and Lapata, 2017). CRF Model Our aligner achieves 93.2 F1 and 88.1 F1 on Task 1 and 2, respectively, which is around 3 points higher than its variant without the CRF component (BERTfinetune + ParaAlign). Modeling alignment label transitions and sequential predictions helps our neural CRF aligner to handle sentence splitting cases better, especially when sentences undergo dramatic rewriting. Paragraph Alignment Adding paragraph alignment (BERTfinetune + ParaAlign) improves the precision on Task 1 from 93.3 to 98.4 with a negligible decrease in recall when compared to not aligning paragraphs (BERTfinetune). Moreover, paragraph alignments generated by our algorithm (Our Aligner) perform close to the gold alignments (Our Aligner + gold ParaAlign) with only 0.9 and 0.3 difference in F1 on Task 1 and 2, respectively. 
Semantic Similarity BERTfinetune performs better than other neural models, including Infersent (Conneau et al., 2017), ESIM (Chen et al., 2017), BERTScore (Zhang et al., 2020) and pretrained BERT embedding (Devlin et al., 2019). For BERTScore, we use idf weighting, and treat simple sentence as reference. 5 Experiments on Automatic Sentence Simplification In this section, we compare different automatic text simplification models trained on our new parallel corpora, NEWSELA-AUTO and WIKI-AUTO, with their counterparts trained on the existing datasets. We establish a new state-of-the-art for sentence simplification by training a Transformer model with initialization from pre-trained BERT checkpoints. 5.1 Comparison with existing datasets Existing datasets of complex-simple sentences, NEWSELA (Xu et al., 2015) and WIKILARGE (Zhang and Lapata, 2017), were aligned using lexical similarity metrics. NEWSELA dataset (Xu et al., 2015) was aligned using JaccardAlign (§4.1). WIKILARGE is a concatenation of three early datasets (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) where sentences in Simple/Normal English Wikipedia and editing history were aligned by TF-IDF cosine similarity. For our new NEWSELA-AUTO, we partitioned the article sets such that there is no overlap between the new train set and the old test set, and vice-versa. Following Zhang and Lapata (2017), we also excluded sentence pairs corresponding to the levels 0–1, 1–2 and 2–3. For our WIKIAUTO dataset, we eliminated sentence pairs with high (>0.9) or low (<0.1) lexical overlap based on BLEU scores (Papineni et al., 2002), following ˇStajner et al. (2015). We observed that sentence pairs with low BLEU are often inaccurate paraphrases with only shared named entities and the pairs with high BLEU are dominated by sentences merely copied without simplification. We used the benchmark TURK corpus (Xu et al., 2016) for evaluation on Wikipedia, which consists of 8 human-written references for sentences in the validation and test sets. We discarded sentences in TURK corpus from WIKI-AUTO. Table 4 shows the statistics of the existing and our new datasets. 5.2 Baselines and Simplification Models We compare the following seq2seq models trained using our new datasets versus the existing datasets: 1. A BERT-initialized Transformer, where the encoder and decoder follow the BERTbase architecture. The encoder is initialized with the same checkpoint and the decoder is randomly initialized (Rothe et al., 2020). 2. A randomly initialized Transformer with the same BERTbase architecture as above. 3. A BiLSTM-based encoder-decoder model used in Zhang and Lapata (2017). 4. EditNTS (Dong et al., 2019),7 a state-of-theart neural programmer-interpreter (Reed and de Freitas, 2016) approach that predicts explicit edit operations sequentially. In addition, we compared our BERT-initialized Transformer model with the released system outputs from Kriz et al. (2019) and EditNTS (Dong et al., 2019). We implemented our LSTM and Transformer models using Fairseq.8 We provide the model and training details in Appendix D.1. 
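The BLEU-based filtering of WIKI-AUTO described in §5.1 can be sketched as follows, assuming NLTK's smoothed sentence-level BLEU; the exact BLEU implementation and tokenization used in the paper may differ.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def keep_pair(complex_sent, simple_sent, low=0.1, high=0.9):
    """Keep a pair only if it is neither a near-copy (BLEU > high)
    nor barely related (BLEU < low)."""
    smooth = SmoothingFunction().method1
    bleu = sentence_bleu([complex_sent.lower().split()],
                         simple_sent.lower().split(),
                         smoothing_function=smooth)
    return low <= bleu <= high

# A verbatim copy is filtered out (BLEU = 1.0 > 0.9).
print(keep_pair("the committee approved the proposal on tuesday",
                "the committee approved the proposal on tuesday"))  # False
```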
7https://github.com/yuedongP/EditNTS 8https://github.com/pytorch/fairseq 7950 Evaluation on our new test set Evaluation on old test set SARI add keep del FK Len SARI add keep del FK Len Complex (input) 11.9 0.0 35.5 0.0 12 24.3 12.5 0.0 37.7 0.0 11 22.9 Models trained on old dataset (original NEWSELA corpus released in (Xu et al., 2015)) Transformerrand 33.1 1.8 22.1 75.4 6.8 14.2 34.1 2.0 25.5 74.8 6.7 14.2 LSTM 35.6 2.8 32.1 72.0 8.2 16.9 36.2 2.5 34.9 71.3 7.7 16.3 EditNTS 35.5 1.8 30.0 75.4 7.1 14.1 36.1 1.7 32.8 73.8 7.0 14.1 Transformerbert 34.4 2.4 25.2 75.8 7.0 14.5 35.1 2.7 27.8 74.8 6.8 14.3 Models trained on our new dataset (NEWSELA-AUTO) Transformerrand 35.6 3.2 28.4 75.0 7.1 14.4 35.2 2.5 29.7 73.5 7.0 14.2 LSTM 35.8 3.9 30.5 73.1 7.0 14.3 36.4 3.3 33.0 72.9 6.6 14.0 EditNTS 35.8 2.4 29.4 75.6 6.3 11.6 35.7 1.8 31.1 74.2 6.1 11.5 Transformerbert 36.6 4.5 31.0 74.3 6.8 13.3 36.8 3.8 33.1 73.4 6.8 13.5 Simple (reference) – – – – 6.6 13.2 – – – – 6.2 12.6 Table 5: Automatic evaluation results on NEWSELA test sets comparing models trained on our dataset NEWSELAAUTO against the existing dataset (Xu et al., 2015). We report SARI, the main automatic metric for simplification, precision for deletion and F1 scores for adding and keeping operations. Add scores are low partially because we are using one reference. Bold typeface and underline denote the best and the second best performances respectively. For Flesch-Kincaid (FK) grade level and average sentence length (Len), we consider the values closest to reference as the best. Model F A S Avg. LSTM 3.44 2.86 3.31 3.20 EditNTS (Dong et al., 2019)† 3.32 2.79 3.48 3.20 Rerank (Kriz et al., 2019)† 3.50 2.80 3.46 3.25 Transformerbert (this work) 3.64 3.12 3.45 3.40 Simple (reference) 3.98 3.23 3.70 3.64 Table 6: Human evaluation of fluency (F), adequacy (A) and simplicity (S) on the old NEWSELA test set. †We used the system outputs shared by the authors. Model Train F A S Avg. LSTM old 3.57 3.27 3.11 3.31 LSTM new 3.55 2.98 3.12 3.22 Transformerbert old 2.91 2.56 2.67 2.70 Transformerbert new 3.76 3.21 3.18 3.39 Simple (reference) — 4.34 3.34 3.37 3.69 Table 7: Human evaluation of fluency (F), adequacy (A) and simplicity (S) on NEWSELA-AUTO test set. 5.3 Results In this section, we evaluate different simplification models trained on our new datasets versus on the old existing datasets using both automatic and human evaluation. 5.3.1 Automatic Evaluation We report SARI (Xu et al., 2016), Flesch-Kincaid (FK) grade level readability (Kincaid and Chissom, 1975), and average sentence length (Len). While SARI compares the generated sentence to a set of reference sentences in terms of correctly inserted, kept and deleted n-grams (n ∈{1, 2, 3, 4}), FK measures the readability of the generated sentence. We also report the three rewrite operation scores used in SARI: the precision of delete (del), the F1scores of add (add), and keep (keep) operations. Tables 5 and 8 show the results on Newsela and Figure 3: Manual inspection of 100 random sentences generated by Transformerbert trained on NEWSELAAUTO and existing NEWSELA datasets, respectively. Wikipedia datasets respectively. Systems trained on our datasets outperform their equivalents trained on the existing datasets according to SARI. The difference is notable for Transformerbert with a 6.4% and 3.7% increase in SARI on NEWSELA-AUTO test set and TURK corpus, respectively. Larger size and improved quality of our datasets enable the training of complex Transformer models. 
In fact, Transformerbert trained on our new datasets outperforms the existing state-of-the-art systems for automatic text simplification. Although improvement in SARI is modest for LSTM-based models (LSTM and EditNTS), the increase in F1 scores for addition and deletion operations indicate that the models trained on our datasets make more meaningful changes to the input sentence. 5.3.2 Human Evaluation We also performed human evaluation by asking five Amazon Mechanical Turk workers to rate fluency, adequacy and simplicity (detailed instructions in Appendix D.2) of 100 random sentences generated by different simplification models trained on NEWSELA-AUTO and the existing dataset. Each 7951 SARI add keep del FK Len Complex (input) 25.9 0.0 77.8 0.0 13.6 22.4 Models trained on old dataset (WIKILARGE) LSTM 33.8 2.5 65.6 33.4 11.6 20.6 Transformerrand 33.5 3.2 64.1 33.2 11.1 17.7 EditNTS 35.3 3.0 63.9 38.9 11.1 18.5 Transformerbert 35.3 4.4 66.0 35.6 10.9 17.9 Models trained on our new dataset (WIKI-AUTO) LSTM 34.0 2.8 64.0 35.2 11.0 19.3 Transformerrand 34.7 3.3 68.8 31.9 11.7 18.7 EditNTS 36.4 3.6 66.1 39.5 11.6 20.2 Transformerbert 36.6 5.0 67.6 37.2 11.4 18.7 Simple (reference) – – – – 11.7 20.2 Table 8: Automatic evaluation results on Wikipedia TURK corpus comparing models trained on WIKIAUTO and WIKILARGE (Zhang and Lapata, 2017). worker evaluated these aspects on a 5-point Likert scale. We averaged the ratings from five workers. Table 7 demonstrates that Transformerbert trained on NEWSELA-AUTO greatly outperforms the one trained on the old dataset. Even with shorter sentence outputs, our Transformerbert retained similar adequacy as the LSTM-based models. Our Transformerbert model also achieves better fluency, adequacy, and overall ratings compared to the SOTA systems (Table 6). We provide examples of system outputs in Appendix D.3. Our manual inspection (Figure 3) also shows that Transfomerbert trained on NEWSELA-AUTO performs 25% more paraphrasing and deletions than its variant trained on the previous NEWSELA (Xu et al., 2015) dataset. 6 Related Work Text simplification is considered as a text-totext generation task where the system learns how to simplify from complex-simple sentence pairs. There is a long line of research using methods based on hand-crafted rules (Siddharthan, 2006; Niklaus et al., 2019), statistical machine translation (Narayan and Gardent, 2014; Xu et al., 2016; Wubben et al., 2012), or neural seq2seq models (Zhang and Lapata, 2017; Zhao et al., 2018; Nisioi et al., 2017). As the existing datasets were built using lexical similarity metrics, they frequently omit paraphrases and sentence splits. While training on such datasets creates conservative systems that rarely paraphrase, evaluation on these datasets exhibits an unfair preference for deletion-based simplification over paraphrasing. Sentence alignment has been widely used to extract complex-simple sentence pairs from parallel articles for training text simplification systems. Previous work used surface-level similarity metrics, such as TF-IDF cosine similarity (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011; Paetzold et al., 2017), Jaccard-similarity (Xu et al., 2015), and other lexical features (Hwang et al., 2015; ˇStajner et al., 2018). Then, a greedy (ˇStajner et al., 2018) or dynamic programming (Barzilay and Elhadad, 2003; Paetzold et al., 2017) algorithm was used to search for the optimal alignment. 
Another related line of research (Smith et al., 2010; Tufis, et al., 2013; Tsai and Roth, 2016; Gottschalk and Demidova, 2017; Aghaebrahimian, 2018; Thompson and Koehn, 2019) aligns parallel sentences in bilingual corpora for machine translation. 7 Conclusion In this paper, we proposed a novel neural CRF model for sentence alignment, which substantially outperformed the existing approaches. We created two high-quality manually annotated datasets (NEWSELA-MANUAL and WIKI-MANUAL) for training and evaluation. Using the neural CRF sentence aligner, we constructed two largest sentencealigned datasets to date (NEWSELA-AUTO and WIKI-AUTO) for text simplification. We showed that a BERT-initalized Transformer trained on our new datasets establishes new state-of-the-art performance for automatic sentence simplification. Acknowledgments We thank three anonymous reviewers for their helpful comments, Newsela for sharing the data, Ohio Supercomputer Center (Center, 2012) and NVIDIA for providing GPU computing resources. We also thank Sarah Flanagan, Bohan Zhang, Raleigh Potluri, and Alex Wing for help with data annotation. This research is supported in part by the NSF awards IIS-1755898 and IIS-1822754, ODNI and IARPA via the BETTER program contract 19051600004, ARO and DARPA via the SocialSim program contract W911NF-17-C-0095, Figure Eight AI for Everyone Award, and Criteo Faculty Research Award to Wei Xu. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, ARO, DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 7952 References Ahmad Aghaebrahimian. 2018. Deep neural networks at the service of multilingual parallel sentence extraction. In Proceedings of the 27th International Conference on Computational Linguistics. Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing. Ron Artstein and Massimo Poesio. 2008. Survey article: Inter-coder agreement for computational linguistics. Computational Linguistics. Regina Barzilay and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing. Ohio Supercomputer Center. 2012. Oakley supercomputer. http://osc.edu/ark:/19495/ hpc0cvqn. R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In The 16th International Conference on Computational Linguistics. Han-Bin Chen, Hen-Hsen Huang, Hsin-Hsi Chen, and Ching-Ting Tan. 2012. A simplification-translationrestoration framework for cross-domain SMT applications. In Proceedings of the 24th International Conference on Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. William Coster and David Kauchak. 2011. Simple English Wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Noemie Elhadad and Komal Sutaria. 2007. Mining a lexicon of technical terms and lay equivalents. In Biological, translational, and clinical language processing. Simon Gottschalk and Elena Demidova. 2017. Multiwiki: interlingual text passage alignment in wikipedia. ACM Transactions on the Web. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from standard Wikipedia to simple Wikipedia. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics. S´ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal neural machine translation systems for WMT’15. In Proceedings of the Tenth Workshop on Statistical Machine Translation. Tomoyuki Kajiwara, Hiroshi Matsumoto, and Kazuhide Yamamoto. 2013. Selecting proper lexical paraphrase for children. In Proceedings of the 25th Conference on Computational Linguistics and Speech Processing. Robert P. Jr.; Rogers Richard L.; Kincaid, J. Peter; Fishburne and Brad S. Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. research branch report. Reno Kriz, Jo˜ao Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki, and Chris Callison-Burch. 2019. Complexity-weighted loss and diverse reranking for sentence simplification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun’ichi Tsujii. 2010. Entity-focused sentence simplification for relation extraction. In Proceedings of the 23rd International Conference on Computational Linguistics. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Christina Niklaus, Matthias Cetto, Andr´e Freitas, and Siegfried Handschuh. 2019. Transforming complex sentences into a semantic hierarchy. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 7953 Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Gustavo Paetzold, Fernando Alva-Manchego, and Lucia Specia. 2017. MASSAlign: Alignment and annotation of comparable documents. In Proceedings of the IJCNLP 2017, System Demonstrations. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. David Pellow and Maxine Eskenazi. 2014. An open corpus of everyday documents for simplification tasks. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing. Sarah E Petersen and Mari Ostendorf. 2007. Text simplification for language learners: A corpus analysis. In Proceedings of Workshop on Speech and Language Technology for Education. Scott E. Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In 4th International Conference on Learning Representations. Luz Rello, Ricardo Baeza-Yates, and Horacio Saggion. 2013. The impact of lexical simplification by verbal paraphrases for people with and without dyslexia. In Proceedings of the 14th International Conference on Computational Linguistics and Intelligent Text Processing. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics. Horacio Saggion. 2017. Automatic text simplification. Synthesis Lectures on Human Language Technologies. Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Computation. Advaith Siddharthan and Napoleon Katsos. 2010. Reformulating discourse connectives for non-expert readers. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Sanja ˇStajner, Hannah B´echara, and Horacio Saggion. 2015. A deeper exploration of the standard PB-SMT approach to text simplification and its evaluation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Sanja ˇStajner, Marc Franco-Salvador, Paolo Rosso, and Simone Paolo Ponzetto. 2018. CATS: A tool for customized alignment of text simplification corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation. Sanja ˇStajner and Maja Popovic. 2016. Can text simplification help machine translation? In Proceedings of the 19th Annual Conference of the European Association for Machine Translation. Brian Thompson and Philipp Koehn. 2019. Vecalign: Improved sentence alignment in linear time and space. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. Dan Tufis,, Radu Ion, S, tefan Dumitrescu, and Dan S, tef˘anescu. 2013. Wikipedia as an SMT training corpus. 
In Proceedings of the International Conference Recent Advances in Natural Language Processing. Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond sumbasic: Taskfocused summarization with sentence simplification and lexical expansion. Inf. Process. Manage. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. 7954 Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics. Wei Xu and Ralph Grishman. 2009. A parse-and-trim approach with information significance for Chinese sentence compression. In Proceedings of the 2009 Workshop on Language Generation and Summarisation. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics. 7955 A Neural CRF Alignment Model A.1 Implementation Details We used PyTorch9 to implement our neural CRF alignment model. For the sentence encoder, we used Huggingface implementation(Wolf et al., 2019) of BERTbase 10 architecture with 12 layers of Transformers. When fine-tuning the BERT model, we use the representation of [CLS] token for classification. We use cross entropy loss and update the weights in all layers. Table 9 summarizes the hyperparameters of our model. Table 10 provides the thresholds for our paragraph alignment Algorithm 2, which were chosen based on NEWSELAMANUAL dev data. 
Parameter Value Parameter Value hidden units 768 # of layers 12 learning rate 0.00002 # of heads 12 max sequence length 128 batch size 8 Table 9: Parameters of our neural CRF sentence alignment model. Threshold Value τ1 0.1 τ2 0.34 τ3 0.9998861788416304 τ4 0.998915818299745 τ5 0.5 Table 10: The thresholds in paragraph alignment Algorithm 2 for Newsela data. For Wikipedia data, we tailored our paragraph alignment algorithm (Algorithm 3 and 4). Table 11 provides the thresholds for Algorithm 4, which were chosen based on WIKI-MANUAL dev data. Threshold Value τ1 0.991775706637882 τ2 0.8 τ3 0.5 τ4 5 τ5 0.9958 Table 11: The thresholds in paragraph alignment Algorithm 4 for Wikipedia data. B Sentence Aligned Wikipedia Corpus We present more details about our pre-processing steps for creating the WIKI-MANUAL and WIKIAUTO corpora here. In Wikipedia, Simple English 9https://pytorch.org/ 10https://github.com/google-research/bert Algorithm 3: Pairwise Paragraph Similarity Initialize: simP ∈R1×k×l to 01×k×l for i ←1 to k do for j ←1 to l do simP[1, i, j] = max sp∈Si,cq∈Cj simSent(sp, cq) end end return simP Algorithm 4: Paragraph Alignment Algorithm Input :simP ∈R1×k×l Initialize: alignP ∈Ik×l to 0k×l for i ←1 to k do cand = [] for j ←1 to l do if simP[1, i, j] > τ1 & d(i, j) < τ2 then cand.append(j) end end range = max(cand) −min(cand) if len(cand) > 1 & range/l > τ3 & range > τ4 then dist = [] for m ∈cand do dist.append(abs(m −i)) end jcloest = cand[argmin n dist[n]] for m ∈cand do if m ̸= jcloest&simP[1, i, m] ≤τ5 then cand.remove(m) end end end for m ∈cand do alignP[i, m] = 1 end end return alignP is considered as a language by itself. When extracting articles from Wikipedia dump, we removed the meta-page and disambiguation pages. We also removed sentences with less than 4 tokens and sentences that end with a colon. After the pre-processing and matching steps, there are 13,036 article pairs in which the simple article contains only one sentence. In most cases, that one sentence is aligned to the first sentence in the complex article. However, we find that the patterns of these sentence pairs are very repetitive (e.g., XXX is a city in XXX. XXX is a football player in XXX.). Therefore, we use regular expressions to filter out the sentences with repetitive patterns. Then, we use a BERT model fine-tuned on the WIKI-MANUAL dataset to compute the semantic similarity of each sentence pair and keep the ones with a similarity larger than a threshold 7956 tuned on the dev set. After filtering, we ended up with 970 aligned sentence pairs in total from these 13,036 article pairs. C Sentence Alignment on Wikipedia In this section, we compare different approaches for sentence alignment on the WIKI-MANUAL dataset. Tables 12 and 13 report the performance for Task 1 (aligned + partially-aligned vs. not-aligned) on dev and test set. To generate prediction for MASSAlign, CATS and two BERTfinetune methods, we first utilize the method in §3.2 to select candidate sentence pairs, as we found this step helps to improve their accuracy. Then we apply the similarity metric from each model to calculate the similarity of each candidate sentence pair. We tune a threshold for max f1 on the dev set and apply it to the test set. Candidate sentence pairs with a similarity larger than the threshold will be predicted as aligned, otherwise not-aligned. Sentence pairs that are not selected as candidates will also be predicted as not-aligned. 
Dev set P R F MASSAlign (Paetzold et al., 2017) 72.9 79.5 76.1 CATS (ˇStajner et al., 2018) 65.6 82.7 73.2 BERTfinetune (NEWSELA-MANUAL) 82.6 83.9 83.2 BERTfinetune (WIKI-MANUAL) 87.9 85.4 86.6 + ParaAlign 88.6 85.4 87.0 Our CRF Aligner (WIKI-MANUAL) 92.4 85.8 89.0 Table 12: Performance of different sentence alignment methods on the WIKI-MANUAL dev set for Task 1. Test set P R F MASSAlign (Paetzold et al., 2017) 68.6 72.5 70.5 CATS (ˇStajner et al., 2018) 68.4 74.4 71.3 BERTfinetune (NEWSELA-MANUAL) 80.6 78.8 79.6 BERTfinetune (WIKI-MANUAL) 86.3 82.4 84.3 + ParaAlign 86.6 82.4 84.5 Our CRF Aligner (WIKI-MANUAL) 89.3 81.6 85.3 Table 13: Performance of different sentence alignment methods on the WIKI-MANUAL test set for Task 1. D Sentence Simplification D.1 Implementation Details We used Fairseq11 toolkit to implement our Transformer (Vaswani et al., 2017) and LSTM (Hochreiter and Schmidhuber, 1997) baselines. For the Transformer baseline, we followed BERTbase 12 11https://github.com/pytorch/fairseq 12https://github.com/google-research/bert Parameter Value Parameter Value hidden units 768 batch size 32 filter size 3072 max len 100 # of layers 12 activation GELU attention heads 12 dropout 0.1 loss CE seed 13 Table 14: Parameters of our Transformer model. Parameter Value Parameter Value hidden units 256 batch size 64 embedding dim 300 max len 100 # of layers 2 dropout 0.2 lr 0.001 optimizer Adam clipping 5 epochs 30 min vocab freq 3 seed 13 Table 15: Parameters of our LSTM model. architecture for both encoder and decoder. We initialized the encoder using BERTbase uncased checkpoint. Rothe et al. (2020) used a similar model for sentence fusion and summarization. We trained each model using Adam optimizer with a learning rate of 0.0001, linear learning rate warmup of 40k steps and 200k training steps. We tokenized the data with BERT WordPiece tokenizer. Table 14 shows the values of other hyperparameters. For the LSTM baseline, we replicated the LSTM encoder-decoder model used by Zhang and Lapata (2017). We preprocessed the data by replacing the named entities in a sentence using spaCy13 toolkit. We also replaced all the words with frequency less than three with <UNK>. If our model predicted <UNK>, we replaced it with the aligned source word (Jean et al., 2015). Table 15 summarizes the hyperparameters of LSTM model. We used 300dimensional GloVe word embeddings (Pennington et al., 2014) to initialize the embedding layer. 13https://spacy.io/ 7957 D.2 Human Evaluation Figure 4: Instructions provided to Amazon Mechanical Turk workers to evaluate generated simplified sentences. We used the same instructions as described in Kriz et al. (2019). 7958 D.3 Example System Outputs Examples Generated by LSTM baseline Complex (input) In Seattle , eight activists between ages 10 and 15 petitioned Washington state last year to adopt stricter science-based regulations to protect them against climate change. Simple (reference) In Seattle, eight youths between 10 to 15 years old petitioned the state of Washington to change the law. New (this work) in seattle , eight activists between ages 10 and 15 asked washington state last year to keep the environment safe. (Phrasal Praphrase + Deletion) Old (Xu et al., 2015) in seattle , eight activists between ages 10 and 15 asked washington state last year to adopt stricter science - based rules to protect them against climate change. 
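A minimal sketch of the preprocessing applied to the LSTM baseline described in Appendix D.1 above (named-entity anonymization with spaCy and UNK-ing of words with frequency below three); the typed placeholder format for entities is an assumption.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # any English pipeline with NER works

def anonymize_entities(sentence):
    """Replace named entities with typed placeholders, e.g. 'PERSON@1'."""
    doc = nlp(sentence)
    out, counts = sentence, Counter()
    for ent in doc.ents:
        counts[ent.label_] += 1
        out = out.replace(ent.text, f"{ent.label_}@{counts[ent.label_]}", 1)
    return out

def build_vocab(tokenized_corpus, min_freq=3):
    """Keep words appearing at least min_freq times; the rest map to <UNK>."""
    freq = Counter(tok for sent in tokenized_corpus for tok in sent)
    return {w for w, c in freq.items() if c >= min_freq}

def unk_sentence(tokens, vocab):
    return [tok if tok in vocab else "<UNK>" for tok in tokens]

print(anonymize_entities("Richie lives in Seattle with his teacher."))
```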
(Lexical Paraphrase) Complex (input) He recognized that another recommendation would be controversial with police groups: independent investigations after police shootings. Simple (reference) He admitted that police would not like one of the recommendations. New (this work) he thought another suggestion would be against the police. (Phrasal Paraphrase + Deletion) Old (Xu et al., 2015) he recognized that another suggestion would be controversial with police groups. (Lexical Paraphrase + Deletion) Complex (input) The Philadelphia Museum of Art has two famous selfie spots , both from the movie ” Rocky. ” Simple (reference) The Philadelphia Museum of Art has two big selfie spots. New (this work) the philadelphia museum of art has two picture spots. (Lexical Paraphrase + Deletion) Old (Xu et al., 2015) the philadelphia museum of art has two famous spots. (Deletion) Generated by Transformerbert Complex (input) Some Chicago residents got angry about it. Simple (reference) The plan made some people angry. New (this work) some people in chicago were angry. (Phrasal Paraphrase) Old (Xu et al., 2015) some chicago residents got angry. (Deletion) Complex (input) Emissions standards have been tightened , and the government is investing money in solar , wind and other renewable energy. Simple (reference) China has also put a great deal of money into solar, wind and other renewable energy. New (this work) the government is putting aside money for new types of energy. (Phrasal Paraphrase + Deletion) Old (Xu et al., 2015) the government is investing in money , wind and other equipment. (Lexical Paraphrase + Deletion) Complex (input) On Feb. 9 , 1864 , he was sitting for several portraits , including the one used for the $5 bill. Simple (reference) On Feb. 9, 1864, several artists painted pictures of him. New (this work) on feb. 9, 1864 , he was sitting for several portraits. (Deletion) Old (Xu et al., 2015) on feb 9, 1864 , he was sitting for several , including the $ 5 bill for the bill. (Deletion) Table 16: Examples of simplified sentences generated by LSTM and Transformerbert models trained on our new NEWSELA-AUTO (this work) and old existing NEWSELA (Xu et al., 2015) datasets. The source sentences are from our new NEWSELA-AUTO test set. Models trained on our new data rephrase the input sentence more often than the models trained on old data. Bold indicates deletions or paraphrases. Examples Complex (input) Now at age 9, his teachers say Richie reads at the level of a student in high school, and his vocabulary is well above those of his classmates. Simple (reference) He reads like a high school student. LSTM now he is age 9. EditNTS (Dong et al., 2019) he say his classmates are using a special job. Rerank (Kriz et al., 2019) but it is well above those of his classmates. Transfomerbert (this work) now at age 9 , his teachers say that richie reads high schoolwork. Complex (input) He can recall the special feeling when, at age 7, he built his first kite and saw it waft into the air. Simple (reference) He can remember the special feeling when he built his first kite . LSTM he can remember the people when he was age 7. EditNTS (Dong et al., 2019) , at age 7, he built his first kite. Rerank (Kriz et al., 2019) he could remember the special feeling when. Transfomerbert (this work) he can remember the special feeling when he was 7 years old. Complex (input) Following the action , two middle-aged brothers slid down a hill holding signs. 
Simple (reference) For example , two grownup brothers slid down a hill holding signs . LSTM <unk> middle - aged brothers slid down a hill holding signs. EditNTS (Dong et al., 2019) two middle-aged brothers , 14 , heard down a hill signs. Rerank (Kriz et al., 2019) he made a hill holding signs. Transfomerbert (this work) two middle-aged brothers slid down a hill holding signs. Table 17: Examples of simplifications generated by our best model, Transformerbert, and other baselines, namely, EditNTS (Dong et al., 2019), Rerank (Kriz et al., 2019) and LSTM on the old NEWSELA test set. Both LSTM and Transformerbert are trained on NEWSELA-AUTO. For EditNTS and Rerank, we use the system outputs shared by their original authors. Bold indicates new phrases introduced by the model. 7959 E Annotation Interface E.1 Crowdsourcing Annotation Interface Figure 5: Instructions and an example question for our crowdsourcing annotation on the Figure Eight platform. 7960 E.2 In-house Annotation Interface Figure 6: Annotation interface for correcting the crowdsourced alignment labels.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 777–788 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 777 Generative Semantic Hashing Enhanced via Boltzmann Machines Lin Zheng1, Qinliang Su1,2∗, Dinghan Shen3, Changyou Chen4 1School of Data and Computer Science, Sun Yat-sen University 2Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China 3Microsoft Dynamics 365 AI 4CSE Department, SUNY at Buffalo [email protected], [email protected] [email protected], [email protected] Abstract Generative semantic hashing is a promising technique for large-scale information retrieval thanks to its fast retrieval speed and small memory footprint. For the tractability of training, existing generative-hashing methods mostly assume a factorized form for the posterior distribution, enforcing independence among the bits of hash codes. From the perspectives of both model representation and code space size, independence is always not the best assumption. In this paper, to introduce correlations among the bits of hash codes, we propose to employ the distribution of Boltzmann machine as the variational posterior. To address the intractability issue of training, we first develop an approximate method to reparameterize the distribution of a Boltzmann machine by augmenting it as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution. Based on that, an asymptotically-exact lower bound is further derived for the evidence lower bound (ELBO). With these novel techniques, the entire model can be optimized efficiently. Extensive experimental results demonstrate that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains. 1 Introduction Similarity search, also known as nearest-neighbor search, aims to find items that are similar to a query from a large dataset. It plays an important role in modern information retrieval systems and has been used in various applications, ranging from plagiarism analysis (Stein et al., 2007) to content-based multimedia retrieval (Lew et al., 2006), etc. However, looking for nearest neighbors in the Euclidean space is often computationally ∗Corresponding author. prohibitive for large-scale datasets (calculating cosine similarity with high-dimensional vectors is computationally-expensive). Semantic hashing circumvents this problem by representing semantically similar documents with compact and binary codes. Accordingly, similar documents can be retrieved by evaluating the hamming distances of their hash codes much more efficiently. To obtain similarity-preserving hash codes, extensive efforts have been made to learn hash functions that can preserve the similarity information of original documents in the binary embedding space (Shen et al., 2015; Liu et al., 2016). Existing methods often require the availability of label information, which is often expensive to obtain in practice. To avoid the use of labels, generative semantic hashing methods have been developed. Specifically, the variational autoencoder (VAE) is first employed for semantic hashing in (Chaidaroon and Fang, 2017), and their model is termed VDSH. As a two-step process, the continuous document representations obtained from VAE are directly converted into binary hash codes. 
To resolve the two-step training problem, Bernoulli priors are leveraged as the prior distribution in NASH (Shen et al., 2018), replacing the continuous Gaussian prior in VDSH. By utilizing straight-through (ST) technique (Bengio et al., 2013), their model can be trained in an end-to-end manner, while keeping the merits of VDSH. Recently, to further improve the quality of hash codes, mixture priors are investigated in BMSH (Dong et al., 2019), while more accurate gradient estimators are studied in Doc2hash (Zhang and Zhu, 2019), both under a similar framework as NASH. Due to the training-tractability issue, the aforementioned generative hashing methods all assume a factorized variational form for the posterior, e.g., independent Gaussian in VDSH and independent Bernoulli in NASH, BMSH and Doc2hash. This assumption prevents the models from capturing 778 dependencies among the bits of hash codes. Although uncorrelated bits are sometimes preferred in hashing, as reported in (Zhang and Li, 2014), this may not apply to generative semantic hashing. This is due to the fact that the independent assumption could severely limit a model’s ability to yield meaningful representations and thereby produce high-quality hash codes. Moreover, as the code length increases (to e.g. 128 bits), the number of possible codes (or simply the code space) will be too large for a dataset with limited number of data points. As a result, we advocate that correlations among bits of a hash code should be considered properly to restrict the embedding space, and thus enable a model to work effectively under a broad range of code lengths. To introduce correlations among bits of hash codes, we propose to adopt the Boltzmann-machine (BM) distribution (Ackley et al., 1985) as a variational posterior to capture various complex correlations. One issue with this setting, relative to existing efficient training methods, is the inefficiency brought in training. To address this issue, we first prove that the BM distribution can be augmented as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution. Using this result, we then show that samples from BM distributions can be well reparameterized easily. To enable efficient learning, an asymptotically-exact lower bound of the standard evidence lower bound (ELBO) is further developed to deal with the notorious problem of the normalization term in Boltzmann machines. With the proposed reparameterization and the new lower bound, our model can be trained efficiently as the previous generative hashing models that preserve no bit correlations. Extensive experiments are conducted to evaluate the performance of the proposed model. It is observed that on all three public datasets considered, the proposed model achieves the best performance among all comparable models. In particular, thanks to the introduced correlations, we observe the performance of the proposed model does not deteriorate as the code length increases. This is surprising and somewhat contrary to what has been observed in other generative hashing models. 2 Preliminaries Generative Semantic Hashing In the context of generative semantic hashing, each document is represented by a sequence of words x = {w1, w2, · · · , w|x|}, where wi is the i-th word and is denoted by a |V |-dimensional one-hot vector; |x| and |V | denotes the document size (number of words) and the vocabulary size, respectively. 
Each document x is modeled by a joint probability: pθ(x, s) = pθ(x|s)p(s), (1) where s is a latent variable representing the document’s hash code. With the probability pθ(x, s) trained on a set of documents, the hash code for a document x can be derived directly from the posterior distribution pθ(s|x). In existing works, the likelihood function, or the decoder takes a form pθ(x|s) = Q|x| i=1 pθ(wi|s) with pθ(wi|s) ≜ exp(sT Ewi + bi) P|V | j=1 exp(sT Eej + bj) , (2) where E ∈Rm×|V | is the matrix connecting the latent code s and the one-hot representation of words; and ej is the one-hot vector with the only ‘1’ locating at the i-th position. Documents could be modelled better by using more expressive likelihood functions, e.g., deep neural networks, but as explained in (Shen et al., 2018), they are more likely to destroy the crucial distance-keeping property for semantic hashing. Thus, the simple form of (2) is often preferred in generative hashing. As for the prior distribution p(s), it is often chosen as the standard Gaussian distribution as in VDSH (Chaidaroon and Fang, 2017), or the Bernoulli distribution as in NASH and BMSH (Shen et al., 2018; Dong et al., 2019). Inference Probabilistic models can be trained by maximizing the log-likelihood log pθ(x) with pθ(x) = R s pθ(x, s)ds. However, due to the intractability of calculating pθ(x), we instead optimize its evidence lower bound (ELBO), i.e., L = Eqφ(s|x)  log pθ(x|s)p(s) qφ(s|x)  , (3) where qφ(s|x) is the proposed variational posterior parameterized by φ. It can be shown that log pθ(x) ≥L holds for any qφ(s|x) , and that if qφ(s|x) is closer to the true posterior pθ(s|x), the bound L will be tighter. Training then reduces to maximizing the lower bound L w.r.t. θ and φ. In VDSH (Chaidaroon and Fang, 2017), qφ(s|x) takes the form of an independent Gaussian distribution qφ(s|x) = N s|µφ(x), diag(σ2 φ(x))  , (4) 779 where µφ(x) and σφ(x) are two vector-valued functions parameterized by multi-layer perceptrons (MLP) with parameters φ. Later, in NASH and BMSH (Shen et al., 2018; Dong et al., 2019), qφ(s|x) is defined as an independent Bernoulli distribution, i.e., qφ(s|x) = Bernoulli(gφ(x)), (5) where gφ(x) is also vector-valued function parameterized by a MLP. The value at each dimension represents the probability of being 1 at that position. The MLP used to parameterize the posterior qφ(s|x) is also referred to as the encoder network. One key requirement for efficient end-to-end training of generative hashing method is the availability of reparameterization for the variational distribution qφ(s|x). For example, when qφ(s|x) is a Gaussian distribution as in (4), a sample s from it can be efficiently reparameterized as s = µφ(x) + σφ(x) · ϵ (6) with ϵ ∼N(0, I). When qφ(s|x) is a Bernoulli distribution as in (5), a sample from it can be reparameterized as s = sign (gφ(x) −ϵ) + 1 2 (7) where ϵ ∈Rm with elements ϵi ∼uniform(0, 1). With these reparameterization tricks, the lower bound in (3) can be estimated by the sample s as L ≈log pθ(x|sφ)p(sφ) qφ(sφ|x) , (8) where s has been denoted as sφ to explicitly indicate its dependence on φ. To train these hashing models, the backpropagation algorithm can be employed to estimate the gradient of (8) w.r.t. θ and φ easily. However, it is worth noting that in order to use the reparameterization trick, all existing methods assumed a factorized form for the proposed posterior qφ(s|x), as shown in (4) and (5). 
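For concreteness, the two factorized reparameterizations in (6) and (7) amount to only a few lines of code. The PyTorch sketch below is illustrative (tensor names such as `mu`, `sigma` and `g` stand in for the encoder-network outputs described above and are not taken from any released implementation); the non-differentiable sign(·) in (7) is left untouched here, since it is exactly what the straight-through estimator discussed later has to handle.

```python
import torch

def gaussian_sample(mu, sigma):
    # Eq. (6): s = mu(x) + sigma(x) * eps,  eps ~ N(0, I)
    eps = torch.randn_like(sigma)
    return mu + sigma * eps

def bernoulli_sample(g):
    # Eq. (7): s = (sign(g(x) - eps) + 1) / 2,  eps_i ~ Uniform(0, 1)
    # g holds per-bit probabilities of a '1'; sign(.) is not differentiable,
    # which is why NASH-style models need a straight-through estimator.
    eps = torch.rand_like(g)
    return (torch.sign(g - eps) + 1.0) / 2.0

# Hypothetical encoder outputs for a batch of 4 documents and 16-bit codes.
mu, sigma = torch.randn(4, 16), torch.rand(4, 16) * 0.1
g = torch.sigmoid(torch.randn(4, 16))
continuous_codes = gaussian_sample(mu, sigma)   # VDSH-style continuous codes
binary_codes = bernoulli_sample(g)              # NASH-style 0/1 codes
```

Both tricks presuppose the factorized posteriors in (4) and (5): every bit is sampled independently of the others.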
This suggests that the binary bits in hash codes are independent of each other, which is not the best setting in generative semantic hashing. 3 Correlation-Enhanced Generative Semantic Hashing In this section, we present a scalable and efficient approach to introducing correlations into the bits of hash codes, by using a Boltzmann-machine distribution as the variational posterior with approximate reparameterization. 3.1 Boltzmann Machine as the Variational Posterior Many probability distributions defined over binary variables s ∈{0, 1}m are able to capture the dependencies. Among them, the most famous one should be the Boltzmann-machine distribution (Ackley et al., 1985), which takes the following form: b(s) = 1 Z e 1 2 sT Σs+µT s, (9) where Σ ∈Rm×m and µ ∈Rm are the distribution parameters; and Z ≜P s e 1 2 sT Σs+µT s is the normalization constant. The Boltzmann-machine distribution can be adopted to model correlations among the bits of a hash code. Specifically, by restricting the posterior to the Boltzmann form qφ(s|x) = 1 Zφ e−Eφ(s) (10) and substituting it into the lower bound of (3), we can write the lower bound as: L = Eqφ(s|x)  logpθ(x|s)p(s) e−Eφ(s)  + log Zφ, (11) where Eφ(s) ≜−1 2sT Σφ(x)s −µT φ(x)s; and Σφ(x) and µφ(x) are functions parameterized by the encoder network with parameters φ and x as input. One problem with such modeling is that the expectation term Eqφ(s|x)[·] in (11) cannot be expressed in a closed form due to the complexity of qφ(s|x). Consequently, one cannot directly optimize the lower bound L w.r.t. θ and φ. 3.2 Reparameterization An alternative way is to approximate the expectation term by using the reparameterized form of a sample s from qφ(s|x), as was done in the previous uncorrelated generative hashing models (see (6) and (7)). Compared to existing simple variational distributions, there is no existing work on how to reparameterize the complicated Boltzmannmachine distribution. To this end, we first show that the Boltzmann-machine distribution can be equivalently written as the composition of an approximate correlated Gaussian distribution and a Bernoulli distribution. Proposition 1. A Boltzmann-machine distribution b(s) = 1 Z e 1 2 sT Σs+µT s with Σ ≻0 can be equivalently expressed as the composition of two distributions, that is, b(s) = Z p(s|r)p(r)dr, (12) 780 where p(r) = 1 Z Qm i=1(eri + 1) · N(r; µ, Σ); p(s|r) = Qm i=1 p(si|ri) with si and ri denoting the i-th element of s and r; and p(si|ri) ≜ Bernoulli(σ(ri)) with σ(·) being the sigmoid function. Proof. See Appendix A.1 for details. Based on Proposition 1, we can see that a sample from the Boltzmann-machine distribution qφ(s|x) in (10) can be sampled hierarchically as r ∼qφ(r|x) and s ∼Bernoulli(σ(r)), (13) where qφ(r|x)= 1 Z m Y i=1 (eri + 1) · N(r; µφ(x), Σφ(x)) (14) and σ(·) is applied to its argument element-wise. From the expression of qφ(r|x), we can see that for small values of ri, the influence of (eri + 1) on the overall distribution is negligible, and thus qφ(r|x) can be well approximated by the Gaussian distribution N(r; µφ(x), Σφ(x)). For relatively large ri, the term (eri + 1) will only influence the distribution mean, roughly shifting the Gaussian distribution N(r; µφ(x), Σφ(x)) by an amount approximately equal to its variance. For problems of interest in this paper, the variances of posterior distribution are often small, hence it is reasonable to approximate samples from qφ(r|x) by those from N(r; µφ(x), Σφ(x)). 
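Before relying on the Gaussian approximation just described, Proposition 1 itself can be sanity-checked numerically: for a tiny code length one can enumerate all 2^m codes, evaluate the Boltzmann probabilities of (9) exactly, and compare them with a Monte Carlo estimate of the hierarchical construction in (12). The NumPy sketch below does this with made-up parameter values; it is a verification aid only, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3                                   # tiny code length so all 2^m codes can be enumerated
mu = rng.normal(scale=0.3, size=m)
A = rng.normal(scale=0.3, size=(m, m))
Sigma = A @ A.T + 0.5 * np.eye(m)       # positive definite, as Proposition 1 requires

# Exact (normalized) Boltzmann-machine probabilities from Eq. (9).
codes = np.array([[(i >> b) & 1 for b in range(m)] for i in range(2 ** m)], dtype=float)
exact = np.exp(0.5 * np.einsum('ki,ij,kj->k', codes, Sigma, codes) + codes @ mu)
exact /= exact.sum()

# Monte Carlo estimate of the marginal in Eq. (12), using samples r ~ N(mu, Sigma)
# reweighted by prod_i (exp(r_i) + 1), i.e. the unnormalized p(r) of Proposition 1.
r = rng.multivariate_normal(mu, Sigma, size=200_000)
sig = 1.0 / (1.0 + np.exp(-r))
weights = np.prod(np.exp(r) + 1.0, axis=1)
mc = np.array([
    np.mean(weights * np.prod(sig ** s * (1.0 - sig) ** (1.0 - s), axis=1))
    for s in codes
])
mc /= mc.sum()

print(np.round(exact, 3))
print(np.round(mc, 3))   # should agree with the exact values up to Monte Carlo error
```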
With this approximation, we can now draw samples from Boltzmann-machine distribution qφ(s|x) in (10) approximately by the two steps below r ∼N(r; µφ(x), Σφ(x)), (15) s ∼Bernoulli(σ(r)). (16) For the Gaussian sample r ∼N(r; µφ(x), Σφ(x)), similar to (6), it can be reparameterized as r = µφ(x) + Lφ(x) · ϵ, (17) where Lφ(x) is the Cholesky decomposition matrix of Σφ(x) with Σφ(x) = Lφ(x)LT φ(x); and ϵ ∈Rm with ϵ ∼N(0, I). It should be noted that in practice, we can define the function Lφ(x) in advance and then obtain Σφ(x) as Σφ(x) = Lφ(x)LT φ(x), thus the Cholesky decomposition is not needed. Given the Gaussian sample r, similar to the reparameterization of Bernoulli variables in (7), we can reparameterize the Bernoulli sample s ∼ Bernoulli(σ(r)) as s = sign(σ(r)−u)+1 2 , where u ∈Rm with each element ui ∼uniform(0, 1). By combining the above reparameterizations, a sample from the Boltzmann-machine distribution qφ(s|x) can then be approximately reparameterized as sφ = sign (σ(µφ(x)+Lφ(x) · ϵ)−u)+1 2 , (18) where the subscript φ is to explicitly indicate that the sample s is expressed in terms of φ. With the reparameterization sφ, the expectation term in (11) can be approximated as log pθ(x|sφ)p(sφ) e−Eφ(sφ) . Consequently, the gradients of this term w.r.t. both θ and φ can be evaluated efficiently by backpropagation, with the only difficulty lying at the non-differentiable function sign(·) of sφ in (18). Many works have been devoted to estimate the gradient involving discrete random variables (Bengio et al., 2013; Jang et al., 2017; Maddison et al., 2017; Tucker et al., 2017; Grathwohl et al., 2018; Yin and Zhou, 2019). Here, we adopt the simple straight-through (ST) technique (Bengio et al., 2013), which has been found performing well in many applications. By simply treating the hard threshold function sign(·) as the identity function, the ST technique estimates the gradient as ∂sφ ∂φ ≈1 2 ∂[σ(µφ(x) + Lφ(x)ϵ) −u] ∂φ . (19) Then, the gradient of the first term in ELBO L w.r.t. φ can be computed efficiently by backpropagation. 3.3 An Asymptotically-Exact Lower Bound To optimize the ELBO in (11), we still need to calculate the gradient of log Zφ, which is known to be notoriously difficult. A common way is to estimate the gradient ∂log Zφ ∂φ by MCMC methods (Tieleman, 2008; Desjardins et al., 2010; Su et al., 2017a,b), which are computationally expensive and often of high variance. By noticing a special form of the ELBO (11), we develop a lower bound for the ELBO L, where the log Zφ term can be conveniently cancelled out. Specifically, we introduce another probability distribution h(s) and lower bound the original ELBO: eL = L −KL(h(s)||qφ(s|x)). (20) Since KL(·) ≥0, we have eL(θ, φ) ≤L holds for all h(s), i.e., eL is a lower bound of L, and equals to the ELBO L when h(s) = qφ(s|x). For the choice 781 of h(s), it should be able to reduce the gap between eL and L as much as possible, while ensuring that the optimization is tractable. Balancing on the two sides, a mixture distribution is used hk(s) = 1 k k X i=1 p(s|r(i)), (21) where k denotes the number of components; p(s|r(i)) is the multivariate Bernoulli distribution and r(i) is the i-th sample drawn from qφ(r|x) as defined in (14). By substituting hk(s) into (20) and taking the expectation w.r.t. r(i), we have eLk≜L−Eqφ(r(1···k)|x)[KL(hk(s)||qφ(s|x))] (22) where qφ(r(1··· ,k)|x) = Qk i=1 qφ(r(i)|x). It can be proved that the bound eLk gradually approaches the ELBO L as k increases, and finally equals to it as k →∞. 
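Before stating this result formally, note that the approximate reparameterization in (15)–(19) is compact enough to sketch directly. The PyTorch snippet below is illustrative only: it assumes the encoder emits `mu` and a factor `L` with Σ = LLᵀ (so, as noted above, no Cholesky step is needed), and it is not the authors' implementation.

```python
import torch

def boltzmann_sample_st(mu, L):
    """Approximate reparameterized sample from the Boltzmann posterior, Eqs. (15)-(18).

    mu: (batch, m) mean produced by the encoder network.
    L:  (batch, m, m) factor with Sigma = L @ L^T.
    Returns 0/1 codes; gradients flow through sign(.) via the straight-through trick.
    """
    eps = torch.randn_like(mu)
    r = mu + torch.einsum('bij,bj->bi', L, eps)    # Eq. (17): r = mu + L * eps
    u = torch.rand_like(mu)
    soft = torch.sigmoid(r) - u                    # argument of sign(.) in Eq. (18)
    hard = (torch.sign(soft) + 1.0) / 2.0
    # Straight-through estimate of Eq. (19): the forward pass uses the hard bits,
    # the backward pass treats sign(.) as identity, so gradients reach mu and L.
    return hard + 0.5 * (soft - soft.detach())

# Hypothetical encoder outputs: 16-bit codes for a batch of 4 documents.
mu = torch.randn(4, 16, requires_grad=True)
L = (0.1 * torch.randn(4, 16, 16)).requires_grad_()
s = boltzmann_sample_st(mu, L)
s.sum().backward()
print(mu.grad.shape, L.grad.shape)   # gradients exist despite the discrete codes
```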
Specifically, we have Proposition 2. For any integer k, the lower bound eLk of the ELBO satisfies the conditions: 1) eLk+1 ≥ eLk; 2) limk→∞eLk = L. Proof. See Appendix A.2 for details. By substituting L in (11) and hk(s) in (21) into (22), the bound can be further written as eLk = Eqφ(s|x)  logpθ(x|s)p(s) e−Eφ(s)  −Eqφ(r(1···k)|x)  Ehk(s)  log hk(s) e−Eφ(s)  , (23) where the log Zφ term is cancelled out since it appears in both terms but has opposite signs. For the first term in (23), as discussed at the end of Section 3.1, it can be approximated as log pθ(x|sφ)p(sφ) e−Eφ(sφ) . For the second term, each sample r(i) for i = 1, · · · , k can be approximately reparameterized like that in (17). Given the r(i) for i = 1, · · · , k, samples from hk(s) can also be reparameterized in a similar way as that for Bernoulli distributions in (7). Thus, samples drawn from r(1···k) ∼qφ(r(1···k)|x) and s ∼hk(s) are also reparameterizable, as detailed in Appendix A.3. By denoting this reparametrized sample as ˜sφ, we can approximate the second term in (23) as log hk(˜sφ) e−Eφ(˜sφ) . Thus the lower bound (23) becomes eLk ≈log pθ(x|sφ)p(sφ) e−Eφ(sφ) −log hk(˜sφ) e−Eφ(˜sφ) . (24) With the discrete gradient estimation techniques like the ST method, the gradient of eLk w.r.t. θ and φ can then be evaluated efficiently by backpropagation. Proposition 2 indicates that the exact eLk gets closer to the ELBO as k increases, so better bound can be expected for the approximated eLk as well when k increases. In practice, a moderate value of k is found to be sufficient to deliver a good performance. 3.4 Low-Rank Perturbation for the Covariance Matrix In the reparameterization of a Gaussian sample, rφ = µφ(x) + Lφ(x) · ϵ in (17), a m × m matrix Lφ(x) is required, with m denoting the length of hash codes. The elements of Lφ(x) are often designed as the outputs of neural networks parameterized by φ. Therefore, if m is large, the number of neural network outputs will be too large. To overcome this issue, a more parameter-efficient strategy called Low-Rank Perturbation is employed, which restricts covariance matrix to the form Σ = D + UU ⊤, (25) where D is a diagonal matrix with positive entries and U = [u1, u2, · · · uv] is a low-rank perturbation matrix with ui ∈Rm and v ≪m. Under this low-rank perturbed Σ, the Gaussian samples can be reparameterized as rφ = µφ(x) + D1/2 φ (x) · ϵ1 + Uφ(x) · ϵ2, (26) where ϵ1 ∼N (0, Im) and ϵ2 ∼N (0, Iv). We can simply replace (17) with the above expression in any place that uses r. In this way, the number of neural network outputs can be dramatically reduced from m2 to mv. 4 Related Work Semantic Hashing (Salakhutdinov and Hinton, 2009) is a promising technique for fast approximate similarity search. Locality-Sensitive Hashing, one of the most popular hashing methods (Datar et al., 2004), projects documents into low-dimensional hash codes in a randomized manner. However, the method does not leverage any information of data, and thus generally performs much worse than those data-dependent methods. Among the datadependent methods, one of the mainstream methods is supervised hashing, which learns a function that could output similar hash codes for semantically similar documents by making effective use of 782 the label information (Shen et al., 2015; Liu et al., 2016). Different from supervised methods, unsupervised hashing pays more attention to the intrinsic structure of data, without making use of the labels. 
Spectral hashing (Weiss et al., 2009), for instance, learns balanced and uncorrelated hash codes by seeking to preserve a global similarity structure of documents. Self-taught hashing (Zhang et al., 2010), on the other hand, focuses more on preserving local similarities among documents and presents a two-stage training procedure to obtain such hash codes. In contrast, to generate highquality hash codes, iterative quantization (Gong et al., 2013) aims to minimize the quantization error, while maximizing the variance of each bit at the same time. Among the unsupervised hashing methods, the idea of generative semantic hashing has gained much interest in recent years. Under the VAE framework, VDSH (Chaidaroon and Fang, 2017) was proposed to first learn continuous the documents’ latent representations, which are then cast into binary codes. While semantic hashing is achieved with generative models nicely, the twostage training procedure is problematic and is prone to result in local optima. To address this issue, NASH (Shen et al., 2018) went one step further and presented an integrated framework to enable the end-to-end training by using the discrete Bernoulli prior and the ST technique, which is able to estimate the gradient of functions with discrete variables. Since then, various directions have been explored to improve the performance of NASH. (Dong et al., 2019) proposed to employ the mixture priors to improve the model’s capability to distinguish documents from different categories, and thereby improving the quality of hash codes. On the other hand, a more accurate gradient estimator called Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) is explored in Doc2hash (Zhang and Zhu, 2019) to replace the ST estimator in NASH. More recently, to better model the similarities between different documents, (Hansen et al., 2019) investigated the combination of generative models and ranking schemes to generate hash codes. Different from the aforementioned generative semantic hashing methods, in this paper, we focus on how to incorporate correlations into the bits of hash codes. 5 Experiments 5.1 Experimental Setup Datasets Following previous works, we evaluate our model on three public benchmark datasets: i) Reuters21578, which consists of 10788 documents with 90 categories; ii) 20Newsgroups, which contains 18828 newsgroup posts from 20 different topics; iii) TMC, which is a collection of 21519 documents categorized into 22 classes. Training Details For the conveniences of comparisons, we use the same network architecture as that in NASH and BMSH. Specifically, a 2-layer feed-forward neural network with 500 hidden units and a ReLU activation function is used as an inference network, which receives the TF-IDF of a document as input and outputs the mean and covariance matrix of the Gaussian random variables r. During training, the dropout (Srivastava et al., 2014) is used to alleviate the overfitting issue, with the keeping probability selected from {0.8, 0.9} based on the performance on the validation set. The Adam optimizer (Kingma and Ba, 2014) is used to train our model, with the learning rate set to 0.001 initially and then decayed for every 10000 iterations. For all experiments on different datasets and lengths of hash codes, the rank v of matrix U is set to 10 and the number of component k in the distribution hk(s) is set to 10 consistently, although a systematic ablation study is conducted in Section 5.5 to investigate their impacts on the final performances. 
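To make the setup concrete, the sketch below combines the inference network of Section 5.1 with the low-rank reparameterization of (25)–(26). The layer arrangement, the exp(·) used to keep the diagonal of D positive, and the dropout rate are our assumptions; only the hidden size (500 units with ReLU), the code length (64), and the rank (v = 10) follow the reported configuration.

```python
import torch
import torch.nn as nn

class HashEncoder(nn.Module):
    """Sketch of the TF-IDF inference network plus the low-rank sampler of Eqs. (25)-(26)."""

    def __init__(self, vocab_size, code_len=64, rank=10, hidden=500, dropout=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
        )
        self.mu = nn.Linear(hidden, code_len)
        self.log_d = nn.Linear(hidden, code_len)       # diagonal of D, in log-space
        self.u = nn.Linear(hidden, code_len * rank)    # low-rank factor U
        self.code_len, self.rank = code_len, rank

    def forward(self, tfidf):
        h = self.body(tfidf)
        mu = self.mu(h)
        d = torch.exp(self.log_d(h))                   # positive diagonal entries of D
        U = self.u(h).view(-1, self.code_len, self.rank)
        # Eq. (26): r = mu + D^{1/2} * eps1 + U @ eps2, with Sigma = D + U U^T (Eq. 25)
        eps1 = torch.randn_like(mu)
        eps2 = torch.randn(tfidf.shape[0], self.rank)
        r = mu + d.sqrt() * eps1 + torch.einsum('bmv,bv->bm', U, eps2)
        return r, mu, d, U

enc = HashEncoder(vocab_size=10000)
r, mu, d, U = enc(torch.rand(4, 10000))
print(r.shape, U.shape)   # (4, 64) (4, 64, 10)
```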
Baselines The following unsupervised semantic hashing baselines are adopted for comparisons: Locality Sensitive Hashing (LSH) (Datar et al., 2004), Stack Restricted Boltzmann Machines (S-RBM) (Salakhutdinov and Hinton, 2009), Spectral Hashing (SpH) (Weiss et al., 2009), Self-Taught Hashing (STH) (Zhang et al., 2010), Variational Deep Semantic Hashing (VDSH) (Chaidaroon and Fang, 2017), Neural Architecture for Generative Semantic Hashing (NASH) (Shen et al., 2018), and Semantic Hashing model with a Bernoulli Mixture prior (BMSH)(Dong et al., 2019). Evaluation Metrics The performance of our proposed approach is measured by retrieval precision i.e., the ratio of the number of relevant documents to that of retrieved documents. A retrieved document is said to be relevant if its label is the same as that of the query one. Specifically, during the eval783 uating phase, we first pick out top 100 most similar documents for each query document according to the hamming distances of their hash codes, from which the precision is calculated. The precisions averaged over all query documents are reported as the final performance. 5.2 Results of Generative Semantic Hashing The retrieval precisions on datasets TMC, Reuters and 20Newsgroups are reported in Tables 1, 2 and 3, respectively, under different lengths of hash codes. Compared to the generative hashing method NASH without considering correlations, we can see that the proposed method, which introduces correlations among bits by simply employing the distribution of Boltzmann machine as the posterior, performs significantly better on all the three datasets considered. This strongly corroborates the benefits of taking correlations into account when learning the hash codes. From the tables, we can also observe that the proposed model even outperforms the BMSH, an enhanced variant of NASH that employs more complicated mixture distributions as a prior. Since only the simplest prior is used in the proposed model, larger performance gains can be expected if mixture priors are used as in BMSH. Notably, a recent work named RBSH is proposed in (Hansen et al., 2019), which improves NASH by specifically ranking the documents according to their similarities. However, since it employs a different data preprocessing technique as the existing works, we cannot include its results for a direct comparison here. Nevertheless, we trained our model on their preprocessed datasets and find that our method still outperforms it. For details about the results, please refer to Appendix A.4. Moreover, when examining the retrieval performance of hash codes under different lengths, it is observed that the performance of our proposed method never deteriorates as the code length increases, while other models start to perform poorly after the length of codes reaching a certain level. For the most comparable methods like VDSH, NASH and BMSH, it can be seen that the performance of 128 bits is generally much worse than that of 64 bits. This phenomenon is illustrated more clearly in Figure 1. This may attribute to the reason that for hash codes without correlations, the number of codes will increase exponentially as the code length increases. 
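For reference, the precision-at-100 protocol described above reduces to ranking by Hamming distance and checking label agreement. The NumPy sketch below (with random codes and labels purely for illustration) is not the authors' evaluation script, and excluding the query document from its own ranking is our assumption.

```python
import numpy as np

def precision_at_100(codes, labels, query_idx):
    """Rank all documents by Hamming distance to the query's hash code and
    return the fraction of the top 100 that share the query's label."""
    dist = np.count_nonzero(codes != codes[query_idx], axis=1)   # Hamming distances
    dist[query_idx] = codes.shape[1] + 1                         # push the query itself out of the top
    top = np.argsort(dist, kind='stable')[:100]
    return np.mean(labels[top] == labels[query_idx])

# Toy corpus: 1,000 random 64-bit codes with 5 document categories (illustrative only).
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(1000, 64), dtype=np.int8)
labels = rng.integers(0, 5, size=1000)
avg_prec = np.mean([precision_at_100(codes, labels, q) for q in range(100)])
print(round(float(avg_prec), 3))   # around 0.2 for random codes and 5 balanced classes
```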
Because the code space is too large, the probability of assigning similar items Method 8 bits 16 bits 32 bits 64 bits 128 bits LSH 0.4388 0.4393 0.4514 0.4553 0.4773 S-RBM 0.4846 0.5108 0.5166 0.5190 0.5137 SpH 0.5807 0.6055 0.6281 0.6143 0.5891 STH 0.3723 0.3947 0.4105 0.4181 0.4123 VDSH 0.4330 0.6853 0.7108 0.4410 0.5847 NASH 0.5849 0.6573 0.6921 0.6548 0.5998 BMSH n.a. 0.7062 0.7481 0.7519 0.7450 Ours 0.6959 0.7243 0.7534 0.7606 0.7632 Table 1: Precision of the top 100 retrieved documents on TMC dataset. Method 8 bits 16 bits 32 bits 64 bits 128 bits LSH 0.2802 0.3215 0.3862 0.4667 0.5194 S-RBM 0.5113 0.5740 0.6154 0.6177 0.6452 SpH 0.6080 0.6340 0.6513 0.6290 0.6045 STH 0.6616 0.7351 0.7554 0.7350 0.6986 VDSH 0.6859 0.7165 0.7753 0.7456 0.7318 NASH 0.7113 0.7624 0.7993 0.7812 0.7559 BMSH n.a. 0.7954 0.8286 0.8226 0.7941 Ours 0.7589 0.8212 0.8420 0.8465 0.8482 Table 2: Precision of the top 100 retrieved documents on Reuters dataset. to nearby binary codes may decrease significantly. But for the proposed model, since the bits of hash codes are correlated to each other, the effective number of codes can be determined by the strength of correlations among bits, effectively restricting the size of code space. Therefore, even though the code length increases continually, the performance of our proposed model does not deteriorate. 5.3 Empirical Study of Computational Efficiency To show the computational efficiency of our proposed method, we also report the average running time per epoch in GPU on TMC dataset, which is of the largest among the considered ones, in Table 4. As a benchmark, the average training time of vanilla NASH is 2.553s per epoch. It can be seen that because of to the use of low-rank parameterization of the covariance matrix, the proposed model can be trained almost as efficiently as vanilla NASH, but deliver a much better performance. 5.4 Hash Codes Visualization To further investigate the capability of different models in generating semantic-preserving binary codes, we project the hash codes produced by VDSH, NASH and our proposed model on 20Newsgroups datasets onto a two-dimensional plane by using the widely adopted UMAP technique (McInnes 784 8 16 32 64 128 Number of Bits 0.4 0.5 0.6 0.7 Precision (%) TMC 8 16 32 64 128 Number of Bits 0.3 0.4 0.5 0.6 0.7 0.8 Precision (%) Reuters LSH S-RBM SpH STH VDSH NASH BMSH Ours 8 16 32 64 128 Number of Bits 0.1 0.2 0.3 0.4 0.5 0.6 Precision (%) 20Newsgroups Figure 1: Retrieval precisions of unsupervised hashing methods on three datasets under different code lengths. (a) VDSH (b) NASH (c) Ours Figure 2: Visualization of the 128-bit hash codes learned by VDSH, NASH and our model on 20Newsgroups dataset respectively. Each data point in the figure above denotes a hash code of the corresponding document, and each color represents one category. Method 8 bits 16 bits 32 bits 64 bits 128 bits LSH 0.0578 0.0597 0.0666 0.0770 0.0949 S-RBM 0.0594 0.0604 0.0533 0.0623 0.0642 SpH 0.2545 0.3200 0.3709 0.3196 0.2716 STH 0.3664 0.5237 0.5860 0.5806 0.5443 VDSH 0.3643 0.3904 0.4327 0.1731 0.0522 NASH 0.3786 0.5108 0.5671 0.5071 0.4664 BMSH n.a. 0.5812 0.6100 0.6008 0.5802 Ours 0.4389 0.5839 0.6183 0.6279 0.6359 Table 3: Precision of the top 100 retrieved documents on 20Newsgroups dataset. et al., 2018) and then visualize them on the twodimensional planes, as shown in Figure 2. 
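The projection step behind Figure 2 can be reproduced in spirit with the umap-learn package (McInnes et al., 2018). The sketch below uses random codes and near-default hyper-parameters for illustration; in particular, the Hamming metric is our choice, as the paper does not state which metric was used.

```python
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

# Hypothetical 128-bit hash codes and category labels for 2,000 documents.
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(2000, 128)).astype(float)
labels = rng.integers(0, 20, size=2000)

# Project the binary codes onto a 2-D plane for visualization.
embedding = umap.UMAP(n_components=2, metric='hamming', random_state=0).fit_transform(codes)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=3, cmap='tab20')
plt.title('UMAP projection of 128-bit hash codes')
plt.savefig('hash_codes_umap.png', dpi=150)
```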
It can be seen that the hash codes produced by VDSH are quite mixed for documents from different categories, while those produced by NASH are more distinguishable, consistent with the hypothesis that NASH is able to produce better codes than VDSH thanks to the end-to-end training. From the figure, we can further observe that the hash codes produced by our proposed method are the most distinguishable among all three methods considered, corroborating the benefits of introducing correlations among the bits of hash codes. Value of v Value of k Avg. Time (seconds) 1 1 2.934 1 5 3.124 5 1 3.137 5 5 3.353 10 5 3.403 10 10 3.768 Table 4: Average running time per epoch on TMC dataset under different values of v and k. 5.5 Analyses on the Impacts of v and k Ranks v Low-rank perturbed covariance matrix enables the proposed model to trade-off between complexity and performance. That is, larger v allows the model to capture more dependencies among latent variables, but the required computational complexity also increases. To investigate its impacts, we evaluate the performance of the 64bit hash codes obtained from the proposed model under different values of v, with the other key parameter k fixed to 10. The result is listed in the left half of Table 5. Notably, the proposed model with v = 0 is equivalent to NASH since there is not any correlation between the binary random variables. It can be seen that as the number of ranks 785 Value of v Precision Value of k Precision 0 0.7812 1 0.8300 1 0.8353 3 0.8391 5 0.8406 5 0.8395 10 0.8465 10 0.8465 Table 5: Left: Retrieval precisions under different values of v with k fixed to be 10 on Reuters dataset; Right: Retrieval precision under different values of k with v fixed to be 10 on Reuters dataset. increases, the retrieval precisions also increase, justifying the hypothesis that employing the posteriors with correlations can increase the model’s representational capacity and thereby improves the hash codes’ quality in turn. It is worth noting that the most significant performance improvement is observed between the models with v = 0 and v = 1, and then as the value of v continues to increase, the improvement becomes relatively small. This indicates that it is feasible to set the v to a relatively small value to save computational resources while retaining competitive performance. The number of mixture components k As stated in Section 3.3, increasing the number of components k in the mixture distribution hk(s) will reduce the gap between the lower bound eLk and the ELBO L. To investigate the impacts of k, the retrieval precisions of the proposed model are evaluated under different values of k, while setting the other key parameter v = 10. It can be seen from the right half of Table 5 that as the number of components k increases, the retrieval precision also increases gradually, suggesting that a tighter lower bound eLk can always indicate better hash codes. Hence, if more mixture components are used, better hash codes can be expected. Due to the sake of complexity, only 10 components are used at most in the experiments. 6 Conclusion In this paper, by employing the distribution of Boltzmann machine as the posterior, we show that correlations can be efficiently introduced into the bits. To facilitate training, we first show that the BM distribution can be augmented as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution. 
Then, an asymptoticallyexact lower bound of ELBO is further developed to tackle the tricky normalization term in Boltzmann machines. Significant performance gains are observed in the experiments after introducing correlations into the bits of hash codes. Acknowledgements This work is supported by the National Natural Science Foundation of China (NSFC) (No. 61806223, U1711262, U1501252, U1611264, U1711261), National Key R&D Program of China (No. 2018YFB1004404), and Fundamental Research Funds for the Central Universities (No. 191gjc04). Also, CC appreciates the support from Yahoo! Research. References David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. 1985. A learning algorithm for boltzmann machines. Cognitive Science, 9(1):147 – 169. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. arXiv preprint arXiv:1308.3432. Suthee Chaidaroon and Yi Fang. 2017. Variational deep semantic hashing for text documents. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’17, pages 75–84, New York, USA. Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. 2004. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Twentieth Annual Symposium on Computational Geometry, SCG ’04, pages 253–262, New York, USA. Guillaume Desjardins, Aaron Courville, Yoshua Bengio, Pascal Vincent, and Olivier Delalleau. 2010. Tempered markov chain monte carlo for training of restricted boltzmann machines. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 145–152, Chia Laguna Resort, Sardinia, Italy. Wei Dong, Qinliang Su, Dinghan Shen, and Changyou Chen. 2019. Document hashing with mixture-prior generative models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5229–5238, Hong Kong, China. Yunchao Gong, Svetlana Lazebnik, Albert Gordo, and Florent Perronnin. 2013. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 35(12):2916–2929. 786 Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, and David Duvenaud. 2018. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. In International Conference on Learning Representations (ICLR 2018). Casper Hansen, Christian Hansen, Jakob Grue Simonsen, Stephen Alstrup, and Christina Lioma. 2019. Unsupervised neural generative semantic hashing. In Proceedings of the 42Nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’19, pages 735–744, New York, USA. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparametrization with gumble-softmax. In International Conference on Learning Representations (ICLR 2017). Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980. Michael S. Lew, Nicu Sebe, Chabane Djeraba, and Ramesh Jain. 2006. Content-based multimedia information retrieval: State of the art and challenges. ACM Trans. Multimedia Comput. Commun. Appl., 2(1):1–19. H. Liu, R. Wang, S. Shan, and X. Chen. 2016. Deep supervised hashing for fast image retrieval. 
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2064–2072. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations (ICLR 2017). Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. 2018. Umap: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861. Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969 – 978. Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Ricardo Henao, and Lawrence Carin. 2018. NASH: Toward end-to-end neural architecture for generative semantic hashing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2041–2050, Melbourne, Australia. F. Shen, C. Shen, W. Liu, and H. T. Shen. 2015. Supervised discrete hashing. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 37–45. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Benno Stein, Sven Meyer zu Eissen, and Martin Potthast. 2007. Strategies for retrieving plagiarized documents. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’07, pages 825–826, New York, USA. Qinliang Su, Lawrence Carin, et al. 2017a. A probabilistic framework for nonlinearities in stochastic neural networks. In Advances in Neural Information Processing Systems, pages 4486–4495. Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, and Lawrence Carin. 2017b. Unsupervised learning with truncated gaussian graphical models. In ThirtyFirst AAAI Conference on Artificial Intelligence. Tijmen Tieleman. 2008. Training restricted boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 1064–1071, New York, USA. George Tucker, Andriy Mnih, Chris J Maddison, John Lawson, and Jascha Sohl-Dickstein. 2017. Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models. In Advances in Neural Information Processing Systems, pages 2627–2636. Yair Weiss, Antonio Torralba, and Rob Fergus. 2009. Spectral hashing. In Advances in Neural Information Processing Systems, pages 1753–1760. Mingzhang Yin and Mingyuan Zhou. 2019. ARM: Augment-REINFORCE-merge gradient for stochastic binary networks. In International Conference on Learning Representations. Dell Zhang, Jun Wang, Deng Cai, and Jinsong Lu. 2010. Self-taught hashing for fast similarity search. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’10, pages 18–25, New York, USA. Dongqing Zhang and Wu-Jun Li. 2014. Large-scale supervised multimodal hashing with semantic correlation maximization. In Twenty-Eighth AAAI Conference on Artificial Intelligence. Yifei Zhang and Hao Zhu. 2019. Doc2hash: Learning discrete latent variables for documents retrieval. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2235– 2240, Minneapolis, Minnesota. 
787 A Appendices A.1 Proof of Proposition 1 Proof. Making use of completing the square technique, the joint distribution of r and s can be decomposed as: q(s, r) = q(s|r)q(r) = e−1 2 (r−µ)⊤Σ−1(r−µ)+r⊤s |2πΣ| 1 2 Z = e−1 2 [r−(Σs+µ)]⊤Σ−1[r−(Σs+µ)] |2πΣ| 1 2 Z eµ⊤s+ 1 2 s⊤Σs = q(r|s)q(s), where q(r|s) = N(r; Σs + µ, Σ), q(s) = 1 Z eµ⊤s+ 1 2 s⊤Σs. From above, we show that the marginal distribution q(s) is a Boltzmann machine distribution. A.2 Proof of Proposition 2 We show the following facts about the proposed lower bound of ELBO eLk. First, For any integer k, we have eLk+1 ≥eLk. For brevity we denote Eqφ(r(1,··· ,k)|x) as Er1..k. First, due to the symmetry of indices, the following equality holds: Er1..kEq(s|r(1))log hk(s)=Er1..kEq(s|r(i))log hk(s). From this, we have Er1..kEq(s|r(1)) log hk(s) = 1 k k X i=1 Er1..kEq(s|r(1)) log hk(s) = 1 k k X i=1 Er1..kEq(s|r(i)) log hk(s) = Er1..kEhk(s) log hk(s), and Er1..k+1Ehk+1(s) log hk+1(s) = 1 k + 1 k+1 X i=1 Er1..k+1Eq(s|r(i)) log hk+1(s) = Er1..k+1Eq(s|r(1)) log hk+1(s) = 1 k k X i=1 Er1..k+1Eq(s|r(i)) log hk+1(s) = Er1..k+1Ehk(s) log hk+1(s). (27) Applying the equality (27) gives us: eLk+1 −eLk = Er1..k [KL(hk(s)||q(s|x))] −Er1..k+1 [KL(hk+1(s)||q(s|x))] = Er1..k+1 [KL(hk(s)||q(s|x)) −KL(hk+1(s)||q(s|x))] =Er1..k+1  Ehk(s)loghk(s)−Ehk+1(s)loghk+1(s)  = Er1..k+1  Ehk(s)log hk(s)−Ehk(s)log hk+1(s)  = Er1..k+1 [KL(hk(s)||hk+1(s))] ≥0. We now show that limk→∞eLk = L. According to the strong law of large numbers, hk(s) = 1 k Pk j q(s|r(j)) converges to Eq(r|x) [q(s|r)] = q(s|x) almost surely. We then have lim k→∞Er1..k [KL(hk(s)||q(s|x))] = 0. Therefore, eLk approaches L as k approaches infinity. A.3 Derivation of reparameterization for hk(s) Recall that hk(s) = 1 k Pk j=1 q(s|r(j) φ ). We show that it can be easily reparameterized. Specifically, we could sample from such a mixture distribution through a two-stage procedure: (i) choosing a component c ∈{1, 2, · · · , k} from a uniform discrete distribution, which is then transformed as a k-dimensional one-hot vector ˜c; (ii) drawing a sample from the selected component, i.e. q(s|r(c) φ ). Moreover, we define a matrix Rφ(x) ∈Rm×k with its columns consisting of r(1) φ , r(2) φ , · · · , r(k) φ , each of which can be also reparameterized. In this way, a sample ˜sφ from the distribution hk(s) can be simply expressed as ˜sφ = sign (σ(Rφ˜c) −u) + 1 2 which can be seen as selecting a sample r(c) φ and then passing it through a perturbed sigmoid function. Therefore, during training, the gradients of φ are simply back-propagated through the chosen sample r(c) φ . A.4 Comparisons between RBSH and our method As discussed before, the main reason that we cited this paper but didn’t compare with it is that the 788 Number of Bits 20Newsgroup TMC RBSH Ours RBSH Ours 8 0.5190 0.5393 0.7620 0.7667 16 0.6087 0.6275 0.7959 0.7975 32 0.6385 0.6647 0.8138 0.8203 64 0.6655 0.6941 0.8224 0.8289 128 0.6668 0.7005 0.8193 0.8324 Table 6: Precision of the top 100 received documents on 20Newsgroup and TMC datasets. datasets in (Hansen et al., 2019) are preprocessed differently as ours. Therefore, it is inappropriate to include the performance of the model from (Hansen et al., 2019) into the comparisons of our paper directly. Our work is a direct extension along the research line of VDSH and NASH. In our experiments, we followed their setups and used the preprocessed datasets that are publicized by them. However, in (Hansen et al., 2019), the datasets are preprocessed by themselves. 
The preprocessing procedure influences the final performance greatly, as observed in the reported results. To see how our model performs compared to (Hansen et al., 2019), we evaluate our model on the 20Newsgroup and TMC datasets that are preprocessed by the method in (Hansen et al., 2019). The results are reported in Table 6, where RBSH is the model from (Hansen et al., 2019). We can see that using the same preprocessed datasets, our model overall performs better than RBSH, especially in the case of long codes. It should be emphasized that the correlation-introducing method proposed in this paper can be used with all existing VAE-based hashing models. In this paper, the base model is NASH, and when they are used together, we see a significant performance improvement. Since the RBSH is also a VAE-based hashing model, the proposed method can also be used with it to introduce correlations into the code bits, and significant improvements can also be expected.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961–7975 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7961 One Size Does Not Fit All: Generating and Evaluating Variable Number of Keyphrases Xingdi Yuan†∗ Tong Wang†∗ Rui Meng‡∗ Khushboo Thaker‡ Peter Brusilovsky‡ Daqing He‡ Adam Trischler† †Microsoft Research, Montr´eal ‡School of Computing and Information, University of Pittsburgh {eric.yuan, tong.wang}@microsoft.com [email protected] Abstract Different texts shall by nature correspond to different number of keyphrases. This desideratum is largely missing from existing neural keyphrase generation models. In this study, we address this problem from both modeling and evaluation perspectives. We first propose a recurrent generative model that generates multiple keyphrases as delimiter-separated sequences. Generation diversity is further enhanced with two novel techniques by manipulating decoder hidden states. In contrast to previous approaches, our model is capable of generating diverse keyphrases and controlling number of outputs. We further propose two evaluation metrics tailored towards the variable-number generation. We also introduce a new dataset (STACKEX) that expands beyond the only existing genre (i.e., academic writing) in keyphrase generation tasks. With both previous and new evaluation metrics, our model outperforms strong baselines on all datasets. 1 Introduction Keyphrase generation is the task of automatically predicting keyphrases given a source text. Desired keyphrases are often multi-word units that summarize the high-level meaning and highlight certain important topics or information of the source text. Consequently, models that can successfully perform this task should be capable of not only distilling high-level information from a document, but also locating specific, important snippets therein. To make the problem even more challenging, a keyphrase may or may not be a substring of the source text (i.e., it may be present or absent). Moreover, a given source text is usually associated with ∗These authors contributed equally. The order is determined by a fidget spinner. Dataset #Train #Valid #Test Mean Var %Pre KP20K ≈514k ≈20k ≈20k 5.3 14.2 63.3% IN S P E C – 1500 500 9.6 22.4 78.5% KR A P I V I N – 1844 460 5.2 6.6 56.2% NUS – 211 11.5 64.6 51.3% SE MEV A L – 144 100 15.7 15.1 44.5% ST A C KEX ≈298k ≈16k ≈16k 2.7 1.4 57.5% Table 1: Statistics of various datasets. Mean and Var indicate the mean and variance of target phrase numbers, %Pre denotes percentage of present keyphrases. a set of multiple keyphrases. Thus, keyphrase generation is an instance of the set generation problem, where both the size of the set and the size (i.e., the number of tokens in a phrase) of each element can vary depending on the source. Similar to summarization, keyphrase generation is often formulated as a sequence-to-sequence (Seq2Seq) generation task in most prior studies (Meng et al., 2017; Chen et al., 2018a; Ye and Wang, 2018; Chen et al., 2018b). Conditioned on a source text, Seq2Seq models generate phrases individually or as a longer sequence jointed by delimiting tokens. Since standard Seq2Seq models generate only one sequence at a time, thus to generate multiple phrases, a common approach is to over-generate using beam search (Reddy et al., 1977) with a large beam width. 
Models are then evaluated by taking a fixed number of top predicted phrases (typically 5 or 10) and comparing them against the ground truth keyphrases. Though this approach has achieved good empirical results, we argue that it suffers from two major limitations. Firstly, models that use beam search to generate multiple keyphrases generally lack the ability to determine the dynamic number of keyphrases needed for different source texts. Meanwhile, the parallelism in beam search also fails to model the inter-relation among the generated phrases, which can often result in diminished diversity in the output. Although certain existing models 7962 take output diversity into consideration during training (Chen et al., 2018a; Ye and Wang, 2018), the effort is significantly undermined during decoding due to the reliance on over-generation and phrase ranking with beam search. Secondly, the current evaluation setup is rather problematic, since existing studies attempt to match a fixed number of outputs against a variable number of ground truth keyphrases. Empirically, the number of keyphrases can vary drastically for different source texts, depending on a plethora of factors including the length or genre of the text, the granularity of keyphrase annotation, etc. For the several commonly used keyphrase generation datasets, for example, the average number of keyphrases per data point can range from 5.3 to 15.7, with variances sometimes as large as 64.6 (Table 1). Therefore, using an arbitrary, fixed number k to evaluate entire datasets is not appropriate. In fact, under this evaluation setup, the F1 score for the oracle model on the KP20K dataset is 0.858 for k = 5 and 0.626 for k = 10, which apparently poses serious normalization issues as evaluation metrics. To overcome these problems, we propose novel decoding strategies and evaluation metrics for the keyphrase generation task. The main contributions of this work are as follows: 1. We propose a Seq2Seq based keyphrase generation model capable of generating diverse keyphrases and controlling number of outputs. 2. We propose new metrics based on commonly used F1 score under the hypothesis of variable-size outputs from models, which results in improved empirical characteristics over previous metrics based on a fixed k. 3. An additional contribution of our study is the introduction of a new dataset for keyphrase generation: ST A C KEX. With its marked difference in genre, we expect the dataset to bring added heterogeneity to keyphrase generation evaluation. 2 Related Work 2.1 Keyphrase Extraction and Generation Traditional keyphrase extraction has been studied extensively in past decades. In most existing literature, keyphrase extraction has been formulated as a two-step process. First, lexical features such as part-of-speech tags are used to determine a list of phrase candidates by heuristic methods (Witten et al., 1999; Liu et al., 2011; Wang et al., 2016; Yang et al., 2017). Second, a ranking algorithm is adopted to rank the candidate list and the top ranked candidates are selected as keyphrases. A wide variety of methods were applied for ranking, such as bagged decision trees (Medelyan et al., 2009; Lopez and Romary, 2010), Multi-Layer Perceptron, Support Vector Machine (Lopez and Romary, 2010) and PageRank (Mihalcea and Tarau, 2004; Le et al., 2016; Wan and Xiao, 2008). Recently, Zhang et al. (2016); Luan et al. (2017); Gollapalli et al. (2017) used sequence labeling models to extract keyphrases from text; Subramanian et al. 
(2017) used Pointer Networks to point to the start and end positions of keyphrases in a source text; Sun et al. (2019) leveraged graph neural networks to extract keyphrases. The main drawback of keyphrase extraction is that sometimes keyphrases are absent from the source text, thus an extractive model will fail predicting those keyphrases. Meng et al. (2017) first proposed the CopyRNN, a neural model that both generates words from vocabulary and points to words from the source text. Based on the CopyRNN architecture, Chen et al. (2018a); Zhao and Zhang (2019) leveraged attention to help reducing duplication and improving coverage. Ye and Wang (2018) proposed semi-supervised methods by leveraging both labeled and unlabeled data for training. Chen et al. (2018b); Ye and Wang (2018) proposed to use structure information (e.g., title of source text) to improve keyphrase generation performance. Chan et al. (2019) introduced RL to the keyphrase generation task. Chen et al. (2019a) retrieved similar documents from training data to help producing more accurate keyphrases. 2.2 Sequence to Sequence Generation Sequence to Sequence (Seq2Seq) learning was first introduced by Sutskever et al. (2014); together with the soft attention mechanism of (Bahdanau et al., 2014), it has been widely used in natural language generation tasks. G¨ulc¸ehre et al. (2016); Gu et al. (2016) used a mixture of generation and pointing to overcome the problem of large vocabulary size. Paulus et al. (2017); Zhou et al. (2017) applied Seq2Seq models on summary generation tasks, while Du et al. (2017); Yuan et al. (2017) generated questions conditioned on documents and answers from machine comprehension 7963 datasets. Seq2Seq was also applied on neural sentence simplification (Zhang and Lapata, 2017) and paraphrase generation tasks (Xu et al., 2018). 3 Model Architecture Given a piece of source text, our objective is to generate a variable number of multi-word phrases. To this end, we opt for the sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) framework as the basis of our model, combined with attention and pointer softmax mechanisms in the decoder. Since each data example contains one source text sequence and multiple target phrase sequences (dubbed ON E2MA N Y, and each sequence can be of multi-word), two paradigms can be adopted for training Seq2Seq models. The first one (Meng et al., 2017) is to divide each ON E2MA N Y data example into multiple ON E2ON E examples, and the resulting models (e.g., CopyRNN) can generate one phrase at once and must rely on beam search technique to produce more unique phrases. To enable models to generate multiple phrases and control the number to output, we propose the second training paradigm ON E2SE Q, in which we concatenate multiple phrases into a single sequence with a delimiter ⟨sep⟩, and this concatenated sequence is then used as the target for sequence generation during training. An overview of the model’s structure is shown in Figure 1.1 Notations In the following subsections, we use w to denote input text tokens, x to denote token embeddings, h to denote hidden states, and y to denote output text tokens. Superscripts denote time-steps in a sequence, and subscripts e and d indicate whether a variable resides in the encoder or the decoder of the model, respectively. The absence of a superscript indicates multiplicity in the time dimension. L refers to a linear transformation and Lf refers to it followed by a non-linear activation function f. 
Angled brackets, ⟨⟩, denote concatenation. 3.1 Sequence to Sequence Generation We develop our model based on the standard Seq2Seq (Sutskever et al., 2014) model with attention mechanism (Bahdanau et al., 2014) and pointer softmax (G¨ulc¸ehre et al., 2016). Due to 1We release the code, datasets and model outputs for reproducing our results in https://github.com/memray/ OpenNMT-kpg-release. space limit, we describe this basic Seq2Seq model in Appendix A. 3.2 Mechanisms for Diverse Generation There are usually multiple keyphrases for a given source text because each keyphrase represents certain aspects of the text. Therefore keyphrase diversity is desired for the keyphrase generation. Most previous keyphrase generation models generate multiple phrases by over-generation, which is highly prone to generate similar phrases due to the nature of beam search. Given our objective to generate variable numbers of keyphrases, we need to adopt new strategies for achieving better diversity in the output. Recall that we represent variable numbers of keyphrases as delimiter-separated sequences. One particular issue we observed during error analysis is that the model tends to produce identical tokens following the delimiter token. For example, suppose a target sequence contains n delimiter tokens at time-steps t1, . . . , tn. During training, the model is rewarded for generating the same delimiter token at these time-steps, which presumably introduces much homogeneity in the corresponding decoder states ht1 d , . . . , htn d . When these states are subsequently used as inputs at the time-steps immediately following the delimiter, the decoder naturally produces highly similar distributions over the following tokens, resulting in identical tokens being decoded. To alleviate this problem, we propose two plug-in components for the sequential generation model. 3.2.1 Semantic Coverage We propose a mechanism called semantic coverage that focuses on the semantic representations of generated phrases. Specifically, we introduce another uni-directional recurrent model GRUSC (dubbed target encoder) which encodes decoder-generated tokens yτ, where τ ∈[0, t), into hidden states ht SC. This state is then taken as an extra input to the decoder GRU, modifying equation of the decoder GRU to: ht d = GRUd(⟨xt d, ht SC⟩, ht−1 d ). (1) If the target encoder were to be updated with the training signal from generation (i.e., backpropagating error from the decoder GRU to the target encoder), the resulting decoder is essentially a 2-layer GRU with residual connections. Instead, inspired 7964 Source Encoder MLP <s> linear PCA <sep> <sep> linear PCA <sep> convex function convex function <sep> SVD Decoder Target Encoder : A : C : B </s> SVD Figure 1: The architecture of the proposed model for improving keyphrase diversity. A represents last states of a bi-directional source encoder; B represents the last state of target encoder; C indicates decoder states where target tokens are either delimiters or end-of-sentence tokens. During orthogonal regularization, all C states are used; during target encoder training, we maximize mutual information between states A with B. Red dash arrow indicates a detached path, i.e., no back-propagation through such path. by previous representation learning works (Logeswaran and Lee, 2018; van den Oord et al., 2018; Hjelm et al., 2018), we train the target encoder in an self-supervised fashion (Figure 1). 
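Equation (1) amounts to a one-line change to the decoder cell: the current input embedding is concatenated with the target-encoder state before the recurrent update. Before turning to the self-supervised objective, the PyTorch sketch below illustrates this step with made-up sizes; the `.detach()` call mirrors the dashed (detached) path in Figure 1 and is not taken from the released OpenNMT-kpg code.

```python
import torch
import torch.nn as nn

emb_size, dec_size, sc_size = 128, 256, 128          # illustrative dimensions

decoder_cell = nn.GRUCell(emb_size + sc_size, dec_size)   # GRU_d in Eq. (1)
target_encoder = nn.GRUCell(emb_size, sc_size)             # GRU_SC over generated tokens

x_t = torch.randn(4, emb_size)      # embedding of the previously generated token
h_sc = torch.zeros(4, sc_size)      # target-encoder state over y^0 .. y^{t-1}
h_d = torch.zeros(4, dec_size)      # decoder state h_d^{t-1}

h_sc = target_encoder(x_t, h_sc)    # encode the generated prefix
# Eq. (1): the decoder consumes <x_t, h_SC>; detaching h_sc keeps the target
# encoder trained only by the self-supervised loss, not the generation loss.
h_d = decoder_cell(torch.cat([x_t, h_sc.detach()], dim=-1), h_d)
```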
Specifically, due to the autoregressive nature of the RNN-based decoder, we follow Contrastive Predictive Coding (CPC) (van den Oord et al., 2018), where a NoiseContrastive Estimation(NCE) loss is used to maximize a lower bound on mutual information. That is, we extract target encoder’s final hidden state vector hM SC, where M is the length of target sequence, and use it as a general representation of the target phrases. We train by maximizing the mutual information between these phrase representations and the final state of the source encoder hT e as follows. For each phrase representation vector hM SC, we take the encodings HT e = {hT e,1, . . . , hT e,N} of N different source texts, where hT e,true is the encoder representation for the current source text, and the remaining N −1 are negative samples (sampled at random) from the training data. The target encoder is trained to minimize the classification loss: LSC = −log g(hT e,true, hM SC) P i∈[1,N] g(hT e,i, hM SC), g(ha, hb) = exp(h⊤ a Bhb) (2) where B is bi-linear transformation. The motivation here is to constrain the overall representation of generated keyphrase to be semantically close to the overall meaning of the source text. With such representations as input to the decoder, the semantic coverage mechanism can potentially help to provide useful keyphrase information and guide generation. 3.2.2 Orthogonal Regularization We also propose orthogonal regularization, which explicitly encourages the delimiter-generating decoder states to be different from each other. This is inspired by Bousmalis et al. (2016), who use orthogonal regularization to encourage representations across domains to be as distinct as possible. Specifically, we stack the decoder hidden states corresponding to delimiters together to form matrix H = ⟨ht1 d , . . . , htn d ⟩and use the following equation as the orthogonal regularization loss: LOR = H⊤H ⊙(1 −In) 2 , (3) where H⊤is the matrix transpose of H, In is the identity matrix of rank n, ⊙indicates element wise multiplication, ∥M∥2 indicates L2 norm of each element in a matrix M. This loss function prefers orthogonality among the hidden states ht1 d , . . . , htn d and thus improves diversity in the tokens following the delimiters. 3.2.3 Training Loss We adopt the widely used negative log-likelihood loss in our sequence generation model, denoted as LNLL. The overall loss we use for optimization is: L = LNLL + λOR · LOR + λSC · LSC, (4) where λOR and λSC are hyper-parameters. 3.3 Decoding Strategies According to different task requirements, various decoding methods can be applied to generate the target sequence y. Prior studies Meng et al. (2017); Yang et al. (2017) focus more on generating excessive number of phrases by leveraging beam 7965 search to proliferate the output phrases. In contrast, models trained under ON E2SE Q paradigm are capable of determining the proper number of phrases to output. In light of previous research in psychology (Van Zandt and Townsend, 1993; Forster and Bednall, 1976), we name these two decoding/search strategies as Exhaustive Decoding and Self-terminating Decoding, respectively, due to their resemblance to the way humans behave in serial memory tasks. Simply speaking, the major difference lies in whether a model is capable of controlling the number of phrases to output. 
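Before detailing the two decoding strategies, the auxiliary losses in (2)–(3) and the combined objective (4) can be written down compactly. The sketch below uses illustrative tensor shapes and made-up λ values; reading ∥·∥2 in (3) as a Frobenius-style norm over the off-diagonal entries is our interpretation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def semantic_coverage_loss(h_src, h_tgt, B):
    """Eq. (2): NCE loss with bilinear score g(a, b) = exp(a^T B b).
    h_src: (N, d_e) final source-encoder states; by convention here, row 0 is the
           true source and rows 1..N-1 are negative samples from the batch.
    h_tgt: (d_t,)   final target-encoder state h_SC^M for the current example."""
    scores = h_src @ B @ h_tgt                       # (N,) logits a^T B b
    return F.cross_entropy(scores.unsqueeze(0), torch.zeros(1, dtype=torch.long))

def orthogonal_regularization(H):
    """Eq. (3): norm of H^T H with the diagonal masked out, where the columns of H
    are the decoder states taken at delimiter positions (H has shape (d, n))."""
    n = H.shape[1]
    gram = H.t() @ H
    off_diag = gram * (1.0 - torch.eye(n))
    return torch.norm(off_diag)

# Illustrative shapes: 8 candidate sources, 4 delimiter states, 256-dim decoder states.
B = nn.Parameter(torch.randn(256, 128) * 0.01)
l_sc = semantic_coverage_loss(torch.randn(8, 256), torch.randn(128), B)
l_or = orthogonal_regularization(torch.randn(256, 4))
loss = 1.0 * l_sc + 0.3 * l_or    # Eq. (4) with made-up lambdas; L_NLL is omitted here
```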
We describe the detailed decoding strategies used in this study as follows: 3.3.1 Exhaustive Decoding As traditional keyphrase tasks evaluate models with a fixed number of top-ranked predictions (say Fscore @5 and @10), existing keyphrase generation studies have to over-generate phrases by means of beam search (commonly with a large beam size, e.g., 150 and 200 in (Chen et al., 2018b; Meng et al., 2017), respectively), a heuristic search algorithm that returns K approximate optimal sequences. For the ON E2ON E setting, each returned sequence is a unique phrase itself. But for ON E2SE Q, each produced sequence contains several phrases and additional processes (Ye and Wang, 2018) are needed to obtain the final unique (ordered) phrase list. It is worth noting that the time complexity of beam search is O(Bm), where B is the beam width, and m is the maximum length of generated sequences. Therefore the exhaustive decoding is generally very computationally expensive, especially for ON E2SE Q setting where m is much larger than in ON E2ON E. It is also wasteful as we observe that less than 5% of phrases generated by ON E2SE Q models are unique. 3.3.2 Self-terminating Decoding An innate characteristic of keyphrase tasks is that the number of keyphrases varies depending on the document and dataset genre, therefore dynamically outputting a variable number of phrases is a desirable property for keyphrase generation models 2. Since our model is trained to generate a variable number of phrases as a single sequence joined by delimiters, we can obtain multiple phrases by simply decoding a single sequence for each given 2Note this is fundamentally different from other NLG tasks. In specific, the number of keyphrases is variable, the length of each keyphrase is also variable. source text. The resulting model thus implicitly performs the additional task of dynamically estimating the proper size of the target phrase set: once the model believes that an adequate number of phrases have been generated, it outputs a special token </s> to terminate the decoding process. One notable attribute of the self-terminating decoding strategy is that, by generating a set of phrases in a single sequence, the model conditions its current generation on all previously generated phrases. Compared to the exhaustive strategy (i.e., phrases being generated independently by beam search in parallel), our model can model the dependency among its output in a more explicit fashion. Additionally, since multiple phrases are decoded as a single sequence, decoding can be performed more efficiently than exhaustive decoding by conducting greedy search or beam search on only the top-scored sequence. 4 Evaluating Keyphrase Generation Formally, given a source text, suppose that a model predicts a list of unique keyphrases ˆY = (ˆy1, . . . , ˆym) ordered by the quality of the predictions ˆyi, and that the ground truth keyphrases for the given source text is the oracle set Y. When only the top k predictions ˆY:k = (ˆy1, . . . , ˆymin(k,m)) are used for evaluation, precision, recall, and F1 score are consequently conditioned on k and defined as: P@k = | ˆY:k ∩Y| | ˆY:k| , R@k = | ˆY:k ∩Y| |Y| , F1@k = 2 ∗P@k ∗R@k P@k + R@k . (5) As discussed in Section 1, the number of generated keyphrases used for evaluation can have a critical impact on the quality of the resulting evaluation metrics. Here we compare three choices of k and the implications on keyphrase evaluation for each choice: • F1@k: where k is a pre-defined constant (usually 5 or 10). 
Due to the high variance of the number of ground truth keyphrases, it is often that | ˆY:k| ≤k < |Y|, and thus R@k — and in turn F1@k — of an oracle model can be smaller than 1. This undesirable property is unfortunately prevalent in the evaluation metrics adopted by all existing keyphrase generation studies to our knowledge. A simple remedy is to set k as a variable number which is specific to each data example. Here we define two new metrics: 7966 Kp20K Inspec Krapivin NUS SemEval Model @5 @10 @O @5 @10 @O @5 @10 @O @5 @10 @O @5 @10 @O Abstractive Neural CopyRNN (Meng et al.) 32.8 25.5 – 29.2 33.6 – 30.2 25.2 – 34.2 31.7 – 29.1 29.6 – CopyRNN* 31.7 27.3 33.5 24.4 28.9 29.0 30.5 26.6 32.5 37.6 35.2 40.6 31.8 31.8 31.7 CorrRNN (Chen et al.) 31.8 27.8 35.8 33.0 32.0 32.0 ParaNetT +CoAtt (Zhao and Zhang) 36.0 28.9 29.6 35.7 32.9 28.2 36.0 35.0 31.1 31.2 catSeqTG-2RF1† (Chan et al.) 32.1 35.7 25.3 28.0 30.0 34.8 37.5 25.5 28.7 29.8 KG-KE-KR-M† (Chen et al.) 31.7 28.2 38.8 25.7 28.4 31.4 27.2 25.0 31.7 28.9 28.6 38.4 20.2 22.3 30.3 CatSeq (Ours) 31.4 27.3 31.9 29.0 30.0 30.7 30.7 27.4 32.4 35.9 34.9 38.3 30.2 30.6 31.0 CatSeqD (Ours) 34.8 29.8 35.7 27.6 33.3 33.1 32.5 28.5 37.1 37.4 36.6 40.6 32.7 35.2 35.7 Extractive IR TfIdf (Hasan and Ng) 7.2 9.4 6.3 16.0 24.4 20.8 6.7 9.3 6.8 11.2 14.0 12.2 8.8 14.7 11.3 TextRank (Mihalcea and Tarau) 18.1 15.1 18.4 28.6 33.9 33.5 18.5 16.0 21.1 23.0 21.6 23.8 21.7 22.6 22.9 KEA (Witten et al.) 4.6 4.4 5.1 2.2 2.2 2.2 1.8 1.7 1.7 7.3 7.1 8.1 6.8 6.5 6.6 Maui (Medelyan et al.) 0.5 0.5 0.4 3.5 4.6 3.9 0.5 0.7 0.6 0.4 0.6 0.6 1.1 1.4 1.1 Extractive Neural DivGraphPointer (Sun et al.) 36.8 29.2 38.6 41.7 46.0 40.2 40.1 38.9 36.3 29.7 w/ Additional Data Semi-Multi (Ye and Wang) 32.8 26.4 32.8 31.8 32.3 25.4 36.5 32.6 31.9 31.2 TG-Net (Chen et al.) 37.2 31.5 31.5 38.1 34.9 29.5 40.6 37.0 31.8 32.2 Table 2: Performance (F1-score) of present keyphrase prediction on scientific publications datasets. Best/secondbest performing score in each column is highlighted with bold/underline. We also list results from literature where models that are not directly comparable (i.e., models leverage additional data and pure extractive models). Note model names with † represent its F1@O is computed by us using existing works’ released keyphrase predictions.3 • F1@O: O denotes the number of oracle (ground truth) keyphrases. In this case, k = |Y|, which means for each data example, the number of predicted phrases taken for evaluation is the same as the number of ground truth keyphrases. • F1@M: M denotes the number of predicted keyphrases. In this case, k = | ˆY| and we simply take all the predicted phrases for evaluation without truncation. By simply extending the constant number k to different variables accordingly, both F1@O and F1@M are capable of reflecting the nature of variable number of phrases for each document, and a model can achieve the maximum F1 score of 1.0 if and only if it predicts the exact same phrases as the ground truth. Another merit of F1@O is that it is independent from model outputs, therefore we can use it to compare existing models. 5 Datasets and Experiments In this section, we report our experiment results on multiple datasets and compare with existing models. We use CatSeq to refer to the delimiter3We acknowledge that F1@O scores of Chan et al. (2019) and Chen et al. (2019a) might be not completely comparable with ours. This is due to additional post-processing and filtering methods might have been applied in different work. 
We elaborate the data pre-processing and evaluation protocols used in this work in Appendix E. concatenated sequence-to-sequences model described in Section 3; CatSeqD refers to the model augmented with orthogonal regularization and semantic coverage mechanism. To construct target sequences for training CatSeq and CatSeqD, ground truth keyphrases are sorted by their order of first occurrence in the source text. Keyphrases that do not appear in the source text are appended to the end. This order may guide the attention mechanism to attend to source positions in a smoother way. Implementation details can be found in Appendix D. As for the pre-processing and evaluation, we follow the same steps as in (Meng et al., 2017). More details are provide in Appendix E for reproducing our results. We include a set of existing models (Meng et al., 2017; Chen et al., 2018a; Chan et al., 2019; Zhao and Zhang, 2019; Chen et al., 2019a) as baselines, they all share same behavior of abstractive keyphrase generation with our proposed model. Specially for computing existing model’s scores with our proposed new metrics (F1@O and F1@M), we implemented our own version of CopyRNN (Meng et al., 2017) based on their open sourced code, denoted as CopyRNN*. We also report the scores of models from Chan et al. and Chen et al. based on their publicly released outputs. We also include a set of models that use sim7967 Present Absent Model F1@5 F1@10 F1@O R@10 R@50 TfIdf 8.0 8.9 5.2 TextRank 12.1 10.1 11.6 KEA 4.9 4.8 5.3 Maui 35.8 23.3 51.8 CopyRNN* 44.2 30.3 66.2 48.8 66.0 CatSeq 48.3 45.5 63.5 40.7 42.2 CatSeqD 48.7 43.9 65.6 54.8 65.7 Table 3: Model performance on STACKEX dataset. ilar strategies but can not directly compare with. This includes four non-neural extractive models: TfIdf (Hasan and Ng, 2010), TextRank (Mihalcea and Tarau, 2004), KEA (Witten et al., 1999), and Maui (Medelyan et al., 2009); one neural extractive model (Sun et al., 2019); and two neural models that use additional data (e.g., title) (Ye and Wang, 2018; Chen et al., 2019b). In Section 5.3, we apply the self-terminating decoding strategy. Since no existing model supports such decoding strategy, we only report results from our proposed models. They can be used for comparison in future studies. 5.1 Experiments on Scientific Publications Our first dataset consists of a collection of scientific publication datasets, namely KP20K, INSPEC, KR A P I V I N, NUS, and SE MEV A L, that have been widely used in existing literature (Meng et al., 2017; Chen et al., 2018a; Ye and Wang, 2018; Chen et al., 2018b; Chan et al., 2019; Zhao and Zhang, 2019; Chen et al., 2019a; Sun et al., 2019). KP20K, for example, was introduced by Meng et al. (2017) and comprises more than half a million scientific publications. For each article, the abstract and title are used as the source text while the author keywords are used as target. The other four datasets contain much fewer articles, and thus used to test transferability of our model. We report our model’s performance on the present-keyphrase portion of the KP20K dataset in Table 2.4 To compare with previous works, we provide compute F1@5 and F1@10 scores. The new proposed F1@O metric indicates consistent ranking with F1@5/10 for most cases. Due to its target number sensitivity, we find that its value is closer to F1@5 for KP20K and KR A P I V I N where average target keyphrases is less and closer to F1@10 for the other three datasets. 4We show experiment results on absent data in Appendix B. 
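To ground these comparisons, the sketch below computes F1@k as defined in Eq. (5), together with the proposed variants F1@O (k set to the number of gold phrases) and F1@M (all predictions kept). It assumes phrases are already lowercased and stemmed as in Appendix E; it is a sketch, not the released evaluation script.

```python
def f1_at_k(preds, gold, k=None):
    """Precision/recall/F1 over the top-k predictions (Eq. 5).

    preds: ranked list of unique predicted keyphrases (normalized strings)
    gold:  collection of ground-truth keyphrases (normalized strings)
    k:     cutoff; None means keep all predictions (no truncation)
    """
    topk = preds if k is None else preds[:k]
    correct = len(set(topk) & set(gold))
    p = correct / len(topk) if topk else 0.0   # denominator is min(k, #pred)
    r = correct / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def evaluate(preds, gold):
    return {
        "F1@5":  f1_at_k(preds, gold, k=5),
        "F1@10": f1_at_k(preds, gold, k=10),
        "F1@O":  f1_at_k(preds, gold, k=len(gold)),  # oracle keyphrase count
        "F1@M":  f1_at_k(preds, gold, k=None),       # all predicted keyphrases
    }

# Hypothetical usage:
# evaluate(["keyphrase generation", "beam search", "neural networks"],
#          {"keyphrase generation", "neural networks"})
```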
KP20K STACKEX Model F1@O F1@M F1@O F1@M Greedy Search CatSeq 33.1 32.4 59.2 56.3 CatSeqD 33.4 33.9 59.6 59.3 Top Ranked Sequence in Beam Search CatSeq 24.3 25.1 52.4 52.7 CatSeqD 31.9 33.4 56.5 57.0 Table 4: F1@O and F1@M when generating variable number of keyphrases (self-terminating decoding). From the result we can see that our CatSeqD outperform existing abstractive models on most of the datasets. Our implemented CopyRNN* achieves better or comparable performance against the original model, and on NUS and SemEval the advantage is more salient. As for the proposed models, both CatSeq and CatSeqD yield comparable results to CopyRNN, indicating that ONE2SEQ paradigm can work well as an alternative option for the keyphrase generation task. CatSeqD outperforms CatSeq on all metrics, suggesting the semantic coverage and orthogonal regularization help the model to generate higher quality keyphrases and achieve better generalizability. To our surprise, on the metric F1@10 for KP20K and KRAPIVIN (average number of keyphrases is only 5), where high-recall models like CopyRNN are more favored, CatSeqD is still able to outperform ONE2ONE baselines, indicating that the proposed mechanisms for diverse generation are effective. 5.2 Experiments on The STACKEX Dataset Inspired by the StackLite tag recommendation task on Kaggle, we build a new benchmark based on the public StackExchange data5. We use questions with titles as source, and user-assigned tags as target keyphrases. We provide details regarding our data collection in Appendix C. Since oftentimes the questions on StackExchange contain less information than in scientific publications, there are fewer keyphrases per data point in STACKEX (statistics are shown in Table 1). Furthermore, StackExchange uses a tag recommendation system that suggests topic-relevant tags to users while submitting questions; therefore, we are more likely to see general terminology such as 5https://archive.org/details/stackexchange, we choose 19 computer science related topics from Oct. 2017 dump. 7968 Model KP20K Inspec Krapivin NUS SemEval CatSeq 31.9 30.7 32.3 38.3 31.0 + Orth. Reg. 31.1 29.3 31.0 36.5 29.5 + Sem. Cov. 32.9 32.1 34.5 40.2 32.9 CatSeqD 35.7 33.1 37.1 40.6 35.7 Table 5: Ablation study with F1@O scores on five scientific publication datasets. Linux and Java6. This characteristic challenges models with respect to their ability to distill major topics of a question rather than selecting specific snippets from the text. We report our models’ performance on ST A C KEX in Table 3. Results show CatSeqD performs the best in general; on the absent-keyphrase generation tasks, it outperforms CatSeq by a large margin. 5.3 Generating Variable Number Keyphrases One key advantage of our proposed model is the capability of predicting the number of keyphrases conditioned on the given source text. We thus conduct a set of experiments on KP20K and STACKEX present keyphrase generation tasks, as shown in Table 4, to study such behavior. We adopt the selfterminating decoding strategy (Section 3.3), and use both F1@O and F1@M (Section 4) to evaluate. In these experiments, we use beam search as in most Natural Language Generation (NLG) tasks, i.e., only use the top ranked prediction sequence as output. We compare the results with greedy search. Since no existing model is capable of generating variable number of keyphrases, in this subsection we only report performance on such setting from CatSeq and CatSeqD. 
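For concreteness, the following sketch shows how a single delimiter-joined output sequence is turned into a variable-size, de-duplicated keyphrase list under self-terminating decoding (Section 3.3.2). The delimiter and end tokens follow Figure 1; the cleanup heuristics are illustrative rather than the exact post-processing used in our experiments.

```python
def split_keyphrases(decoded_tokens, sep="<sep>", eos="</s>"):
    """Convert one delimiter-joined output sequence into an ordered,
    de-duplicated list of keyphrases (self-terminating decoding)."""
    phrases, current = [], []
    for tok in decoded_tokens:
        if tok == eos:            # the model has decided it produced enough phrases
            break
        if tok == sep:            # delimiter closes the current phrase
            if current:
                phrases.append(" ".join(current))
            current = []
        else:
            current.append(tok)
    if current:                   # flush a trailing phrase with no closing delimiter
        phrases.append(" ".join(current))
    seen, unique = set(), []
    for p in phrases:             # keep only the first occurrence, preserving order
        if p not in seen:
            seen.add(p)
            unique.append(p)
    return unique

# e.g. split_keyphrases("linear PCA <sep> convex function <sep> SVD </s>".split())
# is expected to return ["linear PCA", "convex function", "SVD"]
```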
From Table 4 we observe that in the variable number generation setting, greedy search outperforms beam search consistently. This may because beam search tends to generate short and similar sequences. We can also see the resulting F1@O scores are generally lower than results reported in previous subsections, this suggests an over-generation decoding strategy may still benefit from achieving higher recall. 6 Analysis and Discussion 6.1 Ablation Study We conduct an ablation experiment to study the effects of orthogonal regularization and semantic coverage mechanism on CatSeq. As shown in Table 5, semantic coverage provides significant boost to CatSeq’s performance on all datasets. Orthogonal regularization hurts performance when is solely applied to CatSeq model. Interestingly, when both components are enabled (CatSeqD), the model outperforms CatSeq by a noticeable margin on all datasets, this suggests the two components help keyphrase generation in a synergistic way. One future direction is to apply orthogonal regularization directly on target encoder, since the regularizer can potentially diversify target representations at phrase level, which may further encourage diverse keyphrase generation in decoder. 6.2 Visualizing Diversified Generation To verify our assumption that target encoding and orthogonal regularization help to boost the diversity of generated sequences, we use two metrics, one quantitative and one qualitative, to measure diversity of generation. First, we simply calculate the average unique predicted phrases produced by both CatSeq and CatSeqD in experiments shown in Section 5.1 (beam size is 50). The resulting numbers are 20.38 and 89.70 for CatSeq and CatSeqD respectively. Second, from the model running on the KP20K validation set, we randomly sample 2000 decoder hidden states at k steps following a delimiter (k = 1, 2, 3) and apply an unsupervised clustering method (t-SNE (van der Maaten and Hinton, 2008)) on them. From the Figure 2 we can see that hidden states sampled from CatSeqD are easier to cluster while hidden states sampled from CatSeq yield one mass of vectors with no obvious distinct clusters. Results on both metrics suggest target encoding and orthogonal regularization indeed help diversifying generation of our model. 6.3 Qualitative Analysis To illustrate the difference of predictions between our proposed models, we show an example chosen from the KP20K validation set in Appendix F. In this example there are 29 ground truth phrases. Neither of the models is able to generate all of the 6One example is shown in Appendix F. 7969 Figure 2: t-SNE results on decoder hidden states. Upper row: CatSeq; lower row: CatSeqD; column k shows hidden states sampled from tokens at k steps following a delimiter. keyphrases, but it is obvious that the predictions from CatSeq all start with “test”, while predictions from CatSeqD are diverse. This to some extent verifies our assumption that without the target encoder and orthogonal regularization, decoder states following delimiters are less diverse. 7 Conclusion and Future Work We propose a recurrent generative model that sequentially generates multiple keyphrases, with two extra modules that enhance generation diversity. We propose new metrics to evaluate keyphrase generation. Our model shows competitive performance on a set of keyphrase generation datasets, including one introduced in this work. 
In future work, we plan to investigate how target phrase order affects the generation behavior, and further explore set generation in an order invariant fashion. Acknowledgments This work is supported by the National Science Foundation under grant No. 1525186. This research was also supported in part by the University of Pittsburgh Center for Research Computing through the resources provided. The authors thank the anonymous ACL reviewers for their helpful feedback and suggestions. References Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., and Erhan, D. (2016). Domain separation networks. CoRR, abs/1608.06019. Chan, H. P., Chen, W., Wang, L., and King, I. (2019). Neural keyphrase generation via reinforcement learning with adaptive rewards. arXiv preprint arXiv:1906.04106. Chen, J., Zhang, X., Wu, Y., Yan, Z., and Li, Z. (2018a). Keyphrase generation with correlation constraints. CoRR, abs/1808.07185. Chen, W., Chan, H. P., Li, P., Bing, L., and King, I. (2019a). An integrated approach for keyphrase generation via exploring the power of retrieval and extraction. arXiv preprint arXiv:1904.03454. Chen, W., Gao, Y., Zhang, J., King, I., and Lyu, M. R. (2018b). Title-guided encoding for keyphrase generation. CoRR, abs/1808.08575. Chen, W., Gao, Y., Zhang, J., King, I., and Lyu, M. R. (2019b). Title-guided encoding for keyphrase generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6268–6275. Cho, K., Van Merri¨enboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Du, X., Shao, J., and Cardie, C. (2017). Learning to ask: Neural question generation for reading comprehension. CoRR, abs/1705.00106. Forster, K. I. and Bednall, E. S. (1976). Terminating and exhaustive search in lexical access. Memory & Cognition, 4(1):53–61. Gollapalli, S. D., Li, X., and Yang, P. (2017). Incorporating expert knowledge into keyphrase extraction. In AAAI, pages 3180–3187. AAAI Press. Gu, J., Lu, Z., Li, H., and Li, V. O. K. (2016). Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393. G¨ulc¸ehre, C¸ ., Ahn, S., Nallapati, R., Zhou, B., and Bengio, Y. (2016). Pointing the unknown words. CoRR, abs/1603.08148. Hasan, K. S. and Ng, V. (2010). Conundrums in unsupervised keyphrase extraction: making sense of the state-of-the-art. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 365–373. Association for Computational Linguistics. Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. (2018). Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670. Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 7970 Klein, G., Kim, Y., Deng, Y., Senellart, J., and Rush, A. M. (2017). OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Le, T. T. N., Nguyen, M. L., and Shimazu, A. (2016). Unsupervised keyphrase extraction: Introducing new kinds of words to keyphrases. 29th Australasian Joint Conference, Hobart, TAS, Australia, December 5-8, 2016. Liu, Z., Chen, X., Zheng, Y., and Sun, M. (2011). 
Automatic keyphrase extraction by bridging vocabulary gap. ACL. Logeswaran, L. and Lee, H. (2018). An efficient framework for learning sentence representations. CoRR, abs/1803.02893. Lopez, P. and Romary, L. (2010). Humb: Automatic key term extraction from scientific articles in grobidp. the 5th International Workshop on Semantic Evaluation. Luan, Y., Ostendorf, M., and Hajishirzi, H. (2017). Scientific information extraction with semi-supervised neural tagging. CoRR, abs/1708.06075. Medelyan, O., Frank, E., and Witten, I. H. (2009). Human-competitive tagging using automatic keyphrase extraction. EMNLP. Meng, R., Zhao, S., Han, S., He, D., Brusilovsky, P., and Chi, Y. (2017). Deep keyphrase generation. In ACL. Mihalcea, R. and Tarau, P. (2004). Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in pytorch. In NIPS-W. Paulus, R., Xiong, C., and Socher, R. (2017). A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304. Reddy, D. R. et al. (1977). Speech understanding systems: A summary of results of the five-year research effort. Department of Computer Science. CamegieMell University, Pittsburgh, PA, 17. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. Subramanian, S., Wang, T., Yuan, X., and Trischler, A. (2017). Neural models for key phrase detection and question generation. CoRR, abs/1706.04560. Sun, Z., Tang, J., Du, P., Deng, Z.-H., and Nie, J.-Y. (2019). Divgraphpointer: A graph pointer network for extracting diverse keyphrases. arXiv preprint arXiv:1905.07689. Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In NIPS. van den Oord, A., Li, Y., and Vinyals, O. (2018). Representation learning with contrastive predictive coding. CoRR, abs/1807.03748. van der Maaten, L. and Hinton, G. (2008). Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579–2605. Van Zandt, T. and Townsend, J. T. (1993). Selfterminating versus exhaustive processes in rapid visual and memory search: An evaluative review. Perception & Psychophysics, 53(5):563–580. Wan, X. and Xiao, J. (2008). Single document keyphrase extraction using neighborhood knowledge. AAAI. Wang, M., Zhao, B., and Huang, Y. (2016). Ptr: Phrasebased topical ranking for automatic keyphrase extraction in scientific publications. ICONIP 2016. Witten, I. H., Paynter, G. W., Frank, E., Gutwin, C., and Nevill-Manning, C. G. (1999). Kea: Practical automatic keyphrase extraction. In DL ’99. Xu, Q., Zhang, J., Qu, L., Xie, L., and Nock, R. (2018). D-page: Diverse paraphrase generation. CoRR. Yang, Z., Hu, J., Salakhutdinov, R., and Cohen, W. W. (2017). Semi-supervised qa with generative domainadaptive nets. In ACL. Ye, H. and Wang, L. (2018). Semi-supervised learning for neural keyphrase generation. CoRR, abs/1808.06773. Yuan, X., Wang, T., G¨ulc¸ehre, C¸ ., Sordoni, A., Bachman, P., Subramanian, S., Zhang, S., and Trischler, A. (2017). Machine comprehension by text-to-text neural question generation. CoRR, abs/1705.02012. Zhang, Q., Wang, Y., Gong, Y., and Huang, X. (2016). Keyphrase extraction using deep recurrent neural networks on twitter. In EMNLP. Zhang, X. and Lapata, M. 
(2017). Sentence simplification with deep reinforcement learning. CoRR, abs/1703.10931. Zhao, J. and Zhang, Y. (2019). Incorporating linguistic constraints into keyphrase generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5224–5233. Zhou, Q., Yang, N., Wei, F., and Zhou, M. (2017). Selective encoding for abstractive sentence summarization. CoRR, abs/1704.07073. 7971 A Sequence to Sequence Generation A.1 The Encoder-Decoder Model Given a source text consisting of N words w1 e, . . . , wN e , the encoder converts their corresponding embeddings x1 e, . . . , xN e into a set of N realvalued vectors he = (h1 e, . . . , hN e ) with a bidirectional GRU (Cho et al., 2014): ht e,fwd = GRUe,fwd(xt e, ht−1 e,fwd), ht e,bwd = GRUe,bwd(xt e, ht+1 e,bwd), ht e = ⟨ht e,fwd, ht e,bwd⟩. (6) Dropout (Srivastava et al., 2014) is applied to both xe and he for regularization. The decoder is a uni-directional GRU, which generates a new state ht d at each time-step t from the word embedding xt d and the recurrent state ht−1 d : ht d = GRUd(xt d, ht−1 d ).7 (7) The initial state h0 d is derived from the final encoder state hN e by applying a single-layer feedforward neural net (FNN): h0 d = Ltanh 0 (hN e ). (8) Dropout is applied to both the embeddings xd and the GRU states hd. A.2 Attentive Decoding When generating token yt, in order to better incorporate information from the source text, an attention mechanism (Bahdanau et al., 2014) is employed to infer the importance αt,i of each source word wi e given the current decoder state ht d. This importance is measured by an energy function with a 2-layer FNN: energy(ht d, hi e) = L1(Ltanh 2 (⟨ht d, hi e⟩)). (9) The output over all decoding steps t thus define a distribution over the source sequence: αt = softmax(energy(ht d, he)). (10) These attention scores are then used as weights for a refined representation of the source encodings, which is then concatenated to the decoder state ht d to derive a generative distribution pa: pa(yt) = Lsoftmax 3 (Ltanh 4 (⟨ht d, X i αt,i · hi e⟩)), (11) 7During training (with teacher forcing), wt d is the ground truth target token at previous time-step t−1; during evaluation, wt d = yt−1, is the prediction at the previous time-step. where the output size of L3 equals to the target vocabulary size. Subscript a indicates the abstractive nature of pa since it is a distribution over a prescribed vocabulary. A.3 Pointer Softmax We employ the pointer softmax (G¨ulc¸ehre et al., 2016) mechanism to switch between generating a token yt (from a vocabulary) and pointing (to a token in the source text). Specifically, the pointer softmax module computes a scalar switch st at each generation time-step and uses it to interpolate the abstractive distribution pa(yt) over the vocabulary (see Equation 11) and the extractive distribution px(yt) = αt over the source text tokens: p(yt) = st · pa(yt) + (1 −st) · px(yt), (12) where st is conditioned on both the attentionweighted source representation P i αt,i · hi e and the decoder state ht d: st = Lsigmoid 5 (tanh(L6( X i αt,i · hi e) + L7(ht d))). (13) B Experiment Results on KP20K Absent Subset Generating absent keyphrases on scientific publication datasets is a rather challenging problem. Existing studies often achieve seemingly good performance by measuring recall on tens and sometimes hundreds of keyphrases produced by exhaustive decoding with a large beam size — thus completely ignoring precision. 
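As a concrete illustration of the pointer softmax in Appendix A.3 (Eqs. (11)–(13)), here is a minimal PyTorch sketch that mixes the vocabulary distribution with the copy (attention) distribution via the switch s_t. The hidden size and vocabulary size follow Appendix D; the layer names and the bidirectional encoder width are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class PointerSoftmax(nn.Module):
    """Mix an abstractive (vocabulary) distribution with an extractive
    (copy-over-source) distribution, following Eqs. (11)-(13)."""
    def __init__(self, hid_dim=150, enc_dim=300, vocab_size=50000):
        super().__init__()
        self.gen = nn.Sequential(                        # p_a(y_t), Eq. (11)
            nn.Linear(hid_dim + enc_dim, hid_dim), nn.Tanh(),
            nn.Linear(hid_dim, vocab_size))
        self.sw_ctx = nn.Linear(enc_dim, hid_dim)        # L6 in Eq. (13)
        self.sw_dec = nn.Linear(hid_dim, hid_dim)        # L7 in Eq. (13)
        self.sw_out = nn.Linear(hid_dim, 1)              # L5 in Eq. (13)

    def forward(self, h_dec, enc_states, attn):
        # h_dec: (hid_dim,) decoder state h_d^t
        # enc_states: (src_len, enc_dim) encoder states h_e
        # attn: (src_len,) attention weights alpha_t from Eq. (10)
        ctx = (attn.unsqueeze(-1) * enc_states).sum(0)   # attention-weighted source
        p_vocab = torch.softmax(self.gen(torch.cat([h_dec, ctx], -1)), -1)
        s_t = torch.sigmoid(self.sw_out(
            torch.tanh(self.sw_ctx(ctx) + self.sw_dec(h_dec))))
        # Eq. (12): the two terms live in different spaces, so they are returned
        # separately -- one over the vocabulary, one over source positions.
        return s_t * p_vocab, (1.0 - s_t) * attn
```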
We report the models’ Recall@10/50 scores on the absent portion of five scientific paper datasets in Table 6 to be in line with previous studies. The absent keyphrase prediction highly prefers recall-oriented models, therefore CopyRNN with beam size of 200 is innately proper for this task setting. However, from the results we observe that with the help of exhaustive decoding and diverse mechanisms, CatSeqD is able to perform comparably to CopyRNN model, and it generally works better for top predictions. Even though the trend of models’ performance somewhat matches what we observe on the present data, we argue that it is hard to compare different models’ performance on such scale. We argue that STACKEX is better testbeds for absent keyphrase generation. 7972 Kp20K Inspec Krapivin NUS SemEval Model R@10 R@50 R@10 R@50 R@10 R@50 R@10 R@50 R@10 R@50 CopyRNN (Meng et al., 2017) 11.5 18.9 5.1 10.1 11.6 19.5 7.8 14.4 4.9 7.5 CopyRNN* (Meng et al., 2017) 3.3 8.7 4.0 8.3 4.0 8.1 2.4 8.1 0.5 2.6 CatSeq (ours) 6.0 6.2 2.8 2.9 7.0 7.4 3.7 3.1 2.5 2.5 CatSeqD (ours) 11.7 15.1 5.2 7.1 12.0 14.5 8.4 11.0 4.6 6.3 Table 6: Performance of absent keyphrase prediction on scientific publications datasets. Best/second-best performing score in each column is highlighted with bold/underline. C ST A C KEX Data Collection We download the public data dump from https: //archive.org/details/stackexchange, and choose 19 computer science related topics from Oct. 2017 dump. We select computer science forums (CS/AI), using “title” + “body” as source text and “tags” as the target keyphrases. After removing questions without valid tags, we collect 330,965 questions. We thus randomly select 16,000 for validation, and another 16,000 as test set. Note some questions in StackExchange forums contain large blocks of code, resulting in long texts (sometimes more than 10,000 tokens after tokenization), this is difficult for most neural models to handle. Consequently, we truncate texts to 300 tokens and 1,000 tokens for training and evaluation splits respectively. D Implementation Details Implementation details of our proposed models are as follows. In all experiments, the word embeddings are initialized with 100-dimensional random matrices. The number of hidden units in both the encoder and decoder GRU are 150. The number of hidden units in target encoder GRU is 150. The size of vocabulary is 50,000. In all experiments, we use a dropout rate of 0.1. The numbers of hidden units in MLPs described in Section 3 are as follows. During negative sampling, we randomly sample 16 samples from the same batch, thus target encoding loss in Equation 2 is a 17-way classification loss. In CatSeqD, we select both λOR and λSC in Equation 4 from [0.01, 0.03, 0.1, 0.3, 1.0] using validation sets. The selected values are listed in Table 7. We use Adam (Kingma and Ba, 2014) as the step rule for optimization. The learning rate is 1e−3. The model is implemented using PyTorch (Paszke et al., 2017) and OpenNMT (Klein et al., 2017). For exhaustive decoding, we use a beam size of 50 and a maximum sequence length of 40. Experiment Setting λOR λSC Table 2 1.0 0.03 Table 3 0.03 0.1 Table 4, KP20K Greedy 1.0 0.3 Table 4, KP20K Top Rank 1.0 0.3 Table 4, STACKEX Greedy 1.0 0.3 Table 4, STACKEX Top Rank 1.0 0.3 Table 5, CatSeq + Orth. Reg. 0.3 0.0 Table 5, CatSeq + Sem. Cov. 0.0 0.03 Table 5, CatSeqD Same as Table 2 Table 6 Same as Table 2 Table 7: Semantic coverage and orthogonal regularization coefficients. Following Meng et al. 
(2017), lowercase and stemming are performed on both the ground truth and generated keyphrases during evaluation. We leave out 2,000 data examples as validation set for both KP20K and STACKEX and use them to identify optimal checkpoints for testing. And all the scores reported in this paper are from checkpoints with best performances (F1@O) on validation set. In Section 6.2, we use the default parameters for t-SNE in sklearn (learning rate is 200.0, number of iterations is 1000, as defined in 8). E Dataset and Evaluation Details We strictly follow the data pre-processing and evaluation protocols provided by Meng et al. (2017). We pre-process both document texts and groundtruth keyphrases, including word segmentation, lowercasing and replacing all digits with symbol <digit>. In the datasets, examples with empty ground-truth keyphrases are removed. 8https://scikit-learn.org/stable/ modules/generated/sklearn.manifold.TSNE. html 7973 We evaluate models’ performance on predicting present and absent phrases separately. Specifically, we first lowercase the text, then we determine the presence of each ground-truth keyphrase by checking whether it is a sub-string of the source text (we use Porter Stemmer 9). To evaluate present phrase performance, we compute Precision/Recall/F1score (see 14-16 for formulas) for each document taking only present ground-truth keyphrases as target and ignore the absent ones. P@k = #(correct@k) min{k, #(pred)} (14) R = #(correct@k) #(target) (15) F1@k = 2 ∗P@k ∗R P@k + R (16) where #(pred) and #(target) are the number of predicted and ground-truth keyphrases respectively; and #(correct@k) is the number of correct predictions among the first k results. We report the macro-averaged scores over documents that have at least one present ground-truth phrases (corresponding to the column #PreDoc in Table 8), and similarly to the case for absent phrase evaluation. F Examples of KP20K and ST A C KEX with Model Prediction See Table 9 and Figure 3. 9https://www.nltk.org/api/nltk.stem. html#module-nltk.stem.porter 7974 Dataset #Doc #KP #PreDoc #PreKP #AbsDoc #AbsKP KP20K 19,987 105,181 19,048 66,595 16,357 38,586 IN S P E C 500 4,913 497 3,858 381 1,055 KR A P I V I N 460 2,641 437 1,485 417 1,156 NUS 211 2,461 207 1,263 195 1,198 SE MEV A L 100 1,507 100 671 99 836 ST A C KEX 16,000 43,131 13,475 24,809 10,984 18,322 DUC 308 2,484 308 2,421 38 63 Table 8: Statistics on number of documents and keyphrases of each test set. #Doc#KP denotes the number of documents/ground-truth keyphrases in the dataset. #PreKP/#AbsKP denotes the number of present/absent groundtruth keyphrases, and #PreDoc/#AbsDoc denotes the number of documents that contain at least one present/absent ground-truth keyphrase. Source Integration of a Voice Recognition System in a Social Robot Human-robot interaction Human-robot interaction ( HRI ) (1) is one of the main fields in the study and research of robotics. Within this field, dialogue systems and interaction by voice play an important role. When speaking about human-robot natural dialogue we assume that the robot has the capability to accurately recognize what the human wants to transmit verbally and even its semantic meaning, but this is not always achieved. In this article we describe the steps and requirements that we went through in order to endow the personal social robot Maggie , developed at the University Carlos III of Madrid, with the capability of understanding the natural language spoken by any human. 
We have analyzed the different possibilities offered by current software/hardware alternatives by testing them in real environments. We have obtained accurate data related to the speech recognition capabilities in different environments, using the most modern audio acquisition systems and analyzing not so typical parameters such as user age, gender, intonation, volume, and language. Finally, we propose a new model to classify recognition results as accepted or rejected, based on a second automatic speech recognition ( ASR ) opinion.This new approach takes into account the precalculated success rate in noise intervals for each recognition framework, decreasing the rate of false positives and false negatives. CatSeq voice recognition system ; social robot ; human robot interaction ; voice recognition ; hri ; speech recognition ; automatic speech recognition ; noise intervals ; noise ; human robot ; automatic speech ; natural language CatSeqD human robot interaction ; voice recognition ; social robotics ; social robots ; integration ; speech recognition ; hri ; social robot ; robotics ; voice recognition system ; recognition ; asr ; automatic speech recognition ; Ground Truth asr ; automatic speech recognition ; dialogue ; human robot interaction ; maggie ; social robot ; speech recognition ; voice recognition ; Table 9: Example from KP20K validation set, and predictions generated by CatSeq and CatSeqD models. . 7975 Figure 3: Example from the STACKEX dataset, we show the screenshot of the original web page to better present the example. Note the input to the model is the entire question (including the code), we removed the format information in the dataset. Also note on the bottom of the screenshot it shows the 3 keyphrases (in this example all absent) which we collected as the ground-truth keyphrases in our dataset. Ground Truth: javascript ; jquery ; event handling CatSeq Prediction: javascript; c#; jquery; php; linq; comparative review; ecmascript 6; asp . js; beginner; strings; performance; datetime CatSeqD Prediction: javascript ; jquery ; performance ; event handling ; array ; twitter bootstrap ; beginner ; algorithm ; indexarray ; optimization ; event programming ; datetime ; comparative review ; ecmascript 6 ; indexof ; dry ; php ; r ; java ; coffeescript ; combinatorics ; dom ; html ; event tracking ; strings ; python ; ruby ; natural language processing ; animation ; angular . js ; homework ; parameters ; jquery ui ; functional programming ; google app engine ; . net ; python 2 . 7 ; c# ; php5 ; validation ; regex ; parsing ; formatting ; hash table ; object oriented ; web scraping ; python 3 . x ; python 3 . x programming ; python 2 . net ; python 2 . 6 ; python 2 . sql ; mysql ; object oriented design ; actionscript
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7976–7986 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7976 R3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge Tuhin Chakrabarty1,2∗, Debanjan Ghosh3, Smaranda Muresan2,4 and Nanyun Peng1 1Information Sciences Institute, University of Southern California 2Department of Computer Science, Columbia University 3Educational Testing Service, 4Data Science Institute, Columbia University {tuhin.chakrabarty, smara}@columbia.edu [email protected], [email protected] Abstract We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-andedit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria. Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time. 1 Introduction Studies have shown that the use of sarcasm or verbal irony, can increase creativity on both the speakers and the addressees (Huang et al., 2015), and can serve different communicative purposes such as evoking humor and diminishing or enhancing critique (Burgers et al., 2012). Thus, developing computational models that generate sarcastic messages could impact many downstream applications, such as better conversational agents and creative or humorous content creation. While most computational work has focused on sarcasm detection (Davidov et al., 2010; Gonz´alez-Ib´a˜nez et al., 2011; Riloff et al., 2013; Ghosh et al., 2015; Joshi et al., 2015b; Muresan et al., 2016; Ghosh and Veale, 2017; Ghosh et al., 2017, 2018), research on sarcasm generation is in its infancy (Joshi et al., 2015a; Mishra et al., 2019). Sarcasm generation ∗The research was conducted when the author was at USC/ISI. Literal Input 1 I hate getting sick from fast food. GenSarc1 I love getting sick from fast food. GenSarc2 [I love getting sick from fast food.] [ Stomach ache is just an additional side effect.] Human 1 Shout out to the Mc donalds for giving me bad food and making me sick right before work in two hours. Literal Input 2 I inherited unfavorable genes from my mother. GenSarc3 I inherited great genes from my mother. GenSarc4 [I inherited great genes from my mother.] [Ugly goes down to the bone.] Human 2 Great I inherited all of my mother’s GOOD genes Table 1: Table showing a literal or non sarcastic input sentence and respective sarcastic outputs. GenSarc1 and GenSarc3 simply reverses the valence, while GenSarc2 and GenSarc4 add commonsense context to create incongruity or enhance the humorous effect. is a challenging problem since the generated utterance should have at least five characteristics (a.k.a. 
“sarcasm factors”) (Burgers et al., 2012): 1) be evaluative; 2) be based on a reversal of valence between the literal and intended meaning; 3) be based on a semantic incongruity with the context, which can include shared commonsense or world knowledge between the speaker and the addressee; 4) be aimed at some target, and 5) be relevant to the communicative situation in some way. To simplify the problem, we focus on the task of generating a sarcastic utterance starting from a non-sarcastic utterance that conveys the speaker’s intended meaning and that is evaluative. Consider the examples in Table 1. Given the literal input “I hate getting sick from fast food” or “I inherited unfavorable genes from my mother”, our task is to generate a sarcastic message that would convey this intended literal meaning. In this simplifying task, we are not concerned with the fifth characteristic, while the first and to some degree, the fourth are specified by the input (literal) utterances. 7977 Given the lack of “training” data for the sarcasm generation task, we propose a novel unsupervised approach that has three main modules guided by the above mentioned sarcasm factors: 1. Reversal of Valence: To generate sarcastic utterances that satisfy the second characteristic we identify the evaluative word and use negation or lexical antonyms to generate the sarcastic utterance by reversing the valence (Section 4.1). For example, given, “I hate getting sick from fast food” this module will generate “I love getting sick from fast food” (GenSarc1 in Table 1). 2. Retrieval of Commonsense Context: Adding commonsense context could be important to make explicit the semantic incongruity factor (e.g., GenSarc4 vs. GenSarc3 in Table 1), or could enhance the humorous effect of the generated sarcastic message (e.g., GenSarc2 vs. GenSarc1 in Table 1). We propose an approach where retrieved relevant commonsense context sentences are to be added to the generated sarcastic message. At first, we use a pre-trained language model fine-tuned on the ConceptNet (Speer et al., 2017) called COMET (Bosselut et al., 2019) to generate relevant commonsense knowledge. COMET gives us that, “inherited unfavorable genes from my mother” causes “to be ugly” or that “getting sick from fast food” causes “stomach ache” (Section 4.2.1). The derived commonsense concept is then used to retrieve relevant sentences — from a corpus — that could be added to the sentence obtained through reversal of valence (e.g., “Stomach ache is just an additional side effect” in Table 1) (Section 4.2.2). 3. Ranking of Semantic Incongruity: The previous module generates a list of candidate commonsense contexts. Next, we measure contradiction between each of these commonsense contexts and the sentence generated by the reversal of valence approach (module 1) and select the commonsense context that received the highest contradiction score. Finally, we concatenate the selected context to the sentence obtained through reversal of valence. Here, conceptually, contradiction detection is aimed to capture the semantic incongruity between the output of valence reversal and its context. Contradiction scores are obtained from a model trained on the Multi-Genre NLI Corpus (Williams et al., 2018) (Section 4.3). We test our approach on 150 non-sarcastic utterances randomly sampled from two existing data sets. We conduct human evaluation using several criteria: 1) how sarcastic is the generated message; 2) how humorous it is; 3) how creative it is; and 4) how grammatical it is. 
Evaluation via Amazon’s Mechanical Turk (MTurk) shows that our system is better 34% of the time compared to humans and 90% of the time compared to a recently published reinforced hybrid baseline (Mishra et al., 2019). We also present a thorough ablation study of several variations of our system demonstrating that incorporating more sarcasm factors (e.g., reversal of valence, commonsense context, and semantic incongruity) lead to higher quality sarcastic utterances. We make the code and data from our experiments publicly available. 1 2 Related Work 2.1 Sarcasm Generation Research on sarcasm generation is in its infancy. Joshi et al. (2015a) proposed SarcasmBot, a sarcasm generation system that implements eight rulebased sarcasm generators, each of which generates a certain type of sarcastic expression. Peled and Reichart (2017) introduced a novel task of sarcasm interpretation, defined as the generation of a nonsarcastic utterance conveying the same message as the original sarcastic one. They use supervised machine translation models for the same in presence of parallel data. However, it is impractical to assume the existence of large corpora for training supervised generative models using deep neural nets; we hence resort to unsupervised approaches. Mishra et al. (2019) employed reinforced neural seq2seq learning and information retrieval based approaches to generate sarcasm. Their models are trained using only unlabeled non-sarcastic and sarcastic opinions. They generated sarcasm as a disparity between positive sentiment context and negative situational context. We, in contrast, model sarcasm using semantic incongruity with the context which could include shared commonsense or world knowledge. 1https://github.com/tuhinjubcse/ SarcasmGeneration-ACL2020 7978 2.2 Style Transfer Prior works looked into unsupervised text style/sentiment transfer (Shen et al., 2017; Fu et al., 2017; Li et al., 2018), which transfers a sentence from one style to another without changing the content. This is relevant to the reversal of valence for sarcasm generation. However, these transformations are mainly at the lexical and syntax levels rather than pragmatic level; in contrast, sarcastic utterances often include additional information associated with the context they occur (Regel, 2009), which is beyond text style/sentiment transfer. 2.3 Use of Commonsense for Irony Detection The study of irony and sarcasm are closely related as sarcasm is defined as, “the use of verbal irony to mock someone or show contempt”. Van Hee et al. (2018) addressed the challenge of modeling implicit or prototypical sentiment in the framework of automatic irony detection. They first manually annotated stereotypical ironic situations (e.g., flight delays) and later addressed the implicit sentiment held towards such situations automatically by using both a lexico-semantic commonsense knowledge base and a data-driven method. They however used it for irony detection, while we are focused on sarcasm generation.2 3 Sarcasm Factors Used in Generation A sarcastic utterance must satisfy the sarcasm factors, i.e., the inherent characteristics of sarcasm (Attardo, 2000; Burgers et al., 2012). In this research, we leverage the use of two particular factors to generate sarcasm. One is the reversal of valence and the other is the semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the hearer. 
3.1 Reversal of Valence The first key sarcasm factor is the reversal of valence between the literal and the intended meaning (Burgers et al., 2012). Reversal of valence can be achieved in two ways: when the literal meaning of the sarcastic message is positive (e.g., “that is a great outfit” if the outfit is ugly) or when the literal 2While we do not directly model the negative intent in sarcasm, the generated output could lead to sarcastic messages rather than just ironic depending on the initial target given in the non-sarcastic message (E.g a sample generation “Our politicians have everything under control. The nation is in danger of falling into anarchy.”) meaning is negative (e.g., “that is an ugly dress” if the dress is really beautiful). Arguably, the former is more likely to appear in sarcastic utterances. As the intended meaning is generally the opposite of its literal meaning in sarcastic utterances (Gibbs, 1986), using lexical antonym of negative sentiment words or negation can be used to convert a non-sarcastic utterance to its sarcastic version. For example, given a non-sarcastic utterance “Zero visibility in fog makes driving difficult”, one could identify the evaluative negative word difficult and replace it with its antonym easy, thereby converting the utterance to the sarcastic “Zero visibility in fog makes driving easy”. Likewise, “Drunk driving should be taken seriously” can be converted to its sarcastic counterpart, “Drunk driving should not be taken seriously” by using negation. We propose a generation approach that is able to capture the reversal of valence (Section 4.1). 3.2 Semantic Incongruity The second sarcasm factor, semantic incongruity, appears between the literal evaluation and the context, as in the example “I love getting sick from fast food”, where we have semantic incongruity between the positive word “love” and the negative situation “getting sick”. However, often, the negative situation is absent from the utterance, and thus additional pragmatic inference is needed to understand the sarcastic intent. For example, the listener might miss the sarcastic intent in “zero visibility in fog makes driving easy”, where the speaker meant to convey that it can cause “accidents”. Adding “suffered three cracked ribs in an accident.” makes the sarcastic intent more explicit, while maintaining the acerbic wit of the speaker. In the next section, we propose a novel generation approach that incorporates such relevant commonsense knowledge as context for semantic incongruity (Section 4.2 and Section 4.3). 4 Unsupervised Sarcasm Generation An overview of the sarcasm generation pipeline is shown in Figure 1. In this section, we detail the three main modules that are designed to instantiate the key sarcasm factors. 4.1 Reversal of Valence As sarcasm is a type of verbal irony used to mock or convey contempt, in most sarcastic messages we encounter a positive sentiment towards a nega7979 Figure 1: Our complete pipeline for sarcasm generation. The components with highlighted background denote Reversal of Valence, Retrieval of Commonsense Context and Ranking based on Semantic Incongruity respectively tive situation (i.e., ironic criticism (Kreuz and Link, 2002)). This observation is also supported by research on sarcasm detection, particularly on social media. Hence, for our sarcasm generation task, we focus on transforming a literal utterance with negative valence into positive valence. 
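A minimal sketch of this transformation is shown below: a WordNet antonym replaces the most negative word according to SentiWordNet, and negation removal serves as the fallback; the full procedure, including how evaluative words are identified, is spelled out in the next paragraph. The heuristics here are a simplified illustration, not our exact pipeline, and they require the NLTK wordnet and sentiwordnet resources.

```python
# Requires: nltk.download("wordnet"); nltk.download("sentiwordnet")
from nltk.corpus import wordnet as wn, sentiwordnet as swn

def reverse_valence(tokens):
    """Flip the valence of a literal, negative utterance (sketch).

    1) Replace the most negative word (per SentiWordNet) with a WordNet antonym.
    2) Otherwise, drop a negation token ("not" or a word ending in "n't")."""
    def neg_score(word):
        synsets = list(swn.senti_synsets(word))
        return max((s.neg_score() for s in synsets), default=0.0)

    def antonym(word):
        for syn in wn.synsets(word):
            for lemma in syn.lemmas():
                if lemma.antonyms():
                    return lemma.antonyms()[0].name().replace("_", " ")
        return None

    most_negative = max(tokens, key=neg_score, default=None)
    if most_negative and neg_score(most_negative) > 0:
        ant = antonym(most_negative)
        if ant:
            return [ant if t == most_negative else t for t in tokens]
    return [t for t in tokens
            if t.lower() != "not" and not t.lower().endswith("n't")]

# reverse_valence("zero visibility in fog makes driving difficult".split())
# is expected to yield "... makes driving easy" for the running example.
```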
To implement the reversal of valence, as highlighted in the yellow background in Figure 1, we first identify the evaluative words and replace them with their lexical antonyms using WordNet (Miller, 1995). As we expect the evaluative words to be negative words, we rely on the word level negative scores obtained from SentiWordNet (Esuli and Sebastiani, 2006). In the absence of words with negative polarity, we check if there is the negation word not or words ending with n’t and remove these words. In case there are both negative words and not (or words ending in n’t), we handle only one of them. Given the non sarcastic example “zero visibility in fog makes driving difficult” shown in Figure 1 and which we use as our running example, the reversal of valence module generates “zero visibility in fog makes driving easy”. 4.2 Retrieval of Commonsense Context As discussed before, a straightforward reversal of valence might not generate sarcastic messages that display a clear semantic incongruity, and thus, additional context is needed. We propose an approach to retrieve relevant context for the sarcastic message based on commonsense knowledge. First, we generate commonsense knowledge based on ConcepNet (e.g., “driving in zero visibility” causes “accidents”) (Section 4.2.1). Second, we retrieve candidate context sentences that contain the commonsense concept from a retrieval corpus (Section 4.2.2) and edit them for grammatical consistency with the input message (Section 4.2.3). 4.2.1 Commonsense Reasoning We extract nouns, adjectives, adverbs, and verbs from the non-sarcastic input messages and feed them as input to COMET (Bosselut et al., 2019) model to generate commonsense knowledge (highlighted in green background in Figure 1). COMET is an adaptation framework for constructing commonsense knowledge based on pre-trained language models. It initiates with a pre-trained GPT (Radford et al., 2018) model and fine-tune on commonsense knowledge tuples (in our case, ConceptNet (Speer et al., 2017)). These tuples provide COMET with the knowledge base structure and relations that must be learned, and COMET adapts the representations that the language model learned from the pre-training stage to add novel nodes to the seed knowledge graph. Our work only leverages the causes relation. For instance, from our running example, we first remove the stopwords and then extract nouns, adjectives, adverbs, and verbs including the terms zero, visibility, fog,makes driving, and difficult to feed to COMET as inputs. In turn, COMET returns the probable causes with their probability scores. For the running example, COMET returns with the highest probability that 7980 these terms may cause an accident (illustrated in Figure 2). For further details regarding COMET please see Bosselut et al. (2019). 4.2.2 Retrieving Sentences Containing Commonsense Concepts Once we obtain the most probable output from COMET, the next step is to retrieve sentences containing the commonsense word or phrase from a retrieval corpus. We impose several constraints: (a) the retrieved sentences should contain the commonsense concept at the beginning or at the end; (b) sentence length should be less than twice the number of tokens in the non-sarcastic input to keep a consistency between the length of the non-sarcastic input and its sarcastic version. If none of the commonsense phrase is present in the retrieval corpus, we retrieve sentences containing the nouns within the top most phrase. 
For example, if COMET yields microwave burger awful causes the phrase food to spoil, and this phrase does not appear in any sentence in the retrieval corpus, we search for food and later replace it in the retrieved sentence with food to spoil. COMET often returns output with common phrases such as you to be, you to get, person will be, you have which we also removed while keeping the main content word (i.e the commonsense concept) We use Sentencedict.com, an online sentence dictionary as the retrieval corpus, where one can find high quality sentences for almost every word obeying the above constraints. 3 4.2.3 Grammatical Consistency We first check whether the retrieved sentences are consistent with the non-sarcastic input in terms of the pronouns. If the pronouns are mismatched, then we modify the pronoun of the retrieved sentence to match the pronoun of the non-sarcastic input. In case, the non-sarcastic input does not have any pronoun, but the retrieved sentence does, we simply change that pronoun to “I”. For example, if the non-sarcastic input sentence is “Ignoring texts is literally the worst part of communication.” and the retrieved commonsense sentence is “He has never suffered the torment of rejection.”, we modify the retrieved sentence to “I have never suffered the torment of rejection.” to have consistency among the pronoun use. After correcting the pronouns and proper names (in the same way as pronoun correction), we feed the corrected sentences into the Neural Grammatical Error Corrections System 3https://sentencedict.com/ (Zhao et al., 2019) to correct any pronoun or gender specific errors introduced by the replacements. 4.3 Ranking for Semantic Incongruity After the grammatical error correction, the next step is to select the best context sentence from the retrieved results. Since we expect the context sentences to be incongruous with the sentence generated by the reversal of valence approach (Section 4.1), we rank the context sentences by semantic incongruity scores and select the best candidate. We frame the problem of semantic incongruity based on the Natural Language Inference (NLI) (Bowman et al., 2015) task. The Multi-Genre NLI (Williams et al., 2018) covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization, making it an ideal choice as our NLI Dataset. We first fine-tune RoBERTa-large (Liu et al., 2019), a state-of-the-art pre-trained language model for a 3-way classification (i.e., contradiction, entailment, and neutral) by training on the Multi-NLI dataset. Next, for each retrieved sentence, we treat it as the premise and the sentence generated by the reversal of valence as the hypothesis, and thus, obtain a contradiction score from the trained model. Finally, the scores obtained for the contradiction class are used as a proxy for the degree of semantic incongruity and we select the context with the highest score. Figure 1 shows the region with light purple background as our incongruity ranking module. 4.4 Implementation Details We use the pre-trained COMET model 4 for commonsense reasoning with a greedy decoding of five to generate a commonsense phrase and return the topmost that has no lexical overlap with the input. If the generated phrase contains stopwords in the beginning we remove them. For incorporating semantic incongruity, we use the RoBERTalarge model with 355M parameters and fine-tune on MNLI. 
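To make the ranking step concrete, the sketch below scores each retrieved context sentence (premise) against the valence-reversed sentence (hypothesis) with an MNLI classifier and orders the candidates by contradiction probability. The checkpoint name roberta-large-mnli is an assumption standing in for the RoBERTa-large model fine-tuned on MultiNLI described above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"   # assumption: off-the-shelf stand-in for our fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

# Index of the "contradiction" class, read from the model config rather than hard-coded.
CONTRA = [i for i, name in model.config.id2label.items()
          if name.lower() == "contradiction"][0]

def rank_by_incongruity(reversed_sentence, candidate_contexts):
    """Return candidate contexts sorted by contradiction probability (highest first)."""
    scored = []
    for context in candidate_contexts:
        # premise = retrieved context, hypothesis = valence-reversed sentence
        enc = tokenizer(context, reversed_sentence,
                        return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**enc).logits.softmax(dim=-1)[0]
        scored.append((probs[CONTRA].item(), context))
    return [c for _, c in sorted(scored, reverse=True)]

# Example, using the running example from Figure 1:
# rank_by_incongruity("zero visibility in fog makes driving easy",
#                     ["I suffered three cracked ribs in an accident.",
#                      "The accident was caused by fog."])
```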
For the grammatical error correction model, we use an open-source pre-trained model (https://github.com/zhawe01/fairseq-gec).
5 Experimental Setup
5.1 Dataset
Ghosh et al. (2020) released a dataset of 4,762 pairs of speakers' sarcastic messages and hearers' interpretations by conducting a crowdsourcing experiment.
Figure 2: Model predictions from COMET. The edges are sorted by probability.
Peled and Reichart (2017) introduced a dataset of 3,000 sarcastic tweets, each interpreted by five human judges, and presented the novel task of sarcasm interpretation. Both datasets were collected using the hashtag #sarcasm from Twitter. We merge these two datasets and choose non-sarcastic utterances no longer than 15 words. For each literal non-sarcastic utterance we also keep the corresponding gold sarcastic message, which is useful for evaluation and comparison purposes. We randomly select 150 utterances as part of the test set (i.e., five times more than the size of the test data in Mishra et al. (2019)), while ensuring that such utterances do not have high lexical overlap. We impose this constraint to evaluate how our method(s) deal with diverse data.
5.2 Systems for Experiment
Here, we benchmark the quality of the generated sarcastic messages by comparing multiple systems.
1. Full Model (FM): This model consists of all three modules, aimed at capturing reversal of valence, commonsense context, and semantic incongruity, respectively.
2. Reversal of Valence (RV): This model relies only on the reversal of valence component.
3. No Reversal of Valence (NoRV): This model only retrieves commonsense context sentences and ranks them based on semantic incongruity.
4. No Semantic Incongruity (NSI): This model relies only on the reversal of valence and retrieval of commonsense context, without ranking based on semantic incongruity. A randomly selected retrieved sentence is used.
5. MTS2019: We make use of the model released by Mishra et al. (2019) (https://github.com/TarunTater/sarcasm_generation), as it is the state-of-the-art sarcasm generation system.
6. Human (Gold) Sarcasm: As described in Section 5.1, we have gold sarcasm created by humans for every non-sarcastic utterance.
5.3 Evaluation Criteria
BLEU (Papineni et al., 2002) is one of the most widely used automatic evaluation metrics for generation tasks such as machine translation. However, for creative text generation, it is not ideal to expect significant n-gram overlaps between the machine-generated and the gold-standard utterances. Hence, we performed a human evaluation. We evaluate a total of 900 generated utterances, since our ablation study consisted of six different systems with 150 utterances each. Sarcasm is often linked with intelligence, creativity, and wit; thus we propose a set of 4 criteria to evaluate the generated output: (1) Creativity ("How creative are the utterances?"), (2) Sarcasticness ("How sarcastic are the utterances?"), (3) Humour ("How funny are the sentences?") (Skalicky and Crossley, 2018), and (4) Grammaticality ("How grammatical are the sentences?"). We design an MTurk task where Turkers were asked to rate outputs from all six systems. Each Turker was given the non-sarcastic utterance as well as a group of sarcastic utterances generated by all six systems (randomly shuffled). Each criterion was rated on a scale from 1 (not at all) to 5 (very). Finally, each utterance was rated by three individual Turkers.
55, 59, 66, and 60 Turkers attempted the HITs (inter-annotator agreement of 0.59, 0.53, 0.47, and 0.66 for the tasks on creativity, sarcasticness, humour, and grammaticality, respectively, using Spearman's correlation coefficient).

System | Sarcasticness | Creativity | Humor | Grammaticality
State-of-the-art (Mishra et al., 2019) | 1.63 | 1.60 | 1.50 | 1.46
Human Generated | 3.57 | 3.16 | 3.18 | 3.98
Reversal of Valence (RV) | 3.00 | 2.80 | 2.72 | 4.29
No Reversal of Valence (NoRV) | 1.79 | 2.28 | 2.09 | 3.91
No Semantic Incongruity (NSI) | 3.04 | 2.99 | 2.90 | 3.68
Full Model (FM) | 3.23* | 3.24 | 3.08* | 3.69
Table 2: Average scores for generated sarcasm from all systems as judged by the Turkers. The scale ranges from 1 (not at all) to 5 (very). For creativity and grammaticality, our models are comparable to human annotation and significantly better than the state-of-the-art (p < 0.001). For sarcasticness and humor, the full model is ranked 2nd by a small margin against the human-generated message (denoted by *).

Aspect | FM vs Human win% | FM vs Human lose% | FM vs MTS2019 win% | FM vs MTS2019 lose%
Sarcasticness | 34.0 | 55.3 | 90.0 | 6.0
Creativity | 48.0 | 36.0 | 95.3 | 4.0
Humor | 40.6 | 48.0 | 90.0 | 4.0
Grammaticality | 26.6 | 56.6 | 98.0 | 1.3
Table 3: Pairwise comparison between the full model (FM) and human-generated sarcasm, and between the full model (FM) and the state-of-the-art model in Mishra et al. (2019). Win % (lose %) is the percentage of cases in which FM gets a higher (lower) average score compared to the other method over the 150 human-rated sentences. The rest are ties.

6 Experimental Results
6.1 Quantitative Scores
Table 2 presents the scores for the above-mentioned criteria of the different systems, averaged over the 150 test utterances. Our full model as well as the variations that ablated some components improve over the state-of-the-art (Mishra et al., 2019) on all the criteria. The ablation in Table 2 shows that our full model is superior to the individual modules in terms of sarcasticness, creativity, and humor. For grammaticality, we observe that the Turkers scored shorter sentences higher (e.g., RV), which also explains why the NoRV model received a higher score than the full model. NoRV otherwise performed worse than all the other variations. In terms of creativity, our full model attains the highest average scores over all the other models, including sarcastic utterances composed by humans. For grammaticality, the reversal of valence model is the best, even better than human-generated ones. The performance of the full model is the second best in terms of sarcasticness and humor, only slightly worse than human-generated sarcasm, showing the effectiveness of our approach that captures various factors of sarcasm.
Figure 3: Pie chart comparing the success rate of all the variations of our model.
6.2 Pairwise game between Full Model, State-of-the-art and Humans
Table 3 displays the pairwise comparisons between the full model (FM) and human-generated sarcasm, and between FM and Mishra et al. (2019), respectively. Given a pair of inputs, we decide win/lose/tie by comparing the average scores (over three Turkers) of both outputs. We see that FM dominates Mishra et al. (2019) on all the metrics and human-generated sarcasm on the creativity metric. For sarcasticness, although humans are better, the FM model still has a 34% winning rate.
6.3 Ablation Study
We focus our ablation study on the metric of sarcasticness, as we consider it the main criterion for the success of generating sarcasm.
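Both the pairwise results above and the ablation comparison below are decided by comparing per-utterance average Turker scores; a minimal sketch of that computation follows, where the data layout is assumed rather than taken from the released code.

```python
def pairwise_comparison(scores_a, scores_b):
    """scores_a, scores_b: average scores (over three Turkers), one per test utterance."""
    assert len(scores_a) == len(scores_b)
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    ties = len(scores_a) - wins - losses
    n = len(scores_a)
    return 100.0 * wins / n, 100.0 * losses / n, 100.0 * ties / n

# e.g., for one criterion:
# win_pct, lose_pct, tie_pct = pairwise_comparison(fm_sarcasticness, human_sarcasticness)
```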
As shown in Figure 3, our best model (FM) outperforms the individual ablation modules. We filtered out 60 examples from the 150 with no ties. The ablation component employing just Reversal of Valence is second best for sarcasticness according to Figure 3. Further, to understand the extent to which ranking the retrieved sentences based on the degree of incongruity helped generate better sarcasm, we took the outputs from FM and NSI for comparison. Out of the 150 utterances, 119 times there was no tie. Our best model (FM) wins 66% of the time while the NSI model wins 34% of the cases.

Non Sarcastic: I inherited unfavorable genes from my mother.
System | Sarcasm | S | C | H | G
FM | I inherited great genes from my mother. Ugly goes down to the bone. | 5.0 | 4.0 | 3.6 | 3.6
RV | I inherited great genes from my mother. | 3.0 | 2.6 | 2.0 | 2.3
NoRV | Ugly goes down to the bone. | 3.0 | 2.6 | 3.0 | 4.0
NSI | I inherited great genes from my mother. She makes me feel dowdy and ugly. | 2.6 | 3.6 | 3.0 | 4.0
MTS2019 | Butch tagging bullies apc seymour good temper good mentor. | 1.3 | 1.0 | 1.3 | 2.0
Human | Great I inherited all of my mother's GOOD genes | 2.3 | 4.3 | 2.0 | 2.6

Non Sarcastic: It is not fun to date a drug addict.
System | Sarcasm | S | C | H | G
FM | It is fun to date a drug addict. Spent the night in a police cell after his arrest. | 4.3 | 5.0 | 4.6 | 5.0
RV | It is fun to date a drug addict. | 5.0 | 2.3 | 2.0 | 4.6
NoRV | Spent the night in a police cell after his arrest. | 1.0 | 1.0 | 2.0 | 2.6
NSI | It is fun to date a drug addict. The feds completely screwed up the arrest. | 3.3 | 4.3 | 2.0 | 2.6
MTS2019 | Butch is a powerful addict in gente he is an optimist great fun. | 2.6 | 2.0 | 1.0 | 1.3
Human | Dating a drug addict .. Wouldn't that be fun. | 3.0 | 1.6 | 2.6 | 4.0

Non Sarcastic: I hate getting sick from fast food.
System | Sarcasm | S | C | H | G
FM | I love getting sick from fast food. Stomach ache is just an additional side effect. | 3.3 | 3.6 | 5.0 | 3.6
RV | I love getting sick from fast food. | 3.3 | 2.6 | 3.6 | 5.0
NoRV | Stomach ache is just an additional side effect. | 1.3 | 2.6 | 3.6 | 3.3
NSI | I love getting sick from fast food. I ate too much and got a terrible stomach ache. | 2.3 | 3.3 | 4.3 | 5.0
MTS2019 | I hate love sick to ikes sword lowest **** giving stains giving stains on printers making pound accidents work bikinis in | 1.0 | 1.3 | 1.3 | 1.0
Human | Shout out to the mcdonalds for giving me bad food and making me sick right before work in two hours. | 4.0 | 4.3 | 4.0 | 4.3

Non Sarcastic: Burnt popcorn is gross.
System | Sarcasm | S | C | H | G
FM | Burnt popcorn is lovely. The smell made me want to vomit. | 4.6 | 3.0 | 3.3 | 5.0
RV | Burnt popcorn is lovely. | 4.0 | 2.0 | 3.6 | 5.0
NoRV | The smell made me want to vomit. | 1.0 | 2.0 | 3.6 | 4.6
NSI | Burnt popcorn is lovely. Hold the bag in case I vomit. | 4.3 | 2.3 | 4.3 | 5.0
MTS2019 | reggae burnt popcorn lol . | 2.3 | 1.3 | 2.0 | 1.0
Human | Gotta love the smell of burnt microwave popcorn. | 3.3 | 3.3 | 4.0 | 4.0
Table 4: Examples of generated outputs from different systems. S, C, H, G represent Sarcasticness, Creativity, Humor and Grammaticality. Text in bolded black represents the commonsense word/phrase obtained from COMET given the non-sarcastic utterance.

7 Qualitative Analysis
Table 4 demonstrates several generation outputs from different modules along with the human ratings for the different criteria. We notice that often one of our modules generates better sarcasm than humans. For instance, for the first and the second example in Table 4, all of FM, RV and NSI are better than the human-generated sarcasm. In general, the generations from the FM model are more humorous, which is also a useful criterion to evaluate sarcasm besides sarcasticness (Skalicky and Crossley, 2018).
We also observe that Turkers consistently rated generations from the FM model as more sarcastic than those from the NSI model, suggesting that there is a correlation between human scores of sarcasticness and incongruity. To support this observation, we took the contradiction scores from the RoBERTa model for both the best-ranked retrieved sentences (FM) and the randomly selected retrieved sentences (NSI), and computed the correlation between the sarcasticness scores given by the humans and these automatic contradiction scores. For the FM model we obtain a higher Pearson correlation coefficient than for NSI, suggesting the important role of incongruity for sarcasm.
7.1 Limitations
While our best model combining different sarcasm factors does outperform the systems with individual factors, there are sometimes exceptions. We notice that, in a few cases, the simple reversal of valence (RV) strategy is enough to generate sarcasm. For instance, for the literal input "It is not fun to date a drug addict", just removing the negation word leads to a full score on sarcasticness without the additional commonsense module. Future work would include building a model that can decide whether just the RV strategy is sufficient or whether we need to add additional commonsense context to it.
Although incorporating incongruity ranking is useful, there are several cases where a randomly retrieved message may obtain a better sarcasticness score. Table 5 presents such an example. Even though the retrieved message "Please stop whirling me round; it makes me feel sick." scores lower than "The very thought of it makes me feel sick." in terms of incongruity with respect to "I love being put in the hospital for dehydration", the former received a higher sarcasticness score, which suggests that the incongruity scores obtained from NLI are not perfect.

NSI | I love being put in the hospital for dehydration. Please stop whirling me round; it makes me feel sick.
FM | I love being put in the hospital for dehydration. The very thought of it makes me feel sick.
Table 5: Sarcastic generations from (FM) and (NSI) where NSI scores higher for sarcasticness.

The ordering of the commonsense context and the valence-reversed sentence is predetermined in our generation. Specifically, we always append the retrieved commonsense context after the valence-reversed output. Changing the order can sometimes make the sarcasm better and more humorous. The reason for our current ordering choice is that we always treat the valence-reversed version as the hypothesis and the retrieved commonsense sentence as the premise for the NLI model. We attempted reversing the order in preliminary experiments but received poor scores from the entailment model. In the future, we would like to generate more diverse sarcasm that is not tied to a fixed pattern. Finally, the generations are dependent on COMET, and thus their quality will be governed by the accuracy of the COMET model.
8 Conclusion
We address the problem of unsupervised sarcasm generation that models several sarcasm factors, including reversal of valence and semantic incongruity with the context. The key contribution of our approach is the modeling of commonsense knowledge in a retrieve-and-edit generation framework. A human-based evaluation based on four criteria shows that our generation approach significantly outperforms a state-of-the-art model.
Compared with human generated sarcasm, our model shows promise particularly for creativity, humor and sarcasticness, but less for grammaticality. A bigger challenge in sarcasm generation and more generally, creative text generation, is to capture the difference between creativity (novel but well-formed material) and nonsense (ill-formed material). Language models conflate the two, so developing methods that are nuanced enough to recognize this difference is key to future progress. Acknowledgments This work was supported in part by the MCS program under Cooperative Agreement N6600119-2-4032 and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank Christopher Hidey, John Kropf, Anusha Bala and Christopher Robert Kedzie for useful discussions. The authors also thank members of PLUSLab at the University Of Southern California and the anonymous reviewers for helpful comments. 7985 References Salvatore Attardo. 2000. Irony markers and functions: Towards a goal-oriented theory of irony and its processing. Rask, 12(1):3–20. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Christian Burgers, Margot Van Mulken, and Peter Jan Schellens. 2012. Verbal irony differences in usage across written genres. Journal of Language and Social Psychology, 31(3):290–310. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. CoNLL. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In LREC, volume 6, pages 417–422. Citeseer. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2017. Style transfer in text: Exploration and evaluation. In Proceedings of AAAI. Aniruddha Ghosh and Tony Veale. 2017. Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 482–491, Copenhagen, Denmark. Association for Computational Linguistics. Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversation context. Computational Linguistics, 44(4):755– 792. Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In proceedings of the 2015 conference on empirical methods in natural language processing, pages 1003–1012. Debanjan Ghosh, Elena Musi, Kartikeya Upasani, and Smaranda Muresan. 2020. Interpreting verbal irony: Linguistic strategies and the connection to the type of semantic incongruity. Proceedings of the Society for Computation in Linguistics, 3(9):76–87. 
Debanjan Ghosh, Alexander Richard Fabbri, and Smaranda Muresan. 2017. The role of conversation context for sarcasm detection in online interactions. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 186– 196, Saarbr¨ucken, Germany. Association for Computational Linguistics. Raymond W Gibbs. 1986. On the psycholinguistics of sarcasm. Journal of Experimental Psychology: General, 115(1):3. Roberto Gonz´alez-Ib´a˜nez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: A closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT ’11, pages 581–586, Stroudsburg, PA, USA. Association for Computational Linguistics. Li Huang, Francesca Gino, and Adam Galinsky. 2015. The highest form of intelligence: Sarcasm increases creativity for both expressers and recipients. Organizational Behavior and Human Decision Processes, 131. Aditya Joshi, Anoop Kunchukuttan, Pushpak Bhattacharyya, and Mark James Carman. 2015a. Sarcasmbot: An open-source sarcasm-generation module for chatbots. In WISDOM Workshop at KDD. Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015b. Harnessing context incongruity for sarcasm detection. In ACL, pages 757–762. Roger J Kreuz and Kristen E Link. 2002. Asymmetries in the use of verbal irony. Journal of Language and Social Psychology, 21(2):127–143. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865–1874. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Abhijit Mishra, Tarun Tater, and Karthik Sankaranarayanan. 2019. A modular architecture for unsupervised sarcasm generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6146–6155, Hong Kong, China. Association for Computational Linguistics. 7986 Smaranda Muresan, Roberto Gonzalez-Ibanez, Debanjan Ghosh, and Nina Wacholder. 2016. Identification of nonliteral language in social media: A case study on sarcasm. Journal of the Association for Information Science and Technology, 67(11):2725– 2737. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Lotem Peled and Roi Reichart. 2017. Sarcasm SIGN: Interpreting sarcasm with sentiment based monolingual machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1690–1700, Vancouver, Canada. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. In URL https://s3-us-west-2. amazonaws. 
com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Stefanie Regel. 2009. The comprehension of figurative language: electrophysiological evidence on the processing of irony. Ph.D. thesis, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig. Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 704–714. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830–6841. Stephen Skalicky and Scott Crossley. 2018. Linguistic features of sarcasm and metaphor production quality. In Proceedings of the Workshop on Figurative Language Processing, pages 7–16. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. Cynthia Van Hee, Els Lefever, and V´eronique Hoste. 2018. We usually dont like going to the dentist: Using common sense to detect irony on twitter. Computational Linguistics, 44(4):793–832. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7987–7998 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7987 Structural Information Preserving for Graph-to-Text Generation Linfeng Song1, Ante Wang2, Jinsong Su2⇤, Yue Zhang3, Kun Xu1, Yubin Ge4 and Dong Yu1 1Tencent AI Lab, Bellevue, WA, USA 2Xiamen University, Xiamen, China 3School of Engineering, Westlake University, China 4University of Illinois at Urbana-Champaign, USA Abstract The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs. As a crucial defect, the current state-of-the-art models may mess up or even drop the core structural information of input graphs when generating outputs. We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information. In particular, we introduce two types of autoencoding losses, each individually focusing on different aspects (a.k.a. views) of input graphs. The losses are then back-propagated to better calibrate our model via multi-task training. Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline. Our code is available at http://github.com/ Soistesimmer/AMR-multiview. 1 Introduction Many text generation tasks take graph structures as their inputs, such as Abstract Meaning Representation (AMR) (Banarescu et al., 2013), Knowledge Graph (KG) and database tables. For example, as shown in Figure 1(a), AMR-to-text generation is to generate a sentence that preserves the meaning of an input AMR graph, which is composed by a set of concepts (such as “boy” and “want-01”) and their relations (such as “:ARG0” and “:ARG1”). Similarly, as shown in Figure 1(b), KG-to-text generation is to produce a sentence representing a KG, which contains worldwide factoid information of entities (such as “Australia” and “Above the Veil”) and their relations (such as “followedBy”). Recent efforts on graph-to-text generation tasks mainly focus on how to effectively represent input graphs, so that an attention mechanism can better transfer input knowledge to the decoder when ⇤Corresponding author followedBy (a) want-01 boy eat-01 ARG1 ARG0 ARG0 girl Above the Veil Aenir Into Battle precededBy Austrilia country (b) lunch ARG2 ARG1 beautiful mod Figure 1: (a) An AMR graph meaning “The boy wants the beautiful girl to eat lunch with him.”, and (b) A knowledge graph carrying the meaning “Above the Veil is an Australian novel and the sequel to Aenir. It was followed by Into the Battle.” generating sentences. Taking AMR-to-text generation as an example, different graph neural networks (GNNs) (Beck et al., 2018; Song et al., 2018; Guo et al., 2019; Ribeiro et al., 2019) have been introduced to better represent input AMRs than a sequence-to-sequence model (Konstas et al., 2017), and later work (Zhu et al., 2019; Cai and Lam, 2019; Wang et al., 2020) showed that relationaware Transformers can achieve even better results than GNNs. These advances for encoding have largely pushed the state-of-the-art performance. Existing models are optimized by maximizing the conditional word probabilities of a reference sentence, a common signal for training language models. As a result, these models can learn to produce fluent sentences, but some crucial input concepts and relations may be messed up or even dropped. 
Taking the AMR in Figure 1(a) as an example, a model may produce “the girl wants the boy to go”, which conveys an opposite meaning to the AMR graph. In particular, this can be very likely if “the girl wants” appears much more frequent than “the boy wants” in the training corpus. This is a very important issue, because of its wide existence across many neural graph-to-text 7988 generation models, hindering the usability of these models for real-world applications (Duˇsek et al., 2018, 2019; Balakrishnan et al., 2019). A potential solution for this issue is improving the training signal to enhance preserving of structural information. However, little work has been done to explore this direction so far, probably because designing such signals is non-trivial. As a first step towards this goal, we propose to enrich the training signal with additional autoencoding losses (Rei, 2017). Standard autoencoding for graph-to-sequence tasks requires reconstructing (parsing into) input graphs, while the parsing algorithm for one type of graphs (such as knowledge graphs) may not generalize to other graph types or may not even exist. To make our approach general across different types of graphs, we propose to reconstruct different views of each input graph (rather than the original graph), where each view highlights one aspect of the graph and is easy to produce. Then through multi-task learning, the autoencoding losses of all views are back-propagated to the whole model so that the model can better follow the input semantic constraints. Specifically, we break each input graph into a set of triples to form our first view, where each triple (such as “want-01 :ARG0 boy” in Figure 1(a)) contains a pair of entities and their relation. As the next step, the alignments between graph nodes and target words are generated to ground this view into the target sentence for reconstruction. Our second view is the linearization of each input graph produced by depth-first graph traversal, and this view is reconstructed token-by-token from the last decoder state. Overall the first view highlights the local information of each triple relation, the second view focuses on the global semantic information of the entire graph. Experiments on AMR-to-text generation and WebNLG (Gardent et al., 2017) show that our graph-based multi-view autoencoding loss improves the performance of a state-of-the-art baseline by more than 2 BLEU points without introducing any parameter during decoding. Besides, human studies show that our approach is indeed beneficial for preserving more concepts and relations from input graphs. 2 Related Work Previous work for neural graph-to-text generation (Konstas et al., 2017; Song et al., 2018; Beck et al., 2018; Trisedya et al., 2018; Marcheggiani and Perez-Beltrachini, 2018; Xu et al., 2018; Cao and Clark, 2019; Damonte and Cohen, 2019; Hajdik et al., 2019; Koncel-Kedziorski et al., 2019; Hong et al., 2019; Song et al., 2019; Su et al., 2017) mainly studied how to effectively represent input graphs, and all these models are trained only with the standard language modeling loss. As the most similar one to our work, Tu et al. (2017) proposed an encoder-decoder-reconstructor model for machine translation, which is trained not only to translate each source sentence into its target reference, but also to translate the target reference back into the source text (reconstruction). Wiseman et al. (2017) extended the reconstruction loss of Tu et al. 
(2017) on table-to-text generation, where a table contains multiple records that fit into several fields. We study a more challenging topic: how to reconstruct a complex graph structure rather than a sentence or a table, and we propose two general and effective methods that reconstruct different complementary views of each input graph. Besides, we propose methods to break down the whole (graph, sentence) pair into smaller pieces of (edge, word) pairs with alignments, before training our model to reconstruct each edge given the corresponding word. On the other hand, neither of the previous efforts tried to leverage this valuable information. Our work is remotely related to previous efforts on string-to-tree neural machine translation (NMT) (Aharoni and Goldberg, 2017; Wu et al., 2017; Wang et al., 2018), which aim at generating target sentences with their syntactic trees. One major difference is that their goal is producing grammatical outputs, while ours is preserving input structures. Besides, our multi-view reconstruction framework is a detachable component on top of the decoder states for training, so no extra error propagation (for structure prediction) can be introduced. Conversely, their models generate trees together with target sentences, thus extra efforts (Wu et al., 2017) are introduced to alleviate error propagation. Finally, there exist transition-based algorithms (Nivre, 2003) to convert tree parsing into the prediction of transition actions, while we study reconstructing graphs, where there is no common parsing algorithm for all graph types. Autoencoding loss by input reconstruction was mainly adopted on sequence labeling tasks, such as named entity recognition (NER) (Rei, 2017; Liu et al., 2018a; Jia et al., 2019), simile detection (Liu et al., 2018b) and sentiment analysis (Rei and Søgaard, 2019). Since input reconstruction is not intuitively related to these tasks, the autoencoding loss only serves as an additional training signal. Different from these efforts, we leverage the autoencoding loss as a means to preserve input knowledge. Besides, we study reconstructing complex graphs, proposing a general multi-view approach for this goal.
3 Base: Structure-Aware Transformer
Formally, an input for graph-to-text generation can be represented as $G = \langle V, E \rangle$, where $V$ is the set of graph nodes and $E$ corresponds to all graph edges. Each edge $e \in E$ is a triple $(v_i, l, v_j)$, denoting a labelled relation between two connected nodes $v_i$ and $v_j$. Given a graph, we choose a recent relation-aware Transformer model (Zhu et al., 2019) as our baseline to generate the ground-truth sentence $y = (y_1, \dots, y_N)$ that contains the same meaning as the input graph. It exhibits the state-of-the-art performance for AMR-to-text generation.
3.1 Structure-aware Transformer Encoder
Similar to the standard model (Vaswani et al., 2017), the structure-aware Transformer encoder stacks multiple self-attention layers on top of an embedding layer to encode linearized graph nodes. Taking the $l$-th layer for example, it consumes the states of its preceding layer ($h^{l-1}_1 \dots h^{l-1}_N$, or the embedding layer when $l$ is 1), and its states are then updated by a weighted sum:

$h^l_i = \sum_{j \in [1..N]} \alpha_{ij} \big( h^{l-1}_j W^P + \gamma_{ij} W^{R_1} \big)$  (1)

where $\gamma_{ij}$ is the vector representation of the relation between nodes $v_i$ and $v_j$, and $W^P$ and $W^{R_1}$ are model parameters.
The weights, such as $\alpha_{ij}$, are obtained by relation-aware self-attention:

$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in [1..N]} \exp(e_{ik})}$  (2)

$e_{ij} = \frac{\big( h^{l-1}_i W^Q \big) \big( h^{l-1}_j W^K + \gamma_{ij} W^{R_2} \big)^\top}{\sqrt{d_h}}$  (3)

where $W^Q$, $W^K$ and $W^{R_2}$ are model parameters, and $d_h$ denotes the encoder-state dimension. The encoder adopts $L$ self-attention layers, and $H^L = (h^L_1 \dots h^L_{|V|})$ represents the concatenated top-layer hidden states of the encoder, which will be used in attention-based decoding.
Compared with the standard model, this encoder introduces the vectorized structural information (such as $\gamma_{ij}$) for all node pairs. Given a node pair $v_i$ and $v_j$, generating such information involves two main steps. First, a sequence of graph edge labels along the path from $v_i$ to $v_j$ is obtained, where a direction symbol is added to each label to distinguish the edge direction. For instance, the label sequence from "boy" to "girl" in Figure 1(a) is ":ARG0↑ :ARG1↓ :ARG0↓". As the next step, the label sequence is treated as a single (feature) token and represented by the corresponding embedding vector, and this vector is taken as the vectorized structural information $\gamma_{ij}$ from $v_i$ to $v_j$. Since there are a large number of features, only the most frequent 20K are kept, while the rest are mapped into a special UNK feature.¹
¹ Zhu et al. (2019) also mention other (such as CNN-based or self-attention-based) alternatives to calculate $\gamma_{ij}$. While the GPU memory consumption of these alternatives is a few times more than our baseline, ours actually shows a comparable performance.
3.2 Standard Transformer Decoder
The decoder is the same as the standard Transformer architecture, which stacks an embedding layer, multiple ($L$) self-attention layers and a linear layer with softmax activation to generate target sentences in a word-by-word manner. Each target word $y_i$ and decoder state $s_i$ are generated sequentially by the self-attention decoder:

$y_i, s_i = \mathrm{SADecoder}([H^L; s_1 \dots s_{i-1}], y_{i-1})$  (4)

where $\mathrm{SADecoder}()$ is the function of decoding one step with the self-attention-based decoder.
3.3 Training with Language Modeling Loss
As in most previous work, this model is trained with the standard language modeling loss that minimizes the negative log-likelihood of the conditional word probabilities:

$l_{base} = -\sum_{i \in [1..N]} \log p(y_i \mid y_1, \dots, y_{i-1}; G) = -\sum_{i \in [1..N]} \log p(y_i \mid s_i; \theta)$  (5)

where $\theta$ represents all model parameters.
4 Multi-View Autoencoding Losses
Figure 2: The training framework using multi-view autoencoding losses.
Figure 2 visualizes the training framework using our multi-view autoencoding losses, where the attention-based encoder-decoder model with the language modeling loss is the baseline. Our losses are produced by reconstructing the two proposed views (surrounded by the slashed and dotted boxes) of the input graph, where each view represents a different aspect of the input. With the proposed losses, we expect to better refine our model for preserving the structural information of input graphs.
4.1 Loss 1: Reconstructing Triple Relations with Biaffine Attention
Our first view breaks each input graph into a set of triples, where each triple (such as "want-01 :ARG0 boy" in Figure 1(a)) contains a pair of nodes and their labeled relation.
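Before describing how these triples are grounded, the relation-aware attention of Equations (1)-(3) can be made concrete with the minimal, single-head, unbatched sketch below; the class and variable names are ours, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class RelationAwareSelfAttention(nn.Module):
    def __init__(self, d_h):
        super().__init__()
        self.W_Q = nn.Linear(d_h, d_h, bias=False)
        self.W_K = nn.Linear(d_h, d_h, bias=False)
        self.W_P = nn.Linear(d_h, d_h, bias=False)    # value projection in Eq. (1)
        self.W_R1 = nn.Linear(d_h, d_h, bias=False)   # relation term added to values
        self.W_R2 = nn.Linear(d_h, d_h, bias=False)   # relation term added to keys
        self.d_h = d_h

    def forward(self, h, gamma):
        # h: [N, d_h] node states of the previous layer
        # gamma: [N, N, d_h] relation embeddings for every node pair
        q = self.W_Q(h)                                         # [N, d_h]
        k = self.W_K(h).unsqueeze(0) + self.W_R2(gamma)         # [N, N, d_h]
        e = (q.unsqueeze(1) * k).sum(-1) / math.sqrt(self.d_h)  # Eq. (3): [N, N]
        alpha = torch.softmax(e, dim=-1)                        # Eq. (2)
        v = self.W_P(h).unsqueeze(0) + self.W_R1(gamma)         # [N, N, d_h]
        return (alpha.unsqueeze(-1) * v).sum(dim=1)             # Eq. (1): [N, d_h]
```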
Next, we use pre-generated alignments between graph nodes and target words to ground the graph triples onto the target sentence. As illustrated in the slashed blue box of Figure 2, the result contains several labeled arcs, each connecting a word pair (such as "wants" and "boy"). While each arc represents a local relation, their combination implies the global input structure. For certain types of graphs, a node can have multiple words. To deal with this situation, we use the first word of both associated graph nodes when grounding a graph edge onto the target sentence. Next, we also connect the first word of each grounded entity with the other words of the entity in order to represent the whole-entity information in the sentence. Taking the edge "followedBy" in Figure 1(b) as an example, we first ground it onto the target sentence to connect the words "Above" and "Into". Next, we create edges with label "compound" from "Above" to the words "the" and "Veil", and from "Into" to the words "the" and "Battle", to indicate the two associated entity mentions. For many graph-to-text generation tasks, the node-to-word alignments can be easily generated from off-the-shelf toolkits. For example, in AMR-to-text generation, there have been several aligners (Pourdamghani et al., 2014; Flanigan et al., 2016; Wang and Xue, 2017; Liu et al., 2018c; Szubert et al., 2018) available for linking AMR nodes to words. For knowledge graphs, the alignments can be produced by simple rule-based matching or an entity-linking system.
The resulting structure with labeled arcs connecting word pairs resembles a dependency tree, and thus we employ a deep biaffine model (Dozat and Manning, 2017) to predict this structure from the decoder states. More specifically, the model factorizes the probability of making each arc into two parts: an unlabeled factor and a labeled one. Given the decoder states $s_1, \dots, s_N$, the representation for each word $y_i$ as the head or the modifier of any unlabeled factor is calculated by passing its hidden state $s_i$ through the corresponding multi-layer perceptrons (MLPs):

$r^{arc\text{-}h}_i = \mathrm{MLP}^{arc\text{-}head}(s_i)$  (6)
$r^{arc\text{-}m}_i = \mathrm{MLP}^{arc\text{-}mod}(s_i)$  (7)

The (unnormalized) scores for the unlabeled factors with any possible head word, given the modifier $y_i$, are calculated as:

$\phi^{arc}_i = R^{arc\text{-}h} U^a r^{arc\text{-}m}_i + R^{arc\text{-}h} v^a$  (8)

where $R^{arc\text{-}h}$ is the concatenation of all $r^{arc\text{-}h}_i$, and $U^a$ and $v^a$ are model parameters. Similarly, the representations for word $y_i$ being the head or the modifier of a labeled factor are calculated by two additional MLPs:

$r^{label\text{-}h}_i = \mathrm{MLP}^{label\text{-}head}(s_i)$  (9)
$r^{label\text{-}m}_i = \mathrm{MLP}^{label\text{-}mod}(s_i)$  (10)

and the (unnormalized) scores for all relation labels, given the head word $y_j$ and the modifier $y_i$, are calculated as:

$\phi^{label}_{i,j} = r^{label\text{-}h}_j U^l r^{label\text{-}m}_i + \big( r^{label\text{-}h}_j \oplus r^{label\text{-}m}_i \big)^\top V^l + b^l$  (11)

where $U^l$, $V^l$ and $b^l$ are model parameters. The overall conditional probability of a labeled arc with label $l$, head word $y_j$ and modifier $y_i$ is calculated by the following chain rule:

$p(y_j, l \mid y_i) = p(l \mid y_j, y_i) \cdot p(y_j \mid y_i) = \mathrm{softmax}(\phi^{label}_{i,j})_{[l]} \cdot \mathrm{softmax}(\phi^{arc}_i)_{[j]}$  (12)

where $[x]$ in the subscript represents choosing the $x$-th item from the corresponding vector.
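A minimal sketch of the biaffine scorers in Equations (6)-(12) is given below; the dimensions, initialization and names are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiaffineRelationScorer(nn.Module):
    def __init__(self, d_s, d_mlp, n_labels):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(d_s, d_mlp), nn.ReLU(),
                                 nn.Linear(d_mlp, d_mlp), nn.ReLU())
        self.arc_head, self.arc_mod = mlp(), mlp()        # Eqs. (6)-(7)
        self.label_head, self.label_mod = mlp(), mlp()    # Eqs. (9)-(10)
        self.U_a = nn.Parameter(torch.randn(d_mlp, d_mlp) * 0.01)
        self.v_a = nn.Parameter(torch.zeros(d_mlp))
        self.U_l = nn.Parameter(torch.randn(n_labels, d_mlp, d_mlp) * 0.01)
        self.V_l = nn.Parameter(torch.zeros(2 * d_mlp, n_labels))
        self.b_l = nn.Parameter(torch.zeros(n_labels))

    def arc_scores(self, s):
        """Eq. (8): score of every candidate head j for each modifier i -> [N, N]."""
        r_ah, r_am = self.arc_head(s), self.arc_mod(s)            # each [N, d_mlp]
        return r_am @ self.U_a.T @ r_ah.T + (r_ah @ self.v_a).unsqueeze(0)

    def label_scores(self, s, head_idx, mod_idx):
        """Eq. (11): label scores for given (head, modifier) index pairs -> [B, n_labels]."""
        r_lh = self.label_head(s)[head_idx]
        r_lm = self.label_mod(s)[mod_idx]
        bilinear = torch.einsum("bi,lij,bj->bl", r_lh, self.U_l, r_lm)
        linear = torch.cat([r_lh, r_lm], dim=-1) @ self.V_l
        return bilinear + linear + self.b_l

# Eq. (12): softmax each score set and multiply the probabilities of the gold
# head and gold label; summing the negative log-probabilities over the grounded
# arcs gives the loss defined next.
```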
As the final step, the loss for reconstructing this view is defined as the negative log-likelihood of all target arcs $E'$ (the grounded triples from $E$):

$l_{auto1} = \sum_{(y_j, l, y_i) \in E'} -\log p(y_j, l \mid y_i)$  (13)

4.2 Loss 2: Reconstructing Linearized Graphs with a Transformer Decoder
As a supplement to our first loss, which reconstructs the local information of each grounded triple, we introduce the second loss for predicting the whole graph as a linearized sequence. To minimize the loss of graph structural information caused by linearization, we adopt an algorithm based on depth-first traversal (Konstas et al., 2017), which inserts brackets to preserve graph scopes. One linearized AMR graph is shown in the red dotted box of Figure 2, where the node suffixes (such as "-01") representing word senses are removed. One may argue that we could directly predict the original graph so that no structural information would be lost. However, each type of graph can have its own parsing algorithm due to its unique properties (such as directed vs. undirected, rooted vs. unrooted, etc.), and such an exact prediction would hurt the generality of the proposed approach. Conversely, our solution is general, as linearization works for most types of graphs. From Figure 2 we can observe that the inserted brackets clearly indicate the original graph structure. Besides, previous work (Iyer et al., 2017; Konstas et al., 2017) has shown the effectiveness of generating linearized graphs as sequences for graph parsing, which also confirms our observation.
Given a linearized graph represented as a sequence of tokens $x_1, \dots, x_M$, where each token $x_i$ can be a graph node, an edge label or an inserted bracket, we adopt another standard Transformer decoder ($\mathrm{SADecoder}_g$) to produce the sequence:

$x_i, t_i = \mathrm{SADecoder}_g([S; t_1 \dots t_{i-1}], x_{i-1})$  (14)

where $S = (s_1 \dots s_N)$ denotes the concatenated states for the target sentence (Equation 4), and the loss for reconstructing this view is defined as the negative log-likelihood of the linearized graph:

$l_{auto2} = -\sum_{i \in [1..M]} \log p(x_i \mid t_i; \theta)$  (15)

where $\theta$ represents model parameters.
4.3 Discussion and Comparison
Our autoencoding modules function as detachable components based on the target-side decoder states, which brings two main benefits. First, our approaches are not only orthogonal to the recent advances (Li et al., 2016; Kipf and Welling, 2017; Veličković et al., 2018) on the encoder side for representing graphs, but also flexible with other decoders based on multi-layer LSTMs (Hochreiter and Schmidhuber, 1997) or GRUs (Cho et al., 2014). Second, no extra error propagation is introduced, as our approach does not affect the normal sentence-decoding process.
In addition to the different aspects the two losses focus on, each has some merits and disadvantages over the other. In terms of training speed, calculating Loss 1 can be faster than Loss 2, because predicting the triple relations can be done in parallel, while this is not feasible for generating a linearized graph. Besides, calculating Loss 1 suffers from less variance, as the triple relations are agnostic to the token order determined by the input files; conversely, graph linearization is highly sensitive to the input order. One major merit of Loss 2 is its generality, as node-to-word alignments may not be easily obtained, especially for multi-lingual tasks.
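To make the second view concrete, a minimal sketch of depth-first linearization with scope brackets is shown below, in the spirit of Konstas et al. (2017); the exact bracketing and label conventions are illustrative, and the graph layout (an adjacency map) is assumed rather than taken from the released code.

```python
def strip_sense(concept):
    """Remove word-sense suffixes such as '-01' from AMR concepts."""
    parts = concept.rsplit("-", 1)
    return parts[0] if len(parts) == 2 and parts[1].isdigit() else concept

def linearize(graph, root):
    """graph: dict mapping each node to a list of (edge_label, child) pairs."""
    tokens, visited = [], set()

    def visit(node):
        tokens.append(strip_sense(node))
        visited.add(node)
        for label, child in graph.get(node, []):
            tokens.append(label)                  # e.g., ":ARG0"
            if child in visited:                  # re-entrant node: repeat its concept
                tokens.append(strip_sense(child))
            elif graph.get(child):                # child with its own scope
                tokens.append("(")
                visit(child)
                tokens.append(")")
            else:                                 # leaf concept
                visit(child)

    visit(root)
    return tokens

# Example for Figure 1(a):
# graph = {"want-01": [(":ARG0", "boy"), (":ARG1", "eat-01")],
#          "eat-01": [(":ARG0", "girl"), (":ARG1", "lunch"), (":ARG2", "boy")],
#          "girl": [(":mod", "beautiful")]}
# linearize(graph, "want-01")
```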
4.4 Training with Autoencoding Losses
The final training signal with both proposed autoencoding losses is formalized as:

$l_{final} = l_{base} + \alpha\, l_{auto1} + \beta\, l_{auto2}$  (16)

where $\alpha$ and $\beta$ are coefficients for our proposed losses. Both coefficient values are selected by a development experiment.
5 Experiments
We study the effectiveness of our autoencoding training framework on AMR-to-text generation and KG-to-text generation. BLEU (Papineni et al., 2002) and Meteor (Denkowski and Lavie, 2014) scores are reported for comparison. Following previous work, we use the multi-bleu.perl script from Moses (http://www.statmt.org/moses/) for BLEU evaluation.
5.1 Data
AMR datasets (https://amr.isi.edu/download.html): We take LDC2015E86, which contains 16,833, 1,368 and 1,371 instances for training, development and testing, respectively. Each instance contains a sentence and an AMR graph. Following previous work, we use a standard AMR simplifier (Konstas et al., 2017) to preprocess our AMR graphs, and take the PTB-based Stanford tokenizer (https://nlp.stanford.edu/software/tokenizer.shtml) to tokenize the sentences. The node-to-word alignments are produced by the ISI aligner (Pourdamghani et al., 2014). We use this dataset for our primary experiments. We also report our numbers on LDC2017T10, a later version of the AMR dataset that has 36,521, 1,368 and 1,371 instances for training, development and testing, respectively.
WebNLG (Gardent et al., 2017): This dataset consists of 18,102 training and 871 development KG-text pairs, where each KG is a subgraph of DBpedia (https://wiki.dbpedia.org/) that can contain up to 7 relations (triples). The testset has two parts: seen, containing 971 pairs where the KG entities and relations belong to the DBpedia categories that are seen in the training data, and unseen, where the entities and relations come from unseen categories. As in most previous work, we evaluate our model on the seen part, which is also more relevant to our setup. We follow Marcheggiani and Perez-Beltrachini (2018) to preprocess the data. To obtain the alignments between a KG and a sentence, we use a method based on heuristic string matching. In more detail, we remove any abbreviations from a KG node (e.g., "New York (NY)" is changed to "New York") before finding the first phrase in the sentence that matches the longest prefix of the node. As a result, we find a match for 91% of the KG nodes.
5.2 Settings
For model hyperparameters, we follow the setting of our baseline (Zhu et al., 2019), where 6 self-attention layers are adopted with 8 heads for each layer. Both the embedding and hidden state sizes are set to 512, and the batch token-size is 4096. The embeddings are randomly initialized and updated during training. All models are trained for 300K steps using Adam (Kingma and Ba, 2014) with $\beta_1 = 0.1$. Byte-pair encoding (BPE) (Sennrich et al., 2016) with 10K operations is applied to all datasets. We use 1080Ti GPUs for experiments. For our approach, the multi-layer perceptrons for the deep biaffine classifiers (Equations 6, 7, 9 and 10) take two layers of 512 units. The Transformer decoder (Equation 14) for predicting linearized graphs takes the same embedding and hidden sizes as the baseline decoder (Equation 4).
5.3 Development Results
Figure 3: Development results on LDC2015E86.
Figure 3 shows the devset performances of using either Loss 1 (triple relations) or Loss 2 (linearized graph) under different coefficients. It shows the baseline performance when a coefficient equals 0.
There are large improvements in terms of BLEU score when increasing the coefficient of either loss from 0. These results indicate the effectiveness of our autoencoding training framework. The performance of our model with either loss slightly goes down when further increasing the coefficient. One underlying reason is that an over-large coefficient will dilute the primary signal on language modeling, which is more relevant to the BLEU metric. Particularly, we observe the highest performances when ↵and β are 0.05 and 0.15, respectively, and thus we set our coefficients using these values for the remaining experiments. 7993 Model BLEU Time LSTM (Konstas et al., 2017) 22.00 – GRN (Song et al., 2018) 23.28 – DCGCN (Guo et al., 2019) 25.70 – RA-Trans-SA (Zhu et al., 2019) 29.66 – RA-Trans-F-ours 29.11 0.25 + Loss 1 (triple relations) 30.47 0.38 + Loss 2 (linearized graph) 31.13 0.52 + Both 31.41 0.61 DCGCN (0.3M) 33.2 – GRN (2M) 33.6 – LSTM (20M) 33.8 – DCGCN (ensemble, 0.3M) 35.3 – Table 1: Main test results on LDC2015E86. Numbers such as “2M” means the number of extra silver data being used, and “ensemble” indicates model ensemble. 5.4 Main Results Table 1 shows the main comparison results with existing work for AMR-to-text generation, where “Time” represents the average time (seconds) for training one step. The first group corresponds to the reported numbers of previous models on this dataset, and their main difference is the encoder for presenting graphs: LSTM (Konstas et al., 2017) applies a multi-layer LSTM on linearized AMRs, GRN (Song et al., 2018) and DCGCN (Guo et al., 2019) adopt graph neural networks to encode original AMRs, and RA-Trans-SA is the best performing model of Zhu et al. (2019), using self attention to model the relation path for each node pair. The second group reports our systems, where the RA-Trans-F-ours baseline is our implementation of the feature-based model of Zhu et al. (2019). It shows a highly competitive performance on this dataset. Applying Loss 1 alone achieves an improvement of 1.36 BLEU points, and Loss 2 alone obtains 0.66 more points than Loss 1. One possible reason is that Loss 2, which aims to reconstruct the whole linearized graph, can provide more informative features. Using both losses, we observe roughly a 2.3-point gain in terms of BLEU, indicating that both losses are complementary. Regarding Meteor, RA-Trans-SA reports 35.45, the highest among all previously reported numbers. The RA-Trans-F-ours baseline gets 35.0 that is slightly worse than RA-Trans-SA. Applying Loss 1 or Loss 2 alone gives a number of 35.5 and 36.1, respectively. Using both losses, our approach achieves 36.2 that is better than RA-Trans-SA. Model Recall (%) RA-Trans-F-ours 78.00 + Both 85.13 Table 2: Human study for the recall of input relations on LDC2015E86. Regarding the training speed, adopting Loss 2 requires double amount of time compared with the baseline, being much slower than Loss 1. This is because the biaffine attention calculations for different word pairs are parallelizable, while it is not for producing a linearized graph. Using both losses together, we observe a moderately longer training process (1.4-times slower) than the baseline. Please note that our autoencoding framework only affects the offline training procedure, leaving the online inference process unchanged. The last group shows additional higher numbers produced by systems that use the ensemble of multiple models and/or additional silver data. 
They suffer from problems such as requiring massive computation resources and taking a long time for training. We leave exploring additional silver data and ensemble for further work. 5.5 Quantitative Human Study on Preserving Input Relation Our multi-view autoencoding framework aims at preserving input relations, thus we further conduct a quantitative human study to estimate this aspect. To this end, we first extract all interactions of a subject, a predict and an object (corresponding to the AMR fragment “pred :ARG0 subj :ARG1 obj”) from each AMR graph, and then check how many interactions are preserved by the output of a model. The reason for considering this type of interaction comes from two folds: first, they convey fundamental information forming the backbone of a sentence, and second, they can be easily extracted from graphs and evaluated by human judges. As shown in Table 2, we choose 200 AMRsentence pairs to conduct this study and compare our model with the baseline in terms of the recall number, showing the percent of preserved interactions. To determine if a sentence preserves an interaction, we ask 3 people with NLP background to make their decisions and choose the majority vote as the human judgement. Out of the 491 interactions, the baseline only preserves 78%. With our multi-view autoencoding losses, 7.13% more 7994 (r / recommend-01 :ARG0 (i / i) :ARG1 (g / go-02 :ARG0 (y / you) :purpose (s / see-01 :ARG0 y :ARG1 (p / person :ARG0-of (h / have-rel-role-91 :ARG1 y :ARG2 (d / doctor))) :mod (t / too))) :ARG2 y) Ref: i ’d recommend you go and see your doctor too . Baseline: i should go to see your doctor too . Our approach: i recommend you to go to see your doctor too . (c / country :mod (o / only) :ARG0-of (h / have-03 :ARG1 (p / policy :consist-of (t / target-01 :ARG1 (a / aircraft :ARG0-of (t2 / traffic-01 :ARG1 (d / drug))))) :time (c3 / current)) :domain (c2 / country :wiki “Colombia” :name (n / name :op1 “Colombia”))) Ref: colombia is the only country that currently has a policy of targeting drug trafficking aircraft . Baseline: colombia is the only country with drug trafficking policy . Our approach: colombia is the only country with the current policy of targets for drug trafficking aircraft . Table 3: Example system outputs. interactions are preserved, which further confirms the effectiveness of our approach. 5.6 Case Study As shown in Table 3, we further demonstrate several typical examples from our human study for better understanding how our framework helps preserve structural input information. Each example includes an input AMR, a reference sentence (Ref), the baseline output (Baseline) and the generated sentence by our approach (Our approach). For the first example, the baseline output drops the key predicate “recommend” and fails to preserve the fact that “you” is the subject of “go”. The reason can be that “I should go to” occurs frequently in the training corpus. On the other hand, the extra signals produced by our multi-view framework enhance the input semantic information, guiding our model to generate a correct sentence Model BLEU RA-Trans-F-ours + Loss 1 30.47 w/o edge label 29.39 RA-Trans-F-ours + Loss 2 31.13 w/o edge label 30.36 random linearization 31.07 Table 4: Ablation study for both views. with the exact meaning of the input AMR. The second example shows a similar situation, where the baseline generates a natural yet short sentence that drops some important information from the input graph. 
As a result of the information loss, the resulting sentence conveys an opposite meaning (“with drug trafficking policy”) to the input (“targeting drug trafficking aircraft”). This is a typical problem suffered by many neural graph-tosequence models. Our multi-view framework helps recover the correct meaning: “policy of target for drug trafficking aircraft”. 5.7 Ablation Study As shown in Table 4, we conduct an ablation study on LDC2015E86 to analyze how important each part of the input graphs is under our framework. For Loss 1, we test the situation when no edge labels are available, and as a result, we observe a large performance drop of 1.0+ BLEU points. This is quite intuitive, because edge labels carry important relational knowledge between the two connected nodes. Therefore, discarding these labels will cause loss of significant semantic information. For Loss 2, we also observe a large performance decrease when edge labels are dropped, confirming the observation for Loss 1. In addition, we study the effect of random graph linearization, where the order for picking children is random rather than following the left-to-right order at each stage of the depth-first traversal procedure. The motivation is to investigate the robustness of Loss 2 regarding input variances, as an organized input order (such as an alphabetical order for children) may not be available for certain graph-to-sequence tasks. We observe a marginal performance drop of less than 0.1 BLEU points, indicating that our approach is very robust for input variances. It is likely because different linearization results still indicate the same graph. Besides, one previous study (Konstas et al., 2017) shows a very similar observation. 7995 Model BLEU Meteor DCGCN 27.60 – RA-Trans-CNN 31.82 36.38 RA-Trans-F-ours 31.77 37.2 + Loss 1 33.98 37.5 + Loss 2 34.13 37.8 + Both 34.21 38.0 Table 5: Main test results on LDC2017T10. Model BLEU Meteor ADAPT 60.59 44.0 GCNEC 55.90 39.0 RA-Trans-F-ours 60.51 42.2 + Loss 1 61.78 43.6 + Loss 2 62.29 43.5 + Both 62.89 44.2 Table 6: Main test results on WebNLG 5.8 Main Results on LDC2017T10 Table 5 compares our results on LDC2017T10 with the highest numbers reported by single models without extra silver training data. RA-Trans-CNN is another model by Zhu et al. (2019) that adopt a convolutional neural network (LeCun et al., 1990) to model the relation path for each node pair. Again, the RA-Trans-F baseline achieves a comparable score with RA-Trans-CNN, and our approach improves the baseline by nearly 2.5 BLEU points, indicating its superiority. Regarding Meteor score, our advantage (1.62 points) over the previous state-of-the-art system on this dataset is larger than that (0.75 points) on LDC2015E86. Since LDC2017T10 has almost one time more training instances than LDC2015E86, we may conclude that the problem of dropping input information may not be effectively reduced by simply adding more supervised data, and as a result, our approach can still be effective on a larger dataset. This conclusion can also be confirmed by comparing the gains of our approach on both AMR datasets regarding BLEU score (2.3 vs 2.5). 5.9 Main Results on WebNLG Table 6 shows the comparison of our results with previous results on the WebNLG testset. ADAPT (Gardent et al., 2017) is based on the standard encoder-decoder architecture (Cho et al., 2014) with byte pair encoding (Sennrich et al., 2016), and it was the best system of the challenge. 
GCNEC (Marcheggiani and Perez-Beltrachini, 2018) is a recent model using a graph convolution network (Kipf and Welling, 2017) for encoding KGs. Our baseline shows a comparable performance with the previous state of the art. Based on this baseline, applying either loss leads to a significant improvement, and their combination brings a gain of more than 2 BLEU points. Although the baseline already achieves a very high BLEU score, yet the gains on this task are still comparable with those on AMR-to-text generation. This observation may imply that the problem of missing input structural knowledge can be ubiquitous among many graphto-text problems, and as a result, our approach can be widely helpful across many tasks. Following previous work, we also report Meteor scores, where our approach shows a gain of 2 points against the baseline and our final number is comparable with ADAPT. Similar with the gains on the BLEU metric, Loss 1 is comparable with Loss 2 regarding Meteor, and their combination is more useful than applying each own. 6 Conclusion We proposed reconstructing input graphs as autoencoding processes to encourage preserving the input semantic information for graph-to-text generation. In particular, the auxiliary losses for recovering two complementary views (triple relations and linearized graph) of input graphs are introduced, so that our model is trained to retain input structures for better generation. Our training framework is general for different graph types. Experiments on two benchmarks showed the effectiveness of our framework under both the automatic BLEU metric and human judgements. Acknowledge Both Te An and Jinsong Su were supported by the National Key R&D Program of China (No. 2019QY1803), National Natural Science Foundation of China (No. 61672440), and the Scientific Research Project of National Language Committee of China (No. YB135-49). Yue Zhang was supported by the joint research program between BriteDreams robotics and Westlake University. We thank the anonymous reviewers for their constructive suggestions. 7996 References Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Deng Cai and Wai Lam. 2019. Graph transformer for graph-to-sequence learning. In Thirty-Fourth AAAI Conference on Artificial Intelligence. Kris Cao and Stephen Clark. 2019. Factorising amr generation through syntax. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Marco Damonte and Shay B Cohen. 2019. Structural neural encoders for amr-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. In International Conference on Learning Representations. Ondˇrej Duˇsek, David M Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural language generation. In Proceedings of the 12th International Conference on Natural Language Generation. Ondˇrej Duˇsek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the e2e nlg challenge. In Proceedings of the 11th International Conference on Natural Language Generation. Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime Carbonell. 2016. Cmu at semeval-2016 task 8: Graph-based amr parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation. Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional networks for graph-to-sequence learning. Transactions of the Association for Computational Linguistics, 7. Valerie Hajdik, Jan Buys, Michael Wayne Goodman, and Emily M Bender. 2019. Neural text generation from rich semantic representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8). Xudong Hong, Ernie Chang, and Vera Demberg. 2019. Improving language generation from feature-rich tree-structured data with relational graph convolutional encoders. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 75–80. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963–973. Chen Jia, Xiaobo Liang, and Yue Zhang. 2019. Crossdomain ner using cross-domain language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations. 7997 Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. 
Text generation from knowledge graphs with graph transformers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Yann LeCun, Bernhard E Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne E Hubbard, and Lawrence D Jackel. 1990. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396–404. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2016. Gated graph sequence neural networks. In International Conference on Learning Representations. Liyuan Liu, Jingbo Shang, Xiang Ren, Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018a. Empower sequence labeling with task-aware neural language model. In Thirty-Second AAAI Conference on Artificial Intelligence. Lizhen Liu, Xiao Hu, Wei Song, Ruiji Fu, Ting Liu, and Guoping Hu. 2018b. Neural multitask learning for simile recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018c. An amr aligner tuned by transitionbased parser. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for structured data to text generation. In Proceedings of the 11th International Conference on Natural Language Generation. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Conference on Parsing Technologies. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning english strings with abstract meaning representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Marek Rei and Anders Søgaard. 2019. Jointly learning to label sentences and tokens. In Proceedings of the AAAI Conference on Artificial Intelligence. Leonardo FR Ribeiro, Claire Gardent, and Iryna Gurevych. 2019. Enhancing amr-to-text generation with dual graph representations. In Proceedings of EMNLP-IJCNLP. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using amr. Transactions of the Association for Computational Linguistics, 7:19–31. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amrto-text generation. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-based recurrent neural network encoders for neural machine translation. In Thirty-First AAAI Conference on Artificial Intelligence. Ida Szubert, Adam Lopez, and Nathan Schneider. 2018. A structured syntax-semantics interface for englishamr alignment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. Gtr-lstm: A triple encoder for sentence generation from rdf data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Thirty-First AAAI Conference on Artificial Intelligence. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 7998 Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations. Chuan Wang and Nianwen Xue. 2017. Getting the most out of amr parsing. In Proceedings of the 2017 conference on empirical methods in natural language processing. Tianming Wang, Xiaojun Wan, and Hanqi Jin. 2020. Amr-to-text generation with graph transformer. Transactions of the Association for Computational Linguistics, 8:19–33. Xinyi Wang, Hieu Pham, Pengcheng Yin, and Graham Neubig. 2018. A tree-based decoder for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263. Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li, and Ming Zhou. 2017. Sequence-to-dependency neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, and Vadim Sheinin. 2018. Sql-to-text generation with graph-to-sequence model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 931–936. Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, and Guodong Zhou. 2019. Modeling graph structure in transformer for better amr-to-text generation. In Proceedings of EMNLP-IJCNLP.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999–8009 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7999 A Joint Neural Model for Information Extraction with Global Features Ying Lin1, Heng Ji1, Fei Huang2, Lingfei Wu3 1University of Illinois at Urbana-Champaign 2Alibaba DAMO Academy 3IBM Research {yinglin8,hengji}@illinois.edu, [email protected], [email protected] Abstract Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions. For example, a VICTIM of a DIE event is likely to be a VICTIM of an ATTACK event in the same sentence. In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, ONEIE, that aims to extract the globally optimal IE result as a graph from an input sentence. ONEIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder. At the decoding stage, we incorporate global features to capture the cross-subtask and cross-instance interactions. Experiments show that adding global features improves the performance of our model and achieves new state-of-the-art on all subtasks. As ONEIE does not use any language-specific feature, we prove it can be easily applied to new languages or trained in a multilingual manner. Our code and models for English, Spanish and Chinese are publicly available for research purpose 1. 1 Introduction Information Extraction (IE) aims to extract structured information from unstructured texts. It is a complex task comprised of a wide range of subtasks, such as named, nominal, and pronominal mention extraction, entity linking, entity coreference resolution, relation extraction, event extraction, and event coreference resolution. Early efforts typically perform IE in a pipelined fashion, 1 http://blender.cs.illinois.edu/software/ oneie which leads to the error propagation problem and disallows interactions among components in the pipeline. As a solution, some researchers propose joint inference and joint modeling methods to improve local prediction (Roth and Yih, 2004; Ji and Grishman, 2005; Ji et al., 2005; Sil and Yates, 2013; Li et al., 2014; Durrett and Klein, 2014; Miwa and Sasaki, 2014; Lu and Roth, 2015; Yang and Mitchell, 2016; Kirschnick et al., 2016). Due to the success of deep learning, neural models have been widely applied to various IE subtasks (Collobert et al., 2011; Chiu and Nichols, 2016; Chen et al., 2015; Lin et al., 2016). Recently, some efforts (Wadden et al., 2019; Luan et al., 2019) revisit global inference approaches by designing neural networks with embedding features to jointly model multiple subtasks. However, these methods use separate local task-specific classifiers in the final layer and do not explicitly model the interdependencies among tasks and instances. Figure 1 shows a real example where the local argument role classifier predicts a redundant PERSON edge. The model should be able to avoid such mistakes if it is capable of learning and leveraging the fact that it is unusual for an ELECT event to have two PERSON arguments. 
PER Erdogan PER Abdullah Gul End-Position resigned Elect won person person Example: Prime Minister Abdullah Gul resigned earlier Tuesday to make way for Erdogan, who won a parliamentary seat in by-elections Sunday. person Figure 1: A typical error made by local classifiers without global constraints. To address this issue, we propose a joint neu8000 earthquake killed The 19 people and injured 300 in Kashmir region , India Identification Classification Trigger Entity Role Relation Decoding Encoding Die PER victim Injure Die PER victim Injure Die PER victim Injure ORG PER victim victim ⬇ Injure-victim-ORG ⬆ Injure-victim-PER ... ... Beam search Score vectors Information network Figure 2: An illustration of our end-to-end joint information extraction framework ONEIE at the test stage. We do not show all pairwise links for simplicity purposes. ral framework, ONEIE, to perform end-to-end IE with global constraints. As Figure 2 shows, instead of predicting separate knowledge elements using local classifiers, ONEIE aims to extract a globally optimal information network for the input sentence. When comparing candidate information networks during the decoding process, we not only consider individual label scores for each knowledge element, but evaluate cross-subtask and cross-instance interactions in the network. In this example, a graph with the INJURE-VICTIM-ORG (the VICTIM of an INJURE event is an ORG entity) structure is demoted. Experiments show that our framework achieves comparable or better results compared to the state-of-the-art end-to-end architecture (Wadden et al., 2019). To the best of our knowledge, ONEIE is the first end-to-end neural IE framework that explicitly models cross-subtask and cross-instance interdependencies and predicts the result as a unified graph instead of isolated knowledge elements. Because ONEIE does not rely on language-specific features, it can be rapidly applied to new languages. Furthermore, global features in our framework are highly explainable and can be explicitly analyzed. 2 Task Given a sentence, our ONEIE framework aims to extract an information network representation (Li et al., 2014), where entity mentions and event triggers are represented as nodes, and relations and event-argument links are represented as edges. In other words, we perform entity, relation, and event extraction within a unified framework. In this section, we will elaborate these tasks and involved terminologies. Entity Extraction aims to identify entity mentions in text and classify them into pre-defined entity types. A mention can be a name, nominal, or pronoun. For example, “Kashmir region” should be recognized as a location (LOC) named entity mention in Figure 2. Relation Extraction is the task of assigning a relation type to an ordered pair of entity mentions. For example, there is a PART-WHOLE relation between “Kashmir region” and “India”. Event Extraction entails identifying event triggers (the words or phrases that most clearly express event occurrences) and their arguments (the words or phrases for participants in those events) in unstructured texts and classifying these phrases, respectively, for their types and roles. An argument can be an entity, time expression, or value (e.g., MONEY, JOB-TITLE, CRIME). For example, in Figure 2, the word “injured” triggers an INJURE event and “300” is the VICTIM argument. We formulate the task of extracting information networks as follows. 
Given an input sentence, our goal is to predict a graph $G = (V, E)$, where $V$ and $E$ are the node and edge sets respectively. Each node $v_i = \langle a_i, b_i, l_i \rangle \in V$ represents an entity mention or event trigger, where $a_i$ and $b_i$ are the start and end word indices, and $l_i$ is the node type label. Each edge $e_{ij} = \langle i, j, l_{ij} \rangle \in E$ is represented similarly, where $i$ and $j$ denote the indices of the involved nodes. For example, in Figure 2, the trigger "injured" is represented as $\langle 7, 7, \text{INJURE} \rangle$, the entity mention "Kashmir region" is represented as $\langle 10, 11, \text{LOC} \rangle$, and the event-argument edge between them is $\langle 2, 3, \text{PLACE} \rangle$.

3 Approach

As Figure 2 illustrates, our ONEIE framework extracts the information network from a given sentence in four steps: encoding, identification, classification, and decoding. We encode the input sentence using a pre-trained BERT encoder (Devlin et al., 2019) and identify entity mentions and event triggers in the sentence. After that, we compute the type label scores for all nodes and all pairwise edges among them. During decoding, we explore possible information networks for the input sentence using beam search and return the one with the highest global score.

3.1 Encoding

Given an input sentence of $L$ words, we obtain the contextualized representation $x_i$ for each word using a pre-trained BERT encoder. If a word is split into multiple word pieces (e.g., Mondrian → Mon, ##dr, ##ian), we use the average of all piece vectors as its word representation. While previous methods typically use the output of the last layer of BERT, our preliminary study shows that enriching word representations with the output of the third-to-last layer of BERT can substantially improve performance on most subtasks.

3.2 Identification

At this stage, we identify entity mentions and event triggers in the sentence, which will act as nodes in the information network. We use a feed-forward network FFN to compute a score vector $\hat{y}_i = \text{FFN}(x_i)$ for each word, where each value in $\hat{y}_i$ represents the score for a tag in a target tag set (we use the BIO scheme, in which the prefix B- marks the beginning of a mention, I- marks a token inside a mention, and tokens not belonging to any mention are tagged O). After that, we use a conditional random field (CRF) layer to capture the dependencies between predicted tags (e.g., an I-PER tag should not follow a B-GPE tag). Similar to Chiu and Nichols (2016), we calculate the score of a tag path $\hat{z} = \{\hat{z}_1, \ldots, \hat{z}_L\}$ as

$$s(X, \hat{z}) = \sum_{i=1}^{L} \hat{y}_{i,\hat{z}_i} + \sum_{i=1}^{L+1} A_{\hat{z}_{i-1}, \hat{z}_i},$$

where $X = \{x_1, \ldots, x_L\}$ is the sequence of contextualized representations of the input, $\hat{y}_{i,\hat{z}_i}$ is the $\hat{z}_i$-th component of the score vector $\hat{y}_i$, and $A_{\hat{z}_{i-1},\hat{z}_i}$ is the $(\hat{z}_{i-1}, \hat{z}_i)$ entry of matrix $A$, which gives the transition score from tag $\hat{z}_{i-1}$ to tag $\hat{z}_i$. The weights in $A$ are learned during training. We append two special tags <start> and <end> to the tag path as $\hat{z}_0$ and $\hat{z}_{L+1}$ to denote the start and end of the sequence. At the training stage, we maximize the log-likelihood of the gold-standard tag path,

$$\log p(z|X) = s(X, z) - \log \sum_{\hat{z} \in Z} e^{s(X, \hat{z})},$$

where $Z$ is the set of all possible tag paths for a given sentence. Thus, we define the identification loss as $\mathcal{L}^{I} = -\log p(z|X)$. In our implementation, we use separate taggers to extract entity mentions and event triggers. Note that we do not use the types predicted by the taggers. Instead, we make a joint decision for all knowledge elements at the decoding stage to prevent error propagation and to exploit their interactions to improve the prediction of node types.
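To make the tag-path score above concrete, the following is a minimal NumPy sketch of the standard linear-chain CRF path score s(X, ẑ), computed from per-token emission scores and a transition matrix. The array shapes and toy values are our own illustration, not the authors' released code.

```python
import numpy as np

def tag_path_score(emissions, transitions, tags, start_id, end_id):
    """Score one tag path: sum of emission scores plus transition scores.

    emissions:   (L, T) array, emissions[i, t] = score of tag t at token i
    transitions: (T, T) array, transitions[p, q] = score of moving from tag p to tag q
    tags:        list of L predicted tag ids
    start_id, end_id: ids of the special <start> and <end> tags
    """
    # Emission term: sum over tokens of the score of the chosen tag
    emission_score = sum(emissions[i, t] for i, t in enumerate(tags))
    # Transition term: L+1 transitions, padded with <start> and <end>
    padded = [start_id] + list(tags) + [end_id]
    transition_score = sum(transitions[p, q] for p, q in zip(padded[:-1], padded[1:]))
    return emission_score + transition_score

# Tiny usage example with random scores (5 tokens, 4 real tags plus <start>/<end>).
rng = np.random.default_rng(0)
L, T = 5, 6
emissions = rng.normal(size=(L, T))
transitions = rng.normal(size=(T, T))
print(tag_path_score(emissions, transitions, tags=[1, 2, 2, 0, 3], start_id=4, end_id=5))
```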
3.3 Classification

We represent each identified node as $v_i$ by averaging its word representations. After that, we use separate task-specific feed-forward networks to calculate label scores for each node as $\hat{y}_i^t = \text{FFN}^t(v_i)$, where $t$ indicates a specific task. To obtain the label score vector for the edge between the $i$-th and $j$-th nodes, we concatenate their span representations and calculate the vector as $\hat{y}_k^t = \text{FFN}^t(v_i, v_j)$. For each task, the training objective is to minimize the following cross-entropy loss:

$$\mathcal{L}^{t} = -\frac{1}{N_t} \sum_{i=1}^{N_t} y_i^t \log \hat{y}_i^t,$$

where $y_i^t$ is the true label vector and $N_t$ is the number of instances for task $t$. If we ignore the inter-dependencies between nodes and edges, we can simply predict the label with the highest score for each knowledge element and thus generate the locally best graph $\hat{G}$. The score of $\hat{G}$ can be calculated as

$$s'(\hat{G}) = \sum_{t \in T} \sum_{i=1}^{N_t} \max \hat{y}_i^t,$$

where $T$ is the set of tasks. We refer to $s'(\hat{G})$ as the local score of $\hat{G}$.

Table 1: Global feature categories.
Role:
1. The number of entities that act as <role_i> and <role_j> arguments at the same time.
2. The number of <event type_i> events with <number> <role_j> arguments.
3. The number of occurrences of the <event type_i>, <role_j>, and <entity type_k> combination.
4. The number of events that have multiple <role_i> arguments.
5. The number of entities that act as a <role_i> argument of an <event type_j> event and a <role_k> argument of an <event type_l> event at the same time.
Relation:
6. The number of occurrences of the <entity type_i>, <entity type_j>, and <relation type_k> combination.
7. The number of occurrences of the <entity type_i> and <relation type_j> combination.
8. The number of occurrences of a <relation type_i> relation between a <role_j> argument and a <role_k> argument of the same event.
9. The number of entities that have a <relation type_i> relation with multiple entities.
10. The number of entities involved in <relation type_i> and <relation type_j> relations simultaneously.
Trigger:
11. Whether a graph contains more than one <event type_i> event.

3.4 Global Features

A limitation of local classifiers is that they are incapable of capturing inter-dependencies between knowledge elements in an information network. We consider two types of inter-dependency in our framework.

The first type is cross-subtask interactions between entities, relations, and events. Consider the following sentence: "A civilian aid worker from San Francisco was killed in an attack in Afghanistan." A local classifier may predict "San Francisco" as a VICTIM argument because an entity mention preceding "was killed" is usually the victim, despite the fact that a GPE is unlikely to be a VICTIM. To impose such constraints, we design a global feature, shown in Figure 3(a), that evaluates whether the structure DIE-VICTIM-GPE exists in a candidate graph.

The second type is cross-instance interactions between multiple event and/or relation instances in the same sentence. Take the following sentence as an example: "South Carolina boy, 9, dies during hunting trip after his father accidentally shot him on Thanksgiving Day." It can be challenging for a local classifier to predict "boy" as the VICTIM of the ATTACK event triggered by "shot" because of the long distance between these two words. However, as shown in Figure 3(b), if an entity is the VICTIM of a DIE event, it is also likely to be the VICTIM of an ATTACK event in the same sentence.
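To illustrate, a single feature of this kind reduces to a small counting function over a candidate graph. The sketch below uses a hypothetical graph representation (a node-type dictionary and role-labeled argument edges); it shows the idea behind feature category 5 in Table 1 and is not the paper's actual implementation.

```python
# Hypothetical candidate-graph representation:
#   nodes: dict node_id -> type label (e.g., "PER", "DIE", "ATTACK")
#   arg_edges: list of (trigger_id, entity_id, role) tuples

def victim_of_die_and_attack(nodes, arg_edges):
    """Count entities that are the VICTIM of both a DIE and an ATTACK event
    in the same candidate graph (cf. global feature category 5 in Table 1)."""
    die_victims = {e for t, e, r in arg_edges if r == "VICTIM" and nodes[t] == "DIE"}
    attack_victims = {e for t, e, r in arg_edges if r == "VICTIM" and nodes[t] == "ATTACK"}
    return len(die_victims & attack_victims)

# Toy example: "boy" is the VICTIM of both the DIE event ("dies") and the ATTACK event ("shot").
nodes = {"n1": "PER", "t1": "DIE", "t2": "ATTACK"}
arg_edges = [("t1", "n1", "VICTIM"), ("t2", "n1", "VICTIM")]
print(victim_of_die_and_attack(nodes, arg_edges))  # -> 1
```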
Motivated by these observations, we design a set of global feature templates (event schemas), listed in Table 1, to capture cross-subtask and cross-instance interactions; the model fills in all possible types to generate features and learns the weight of each feature during training.

[Figure 3: Examples of inter-dependencies between elements in information networks. (a) A VICTIM edge is unlikely to exist between a GPE entity and a DIE event trigger. (b) The VICTIM of a DIE event is likely to be the VICTIM of an ATTACK event in the same sentence.]

Given a graph $G$, we represent its global feature vector as $f_G = \{f_1(G), \ldots, f_M(G)\}$, where $M$ is the number of global features and $f_i(\cdot)$ is a function that evaluates a certain feature and returns a scalar. For example,

$$f_i(G) = \begin{cases} 1, & \text{if } G \text{ has multiple ATTACK events} \\ 0, & \text{otherwise.} \end{cases}$$

Next, ONEIE learns a weight vector $u \in \mathbb{R}^M$ and calculates the global feature score of $G$ as the dot product of $f_G$ and $u$. We define the global score of $G$ as the sum of its local score and its global feature score, namely

$$s(G) = s'(G) + u f_G.$$

We make the assumption that the gold-standard graph for a sentence should achieve the highest global score. Therefore, we minimize the following loss function:

$$\mathcal{L}^{G} = s(\hat{G}) - s(G),$$

where $\hat{G}$ is the graph predicted by the local classifiers and $G$ is the gold-standard graph. Finally, we optimize the following joint objective function during training:

$$\mathcal{L} = \mathcal{L}^{I} + \sum_{t \in T} \mathcal{L}^{t} + \mathcal{L}^{G}.$$

3.5 Decoding

As we have discussed, because local classifiers ignore interactions among elements in an information network, they may predict contradictory results or fail to predict difficult edges that require information from other elements. To address these issues, ONEIE makes a joint decision for all nodes and their pairwise edges to obtain the globally optimal graph. The basic idea is to calculate the global score for each candidate graph and select the one with the highest score. However, exhaustive search is infeasible in many cases, as the size of the search space grows exponentially with the number of nodes. Therefore, we design a beam search-based decoder, as Figure 4 depicts. Given a set of identified nodes $V$ and the label scores for all nodes and their pairwise links, we perform decoding with an initial beam set $B = \{K_0\}$, where $K_0$ is an order-zero graph. At each step $i$, we expand each candidate in $B$ in a node step and an edge step as follows.

Node step: We select $v_i \in V$ and define its candidate set as $V_i = \{\langle a_i, b_i, l_i^{(k)} \rangle \mid 1 \le k \le \beta_v\}$, where $l_i^{(k)}$ denotes the label with the $k$-th highest local score for $v_i$, and $\beta_v$ is a hyper-parameter that controls the number of candidate labels to consider. We update the beam set by $B \leftarrow \{G + v \mid (G, v) \in B \times V_i\}$.

Edge step: We iteratively select a previous node $v_j \in V$, $j < i$, and add possible edges between $v_j$ and $v_i$. Note that if $v_i$ is a trigger, we skip $v_j$ if it is also a trigger. At each iteration, we construct a candidate edge set $E_{ij} = \{\langle j, i, l_{ij}^{(k)} \rangle \mid 1 \le k \le \beta_e\}$, where $l_{ij}^{(k)}$ is the label with the $k$-th highest score for $e_{ij}$ and $\beta_e$ is a threshold on the number of candidate labels. We then update the beam set by $B \leftarrow \{G + e \mid (G, e) \in B \times E_{ij}\}$. At the end of each edge step, if $|B|$ is larger than the beam width $\theta$, we rank all candidates by global score in descending order and keep the top $\theta$ ones.
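The ranking and pruning step above can be sketched in a few lines, assuming each candidate graph carries a precomputed local score s'(G) and a global feature vector f_G; the data layout here is a simplification for illustration, not the released decoder.

```python
import numpy as np

def global_score(local_score, feature_vector, u):
    """s(G) = s'(G) + u . f_G : local classifier score plus weighted global features."""
    return local_score + float(np.dot(u, feature_vector))

def prune_beam(candidates, u, theta):
    """Keep the top-theta candidate graphs by global score.

    candidates: list of (local_score, feature_vector) pairs, one per candidate graph
    u:          learned global feature weight vector
    theta:      beam width
    """
    ranked = sorted(candidates,
                    key=lambda c: global_score(c[0], c[1], u),
                    reverse=True)
    return ranked[:theta]

# Toy usage: three candidate graphs, two binary global features, beam width 2.
u = np.array([1.5, -2.0])                      # e.g., reward feature 1, penalize feature 2
candidates = [(3.0, np.array([1.0, 0.0])),     # local 3.0, fires feature 1 -> global 4.5
              (3.5, np.array([0.0, 1.0])),     # local 3.5, fires feature 2 -> global 1.5
              (2.0, np.array([0.0, 0.0]))]     # local 2.0, no features     -> global 2.0
print(prune_beam(candidates, u, theta=2))
```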
After the last step, we return the graph with the highest global score as the information network for the input sentence. 4 Experiments 4.1 Data We perform our experiments on the Automatic Content Extraction (ACE) 2005 dataset3, which provides entity, value, time, relation, and event annotations for English, Chinese, and Arabic. Following Wadden et al. (2019)’s pre-processing4, we conduct experiments on two datasets, ACE05-R that includes named entity and relation annotations, and ACE05-E that includes entity, relation, and event annotations. We keep 7 entity types, 6 coarsegrained relation types, 33 event types, and 22 argument roles. In order to reinstate some important elements absent from ACE05-R and ACE05-E, we create a new dataset, ACE05-E+, by adding back the order of relation arguments, pronouns, and multi-token event triggers, which have been largely ignored in previous work. We also skip lines before the <text> tag (e.g., headline, datetime) as they are not annotated. In addition to ACE, we derive another dataset, ERE-EN, from the Entities, Relations and Events (ERE) annotation task created under the Deep Exploration and Filtering of Test (DEFT) program because it covers more recent articles. Specifically, we extract 458 documents and 16,516 sentences from three ERE datasets, LDC2015E29, LDC2015E68, and LDC2015E78. For ERE-EN, we keep 7 entity types, 5 relation types, 38 event types, and 20 argument roles. To evaluate the portability of our model, we also develop a Chinese dataset from ACE2005 and a Spanish dataset from ERE (LDC2015E107). We refer to these datasets as ACE05-CN and ERE-ES respectively. 4.2 Experimental Setup We optimize our model with BertAdam for 80 epochs with a learning rate of 5e-5 and weight decay of 1e-5 for BERT, and a learning rate of 1e-3 and weight decay of 1e-3 for other parameters. We use use the bert-base-multilingual-cased 3https://www.ldc.upenn.edu/collaborations/ past-projects/ace 4https://github.com/dwadden/dygiepp 8004 Node Step E11 Candidate 1 of node E1 E12 Candidate 2 of node E1 Node Step E11 E11 E12 E12 T11 T12 T11 T12 E11 E11 E12 E12 T11 T12 T11 T12 E11 E11 E12 E12 T11 T12 T11 T12 Edge Step Add v1 Add v2 Add e1,2 Sort FAC Fine Campbell fines PER Fine Campbell fines entity entity Example: He also brought a check from Campbell to pay the fines and fees. E12 E11 E11 E12 T11 T12 T11 T12 E11 E11 E12 E12 T11 T12 T12 T11 Prune Keep Top Graphs Sort by global score E1: Campbell T1: fine R11 R11 R11 R11 R12 R12 R12 R12 R11 R12 R11 R11 R12 R11 R12 R12 ... ... Figure 4: An illustration of our decoding algorithm. At each step, we expand each candidate graph by adding a new node and possible edges between it and existing nodes. After that, we rank all expanded graphs and keep the top ones. Dataset Split #Sents #Entities #Rels #Events Train 10,051 26,473 4,788 ACE05-R Dev 2,424 6,362 1,131 Test 2,050 5,476 1,151 Train 17,172 29.006 4,664 4,202 ACE05-E Dev 923 2,451 560 450 Test 832 3,017 636 403 Train 6,841 29,657 7,934 2,926 ACE05-CN Dev 526 2,250 596 217 Test 547 2,388 672 190 Train 19,240 47,525 7,152 4,419 ACE05-E+ Dev 902 3,422 728 468 Test 676 3,673 802 424 Train 14,219 38,864 5,045 6,419 ERE-EN Dev 1,162 3,320 424 552 Test 1,129 3,291 477 559 Train 7,067 11,839 1,698 3,272 ERE-ES Dev 556 886 120 210 Test 546 811 108 269 Table 2: Dataset statistics. model5 for ACE05-CN and ERE-ES, and use the bert-large-cased model for other datasets. Following (Wadden et al., 2019), we use two-layer FFNs with a dropout rate of 0.4 for local classifiers. 
We use 150 hidden units for entity and relation extraction, and 600 hidden units for event extraction. For global features, we set βv and βe to 2 and set θ to 10. In our experiments, we use random seeds and report averaged scores across runs. We use the same criteria as (Zhang et al., 2019; Wadden et al., 2019) for evaluation as follows. • Entity: An entity mention is correct if its offsets and type match a reference entity. • Relation: A relation is correct if its relation type 5https://huggingface.co/transformers/ pretrained_models.html is correct and the offsets of the related entity mentions are correct. • Trigger: A trigger is correctly identified (TrigI) if its offsets match a reference trigger. It is correctly classified (Trig-C) if its event type also matches the reference trigger. • Argument: An argument is correctly identified (Arg-I) if its offsets and event type match a reference argument mention. It is correctly classified (Arg-C) if its role label also matches the reference argument mention. 4.3 Overall Performance In Table 3, we compare our results with two models: (1) DYGIE++ (Wadden et al., 2019), the stateof-the-art end-to-end IE model that utilizes multisentence BERT encodings and span graph propagation; (2) BASELINE that follows the architecture of ONEIE but only uses the output of the last layer of BERT and local classifiers. We can see that our model consistently outperforms DYGIE++ and BASELINE on ACE05-R and ACE05-E. In (Wadden et al., 2019), the authors show that combining triggers predicted by a four-model ensemble optimized for trigger detection can improve the performance of event extraction. While we also report our results using a four-model ensemble in Table 4 for fair comparison, we hold the opinion that the single-model scores in Table 3 better reflect the actual performance of ONEIE and should be used for future comparison. Table 5 shows the performance of ONEIE on two new datasets, ACE05-E+ and ERE-EN. In Table 6 we list salient global features learned by the model. Take feature #9 as an example, if a 8005 Dataset Task DYGIE++ BASELINE ONEIE ACE05-R Entity 88.6 88.8 Relation 63.4 67.5 ACE05-E Entity 89.7 90.2 90.2 Trig-I 76.6 78.2 Trig-C 69.7 73.5 74.7 Arg-I 53.0 56.4 59.2 Arg-C 48.8 53.9 56.8 Table 3: Results on ACE2005 datasets (F-score, %). Dataset Task DYGIE++* ONEIE* ACE05-E Entity 90.7 90.3 Trig-I 76.5 78.6 Trig-C 73.6 75.2 Arg-I 55.4 60.7 Arg-C 52.5 58.6 Table 4: Experiment results on ACE05-E (F-score, %). DYGIE++* and ONEIE* use a four-model ensemble optimized for trigger detection. Task Entity Trig-I Trig-C Arg-I Arg-C Relation ACE05-E+ 89.6 75.6 72.8 57.3 54.8 58.6 ERE-EN 87.0 68.4 57.0 50.1 46.5 53.2 Table 5: New benchmark results (F-score, %). candidate graph contains multiple ORG-AFF edges incident to the same node, the model will demote this graph by adding a negative value into its global score. We also observe that the weights of about 9% global features are almost not updated, which indicates that they are barely found in both goldstandard and predicted graphs. In Table 8, we perform qualitative analysis on concrete examples. 4.4 Porting to Another Language As Table 7, we evaluate the proposed framework on ACE05-CN and ERE-ES. The results show that ONEIE works well on Chinese and Spanish data without any special design for the new language. We also observe that adding English training data can improve the performance on Chinese and Spanish. 
4.5 Remaining Challenges We have analyzed 75 of the remaining errors and in Figure 5 we present the distribution of various error types which need more features and knowledge acquisition to address in the future. In this section, we will discuss some main categories with examples. Need background knowledge. Most of current IE methods ignore external knowledge such as entity attributes and scenario models. For examPositive Feature Weight 1 A TRANSPORT event has only one DESTINATION argument 2.61 2 An ATTACK event has only one PLACE argument 2.31 3 A TRANSPORT event has only one ORIGIN argument 2.01 4 An END-POSITION event has only one PERSON argument 1.51 5 A PER-SOC relation exists between two PER entities 1.08 6 A GEN-AFF relation exists between ORG and LOC entities 0.96 7 A BENEFICIARY argument is a PER entity 0.93 8 A GEN-AFF relation exists between ORG and GPE entities 0.90 Negative Feature Weight 9 An entity has an ORG-AFF relation with multiple entities -3.21 10 An entity has an PART-WHOLE relation with multiple entities -2.49 11 An event has two PLACE arguments -2.47 12 A TRANSPORT event has multiple DESTINATION arguments -2.25 13 An entity has a GEN-AFF relation with multiple entities -2.02 14 An ATTACK event has multiple PLACE arguments -1.86 15 An entity has a PHYS relation with multiple entities -1.69 16 An event has multiple VICTIM arguments -1.61 Table 6: Salient positive and negative global features. Dataset Training Entity Relation Trig-C Arg-C ACE05-CN CN 88.5 62.4 65.6 52.0 CN+EN 89.8 62.9 67.7 53.2 ERE-ES ES 81.3 48.1 56.8 40.3 ES+EN 81.8 52.9 59.1 42.3 Table 7: Results on ACE05-CN and ERE-ES (F-score, %). For ACE05-CN, EN refers to ACE05-E+. For ERE-ES, EN refers to ERE-EN. 16.0% 17.3% 12.0% 13.3% 6.7% 8.0% 18.7% 4.0% 4.0% Underspecified definition Need Background Knowledge Annotation error Generic entity & uncertain event Need syntactic structure Multiple events per trigger Rare word Metaphor Cross-sentence reasoning Figure 5: Distribution of remaining errors. ple, in the following sentence, “And Putin’s media aide, Sergei Yastrzhembsky, told Kommersant Russia would not forgive the Iraqi debt”, our model mistakenly identifies “Kommersan” as a person instead of organization. With entity linking, we 8006 Sentence & Analysis Baseline +Global Features #1: Russia’s foreign minister expressed outrage at suggestions from a top Washington official last week that Moscow should forgive the eight billion dollars in Soviet-era debt that Baghdad owes it, as a gesture of good will. ⋆Global feature category: 8 ⋆Analysis: It is unlikely for a person to have an ORG-AFF relation with multiple entities. GPE Russia PER minister GPE Washington PER official ORG-AFF ORG-AFF ORG-AFF ORG-AFF GPE Russia PER minister GPE Washington PER official ORG-AFF ORG-AFF #2: They also deployed along the border with Israel. ⋆Global feature category: 9 ⋆Analysis: It is uncommon that an ORIGIN argument and a DESTINATION argument have a PART-WHOLE relation. LOC border GPE Israel Transport deployed PART-WHOLE destination origin LOC border GPE Israel Transport deployed PART-WHOLE origin #3: Prime Minister Abdullah Gul resigned earlier Tuesday to make way for Erdogan , who won a parliamentary seat in byelections Sunday. ⋆Global feature categories: 2 and 5 ⋆Analysis: 1. An ELECT usually has only one PERSON argument; 2. An entity is unlikely to act as a PERSON argument for ENDPOSITION and ELECT events at the same time. 
person PER Erdogan PER Abdullah Gul End-Position resigned Elect won person person PER Erdogan PER Abdullah Gul End-Position resigned Elect won person person #4: Diller will continue to play a critical role in the future of Vivendi ’s entertainment arm. ⋆Global feature category: 6 ⋆Analysis: A PART-WHOLE relation should not exist between PER and ORG entities. PER Vivendi ORG arm PER Diller PART-WHOLE PER Vivendi ORG arm PER Diller #5: He also brought a check from Campbell to pay the fines and fees. ⋆Global feature category: 3 ⋆Analysis: As “Campbell” is likely to be an ENTITY argument of a FINE event, the model corrects its entity type from FAC to PER. FAC Campbell Fine fines PER Campbell Fine fines entity Table 8: Examples showing how global features improve the quality of extracted information networks. For some sentences, we do not draw the whole information network. can correct this error based on the first sentence in its Wikipedia page “Kommersant is a nationally distributed daily newspaper published in Russia mostly devoted to politics and business”. Rare words. The second challenge is the famous long-tail problem: many triggers, entity mentions (e.g., “caretaker”, “Gazeta.ru”) and contextual phrases in the test data rarely appear in the training data. While most event triggers are verbs or nouns, some adverbs and multi-word expressions can also serve as triggers. Multiple types per trigger. Some trigger words may indicate both the procedure and the result status of an action. For example, “named” may indicate both NOMINATE and START-POSITION events; “killed” and “eliminate” may indicate both ATTACK and DIE events. In these cases the human ground truth usually only annotates the procedure types, whereas our system produces the resultant event types. Need syntactic structure. Our model may benefit from deeper syntactic analysis. For example, in the following sentence “As well as previously holding senior positions at Barclays Bank, BZW and Kleinwort Benson, McCarthy was formerly a top civil servant at the Department of Trade and Industry”, our model misses all of the employers “Barclays Bank”, “BZW” and “Kleinwort Benson” for “McCarthy” probably because they appear in a previous sub-sentence. Uncertain events and metaphors. Our model mistakenly labels some future planned events as specific events because its lacking of tense prediction and metaphor recognition. For example, START-ORG triggered by “formation” does not happen in the following sentence “The statement did not give any reason for the move, but said Lahoud would begin consultations Wednesday aimed at the formation of a new government”. Our model also mistakenly identifies “camp” as a facility, and a 8007 DIE event triggered by “dying” in the following sentence “Russia hints ‘peace camp’ alliance with Germany and France is dying by Dmitry Zaks.”. The IE community is lacking of newer data sets with end-to-end annotations. Unfortunately, the annotation quality of the ACE data set is not perfect due to some long-term debates on the annotation guideline; e.g., Should “government” be tagged as a GPE or an ORG? Should “dead” be both an entity and event trigger? Should we consider designator word as part of the entity mention or not? 5 Related Work Previous work (Roth and Yih, 2004; Li et al., 2011) encodes inter-dependency among knowledge elements as global constraints in an integer linear programming framework to effectively remove extraction errors. 
Such integrity verification results can be used to find knowledge elements that violate the constraints and identify possible instances of detector errors or failures. Inspired by these previous efforts, we propose a joint neural framework with global features in which the weights are learned during training. Similar to (Li et al., 2014)’s method, ONEIE also uses global features to capture cross-subtask and cross-instance interdependencies, while our features are languageindependent and do not rely on other NLP tools such as dependency parsers. Our methods also differ in local features, optimization methods, and decoding procedures. Some recent efforts develop joint neural models to perform extraction of two IE subtasks, such as entity and relation extraction (Zheng et al., 2017; Katiyar and Cardie, 2017; Bekoulis et al., 2018; Fu et al., 2019; Luan et al., 2019; Sun et al., 2019) and event and temporal relation extraction (Han et al., 2019). Wadden et al. (2019) design a joint model to extract entities, relations and events based on BERT and dynamic span graphs. Our framework extends (Wadden et al., 2019) by incorporating global features based on cross-subtask and crossinstance constraints. Unlike (Wadden et al., 2019) that uses a span-based method to extract mentions, we adopt a CRF-based tagger in our framework because it can extract mentions of any length, not restricted by the maximum span width. 6 Conclusions and Future Work We propose a joint end-to-end IE framework that incorporates global features to capture the interdependency between knowledge elements. Experiments show that our framework achieves better or comparable performance compared to the state of the art and prove the effectiveness of global features. Our framework is also proved to be languageindependent and can be applied to other languages, and it can benefit from multi-lingual training. In the future, we plan to incorporate more comprehensive event schemas that are automatically induced from multilingual multimedia data and external knowledge to further improve the quality of IE. We also plan to extend our framework to more IE subtasks such as document-level entity coreference resolution and event coreference resolution. Acknowledgement This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA875019-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, Air Force No. FA8650-17C-7715, the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No. FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. References Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP2015). 
Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association of Computational Linguistics (TACL). Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. 8008 Natural language processing (almost) from scratch. Journal of Machine Learning Research. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT2019). Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. In Transactions of the Association for Computational Linguistics (TACL). Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL2019). Rujun Han, Qiang Ning, and Nanyun Peng. 2019. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP2019). Heng Ji and Ralph Grishman. 2005. Improving name tagging by reference resolution and relation detection. In In Proceedings of ACL 05, Ann Arbor, USA. Heng Ji, David Westbrook, and Ralph Grishman. 2005. Using semantic relations to refine coreference decisions. In In Proceedings of HLT/EMNLP 05, Vancouver, B.C., Canada. Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL2017). Johannes Kirschnick, Holmer Hemsen, and Volker Markl. 2016. JEDI: Joint entity and relation detection using type inference. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics System Demonstrations (ACL2016). Qi Li, Sam Anzaroot, Wen-Pin Lin, Xiang Li, and Heng Ji. 2011. Joint inference for cross-document information extraction. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM2011). Qi Li, Heng Ji, HONG Yu, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP2014). Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL2016). Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP2015). Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT2019). Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP2014). Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL2004). Avirup Sil and Alexander Yates. 2013. Re-ranking for joint named-entity recognition and linking. In Proceedings of the 22nd ACM international conference on Conference on Information & Knowledge Management (CIKM2013). Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, and Nan Duan. 2019. Joint type inference on entities and relations via graph convolutional networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL2019). David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP2019). Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT2016). Tongtao Zhang, Heng Ji, and Avirup Sil. 2019. Joint entity and event extraction with generative adversarial imitation learning. Data Intelligence. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging 8009 scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL2017).
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8010–8020 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8010 Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding Xinya Du and Claire Cardie Department of Computer Science Cornell University Ithaca, NY, USA {xdu, cardie}@cs.cornell.edu Abstract Few works in the literature of event extraction have gone beyond individual sentences to make extraction decisions. This is problematic when the information needed to recognize an event argument is spread across multiple sentences. We argue that document-level event extraction is a difficult task since it requires a view of a larger context to determine which spans of text correspond to event role fillers. We first investigate how end-toend neural sequence models (with pre-trained language model representations) perform on document-level role filler extraction, as well as how the length of context captured affects the models’ performance. To dynamically aggregate information captured by neural representations learned at different levels of granularity (e.g., the sentence- and paragraph-level), we propose a novel multi-granularity reader. We evaluate our models on the MUC-4 event extraction dataset, and show that our best system performs substantially better than prior work. We also report findings on the relationship between context length and neural model performance on the task. 1 Introduction The goal of document-level event extraction1 is to identify in an article events of a pre-specified type along with their event-specific role fillers, i.e., arguments. The complete document-level extraction problem generally requires role filler extraction, noun phrase coreference resolution and event tracking (i.e., determine which extracted role fillers belong to which event). In this work, we focus only on document-level role filler extraction. Figure 1 provides a representative example of this task. Given an article consisting of multiple paragraphs/sentences, and a fixed set of event types 1The task is also referred to as template filling (MUC-4, 1992). Related Work Machine reader reads through the document Perpetrator Individual four terrorists Perpetrator Organization Target Newspaper El Espectador Victim Teofilo Forero Castro, Luis Carlos Galan Sarmiento Weapon car bomb, dynamite [S1] ... by special urban troops, four terrorists have been arrested in soacha. [S2] They are responsible for the car bomb attack on the Newspaper El Espectador, to a series of bogota dynamite attacks, to the freeing of a group of paid assassins. [S3] The terrorists are also connected to the murder of Teofilo Forero Castro, … [S4] General Ramon is the commander of the 13th infantry brigade. [S5] He said that at least two of those arrested have fully confessed to having taken part in the accident of Luis Carlos Galan Sarmiento in soacha, Cundinamarca. [S6] .. triumph over organized crime, its accomplices and its protectors. ... Figure 1: The document-level event role fillers extraction task. (e.g., terrorist events) and associated roles (e.g., PERPETRATOR INDIVIDUAL, VICTIM, WEAPON), we aim to identify those spans of text that denote the role fillers for each event described in the text. This generally requires both sentence-level understanding and accurate interpretation of the context beyond the sentence. 
Examples include identifying “Teofilo Forero Castro” (mentioned in S3) as a victim of the car bomb attack event (mentioned in S2), determining there’s no role filler in S4 (both of which rely mainly on sentence-level understanding, and identifying “four terrorists” in S1 as a perpetrator individual (which requires coreference resolution across sentence boundaries). Generating the document-level extractions for events is essential in facilitating downstream applications such as information retrieval and article summarization (Yang and Mitchell, 2016), and for real-life applications such as trends analysis of world events (Sundheim, 1992). Recent work in document-level event role filler extraction has employed a pipeline architecture with separate classifiers for each type of role and for 8011 relevant context detection (Patwardhan and Riloff, 2009; Huang and Riloff, 2011). However these methods: (1) suffer from error propagation across different pipeline stages; and (2) require heavy feature engineering (e.g., lexico-syntactic pattern features for candidate role filler extraction; lexical bridge and discourse bridge features for detecting event-relevant sentences at the document level). Moreover, the features are manually designed for a particular domain, which requires linguistic intuition and domain expertise (Nguyen and Grishman, 2015). Neural end-to-end models have been shown to excel at sentence-level information extraction tasks, such as named entity recognition (Lample et al., 2016; Chiu and Nichols, 2016) and ACE-type within-sentence event extraction (Chen et al., 2015; Nguyen et al., 2016; Wadden et al., 2019). However, to the best of our knowledge, no prior work has investigated the formulation of document-level event role filler extraction as an end-to-end neural sequence learning task. In contrast to extracting events and their role fillers from standalone sentences, document-level event extraction poses special challenges for neural sequence learning models. First, capturing long-term dependencies in long sequences remains a fundamental challenge for recurrent neural networks (Trinh et al., 2018). To model long sequences, most RNN-based approaches use backpropagation through time. But it’s still difficult for the models to scale to very long sequences. We provide empirical evidence for this for event extraction in Section 4.3. Second, although pretrained bi-directional transformer models such as BERT (Devlin et al., 2019) better capture long-distance dependencies as compared to an RNN architecture, they still have a constraint on the maximum length of the sequence, which is below the length of many articles about events. In the sections below, we study how to train and apply end-to-end neural models for event role filler extraction. We first formalize the problem as a sequence tagging task over the tokens in a set of contiguous sentences in the document. To address the aforementioned challenges for neural models applied to long sequences, (1) we investigate the effect of context length (i.e., maximum input segment length) on model performance, and find the most appropriate length; and (2) propose a multi-granularity reader that dynamically aggregates the information learned from the local context (e.g., sentence-level) and the broader context (e.g., paragraph-level). 
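Concretely, the k-sentence training sequences described later (Section 3.1) can be built roughly as follows. This is a minimal sketch under our own simplifying assumptions: token-level BIO tags are assumed to be already aligned to sentences, and sampling details such as seeding are not specified in the paper.

```python
import random


def build_training_sequences(sent_tokens, sent_tags, k=2, seed=0):
    """Sketch of the k-sentence training-set construction (Section 3.1).

    sent_tokens: list of sentences, each a list of tokens.
    sent_tags:   parallel BIO tags, e.g. "B-PerpInd", "I-PerpInd", "O".
    Returns a balanced list of (tokens, tags) windows of k contiguous sentences.
    """
    windows = []
    for i in range(len(sent_tokens) - k + 1):            # overlapping windows s_i .. s_{i+k-1}
        toks = [t for s in sent_tokens[i:i + k] for t in s]
        tags = [t for s in sent_tags[i:i + k] for t in s]
        windows.append((toks, tags))

    positives = [w for w in windows if any(t != "O" for t in w[1])]
    negatives = [w for w in windows if all(t == "O" for t in w[1])]

    random.seed(seed)
    # Down-sample negatives so the training set is balanced, as described in the paper.
    negatives = random.sample(negatives, min(len(negatives), len(positives)))
    return positives + negatives


def build_test_sequences(sent_tokens, k=2):
    """At test time, sentences are grouped into consecutive, non-overlapping blocks of k."""
    return [
        [t for s in sent_tokens[i:i + k] for t in s]
        for i in range(0, len(sent_tokens), k)
    ]
```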
A quantitative evaluation and qualitative analysis of our approach on the MUC-4 dataset (MUC-4, 1992) both show that the multi-granularity reader achieves substantial improvements over the baseline models and prior work. For replication purposes, our repository for the evaluation and preprocessing scripts will be available at https://github.com/xinyadu/doc_ event_role. 2 Related Work Event extraction has been mainly studied under two paradigms: detecting the event trigger and extracting the arguments from an individual sentence (e.g., the ACE task (Doddington et al., 2004)2, vs. at the document level (e.g., the MUC-4 template-filling task (Sundheim, 1992)). Sentence-level Event Extraction The ACE event extraction task requires extraction of the event trigger and its arguments from a sentence. For example, in the sentence “ ... Iraqi soldiers were killed by U.S. artillery ...”, the goal is to identify the “die” event triggered by killed and the corresponding arguments (PLACE, VICTIM, INSTRUMENT, etc.). Many approaches have been proposed to improve performance on this specific task. Li et al. (2013, 2015) explore various hand-designed features; Nguyen and Grishman (2015); Nguyen et al. (2016); Chen et al. (2015); Liu et al. (2017, 2018) employ deep learning based models such as recurrent neural networks (RNNs) and convolutional neural network (CNN). Wadden et al. (2019) utilize pre-trained contextualized representations. The approaches generally focus on sentence-level context for extracting event triggers and arguments and rarely generalize to the document-event extraction setting (Figure 1). Only a few models have gone beyond individual sentences to make decisions. Ji and Grishman (2008) enforce event role consistency across documents. Liao and Grishman (2010) explore event type co-occurrence patterns to propagate event classification decisions. Similarly, Yang and Mitchell (2016) propose jointly extracting events and entities within a document context. Also related to our work are Duan et al. (2017) and Zhao et al. (2018), which utilize document embeddings to aid 2https://catalog.ldc.upenn.edu/ LDC2006T06 8012 event detection with recurrent neural networks. Although these approaches make decisions with crosssentence information, their extractions are still at the sentence level. Document-level Event Extraction has been studied mainly under the classic MUC paradigm (MUC-4, 1992). The full task involves the construction of answer key templates, one template per event (some documents in the dataset describe more than one events). Typically three steps are involved — role filler extraction, role filler mention coreference resolution and event tracking). In this work we focus on role filler extraction. From the modeling perspective, recent work explores both the local and additional context to make the role filler extraction decisions. GLACIER (Patwardhan and Riloff, 2009) jointly considers crosssentence and noun phrase evidence in a probabilistic framework to extract role fillers. TIER (Huang and Riloff, 2011) proposes to first determine the document genre with a classifier and then identify event-relevant sentences and role fillers in the document. Huang and Riloff (2012) propose a bottom-up approach that first aggressively identifies candidate role fillers (with lexico-syntactic pattern features), and then removes the candidates that are in spurious sentences (i.e., not event-related) via a cohesion classifier (with discourse features). 
Similar to Huang and Riloff (2012), we also incorporate both intra-sentence and cross-sentence features (paragraph-level features), but instead of using manually designed linguistic information, our models learn in an automatic way how to dynamically incorporate learned representations of the article. Also, in contrast to prior work that is pipeline-based, our approach tackles the task as an end-to-end sequence tagging problem. There has also been work on unsupervised event schema induction (Chambers and Jurafsky, 2011; Chambers, 2013) and open-domain event extraction (Liu et al., 2019) from documents: the main idea is to group entities corresponding to the same role into an event template. Our models, on the other hand, are trained in supervised way and the event schemas are pre-defined. Apart from event extraction, there has been increasing interest on cross-sentence relation extraction (Mintz et al., 2009; Peng et al., 2017; Jia et al., 2019). This work assumes that mentions are provided, and thus is more of a mention/entity-level classification problem. Our work instead focuses on role filler/span extraction using sequence tagging approaches; role filler type is determined during this process. Capturing Long-term Dependencies for Neural Sequence Models For training neural sequence models such as RNNs, capturing long-term dependencies in sequences remains a fundamental challenge (Trinh et al., 2018). Most approaches use backpropagation through time (BPTT) but it is difficult to scale to very long sequences. Many variations of models have been proposed to mitigate the effect of long sequence length, such as Long Short Term Memory (LSTM) Networks (Hochreiter and Schmidhuber, 1997; Gers et al., 1999; Graves, 2013) and Gated Recurrent Unit Networks (Cho et al., 2014). Transformer based models (Vaswani et al., 2017; Devlin et al., 2019) have also shown improvements in modeling long text. In our work for document-level event role filler extraction, we also implement LSTM layers in the models as well as utilize the pre-trained representations provided by the bi-directional transformer model – BERT. From an application perspective, we investigate the suitable length of context to incorporate for the neural sequence tagging model in the document-level extraction setting. We also study how to mitigate problems associated with long sequences by dynamically incorporating both sentence-level and paragraph-level representations in the model (Figure 3). 3 Methodology In the following we describe (1) how we transform the document into paired token-tag sequences and formalize the task as a sequence tagging problem (Section 3.1); (2) the architectures of our base ksentence reader (Section 3.2) and multi-granularity reader (Section 3.3). 3.1 Constructing Paired Token-tag Sequences from Documents and Gold Role Fillers We formalize document-level event role filler extraction as an end-to-end sequence tagging problem. The Figure 2 illustrates the general idea. Given a document and the text spans associated with the gold-standard (i.e., correct) fillers for each role, we adopt the BIO (Beginning, Inside, Outside) tagging scheme to transform the document into paired token/BIO-tag sequences.. We construct example sequences of variant context lengths for training and testing our end-to8013 … … Our Method: Training for reader [S1] ... by special urban troops, four terrorists have been arrested in soacha. 
[S2] They are responsible for the car bomb attack on the newspaper el espectador, to a series of bogota dynamite attacks, … [S3] The terrorists are also connected to the murder of teofilo forero castro, … [S4] General Ramon is the commander of the 13th infantry brigade. … Constructing positive sequences of length k (k=1 in this example) with BIO labels. Sample same number of negative sequences to construct a balanced training set. General ramon is the commander of the 13th infantry brigade . O O O O O O O O O O O … Training the sequence reader Perpetrator Individual four terrorists Perpetrator Organization Target newspaper el espectador Victim teofilo forero castro, luis carlos galan sarmiento Weapon car bomb, dynamite k sentences Embedding Layer BiLSTM Layer CRF Layer … four terrorists who are apparently … … B-PerpInd I-PerpInd O O O … ... four terrorists have been arrested in soacha … ... B-PerpInd I-PerpInd O O O O O … … are responsible for the car bomb attack on the newspaper … O O O O B-Weapon I-Weapon O O O B-Target el espectador , to a series of bogota dynamite attacks … I-Taget I-Target O O O O O O B-Weapon O … … 1 2 Figure 2: An overview of our framework for training the sequence reader for event role filler extraction. end k-sentence readers (i.e., the single-sentence, double-sentence, paragraph and chunk readers). By “chunk”, we mean the chunk of contiguous sentences which is right within the sequence length constraint for BERT – 512 in this case. Specifically, we use a sentence splitter3 to divide the document into sentences s1, s2, ..., sn. To construct the training set, starting from each sentence i, we concatenate the k contiguous sentences (si to si+k−1) to form overlapping candidate sequences of length k – sequence 1 consists of {s1, ..., sk}, sequence 2 consists of {s2, ..., sk+1}, etc. To make the training set balanced, we sample the same number of positive and negative sequences from the candidate sequences, where "positive" sequence contains at least one event role filler, and “negative” sequences contain no event role fillers. To construct the dev/test set, where the reader is applied, we simply group the contiguous k sentences together in order, producing n k sequences (i.e., sequence 1 consists of {s1, ..., sk}, sequence 2 consists of {sk+1, ..., s2k}, etc.) For the paragraph reader, we set k to average paragraph length for the training set, and to the real paragraph length for test set. We denote the token in the sequence with x, the input for the k-sentence reader is X = {x(1) 1 , x(1) 2 , ..., x(1) l1 , ..., x(k) 1 , x(k) 2 , ..., x(k) lk }; where x(k) i is the i-th token of the k-th sentence, and lk is the length of the k-th sentence. 3https://spacy.io/ 3.2 k-sentence Reader Since our general k-sentence reader does not recognize sentence boundaries, we simplify the notation for the input sequence as {x1, x2, ..., xm} here. Embedding Layer In the embedding layer, we represent each token xi in the input sequence as the concatenation of its word embedding and contextual token representation: • Word Embedding: We use the 100dimensional GloVe pre-trained word embeddings (Pennington et al., 2014) trained from 6B Web crawl data. We keep the pre-trained word embeddings fixed. Given a token xi, we have its word embedding: xei = E(xi). 
• Pre-trained LM representation: Contextualized embeddings produced by pre-trained language models (Peters et al., 2018; Devlin et al., 2019) have been proved to be capable of modeling context beyond the sentence boundary and improve performance on a variety of tasks. Here we employ the contextualized representations produced by BERT-base for our k-sentence labeling model, as well as the multi-granularity reader to be introduced next. Specifically, we use the average of all the 12 layers’ representations and freeze the weights (Peters et al., 2019) during training 8014 after empirical trials4. Given the sequence {x1, x2, ..., xm}, we have: xb1, xb2, ..., xbm = BERT(x1, x2, ..., xm) We forward the concatenation of the two representations for each token to the upper layers: xi = concat(xei, xbi) BiLSTM Layer To help the model better capture task-specific features between the sequence tokens. We use a multi-layer (3 layers) bi-directional LSTM encoder on top of the token representations, which we denote as BiLSTM: {p1, p2, ..., pm} = BiLSTM({x1, x2, ..., xm}) CRF Layer Drawing inspirations for sentencelevel sequence tagging models on tasks like NER (Lample et al., 2016). Modeling the labeling decisions jointly rather than independently improves the models performance (e.g., the tag “I-Weapon” should not follow “B-Victim”). We model labeling decisions jointly using a conditional random field (Lafferty et al., 2001). After passing {p1, p2, ..., pm} through a linear layer, we have P of size m× size of tag space, where Pi,j is the score of the tag j of the i-th token in the sequence. For a tag sequence y = {y1, ..., ym}, we have the score for the sequencetag pair as: score(X, y) = m X i=0 Ayi,yi+1 + m X i=1 Pi,yi A is the transition matrix of scores such that Ai,j represents the score of a transition from the tag i to tag j. A softmax function is applied over scores for all possible tag sequences, which yield a probability for the gold sequence ygold. The logprobability of the gold tag sequence is maximized during training. During decoding, the model predicts the output sequence that obtains the maximum score. 3.3 Multi-Granularity Reader To explore the effect of aggregating contextualized token representations from different granularities 4Using the representations of the last layer, or summing all the 12 layers’ representations give consistently worse results. (sentence- and paragraph-level), we propose the multi-granularity reader (Figure 3). Similar to the general k-sentence reader, we use the same embedding layer here to represent the tokens. But we apply the embedding layer to two granularities of the paragraph text (sentence- and paragraph-level). Although the word embeddings are the same for the embedding layers from different granularities, the contextualized representations are different for each token – when the token is encoded in the context of a sentence, or in the context of a paragraph. Correspondingly, we build two BiLSTMs (BiLSTMsent. and BiLSTMpara.) on top of the sentence-level contextualized token representations {˜x(1) 1 , ..., ˜x(1) l1 , ..., ˜x(k) lk , ..., ˜x(k) lk }, and the paragraph-level contextualized token representations {ˆx(1) 1 , ..., ˆx(1) l1 , ..., ˆx(k) lk , ..., ˆx(k) lk }: Sentence-Level BiLSTM The BiLSTMsent. is applied sequentially to each sentence in the paragraph: {˜p(1) 1 , ˜p(1) 2 , ..., ˜p(1) l1 } = BiLSTMsent.({˜x(1) 1 , ˜x(1) 2 , ..., ˜x(1) l1 }) ... 
{˜p(k) 1 , ˜p(k) 2 , ..., ˜p(k) lk } = BiLSTMsent.({˜x(k) 1 , ˜x(k) 2 , ..., ˜x(k) lk }) Then we have the sentence-level representations for each token in the paragraph as {˜p(1) 1 , ..., ˜p(1) l1 , ..., ˜p(k) 1 , ..., ˜p(k) lk } Paragraph-Level BiLSTM Another BiLSTM layer (BiLSTMpara.) is applied to the entire paragraph (as compared to BiLSTMsent., which is applied to each sentence), to capture the dependency between tokens in the paragraph: {ˆp(1) 1 , ..., ˆp(1) l1 , ..., ˆp(k) 1 , ..., ˆp(k) lk } = BiLSTMpara.({ˆx(1) 1 , ..., ˆx(1) l1 , .., ˆx(k) lk , ..., ˆx(k) lk }) Fusion and Inference Layer For each token x(j) i (the i-th token in the j-th sentence), to fuse the representations learned at the sentence-level (˜p(j) i ) and paragraph-level (ˆp(j) i ), we propose two options – the first uses a sum operation, and the second uses a gated fusion operation: 8015 [S1] … four terrorists have been arrested in soacha. Embedding Layer Sentence-Level BiLSTM Embedding Layer Sentence-Level BiLSTM Embedding Layer Sentence-Level BiLSTM [S2] … the car bomb attack on the newspaper el espectador … [S3]… murder teofilo forero castro … Embedding Layer Paragraph-Level BiLSTM [S1] … four terrorists have been arrested in soacha. [S2] … the car bomb attack on the newspaper el espectador … [S3]… murder teofilo forero castro … … CRF layer Rep. Fusion Concatenated representations from sentences in the paragraph concatenation … … … … … … … … … Figure 3: Overview for our multi-granularity reader. The dark blue BiLSTMsent. produces sentence-level representations for each token, the yellow BiLSTMpara. produces paragraph-level representations for each token. • Simple Sum Fusion: p(j) i = ˜p(j) i + ˆp(j) i • Gated Fusion: The gated fusion compute the gate vector g(j) i with its sentence-level token representation ˜p(j) i and paragraph-level token representation ˆp(j) i , to control how much information should be incorporated from the two representations. g(j) i = sigmoid(W1˜p(j) i + W2ˆp(j) i + b) p(j) i = g(j) i ⊙˜p(j) i + (1 −g(j) i ) ⊙ˆp(j) i ⊙: element-wise product Similarly to in the general k-sentence reader, we add the CRF layer (section 3.2) on top of the fused representations for each token in the paragraph {p(1) 1 , ..., p(1) l1 , ..., p(k) 1 , ..., p(k) lk }, to help jointly model the labeling decisions between tokens in the paragraph. 4 Experiments and Analysis We evaluate our models’ performance on the MUC4 event extraction benchmark (MUC-4, 1992), and compare to prior work. We also report findings on the effect of context length on the end-to-end readers’ performance on this document-level task. 4.1 Dataset and Evaluation Metrics MUC-4 Event Extraction Dataset The MUC-4 dataset consists of 1,700 documents with associated answer key (role filler) templates. To make sure our results are comparable to the previously reported results on this dataset, we use the 1300 documents for training, 200 documents (TST1+TST2) as the development set and the 200 documents (TST3+TST4) as the test set. Evaluation Metrics Following the prior work, we use head noun phrase match to compare the extractions against gold role fillers for evaluation 5; besides noun phrase matching, we also report exact match accuracy, to capture how well the models are capturing the role fillers’ boundary6. Our results are reported as Precision (P), Recall (R) and F-measure (F-1) score for the macro average for all the event roles. 
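For clarity, the scoring procedure just described can be approximated as follows. This is only a rough sketch: head-noun extraction is not implemented here (exact string matching after lower-casing stands in for it), and the official evaluation script may differ in detail.

```python
def macro_scores(pred, gold, normalize=str.lower):
    """pred/gold: dict mapping each role to the list of extracted / gold strings for the test set.

    Duplicate fillers (same normalized string for the same role) are conflated before scoring.
    Under the head-noun metric, `normalize` would map each phrase to its head noun; plain
    lower-casing here corresponds to the exact-match variant.
    """
    per_role = {}
    for role in gold:
        p_set = {normalize(x) for x in pred.get(role, [])}
        g_set = {normalize(x) for x in gold[role]}
        correct = len(p_set & g_set)
        prec = correct / len(p_set) if p_set else 0.0
        rec = correct / len(g_set) if g_set else 0.0
        per_role[role] = (prec, rec)
    # Macro P and R average the per-role scores; macro F-1 is their harmonic mean
    # (this convention is consistent with the numbers reported in Tables 1 and 2).
    macro_p = sum(p for p, _ in per_role.values()) / len(per_role)
    macro_r = sum(r for _, r in per_role.values()) / len(per_role)
    macro_f = 2 * macro_p * macro_r / (macro_p + macro_r) if macro_p + macro_r else 0.0
    return per_role, (macro_p, macro_r, macro_f)
```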
In Table 2, we also present the scores for each event role (i.e., PERPETRATOR INDIVIDUALS, PERPETRATOR ORGANIZATIONS, PHYSICAL TARGETS, VICTIMS and WEAPONS) based on the head noun match metric. The detailed documentation and implementation for the evaluation script will be released. 4.2 Baseline Systems and Our Systems We compare to the pipeline and manual feature engineering based systems: GLACIER (Patwardhan and Riloff, 2009) consists of a sentential event classifier and a set of plausible role filler recog5Duplicate role fillers (i.e., extractions for the same role that have the same head noun) are conflated before being scored; they are counted as one hit (if the system produces it) or one miss (if the system fails to produce any of the duplicate mentions). 6Similarly, duplicate extractions with the same string are counted as one hit or miss. 8016 Head Noun Match Exact Match Prec. Recall F-1 Prec. Recall F-1 GLACIER (Patwardhan and Riloff, 2009) 47.80 57.20 52.08 TIER (Huang and Riloff, 2011) 50.80 61.40 55.60 Cohesion Extract (Huang and Riloff, 2012) 57.80 59.40 58.59 w/o contextualized embedding Single-Sentence Reader 48.69 56.11 52.14 46.16 53.16 49.41 Double-sentence Reader 56.37 47.53 51.57 53.70 43.95 48.34 Paragraph Reader 53.19 53.16 53.17 49.45 49.26 49.35 Chunk Reader 61.76 37.04 46.31 56.91 34.92 43.28 w/ contextualized embedding Contextualized Single-Sentence Reader 47.32 61.26 53.39 44.40 57.67 50.17 Contextualized Double-sentence Reader 57.17 53.36 55.20 53.38 49.22 51.22 Contextualized Paragraph Reader 56.78 52.64 54.64 53.36 49.65 51.44 Contextualized Chunk Reader 60.90 41.10 49.07 55.18 37.51 44.66 Multi-Granularity Reader 56.44 62.77 59.44 52.03 56.81 54.32 Table 1: Macro average results for the document-level event extraction task (highest number of the column boldfaced). nizers for each event role. The final extraction decisions are based on the product of normalized sentential and phrasal probabilities; TIER (Huang and Riloff, 2011) proposes a multi-stage approach. It processes a document in three stages: classifying narrative document, recognizing event sentence and noun phrase analysis. Cohesion Extract (Huang and Riloff, 2012) adopts a bottom-up approach, which first aggressively identifies candidate role fillers in the document and then refines the candidate set with cohesion sentence classifier. Cohesion Extract obtains substantially better precision and with similar level of recall as compared to GLACIER and TIER. To investigate how the neural models capture the long dependency in the context of variant length (single-sentence, double-sentence, paragraph or longer), we initialize the k in k-sentence reader to different values to build the: Single-Sentence Reader (k = 1), which reads through the document sentence-by-sentence to extract the event role fillers; Double-Sentence Reader (k = 2), which reads the document with step of two sentences; Paragraph Reader (k = # sentences in the paragraph), which reads the document paragraph-byparagraph; Chunk Reader (k = maximum # of sentences that fit right in the length constraint for pretrained LM models), which reads the document with the longest step (the constraint of BERT model). The final row in Table 1&2 presents the results obtained with our Multi-Granularity Reader. Similar to the paragraph-level reader, it reads through document paragraph-by-paragraph, but learns the representations for both intra-sentence and inter-sentence context. 4.3 Results and Findings We report the macro average results in Table 1. 
To understand in detail how the models extract the fillers for each event role, we also report the per event role results in Table 2. We summarize the results into important findings below: • The end-to-end neural readers can achieve nearly the same level or significantly better results than the pipeline systems. Although our models rely on no hand-designed features, the contextualized double-sentence reader and paragraph reader achieves nearly the same level of F-1 compared to Cohesion Extraction (CE), judging by the head noun matching metric. Our multi-granularity reader performs significantly better (∼60) than the prior stateof-the-art. • Contextualized embeddings for the sequence consistently improve the neural readers’ performance. The results show that the contextualized k-sentence readers all outperform their non-contextualized counterparts, especially when k > 1. The trends also exhibit in the per event role analysis (Table 2). To notice, we freeze the transformers’ parameters during training (fine-tuning yields worse results). • It’s not the case that modeling the longer context will result in better neural sequence 8017 PerpInd PerpOrg Target Victim Weapon P R F-1 P R F-1 P R F-1 P R F-1 P R F-1 GLACIER (Patwardhan and Riloff, 2009) 51 58 54 34 45 38 42 72 53 55 58 56 57 53 55 TIER (Huang and Riloff, 2011) 54 57 56 55 49 51 55 68 61 63 59 61 62 64 63 Cohesion Extract (Huang and Riloff, 2012) 54 57 56 55 49 51 55 68 61 63 59 61 62 64 63 w/o contextualized embedding Single-Sentence Reader 38.38 50.68 43.68 40.98 69.05 51.44 62.50 42.76 50.78 36.69 55.79 44.27 64.91 62.30 63.58 Double-Sentence Reader 50.00 35.14 41.27 63.83 35.71 45.80 61.62 44.83 51.90 51.02 54.74 52.81 55.41 67.21 60.74 Paragraph Reader 42.51 51.35 46.52 44.80 54.76 49.28 70.33 43.45 53.71 53.75 47.37 50.36 54.55 68.85 60.87 Chunk Reader 65.63 26.19 37.44 50.00 45.45 47.62 77.78 22.62 35.05 55.00 21.15 30.56 60.42 69.77 64.76 w/ contextualized embedding C-Single-Sentence Reader 44.97 52.70 48.53 35.15 73.81 47.62 71.74 24.83 36.89 33.63 77.89 46.98 51.11 77.05 61.46 C-Double-Sentence Reader 63.49 31.76 42.34 53.25 48.81 50.93 69.52 50.34 58.40 44.03 62.11 51.53 55.56 73.77 63.38 C-Paragraph Reader 43.92 53.38 48.19 52.94 54.76 53.84 74.19 44.83 55.89 50.57 46.32 48.35 62.30 63.93 63.10 C-Chunk Reader 57.14 27.38 37.02 47.62 40.91 44.01 70.27 29.76 41.81 59.46 42.31 49.44 70.00 65.12 67.47 Multi-Granularity Reader 53.08 52.23 52.65 50.99 67.88 58.23 60.38 64.10 62.18 49.34 62.05 54.97 68.42 67.57 67.99 Table 2: Per event role results based on head noun match metric (“C-” stands for contextualized). The highest F-1 are boldfaced for each event role. Head Noun Match Exact Match Precision Recall F-1 Precision Recall F-1 Multi-granularity Reader 56.44 62.77 59.44 52.03 56.81 54.32 w/o gated fusion 48.09 67.32 56.10 43.75 62.37 51.43 w/o BERT 59.16 50.80 54.66 55.48 46.99 50.88 w/o CRF layer 50.52 56.95 53.54 47.02 53.55 50.07 Table 3: Ablation study on modules’ influence on the multi-granularity reader. tagging model on this document-level task. When increasing the input context from a single sentence to two sentences, the reader has a better precision and lower recall, resulting in no better F-1; When increase the input context length further to the entire paragraph, the precision increases and recall remains the same level, resulting in higher F-1; When we keep increasing the length of input context, the reader becomes more conservative and F-1 drops significantly. 
All these indicate that focusing on the local (intra-sentence) and broader (paragraph-level) context are both important for the task. Similar results regarding the context length have also been found in document-level coreference resolution (Joshi et al., 2019). • Our multi-granularity reader that dynamically incorporates sentence-level and paragraph-level contextual information performs significantly better, than the nonend-to-end systems and our base k-sentence readers on the macro average F-1 metric. In terms of the per event role performance (Table 2), our reader: (1) substantially outperforms CE with a ∼7 F-1 gap on the PERPETRATOR ORGANIZATION role; (2) slightly outperforms CE (∼1 on the Target category); (3) achieves nearly the same-level of F-1 for PERPETRATOR INDIVIDUAL and worse F-1 on VICTIM category. 5 Further Analysis We conduct an ablation study on how modules of our multi-granularity reader affect its performance on this document-level extraction task (Table 3). From the results, we find that: (1) when replacing the gated fusion operation with the simple sum of the sentence- and paragraph-level token representations, the precision and F-1 drop substantially, which proves the importance of dynamically incorporating context; (2) when removing the BERT’s contextualized representations, the model becomes more conservative and yields substantially lower recall and F-1; (3) when replacing the CRF layer and make independent labeling decisions for each token, both the precision and recall drops substantially. We also do an error analysis with examples and 8018 predictions from different models, to understand qualitatively the advantages and disadvantages of our models. In the first example below (green span: gold extraction, the role after is the span’s event role), the multi-granularity (MG) reader and single-sentence reader correctly extracts the two target expressions, which the paragraph reader overlooks. Although only in the last sentence the attack and targets are mentioned, our MG reader successfully captures this with focusing on both the paragraph-level and intra-sentence context. ... the announcer says president virgilio barco will tonight disclose his government’s peace proposal. ...... . Near the end, the announcer adds to the initial report on the el tomate attack with a 3-minute update that adds 2 injured, 21 houses Target destroyed, and 1 bus Target burned. In the second example (red span: false positive perpInd extraction by the single-sentence reader), although “members of the civil group” appears in a sentence about explosion, judging from paragraph-level context or reasoning about the expression itself should help confirm that it is not perpetrator individual. The MG and paragraph reader correctly handles this and also extracts “the bomb”. .... An attack came at approximately 22:30 last night. Members of the civil group and the peruvian investigative police went to the site of the explosion. The members of the republican guard antiexplosives brigade are investigating to determine the magnitude of the bomb Weapon used in this attack. There’s substantial improvement space for our MG reader’s predictions. There are many role fillers which the reader overlooks. In the example below, “La Tandona” being a perpetrator organization is implicitly expressed in the document and the phrase did not appear elsewhere in the corpus. But external knowledge (e.g., Wikipedia) could help confirm its event role. ... 
Patriotic officer, it is time we sit down to talk, to see what we can do with our fatherland, and what are we going to do with La Tandona PerpOrg. .... To continue defending what, we ask you. ... . In the last example, there are no explicit expression such as “kill” or “kidnap” in the context for the target. Thus it requires deeper understanding of the entire narrative and reasoning about the surrounding context to understand that “Jorge Serrano Gonzalez” is involved in a terrorism event. ... said that the guerrillas are desperate and ... . The president expressed his satisfaction at the release of Santander department senator Jorge Serrano Gonzalez Target, whom he described as one of the most important people that colombian democracy has at this moment. 6 Conclusion and Future Work We have demonstrated that document-level event role filler extraction could be successfully tackled with end-to-end neural sequence models. Investigations on how the input context length affects the neural sequence readers’ performance show that context of very long length might be hard for the neural models to capture and results in lower performance. We propose a novel multi-granularity reader to dynamically incorporate paragraph- and sentence-level contextualized representations. Evaluations on the benchmark dataset and qualitative analysis prove that our model achieves substantial improvement over prior work. In the future work, it would be interesting to further explore how the model can be adapted to jointly extract role fillers, tackles coreferential mentions and constructing event templates. Acknowledgments We thank the anonymous reviewers and Ana Smith for helpful feedback. References Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Seattle, Washington, USA. Association for Computational Linguistics. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 976–986, Portland, Oregon, USA. Association for Computational Linguistics. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176, Beijing, China. Association for Computational Linguistics. Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans8019 actions of the Association for Computational Linguistics, 4:357–370. Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program – tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04), Lisbon, Portugal. European Language Resources Association (ELRA). Shaoyang Duan, Ruifang He, and Wenli Zhao. 2017. Exploiting document level information to improve event detection via recurrent neural networks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 352–361, Taipei, Taiwan. Asian Federation of Natural Language Processing. Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. 1999. Learning to forget: Continual prediction with lstm. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Ruihong Huang and Ellen Riloff. 2011. Peeling back the layers: Detecting event role fillers in secondary contexts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1137–1147, Portland, Oregon, USA. Association for Computational Linguistics. Ruihong Huang and Ellen Riloff. 2012. Modeling textual cohesion for event extraction. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT, pages 254–262, Columbus, Ohio. Association for Computational Linguistics. Robin Jia, Cliff Wong, and Hoifung Poon. 2019. Document-level n-ary relation extraction with multiscale representation learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3693–3704, Minneapolis, Minnesota. Association for Computational Linguistics. Mandar Joshi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2019. Bert for coreference resolution: Baselines and analysis. arXiv preprint arXiv:1908.09091. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73–82, Sofia, Bulgaria. Association for Computational Linguistics. Xiang Li, Thien Huu Nguyen, Kai Cao, and Ralph Grishman. 2015. Improving event detection with abstract meaning representation. 
In Proceedings of the First Workshop on Computing News Storylines, pages 11–15, Beijing, China. Association for Computational Linguistics. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797, Uppsala, Sweden. Association for Computational Linguistics. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789–1798, Vancouver, Canada. Association for Computational Linguistics. Xiao Liu, Heyan Huang, and Yue Zhang. 2019. Open domain event extraction using neural latent variable 8020 models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2860–2871, Florence, Italy. Association for Computational Linguistics. Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018. Jointly multiple events extraction via attentionbased graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247–1256, Brussels, Belgium. Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics. MUC-4. 1992. Fourth message understanding conference (MUC-4). In Proceedings of FOURTH MESSAGE UNDERSTANDING CONFERENCE (MUC4), McLean, Virginia. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309, San Diego, California. Association for Computational Linguistics. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365–371, Beijing, China. Association for Computational Linguistics. Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 151–160, Singapore. Association for Computational Linguistics. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics, 5:101–115. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7–14, Florence, Italy. Association for Computational Linguistics. Beth M. Sundheim. 1992. Overview of the fourth message understanding evaluation and conference. In FOURTH MESSAGE UNDERSTANDING CONFERENCE (MUC-4), Proceedings of a Conference Held in McLean, Virginia, June 16-18, 1992. Trieu Trinh, Andrew Dai, Thang Luong, and Quoc Le. 2018. Learning longer-term dependencies in RNNs with auxiliary losses. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4965–4974, StockholmsmÃd’ssan, Stockholm Sweden. PMLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5788– 5793, Hong Kong, China. Association for Computational Linguistics. Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–299, San Diego, California. Association for Computational Linguistics. Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Document embedding enhanced event detection with hierarchical and supervised attention. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 414–419, Melbourne, Australia. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8021–8032 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8021 Exploiting the Syntax-Model Consistency for Neural Relation Extraction Amir Pouran Ben Veyseh1, Franck Dernoncourt2, Dejing Dou1 and Thien Huu Nguyen1,3 1Department of Computer and Information Science, University of Oregon, Eugene, Oregon, USA 2Adobe Research, San Jose, CA, USA 3VinAI Research, Hanoi, Vietnam {apouranb, dou, thien}@cs.uoregon.edu [email protected] Abstract This paper studies the task of Relation Extraction (RE) that aims to identify the semantic relations between two entity mentions in text. In the deep learning models for RE, it has been beneficial to incorporate the syntactic structures from the dependency trees of the input sentences. In such models, the dependency trees are often used to directly structure the network architectures or to obtain the dependency relations between the word pairs to inject the syntactic information into the models via multi-task learning. The major problems with these approaches are the lack of generalization beyond the syntactic structures in the training data or the failure to capture the syntactic importance of the words for RE. In order to overcome these issues, we propose a novel deep learning model for RE that uses the dependency trees to extract the syntax-based importance scores for the words, serving as a tree representation to introduce syntactic information into the models with greater generalization. In particular, we leverage OrderedNeuron Long-Short Term Memory Networks (ON-LSTM) to infer the model-based importance scores for RE for every word in the sentences that are then regulated to be consistent with the syntax-based scores to enable syntactic information injection. We perform extensive experiments to demonstrate the effectiveness of the proposed method, leading to the state-of-the-art performance on three RE benchmark datasets. 1 Introduction One of the fundamental tasks in Information Extraction (IE) is Relation Extraction (RE) where the goal is to find the semantic relationships between two entity mentions in text. Due to its importance, RE has been studied extensively in the literature. The recent studies on RE has focused on deep learning to develop methods to automatically induce sentence representations from data (Zeng et al., 2014; Nguyen and Grishman, 2015a; Verga et al., 2018). A notable insight in these recent studies is that the syntactic trees of the input sentences (i.e., the dependency trees) can provide effective information for the deep learning models, leading to the stateof-the-art performance for RE recently (Xu et al., 2015; Guo et al., 2019; Tran et al., 2019). In particular, the previous deep learning models for RE has mostly exploited the syntactic trees to structure the network architectures according to the word connections presented in the trees (e.g., performing Graph Convolutional Neural Networks (GCN) over the dependency trees (Zhang et al., 2018)). Unfortunately, these models might not be able to generalize well as the tree structures of the training data might significantly differ from those in the test data (i.e., the models are overfit to the syntactic structures in the training data). For instance, in the cross-domain setting for RE, the domains for the training data and test data are dissimilar, often leading to a mismatch between the syntactic structures of the training data and test data. 
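For reference, "structuring the network architecture by the tree", as in the GCN-based models mentioned above, typically amounts to turning the dependency tree into an adjacency matrix and letting each word aggregate its tree neighbours. The sketch below illustrates this general recipe only; it is not the exact model of Zhang et al. (2018), and details such as self-loops and normalization vary across papers.

```python
import numpy as np


def tree_adjacency(heads):
    """Adjacency matrix of a dependency tree, given 1-indexed head positions (0 = root)."""
    n = len(heads)
    adj = np.eye(n)                       # self-loops, a common choice in GCN variants
    for child, head in enumerate(heads):
        if head > 0:
            adj[child, head - 1] = adj[head - 1, child] = 1.0
    return adj


def gcn_layer(adj, hidden, weight):
    """One graph-convolution step: each word averages its tree neighbours, then projects."""
    deg = adj.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (adj @ hidden) / deg @ weight)   # ReLU nonlinearity
```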
In order to overcome this issue, the overall strategy is to obtain a more general representation of the syntactic trees that can be used to inject the syntactic information into the deep learning models to achieve better generalization for RE. A general tree representation for RE is presented in (Veyseh et al., 2019) where the dependency trees are broken down into their sets of dependency relations (i.e., the edges) between the words in the sentences (called the edge-based representation). These dependency relations are then used in a multitask learning framework for RE that simultaneously predicts both the relation between the two entity mentions and the dependency connections between the pairs of words in the input sentences. Although the dependency connections might be less specific to the training data than the whole tree structures, 8022 the major limitation of the edge-based representation is that it only captures the pairwise (local) connections between the words and completely ignores the overall (global) importance of the words in the sentences for the RE problem. In particular, some words in a given sentence might involve more useful information for relation prediction in RE than the other words, and the dependency tree for this sentence can help to better identify those important words and assign higher importance scores for them (e.g., choosing the words along the shortest dependency paths between the two entity mentions). We expect that introducing such importance information for the words in the deep learning models might lead to improved performance for RE. Consequently, in this work, we propose to obtain an importance score for each word in the sentences from the dependency trees (called the syntax-based importance scores). These will serve as the general tree representation to incorporate the syntactic information into the deep learning models for RE. How can we employ the syntax-based importance scores in the deep learning models for RE? In this work, we first use the representation vectors for the words from the deep learning models to compute another importance score for each word (called the model-based importance scores). These model-based importance scores are expected to quantify the semantic information that a word contributes to successfully predict the relationship between the input entity mentions. Afterward, we propose to inject the syntax-based importance scores into the deep learning models for RE by enforcing that the model-based importance scores are consistent with the syntactic counterparts (i.e., via the KL divergence). The motivation of the consistency enforcement is to promote the importance scores as the bridge through which the syntactic information can be transmitted to enrich the representation vectors in the deep learning models for RE. In order to implement this idea, we employ the Ordered-Neuron Long Short-Term Memory Networks (ON-LSTM) (Shen et al., 2019) to compute the model-based importance scores for the words in the sentences for RE. ON-LSTM extends the popular Long Short-Term Memory Networks (LSTM) by introducing two additional gates (i.e., the master forget and input gates) in the hidden vector computation. These new gates controls how long each neuron in the hidden vectors should be activated across different time steps (words) in the sentence (i.e., higher-order neurons would be maintained for a longer time). 
Based on such controlled neurons, the model-based importance score for a word can be determined by the number of active neurons that the word possesses in the operation of ON-LSTM. To our knowledge, this is the first time ON-LSTM is applied for RE in the literature. One of the issues in the original ON-LSTM is that the master gates and the model-based importance score for each word are only conditioned on the word itself and the left context encoded in the previous hidden state. However, in order to infer the importance for a word in the overall sentence effectively, it is crucial to have a view over the entire sentence (i.e., including the context words on the right). To this end, instead of relying only on the current word, we propose to obtain an overall representation of the sentence that is used as the input to compute the master gates and the importance score for each word in the sentence. This would enrich the model-based importance scores with the context from the entire input sentences, potentially leading to the improved RE performance of the model in this work. Finally, to further improve the representations learned by the deep learning models for RE, we introduce a new inductive bias to promote the similarity between the representation vectors for the overall sentences and the words along the shortest dependency paths between the two entity mentions. The intuition is that the relation between the two entity mentions of interest in a sentence for RE can be inferred from either the entire sentence or the shortest dependency path between the two entity mentions (due to the demonstrated ability of the shortest dependency path to capture the important context words for RE in the prior work (Bunescu and Mooney, 2005)). We thus expect that the representation vectors for the sentence and the dependency path should be similar (as both capture the semantic relation) and explicitly exploiting such similarity can help the models to induce more effective representations for RE. Our extensive experiments on three benchmark datasets (i.e., ACE 2005, SPOUSE and SciERC) demonstrate the effectiveness of the proposed model for RE, leading to the state-of-the-art performance for these datasets. 2 Related Work RE has been traditionally solved by the featurebased or kernel-based approaches (Zelenko et al., 8023 2003; Zhou et al., 2005; Bunescu and Mooney, 2005; Sun et al., 2011; Chan and Roth, 2010; Nguyen and Grishman, 2014; Nguyen et al., 2015c). One of the issues in these approaches is the requirement for extensive feature or kernel engineering effort that hinder the generalization and applicability of the RE models. Recently, deep learning has been applied to address these problems for the traditional RE approaches, producing the state-ofthe-art performance for RE. The typical network architectures for RE include the Convolutional Neural Networks (Zeng et al., 2014; Nguyen and Grishman, 2015a; dos Santos et al., 2015; Wang et al., 2016), Recurrent Neural Networks (Nguyen and Grishman, 2016; Zhou et al., 2016; Zhang et al., 2017; Nguyen et al., 2019a), and self-attentions in Transformer (Verga et al., 2018). The syntactic information from the dependency trees has also been shown to be useful for the deep learning models for RE (Tai et al., 2015; Xu et al., 2015; Liu et al., 2015; Miwa and Bansal, 2016; Peng et al., 2017; Zhang et al., 2018; Guo et al., 2019; Tran et al., 2019; Song et al., 2019; Veyseh et al., 2019). 
However, these methods tend to poorly generalize to new syntactic structures due to the direct reliance on the syntactic trees (e.g., in different domains) or fail to exploit the syntax-based importance of the words for RE due to the sole focus on edges of the dependency trees (Veyseh et al., 2019). 3 Model The RE problem can be formulated as a multi-class classification problem. Formally, given an input sentence W = w1, w2, . . . , wN where wt is the t-th word in the sentence W of length N, and two entity mentions of interest at indexes s and o (1 ≤ s < o ≤N), our goal is to predict the semantic relation between ws and wo in W. Similar to the previous work on deep learning for RE (Shi et al., 2018; Veyseh et al., 2019), we first transform each word wt into a representation vector xt using the concatenation of the three following vectors: (i) the pre-trained word embeddings of wt, (ii) the position embedding vectors (to encode the relative distances of wt to the two entity mentions of interest ws and wo (i.e., t −s and t −o)), and (iii) the entity type embeddings (i.e., the embeddings of the BIO labels for the words to capture the entity mentions present in X). This word-to-vector transformation converts the input sentence W into a sequence of representation vectors X = x1, x2, . . . , xN to be consumed by the next neural computations of the proposed model. There are three major components in the RE model in this work, namely (1) the CEON-LSTM component (i.e., context-enriched ON-LSTM) to compute the model-based importance scores of the words wt, (2) the syntax-model consistency component to enforce the similarity between the syntaxbased and model-based importance scores, and (3) the similarity component between the representation vectors of the overall sentence and the shortest dependency path. 3.1 CEON-LSTM The goal of this component is to obtain a score for each word wt that indicates the contextual importance of wt with respect to the relation prediction between ws and wo in W. In this section, we first describe the ON-LSTM model to achieve these importance scores (i.e., the model-based scores). A new model (called CEON-LSTM) that integrates the representation of the entire sentence into the cells of ON-LSTM will be presented afterward. ON-LSTM: Long-short Term Memory Networks (LSTM) (Hochreiter and Schmidhuber, 1997) has been widely used in Natural Language Processing (NLP) due to its natural mechanism to obtain the abstract representations for a sequence of input vectors (Nguyen and Nguyen, 2018b, 2019). Given the input representation vector sequence X = x1, x2, . . . , xN, LSTM produces a sequence of hidden vectors H = h1, h2, . . . , hN using the following recurrent functions at the time step (word) wt (assuming the zero vector for h0): ft = σ(Wfxt + Ufht−1 + bf) it = σ(Wixt + Uiht−1 + bi) ot = σ(Woxt + Uoht−1 + bo) ˆct = tanh(Wcxt + Ucht−1 + bc) ct = ft ◦ct−1 + it ◦ˆct ht = ot ◦tanh(ct) (1) where ft, it and ot are called the forget, input and output gates (respectively). In order to compute the importance score for each word wt, ON-LSTM introduce into the mechanism of LSTM two additional gates, i.e., the master forget gate ˆft and the master input gate ˆit (Shen et al., 2019). 
These gates are computed and integrated into the LSTM cell as follows:

\hat{f}_t = \mathrm{cummax}(W_{\hat{f}} x_t + U_{\hat{f}} h_{t-1} + b_{\hat{f}})
\hat{i}_t = 1 - \mathrm{cummax}(W_{\hat{i}} x_t + U_{\hat{i}} h_{t-1} + b_{\hat{i}})
\bar{f}_t = \hat{f}_t \circ (f_t \hat{i}_t + 1 - \hat{i}_t)
\bar{i}_t = \hat{i}_t \circ (i_t \hat{f}_t + 1 - \hat{f}_t)
c_t = \bar{f}_t \circ c_{t-1} + \bar{i}_t \circ \hat{c}_t    (2)

where cummax is an activation function defined as cummax(x) = cumsum(softmax(x)), with cumsum(u_1, u_2, ..., u_n) = (u'_1, u'_2, ..., u'_n) and u'_i = \sum_{j=1..i} u_j. The forget and input gates in LSTM (i.e., ft and it) differ from the master forget and input gates in ON-LSTM (i.e., f̂t and ît) in that the LSTM gates assume that the neurons/dimensions in their hidden vectors are equally important and that these neurons are active at every step (word) in the sentence. This is in contrast to the master gates in ON-LSTM, which impose a hierarchy over the neurons in the hidden vectors and limit the activity of the neurons to only a portion of the words in the sentence (i.e., higher-ranking neurons are active for more words in the sentence). Such hierarchy and activity limitation are achieved via the function cummax(x), which aggregates the softmax output of the input vector x along the dimensions. The output of cummax(x) can be seen as the expectation of some binary vector of the form (0, ..., 0, 1, ..., 1) (i.e., involving two consecutive segments: the 0's segment and the 1's segment). At a given step, the 1's segment in the gate vectors represents the neurons that are activated at that step.

In ON-LSTM, a word wi is more contextually important than another word wj if the master gates for wi have more active neurons than those for wj. Consequently, in order to compute the importance score for the word wt, we can rely on the number of active neurons in the master gates, which can be estimated by the sum of the weights of the neurons in the master gates of ON-LSTM. Following (Shen et al., 2019), we employ the master forget gate of ON-LSTM to compute the importance scores for the words in this work. Specifically, let f̂t = (f̂t1, f̂t2, ..., f̂tD) be the weights for the neurons/dimensions in the master forget gate (i.e., D is the dimension of the gate vectors). The model-based importance score modt for the word wt ∈ W is then obtained by:

mod_t = 1 - \sum_{i=1..D} \hat{f}_{ti}

For convenience, we also use H = h1, h2, ..., hN to denote the hidden vectors returned from the application of ON-LSTM over the input representation vectors X.

Introducing Sentence Context into ON-LSTM: One limitation of the ON-LSTM model is that it relies only on the representation vector of the current word xt and the hidden vector for the left context (encoded in ht−1) to compute the master gate vectors, and hence the model-based importance score, for the word wt. However, this score computation mechanism might not be sufficient for RE, as the importance score for wt might also depend on the context information on the right (e.g., the appearance of some word on the right might make wt less important for the relation prediction between ws and wo). Consequently, in this work, we propose to first obtain a representation vector x′t = g(x1, x2, ..., xN) that has the context information about the entire sentence W (i.e., both the left and right context for the current word wt). Afterward, x′t replaces the input representation vector xt in the computation of the master gates and importance score at step t of ON-LSTM (i.e., in the formulas for f̂t and ît in Equation 2).
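To make Equation 2 and the importance score concrete, here is a minimal sketch of the master-gate computation and the resulting model-based score; the cummax helper, the interface, and all tensor names are our own illustrative choices, not the released implementation.

```python
import torch
import torch.nn as nn

def cummax(x):
    # cummax(x) = cumsum(softmax(x)) along the neuron dimension
    return torch.cumsum(torch.softmax(x, dim=-1), dim=-1)

class MasterGatesSketch(nn.Module):
    """Master forget/input gates of ON-LSTM (Equation 2); illustrative only."""
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.master_f = nn.Linear(input_dim + hidden_dim, hidden_dim)
        self.master_i = nn.Linear(input_dim + hidden_dim, hidden_dim)

    def forward(self, x_t, h_prev, f_t, i_t, c_prev, c_hat):
        # f_t, i_t, c_prev, c_hat come from the standard LSTM gates of Equation 1.
        z = torch.cat([x_t, h_prev], dim=-1)
        mf = cummax(self.master_f(z))             # master forget gate f_hat_t
        mi = 1.0 - cummax(self.master_i(z))       # master input gate i_hat_t
        f_bar = mf * (f_t * mi + 1.0 - mi)        # f_bar_t
        i_bar = mi * (i_t * mf + 1.0 - mf)        # i_bar_t
        c_t = f_bar * c_prev + i_bar * c_hat      # updated cell state
        mod_t = 1.0 - mf.sum(dim=-1)              # paper's mod_t = 1 - sum_i f_hat_{ti}
        return c_t, mod_t
```

In CEON-LSTM, x_t would be replaced by the context-enriched vector x′t when computing these master gates, as described above.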
With this replacement of xt by x′t, the model-based importance score for wt is able to condition on the overall context of the input sentence. In this work, we obtain the representation vector x′t for each step t of ON-LSTM as the weighted sum of the transformed vectors of the input representation sequence x1, x2, ..., xN:

x'_t = \sum_i \alpha_{ti} (W_x x_i + b_x)

The weight αti for the term with xi in this formula is computed by:

\alpha_{ti} = \frac{\exp((W_h h_{t-1} + b_h) \cdot (W_x x_i + b_x))}{\sum_{j=1}^{N} \exp((W_h h_{t-1} + b_h) \cdot (W_x x_j + b_x))}    (3)

where Wh, bh, Wx and bx are learnable parameters. Note that in this formula we use the ON-LSTM hidden vector ht−1 from the previous step as the query vector to compute the attention weight for each word. The rationale is to enrich the attention weights for the current step with the context information from the previous steps (i.e., encoded in ht−1), leading to a contextualized input representation x′t with richer information for the master gate and importance score computations in ON-LSTM. The proposed ON-LSTM with the enriched input vectors x′t is called CEON-LSTM (i.e., Context-Enriched ON-LSTM) in this work.

3.2 Syntax-Model Consistency

As mentioned in the introduction, the role of the model-based importance scores obtained from CEON-LSTM is to serve as a bridge to inject the information from the syntactic structure of W into the representation vectors of the deep learning models for RE. In particular, we first leverage the dependency tree of W to obtain another importance score synt for each word wt ∈ W (i.e., the syntax-based importance score). Similar to the model-based scores, the syntax-based scores are expected to measure the contextual importance of wt with respect to the relation prediction for ws and wo. Afterward, we introduce a constraint to encourage the consistency between the model-based and syntax-based importance scores (i.e., modt and synt) for the words by minimizing the KL divergence Limport between the normalized scores:

\overline{mod}_1, ..., \overline{mod}_N = \mathrm{softmax}(mod_1, ..., mod_N)
\overline{syn}_1, ..., \overline{syn}_N = \mathrm{softmax}(syn_1, ..., syn_N)
L_{import} = -\sum_i \overline{mod}_i \log \frac{\overline{mod}_i}{\overline{syn}_i}    (4)

The intuition is to exploit this consistency to supervise the model-based importance scores from the model with the syntax-based importance scores from the dependency trees. As the model-based importance scores are computed from the master gates with the active and inactive neurons in CEON-LSTM, this supervision allows the syntactic information to interact directly with the internal computation/structure of the cells in CEON-LSTM, potentially generating representation vectors with better syntax-aware information for RE.

To obtain the syntax-based importance scores, we take motivation from the previous work on RE where the shortest dependency paths between the two entity mentions of interest have been shown to capture many important context words for RE. Specifically, for the sentence W, we first retrieve the shortest dependency path DP between the two entity mentions ws and wo, as well as the length T of the longest path between any pair of words in the dependency tree of W. The syntax-based importance score synt for the word wt ∈ W is then computed as the difference between T and the length of the shortest path between wt and some word in DP in the dependency tree (i.e., the words along DP have the score of T). On the one hand, these syntax-based importance scores capture an importance of the words that is customized for the relation prediction between ws and wo.
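The syntax-based scores and the consistency term of Equation 4 can be sketched as below. We use networkx only for illustration to compute tree distances, and the edge-list format, function names, and the exact KL form are assumptions on our part rather than the authors' implementation.

```python
import networkx as nx
import torch

def syntax_importance_scores(edges, num_words, s, o):
    """syn_t scores: words on the shortest dependency path between the two
    entity heads get the maximal score T; others get T minus their tree
    distance to that path (illustrative sketch; edges are (head, dep) index pairs)."""
    tree = nx.Graph(edges)                          # undirected dependency tree
    T = nx.diameter(tree)                           # longest path length in the tree
    dp = set(nx.shortest_path(tree, s, o))          # words on the dependency path DP
    scores = []
    for t in range(num_words):
        dist = min(nx.shortest_path_length(tree, t, w) for w in dp)
        scores.append(float(T - dist))              # words in DP have dist = 0
    return torch.tensor(scores)

def consistency_loss(model_scores, syntax_scores):
    """KL-style consistency between the normalized score distributions (Eq. 4)."""
    p = torch.softmax(model_scores, dim=-1)         # normalized mod_t
    q = torch.softmax(syntax_scores, dim=-1)        # normalized syn_t
    return torch.sum(p * (torch.log(p) - torch.log(q)))
```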
These relation-specific scores are better suited for RE than the direct use of the edges of the dependency trees in (Veyseh et al., 2019), which is agnostic to the entity mentions of interest and fails to encode the importance of the words for RE. On the other hand, the syntax-based importance scores synt represent a relaxed form of the original dependency tree, which might have a better chance of generalizing over different data and domains for RE than the prior work (i.e., the models that directly fit the whole syntactic structures (Zhang et al., 2018) and run the risk of overfitting to the structures in the training data).

3.3 Sentence-Dependency Path Similarity

In this component, we seek to further improve the representation vectors in the proposed deep learning model for RE by introducing a novel constraint (i.e., an inductive bias) to maximize the similarity between the representation vectors for the overall input sentence W and the words along the shortest dependency path DP. The rationale for this bias is presented in the introduction. In order to implement this idea, we first obtain the representation vectors RW and RDP for the sentence W and the words along DP (respectively) by applying the max-pooling operation over the CEON-LSTM hidden vectors h1, h2, ..., hN for the words in W and DP:

R_W = \mathrm{max\text{-}pooling}_{w_i \in W} \{h_i\}, \quad R_{DP} = \mathrm{max\text{-}pooling}_{w_i \in DP} \{h_i\}

In the next step, we promote the similarity between RW and RDP by explicitly minimizing their negative cosine similarity (we also tried the KL divergence and the mean squared error for this purpose, but cosine similarity achieved better performance), i.e., by adding the following term Lpath into the overall loss function:

L_{path} = 1 - \cos(R_W, R_{DP})    (5)

3.4 Prediction

Finally, in the prediction step, following the prior work (Veyseh et al., 2019), we employ the following vector V as the overall representation vector to predict the relation between ws and wo in W: V = [xs, xo, hs, ho, RW]. Note that V involves information at different abstraction levels of W, i.e., the raw input level with xs and xo, the abstract representation level with hs and ho from CEON-LSTM, and the overall sentence vector RW. In our model, V is fed into a feed-forward neural network with a softmax layer at the end to estimate the probability distribution P(·|W, ws, wo) over the possible relations for W. The negative log-likelihood function then serves as the classification loss of the model: Llabel = −log P(y|W, ws, wo), where y is the gold relation label for ws and wo in W. Eventually, the overall loss function of the model in this work is:

L = L_{label} + \alpha L_{import} + \beta L_{path}    (6)

where α and β are trade-off parameters. The model is trained with shuffled mini-batching.

4 Experiments

4.1 Datasets and Hyper-parameters

We evaluate the models in this work on three benchmark datasets, i.e., ACE 2005, SPOUSE, and SciERC. For ACE 2005, similar to the previous work (Nguyen and Grishman, 2016; Fu et al., 2017; Shi et al., 2018; Veyseh et al., 2019), we use the dataset preprocessed and provided by (Yu et al., 2015) for compatible comparison. There are 6 different domains in this dataset (i.e., bc, bn, cts, nw, un, and wl), covering text from news, conversations and web blogs. Following the prior work, the union of the domains bn and nw (called news) is used as the training data (called the source domain); half of the documents in bc is reserved for the development data, and the remainder (cts, wl and the other half of bc) serves as the test data (called the target domains).
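Pulling Sections 3.3 and 3.4 together before moving to the experiments, the sketch below assembles the prediction vector V and the combined objective of Equation 6; shapes, names, and the classifier interface are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def path_similarity_loss(hidden, dp_indices):
    """L_path = 1 - cos(R_W, R_DP) with max-pooled representations (Eq. 5)."""
    r_w = hidden.max(dim=0).values                  # pool over all words in W
    r_dp = hidden[dp_indices].max(dim=0).values     # pool over words on DP
    return 1.0 - F.cosine_similarity(r_w, r_dp, dim=0)

def prediction_vector(x, hidden, s, o):
    """V = [x_s, x_o, h_s, h_o, R_W], fed to the feed-forward classifier."""
    r_w = hidden.max(dim=0).values
    return torch.cat([x[s], x[o], hidden[s], hidden[o], r_w], dim=-1)

def overall_loss(logits, gold_label, l_import, l_path, alpha=1.0, beta=1.0):
    """L = L_label + alpha * L_import + beta * L_path (Equation 6)."""
    l_label = F.cross_entropy(logits.unsqueeze(0), gold_label.view(1))
    return l_label + alpha * l_import + beta * l_path
```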
This separation of the ACE 2005 data facilitates the evaluation of the cross-domain generalization of the models due to the domain difference between the training and test data. The SPOUSE dataset was recently introduced by (Hancock et al., 2018), involving 22,195 sentences for the training data, 2,796 sentences for the validation data, and 2,697 sentences for the test data. Each sentence in this dataset contains two marked person names (i.e., the entity mentions), and the goal is to identify whether the two people mentioned in the sentence are spouses. Finally, the SciERC dataset (Luan et al., 2018) annotates 500 scientific abstracts for the entity mentions along with the coreferences and relations between them. For RE, this dataset provides 3,219 sentences in the training data, 455 sentences in the validation data and 974 sentences in the test data.

We fine-tune the hyper-parameters for the models in this work on the validation data of the ACE 2005 dataset. The best parameters suggested by this process include: 30 dimensions for the position embeddings and entity type embeddings, 200 hidden units for the CEON-LSTM model and all the other hidden vectors in the model (i.e., the hidden vectors in the final feed-forward neural network (with 2 layers) and the intermediate vectors in the weighted sum for x′t), 1.0 for both loss trade-off parameters α and β, and 0.001 for the initial learning rate with the Adam optimizer. The batch size is set to 50. Finally, we use either the uncontextualized word2vec word embeddings (with 300 dimensions) or the hidden vectors in the last layer of the BERT-base model (with 768 dimensions) (Devlin et al., 2019) to obtain the pre-trained word embeddings for the sentences. We find it better to keep BERT fixed in the experiments. Note that besides this section, we provide some additional analysis of the models in the Appendix.

4.2 Comparison with the state of the art

We first compare the proposed model (called CEON-LSTM) with the baselines on the popular ACE 2005 dataset. In particular, the following four groups of RE models from the prior work on the ACE 2005 dataset are chosen for comparison:

(i) Feature-based models: These models hand-design linguistic features for RE, i.e., FCM, Hybrid FCM, LRFCM, and SVM (Yu et al., 2015; Hendrickx et al., 2010).

(ii) Deep sequence-based models: These models employ deep learning architectures based on the sequential order of the words in the sentences for RE, i.e., log-linear, CNN, Bi-GRU, Forward GRU, Backward GRU (Nguyen and Grishman, 2016), and CNN+DANN (Fu et al., 2017).

(iii) Adversarial learning model: This model, called GSN, attempts to learn domain-independent features for RE (Shi et al., 2018).

(iv) Deep structure-based models: These models use dependency trees either as input features or as the graphs that structure the network architectures of the deep learning models. The state-of-the-art models of this type include: AGGCN (Attention Guided GCN) (Guo et al., 2019), SACNN (Segment-level Attention-based CNN) (Tran et al., 2019) and DRPC (the Dependency Relation Prediction and Control model) (Veyseh et al., 2019). DRPC has the best reported performance on ACE 2005.
Note that we obtain the performance of these models on the considered datasets using the actual implementations released by the original papers. Most of the prior RE work on the ACE 2005 dataset uses uncontextualized word embeddings (i.e., word2vec) for the initial word representation vectors. In order to achieve a fair comparison with the baselines, we first show the performance of the models (i.e., the F1 scores) on the ACE 2005 test datasets when word2vec is employed for the pre-trained word embeddings in Table 1.

System               bc     cts    wl     Avg.
FCM (2015)           61.90  52.93  50.36  55.06
Hybrid FCM (2015)    63.48  56.12  55.17  58.25
LRFCM (2015)         59.40  –      –      –
Log-linear (2016)    57.83  53.14  53.06  54.67
CNN (2016)           63.26  55.63  53.91  57.60
Bi-GRU (2016)        63.07  56.47  53.65  57.73
Forward GRU (2016)   61.44  54.93  55.10  57.15
Backward GRU (2016)  60.82  56.03  51.78  56.21
CNN+DANN (2017)      65.16  –      –      –
GSN (2018)           66.38  57.92  56.84  60.38
C-GCN (2018)         65.55  62.98  55.91  61.48
AGGCN (2019)         63.47  59.70  56.50  59.89
SACNN (2019)         65.06  61.71  59.82  62.20
DRPC (2019)          67.30  64.28  60.19  63.92
CEON-LSTM (ours)     68.55  65.42  61.93  65.30
Table 1: F1 scores of the models on the ACE 2005 test datasets using the word2vec word embeddings.

The first observation from the table is that the deep structure-based models (e.g., C-GCN, DRPC) are generally better than the deep sequence-based models (e.g., CNN, Bi-GRU) and the feature-based models, with large performance gaps. This demonstrates the benefits of the syntactic structures, which provide useful information to improve the performance of the deep learning models for RE. We thus focus on these deep structure-based models in the following experiments. Among all the models, we see that the proposed model CEON-LSTM is significantly better than all the baseline models over the different test domains/datasets. In particular, CEON-LSTM is 1.38% and 3.1% better than DRPC and SACNN (respectively) on the average F1 scores over the different test datasets. These performance improvements are significant with p < 0.01 and clearly demonstrate the effectiveness of the proposed CEON-LSTM model for RE.

In order to further compare CEON-LSTM with the baselines, Table 2 presents the performance of the models when the words are represented by contextualized word embeddings (i.e., BERT). For this case, we also report the performance of the recent BERT-based model for RE on the ACE 2005 dataset, Entity-Aware BERT (EA-BERT) (Wang et al., 2019).

System             bc     cts    wl     Avg.
C-GCN (2018)       67.02  64.4   58.92  63.44
AGGCN (2019)       65.29  63.65  60.35  63.09
SACNN (2019)       68.52  64.21  62.19  64.97
DRPC (2019)        69.41  65.82  61.65  65.62
EA-BERT (2019)     69.25  61.70  58.48  63.14
CEON-LSTM (ours)   71.58  66.92  65.17  67.89
Table 2: F1 scores of the models on the ACE 2005 test datasets using the BERT word embeddings.

System                         SPOUSE  SciERC
C-GCN (word2vec) (2018)        73.52   65.30
AGGCN (word2vec) (2019)        73.51   67.91
SACNN (word2vec) (2019)        72.88   67.54
DRPC (word2vec) (2019)         74.66   68.18
CEON-LSTM (word2vec) (ours)    76.43   69.92
C-GCN (BERT) (2018)            75.18   74.11
AGGCN (BERT) (2019)            76.91   75.77
SACNN (BERT) (2019)            77.98   76.42
DRPC (BERT) (2019)             78.93   77.21
CEON-LSTM (BERT) (ours)        81.01   78.24
Table 3: F1 scores of the models on the SPOUSE and SciERC datasets.

Comparing the models in Table 2 with their counterparts in Table 1, it is clear that the contextualized word embeddings can significantly improve the deep structure-based models for RE.
More importantly, similar to the case with word2vec, we see that the proposed model CEON-LSTM still significantly outperforms all the baseline models with large performance gaps and p < 0.01, further testifying to the benefits of the CEON-LSTM model in this work.

Finally, in order to demonstrate the generalization of the proposed model to the other datasets, we show the performance of the models on the two other datasets in this work (i.e., SPOUSE and SciERC), using either word2vec or BERT as the word embeddings, in Table 3. The results clearly confirm the effectiveness of CEON-LSTM as it is significantly better than all the other models over the different datasets and word embedding settings.

4.3 Ablation Study

The Effect of the Model Components: There are three major components in the proposed model: (1) the introduction of the overall sentence representation x′t into the ON-LSTM cells (called SCG – Sentence Context for Gates), (2) the consistency constraint for the syntax-based and model-based importance scores (called SMC – Syntax-Model Consistency), and (3) the similarity constraint for the representation vectors of the overall sentence and the shortest dependency path (called SDPS – Sentence-Dependency Path Similarity). In order to evaluate the contribution of these components to the overall model CEON-LSTM, we incrementally remove them from CEON-LSTM and evaluate the performance of the remaining model. Table 4 reports the performance of the models on the ACE 2005 development dataset.

System                P      R      F1
CEON-LSTM (Full)      74.51  67.29  71.08
- SCG                 74.00  66.98  70.45
- SMC                 72.87  66.85  69.89
- SDPS                73.02  66.00  69.18
- SCG - SMC           71.52  64.62  68.08
- SCG - SDPS          70.33  64.22  67.17
- SMC - SDPS          71.02  63.95  67.58
- SCG - SMC - SDPS    70.51  63.01  66.98
Table 4: Ablation study on the development set of ACE 2005. The components listed in each row are removed from the overall model.

It is clear from the table that all the components are necessary for the proposed model, as excluding any of them hurts the performance significantly. It is also evident that removing more components results in larger performance drops, thus demonstrating the complementary nature of the three proposed components in this work.

The Variants of CEON-LSTM: We study several variants of SCG, SMC, and SDPS in CEON-LSTM to demonstrate the effectiveness of the designed mechanisms. In particular, we consider the following alternatives to CEON-LSTM:

(i) Bi-ON-LSTM: Instead of employing the attention-based representation vectors x′t to capture the context of the entire input sentence for the model-based importance scores in SCG, we run two unidirectional ON-LSTM models (i.e., a forward and a backward ON-LSTM) to compute forward and backward importance scores for each word in W. The final model-based importance score for each word is then the average of the corresponding forward and backward scores.

(ii) SA-ON-LSTM: In this method, instead of using the hidden vector ht−1 as the query vector to compute the attention weight αti in Equation 3 for SCG, we utilize the input representation vector xt of wt as the query vector (i.e., we replace ht−1 with xt in Equation 3). Consequently, SA-ON-LSTM is essentially a composed model where we first run the self-attention (SA) model (Vaswani et al., 2017) over X. The results are then fed into ON-LSTM to obtain the model-based importance scores modt.
(iii) CE-LSTM: This aims to explore the effectiveness of ON-LSTM for our model. In CE-LSTM, we replace the ON-LSTM network with the usual LSTM model in CEON-LSTM. The SMC component is not included in this case as the LSTM model cannot infer the importance scores.

(iv) EP-ON-LSTM: Before this work, the DRPC model in (Veyseh et al., 2019) had the state of the art on ACE 2005. Both DRPC and CEON-LSTM apply a more general representation of the dependency trees in a deep learning model (i.e., avoiding the direct use of the original trees to improve generalization). To illustrate the benefit of the importance-score representation in SMC, EP-ON-LSTM replaces the importance-score representation of the dependency trees in CEON-LSTM with the dependency edge representation in DRPC. In particular, we replace the term Limport in the overall loss function (i.e., Equation 6) with the dependency edge prediction loss (using the ON-LSTM hidden vectors) in DRPC for EP-ON-LSTM.

(v) SP-CEON-LSTM: This model removes the SDPS component and includes the representation vector of the dependency path DP (i.e., RDP) in the final representation V for relation prediction. We consider both retaining and excluding the sentence representation RW in V in this case. This model seeks to show that using RDP to encourage similarity with RW is more effective than employing RDP directly in V.

Table 5 reports the performance of these CEON-LSTM variants on the ACE 2005 development dataset.

System                        P      R      F1
CEON-LSTM (proposed)          74.51  67.29  71.08
Bi-ON-LSTM                    72.65  67.17  69.28
SA-ON-LSTM                    73.21  67.31  70.13
CE-LSTM                       71.58  64.19  67.92
EP-ON-LSTM                    71.03  65.16  68.45
SP-CEON-LSTM (RW in V)        73.58  66.92  70.13
SP-CEON-LSTM (RW not in V)    72.94  65.21  69.51
Table 5: Models' performance on the development dataset of ACE 2005.

As we can see from the table, all the considered variants have significantly worse performance than CEON-LSTM (with p < 0.005). This clearly helps to justify the designs of the SCG, SMC and SDPS components for CEON-LSTM in this work.

Baseline for the Model-Based Importance Scores: One of the contributions of our work is to employ the gates in the cells of ON-LSTM to obtain the model-based importance scores, which are then used to promote consistency with the syntax-based importance scores (i.e., in the SMC component). In order to demonstrate the effectiveness of the master cell gates for obtaining the model-based importance scores, we evaluate a typical baseline where the model-based importance score modi for wi ∈ W is computed directly from the hidden vector hi of CEON-LSTM (i.e., by feeding hi into a feed-forward neural network with a sigmoid activation function in the end). The model-based importance scores obtained in this way then replace the importance scores from the cell gates and are used in the SMC component of CEON-LSTM in the usual way (i.e., via the KL divergence in Limport). Note that we also tried alternatives to the KL divergence in Limport (i.e., the mean squared error and the cosine similarity between the syntax-based and model-based importance scores), but the KL divergence produced the best results for both CEON-LSTM and HIS-CEON-LSTM on the development data. The resulting model is called HIS-CEON-LSTM.

System                P      R      F1
CEON-LSTM (proposed)  74.51  67.29  71.08
HIS-CEON-LSTM         72.02  63.97  68.29
Table 6: Models' performance on the development dataset of ACE 2005.

Table 6 reports the performance of HIS-CEON-LSTM and the proposed model CEON-LSTM on the ACE 2005 development dataset.
It is clear from this table that the proposed model CEON-LSTM achieves significantly better performance than HIS-CEON-LSTM (with large performance gap), thus testifying to the importance of the master gates to obtain the model-based importance scores for CEON-LSTM. 5 Conclusion We introduce a new deep learning model for RE (i.e., CEON-LSTM) that features three major proposals. First, we represent the dependency trees via the syntax-based importance scores for the words in the input sentences for RE. Second, we propose to incorporate the overall sentence representation vectors into the cells of ON-LSTM, allowing it to compute the model-based importance scores more effectively. We also devise a novel mechanism to project the syntactic information into the computation of ON-LSTM via promoting the consistency between the syntax-based and model-based importance scores. Finally, we present a novel inductive bias for the deep learning models that exploits the similarity of the representation vectors for the whole input sentences and the shortest dependency paths between the two entity mentions for RE. Extensive experiments are conducted to demonstrate the benefits of the proposed model. We achieve the state-of-the-art performance on three datasets for RE. In the future, we plan to apply CEON-LSTM to other related NLP tasks (e.g., Event Extraction, Semantic Role Labeling) (Nguyen et al., 2016a; Nguyen and Grishman, 2018a). Acknowledgments This research has been supported in part by Vingroup Innovation Foundation (VINIF) in project code VINIF.2019.DA18, the NSF grant CNS1747798 to the IUCRC Center for Big Learning, and a gift from Adobe Research. This research is also based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 201919051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, the Department of Defense, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations. References Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation extraction. In EMNLP. Yee S. Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In COLING. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Lisheng Fu, Thien Huu Nguyen, Bonan Min, and Ralph Grishman. 2017. Domain adaptation for relation extraction with domain adversarial neural network. In IJCNLP. 8030 Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In ACL. Braden Hancock, Martin Bringmann, Paroma Varma, Percy Liang, Stephanie Wang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In ACL. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. 
Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of SEW2009. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In ACL. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In EMNLP. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. ACL. Minh Nguyen and Thien Huu Nguyen. 2018b. Who is killed by police: Introducing supervised attention for hierarchical lstms. In COLING. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016a. Joint event extraction via recurrent neural networks. In NAACL. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In ACL. Thien Huu Nguyen and Ralph Grishman. 2015a. Relation extraction: Perspective from convolutional neural networks. In Proceedings of the 1st NAACL Workshop on Vector Space Modeling for NLP (VSM). Thien Huu Nguyen and Ralph Grishman. 2016. Combining neural networks and log-linear models to improve relation extraction. Proceedings of IJCAI Workshop on Deep Learning for Artificial Intelligence. Thien Huu Nguyen and Ralph Grishman. 2018a. Graph convolutional networks with argument-aware pooling for event detection. In AAAI. Thien Huu Nguyen, Barbara Plank, and Ralph Grishman. 2015c. Semantic representations for domain adaptation: A case study on the tree kernel-based method for relation extraction. In ACL-IJCNLP. Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events. In AAAI. Tuan Ngo Nguyen, Franck Dernoncourt, and Thien Huu Nguyen. 2019a. On the effectiveness of the pooling methods for biomedical relation extraction with deep learning. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019). Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics, 5:101–115. Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In ACL. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In ICLR. Ge Shi, Chong Feng, Lifu Huang, Boliang Zhang, Heng Ji, Lejian Liao, and Heyan Huang. 2018. Genre separation network with adversarial training for cross-genre relation extraction. In EMNLP. Linfeng Song, Yue Zhang, Daniel Gildea, Mo Yu, Zhiguo Wang, and jinsong su. 2019. Leveraging dependency forest for neural medical relation extraction. In EMNLP-IJCNLP. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In ACL. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL. Van-Hien Tran, Van-Thuy Phi, Hiroyuki Shindo, and Yuji Matsumoto. 2019. Relation classification using segment-level attention-based cnn and dependencybased rnn. In NAACL-HLT. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In EMNLP. Amir Pouran Ben Veyseh, Thien Huu Nguyen, and Dejing Dou. 2019. Improving cross-domain performance for relation extraction via dependency prediction and information flow control. In IJCAI. 8031 Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar. 2019. Extracting multiple-relations in one-pass with pre-trained transformers. In ACL. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In EMNLP. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In EMNLP. Mo Yu, Matthew R Gormley, and Mark Dredze. 2015. Combining word embeddings and feature embeddings for fine-grained relation extraction. In NAACL-HLT. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3:1083–1106. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In COLING. Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In EMNLP. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In EMNLP. Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In ACL. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In ACL. 8032 A Analysis In order to provide more insights into the performance of the proposed model, we analyze examples in the test data that can be predicted correctly with the proposed model and incorrectly with the baselines. For a baseline model M (e.g., GCN, DRPC), we call the test examples that cannot be recognized by M but can be successfully predicted by the proposed model the M-failure examples. Based on our analysis, the GCN-failure examples tend to involve the syntactic/dependency structures that does not appear or are not well represented in the training data. Some examples for the GCNfailure examples are shown in Table 7. On the one hand, as GCN is directly dependent on the syntactic structures of the input sentences, it would not be able to learn effective representations for the sentences with new structures in the GCN-failure examples for RE. On the other hand, as CEONLSTM only exploits a relaxed general form of the tree structures (i.e., the importance scores of the words), it will be able to generalize better to the new structures in the GCN-failure examples where the general tree form is still helpful to induce effective representations for RE. For the DRPC-failure examples (their examples are presented in Table 8), we find that these examples often involve the two entity mentions of interest with long distance from each other in the input sentences. 
For these examples, the dependency paths between the two entity mentions tend to be very helpful or even crucial for RE, as they can capture the important context words (thus eliminating the irrelevant ones). This allows the models to learn effective representations to correctly predict the relations in the sentences for RE. As DRPC only retains the dependency edges of the dependency trees separately (i.e., the local tree representations), it cannot directly capture such dependency paths, thereby failing to predict the relations for the DRPC-failure examples with long distances between the entities. This is in contrast to CEON-LSTM, which exploits a global representation of the trees through the importance scores based on the distances of the words to the dependency paths. As the dependency paths can still be inferred from this global representation, CEON-LSTM can benefit from this information to successfully perform RE for the sentences in the DRPC-failure examples.

Table 7: The GCN-failure examples. The two entity mentions of interest are shown in bold in the sentences.
- "Some Arab countries also want to play a role in the stability operation in Iraq but are reluctant to send troops because of political, religious and ethnic considerations, the official said." (Relation: ORG-AFF)
- "Some suggested that Russian President Vladimir Putin will now be scrambling to contain the damage to his once-budding friendship with US President George W. Bush because he was poorly advised by his intelligence and defense aides." (Relation: PER-SOC)
- "Other countries including the Philippines, South Korea, Qatar and Australia agreed to send other help such as field hospitals, engineers, explosive ordnance disposal teams or nuclear, biological and chemical weapons experts." (Relation: PART-WHOLE)

Table 8: The DRPC-failure examples. The two entity mentions of interest are shown in bold in the sentences.
- "US diplomats have hinted in recent weeks that Washington's anger with European resistance to the campaign was focused more on Paris – and to a lesser extent Berlin – than it was with Moscow." (Relation: PART-WHOLE)
- "In Montreal, “Stop the War”, a coalition of more than 190 groups, said as many as 200,000 people turned out, though police refused to give a figure." (Relation: PHYS)
- "Although the crossing has, in principle, been open for movement between the two territories – while being frequently closed by Israeli for reasons rarely explained – the Palestinian section has been manned by Israel for more than two years." (Relation: ART)
2020
715
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8033–8044 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8033 From English to Code-Switching: Transfer Learning with Strong Morphological Clues Gustavo Aguilar and Thamar Solorio Department of Computer Science University of Houston Houston, TX 77204-3010 {gaguilaralas, tsolorio}@uh.edu Abstract Linguistic Code-switching (CS) is still an understudied phenomenon in natural language processing. The NLP community has mostly focused on monolingual and multi-lingual scenarios, but little attention has been given to CS in particular. This is partly because of the lack of resources and annotated data, despite its increasing occurrence in social media platforms. In this paper, we aim at adapting monolingual models to code-switched text in various tasks. Specifically, we transfer English knowledge from a pre-trained ELMo model to different code-switched language pairs (i.e., NepaliEnglish, Spanish-English, and Hindi-English) using the task of language identification. Our method, CS-ELMo, is an extension of ELMo with a simple yet effective position-aware attention mechanism inside its character convolutions. We show the effectiveness of this transfer learning step by outperforming multilingual BERT and homologous CS-unaware ELMo models and establishing a new state of the art in CS tasks, such as NER and POS tagging. Our technique can be expanded to more English-paired code-switched languages, providing more resources to the CS community. 1 Introduction Although linguistic code-switching (CS) is a common phenomenon among multilingual speakers, it is still considered an understudied area in natural language processing. The lack of annotated data combined with the high diversity of languages in which this phenomenon can occur makes it difficult to strive for progress in CS-related tasks. Even though CS is largely captured in social media platforms, it is still expensive to annotate a sufficient amount of data for many tasks and languages. Additionally, not all the languages have the same incidence and predominance, making annotations impractical and expensive for every combination Hindi-English Tweet Original: Keep calm and keep kaam se kaam !!!other #office #tgif #nametag #buddhane #SouvenirFromManali #keepcalm English: Keep calm and mind your own business !!! Nepali-English Tweet Original: Youtubene ma live re ,other chalcha ki vanni aash garam !other Optimistic .other English: They said Youtube live, let’s hope it works! Optimistic. Spanish-English Tweet Original: @MROlvera06other @T11gReother go too cavendersne y tambien ve a @ElToroBootsne other English: @MROlvera06 @T11gRe go to cavenders and also go to @ElToroBoots Figure 1: Examples of code-switched tweets and their translations from the CS LID corpora for HindiEnglish, Nepali-English and Spanish-English. The LID labels ne and other in subscripts refer to named entities and punctuation, emojis or usernames, respectively (they are part of the LID tagset). English text appears in italics and other languages are underlined. of languages. Nevertheless, code-switching often occurs in language pairs that include English (see examples in Figure 1). These aspects lead us to explore approaches where English pre-trained models can be leveraged and tailored to perform well on code-switching settings. 
In this paper, we study the CS phenomenon using English as a starting language to adapt our models to multiple code-switched languages, such as Nepali-English, Hindi-English and SpanishEnglish. In the first part, we focus on the task of language identification (LID) at the token level using ELMo (Peters et al., 2018) as our reference for English knowledge. Our hypothesis is that English pre-trained models should be able to recognize whether a word belongs to English or not when such models are fine-tuned with codeswitched text. To accomplish that, we introduce 8034 CS-ELMo, an extended version of ELMo that contains a position-aware hierarchical attention mechanism over ELMo’s character n-gram representations. These enhanced representations allow the model to see the location where particular n-grams occur within a word (e.g., affixes or lemmas) and to associate such behaviors with one language or another.1 With the help of this mechanism, our models consistently outperform the state of the art on LID for Nepali-English (Solorio et al., 2014), Spanish-English (Molina et al., 2016), and HindiEnglish (Mave et al., 2018). Moreover, we conduct experiments that emphasize the importance of the position-aware hierarchical attention and the different effects that it can have based on the similarities of the code-switched languages. In the second part, we demonstrate the effectiveness of our CS-ELMo models by further fine-tuning them on tasks such as NER and POS tagging. Specifically, we show that the resulting models significantly outperform multilingual BERT and their homologous ELMo models directly trained for NER and POS tagging. Our models establish a new state of the art for Hindi-English POS tagging (Singh et al., 2018) and Spanish-English NER (Aguilar et al., 2018). Our contributions can be summarized as follows: 1) we use transfer learning from models trained on a high-resource language (i.e., English) and effectively adapt them to the code-switching setting for multiple language pairs on the task of language identification; 2) we show the effectiveness of transferring a model trained for LID to downstream code-switching NLP tasks, such as NER and POS tagging, by establishing a new state of the art; 3) we provide empirical evidence on the importance of the enhanced character n-gram mechanism, which aligns with the intuition of strong morphological clues in the core of ELMo (i.e., its convolutional layers); and 4) our CS-ELMo model is self-contained, which allows us to release it for other researchers to explore and replicate this technique on other code-switched languages.2 2 Related Work Transfer learning has become more practical in the last years, making possible to apply very large neural networks to tasks where annotated data is limited (Howard and Ruder, 2018; Peters et al., 1Note that there are more than two labels in the LID tagset, as explained in Section 4. 2http://github.com/RiTUAL-UH/cs_elmo 2018; Devlin et al., 2019). CS-related tasks are good candidates for such applications, since they are usually framed as low-resource problems. However, previous research on sequence labeling for code-switching mainly focused on traditional ML techniques because they performed better than deep learning models trained from scratch on limited data (Yirmibes¸o˘glu and Eryi˘git, 2018; AlBadrashiny and Diab, 2016). 
Nonetheless, some researchers have recently shown promising results by using pre-trained monolingual embeddings for tasks such as NER (Trivedi et al., 2018; Winata et al., 2018) and POS tagging (Soto and Hirschberg, 2018; Ball and Garrette, 2018). Other efforts include the use of multilingual sub-word embeddings like fastText (Bojanowski et al., 2017) for LID (Mave et al., 2018), and cross-lingual sentence embeddings for text classification like LASER (Schwenk, 2018; Schwenk and Li, 2018; Schwenk and Douze, 2017), which is capable of handling code-switched sentences. These results show the potential of pre-trained knowledge and they motivate our efforts to further explore transfer learning in code-switching settings. Our work is based on ELMo (Peters et al., 2018), a large pre-trained language model that has not been applied to CS tasks before. We also use attention (Bahdanau et al., 2015) within ELMo’s convolutions to adapt it to code-switched text. Even though attention is an effective and successful mechanism in other NLP tasks, the code-switching literature barely covers such technique (Sitaram et al., 2019). Wang et al. (2018) use a different attention method for NER, which is based on a gated cell that learns to choose appropriate monolingual embeddings according to the input text. Recently, Winata et al. (2019) proposed multilingual meta embeddings (MME) combined with self-attention (Vaswani et al., 2017). Their method establishes a state of the art on Spanish-English NER by heavily relying on monolingual embeddings for every language in the code-switched text. Our model outperforms theirs by only fine-tuning a generic CS-aware model, without relying on task-specific designs. Another contribution of our work are position embeddings, which have not been considered for code-switching either. These embeddings, combined with CNNs, have proved useful in computer vision (Gehring et al., 2017); they help to localize non-spatial features extracted by convolutional networks within an image. We apply the same prin8035 ciple to code-switching: we argue that character n-grams without position information may not be enough for a model to learn the actual morphological aspects of the languages (e.g., affixes or lemmas). We empirically validate those aspects and discuss the incidence of such mechanism in our experiments. 3 Methodology ELMo is a character-based language model that provides deep contextualized word representations (Peters et al., 2018). We choose ELMo for this study for the following reasons: 1) it has been trained on a large amount of English data as a general-purpose language model and this aligns with the idea of having English knowledge as starting point; 2) it extracts morphological information out of character sequences, which is essential for our case since certain character n-grams can reveal whether a word belongs to one language or another; and 3) it generates powerful word representations that account for multiple meanings depending on the context. Nevertheless, some aspects of the standard ELMo architecture could be improved to take into account more linguistic properties. In Section 3.1, we discuss these aspects and propose the position-aware hierarchical attention mechanism inside ELMo. In Section 3.2 and Section 3.3, we describe our overall sequence labeling model and the training details, respectively. 3.1 Position-Aware Hierarchical Attention ELMo convolves character embeddings in its first layers and uses the resulting convolutions to represent words. 
During this process, the convolutional layers are applied in parallel using different kernel sizes, which can be seen as character n-gram feature extractors of different orders. The feature maps per n-gram order are max-pooled to reduce the dimensionality, and the resulting single vectors per n-gram order are concatenated to form a word representation. While this process has proven effective in practice, we notice the following shortcomings:

1. Convolutional networks do not account for the positions of the character n-grams (i.e., convolutions do not preserve the sequential order), losing linguistic properties such as affixes.

2. ELMo down-samples the outputs of its convolutional layers by max-pooling over the feature maps. However, this operation is not ideal for adapting to new morphological patterns from other languages, as the model tends to discard patterns from languages other than English.

To address these aspects, we introduce CS-ELMo, an extension of ELMo that incorporates a position-aware hierarchical attention mechanism to enhance ELMo's character n-gram representations. This mechanism is composed of three elements: position embeddings, position-aware attention, and hierarchical attention. Figure 2A describes the overall model architecture, and Figure 2B details the components of the enhanced character n-gram mechanism.

Figure 2: A) The left figure shows the overall model architecture, which contains CS-ELMo followed by a BLSTM and CRF, and a secondary task with a softmax layer using a simplified LID label set. The largest box describes the components of CS-ELMo, including the enhanced character n-gram module proposed in this paper. B) The right figure describes in detail the enhanced character n-gram mechanism inside CS-ELMo. The figure shows the convolutions of a word as input and a single vector representation as output.

Position embeddings. Consider the word x of character length l, whose character n-gram vectors are (x_1, x_2, ..., x_{l-j+1}) for an n-gram order j ∈ {1, 2, ..., n} (ELMo has seven character convolutional layers, each with a kernel size from one to seven characters, so n = 7). The n-gram vector x_i ∈ R^c is the output of a character convolutional layer, where c is the number of output channels for that layer. Also, consider n position embedding matrices, one per n-gram order, {E_1, E_2, ..., E_n}, defined as E_j ∈ R^{(k-j+1) × e}, where k is the maximum number of characters in a word (note that l ≤ k), e is the dimension of the embeddings, and j is the specific n-gram order. Then, the position vectors for the sequence x are defined by p = (p_1, p_2, ..., p_{l-j+1}), where p_i ∈ R^e is the i-th vector from the position embedding matrix E_j. We use e = c to facilitate the addition of the position embeddings and the n-gram vectors (ELMo varies the output channels per convolutional layer, so the dimensionality of E_j varies as well). Figure 2B illustrates the position embeddings for bi-grams and tri-grams.

Position-aware attention. Instead of down-sampling with the max-pooling operation, we use an attention mechanism similar to the one introduced by Bahdanau et al. (2015). The idea is to concentrate probability mass over the feature maps that capture the most relevant n-gram information along the word, while also considering positional information. At every individual n-gram order, our attention mechanism uses the following equations:

u_i = v^\top \tanh(W_x x_i + p_i + b_x)    (1)
\alpha_i = \frac{\exp(u_i)}{\sum_{j=1}^{N} \exp(u_j)}, \quad \text{s.t.} \; \sum_i \alpha_i = 1    (2)
z = \sum_i \alpha_i x_i    (3)
where W_x ∈ R^{a×c} is a projection matrix, a is the dimension of the attention space, c is the number of channels for the n-gram order j, and p_i is the position embedding associated with the x_i n-gram vector. v ∈ R^a is the vector that projects from the attention space to the unnormalized scores, and α_i is a scalar that describes the attention probability associated with the x_i n-gram vector. z is the weighted sum of the input character n-gram vectors and the attention probabilities, which is our down-sampled word representation for the n-gram order j. Note that this mechanism is used independently for every order of n-grams, resulting in a set of n vectors {z_1, z_2, ..., z_n} from Equation 3. This allows the model to capture relevant information across individual n-grams before they are combined (i.e., processing all bi-grams, all tri-grams, etc. independently).

Hierarchical attention. With the previous mechanisms we handle the problems mentioned above. That is, we have incorporated positional information as well as an attention mechanism to down-sample the dimensionality. These components retrieve one vector representation per n-gram order per word. While ELMo simply concatenates the n-gram vectors of a word, we experiment with another layer of attention that can prioritize n-gram vectors across all the orders. We use a formulation similar to Equations 1 and 3, except that we do not have p_i, and instead of computing the weighted sum, we concatenate the weighted inputs. This concatenation keeps the original dimensionality expected in the upper layers of ELMo, while it also emphasizes which n-gram order should receive more attention.

3.2 Sequence Tagging

We follow Peters et al. (2018) in using ELMo for sequence labeling. They reported state-of-the-art performance on NER by using ELMo followed by a bidirectional LSTM layer and a linear-chain conditional random field (CRF). We use this architecture as a backbone for our model (see Figure 2A), but we add some modifications. The first modification is the concatenation of static English word embeddings to ELMo's word representation, such as Twitter (Pennington et al., 2014) and fastText (Bojanowski et al., 2017) embeddings, similar to Howard and Ruder (2018) and Mave et al. (2018). The idea is to enrich the context of the words by providing domain-specific embeddings and subword-level embeddings. The second modification is the concatenation of the enhanced character n-gram representation with the input to the CRF layer. This emphasizes the extracted morphological patterns even further, so that they are present at inference time for the task at hand (i.e., not only LID, but also NER and POS tagging). The last modification is the addition of a secondary task on a simplified language identification label scheme (see Section 4 for more details), which only uses the output of the enhanced character n-gram mechanism (see the softmax layer in Figure 2A). The full LID label set uses eight labels (lang1, lang2, ne, mixed, ambiguous, fw, other, and unk), but for the simplified LID label set, we only consider three labels (lang1, lang2 and other) to predict based only on characters.
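As an illustration of Equations 1–3, here is a minimal PyTorch-style sketch of the position-aware attention for a single n-gram order. All names and shapes are ours; in particular, we fold the position embedding into the n-gram vector before the projection (the paper adds p_i inside the tanh), so this is a sketch under that assumption rather than the released CS-ELMo code.

```python
import torch
import torch.nn as nn

class PositionAwareAttention(nn.Module):
    """Down-samples the feature maps of one character n-gram order (Eqs. 1-3)."""
    def __init__(self, channels: int, attn_dim: int, max_positions: int):
        super().__init__()
        self.pos_emb = nn.Embedding(max_positions, channels)  # E_j (with e = c)
        self.proj = nn.Linear(channels, attn_dim)             # W_x, b_x
        self.score = nn.Linear(attn_dim, 1, bias=False)       # v

    def forward(self, ngrams):
        # ngrams: (num_ngrams, channels) -- convolution outputs of one word
        positions = torch.arange(ngrams.size(0), device=ngrams.device)
        u = self.score(torch.tanh(self.proj(ngrams + self.pos_emb(positions))))  # Eq. 1
        alpha = torch.softmax(u.squeeze(-1), dim=0)                              # Eq. 2
        return (alpha.unsqueeze(-1) * ngrams).sum(dim=0)                         # Eq. 3
```

The hierarchical attention described above would then weight the resulting z vectors across n-gram orders before concatenating them.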
Additionally, we force the model to minimize a secondary loss over the simplified LID label set by only using the morphological features from the enhanced character n-gram mechanism (see the softmax layer in Figure 2A). The overall loss L of our model is defined as follows: Ltaskt = −1 N N X i yi log p(yi|Θ) (4) L = Ltask1 + βLtask2 + λ |Θ| X k w2 k (5) where Ltask1 and Ltask2 are the negative loglikelihood losses conditioned by the model parameters Θ as defined in Equation 4. Ltask1 is the loss of the primary task (i.e., LID, NER, or POS tagging), whereas Ltask2 is the loss for the simplified LID task weighted by β to smooth its impact on the model performance. Both losses are the average over N tokens.6 The third term provides ℓ2 regularization, and λ is the penalty weight.7 4 Datasets Language identification. We experiment with code-switched data for Nepali-English, SpanishEnglish, and Hindi-English. The first two datasets were collected from Twitter, and they were introduced at the Computational Approaches to Linguistic Code-Switching (CALCS) workshops in 2014 and 2016 (Solorio et al., 2014; Molina et al., 2016). The Hindi-English dataset contains Twitter and Facebook posts, and it was introduced by Mave et al. (2018). These datasets follow the CALCS label scheme, which has eight labels: lang1 (English), lang2 (Nepali, Spanish, or Hindi), mixed, ambiguous, fw, ne, other, and unk. We show the distribution of lang1 and lang2 in Table 1. Moreover, we add a second set of labels using a simplified LID version of the original CALCS label set. The simplified label set uses lang1, 6While Equation 4 is formulated for a given sentence, in practice N is the number of tokens in a batch of sentences. 7We exclude the CRF parameters in this term. Corpus Split Posts Tokens Lang1 Lang2 Nep-Eng Train 8,494 123,959 38,310 51,689 Dev 1,499 22,097 7,173 9,008 Test 2,874 40,268 12,286 17,216 Spa-Eng Train 11,400 139,539 78,814 33,709 Dev 3,014 33,276 16,821 8,652 Test 10,716 121,446 16,944 77,047 Hin-Eng Train 5,045 100,337 57,695 20,696 Dev 891 16,531 9,468 3,420 Test 1,485 29,854 17,589 5,842 Table 1: The distribution of the LID datasets according to the CALCS LID label set. The label lang1 refers to English and lang2 is either Nepali, Spanish or Hindi depending on the corpus. The full label distribution is in Appendix A. lang2, and other. We use this 3-way tokenlevel labels in the secondary loss of our model where only morphology, without any context, is being exploited. This is because we are interested in predicting whether a word’s morphology is associated to English more than to another language (or vice versa), instead of whether, for example, its morphology describes a named entity (ne). Part-of-speech tagging. Singh et al. (2018) provide 1,489 tweets (33,010 tokens) annotated with POS tags. The labels are annotated using the universal POS tagset proposed by Petrov et al. (2012) with the addition of two labels: PART NEG and PRON WH. This dataset does not provide training, development, or test splits due to the small number of samples. Therefore, we run 5-fold cross validations and report the average scores. Named entity recognition. We use the SpanishEnglish NER corpus introduced in the 2018 CALCS competition (Aguilar et al., 2018), which contains a total of 67,223 tweets with 808,663 tokens. The entity types are person, organization, location, group, title, product, event, time, and other, and the labels follow the BIO scheme. 
We used the fixed training, development, and testing splits provided with the datasets to benchmark our models. Importantly, Hindi and Nepali texts in these datasets appear transliterated using the English alphabet (see Figure 1). The lack of a standardized transliteration process leads code-switchers to employ mostly ad-hoc phonological rules that conveniently use the English alphabet when they write in social media. This behavior makes the automated processing of these datasets more challenging be8038 Exp ID Experiment Nepali-English Spanish-English Hindi-English Dev Test Dev Test Dev Test Approach 1 (Baseline models) Exp 1.1 ELMo 96.192 95.700 95.508 96.363 95.997 96.420 Exp 1.2 ELMo + BLSTM + CRF 96.320 95.882 95.615 96.748 96.545 96.717 Exp 1.3 ML-BERT 95.436 96.571 96.212 96.212 95.924 96.440 Approach 2 (Upon Exp 1.2) Exp 2.1 Attention on each n-gram 96.413 96.771 95.952 96.519 96.579 96.069 Exp 2.2 Position-aware attention on each n-gram 96.540 96.640 95.994 96.791 96.629 96.141 Exp 2.3 Position-aware hierarchical attention 96.582 96.798 96.072 96.692 96.705 96.186 Approach 3 (Upon Exp 2.3) Exp 3.1 Concatenating character n-grams at the top 96.485 96.761 96.033 96.775 96.665 96.188 Exp 3.2 Adding simplified LID (secondary) task 96.612 96.734 96.051 96.932 96.565 96.215 Exp 3.3 Adding static word embeddings 96.879 97.026 96.757 97.532 96.776 97.001 Comparison: Previous best published results Mave et al. (2018) 96.510 97.060 96.6045 96.840 Table 2: The results of incremental experiments on each LID dataset. The scores are calculated using the weighted F-1 metric across the eight LID labels from CALCS. Within each column, the best score in each block is in bold, and the best score for the whole column is underlined. Note that development scores from subsequent experiments (e.g., Exp 2.2 and 2.3) are statistically significant with p < 0.02. Corpus LID System Lang1 Lang2 WA F1 Spa-Eng Al-Badrashiny and Diab 88.6 96.9 95.2 Jain and Bhat 92.3 96.9 96.0 Mave et al. 93.184 98.118 96.840 Ours (Exp 3.3) 94.411 98.532 97.789 Hin-Eng Mave et al. 98.241 95.657 97.596 Ours (Exp 3.3) 98.372 95.750 97.718 Nep-Eng Al-Badrashiny and Diab 97.6 97.0 97.3 Ours (Exp 3.3) 98.124 95.170 97.387 Table 3: Comparison of our best models with the best published scores for language identification. Scores are calculated with the F1 metric, and WA F1 is the weighted average F1 between both languages. cause it excludes potentially available resources in the original scripts of the languages. 5 Experiments We describe our experiments for LID in Section 5.1, including insights of the optimized models. In Section 5.2, the optimized LID models are further fine-tuned on downstream NLP tasks, such as NER and POS tagging, to show the effectiveness of our preliminary CS adaptation step. We test for statistical significance across our incremental experiments following Dror et al. (2018), and we report p-values below 0.02 for LID. We discuss hyperparameters and fine-tuning details in Appendix D. 5.1 Language Identification Approach 1. We establish three strong baselines using a vanilla ELMo (Exp 1.1), ELMo combined with BLSTM and CRF (Exp 1.2) as suggested by Peters et al. (2018), and a multilingual BERT (Exp 1.3) provided by Devlin et al. (2019). We experiment with frozen weights for the core parameters of ELMo and BERT, but we find the best results when the full models are fine-tuned, which we report in Table 2. Approach 2. 
In the second set of experiments, we add the components of our mechanism upon ELMo combined with BLSTM and CRF (Exp 1.2). We start by replacing the max-pooling operation with the attention layer at every individual n-gram order in Exp 2.1. In Exp 2.2, we incorporate the position information. The third experiment, Exp 2.3, adds the hierarchical attention across all n-gram order vectors. It is worth noting that we experiment by accumulating consecutive n-gram orders, and we find that the performance stops increasing when n > 3. Intuitively, this can be caused by the small size of the datasets since n-gram features of greater order are infrequent and would require more data to be trained properly. We apply our mechanism for n-gram orders in the set {1, 2, 3}, which we report in Table 2. Approach 3. For the third set of experiments, we focus on emphasizing the morphological clues extracted by our mechanism (Exp 2.3). First, in Exp 3.1, we concatenate the enhanced character n-grams with their corresponding word representation before feeding the input to the CRF layer. In 8039 POS System Dev F1 Test F1 ML-BERT 86.84 84.70 ELMo + BLSTM + CRF 87.42 88.12 Prev. SOTA (Singh et al., 2018) 90.20 Architecture: CS-ELMo + BLSTM + CRF Exp 4.1: No CS knowledge 87.02 87.96 Exp 4.2: CS knowledge frozen 89.55 89.92 Exp 4.3: CS knowledge trainable 90.37 91.03 Table 4: The F1 scores on POS tagging for the HindiEnglish dataset. CS knowledge means that the CSELMo architecture (see Figure 2A) has been adapted to code-switching by using the LID task. Exp 3.2, we add the secondary task over the previous experiment to force the model to predict the simplified LID labels by only using the morphological clues (i.e., no context is provided). Finally, in Exp 3.3, we add static word embeddings that help the model to handle social media style and domain-specific words. We achieve the best results on Exp 3.3, which outperforms both the baselines and the previous state of the art on the full LID label scheme (see Table 2). However, to compare with other work, we also calculate the average of the weighted F1 scores over the labels lang1 and lang2. Table 3 shows a comparison of our results and the previous state of the art. Note that, for Spanish-English and Hindi-English, the gap of improvement is reasonable, considering that similar gaps in the validation experiments are statistically significant. In contrast, in the case of Nepali-English, we cannot determine whether our improvement is marginal or substantial since the authors only provide one decimal in their scores. Nevertheless, Al-Badrashiny and Diab (2016) use a CRF with hand-crafted features (AlBadrashiny and Diab, 2016), while our approach does not require any feature engineering. 5.2 POS Tagging and NER We use LID to adapt the English pre-trained knowledge of ELMo to the code-switching setting, effectively generating CS-ELMo. Once this is achieved, we fine-tune the model on downstream NLP tasks such as POS tagging and NER. In this section, our goal is to validate whether the CS-ELMo model can improve over vanilla ELMo, multilingual BERT, and the previous state of the art for both tasks. More specifically, we use our best architecture (Exp 3.3) from the LID experiments 1) without the codeswitching adaptation, 2) with the code-switching NER System Dev F1 Test F1 ML-BERT 61.11 64.56 ELMo + BLSTM + CRF 59.91 63.53 Best at CALCS (Trivedi et al., 2018) 63.76 Prev. 
SOTA (Winata et al., 2019) 66.63 Architecture: CS-ELMo + BLSTM + CRF Exp 5.1: No CS knowledge 62.59 66.30 Exp 5.2: CS knowledge frozen 64.39 67.96 Exp 5.3: CS knowledge trainable 64.28 66.84 Table 5: The F1 scores on the Spanish-English NER dataset. CS knowledge means that the CS-ELMo architecture (see Figure 2A) has been adapted to codeswitching by using the LID task. adaptation and only retraining the inference layer, and 3) with the code-switching adaptation and retraining the entire model. POS tagging experiments. Table 4 shows our experiments on POS tagging using the HindiEnglish dataset. When we compare our CS-ELMO + BLSTM + CRF model without CS adaptation (Exp 4.1) against the baseline (ELMo + BLSTM + CRF), the performance remains similar. This suggests that our enhanced n-gram mechanism can be added to ELMo without impacting the performance even if the model has not been adapted to CS. Slightly better performance is achieved when the CS-ELMo has been adapted to code-switching, and only the BLSTM and CRF layers are retrained (Exp 4.2). This result shows the convenience of our model since small improvements can be achieved faster by leveraging the already-learned CS knowledge while avoiding to retrain the entire model. Nevertheless, the best performance is achieved by the adapted CS-ELMO + BLSTM + CRF when retraining the entire model (Exp 4.3). Our results are better than the baselines and the previous state of the art. Interestingly, our model improves over multilingual BERT, which is a powerful and significantly bigger model in terms of parameters. Our intuition is that this is partly due to the word-piece tokenization process combined with the transliteration of Hindi. The fact that we use the multilingual version of BERT does not necessarily help to handle transliterated Hindi, since Hindi is only present in BERT’s vocabulary with the Devanagari script. Indeed, we notice that in some tweets, the original number of tokens was almost doubled by the greedy tokenization process in BERT. This behavior tends to degrade the syntactic and semantic 8040 Figure 3: Visualization of the tri-gram attention weights for the 2016 Spanish-English LID dataset. The boxes contain the tri-grams of the word below them along with the right () or wrong () predictions by the model. information captured in the original sequence of tokens. In contrast, ELMo generates contextualized word representations out of character sequences, which makes the model more suitable to adapt to the transliteration of Hindi. NER experiments. Table 5 contains our experiments on NER using the 2018 CALCS SpanishEnglish dataset. Exp 5.1 shows that the enhanced n-gram mechanism can bring improvements over the ELMo + BLSTM + CRF baseline, even though the CS-ELMo has not been adapted to the codeswitching setting. However, better results are achieved when the CS-ELMo model incorporates the code-switching knowledge in both Exp 5.2 and 5.3. Unlike the POS experiments 4.2 and 4.3, fixing the parameters of CS-ELMo model yields better results than updating them during training. Our intuition is that, in the NER task, the model needs the context of both languages to recognize entities within the sentences, and having the code-switching knowledge fixed becomes beneficial. Also, by freezing the CS-ELMo model, we can accelerate training because there is no backpropagation for the CS-ELMo parameters, which makes our code-switching adapatation very practical for downstream tasks. 6 Analysis Position embeddings. 
Localizing n-grams within a word is an important contribution of our method. We explore this mechanism by using our fine-tuned CS-ELMo to predict the simplified LID labels on the validation set from the secondary task (i.e., the predictions solely rely on morphology) in two scenarios. The first one uses the position embeddings corresponding to the actual place of the character n-gram, whereas the second one chooses position embeddings randomly. We notice a consistent decay in performance across the language pairs, and a variation in the confidence of the predicted classes. The most affected language pair is Spanish-English, with an average difference of 0.18 based on the class probability gaps between both scenarios. In contrast, the probability gaps in Hindi-English and Nepali-English are substantially smaller; their average differences are 0.11 and 0.09, respectively. Position distribution. Considering the previous analysis and the variations in the results, we gather insights of the attention distribution according to their n-gram positions (see position-aware attention in Section 3.1). Although the distribution of the attention weights across n-gram orders mostly remain similar along the positions for all language pairs, Spanish-English has a distinctive concentration of attention at the beginning and end of the words. This behavior can be caused by the differences and similarities between the language pairs. For Spanish-English, the model may rely on inflections of similar words between the languages, such as affixes. On the other hand, transliterated Hindi and Nepali tend to have much less overlap with English words (i.e., words with few characters can overlap with English words), making the distinction more spread across affixes and lemmas. Attention analysis. Figure 3 shows the tri-gram attention weights in the Spanish-English LID dataset. The model is able to pick up affixes that belong to one or the other language. For instance, the tri-gram -ing is commonly found in English at the end of verbs in present progressive, like in the word coming from the figure, but it also appears in Spanish at different places (e.g., ingeniero) making the position information relevant. On the contrary, the tri-grams aha and hah from the figure do not seem to rely on position information because the attention distribution varies along the words. See more examples in Appendix E. 8041 Error analysis. Morphology is very useful for LID, but it is not enough when words have similar spellings between the languages. We inspect the predictions of the model, and find cases where, for example, miserable is gold-labeled as ambiguous but the model predicts a language (see the top-right tweet in Figure 3). Although we find similar cases for Nepali-English and HindiEnglish, it mostly happens for words with few characters (e.g., me, to, use). The model often gets such cases mislabeled due to the common spellings in both languages. Although this should be handled by context, our contribution relies more on morphology than contextualization, which we leave for future work. 7 Conclusion and Future Work We present a transfer learning method from English to code-switched languages using the LID task. Our method enables large pre-trained models, such as ELMo, to be adapted to code-switching settings while taking advantage of the pre-trained knowledge. We establish new state of the art on LID for Nepali-English, Spanish-English, and HindiEnglish. 
Additionally, we show the effectiveness of our CS-ELMo model by further fine-tuning it for NER and POS tagging. We outperform multilingual BERT and homologous ELMo models on Spanish-English NER and Hindi-Enlgish POS tagging. In our ongoing research, we are investigating the expansion of this technique to language pairs where English may not be involved. Acknowledgements This work was supported by the National Science Foundation (NSF) on the grant #1910192. We thank Deepthi Mave for providing general statistics of the code-switching datasets and Mona Diab for insightful discussions on the topic. References Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 138–147, Melbourne, Australia. Association for Computational Linguistics. Mohamed Al-Badrashiny and Mona Diab. 2016. LILI: A Simple Language Independent Approach for Language Identification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1211– 1219, Osaka, Japan. The COLING 2016 Organizing Committee. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Kelsey Ball and Dan Garrette. 2018. Part-of-Speech Tagging for Code-Switched, Transliterated Texts without Explicit Language Identification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3084– 3089, Brussels, Belgium. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics, 5:135–146. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The Hitchhiker’s Guide to Testing Statistical Significance in Natural Language Processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392. Association for Computational Linguistics. Bj¨orn Gamb¨ack and Amitava Das. 2014. On Measuring the Complexity of Code-Mixing. In Proceedings of the 11th International Conference on Natural Language Processing, Goa, India, pages 1–7. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional Sequence to Sequence Learning. CoRR, abs/1705.03122. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Naman Jain and Riyaz Ahmad Bhat. 2014. Language Identification in Code-Switching Scenario. 
In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 87–93, Doha, Qatar. Association for Computational Linguistics. 8042 Deepthi Mave, Suraj Maharjan, and Thamar Solorio. 2018. Language Identification and Analysis of Code-Switched Social Media Text. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 51– 61, Melbourne, Australia. Association for Computational Linguistics. Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas ReyVillamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the Second Shared Task on Language Identification in Code-Switched Data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40–49, Austin, Texas. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A Universal Part-of-Speech Tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 2089–2096, Istanbul, Turkey. European Language Resources Association (ELRA). Holger Schwenk. 2018. Filtering and Mining Parallel Data in a Joint Multilingual Space. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228–234, Melbourne, Australia. Association for Computational Linguistics. Holger Schwenk and Matthijs Douze. 2017. Learning Joint Multilingual Sentence Representations with Neural Machine Translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157–167, Vancouver, Canada. Association for Computational Linguistics. Holger Schwenk and Xian Li. 2018. A Corpus for Multilingual Document Classification in Eight Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Kushagra Singh, Indira Sen, and Ponnurangam Kumaraguru. 2018. A Twitter Corpus for HindiEnglish Code Mixed POS Tagging. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 12– 17, Melbourne, Australia. Association for Computational Linguistics. Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W. Black. 2019. A Survey of Code-switched Speech and Language Processing. CoRR, abs/1904.00784. Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, and Pascale Fung. 2014. Overview for the First Shared Task on Language Identification in Code-Switched Data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62–72, Doha, Qatar. Association for Computational Linguistics. 
Victor Soto and Julia Hirschberg. 2018. Joint Partof-Speech and Language ID Tagging for CodeSwitched Data. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 1–10, Melbourne, Australia. Association for Computational Linguistics. Shashwat Trivedi, Harsh Rangwani, and Anil Kumar Singh. 2018. IIT (BHU) Submission for the ACL Shared Task on Named Entity Recognition on Code-switched Data. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 148–153, Melbourne, Australia. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Changhan Wang, Kyunghyun Cho, and Douwe Kiela. 2018. Code-Switched Named Entity Recognition with Embedding Attention. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 154–158, Melbourne, Australia. Association for Computational Linguistics. Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2019. Learning Multilingual Meta-Embeddings for Code-Switching Named Entity Recognition. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181– 186, Florence, Italy. Association for Computational Linguistics. Genta Indra Winata, Chien-Sheng Wu, Andrea Madotto, and Pascale Fung. 2018. Bilingual Character Representation for Efficiently Addressing Out8043 of-Vocabulary Words in Code-Switching Named Entity Recognition. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 110–114, Melbourne, Australia. Association for Computational Linguistics. Zeynep Yirmibes¸o˘glu and G¨uls¸en Eryi˘git. 2018. Detecting Code-Switching between Turkish-English Language Pair. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy Usergenerated Text, pages 110–115, Brussels, Belgium. Association for Computational Linguistics. Appendix for “From English to Code-Switching: Transfer Learning with Strong Morphological Clues” A Language Identification Distributions Table 6 shows the distribution of the language identification labels across the CALCS datasets. Labels Nep-Eng Spa-Eng Hin-Eng lang1 71,148 112,579 84,752 lang2 64,534 119,408 29,958 other 45,286 55,768 21,725 ne 5,053 5,693 9,657 ambiguous 126 404 13 mixed 177 54 58 fw 0 30 542 unk 0 325 17 Table 6: Label distribution for LID datasets. We notice that the CALCS datasets have monolingual tweets, which we detail at the utterancelevel in Table 7. We use the information in this table to measure the rate of code-switching by using the Code-Mixed Index (CMI) (Gamb¨ack and Das, 2014). The higher the score of the CMI, the more code-switched the text is. We show the CMI scores in Table 8. Labels Nep-Eng Spa-Eng Hin-Eng code-switched 9,868 8,733 3,237 lang1 1,374 8,427 3,842 lang2 1,614 7,273 298 other 11 697 44 Table 7: Utterance level language distribution for language identification datasets. B Parts-of-Speech Label Distribution Table 9 shows the distribution of the POS tags for Hindi-English. This dataset correspond to the POS tagging experiments in Section 5.2. 
Corpus CMI-all CMI-mixed Nepali-English 2014 19.708 25.697 Spanish-English 2016 7.685 22.114 Hindi-English 2018 10.094 23.141 Table 8: Code-Mixing Index (CMI) for the language identification datasets. CMI-all: average over all utterances in the corpus. CMI-mixed: average over only code-switched instances. POS Labels Train Dev Test X 5296 790 1495 VERB 4035 669 1280 NOUN 3511 516 1016 ADP 2037 346 599 PROPN 1996 271 470 ADJ 1070 170 308 PART 1045 145 23 PRON 1013 159 284 DET 799 116 226 ADV 717 100 204 CONJ 571 77 161 PART NEG 333 43 92 PRON WH 294 39 88 NUM 276 35 80 Table 9: The POS tag distribution for Hindi-English. C Named Entity Recognition Label Distribution Table 10 shows the distribution of the NER labels for Spanish-English. This dataset corresponds to the NER experiments in Section 5.2. NER Classes Train Dev Test person 6,226 95 1,888 location 4,323 16 803 organization 1,381 10 307 group 1,024 5 153 title 1,980 50 542 product 1,885 21 481 event 557 6 99 time 786 9 197 other 382 7 62 NE Tokens 18,544 219 4,532 O Tokens 614,013 9,364 178,479 Tweets 50,757 832 15,634 Table 10: The distribution of labels for the SpanishEnglish NER dataset from CALCS 2018. D Hyperparameters and Fine-tuning We experiment with our LID models using Adam optimizer with a learning rate of 0.001 and a plateau learning rate scheduler with patience of 8044 Figure 4: Visualization of the attention weights at the tri-gram level for the Hindi-English 2018 dataset on the LID task. The boxes contain the tri-grams of the word below them. We also provide the predicted label by the model, and whether it was correct or wrong. 5 epochs based on the validation loss. We train our LID models using this setting for 50 epochs. For the last block of experiments in Table 2, we use a progressive fine-tuning process described below. Fine-tuning. We fine-tune the model by progressively updating the parameters from the top to the bottom layers of the model. This avoids losing the pre-trained knowledge from ELMo and smoothly adapts the network to the new languages from the code-switched data. We use the slanted triangular learning rate scheduler with both gradual unfreezing and discriminative fine-tuning over the layers (i.e., different learning rates across layers) proposed by Howard and Ruder (2018). We group the non-ELMo parameters of our model apart from the ELMo parameters. We set the non-ELMo parameters to be the first group of parameters to be tuned (i.e., parameters from enhanced character ngrams, CRF, and BLSTM). Then, we further group the ELMo parameters as follows (top to bottom): 1. the second bidirectional LSTM layer, 2. the first bidirectional LSTM layer, 3. the highway network, 4. the linear projection from flattened convolutions to the token embedding space, 5. all the convolutional layers, and 6) the character embedding weights. Once all the layers have been unfrozen, we update all the parameters together. This technique allows us get the most of our model moving from English to a code-switching setting. We train our fine-tuned models for 200 epochs and a initial learning rate of 0.01 that gets modified during training. Additionally, we use this fine-tuning process for the downstream NLP task presented in the paper (i.e., NER and POS tagging). E Visualization of Attention Weights for Hindi-English Figure 4 shows the attention behavior for tri-grams on the Hindi-English dataset. 
Similar to the Spanish-English cases discussed in the main text, we observe that the model learns tri-grams such as -ing and -ian for English, and iye and isi for Hindi.
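To make the progressive fine-tuning of Appendix D more concrete, the sketch below shows gradual unfreezing with discriminative learning rates; the per-group decay factor of 2.6 (a common ULMFiT choice) and the use of Adam parameter groups are our assumptions, and the slanted triangular scheduler is omitted.

```python
import torch

def build_optimizer(param_groups, base_lr=0.01, decay=2.6):
    # param_groups: lists of parameters ordered from the first group to be
    # tuned (non-ELMo: enhanced n-gram attention, BLSTM, CRF) down to the
    # character embeddings; lower groups receive smaller learning rates.
    groups = [{"params": ps, "lr": base_lr / (decay ** i)}
              for i, ps in enumerate(param_groups)]
    return torch.optim.Adam(groups)

def gradually_unfreeze(param_groups, epoch):
    # Unfreeze one additional group per epoch, top to bottom; once every
    # group is unfrozen, all parameters are updated together.
    for i, ps in enumerate(param_groups):
        for p in ps:
            p.requires_grad_(i <= epoch)
```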
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8045–8056 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8045 Learning Interpretable Relationships between Entities, Relations and Concepts via Bayesian Structure Learning on Open Domain Facts Jingyuan Zhang, Mingming Sun, Yue Feng, Ping Li Cognitive Computing Lab Baidu Research 1195 Bordeaux Dr, Sunnyvale, CA 94089, USA No. 10 Xibeiwang East Road, Beijing 100085, China 10900 NE 8th St, Bellevue, WA 98004, USA {zhangjingyuan03,sunmingming01, v fengyue, liping11}@baidu.com Abstract Concept graphs are created as universal taxonomies for text understanding in the open domain knowledge. The nodes in concept graphs include both entities and concepts. The edges are from entities to concepts, showing that an entity is an instance of a concept. In this paper, we propose the task of learning interpretable relationships from open domain facts to enrich and refine concept graphs. The Bayesian network structures are learned from open domain facts as the interpretable relationships between relations of facts and concepts of entities. We conduct extensive experiments on public English and Chinese datasets. Compared to the state-of-the-art methods, the learned network structures help improving the identification of concepts for entities based on the relations of entities on both English and Chinese datasets. 1 Introduction Concept graphs are created as universal taxonomies for text understanding and reasoning in the open domain knowledge (Dagan et al., 2010; Bowman et al., 2015; Zamir et al., 2018; Huang et al., 2019; Hao et al., 2019; Jiang et al., 2019). The nodes in concept graphs include both entities and concepts. The edges are from entities to concepts, showing that an entity is an instance of a concept. The task of extracting and building concept graphs from user-generated texts has attracted a lot of research attentions for a couple of decades (Fellbaum, 1998; Wu et al., 2012; Shwartz et al., 2016; Chang et al., 2018; Le et al., 2019; Lewis, 2019). Most of these methods rely on high quality syntactic patterns to determine whether an entity belongs to a concept. For example, given the pattern “X is a Y ” or “Y , including X” appearing in sentences, we can infer that the entity X is an instance of the concept Y . These pattern-based methods require that an entity and concept pair co-occurs in sentences. However, due to the different expressions of a certain concept, an entity and a concept may rarely appear in sentences together. We conduct a data analysis of millions of sentences extracted from Wikipedia and discover that only 10.61% of entity-concept pairs co-occur in sentences out of more than six million of pairs from the public Microsoft concept graph (https: //concept.research.microsoft.com). We also analyze Baidu Baike (http://baike.baidu.com) and its corresponding concept graph. A similar phenomenon is observed that only 8.56% entityconcept pairs co-occur in sentences. Table 1 shows the statistics for Wikipedia and Baidu Baike. With such limitations, the existing approaches have difficulties in helping build a complete concept graph from open domain texts. Dataset # Pairs # Sentences # Co-occurrence Percentage Wikipedia 6,347,294 7,871,825 673,542 10.61% Baike 3,229,301 9,523,183 276,485 8.56% Table 1: Entity-concept pairs that co-occur in sentences from Wikipedia (English) and Baidu Baike (Chinese). 
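The figures in Table 1 can be reproduced in spirit with a single counting pass over the corpus. The sketch below uses plain substring matching and is meant only to illustrate the analysis; it is not the exact matching procedure behind the reported numbers.

```python
from collections import defaultdict

def cooccurrence_rate(pairs, sentences):
    # pairs: (entity, concept) surface-form pairs from the concept graph
    # sentences: iterable of raw sentences from Wikipedia or Baidu Baike
    pairs = list(pairs)
    terms = {t for pair in pairs for t in pair}
    appears_in = defaultdict(set)          # term -> ids of sentences containing it
    for idx, sentence in enumerate(sentences):
        for term in terms:
            if term in sentence:
                appears_in[term].add(idx)
    co_occurring = sum(1 for e, c in pairs if appears_in[e] & appears_in[c])
    return co_occurring / max(len(pairs), 1)
```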
Nowadays, the task of open domain information extraction (OIE) has become more and more important (Christensen et al., 2011; Wu and Weld, 2010; Etzioni et al., 2011; Mausam et al., 2012; Sun et al., 2018b,a; Di et al., 2019; Rashed et al., 2019; Liu et al., 2020a,b). OIE aims to generate entity and relation level intermediate structures to express facts from open domain sentences. These open domain facts usually express natural languages as triples in the form of (subject, predicate, object). For example, given the sentence “Anderson, who hosted Whose Line, is a winner of a British Comedy Award in 1991.”, two facts will be extracted. They are (“Anderson”, “host”, “Whose Line”) and (“Anderson”, “winner of a British Comedy Award”, “1991”). The subject and object in a fact are both 8046 Concept Graph Facts · · · f1 : (s1, r1, o1) fn : (sn, rn, on) r1 rp c1 cq e1 em · · · c1 cq · · · e1 em · · · · · · · · · Texts Subject-Relation View Concept Discovery Bayesian Network Structure Learning r1 r2 c1 c2 c3 e f1 : (e, r1, o1) f2 : (e, r2, o2) c1 r1 rp · · · e1 em · · · Object-Relation View Entity-Concept View r1 c1 cq e1 em · · · · · · · · · rp · · · Figure 1: The workflow of learning interpretable relationships from open domain facts for concept discovery. fi = (si, ri, oi) represents a fact, where si and oi are both entities, and ri is a relation. We use ei to denote an entity and ci to represent a concept. entities. The open domain facts contain rich information about entities by representing the subject or object entities via different types of relations (i.e., groups of predicates). It would be helpful for concept graph completion if we can take advantage of the relations in open domain facts. We again take the above two facts of “Anderson” as an instance. If we have explored the connections between relations of facts and concepts, and learned that “host” and “winner of a British Comedy Award” are associated with an “English presenter” subject with a higher probability than a “Japanese presenter” subject, we can infer that “Anderson” belongs to the “English presenter” concept regardless of whether these two co-appear in a sentence or not. In real-world open domain corpus, however, the connections between relations and concepts are not available to us. In this paper, we propose the task of learning interpretable relationships between entities, relations and concepts from open domain facts to help enriching and refining concept graphs. Learning Bayesian networks (BNs) from data has been studied extensively (Heckerman et al., 1995; Koivisto and Sood, 2004; Scanagatta et al., 2015; Niinimaki et al., 2016) in the last few decades. The BNs formally encode probabilistic connections in a certain domain, yielding a human-oriented qualitative structure that facilitates communication between a user and a system incorporating the probabilistic model. Specifically, we apply the Bayesian network structure learning (BNSL) (Chow and Liu, 1968; Yuan et al., 2011; Yuan and Malone, 2013) to discover meaningful relationships between entities, relations and concepts from open domain facts. The learned network encodes the dependencies from the relations of entities in facts to the concepts of entities, leading to the identification of more entity-concept pairs from open domain facts for the completion of concept graphs. Figure 1 illustrates the proposed workflow of learning interpretable relationships from open domain facts. 
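As a small illustration (not the paper's pipeline) of how such facts give an entity a relation-level profile, the snippet below indexes (subject, predicate, object) triples by entity, separately for the subject and object roles that the rest of the paper relies on.

```python
from collections import Counter, defaultdict

def build_relation_views(facts):
    # facts: iterable of (subject, predicate, object) triples
    subject_view = defaultdict(Counter)   # entity -> relation counts as subject
    object_view = defaultdict(Counter)    # entity -> relation counts as object
    for subj, pred, obj in facts:
        subject_view[subj][pred] += 1
        object_view[obj][pred] += 1
    return subject_view, object_view

facts = [("Anderson", "host", "Whose Line"),
         ("Anderson", "winner of a British Comedy Award", "1991")]
subject_view, object_view = build_relation_views(facts)
# subject_view["Anderson"] -> Counter({"host": 1,
#                                      "winner of a British Comedy Award": 1})
```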
We summarize our contributions as follows: • We propose the task of learning interpretable relationships between entities, relations and concepts from open domain facts, which is important for enriching and refining concept graphs. • We build the BNSL model to discover meaningful network structures that express the connections from relations of entities in open domain facts to concepts of entities in concept graphs. • Experimental results on both English and Chinese datasets reveal that the learned interpretable relationships help identify concepts for entities based on the relations of entities, resulting in a more complete concept graph. 2 Related Work Concept Graph Construction. Concept graph construction has been extensively studied in the literature (Fellbaum, 1998; Ponzetto and Strube, 2007; Banko et al., 2007; Suchanek et al., 2007; Wu et al., 2012; Shwartz et al., 2016; Chang et al., 2018; Le et al., 2019; Lewis, 2019). Notable works toward creating open domain concept graphs from scratch include YAGO (Suchanek et al., 2007) and Probase (Wu et al., 2012). In addition, a wide variety of methods (Nakashole et al., 2012; Weeds et al., 2014; Roller et al., 2014; Shwartz et al., 2016; Roller et al., 2018; Chang et al., 2018; Le et al., 2019; Lewis, 2019) are developed to detect the 8047 hypernymy between entities and concepts for a more complete concept graph. Distributional representations of entities and concepts are learned for good hypernymy detection results (Weeds et al., 2014; Roller et al., 2014; Chang et al., 2018; Lewis, 2019). In contrast to distributional methods, pathbased algorithms (Nakashole et al., 2012; Shwartz et al., 2016; Roller et al., 2018; Le et al., 2019) are proposed to take advantage of the lexico-syntactic paths connecting the joint occurrences of an entity and a concept in a corpus. Most of these methods require the co-occurrence of entity and concept pairs in sentences for the graph completion task. However, due to the different expressions of a certain concept, an entity and a concept may rarely appear in one sentence together. With such limitations, the existing methods in the literature cannot deal with those non co-occurring entity concept pairs, leading to an incomplete concept graph. Open Domain Information Extraction. Open domain information extraction (OIE) has attracted a lot of attention in recent years (Wu and Weld, 2010; Christensen et al., 2011; Etzioni et al., 2011; Mausam et al., 2012; Pal and Mausam, 2016; Yahya et al., 2014; Sun et al., 2018b,a; Roy et al., 2019; Liu et al., 2020a,b). It extracts facts from open domain documents and expresses facts as triples of (subject, predicate, object). Recently, a neuralbased OIE system Logician (Sun et al., 2018b,a; Liu et al., 2020a,b) is proposed. It introduces a unified knowledge expression format SAOKE (symbol aided open knowledge expression) and expresses the most majority information in natural language sentences into four types of facts (i.e., relation, attribute, description and concept). Logician is trained on a human labeled SAOKE dataset using a neural sequence to sequence model. It achieves a much better performance than traditional OIE systems in Chinese language and provides a set of open domain facts with much higher quality to support upper-level algorithms. Since the subject and object in a fact are both entities, the open domain facts contain rich information about entities by representing the subjects or objects via different types of relations (i.e., groups of predicates). 
It can help the task of concept graph completion by making full use of the relations in open domain facts. In this paper, we leverage the high-quality facts of Logician as one dataset in the experiment. Bayesian Network Structure Learning. Learning a Bayesian network structure from realworld data is a well-motivated but computationally hard task (Heckerman et al., 1995; Koivisto and Sood, 2004; de Campos et al., 2009; Malone et al., 2011; Scanagatta et al., 2015; Niinimaki et al., 2016). A Bayesian network specifies a joint probability distribution of a set of random variables in a structured fashion. A key component in this model is the network structure, a directed acyclic graph on the variables, encoding a set of conditional independence assertions. Several exact and approximate algorithms are developed to learn optimal Bayesian networks (Chow and Liu, 1968; Koivisto and Sood, 2004; Singh and Moore, 2005; Silander and Myllym¨aki, 2006; Yuan et al., 2011; Yuan and Malone, 2013). Some exact algorithms (Koivisto and Sood, 2004; Singh and Moore, 2005; Silander and Myllym¨aki, 2006) are based on dynamic programming to find the best Bayesian network. In 2011, an A⋆search algorithm is introduced (Yuan et al., 2011) to formulate the learning process as a shortest path finding problem. However, these exact algorithms are inefficient due to the full evaluation of an exponential solution space. In this paper, we consider the Chow-Liu tree building algorithm (Chow and Liu, 1968) to approximate the underlying relationships between entities, relations and concepts as a dependency tree. This method is very efficient when there are large numbers of variables. 3 Finding Interpretable Relationships We formulate the relationships between entities, relations, and concepts as follows: • Entities are associated with a set of relations that represent the behaviors and attributes of entities; • A concept is defined by a set of relations. The instances of a concept are those entities that associate with the corresponding set of relations. In concept graphs, a concept is associated with a set of entities which share some common behaviors or attributes. However, the essence of a concept is a set of relations, and entities which associate with these relations automatically become the instance of the concept. So our formulation of the relationships between entities, relations and concepts can be illustrated by Figure 2. In the closed domain, a knowledge base has a predefined ontology and the relationships in Figure 2 are already known. For example, DBPedia (Auer et al., 2007) builds a knowledge graph 8048 em cq Entity e1 · · · r1 c1 · · · Relation Concept · · · · · · · · · · · · rp Figure 2: Relationships of entities, relations and concepts. from Wikipedia to encode the relationships between entities and relations in the forms of facts. The relationships between relations and concepts are represented in the ontology structure of DBPedia, where each concept is associated with a group of relations. However, in the open domain, a predefined ontology does not exist, and hence the components in Figure 2 may not be associated with each other. For instance, given an open domain concept graph, we can discover the relationships between entities and concepts. Given the open domain corpus/facts, we can find the relationships between entities and relations. But the relationships between open domain concepts and relations are not available, to our knowledge. 
In this paper, we aim to find the connection between open domain relations and concepts, so that we can provide interpretations to the question “why the entity is associated with those concepts in open domain”. 3.1 Problem Formulation Suppose we have a set of entities E = {e1, · · · , em}, a set of relations R = {r1, · · · , rp}, a set of concepts C = {c1, · · · , cq}, and a set of observed triplets O = {(e, r, c)}. Here E and C are from a concept graph G. R is from a set of facts F = {f1, · · · , fn} extracted from a text corpus D. A triplet (e, r, c) is observed means that the entity e with relation r and concept of c is found in above data sources. Given a set of observations O with N samples, the Bayesian network can be learned by maximizing the joint probability p(O): p(O) = Y (e,r,c)∈O p((e, r, c)) = Y (e,r,c)∈O p(c|(e, r)) · p(r|e) · p(e) = Y (e,r,c)∈O p(c|r) · p(r|e) · p(e) where p(c|(e, r)) = p(c|r) is due to our Bayesian network assumption (see Figure 2). By learning with the observed triplets with above model, we can infer the missing triplets, especially give interpretable relationship between entities and concepts. Since p(r|e) can be approximated by the information from OIE corpus, the core of the above problem becomes to learn the part of the network of p(c|e). The difficulty of learning p(c|e) is the unknown structure of the Bayesian network. Due to sparsity of real-world knowledge base, the target network would be sparse. But the sparse structure must be known beforehand for probability learning. In this paper, we employ the Bayesian Network Structure Learning (BNSL) technique to explore the connections between relations and concepts. Due to the large number of variables (i.e., entities, relations and concepts) in open domain facts and concept graphs, we develop an approximate algorithm to learn the network structure. 3.2 The Proposed Approximate Algorithm Due to the sparsity of the relationships between relations and concepts, we decompose the problem into several sub-problems, with each sub-problem containing only one concept variable. Then for each concept variable, we identify possible related relations and apply a BNSL algorithm to discover the network structure between them. Finally, we use the learned network for concept discovery. The procedure is shown in Algorithm 1. We will state the key steps in detail in the next sub-sections. 3.2.1 Sub-problem Construction Given a concept c ∈C, we first collect all its entities Ec ⊂E from the concept graph. Then we can obtain a set of facts Fc that contain these entities. Since an entity can appear in a fact as a subject or an object, we split the facts Fc into subject-view facts Fc,s and object-view facts Fc,o. If we make use of all the relations under the subject or object view, it would be inefficient or event impossible to learn the sparse network structure with a large number of relation variables. Hence, based on the facts, we select possible related relations to the concept c to reduce the complexity of the problem. 3.2.2 Relation Selection There are various strategies which can be applied for the relation selection. We can assume that a relation is highly related to the concept if it appears many times in the fact set Fc. In this way, we can 8049 Algorithm 1: BNSL for concept discovery Input: Texts D and a concept graph G. Output: Valid entity-concept pairs. 
/* OIE step: */ 1 Extract open domain facts F from D; /* Concept discovery step: */ 2 for each concept c ∈C do 3 Get entities Ec of this concept; 4 Select facts Fc including Ec; /* Subject view step: */ 5 Split Fc into subject-view facts Fc,s; 6 Select top K relations Rc,s from Fc,s; 7 Get entity-relation data Xc,s; /* Object view step: */ 8 Repeat step 5 to get object-view Fc,o; 9 Repeat step 6 to get Rc,o from Fc,o; 10 Repeat step 7 to get Xc,o; /* BNSL training step: */ 11 Feed Xc,s and Xc,o into BNSL; 12 Get a network structure Sc for c; 13 end for /* BNSL prediction step: */ 14 Predict on new entities; 15 Return valid entity-concept pairs; count the frequencies of relations for each view and select the top K as the most relevant ones with a concept. We call it TF selection since we measure the relevance of a relation according to its frequency. We can also select relations according to the TFIDF measurement (Wu et al., 2008). For each view, we select the most relevant K relations for the concept c. We denote them as Rc,s ⊂R for the subject-view facts and Rc,o ⊂R for the object-view facts. In summary, for each concept, we construct two sub-problems for the BNSL task. One is from the subject view and the other is from the object view. Under each view, the sub-problem contains one concept and at most K relations. The goal is to learn a network structure from the concept and corresponding relations. 3.2.3 Data Observations Given a sub-problem for a concept c, we first obtain the corresponding data observations and then feed them as the input of BNSL for interpretable relationship discoveries. For each concept, we can learn a Bayesian network structure from its top subject-view or object view relations. The data observations Xc,s with TF relation selection for the subject-view of the concept c are generated as follows: for each entity e ∈Ec, we use 1 to be the concept observation, meaning that the entity e is an instance of concept c. We use the times of the subject e and a top relation r ∈Rc,s appearing together in facts Fc,s as a relation observation for e and r. The K relation observations and the concept observation together become the positive data observations for c. In order to learn meaningful network structures, we generate an equal number of negative data observations for c. We first randomly sample the same number of entities from Ec′ = {ei : ei ∈E \ Ec} as negative entities of c. We use 0 as the concept observation for negative entities. Then for each negative entity e′, we count the times of the subject e′ and a relation r ∈Rc,s appearing in all the collected facts as a relation observation for e′ and r. The K relation observations and the concept observation together become the negative data observations for c. Xc,s consists of both the positive and negative data observations. Similarly, we can obtain the data observations Xc,o for the object view. 3.2.4 Network Structure Learning In this paper, we employ the widely-used ChowLiu tree building algorithm (Chow and Liu, 1968) as the BNSL method. This algorithm approximates the underlying distributions of variables as a dependency tree, which is a graph where each node only has one parent and cycles are not allowed. It will first calculate the mutual information between each pair of nodes (i.e., variables), and then take the maximum spanning tree of that matrix as the approximation. 
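A compact sketch of this Chow-Liu step is shown below. It is not the authors' implementation; the toy observation matrix, the variable names, and the use of scikit-learn and networkx are illustrative choices.

```python
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

def chow_liu_tree(X, names):
    # X: (num_samples, num_variables) discrete observations, e.g. relation
    #    counts for a concept plus the 0/1 concept column
    # names: variable names, e.g. the K relations and the concept
    g = nx.Graph()
    num_vars = X.shape[1]
    for i in range(num_vars):
        for j in range(i + 1, num_vars):
            mi = mutual_info_score(X[:, i], X[:, j])   # pairwise mutual information
            g.add_edge(names[i], names[j], weight=mi)
    return nx.maximum_spanning_tree(g)                 # dependency tree

# Toy example: two relations and one concept indicator.
X = np.array([[1, 0, 1], [2, 0, 1], [0, 1, 0], [0, 2, 0]])
tree = chow_liu_tree(X, ["host", "born in", "English presenter"])
print(sorted(tree.edges(data="weight")))
```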
While this will only provide a rough approximation of the underlying data, it provides good results for many applications (Suzuki, 2010; Tavassolipour et al., 2014; Hassan-Moghaddam and Jovanovic, 2018; Ding et al., 2019), especially when you need to know the most important influencer on each variable. In addition, this algorithm becomes extremely efficient when it deals with to a large number of variables. Since both the subject and object views reflect some properties of entities, we can concatenate the subject-view relations and object-view relations together for a more complete representation of entities. The concatenated data can be forwarded into BNSL for a more comprehensive result of interpretable relationship discovery. Given q concept variables and K relevant relations for each concept, the number of parameters in BNSL is at most q×K. 8050 3.2.5 Prediction After we learn a network structure for each concept, we can learn the concept of a new entity e easily. We first identify the open domain facts with e as its subject or object, and then feed the observation of relations for a concept c into the network to calculate the probability of p(c|e). We still use the open domain entity “Anderson” and its two facts introduced in Section 1 as an example to show how BNSL works. Assume we have two open domain concepts, “English presenter” and “Japanese presenter”. Given the entity “Anderson” and its open domain relations “host” and “winner of a British Comedy Award” as input of BNSL, the output is the probabilities that “Anderson” belongs to each concept. BNSL will predict a higher probability for “Anderson” having the concept “English presenter” than having “Japanese presenter”. 4 Experiments With the learned relationship between relations and concepts from BNSL, we indirectly associate entities with their concepts and give interpretations to the question “why the entity is associated with those concepts in open domain”. The hypernymy detection task aims to identify concepts for entities in open domain. It is helpful for us to evaluate the quality of the learned relationships from BNSL. In this section, we conduct extensive experiments to evaluate the performance of BNSL. 4.1 Data Description We test the performance of our proposed method on two public datasets, one is in English and the other is in Chinese. For the English dataset, we use 15 million high-precision OIE facts1, the Microsoft concept graph2 and 7.87 million Wikipedia sentences3 for our experiments. Since there are more than 5 million concepts in the English dataset and most of them have few entities, we focus on those concepts with more than 50 entities in the experiments. For the Chinese dataset, we use sentences and the corresponding facts4 in (Sun et al., 2018b). The concept graph is also built by Baidu Baike. Table 2 shows the statistics of the concept 1http://reverb.cs.washington.edu 2https://concept.research.microsoft. com/Home/Download 3https://www.kaggle.com/mikeortman/ wikipedia-sentences 4https://ai.baidu.com/broad/download? dataset=saoke Concept Graphs Dataset # entities # concepts # overlaps % overlaps English 12,501,527 5,376,526 613,454 27.10% Chinese 9,230,727 3,245 475,507 48.14% Facts Dataset # facts # subjects # objects # predicates English 14,728,268 1,396,793 1,698,028 664,746 Chinese 37,309,458 624,632 550,404 10,145 Table 2: Statistics of concept graphs and facts. graphs and open domain facts. In open domain facts, each mention of a subject or object is considered as an open domain entity. 
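Putting the per-concept steps of Algorithm 1 together, a rough sketch of how the subject-view observation matrix Xc,s might be assembled from TF relation selection (Section 3.2.2) and positive/negative observations (Section 3.2.3) is given below; the helper names are hypothetical, and the code builds on the fact-indexing and Chow-Liu sketches above.

```python
import random
from collections import Counter

def build_observations(concept_entities, all_entities, subject_view, k=5):
    # concept_entities: the entities Ec of concept c in the concept graph
    # subject_view: entity -> Counter of relations it appears with as subject
    #               (see the earlier fact-indexing sketch)
    tf = Counter()
    for e in concept_entities:
        tf.update(subject_view.get(e, Counter()))
    top_relations = [r for r, _ in tf.most_common(k)]            # TF selection

    candidates = [e for e in all_entities if e not in concept_entities]
    negatives = random.sample(candidates, len(concept_entities))  # sampled negatives

    rows = []
    for label, entities in ((1, concept_entities), (0, negatives)):
        for e in entities:
            counts = subject_view.get(e, Counter())
            rows.append([counts[r] for r in top_relations] + [label])
    return rows, top_relations   # each row: K relation counts + concept indicator
```

The returned rows, together with their object-view counterparts, are what a structure learner such as the chow_liu_tree sketch above would consume.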
So we naturally map an entity in open domain facts and concept graphs by the same mention. In Table 2, the column “# of overlap” is about the number of fact entities appearing in the concept graph and the last column is the percentage of fact entities in the concept graph. With the predicates as relations for the open domain facts, we build the Bayesian network structure learning method to bridge the gap between relations in open domain facts and concepts in the concept graph. 4.2 Experimental Setting In the experiment, we compare with the state-ofthe-art model HypeNet (Shwartz et al., 2016) for hypernymy detection. HypeNet improves the detection of entity-concept pairs with an integrated path-based and distributional method. An entity and a concept must appear together in a sentence so that HypeNet can extract lexico-syntactic dependency paths for training and prediction. However, only less than 11% of entity-concept pairs co-occur in Wikipedia sentences in reality (Table 1). Therefore, we compare BNSL with HypeNet only on the entity-concept pairs that co-appear in sentences. In addition, we compare BNSL with recurrent neural networks (RNNs). We apply attention-based Bi-LSTM (Zhou et al., 2016) and derive three versions of RNNs as baseline methods, RNN(f), RNN(sen) and RNN(e). RNN(f) determines the concepts of an entity according to the facts containing the entity, while RNN(sen) by the sentences containing the co-appearance of an entity and a concept. Specifically, each entity in RNN(f) is represented by its associated facts. Each fact is a sequence of subject, predict and object. Each subject, predict and object vector is fed in sequence into RNN(f), resulting a fact embedding vector. The averaged fact vector becomes the entitys feature for concept classification. Similar to HypeNet, RNN(sen) requires the entity-concept pairs co-appearing in sentences. Dif8051 ferent from RNN(sen), RNN(e) focuses on sentences containing the entity only. Based on the sentences, RNN(e) aims to learn which concept an entity belongs to. We follow HypeNet and RNN to use pre-trained GloVe embeddings (Pennington et al., 2014) for initialization. Besides, we compare BNSL with traditional support vector machines (SVM) with linear kernel. The input features for SVM and BNSL are the same, i.e., the top K relations for each concept. Here we set K = 5. During testing, all methods are evaluated on the same testing entities. we calculate the accuracy, precision, recall and F1-score over the prediction results for evaluation. We split the data into 80% of training and 20% of testing. For English, the total numbers of training and testing data are 504,731 and 123,880, respectively; whereas for Chinese, the numbers are 5,169,220 and 1,289,382, respectively. 4.3 Performance Evaluation In this section, we show the evaluation performance on the task of concept discovery with the learned interpretable relationships from open domain fact. Table 3 and Table 4 list the results for co-occurred and non co-occurred entity-concept pairs in sentences respectively. In the tables, (s) and (o) mean the performance only under the subject and the object view, respectively. RNN(f), BNSL and SVM present the prediction performance with the concatenation of both the subject and object views. As is mentioned in the previous section, we can use TF or TFIDF for the most relevant relation selection. We test both strategies for BNSL and SVM. 
For the English dataset, TFIDF performs much better than TF while the result is the opposite for the Chinese dataset. In this section, we analyze the results of BNSL and SVM with TFIDF for the English dataset. For the Chinese dataset, we report the performance of BNSL and SVM with TF. We will show more results for the relation selection in the next section. For the co-occurred entity-concept pairs in sentences, BNSL(s) performs the best for both datasets. Surprisingly, SVM performs much better than HypeNet with an improvement of around 10% on accuracy for both datasets as is shown in Table 3. In addition, SVM achieves better results compared to RNN(sen). The reason that HypeNet or RNN(sen) cannot perform well may be that the information expressed from the sentences are too diverse. HypeNet or RNN(sen) cannot capture meaningful patterns from sentences for the task of concept discovery. Since RNN(e) further ignores the concept information during the sentence collection step, it cannot perform well compared with RNN(sen). In contrast, information extracted from open domain facts are much more concentrated about concepts. Furthermore, the most relevant relations associated with entities help filtering out noise. Therefore, SVM can achieve a much better result than sentence-based baselines. Though SVM does well on the co-occurred data, BNSL outperforms SVM with all the four evaluation metrics. By learning interpretable relationships between relations and concepts, BNSL captures the most important knowledge about concepts and further exploits their dependencies to help improve the concept discovery task. However, the concatenation of subject and object views for BNSL cannot help improve the performance for both datasets. Similar phenomena can be observed for RNN(f) and SVM. Specifically, the results under the subject view are usually better than those of the object view, implying that when people narrate facts, they may pay more attention to selecting suitable predicate for subjects, rather for objects. Table 4 lists the performances of RNN(e), RNN(f), SVM and BNSL on non co-occurred data. We can observe a similar trend compared to the results on co-occurred data. Since HypeNet and BNSL make use of different information sources (natural language sentences for HypeNet and open domain facts for BNSL), we try to ensemble them to improve the performance further. We first train HypeNet and BNSL independently. Then we can obtain prediction probabilities of entity-concept pairs from HypeNet and BNSL separately. We select the probabilities with higher values as the final predictions. The last row in Table 3 shows the performance of ensembling HypeNet and BNSL. We denote it as B + H. It can be seen that B + H achieves the best accuracy, recall and F1-scores on the co-occurred data. It reveals that interpretable relationships extracted from open domain facts are complementary to natural language sentences in helping concept discovery. Studying meaningful knowledge from open domain facts provides an alternative perspective to build concept graphs and this paper starts the first trial. 4.4 Analysis on the Relation Selection Relation selection helps reducing the complexity of BNSL. 
In this section, we first evaluate how differ8052 Dataset English Chinese Method Accuracy Precision Recall F1-score Accuracy Precision Recall F1-score HypeNet 69.64% 75.09% 69.74% 72.31% 76.57% 87.17% 71.22% 78.39% RNN(sen) 77.18% 80.74% 78.62% 79.67% 71.90% 72.85% 84.35% 78.18% RNN(e) 67.77% 77.09% 61.62% 68.49% 57.67% 61.19% 79.53% 69.16% RNN(s) 73.38% 80.35% 70.39% 75.04% 64.93% 64.02% 94.13% 76.21% RNN(o) 70.95% 79.81% 65.46% 71.93% 64.97% 64.08% 94.01% 76.21% RNN(f) 70.01% 79.08% 64.25% 70.90% 49.55% 61.23% 42.81% 49.95% SVM(s) 76.68% 74.82% 88.93% 81.26% 85.06% 90.01% 84.33% 87.07% SVM(o) 74.81% 72.72% 89.14% 80.10% 51.86% 57.54% 73.87% 64.69% SVM 77.43% 74.38% 92.00% 82.25% 86.07% 90. 86% 85.22% 87.95% BNSL(s) 86.03% 82.89% 95.07% 88.56% 87.54% 92.40% 86.21% 89.20% BNSL(o) 86.22% 84.52% 92.76% 88.45% 49.03% 56.79% 61.10% 58.86% BNSL 84.79% 81.87% 94.08% 87.55% 87.37% 92.32% 86.00% 89.05% B + H 91.27% 91.15% 93.75% 92.43% 87.88% 86.01% 95.18% 90.36% Table 3: Performance on the co-occurred data. The best results are in bold. Dataset English Chinese Method Accuracy Precision Recall F1-score Accuracy Precision Recall F1-score RNN(e) 63.94% 67.38% 52.09% 58.75% 53.82% 51.84% 95.06% 67.09% RNN(s) 73.83% 74.61% 71.12% 72.82% 55.18% 52.55% 97.49% 68.29% RNN(o) 73.74% 77.05% 66.56% 71.42% 55.34% 52.64% 97.47% 68.36% RNN(f) 72.36% 75.53% 65.02% 69.88% 51.82% 51.63% 42.45% 46.59% SVM(s) 71.94% 66.48% 86.91% 75.34% 90.03% 86.73% 94.30% 90.36% SVM(o) 65.82% 61.55% 81.70% 70.21% 51.14% 50.39% 85.37% 63.37% SVM 71.62% 65.62% 89.16% 75.60% 90.91% 88.11% 94.37% 91.14% BNSL(s) 85.97% 82.15% 91.42% 86.54% 92.47% 90.12% 95.23% 92.60% BNSL(o) 82.27% 78.36% 88.48% 83.11% 51.52% 50.70% 74.63% 60.38% BNSL 84.78% 80.77% 90.74% 85.47% 92.39% 90.05% 95.15% 92.53% Table 4: Performance on the non co-occurred data. The best results are in bold. ent relation selection strategies will influence the performance of BNSL and SVM methods. Table 5 is the performance of TF and TFIDF relation selection on the entire data for both English and Chinese. We observe that TFIDF selection performs better on English while TF is better on Chinese. However, BNSL always outperforms SVM regardless of the views or the relation selections. In addition, since SVM performs much better than the neural network based HypeNet and RNN, we try to ensemble it with BNSL to improve the performance further. We consider the prediction probabilities of SVM as a new variable and incorporate it into BNSL for network structure learning. We denote the model as BNSL + SVM. For comparison, we ensemble SVM with BNSL by taking the results of BNSL as one new feature dimension to SVM. We name it as SVM + BNSL. It can be seen from Table 5 that the ensemble of BNSL and SVM outperforms single models on both datasets. Especially, BNSL + SVM does better than SVM + BNSL, revealing that BNSL has a better capability of exploring meaningful knowledge from other sources. Furthermore, we evaluate how BNSL performs with different numbers of relations. Figure 3 shows the results of BNSL(s) by setting relation numbers from 1 to 20. TFIDF relation selection is used for the English dataset and TF for Chinese. We can observe that BNSL performs best when we select the top 5 relations and the results become stable with more than 5 relations. 
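As a concrete reference, the ensembling strategies discussed in this section can be sketched as below. The fusion rule for B + H follows one natural reading of “select the probabilities with higher values”; the record layout and the use of scikit-learn's LinearSVC are our assumptions, not the authors' implementation.

```python
# A rough sketch of two kinds of ensembles discussed above. B + H keeps, for
# each entity-concept pair, the larger of the two models' predicted
# probabilities; SVM + BNSL appends BNSL's probability as one extra feature
# dimension for the SVM. Both are illustrative assumptions about the setup.
import numpy as np
from sklearn.svm import LinearSVC

def b_plus_h(p_bnsl, p_hypenet, threshold=0.5):
    """Max-probability fusion of BNSL and HypeNet predictions (B + H)."""
    fused = np.maximum(np.asarray(p_bnsl), np.asarray(p_hypenet))
    return (fused >= threshold).astype(int)

def svm_plus_bnsl(train_X, train_p_bnsl, train_y, test_X, test_p_bnsl):
    """SVM + BNSL: BNSL's predicted probability becomes a new feature."""
    aug_train = np.hstack([train_X, np.asarray(train_p_bnsl).reshape(-1, 1)])
    aug_test = np.hstack([test_X, np.asarray(test_p_bnsl).reshape(-1, 1)])
    clf = LinearSVC().fit(aug_train, train_y)
    return clf.predict(aug_test)

if __name__ == "__main__":
    print(b_plus_h([0.2, 0.9, 0.4], [0.6, 0.3, 0.1]))  # -> [1 1 0]
```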
1 5 10 15 20 # relations 0.5 0.6 0.7 0.8 0.9 1 Performance English Accuracy Precision Recall F1 1 5 10 15 20 # relations 0.5 0.6 0.7 0.8 0.9 1 Performance Chinese Accuracy Precision Recall F1 Figure 3: BNSL(s) with different numbers of relations. 8053 Relation Selection TF Selection TFIDF Selection Dataset Method Accuracy Precision Recall F1-score Accuracy Precision Recall F1-score English SVM(s) 58.19% (10) 55.17% (10) 87.43% (6) 67.65% (11) 72.38% (10) 67.28% (10) 87.12% (10) 75.93% (11) BNSL(s) 71.57% (5) 67.93% (5) 81.70% (10) 74.19% (6) 86.00% (2) 82.24% (2) 91.82% (2) 86.77% (2) SVM + BNSL(s) 71.62% (4) 68.36% (4) 80.48% (11) 73.93% (7) 82.04% (7) 78.31% (6) 88.63% (7) 83.15% (7) BNSL + SVM(s) 78.46% (1) 80.55% (1) 75.04% (12) 77.70% (3) 88.36% (1) 86.48% (1) 90.94% (4) 88.65% (1) SVM(o) 55.07% (12) 52.91% (12) 92.29% (1) 67.26% (12) 66.65% (12) 62.64% (12) 82.48% (12) 71.21% (12) BNSL(o) 71.14% (7) 65.68% (7) 88.54% (5) 75.42% (4) 82.64% (5) 78.99% (5) 88.95% (6) 83.67% (6) SVM + BNSL(o) 66.84% (9) 61.65% (9) 89.07% (3) 72.87% (8) 78.27% (9) 74.79% (8) 85.28% (11) 79.70% (9) BNSL + SVM(o) 77.02% (2) 73.10% (2) 85.50% (7) 78.81% (1) 84.16% (4) 81.49% (3) 88.40% (9) 84.80% (4) SVM 57.38% (11) 54.36% (11) 92.05% (2) 68.35% (10) 72.15% (11) 66.46% (11) 89.45% (5) 76.26% (10) BNSL 71.26% (6) 66.77% (6) 84.63% (9) 74.65% (5) 84.78% (3) 80.89% (4) 91.09% (3) 85.69% (3) SVM + BNSL 68.31% (8) 63.71% (8) 85.09% (8) 72.86% (9) 78.70% (8) 73.99% (9) 88.50% (8) 80.60% (8) BNSL + SVM 75.84% (3) 70.60% (3) 88.58% (4) 78.57% (2) 82.22% (6) 76.50% (7) 93.03% (1) 83.96% (5) Chinese SVM(s) 89.80% (8) 86.91% (8) 93.73% (5) 90.19% (8) 74.58% (8) 67.98% (6) 92.95% (8) 78.53% (8) BNSL(s) 92.23% (5) 90.24% (5) 94.71% (1) 92.42% (5) 75.01% (6) 67.90% (8) 94.88% (1) 79.16% (6) SVM + BNSL(s) 93.31% (4) 93.13% (4) 93.52% (8) 93.32% (4) 76.37% (3) 69.62% (3) 93.55% (6) 79.83% (3) BNSL + SVM(s) 95.56% (1) 97.36% (1) 93.65% (7) 95.47% (1) 77.54% (2) 70.64% (2) 94.27% (4) 80.76% (2) SVM(o) 51.16% (12) 50.71% (12) 82.58% (9) 62.84% (10) 50.55% (12) 50.33% (12) 84.65% (10) 63.12% (10) BNSL(o) 51.39% (10) 50.96% (10) 73.85% (11) 60.31% (12) 50.79% (10) 50.55% (10) 72.37% (12) 59.53% (12) SVM + BNSL(o) 51.33% (11) 50.82% (11) 82.41% (10) 62.87% (9) 50.66% (11) 50.39% (11) 84.73% (9) 63.20% (9) BNSL + SVM(o) 51.72% (9) 51.18% (9) 74.54% (12) 60.69% (11) 50.97% (9) 50.68% (9) 72.98% (11) 59.82% (11) SVM 90.35% (7) 87.69% (7) 93.88% (4) 90.68% (7) 74.68% (7) 67.95% (7) 93.45% (7) 78.68% (7) BNSL 92.15% (6) 90.16% (6) 94.62% (2) 92.34% (6) 75.12% (5) 68.08% (5) 94.61% (2) 79.18% (5) SVM + BNSL 93.61% (3) 93.55% (3) 93.68% (6) 93.61% (3) 76.33% (4) 69.57% (4) 93.60% (5) 79.82% (4) BNSL + SVM 95.46% (2) 96.59% (2) 94.25% (3) 95.40% (2) 77.68% (1) 70.77% (1) 94.32% (3) 80.87% (1) Table 5: Performance of relation selections on the entire data. The results are reported as “value + (rank)”. 4.5 Analysis with missing information In reality, the open domain facts or co-occurring sentences associated with entity-concept pairs are usually missing, making the input information for concept discovery extremely sparse. In this section, we study how BNSL performs with the sparse input. Given a set of entities, we first extract the corresponding facts (or sentences) under each concept. For both datasets, we get around 30 million entityconcept pairs for testing and more than 97% do not have the corresponding fact information with the top K relations, making the prediction of BNSL very challenging. 
Furthermore, both datasets have a large number of fine-grained concepts, making the task more difficult. For the missing data, we feed an empty fact or sentence into BNSL and other models for training and testing. Also, we observe that RNN does not performs as well compared with other methods and in particular RNN(sen) performs the worst when the input is extremely sparse. In Figure 4, we report the improvement of F1score over RNN(sen). We can observe that HypeNet, SVM and BNSL can achieve much better performance, showing their robustness with missing values. In addition, B + H can still achieve the best result. It further confirms that open domain facts and natural language sentences are compleEnglish 0% 2.5% 5% 7.5% 10% 12.5% Improvement on F1 HypeNet RNN (f) RNN (e) SVM BNSL B + H Chinese 0% 1% 2% 3% 4% 5% Improvement on F1 Figure 4: F1-score improvement on RNN(sen). mentary to each other even when there is a large portion of missing information. 5 Conclusion In this paper, we investigate the task of learning interpretable relationships between entities, relations and concepts from open domain facts to help enriching and refining concept graphs. The Bayesian network structures are learned from open domain facts as the discovered meaningful dependencies between relations of facts and concepts of entities. Experimental results on an English dataset and a Chinese dataset reveal that the learned network structures can better identify concepts for entities based on the relations of entities from open domain facts, which will further help building a more complete concept graph. 8054 References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In Proceedings of the 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference (ISWC+ASWC), pages 722–735, Busan, Korea. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 2670–2676, Hyderabad, India. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 632–642, Lisbon, Portugal. Cassio Polpo de Campos, Zhi Zeng, and Qiang Ji. 2009. Structure learning of bayesian networks using constraints. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 113–120, Montreal, Canada. Haw-Shiuan Chang, Ziyun Wang, Luke Vilnis, and Andrew McCallum. 2018. Distributional inclusion vector embedding for unsupervised hypernymy detection. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 485–495, New Orleans, LA. C. K. Chow and C. N. Liu. 1968. Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theory, 14(3):462–467. Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2011. An analysis of open information extraction based on semantic role labeling. In Proceedings of the 6th International Conference on Knowledge Capture (K-CAP), pages 113–120, Banff, Canada. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. 
Recognizing textual entailment: Rational, evaluation and approaches - erratum. Nat. Lang. Eng., 16(1):105. Shimin Di, Yanyan Shen, and Lei Chen. 2019. Relation extraction via domain-aware transfer learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pages 1348–1357, Anchorage, AK. Jie Ding, A. Robert Calderbank, and Vahid Tarokh. 2019. Gradient information for representation and modeling. In Advances in Neural Information Processing Systems (NeurIPS), pages 2393–2402, Vancouver, Canada. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 3–10, Barcelona, Spain. Christiane Fellbaum. 1998. WordNet: An electronic lexical database. Junheng Hao, Muhao Chen, Wenchao Yu, Yizhou Sun, and Wei Wang. 2019. Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pages 1709–1719, Anchorage, AK. Sepideh Hassan-Moghaddam and Mihailo R. Jovanovic. 2018. Topology identification via growing a chow-liu tree network. In Proceedings of the 57th IEEE Conference on Decision and Control (CDC), pages 5421–5426, Miami, FL. David Heckerman, Dan Geiger, and David Maxwell Chickering. 1995. Learning bayesian networks: The combination of knowledge and statistical data. Mach. Learn., 20(3):197–243. Silu Huang, Jialu Liu, Flip Korn, Xuezhi Wang, You Wu, Dale Markowitz, and Cong Yu. 2019. Contextual fact ranking and its applications in table synthesis and compression. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pages 285– 293, Anchorage, AK. Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V Chawla, and Meng Jiang. 2019. The role of “condition”: A novel scientific knowledge graph representation and construction model. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pages 1634–1642, Anchorage, AK. Mikko Koivisto and Kismat Sood. 2004. Exact bayesian structure discovery in bayesian networks. J. Mach. Learn. Res., 5:549–573. Matthew Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, and Maximilian Nickel. 2019. Inferring concept hierarchies from text corpora via hyperbolic embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3231–3241, Florence, Italy. Martha Lewis. 2019. Compositional hyponymy with positive operators. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP), pages 638–647, Varna, Bulgaria. Guiliang Liu, Xu Li, Miningming Sun, and Ping Li. 2020a. An advantage actor-critic algorithm with 8055 confidence exploration for open information extraction. In Proceedings of the 2020 SIAM International Conference on Data Mining (SDM), pages 217–225. Guiliang Liu, Xu Li, Jiakang Wang, Mingming Sun, and Ping Li. 2020b. Large scale semantic indexing with deep level-wise extreme multi-label learning. In Proceedings of the World Wide Web Conference (WWW), pages 2585–2591, Taipei. Brandon M. Malone, Changhe Yuan, and Eric A. Hansen. 2011. Memory-efficient dynamic programming for learning optimal bayesian networks. 
In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 523–534, Jeju Island, Korea. Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2012. PATTY: A taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1135–1145, Jeju Island, Korea. Teppo Niinimaki, Pekka Parviainen, and Mikko Koivisto. 2016. Structure discovery in bayesian networks by sampling partial orders. J. Mach. Learn. Res., 17:57:1–57:47. Harinder Pal and Mausam. 2016. Demonyms and compound relational nouns in nominal open IE. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction (AKBC@NAACLHLT), pages 35–39, San Diego, CA. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Simone Paolo Ponzetto and Michael Strube. 2007. Deriving a large-scale taxonomy from wikipedia. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 1440–1445, Vancouver,Canada. Ahmed Rashed, Josif Grabocka, and Lars SchmidtThieme. 2019. Multi-relational classification via bayesian ranked non-linear embeddings. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pages 1132–1140, Anchorage, AK. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), pages 1025–1036, Dublin, Ireland. Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018. Hearst patterns revisited: Automatic hypernym detection from large text corpora. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 358– 363, Melbourne, Australia. Arpita Roy, Youngja Park, Taesung Lee, and Shimei Pan. 2019. Supervising unsupervised open information extraction models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 728–737, Hong Kong, China. Mauro Scanagatta, Cassio Polpo de Campos, Giorgio Corani, and Marco Zaffalon. 2015. Learning bayesian networks with thousands of variables. In Advances in Neural Information Processing Systems (NIPS), pages 1864–1872, Montreal, Canada. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2389– 2398, Berlin, Germany. Tomi Silander and Petri Myllym¨aki. 2006. A simple approach for finding the globally optimal bayesian network structure. In Proceedings of the 22nd Conference in Uncertainty in Artificial Intelligence (IJCAI), Cambridge, MA. Ajit P Singh and Andrew W Moore. 2005. Finding optimal Bayesian networks by dynamic programming. 
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web (WWW), pages 697–706, Banff, Canada. Mingming Sun, Xu Li, and Ping Li. 2018a. Logician and Orator: Learning from the duality between language and knowledge in open domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2119–2130, Brussels, Belgium. Mingming Sun, Xu Li, Xin Wang, Miao Fan, Yue Feng, and Ping Li. 2018b. Logician: a unified end-toend neural approach for open-domain information extraction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM), pages 556–564, Marina Del Rey, CA. 8056 Joe Suzuki. 2010. A generalization of the chow-liu algorithm and its application to statistical learning. Technical report, arXiv:1002.2240. Mostafa Tavassolipour, Mahmood Karimian, and Shohreh Kasaei. 2014. Event detection and summarization in soccer videos using bayesian network and copula. IEEE Trans. Circuits Syst. Video Techn., 24(2):291–304. Julie Weeds, Daoud Clarke, Jeremy Reffin, David J. Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), pages 2249–2259, Dublin, Ireland. Fei Wu and Daniel S. Weld. 2010. Open information extraction using wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 118–127, Uppsala, Sweden. Ho Chung Wu, Robert Wing Pong Luk, Kam-Fai Wong, and Kui-Lam Kwok. 2008. Interpreting TFIDF term weights as making relevance decisions. ACM Trans. Inf. Syst., 26(3):13:1–13:37. Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q Zhu. 2012. Probase: A probabilistic taxonomy for text understanding. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), pages 481–492, Scottsdale, AZ. Mohamed Yahya, Steven Whang, Rahul Gupta, and Alon Halevy. 2014. Renoun: Fact extraction for nominal attributes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 325–335, Doha, Qatar. Changhe Yuan and Brandon M. Malone. 2013. Learning optimal bayesian networks: A shortest path perspective. J. Artif. Intell. Res., 48:23–65. Changhe Yuan, Brandon M. Malone, and XiaoJian Wu. 2011. Learning optimal bayesian networks using A* search. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), pages 2186–2191, Barcelona, Spain. Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. 2018. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3712–3722, Salt Lake City, UT. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8057–8077 July 5 - 10, 2020. © 2020 Association for Computational Linguistics 8057 Multi-Sentence Argument Linking Seth Ebner∗ Patrick Xia∗ Ryan Culkin Kyle Rawlins Benjamin Van Durme Johns Hopkins University {seth, paxia}@cs.jhu.edu {rculkin, kgr, vandurme}@jhu.edu Abstract We present a novel document-level model for finding argument spans that fill an event’s roles, connecting related ideas in sentence-level semantic role labeling and coreference resolution. Because existing datasets for cross-sentence linking are small, development of our neural model is supported through the creation of a new resource, Roles Across Multiple Sentences (RAMS), which contains 9,124 annotated events across 139 types. We demonstrate strong performance of our model on RAMS and other event-related datasets.1 1 Introduction Textual event descriptions may span multiple sentences, yet large-scale datasets predominately annotate for events and their arguments at the sentence level. This has driven researchers to focus on sentence-level tasks such as semantic role labeling (SRL), even though perfect performance at such tasks would still enable a less than complete understanding of an event at the document level. In this work, we approach event understanding as a form of linking, more akin to coreference resolution than sentence-level SRL. An event trigger evokes a set of roles regarded as latent arguments, with these implicit arguments then potentially linked to explicit mentions in the text. Consider the example in Figure 1: the AirstrikeMissileStrike event (triggered by “bombarding”) gives rise to a frame or set of type-level roles (attacker, target, instrument, place) with the referents (“Russians”, “rebel outpost”, “aircraft”, “Syria”).2 Intuitively we recognize the possible existence of fillers for these roles, for example, the place of the particular AirstrikeMissileStrike event. These implicit arguments are linked to explicit arguments in the document (i.e., text spans). We refer to the task of finding explicit argument(s) to fill each role for an event as argument linking.
Figure 1: A passage annotated for an event’s type, trigger, and arguments. Each arc points from the trigger to the argument that fills the labeled role.
∗Equal Contribution. 1Data and code at http://nlp.jhu.edu/rams/. 2ϵ would indicate there is no explicit referent in the text.
Prior annotation of cross-sentence argument links has produced small datasets, with a focus either on a small number of predicate types (Gerber and Chai, 2010, 2012; Feizabadi and Padó, 2014) or on a small number of documents (Ruppenhofer et al., 2010). To enable the development of a neural model for argument linking, we produce Roles Across Multiple Sentences (RAMS), a dataset of 9,124 annotated events from news based on an ontology of 139 event types and 65 roles. In a 5-sentence window around each event trigger, we annotate the closest argument span for each role. Our model builds on recent ideas in span selection models (Lee et al., 2018; He et al., 2018; Ouchi et al., 2018), used in this work for the multi-sentence argument linking task for RAMS and for several other event-based datasets (Gerber and Chai, 2012; Pradhan et al., 2013; Pavlick et al., 2016, AIDA Phase 1). On RAMS our best model achieves 68.3 F1, and it achieves 73.3 F1 when event types are also known, outperforming strong baselines.
We also demonstrate effective use of RAMS as pre-training for a related dataset. Our main contributions are a novel model for argument linking and a new large-scale dataset for the task. Our dataset is annotated for arguments across multiple sentences and has broader coverage of event types and more examples than similar work. Our experiments highlight our model’s adaptability to multiple datasets. Together, these contributions further the automatic understanding of events at the document level. 2 Non-local Arguments We are not the first to consider non-local event arguments; here we review prior work and refer to O’Gorman (2019) for further reading. Whereas local (sentence-level) event arguments are well-studied as semantic role labeling—utilizing large datasets such as OntoNotes 5.0 (Weischedel et al., 2013; Pradhan et al., 2013)—existing datasets annotated for non-local arguments are too small for training neural models. Much of the effort on non-local arguments, sometimes called implicit SRL, has focused on two datasets: SemEval-2010 Task 10 (Ruppenhofer et al., 2010) and Beyond NomBank (henceforth BNB) (Gerber and Chai, 2010, 2012). These datasets are substantially smaller than RAMS: the SemEval Task 10 training set contains 1,370 frame instantiations over 438 sentences, while BNB contains 1,247 examples covering just 10 nominal predicate types. Multi-sentence AMR (MS-AMR) (O’Gorman et al., 2018; Knight et al., 2020) contains 293 documents annotated with a document-level adaptation of the Abstract Meaning Representation (AMR) formalism. O’Gorman (2019) notes that the relatively small size of the MS-AMR and SemEval datasets hinders supervised training. In contrast to these datasets, RAMS contains 9,124 annotated examples covering a wide range of nominal and verbal triggers. Under the DARPA AIDA program, the Linguistic Data Consortium (LDC) has annotated document-level event arguments under a three-level hierarchical event ontology (see Figure 2) influenced by prior LDC-supported ontologies such as ERE and ACE. These have been packaged as the AIDA Phase 1 Practice3 and Eval4 releases (henceforth AIDA-1), currently made available to performers in the AIDA program and participants in related NIST evaluations.5 AIDA-1 documents focus on recent geopolitical events relating to interactions between Russia and Ukraine. Unless otherwise noted, statistics about AIDA-1 pertain only to the Practice portion of the dataset.
3LDC2019E04 (data); LDC2019E07 (annotations). 4LDC2019E42 (data); LDC2019E77 (annotations).
Figure 2: Subset of the AIDA-1 ontology illustrating the three-level Type/Subtype/Sub-subtype event hierarchy. Dashed gray edges point to roles for two event nodes, which have one role in common (Place). (The two role sets shown are { Communicator, Recipient, Place } and { Attacker, Target, Instrument, Place }.)
For each document in LDC’s collection, only AIDA-salient events are annotated. This protocol does not guarantee coverage over the event ontology: 1,559 event triggers are annotated in the text portion of the collection, accounting for only 88 of the 139 distinct event sub-subtypes in the ontology. Our dataset, RAMS, employs the same annotation ontology but is substantially larger and covers all 139 types in the ontology. Figure 3 (§3) compares the two datasets. Across multiple datasets, a substantial number of event arguments are observed to be non-local.
For example, Gerber and Chai (2012) found that their annotation of non-local arguments added 71% (relative) role coverage to NomBank annotations. Additionally, 38.1% of the annotated events in AIDA-1 have an argument outside the sentence containing the trigger. This phenomenon is not surprising in light of the analysis of zero anaphora and definite null complements by Fillmore (1986) and the distinction between “core” and “non-core” frame elements or roles in FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005). As previous datasets have been small, various approaches have been taken to handle scarcity. To obtain more training data, Silberer and Frank (2012) created artificial instances from data annotated jointly for coreference and semantic roles. Roth and Frank (2013) automatically induced implicit arguments from pairs of comparable texts, but recovered a proportionally small set of additional arguments. Feizabadi and Padó (2015) combined existing corpora to increase and diversify sources of model supervision. Cheng and Erk (2018, 2019) approached the data scarcity problem by recasting implicit SRL as a cloze task and as a reading comprehension task, for which data can be generated automatically.
5While rarely freely released, historically such collections are eventually made available under a license to anyone, under some timeline established within a program.
The TAC KBP event argument extraction task also seeks arguments from document contexts. However, in our work we are concerned with reified events (explicit mentions) and links between event mentions and argument mentions rather than entity-level arguments (coreference clusters). 3 RAMS Motivated by the scarcity of data for training neural models to predict non-local arguments, we constructed Roles Across Multiple Sentences (RAMS), a crowd-sourced dataset with annotations for 9,124 events following the AIDA ontology. We employed the AIDA ontology in RAMS so as to be most similar to an existing corpus already being investigated by various members of the community. Each example consists of a typed trigger span and 0 or more argument spans in an English document. A trigger span is a word or phrase that evokes a certain event type in context, while argument spans denote role-typed participants in the event (e.g., the Recipient). Trigger and argument spans are token-level [start, end] offsets into a tokenized document. Typically, event and relation datasets annotate only the argument spans that are in the same sentence as the trigger, but we present annotators with a multi-sentence context window surrounding the trigger. Annotators may select argument spans in any sentence in the context window. 3.1 Dataset Description Data Source We used Reddit, a popular internet forum, to filter a collection of news articles to be topically similar to AIDA-1. After applying a set of criteria based on keywords, time period, and popularity (listed in Appendix A.1), we identified approximately 12,000 news articles with an average length of approximately 40 sentences. Annotation We manually constructed a mapping from each event ((sub-)sub)type to a list of lexical units (LUs) likely to evoke that type.6
6For example, Conflict/Attack/SetFire is evoked by inferno, blaze, and arson (and word forms).
This mapping was designed to give high precision and
low recall, in that for a given (Type, LUs) pair, the items in LUs are all likely to evoke the Type, although LUs can omit items that also evoke the Type. On average, |LUs| = 3.9. We performed a soft match7 between every LU and every word in our text collection to select candidate sentences for each event type. This matching procedure produced approximately 94,000 candidates, which we balanced by sampling the same number of sentences for each LU. Candidate sentences were then vetted by crowdsourcing to ensure that they evoked their associated event type and had positive factuality. We collected judgments on approximately 17,500 candidate sentences, of which 52% were determined to satisfy these constraints, yielding 9,124 sentences containing a LU trigger. Using these sentences we then collected multi-sentence annotations, presenting annotators with a 5-sentence window containing two sentences of context before the sentence with the trigger and two sentences after.8 Annotators then selected in the context window a span to fill each of the event’s roles. A window size of five sentences was chosen based on internal pilots and supported by our finding that 90% of event arguments in AIDA-1 are recoverable in this window size. Similarly, Gerber and Chai (2010) found that in their data almost 90% of implicit arguments can be resolved in the two sentences preceding the trigger.9 Arguments fall close to the trigger in RAMS as well: 82% of arguments occur in the same sentence as the trigger. On average, we collected 66 full annotations (trigger and arguments) per event type. Table 1 shows dataset size and coverage. All aspects of the protocol, including the annotation interface and instructions, are included in Appendix A.
7We stem all words and ignore case. 8If fewer than two sentences appeared before/after the trigger, annotators were shown as many sentences as were available. 9Arguments following the trigger were not annotated.
             Train   Dev    Test   Total
Docs         3,194   399    400    3,993
Examples     7,329   924    871    9,124
Event Types  139     131    –      139
Roles        65      62     –      65
Arguments    17,026  2,188  2,023  21,237
Table 1: Sizes and coverage of RAMS splits. RAMS covers all of the 139 event types and 65 role types in the AIDA Phase 1 ontology.
Inter-Annotator Agreement We randomly selected 93 tasks for redundant annotation in order to measure inter-annotator agreement, collecting five responses per task from distinct users. 68.5% of the time, all annotators mark the role as either absent or present. Less frequently (21.7%), four of the five annotators agree, and rarely (9.8%) is there strong disagreement. We compute pairwise agreement for span boundaries. For each annotated (event, role) combination, we compare pairs of spans for which both annotators believe the role is present. 55.3% of the pairs agree exactly. Allowing for a fuzzier match, such as to account for whether one includes a determiner, spans whose boundaries differ by one token have a much higher agreement of 69.9%. Fewer spans agree on the start boundary (59.8%) than on the end (73.5%), while 78.0% match at least one of the two boundaries. We demonstrate data quality in §5.2 by showing its positive impact on a downstream task.
Figure 3: Comparison of frequency of event types in various datasets sorted by decreasing frequency in that dataset. RAMS has a heavier tail than AIDA-1 and BNB and broader coverage of events. (Axes: event types vs. frequency; series: RAMS train, AIDA-1, BNB.)
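The pairwise span-boundary agreement statistics above reduce to a few simple comparisons; the sketch below is ours (in particular, the exact definition of the off-by-one “fuzzy” match is an assumption), not the released RAMS tooling.

```python
# A small sketch of the pairwise span-agreement measures reported above:
# exact match, boundaries each differing by at most one token, and agreement
# on the start, end, or either boundary. Spans are (start, end) token offsets.
def span_agreement(pairs):
    """pairs: list of ((s1, e1), (s2, e2)) spans marked by two annotators for
    the same (event, role), where both believe the role is present."""
    n = len(pairs)
    exact = sum(a == b for a, b in pairs)
    fuzzy = sum(abs(a[0] - b[0]) <= 1 and abs(a[1] - b[1]) <= 1 for a, b in pairs)
    start = sum(a[0] == b[0] for a, b in pairs)
    end = sum(a[1] == b[1] for a, b in pairs)
    either = sum(a[0] == b[0] or a[1] == b[1] for a, b in pairs)
    return {name: count / n for name, count in
            [("exact", exact), ("fuzzy", fuzzy), ("start", start),
             ("end", end), ("either_boundary", either)]}

if __name__ == "__main__":
    print(span_agreement([((3, 5), (3, 5)), ((2, 4), (3, 4)), ((7, 9), (1, 2))]))
```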
Comparisons to Related Datasets Comparisons of event type coverage among RAMS, AIDA-1, and BNB (Gerber and Chai, 2010, 2012) are given in Figure 3. RAMS provides larger and broader coverage of event types than do AIDA-1 and BNB. By design, BNB focuses on only a few predicate types, but we include its statistics for reference. More figures regarding type and role coverage are included in Appendix A.4. Related Protocols Feizabadi and Pad´o (2014) also considered the case of crowdsourcing annotations for cross-sentence arguments. Like us, they provided annotators with a context window rather than the whole document, annotating two frames each with four roles over 384 predicates. Annotators in that work were shown the sentence containing the predicate and the three previous sentences, unlike ours which shows two preceding and two following sentences. Rather than instructing annotators to highlight spans in the text (“marking”), Feizabadi and Pad´o (2014) directed annotators to fill in blanks in templatic sentences (“gap filling”). We in contrast require annotators to highlight mention spans directly in the text. Our protocol of event type verification followed by argument finding is similar to the protocol supported by interfaces such as SALTO (Burchardt et al., 2006) and that of Fillmore et al. (2002). 4 Model We formulate argument linking as follows, similar to the formulation in Das et al. (2010). Assume a document D contains a set of described events E, each designated by a trigger—a text span in D. The type of an event e determines the set of roles the event’s arguments may take, denoted Re. For each e ∈E, the task is to link the event’s roles with arguments—text spans in D—if they are attested. Specifically, one must find for each e all (r, a) pairs such that r ∈Re and a ∈D. This formulation does not restrict each role to be filled by only one argument, nor does it restrict each explicit argument to take at most one role. 4.1 Architecture Our model architecture is related to recent models for SRL (He et al., 2018; Ouchi et al., 2018). Contextualized text embeddings are used to form candidate argument span representations, A. These are then pruned and scored alongside the trigger span and learned role embeddings to determine the best argument span (possibly none) for each event and role, i.e., argmaxa∈AP(a | e, r) for each event e ∈E and role r ∈Re. Representations To represent text spans, we adopt the convention from Lee et al. (2017) that has been used for a broad suite of core NLP tasks (Swayamdipta et al., 2018; He et al., 2018; Tenney et al., 2019b). A bidirectional LSTM encodes each sentence’s contextualized embeddings (Peters et al., 2018; Devlin et al., 2018). The hidden states at the start and end of the span are concatenated along with a feature vector for the size of the span and a soft head word vector produced by a learned attention mask over the word vectors 8061 (GloVe embeddings (Pennington et al., 2014) and character-level convolutions) within the span. We use this method to form representations of trigger spans, e, and of candidate argument spans, a. We learn a separate embedding, r, for each role in the ontology, r ∈R. 
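A condensed sketch of this span representation convention appears below; the dimensions, the bucketed width feature, and the module names are our assumptions (this is not the authors' released implementation), but the structure follows the description above: BiLSTM endpoint states, a span-size feature, and an attention-weighted soft head over the span's word vectors.

```python
# Sketch of span representations: [h_start; h_end; soft_head; width_embedding].
import torch
import torch.nn as nn

class SpanRepresenter(nn.Module):
    def __init__(self, token_dim=300, hidden_dim=200, width_buckets=30, width_dim=20):
        super().__init__()
        self.encoder = nn.LSTM(token_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.width_embed = nn.Embedding(width_buckets, width_dim)
        self.head_scorer = nn.Linear(token_dim, 1)  # attention over word vectors

    def forward(self, token_vecs, spans):
        """token_vecs: (seq_len, token_dim); spans: list of (start, end), inclusive."""
        states, _ = self.encoder(token_vecs.unsqueeze(0))
        states = states.squeeze(0)                      # (seq_len, 2 * hidden_dim)
        reps = []
        for start, end in spans:
            inside = token_vecs[start:end + 1]          # word vectors in the span
            attn = torch.softmax(self.head_scorer(inside).squeeze(-1), dim=0)
            soft_head = attn @ inside                   # (token_dim,)
            width = self.width_embed(torch.tensor(end - start))
            reps.append(torch.cat([states[start], states[end], soft_head, width]))
        return torch.stack(reps)                        # (num_spans, span_dim)

# Example with random stand-ins for GloVe + char-CNN vectors of a 12-token sentence.
vecs = torch.randn(12, 300)
print(SpanRepresenter()(vecs, [(0, 2), (5, 5), (8, 11)]).shape)
```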
Since our objective is to link candidate arguments to event-role pairs, we construct an event-role representation10 by applying a feed-forward neural network ($F_{\tilde{a}}$) to the event trigger span and role embedding:
$\tilde{a}_{e,r} = F_{\tilde{a}}([e; r])$ (1)
10As a role for an event evokes an implicit discourse referent, this can be regarded as an implicit discourse referent representation.
This method is similar to one for forming edge representations for cross-sentence relation extraction (Song et al., 2018), but contrasts with prior work which limits the interaction between r and e (He et al., 2018; Tenney et al., 2019b). Pruning Given a document with n tokens, there are $O(n^2)$ candidate argument text spans, which leads to intractability for large documents. Following Lee et al. (2017) and He et al. (2018), we consider within-sentence spans up to a certain width (giving $O(n)$ spans) and score each span, a, using a learned unary function of its representation: $s_A(a) = w_A^\top F_A(a)$. We keep the top $\lambda_A n$ spans ($\lambda_A$ is a hyperparameter) and refer to this set of high-scoring candidate argument spans as A. In an unpruned model, we need to create at least $\sum_e |R_e|$ event-role representations and evaluate $\Omega(n \sum_e |R_e|)$ combinations of events, roles, and arguments, which can become prohibitively large when there are numerous events and roles. Assuming the number of events is linear in document length, the number of combinations would be quadratic in document length (rather than quadratic in sentence length as in He et al. (2018)). Lee et al. (2018) addressed this issue in coreference resolution, a different document-level task, by implementing a coarse pruner to limit the number of candidate spans that are subsequently scored. For our model, any role can potentially be filled (if the event type is not known). Thus, we do not wish to prematurely prune (e, r) pairs, so we must further prune A. Rather than scoring a ∈ A with every event-role pair (e, r), we assign a score between a and every event e. This relaxation reflects a loose notion of how likely an argument span is to participate in an event, which can be determined irrespective of a role:
$s_c(e, a) = e^\top W_c a + s_A(a) + s_E(e) + \phi_c(e, a)$
where $W_c$ is learned and $\phi_c(e, a)$ are task-specific features. We use $A_e \subseteq A$ to refer to the top-$k$-scoring candidate argument spans in relation to e. Scoring We introduce a link scoring function, $l(a, \tilde{a}_{e,r})$, between candidate spans $a \in A_e$ and event-role pairs $\tilde{a}_{e,r} = (e, r) \in E \times R$.11 The scoring function decomposes as:
$l(a, \tilde{a}_{e,r}) = s_{E,R}(e, r) + s_{A,R}(a, r) + s_l(a, \tilde{a}_{e,r}) + s_c(e, a), \quad a \neq \epsilon$ (2)
$s_E(e) = w_E^\top F_E(e)$
$s_{E,R}(e, r) = w_{E,R}^\top F_{E,R}([e; r])$
$s_{A,R}(a, r) = w_{A,R}^\top F_{A,R}([a; r])$
$s_l(a, \tilde{a}_{e,r}) = w_l^\top F_l([a; \tilde{a}_{e,r}; a \circ \tilde{a}_{e,r}; \phi_l(a, \tilde{a}_{e,r})])$ (3)
where $\phi_l(a, \tilde{a}_{e,r})$ is a feature vector containing information such as the (bucketed) token distance between e and a.12 $F_x$ are feed-forward neural networks, and $w_x$ are learned weights. The decomposition is inspired by Lee et al. (2017) and He et al. (2018), while the direct scoring of candidate arguments against event-role pairs, $s_l(a, \tilde{a}_{e,r})$, bears similarities to the approach taken by Schenk and Chiarcos (2016), which finds the candidate argument whose representation is most similar to the prototypical filler of a frame element (role). Learning We denote “no explicit argument” by ϵ and assign it link score $l(\epsilon, \tilde{a}_{e,r}) \triangleq 0$, which acts as a threshold for the link function. For every event-role-argument triple (e, r, a), we maximize
$P(a \mid e, r) = \frac{\exp\{l(a, \tilde{a}_{e,r})\}}{\sum_{a' \in A_e \cup \{\epsilon\}} \exp\{l(a', \tilde{a}_{e,r})\}}$.
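Putting the pieces together, the linking head sketched below builds the event-role representation of Equation (1), scores candidates roughly in the spirit of Equations (2)-(3), and normalizes over the candidates plus the epsilon option (fixed score 0). It is a simplified sketch: the $s_{E,R}$ and $s_c$ terms and the distance feature $\phi_l$ are omitted, and all dimensions and names are assumptions rather than the exact released model.

```python
# Schematic argument-linking head: event-role representation, link scores,
# and a softmax over candidate arguments plus "no explicit argument".
import torch
import torch.nn as nn

def ffnn(in_dim, hidden=150, out_dim=150):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class ArgumentLinker(nn.Module):
    def __init__(self, span_dim, num_roles, role_dim=50, rep_dim=150):
        super().__init__()
        self.roles = nn.Embedding(num_roles, role_dim)
        self.f_tilde = ffnn(span_dim + role_dim, out_dim=rep_dim)   # Eq. (1)
        self.f_ar = ffnn(span_dim + role_dim, out_dim=1)            # s_{A,R}
        self.proj_a = nn.Linear(span_dim, rep_dim)
        self.f_l = ffnn(rep_dim * 3, out_dim=1)                     # Eq. (3), no phi_l

    def forward(self, trigger, role_id, candidates):
        """trigger: (span_dim,); candidates: (num_cands, span_dim)."""
        r = self.roles(torch.tensor(role_id))
        a_tilde = self.f_tilde(torch.cat([trigger, r]))             # event-role rep
        a_proj = self.proj_a(candidates)
        pair = torch.cat([a_proj, a_tilde.expand_as(a_proj),
                          a_proj * a_tilde], dim=-1)
        link = (self.f_l(pair).squeeze(-1) +
                self.f_ar(torch.cat([candidates,
                                     r.expand(len(candidates), -1)],
                                    dim=-1)).squeeze(-1))
        # Append epsilon with a fixed link score of 0, then normalize.
        scores = torch.cat([link, torch.zeros(1)])
        return torch.softmax(scores, dim=0)  # last entry = P(no explicit argument)

linker = ArgumentLinker(span_dim=1120, num_roles=65)
probs = linker(torch.randn(1120), role_id=3, candidates=torch.randn(7, 1120))
print(probs.shape, float(probs.sum()))
```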
Decoding We experiment with three decoding strategies: argmax, greedy, and type-constrained. If we assume each role is satisfied by exactly one argument (potentially ϵ), we can perform argmax decoding independently for each role: ˆa = argmaxa∈Ae∪{ϵ}P(a | e, r) 11If the type of e is known, then we could restrict r ∈Re. 12Distance = max(estart −aend, astart −eend). 8062 To instead predict multiple non-overlapping arguments per role, we could use P(ϵ | e, r) as a threshold in greedy decoding (Ouchi et al., 2018). We may know the gold event types and the mapping between events e and their permitted roles, Re. While this information can be used during training, we take a simpler approach of using it for type-constrained decoding (TCD). If an event type allows mr arguments for role r, we keep only the top-scoring mr arguments based on link scores. 4.2 Related Models Our model is inspired by several recent span selection models (He et al., 2018; Lee et al., 2018; Ouchi et al., 2018), as well as the long line of neural event extraction models (Chen et al., 2015; Nguyen et al., 2016, inter alia). O’Gorman (2019) speculates a joint coreference and SRL model in which implicit discourse referents are generated for each event predicate and subsequently clustered with the discovered referent spans using a model for coreference, which is similar to the approach of Silberer and Frank (2012). O’Gorman (2019) further claims that span selection models would be difficult to scale to the document level, which is the regime we are most interested in. We focus on the implicit discourse referents (i.e., the event-role representations) for an event and link them to argument mentions, rather than cluster them using a coreference resolution system or aggregate event structures across multiple events and documents (Wolfe et al., 2015). Our approach is also similar to the one used by Das et al. (2010) for FrameNet parsing. CoNLL 2012 SRL As our model bears similarities to the SRL models proposed by He et al. (2018) and Ouchi et al. (2018), we evaluate our model on the sentence-level CoNLL 2012 dataset as a sanity check. Based on a small hyperparameter sweep, our model achieves 81.4 F1 when given gold predicate spans and 81.2 F1 when not given gold predicates.13 Our model’s recall is harmed because our span pruning occurs at the document level rather than at the sentence level, which leads to overpruning in some sentences. Although our model is designed to accommodate cross-sentence links, it maintains competitive performance on sentence-level SRL. 13We use ELMo (Peters et al., 2018) in these experiments. He et al. (2018) achieve 85.5 F1 with gold predicates and 82.9 F1 without gold predicates, and Ouchi et al. (2018) achieve 86.2 F1 with gold predicates. Model Dev. F1 P R F1 Our model 69.9 62.8 74.9 68.3 Our modelTCD 75.1 78.1 69.2 73.3 Most common 17.3 15.7 15.7 15.7 Fixed triggerTCD 60.2 83.7 41.9 55.8 Context as triggerTCD 62.1 80.5 45.8 58.4 Distractor args 24.3 60.5 15.1 24.2 Distractor argsTCD 24.2 68.8 14.3 23.7 No given args 8.7 20.2 3.5 6.0 No given argsTCD 8.4 26.6 3.1 5.5 Table 2: P(recision), R(ecall), and F1 on RAMS development and test data. TCD designates the use of ontology-aware type-constrained decoding. 5 RAMS Experiments and Results In the following experiments, for each event the model is given the (gold) trigger span and the (gold) spans of the arguments. The model finds for each role the best argument(s) to fill it. Predictions are returned as trigger-role-argument triples. 
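The three decoding strategies described above, applied to the probabilities P(a | e, r), can be sketched as follows; the dictionary-based data layout and the handling of the epsilon entry are illustrative assumptions rather than the authors' code.

```python
# Sketch of argmax, greedy, and type-constrained decoding over link probabilities.
def argmax_decode(probs_by_role):
    """probs_by_role: role -> list of (argument_span_or_None, probability),
    where None stands for epsilon. Keeps at most one argument per role."""
    out = []
    for role, scored in probs_by_role.items():
        best_arg, _ = max(scored, key=lambda x: x[1])
        if best_arg is not None:
            out.append((role, best_arg))
    return out

def greedy_decode(probs_by_role):
    """Keep every argument scoring above epsilon for its role (multiple allowed;
    the non-overlap filtering used in practice is omitted for brevity)."""
    out = []
    for role, scored in probs_by_role.items():
        eps = next(p for a, p in scored if a is None)
        out.extend((role, a) for a, p in scored if a is not None and p > eps)
    return out

def type_constrained_decode(predictions, scores, max_args_per_role):
    """TCD: given the gold event type's role constraints (role -> m_r), keep only
    the top-scoring m_r arguments per role and drop roles the type does not allow."""
    out = []
    for role in set(r for r, _ in predictions):
        if role not in max_args_per_role:
            continue
        args = sorted((a for r, a in predictions if r == role),
                      key=lambda a: scores[(role, a)], reverse=True)
        out.extend((role, a) for a in args[:max_args_per_role[role]])
    return out
```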
We use feature-based BERT-base (Devlin et al., 2018)—mixing layers 9 through 12—by splitting the documents into segments of size 512 subtokens and encoding each segment separately.14 We perform preliminary sweeps across hyperparameter values, which are then fixed while we perform a more exhaustive sweep across scoring features. We also compare argmax decoding with greedy decoding during training. The best model is selected based on F1 on the development set, and ablations are reported in Table 3. Our final model uses greedy decoding, sA,R, and sl and omits sE,R and sc (see Equation 2). More details can be found in Appendix B. The results using our model with greedy decoding and TCD are reported in Table 2. We also report performance of the following baselines: 1) choosing for each link the most common role (place), 2) using the same fixed trigger representation across examples, and 3) using the full context window as the trigger. Additionally, we experiment with two other data conditions: 1) linking the correct argument(s) from among a set of distractor candidate arguments provided by a constituency parser (Kitaev and Klein, 2018),15 and 2) finding the correct argument(s) from among all possible spans up to a fixed length. 140.2% of the training documents span multiple segments. 15We take as the distractor arguments all (potentially overlapping) NPs predicted by the parser. On average, this yields 44 distractors per training document. 8063 Model Greedy TCD Our model 69.9 75.1 - distance score 69.0 74.3 - sl(a, ˜ae,r) 54.9 58.4 - sA,R(a, r) 68.6 73.8 + sE,R(e, r) 69.5 74.4 + sc(e, a) 65.9 70.6 w/ argmax decoding 69.9 75.1 BERT 6–9 69.6 75.3 ELMo 68.5 75.2 Table 3: F1 on RAMS dev data when link score components are separately included/excluded (Equation 2) or other contextualized encoders are used in the best performing model. TCD = type-constrained decoding. For the distractor experiment, we use the same hyperparameters as for the main experiment. When not given gold argument spans, we consider all spans up to 5 tokens long and change only the hyperparameters that would prune less aggressively. We hypothesize that the low performance in this setting is due to the sparsity of annotated spans compared to the set of all enumerated spans. In contrast, datasets such as CoNLL 2012 are more densely annotated, so the training signal is not as affected when the model must determine argument spans in addition to linking them. Finally, we examine the effect of TCD to see whether the model effectively uses gold event types if they are given. TCD filters out illegal predictions, boosting precision. Recall is still affected by this decoding strategy because the model may be more confident in the wrong argument for a given role, thus filtering out the less confident, correct one. Nevertheless, using gold types at test time generally leads to gains in performance. 5.1 Analysis Ablations Ablation studies on development data for components of the link score as well as the contextualized encoder and decoding strategy are shown in Table 3. Type-constrained decoding based on knowledge of gold event types improves F1 in all cases because it removes predictions that are invalid with respect to the ontology. The most important link score component is the score between a combined event-role and a candidate argument. 
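The feature-based, segment-wise encoding described at the start of this section might look roughly like the sketch below; the checkpoint name, the softmax-normalized scalar mix over layers 9-12, and the omission of special tokens are assumptions on our part rather than the authors' setup.

```python
# Rough sketch: split a long document into 512-subtoken segments, encode each
# with a frozen BERT-base, and mix hidden layers 9-12 with learned weights.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased")
mix_weights = torch.nn.Parameter(torch.zeros(4))  # one weight per layer 9..12

def encode_document(text, segment_len=512):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    segment_vecs = []
    for start in range(0, len(ids), segment_len):
        chunk = torch.tensor([ids[start:start + segment_len]])
        with torch.no_grad():  # feature-based: BERT itself is not fine-tuned
            layers = bert(chunk, output_hidden_states=True).hidden_states
        stacked = torch.stack(layers[9:13])            # (4, 1, chunk_len, 768)
        mixed = (torch.softmax(mix_weights, 0).view(4, 1, 1, 1) * stacked).sum(0)
        segment_vecs.append(mixed.squeeze(0))          # (chunk_len, 768)
    return torch.cat(segment_vecs, dim=0)              # (num_subtokens, 768)
```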
This result follows intuitions that sl is the primary component of the link score since it directly captures the compatibility of the explicit argument and the implicit argument represented by the event-role pair. Dist. # Gold # Predict P R F1 -2 79 (26) 69 (21) 81.2 70.9 75.7 -1 164 (33) 151 (27) 76.8 70.7 73.7 0 1,811 (61) 1,688 (51) 77.7 72.4 75.0 1 87 (24) 83 (22) 78.3 74.7 76.5 2 47 (18) 39 (14) 87.2 72.3 79.1 Total 2,189 (62) 2,030 (52) 78.0 72.3 75.1 Table 4: Performance breakdown by distance (number of sentences) between argument and event trigger for our model using TCD over the development data. Negative distances indicate that the argument occurs before the trigger. # Gold and # Predict list the number of arguments (and unique roles) at that distance. We also experiment with both ELMo (Peters et al., 2018) and BERT layers 6–9, which were found to have the highest mixture weights for SRL by Tenney et al. (2019a). We found that BERT generally improves over ELMo and layers 9–12 often perform better than layers 6–9. Argument–Trigger Distance One of the differentiating components of RAMS compared to SRL datasets is its non-local annotation of arguments. At the same time, RAMS uses naturally occurring text so arguments are still heavily distributed within the same sentence as the trigger (Figure 5). This setting allows us to ask whether our model accurately finds arguments outside of the sentence containing the trigger despite the non-uniform distribution. In Table 4, we report F1 based on distance on the development set and find that performance on distant arguments is comparable to performance on local arguments, demonstrating the model’s ability to handle non-local arguments. Role Embeddings and Confusion We present in Figure 4 the cosine similarities between the learned 50-dimensional role embeddings in our model and also the errors made by the model under argmax decoding on the dev set.16 Some roles are highly correlated. For example, origin and destination have the most similar embeddings, possibly because they co-occur frequently and have the same entity type. Conversely, negatively correlated roles have different entity types or occur in different events, such as communicator compared to destination and artifact. We also observe that incorrect predictions are made more often between highly correlated roles and err 16Analysis of the confusion matrix with type-constrained decoding is less meaningful because the constraints, which rely on gold event types, filter out major classes of errors. 8064 Figure 4: Embedding similarity (top) and rownormalized confusion (bottom) between roles for the 15 most frequent roles with our model. The full figures are included in Appendix C. Best viewed in color. on the side of the more frequent role, as most errors occur below the diagonal. Examples We present predictions from the development set which demonstrate some phenomena of interest. These are made without TCD, illustrating the model’s predictions without knowledge of gold event types. In Table 5, the first example demonstrates the model’s ability to link a non-local argument which occurs in the sentence before the trigger. Greedy decoding helps the model find multiple arguments satisfying the same participant role, which also appear on either side of the trigger. In the second example, the model correctly predicts the driverpassenger, one of the rarer roles in RAMS (17 instances in the training set), consistent with the gold AccidentCrash event type. 
In Table 6, the model fills roles corresponding to both the Death and the gold JudicialConsequences event types, thereby mixing roles from different event types. The predictions are plausible when interpreted in context and would be more accurate under TCD. The EU’s leaders PARTICIPANT in Brussels are expected to play hardball in negotiating Britain’s exit, to send a message to other states that might be contemplating a similar move. “Informal meeting of EU 27 next week without PM in the room to decide common negotiating position vs UK PARTICIPANT on exit negotiations” —Faisal Islam. SPEAKER: I’m Mary Ann Mendoza, the mother of Sergeant Brandon Mendoza DRIVERPASSENGER , who was killed in a violent head-on collision in Mesa PLACE . Table 5: Two examples of correct predictions on the development set. “Many people are saying that the Iranians KILLER killed the scientist who helped the US because of Hillary Clinton’s hacked emails.” —8 August, Twitter. Shahran Amiri VICTIM, DEFENDANT , the nuclear scientist executed in Iran PLACE last week, ... “Many people are saying that the Iranians JUDGECOURT killed the scientist who helped the US CRIME because of Hillary Clinton’s hacked emails.” —8 August, Twitter. Shahran Amiri DEFENDANT , the nuclear scientist executed in Iran PLACE last week, ... Table 6: A partially correct prediction (top) and its corresponding gold annotations (bottom). 5.2 AIDA Phase 1 We also investigate how well RAMS serves as pre-training data for AIDA-1. A model using the hyperparameters of our best-performing RAMS model and trained on just English AIDA-1 Practice data achieves 19.1 F1 on the English AIDA-1 Eval data under greedy decoding and 18.2 F1 with TCD. When our best-performing RAMS model is fine-tuned to the AIDA task by further training on the AIDA-1 data, performance is improved to 24.4 F1 under greedy decoding and 24.8 F1 with TCD. The crowdsourced annotations in RAMS are therefore of sufficient quality to serve as augmentation to LDC’s AIDA-1. Experimental details are available in Appendix D. 6 Other Datasets 6.1 Beyond NomBank The Beyond NomBank (BNB) dataset collected by Gerber and Chai (2010) and refined by Gerber and Chai (2012) contains nominal predicates (event triggers) and multi-sentence arguments, both of which are properties shared with RAMS. To accommodate our formulation of the argu8065 Field Baseline* Our Model Victim Name 9.3 (54.1) 62.2 (69.6) Shooter Name 4.7 (24.1) 53.1 (57.8) Location 12.2 (18.9) 34.9 (63.3) Time 68.1 (69.3) 62.9 (69.4) Weapon 1.1 (17.9) 32.5 (49.6) Table 7: Strict (and approximate) match F1 on GVDB. Due to the different data splits and evaluation conditions, we are not directly comparable to the baseline (Pavlick et al., 2016), provided only for reference. ment linking task, we modify the BNB data in two ways: 1) we merge “split” arguments, which in all but one case are already contiguous spans; and 2) we reduce each cluster of acceptable argument fillers to a set containing only the argument closest to the trigger. We also make modifications to the data splits for purposes of evaluation. Gerber and Chai (2012) suggest evaluation be done using cross-validation on shuffled data, but this may cause document information to leak between the train and evaluation folds. To prevent such leakage and to have a development set for hyperparameter tuning, we separate the data into train, dev, and test splits with no document overlap. Additional data processing details and hyperparameters are given in Appendix E. 
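The BNB preprocessing just described, reducing each cluster of acceptable fillers to the argument closest to the trigger and splitting by document to prevent leakage, is sketched below; the record layout and the clamped version of the token distance (cf. footnote 12) are our assumptions, not the authors' scripts.

```python
# Sketch of BNB data preparation: closest-filler selection and document-level splits.
import random

def token_distance(trigger, span):
    """Distance between [start, end] token spans; 0 if they overlap (an assumption)."""
    return max(trigger[0] - span[1], span[0] - trigger[1], 0)

def closest_filler(trigger_span, cluster):
    """Reduce a cluster of acceptable argument spans to the one nearest the trigger."""
    return min(cluster, key=lambda span: token_distance(trigger_span, span))

def split_by_document(examples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Assign whole documents to train/dev/test so no document is shared."""
    docs = sorted({ex["doc_id"] for ex in examples})
    random.Random(seed).shuffle(docs)
    n = len(docs)
    cut1, cut2 = int(ratios[0] * n), int((ratios[0] + ratios[1]) * n)
    fold = {d: ("train" if i < cut1 else "dev" if i < cut2 else "test")
            for i, d in enumerate(docs)}
    splits = {"train": [], "dev": [], "test": []}
    for ex in examples:
        splits[fold[ex["doc_id"]]].append(ex)
    return splits
```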
When given gold triggers and argument spans, our model achieves 75.4 F1 on dev data and 76.6 F1 on test data. 6.2 Gun Violence Database The Gun Violence Database (GVDB) (Pavlick et al., 2016) is a collection of news articles from the early 2000s to 2016 with annotations specifically related to a gun violence event. We split the corpus chronologically into a training set of 5,056 articles, a development set of 400, and a test set of 500. We use this dataset to perform a MUC-style information extraction task (Sundheim, 1992). While GVDB’s schema permits any number of shooters or victims, we simply predict the first mention of each type. Pavlick et al. (2016) perform evaluation in two settings: a strict match is awarded if the predicted string matches the gold string exactly, while an approximate match is awarded if either string contains the other. Assuming each document contains a single gun violence event triggered by the full document, our goal is to predict the value (argument) for each slot (role) for the event. As each slot is filled by exactly one value, we use argmax decoding. While the baseline experiments of Pavlick et al. (2016) made sentence-level predictions focusing on five attributes, we make document-level predictions and consider the larger set of attributes. Table 7 shows our model’s performance on the shared subset of attributes, but the numerical values are not directly comparable because the prior work makes predictions on the full dataset and also combines some roles. Our results show that our model is suitable for information extraction tasks like slot filling. Appendix F contains information on hyperparameters and performance on the full set of roles. To our knowledge, our results are a substantial improvement over prior attempts to predict attributes of gun violence event reports, and we make our models available in the hopes of assisting social scientists in their corpus studies. 7 Conclusion We introduced a novel model for document-level argument linking. Because of the small amount of existing data for the task, to support training our neural framework we constructed the RAMS dataset consisting of 9,124 events covering 139 event types. Our model outperforms strong baselines on RAMS, and we also illustrated its applicability to a variety of related datasets. We hope that RAMS will stimulate further work on multisentence argument linking. Acknowledgments We thank Craig Harman for his help in developing the annotation interface. We also thank Tongfei Chen, Yunmo Chen, members of JHU CLSP, and the anonymous reviewers for their helpful discussions and feedback. This work was supported in part by DARPA AIDA (FA8750-18-2-0015) and IARPA BETTER (#2019-19051600005). The views and conclusions contained in this work are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, or endorsements of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In COLING 8066 1998 Volume 1: The 17th International Conference on Computational Linguistics. Aljoscha Burchardt, Katrin Erk, Anette Frank, Andrea Kowalski, and Sebastian Pado. 2006. SALTO - a versatile multi-level annotation tool. 
In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA). Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176, Beijing, China. Association for Computational Linguistics. Pengxiang Cheng and Katrin Erk. 2018. Implicit argument prediction with event knowledge. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 831–840, New Orleans, Louisiana. Association for Computational Linguistics. Pengxiang Cheng and Katrin Erk. 2019. Implicit argument prediction as reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6284–6291. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 948–956, Los Angeles, California. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Parvin Sadat Feizabadi and Sebastian Pad´o. 2014. Crowdsourcing annotation of non-local semantic roles. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 226–230, Gothenburg, Sweden. Association for Computational Linguistics. Parvin Sadat Feizabadi and Sebastian Pad´o. 2015. Combining seemingly incompatible corpora for implicit semantic role labeling. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 40–50, Denver, Colorado. Association for Computational Linguistics. Charles J Fillmore. 1986. Pragmatically controlled zero anaphora. In Annual Meeting of the Berkeley Linguistics Society, volume 12, pages 95–107. Charles J. Fillmore, Collin F. Baker, and Hiroaki Sato. 2002. The FrameNet database and software tools. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02), Las Palmas, Canary Islands - Spain. European Language Resources Association (ELRA). Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A study of implicit arguments for nominal predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583–1592, Uppsala, Sweden. Association for Computational Linguistics. Matthew Gerber and Joyce Y. Chai. 2012. Semantic role labeling of implicit arguments for nominal predicates. Computational Linguistics, 38(4):755–798. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369, Melbourne, Australia. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. Kevin Knight, Bianca Badarau, Laura Baranescu, Claire Bonial, Madalina Bardocz, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, Tim O’Gorman, and Nathan Schneider. 2020. Abstract Meaning Representation (AMR) annotation release 3.0 LDC2020T02. Linguistic Data Consortium, Philadelphia, PA. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-tofine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent 8067 neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309, San Diego, California. Association for Computational Linguistics. Tim O’Gorman, Michael Regan, Kira Griffitt, Ulf Hermjakob, Kevin Knight, and Martha Palmer. 2018. AMR beyond the sentence: the multi-sentence AMR corpus. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3693–3702, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Timothy J O’Gorman. 2019. Bringing Together Computational and Linguistic Models of Implicit Role Interpretation. PhD dissertation, University of Colorado at Boulder. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1630–1642, Brussels, Belgium. Association for Computational Linguistics. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Ellie Pavlick, Heng Ji, Xiaoman Pan, and Chris Callison-Burch. 2016. The gun violence database: A new task and data set for NLP. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1018–1024, Austin, Texas. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143– 152, Sofia, Bulgaria. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Michael Roth and Anette Frank. 2013. Automatically identifying implicit arguments to improve argument linking and coherence modeling. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 306–316, Atlanta, Georgia, USA. Association for Computational Linguistics. Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2010. SemEval-2010 task 10: Linking events and their participants in discourse. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 45–50, Uppsala, Sweden. Association for Computational Linguistics. Niko Schenk and Christian Chiarcos. 2016. Unsupervised learning of prototypical fillers for implicit semantic role labeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1473–1479, San Diego, California. Association for Computational Linguistics. Carina Silberer and Anette Frank. 2012. Casting implicit role linking as an anaphora resolution task. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 1–10, Montr´eal, Canada. Association for Computational Linguistics. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. N-ary relation extraction using graphstate LSTM. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2226–2235, Brussels, Belgium. Association for Computational Linguistics. Beth M. Sundheim. 1992. Overview of the fourth message understanding evaluation and conference. In FOURTH MESSAGE UNDERSTANDING CONFERENCE (MUC-4), Proceedings of a Conference Held in McLean, Virginia, June 16-18, 1992. Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3772–3782, Brussels, Belgium. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In 8068 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? 
Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA. Travis Wolfe, Mark Dredze, and Benjamin Van Durme. 2015. Predicate argument alignment using a global coherence model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 11–20, Denver, Colorado. Association for Computational Linguistics. 8069 A RAMS Data A.1 Collection On Reddit, users make submissions containing links to news articles, images, videos, or other kinds of documents, and other users may then vote or comment on the submitted content. We collected news articles matching the following criteria: 1) Posted to the r/politics sub-forum between January and October 2016; 2) Resulted in threads with at least 25 comments; and 3) Contained at least one mention of the string “Russia”. The resulting subset of articles tended to describe geopolitical events and relations like the ones in the AIDA ontology. In order to filter out low-quality, fake, or disreputable news articles, we treat the number of comments in the discussion as a signal of information content. Our approach of gathering user-submitted and curated content through Reddit is similar to those used for creating large datasets for language model pre-training (Radford et al., 2019). Documents were split into sentences using NLTK 3.4.3, and sentences were split into tokens using SpaCy 2.1.4. A.2 Annotation To assess whether a lexical unit (LU) evoked an event with positive factuality, the vetting task contained an event definition and several candidate sentences, each with a highlighted LU. Annotators were asked to judge how well each highlighted LU, in the context of its sentence, matched the provided event definition. In the same task, they were also asked to assess the factuality of the sentence. Annotation instructions and examples are shown in Figure 9 and Figure 10. 2 1 0 1 2 # sentences between argument and trigger 0.0 0.2 0.4 0.6 0.8 1.0 Proportion of arguments 79 610 164 1350 1811 14018 87 769 47 279 train dev Figure 5: Distances between triggers and arguments in RAMS and proportion of arguments at that distance (counts are shown above each bar). Negative distances indicate that the argument occurs before the trigger. Each argument selection task contained five tokenized sentences, a contiguous set of tokens marking the trigger, a definition of the event type, and a list of roles and their associated definitions. For each role, annotators were asked whether a corresponding argument was present in the 5sentence window, and if so, to highlight the argument span that was closest to the event trigger, as there could be multiple. In cases near the beginning or end of a document, annotators were shown up to two sentences before or after the sentence containing the trigger. Annotators were allowed to highlight any set of (within-sentence) contiguous tokens within the 5-sentence window aside from the trigger tokens. The distribution of distances between triggers and arguments is shown in Figure 5. Annotation instructions and an example are shown in Figure 11 and Figure 12. 
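As a concrete reading of the distance statistic plotted in Figure 5, the sketch below tallies signed sentence distances between arguments and triggers; the per-annotation field names and toy records are assumptions, not the released data format.

```python
from collections import Counter

# Sketch of the trigger-argument distance statistic in Figure 5, assuming
# each annotation records the sentence index of its trigger and argument
# within the 5-sentence window (field names are illustrative).

def signed_distance(example):
    """Negative when the argument sentence precedes the trigger sentence."""
    return example["arg_sent_idx"] - example["trigger_sent_idx"]

annotations = [
    {"trigger_sent_idx": 2, "arg_sent_idx": 2},   # same sentence
    {"trigger_sent_idx": 2, "arg_sent_idx": 1},   # one sentence before
    {"trigger_sent_idx": 2, "arg_sent_idx": 4},   # two sentences after
]
histogram = Counter(signed_distance(a) for a in annotations)
total = sum(histogram.values())
for dist in sorted(histogram):
    print(dist, histogram[dist], histogram[dist] / total)
```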
A.3 Agreement We additionally compute the frequency with which annotators agreed a given role was or was not present in the context window. To measure the frequency with which annotators agree whether a given role is present, we treat the majority annotation as the gold standard. Then, we calculated the precision, recall, and F1 of the annotations. Across the set of redundantly annotated tasks, there were 83 false negatives, 60 false positives, and 892 true positives, giving a precision of 93.7, recall of 91.5, and an F1 of 92.6. Threshold Conjunctive Disjunctive Start End 0 55.3 78.0 59.8 73.5 1 69.9 80.3 74.9 75.3 2 73.9 82.0 78.2 77.8 3 76.4 83.6 80.9 79.1 4 78.8 84.3 82.7 80.4 Table 8: Pairwise span boundary inter-annotator agreement statistics for various span difference thresholds. We consider a wider range of span difference thresholds, where span difference is calculated by using the absolute difference of the (start, end) token indices from each pair. These are presented in Table 8. In conjunctive agreement, both |start1 −start2| and |end1 −end2| must be less than the given threshold; therefore, conjunctive agreement at threshold 0 is the percent of pairs that exactly agree (55.3%). Disjunctive agreement is less strict, requiring that either the absolute difference of start offsets or end offsets must be less than the threshold. Start and end agreement is deter8070 0 50 100 Event types 0 100 200 Frequency RAMS train AIDA-1 BNB 0 50 100 Event types 0.00 0.25 0.50 0.75 1.00 CDF RAMS train AIDA-1 BNB Figure 6: Comparison of frequency (top) and amount of dataset covered (bottom) of event types sorted by decreasing frequency. RAMS has more annotations for a more diverse set of event types than do AIDA Phase 1 and Beyond NomBank. mined by considering whether the absolute difference of the pair’s start or end offsets (respectively) is within the given threshold. A.4 Event and Role Type Coverage Event type and role type coverage are shown in Figure 6 and Figure 7. Figure 6 illustrates that RAMS contains more annotations for a larger set of event types than does AIDA-1. In addition, the distribution of annotations in RAMS is less skewed (more entropic) than in AIDA-1, in that in order to cover a given percentage of the dataset, more event types must be considered in RAMS than in AIDA-1. Figure 7 shows a similar pattern for role type coverage. Figure 8 shows role coverage per event type, a measure of how much of each event type’s role set is annotated on average. Role coverage per event type is calculated as the average number of filled roles per instance of the event type divided by the number of roles specified for that event type by the ontology. For the RAMS training set, the 25th percentile is 55.6%, the 50th percentile is 61.9%, and the 75th percentile is 68.6% coverage. 0 20 40 60 Role types 0 1000 2000 Frequency RAMS train AIDA-1 0 20 40 60 Role types 0.2 0.4 0.6 0.8 1.0 CDF RAMS train AIDA-1 Figure 7: Comparison of frequency (top) and amount of dataset covered (bottom) of roles sorted by decreasing frequency. RAMS has more annotations for a more diverse set of role types than the AIDA Phase 1 data. B RAMS Hyperparameters Table 9 lists the numerical hyperparameters shared by all models discussed in this paper. Models may ignore some link score components if they were found to be unhelpful during our sweep of Equation 2 and Equation 3. 
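Before the model details below, a short sketch makes the span-boundary agreement statistics of Table 8 (Appendix A.3 above) concrete. Following the description, "within the threshold" is read as an inclusive comparison (consistent with conjunctive agreement at threshold 0 being exact agreement); the example span pairs are made up.

```python
# Sketch of pairwise span-boundary agreement (Table 8), given two annotators'
# (start, end) token offsets for the same argument. Thresholds are inclusive.

def conjunctive(span1, span2, threshold):
    return (abs(span1[0] - span2[0]) <= threshold
            and abs(span1[1] - span2[1]) <= threshold)

def disjunctive(span1, span2, threshold):
    return (abs(span1[0] - span2[0]) <= threshold
            or abs(span1[1] - span2[1]) <= threshold)

pairs = [((3, 5), (3, 5)), ((3, 5), (4, 9)), ((10, 12), (11, 12))]
for t in range(5):
    conj = sum(conjunctive(a, b, t) for a, b in pairs) / len(pairs)
    disj = sum(disjunctive(a, b, t) for a, b in pairs) / len(pairs)
    print(t, round(100 * conj, 1), round(100 * disj, 1))
```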
For our model, we learn a linear combination of the top layers (9, 10, 11, 12) of BERT-base cased, while we use the middle layers (6, 7, 8, 9) for the 6–9 ablation. For ELMo, we use all three layers and encode each sentence separately. We apply a lexical dropout of 0.5 to these embeddings. 20 30 40 50 60 70 80 90 % roles filled 0 5 10 15 20 # event types Figure 8: Number of event types for which a given percentage of roles are filled in RAMS train set. 8071 Figure 9: Annotation instructions for determining whether a lexical unit (in context) evokes an event type. Figure 10: Annotation interface for determining whether a lexical unit (in context) evokes an event type. 8072 Figure 11: Annotation instructions for selecting arguments for an event. Figure 12: Annotation interface for selecting arguments for an event. 8073 Hyperparameter Value Embeddings role size 50 feature (φl) size 20 LSTM size 200 layers 3 dropout 0.4 argument (FA) size 150 layers 2 event-role (FE,R) size 150 layers 2 F˜a (Eqn. 1) layers 2 arg-role (FA,R) size 150 layers 2 Fl size 150 layers 2 distance FFNN size 150 layers 2 # buckets 10 Pruning k 10 Memory Limits training doc size 1000 batch size 1 Training learning rate 0.001 decay 0.999 100 steps patience 10 Table 9: Hyperparameters of the model trained on RAMS. Sizes of learned weights that are omitted from the table can be determined from these hyperparameters. As the argument spans are given to the model in our experiments, we skip the first pass of pruning. We do not clip gradients. In our best model, we use learned bucketed distance embeddings (Lee et al., 2017). These embeddings are scored as part of φc in computing sc(e, a) in Equation 2 and are also scored as a part of φl in sl (Equation 3). Since span boundaries are given in our primary experiments, we do not include a score sA or sE in sc. Our best model uses both sA,R and sl(a, ˜ae,r) in Equation 2. These features were chosen as the result of a sweep over possible features, with other ablations reported in Table 3. We adopt the span embedding approach by Lee et al. (2017), which uses character convolutions (50 8-dimensional filters of sizes 3, 4, and 5) and 300-dimensional GloVe embeddings. The default dropout applied to all connections is 0.2. We optimize using Adam (Kingma and Ba, 2015) with patience-based early stopping, resulting in the best checkpoint after 19 epochs (9 hours on an NVIDIA 1080Ti), using F1 as the evaluation metric. Hyperparameters for the condition with distractor candidate arguments are the same as those in Table 9. For the condition with no given argument spans, we consider all intrasentential spans up to 5 tokens in length. We include the score of each candidate argument span when pruning to encourage the model to keep correct spans. We modify hyperparameters in Table 9 to prune less aggressively, setting k = 100 and λA = 1.0 (defined in §4.1). C Full Role Confusion and Similarity Matrices Figure 13 shows the similarity between all 65 role embeddings, while Figure 14 visualizes all the errors made by the model on the development set. These are expansions of the per-role results from §5.1. Since argument linking is not a one-to-one labeling problem, we need to perform a modified procedure for visualizing a confusion matrix. For example, an argument span may take on multiple roles for the same event. To compute the errors, we first align the correct prediction(s) and subsequently compute the errors for the remaining gold and predicted label(s). 
For example, if the correct set of roles is {destination, origin} and the model predicts {origin, place}, then we only mark place as an error for destination. D AIDA Phase 1 D.1 Data Processing We filter and process the AIDA-1 Practice and Eval data in the following way. Because annotations are available for only a subset of the documents in AIDA-1, we consider only the documents that have textual event triggers. We then take from this set only the English documents, which, due to noisy language ID in the original annotations, were selected by manual inspection of the first 5 sentences of each document by one of the authors of this work. In addition, the argument spans in each example are only those that participate in events. In other words, arguments of relations (that are not also arguments of events) are not included. Additionally, a document may contain multiple events, unlike in RAMS. The training and development set come from AIDA-1 Practice, and the test set comes from AIDA-1 Eval. As the AIDA-1 Eval documents are about different topics than the Practice documents are, we emulate the mismatch in topic distribution by using a development set that is about a different 8074 Strategy Dev. F1 P R F1 No pre-training 25.0 36.6 12.9 19.1 No pre-trainingTCD 27.1 53.5 11.0 18.2 RAMS pre-training 34.1 43.9 16.9 24.4 RAMS pre-trainingTCD 34.2 62.5 15.4 24.8 Table 10: P(recision), R(ecall), and F1 on AIDA-1 English development and test data. TCD designates the use of ontology-aware type-constrained decoding. topic than the training set is. We use Practice topics R103 and R107 for training and R105 for development because R105 is the smallest of the three practice topics both by number of documents and by number of annotations. The test set consists of all 3 topics (E101, E102, E103) from the (unsequestered) Eval set. After the filtering process described above, we obtain a training set of 46 documents, a development set of 17 documents, and a test set of 69 documents. There are 389 events in the training set, and the training documents have an average length of 50 sentences. D.2 Hyperparameters We use the same hyperparameters as the best model for RAMS, shown in Table 9. D.3 Pre-training on RAMS Both the models with and without pre-training on RAMS were trained on AIDA-1 for 100 epochs with an early-stopping patience of 50 epochs using the same hyperparameters as the best RAMS model. All parameters were updated during finetuning (none were frozen). The vocabulary of the pre-trained model was not expanded when trained on AIDA-1. The models’ lower performance on AIDA-1 than on RAMS may be in part explained by the presence of distractors in AIDA-1. Moving from RAMS (one trigger per example) to AIDA-1 (many triggers per example) introduces distractor “negative” links: an argument for one event might not participate in a different event in the same document. When given gold argument spans, a model learns from RAMS that every argument gets linked to the trigger, but there are many negative links in the AIDA-1 data, which the model must learn to not predict. Full results are given in Table 10. Typeconstrained decoding does not improve performance on AIDA-1 as much as it did in Table 3, possibly because the AIDA-1 data often does not adhere to the multiplicity constraints of the ontology. For example, many attack events have more than one annotated attacker or target. Under TCD, correct predictions made in excess of what the ontology allows are deleted, hurting recall. 
Interestingly, type-constrained decoding hurts performance on AIDA-1 Eval when there is no pre-training. As discussed in §5, type-constrained decoding tends to improve precision and lower recall. Despite the same behavior here, F1 is nonetheless decreased. We see similar behavior in this experiment to the RAMS experiment involving distractor candidate arguments: low performance which is reduced further when using TCD. E BNB Data Processing and Hyperparameters E.1 Data Processing We use the data from Gerber and Chai (2012).17 We processed the data in the following way. The annotations were first aligned to text in the Penn Treebank. Because our model assumes that arguments are contiguous spans, we then manually merged all “split” arguments, which with one exception were already contiguous spans of text. For the one split argument that was not a contiguous span, we replaced it with its maximal span.18 We then removed special parsing tokens such as “trace” terminals from the text and realigned the spans. While BNB gives full credit as long as one argument in each argument “cluster” is found, our training objective assumes one argument per role. We therefore automatically reduced each argument cluster to a singleton set containing the argument closest to the trigger. This reformulation of the problem limits our ability to compare to prior work. Once all the data had been processed, we created training, development, and test splits. To avoid leaking information across splits, we bucketed examples by document and randomly assigned documents to the splits so that the splits contained instances in the proportions 80% (train), 10% (dev), and 10% (test). 17http://lair.cse.msu.edu/projects/implicit_ argument_annotations.zip. Information about the data and its fields is available at http://lair.cse.msu.edu/ projects/implicit_annotations.html. 18The instance is a quote broken by speaker attribution, where the split argument consists of the two halves of the quote. This example appears in our training set. 8075 Hyperparameter Value Embeddings role size 50 feature (φl) size 20 LSTM size 200 layers 3 dropout 0.4 argument (FA) size 150 layers 2 event-role (FE,R) size 150 layers 2 F˜a (Eqn. 1) layers 2 Fl size 150 layers 2 positional FFNN size 150 layers 2 # buckets 10 Pruning λA 0.8 k 45 Memory Limits training doc size 600 span width 15 batch size 1 Training learning rate 0.0005 decay 0.999 200 steps patience 20 gradient clipping 10.0 Table 11: Hyperparameters of the model trained on GVDB. E.2 Hyperparameters We use the same hyperparameters as the best model for RAMS, shown in Table 9. F GVDB Hyperparameters and Additional Results The entire GVDB corpus consists of 7,366 articles. We exclude articles that do not have a reliable publication date or lack annotated spans for the roles we are interested in. Additionally, a buffer of 100 articles spanning roughly one week between the dev and test set is discarded, limiting the possibility of events occurring in both the development and test sets. We also filter out spans whose start and end boundaries are in different sentences, as these are unlikely to be well-formed argument spans. For evaluation, a slot’s value is marked as correct under the strict setting if any of the predictions for that slot match the string of the correct answer exactly, while an approximate match is awarded if either a prediction contains the correct answer or if the correct answer contains the predicted string. 
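A minimal sketch of these two match criteria is given below; the prediction and gold strings are toy examples, and case handling is not specified by the paper.

```python
# Minimal sketch of the strict and approximate match criteria described
# above for GVDB slot filling; the strings below are toy examples.

def strict_match(predictions, gold):
    return any(p == gold for p in predictions)

def approximate_match(predictions, gold):
    return any(p in gold or gold in p for p in predictions)

gold = "Brandon Mendoza"
predictions = ["Mendoza"]
print(strict_match(predictions, gold))       # False
print(approximate_match(predictions, gold))  # True ("Mendoza" is contained)
```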
The approximate setting is necessary due to inconsistent annotations (e.g., omitting first or last names). We experiment with the feature-based version of BERT-base and with ELMo as our contextualized encoder. Table 11 lists the numerical hyperparameters for this model. Since there is only one event per document and no explicit trigger, e is represented by a span embedding of the full document. We use the top four layers (9–12) of BERTbase cased (all three layers for ELMo) with a lexical dropout of 0.5. Everywhere else, we apply a dropout of 0.4. We train with the Adam optimizer (Kingma and Ba, 2015) and use patiencebased early stopping. Our best checkpoint was after 8 epochs (roughly 9 hours on a single NVIDIA 1080Ti). Even though the official evaluation is string based, we used a span-based micro F1 metric for early stopping. For this model, φl corresponds to a learned (bucketed) positional embedding of the argument span (i.e., distance from the start of the document). In computing the coarse score, we omit φc. When computing Equation 2, we omit sA,R but keep all other terms in Equation 2. We adopt the character convolution of 50 8-dimensional filters of window sizes 3, 4, and 5 (Lee et al., 2017). With the same hyperparameters and feature choices, we perform an identical evaluation using ELMo instead of BERT. As the original documents are not tokenized, we use SpaCy 2.1.4 for finding sentence boundaries and tokenization. The complete list of annotated fields are VICTIM (name, age, race), SHOOTER (name, age, race), LOCATION (specific location19 or city), TIME (time of day or clock time) and WEAPON (weapon type, number of shots fired). While Pavlick et al. (2016) only make predictions for VICTIM.NAME, SHOOTER.NAME, LOCATION.(CITY|LOCATION), TIME.(TIME|CLOCK), and WEAPON.WEAPON, we perform predictions over all annotated spanbased fields. The full results for both BERT and ELMo are reported in Table 12 and Table 13, respectively. BERT generally improves over ELMo across the board, but not by a sizeable margin. Despite the inability to directly compare, we nonetheless present a stronger and more comprehensive baseline for future work with GVDB. 19For example, a park or a laundromat. 8076 Figure 13: Full version of Figure 4, showing cosine similarity between role embeddings. Best viewed in color. Field Strict Partial Baseline Us Baseline Us P R F1 P R F1 P R F1 P R F1 VICTIM Name 10.2 8.5 9.3 61.2 63.3 62.2 59.5 49.6 54.1 68.4 70.9 69.6 Age – – – 19.4 24.2 21.5 – – – 67.3 84.1 74.8 Race – – – 75.5 74.1 74.8 – – – 75.5 74.1 74.8 SHOOTER Name 5.8 3.9 4.7 55.3 51.1 53.1 30.2 20.1 24.1 60.2 55.6 57.8 Age – – – 34.1 32.6 33.3 – – – 69.0 65.9 67.4 Race – – – 72.7 55.2 62.7 – – – 81.8 62.1 70.6 LOCATION City 19.9 8.8 12.2 67.4 66.2 66.8 30.8 13.6 18.9 72.2 70.9 71.5 Location 36.1 33.8 34.9 65.4 61.2 63.3 TIME Time 69.3 66.9 68.1 57.2 69.7 62.9 70.5 68.1 69.3 63.2 76.9 69.4 Clock 44.0 47.6 45.7 84.0 90.8 87.2 WEAPON Weapon 2.1 0.7 1.1 33.3 31.7 32.5 36.8 11.8 17.9 50.9 48.3 49.6 Num Shots – – – 40.6 11.2 17.6 – – – 62.5 17.2 27.0 Table 12: P(recision), R(ecall), and F1 on event-based slot filling (GVDB) using BERT as the document encoder. Due to the different data splits and evaluation conditions, the results are not directly comparable to the baseline (Pavlick et al., 2016), which is provided only for reference. Fields that were aggregated in the baseline are predicted separately in our model. ‘–’ indicates result is not reported in the baseline. 
Figure 14: Full version of Figure 4, showing row-normalized confusion between roles. Note that roles not predicted at all would result in empty rows and so are omitted from the table.

Field               |  Strict: Baseline  |  Strict: Us       |  Partial: Baseline |  Partial: Us
                    |  P     R     F1    |  P     R     F1   |  P     R     F1    |  P     R     F1
VICTIM   Name       |  10.2  8.5   9.3   |  56.2  56.8  56.5 |  59.5  49.6  54.1  |  62.7  63.3  63.0
         Age        |  –     –     –     |  29.5  33.9  31.6 |  –     –     –     |  64.4  74.0  68.9
         Race       |  –     –     –     |  73.2  75.9  74.5 |  –     –     –     |  75.0  77.8  76.4
SHOOTER  Name       |  5.8   3.9   4.7   |  53.7  60.2  56.7 |  30.2  20.1  24.1  |  56.4  63.2  59.6
         Age        |  –     –     –     |  27.3  31.8  29.4 |  –     –     –     |  53.2  62.1  57.3
         Race       |  –     –     –     |  55.9  65.5  60.3 |  –     –     –     |  58.8  69.0  63.5
LOCATION City       |  19.9  8.8   12.2  |  59.1  61.1  60.1 |  30.8  13.6  18.9  |  64.1  66.2  65.1
         Location   |                    |  36.6  34.7  35.6 |                    |  59.1  56.0  57.5
TIME     Time       |  69.3  66.9  68.1  |  57.7  64.7  61.0 |  70.5  68.1  69.3  |  64.5  72.4  68.2
         Clock      |                    |  44.6  45.8  45.2 |                    |  83.5  85.6  84.5
WEAPON   Weapon     |  2.1   0.7   1.1   |  32.7  26.7  29.4 |  36.8  11.8  17.9  |  44.9  36.7  40.4
         Num Shots  |  –     –     –     |  23.3  18.1  20.4 |  –     –     –     |  42.2  32.8  36.9

Table 13: P(recision), R(ecall), and F1 on event-based slot filling (GVDB) using ELMo at the sentence level. On average, ELMo is outperformed by BERT.
2020
718
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8078–8092 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8078 Rationalizing Medical Relation Prediction from Corpus-level Statistics Zhen Wang1, Jennifer Lee2,3, Simon Lin4, Huan Sun1 1The Ohio State University 2Department of Family Medicine, The Ohio State University Wexner Medical Center 3Department of Physician Informatics, Nationwide Children’s Hospital 4Abigail Wexner Research Institute at Nationwide Children’s Hospital {wang.9215, sun.397}@osu.edu {Jennifer.Lee2, Simon.Lin}@nationwidechildrens.org Abstract Nowadays, the interpretability of machine learning models is becoming increasingly important, especially in the medical domain. Aiming to shed some light on how to rationalize medical relation prediction, we present a new interpretable framework inspired by existing theories on how human memory works, e.g., theories of recall and recognition. Given the corpus-level statistics, i.e., a global cooccurrence graph of a clinical text corpus, to predict the relations between two entities, we first recall rich contexts associated with the target entities, and then recognize relational interactions between these contexts to form model rationales, which will contribute to the final prediction. We conduct experiments on a real-world public clinical dataset and show that our framework can not only achieve competitive predictive performance against a comprehensive list of neural baseline models, but also present rationales to justify its prediction. We further collaborate with medical experts deeply to verify the usefulness of our model rationales for clinical decision making1. 1 Introduction Predicting relations between entities from a text corpus is a crucial task in order to extract structured knowledge, which can empower a broad range of downstream tasks, e.g., question answering (Xu et al., 2016), dialogue systems (Lowe et al., 2015), reasoning (Das et al., 2017), etc. There has been a large amount of existing work focusing on predicting relations based on raw texts (e.g., sentences, paragraphs) mentioning two entities (Hendrickx et al., 2010; Zeng et al., 2014; Zhou et al., 2016; Mintz et al., 2009; Riedel et al., 2010; Lin et al., 2016; Verga et al., 2018; Yao et al., 2019). 1Our code and datasets are available at: https:// github.com/zhenwang9102/X-MedRELA 521 794 122 2341 198 428 1356 389 may cause may treat may treat 122 Co-occurrence Link (with counts) Association Recall Assumption Recognition Aspirin Caffeine Migraine Pain Relief Fever Headache Figure 1: Our intuition for how to rationalize relation prediction based on the corpus-level statistics. To infer the relation between the target entities (red nodes), we recall (blue dashed line) their associated entities (blue nodes) and infer their relational interactions (red dashed line), which will serve as assumptions or model rationales to support the target relation prediction. In this paper, we study a relatively new setting in which we predict relations between entities based on the global co-occurrence statistics aggregated from a text corpus, and focus on medical relations and clinical texts in Electronic Medical Records (EMRs). 
The corpus-level statistics present a holistic graph view of all entities in the corpus, which will greatly facilitate the relation inference, and can better preserve patient privacy than raw or even de-identified textual content and are becoming a popular substitute for the latter in the research community for studying EMR data (Finlayson et al., 2014; Wang et al., 2019). To predict relations between entities based on a global co-occurrence graph, intuitively, one can first optimize the graph embedding or global word embedding (Pennington et al., 2014; Perozzi et al., 2014; Tang et al., 2015), and then develop a relation classifier (Nickel et al., 2011; Socher et al., 2013; Yang et al., 2015; Wang et al., 2018) based on the embedding vectors of the two entities. However, such kind of neural frameworks often lack the desired interpretability, which is especially important for the medical domain. In general, despite 8079 their superior predictive performance in many NLP tasks, the opaque decision-making process of neural models has concerned their adoption in high stakes domains like medicine, finance, and judiciary (Rudin, 2019; Murdoch et al., 2019). Building models that provide reasonable explanations and have increased transparency can remarkably enhance user trust (Ribeiro et al., 2016; Miller, 2019). In this paper, we aim to develop such a model for our medical relation prediction task. To start with, we draw inspiration from the existing theories on cognitive processes about how human memory works, e.g., two types of memory retrieval (recall and recognition) (Gillund and Shiffrin, 1984). Basically, in the recall process, humans tend to retrieve contextual associations from long-term memory. For example, given the word “Paris”, one may think of “Eiffel Tower” or “France”, which are strongly associated with “Paris” (Nobel and Shiffrin, 2001; Kahana et al., 2008; Budiu, 2014). Besides, there is a strong correlation between the association strength and the co-occurrence graph (Spence and Owens, 1990; Lundberg and Lee, 2017). In the recognition process, humans typically recognize if they have seen a certain piece of information before. Figure 1 shows an example in the context of relation prediction. Assume a model is to predict whether Aspirin may treat Headache or not (That “Aspirin may treat Headache” is a known fact, and we choose this relation triple for illustration purposes). It is desirable if the model could perform the aforementioned two types of memory processes and produce rationales to base its prediction upon: (1) Recall. What entities are associated with Aspirin? What entities are associated with Headache? (2) Recognition. Do those associated entities hold certain relations, which can be leveraged as clues to predict the target relation? For instance, a model could first retrieve a relevant entity Pain Relief for the tail entity Headache as they co-occur frequently, and then recognize there is a chance that Aspirin can lead to Pain Relief (i.e., formulate model rationales or assumptions), based on which it could finally make a correct prediction (Aspirin, may treat, Headache). Now we formalize such intuition to rationalize the relation prediction task. Our framework consists of three stages, global association recall (CogStage-1), assumption formation and representation (CogStage-2), and prediction decision making (CogStage-3), shown in Figure 2. CogStage-1 Associations Entity Pair Recall Memory Recognition Memory Pred. 
Assumptions Rationalized by CogStage-1 CogStage-2 CogStage-3 Figure 2: A high-level illustration of our framework. models the process of recalling diverse contextual entities associated with the target head and tail entities respectively, CogStage-2 models the process of recognizing possible interactions between those recalled entities, which serve as model rationales (or, assumptions2) and are represented as semantic vectors, and finally CogStage-3 aggregates all assumptions to infer the target relation. We jointly optimize all three stages using a training set of relation triples as well as the co-occurrence graph. Model rationales can be captured through this process without any gold rationales available as direct supervision. Overall, our framework rationalizes its relation prediction and is interpretable to users3 by providing justifications for (i) why a particular prediction is made, (ii) how the assumptions of the prediction are developed, and (iii) how the particular assumptions are relied on. On a real-life clinical text corpus, we compare our framework with various competitive methods to evaluate the predictive performance and interpretability. We show that our method obtains very competitive performance compared with a comprehensive list of various neural baseline models. Moreover, we follow recent work (Singh et al., 2019; Jin et al., 2020) to quantitatively evaluate model interpretability and demonstrate that rationales produced by our framework can greatly help earn expert trust. To summarize, we study the important problem of rationalizing medical relation prediction based on corpus-level statistics and propose a new framework inspired by cognitive theories, which outperforms competitive baselines in terms of both interpretability and predictive performance. 2 Background Different from existing work using raw texts for relation extraction, we assume a global co-occurrence graph (i.e., corpus-level statistics) is given, which was pre-constructed based on a text corpus D, and denote it as an undirected graph G = (V, E), where 2We use the two terms interchangeably in this paper. 3Following Murdoch et al. (2019), desired interpretability is supposed to provide insights to particular audiences, which in our case are medical experts. 8080 {ℎ {‰ Rela. Pred. 83 463 984 123 146 385 130 122 353 428 Global Association Recall Assumption Formation & Representation Decision Making { , , ..., } w1 ‰ w2 ‰ wj‰ ‰ Corpus-level Statistics { , , ..., } w1 ℎw2 ℎ wjℎ ℎ w~+1 ℎ w~ ℎ |( , , ) w~ ℎ‡€ w ‰ OWA Rationales rela. vec. head vec. tail vec. w~−1 ℎ ⋯⋯ ⋯⋯ w+1 ‰ w ‰ w−1 ‰ ⋯⋯ ⋯⋯ Figure 3: Framework Overview. each vertex v ∈V represents an entity extracted from the corpus and each edge e ∈E is associated with the global co-occurrence count for the connected nodes. Counts reflect how frequent two entities appear in the same context (e.g., co-occur in the same sentence, document, or a certain time frame). In this paper, we focus on clinical co-occurrence graph in which vertices are medical terms extracted from clinical notes. Nevertheless, as we will see later, our framework is very general and can be applied to other relations with corpus-level statistics. 
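As an illustration of the kind of corpus-level statistics assumed here, the sketch below aggregates per-context entity mentions into an undirected co-occurrence graph with counts; the toy contexts and the context granularity (e.g., all notes for one patient within a time frame) are assumptions, not the actual construction pipeline behind the graph used in this paper.

```python
from collections import Counter
from itertools import combinations

# Sketch of building the undirected co-occurrence graph G = (V, E): each
# context is the set of medical terms extracted from one unit of text, and
# each edge stores how often two terms co-occur. Contexts here are made up.

contexts = [
    {"aspirin", "headache", "pain relief"},
    {"aspirin", "fever"},
    {"caffeine", "migraine", "headache"},
]

edge_counts = Counter()
for terms in contexts:
    for u, v in combinations(sorted(terms), 2):
        edge_counts[(u, v)] += 1  # undirected: key is a sorted pair

vertices = {t for terms in contexts for t in terms}
print(len(vertices), len(edge_counts))       # -> 6 7
print(edge_counts[("aspirin", "headache")])  # -> 1
```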
Our motivation for working under this setting lies in three folds: (1) Such graph data is stripped of raw textual contexts and thus, has a better preserving of patient privacy (Wang et al., 2019), which makes itself easier to be constructed and shared under the HIPPA protected environments (Act, 1996) for medical institutes (Finlayson et al., 2014); (2) Compared with open-domain relation extraction, entities holding a medical relation oftentimes do not co-occur in a local context (e.g., a sentence or paragraph). For instance, we observe that in a widely used clinical co-occurrence graph (Finlayson et al., 2014), which is also employed for our experiments later, of all entity pairs holding the treatment relation according to UMLS (Unified Medical Language System), only about 11.4% have a co-occurrence link (i.e., co-occur in clinical notes within a time frame like 1 day or 7 days); (3) As suggested by cognitive theories (Spence and Owens, 1990), lexical co-occurrence is significantly correlated with association strength in the recall memory process, which further inspires us to utilize such statistics to find associations and form model rationales for relation prediction. Finally, our relation prediction task is formulated as: Given the global statistics G and an entity pair, we predict whether they hold a relation r (e.g., MAY TREAT), and moreover provide a set of model rationales T composed of relation triples for the prediction. For the example in Figure 1, we aim to build a model that will not only accurately predict the MAY TREAT relation, but also provide meaningful rationales on how the prediction is made, which are crucial for gaining trust from clinicians. 3 Methodology Following a high-level framework illustration in Figure 2, we show a more detailed overview in Figure 3 and introduce each component as follows. 3.1 CogStage-1: Global Association Recall Existing cognitive theories (Kahana et al., 2008) suggest that recall is an essential function of human memory to retrieve associations for later decision making. On the other hand, the association has been shown to significantly correlate with the lexical co-occurrence from the text corpus (Spence and Owens, 1990; Lund and Burgess, 1996). Inspired by such theories and correlation, we explicitly build up our model based on recalled associations stemming from corpus-level statistics and provide global highly-associated contexts as the source of interpretations. Given an entity, we build an estimation module to globally infer associations based on the corpuslevel statistics. Our module leverages distributional learning to fully explore the graph structure. One can also directly utilize the raw neighborhoods in the co-occurrence graph, but due to the noise introduced in the preprocessing of building the graph, it is a less optimal choice in real practice. Specifically, for a selected node/entity ei ∈E, our global association recall module estimates a conditional probability p (ej|ei), representing how likely the entity ej ∈E is associated with ei4. We formally define such conditional probability as: p (ej|ei) = exp (υ′T ej · υei) P|V| k=1 exp (υ′T ek · υei) (1) 4We assume all existing entities can be possible associations for the given entity. 8081 where υei ∈Rd is the embedding vector of node ei and υ′ ej ∈Rd is the context embedding for ej. There are many ways to approximate p (ej|ei) from the global statistics, e.g., using global logbilinear regression (Pennington et al., 2014). 
To estimate such probabilities and update entity embeddings efficiently, we optimize the conditional distribution p (ej|ei) to be close to the empirical distribution ˆp (ej|ei) defined as: ˆp (ej|ei) = pij P (i,k)∈E pik (2) where E is the set of edges in the co-occurrence graph and pij is the PPMI value calculated by the co-occurrence counts between node ei and ej. We adopt the cross entropy loss for the optimization: Ln = − X (ei,ej)∈V ˆp(ej|ei) log (p(ej|ei)) (3) This association recall module will be jointly trained with other objective functions to be introduced in the following sections. After that, given an entity ei, we can select the top-Nc entities from p(·|ei) as ei’s associative entities for subsequent assumption formation. 3.2 CogStage-2: Assumption Formation and Representation As shown in Figure 3, with the associative entities from CogStage-1, we are ready to formulate and represent assumptions. In this paper, we define model assumptions as relational interactions between associations, that is, as shown in Figure 1, the model may identify (Caffeine, MAY TREAT, Migraine) as an assumption, which could help predict Aspirin may treat Headache (Caffeine and Migraine are associations for Aspirin and Headache respectively). Such relational rationales are more concrete and much easier for humans to understand than the widely-adopted explanation strategy (Yang et al., 2016; Mullenbach et al., 2018; Vashishth et al., 2019) in NLP that is based on pure attention weights on local contexts. One straightway way to obtain such rationales is to query existing medical knowledge bases (KBs), e.g., (Caffeine, MAY TREAT, Migraine) may exist in SNOMED CT5 and can serve as a model rationale. We refer to rationales acquired in this way as the Closed-World Assumption (CWA) (Reiter, 1981) setting since only KB-stored facts are considered and trusted in a closed world. In contrast 5https://www.snomed.org/ to the CWA rationales, considering the sparsity and incompleteness issues of KBs that are even more severe in the medical domain, we also propose the Open-World Assumptions (OWA) (Ceylan et al., 2016) setting to discover richer rationales by estimating all potential relations between associative entities based on a seed set of relation triples (which can be regarded as prior knowledge). In general, the CWA rationales are relatively more accurate as each fact triple has been verified by the KB, but would have a low coverage of other possibly relevant rationales for the target prediction. On the other hand, the OWA rationales are more comprehensive but could be noisy and less accurate, due to the probabilistic estimation procedure and the limited amount of prior knowledge. However, as we will see, by aggregating all OWA rationales into the whole framework with an attention-based mechanism, we can select high-quality and most relevant rationales for prediction. For the rest of the paper, by default we adopt the OWA setting in our framework and describe its details as follows. Specifically, given a pair of head and tail entity, eh, et ∈V, let us denote their association sets as A(eh) = {ai h}Nh i=1 and A(et) = {aj t}Nt j=1, where Nh, Nt are the number of associative entities ah, at to use. Each entity has been assigned an embedding vector by the previous association recall module. We first measure the probability of relations holding for the pair. Given ai h ∈A(eh), aj t ∈A(et) and a relation rk ∈R, we define a scoring function as Bordes et al. 
(2013) to estimate triple quality: sij k = f(ai h, rk, aj t) = −||υai h + ξk −υaj t||1 (4) where υai h and υaj t are embedding vectors, relations are parameterized by a relation matrix R ∈ RNr×d and ξk is its k-th row vector. Such a scoring function encourages larger value for correct triples. Additionally, in order to filter unreliable estimations, we define an NA relation to represent other trivial relations or no relation as the score, sij NA = f(ai h, NA, aj t), which can be seen as a dynamic threshold to produce reasonable rationales. Now we formulate OWA rationales by calculating the conditional probability of a relation given a pair of associations as follows (we save the superscript ij for space): p(rk|ai h, aj t) =      exp (sk) P sk≥sNA exp (sk), sk > sNA 0, sk ≤sNA (5) 8082 For each association pair, (ai h, aj t), we only form an assumption with a relation r∗ k if r∗ k is top ranked according to p(rk|ai h, aj t).6 To represent assumptions, we integrate all relation information per pair into a single vector representation. Concretely, we calculate the assumption representation by treating p(rk|ai h, aj t) as weights for all relations as follows: aij = ρ(ai h, aj t; R) = Nr X k′=1 p(rk′|ai h, aj t) · ξk′ (6) Finally, we combine the entity vectors as well as the relation vector to get the final representation of assumptions for association pair (ai h, aj t), where ci ∈A(eh) and cj ∈A(et): eij = tanh([υai h; υaj t; aij]Wp + bp) (7) where [· ; ·] represents vector concatenation, Wp ∈ R3d×dp, bp ∈Rdp are the weight matrix and bias in a fully-connected network. 3.3 CogStage-3: Prediction Decision Making Analogical to human thinking, our decision making module aggregates all assumption representations and measures their accountability for the final prediction. It learns a distribution over all assumptions and we select the ones with highest probabilities as model rationales. More specifically, we define a scoring function g(eij) to estimate the accountability based on the assumption representation eij and normalize g(eij) as: g(eij) = vT · tanh(Waeij + ba) (8) pij = exp(g(eij)) PNh m=1 PNt n=1 exp(g(emn)) (9) where Wa, ba are the weight matrix and bias for the scoring function. Then we get the weighted rationale representation as: r = ψ(eh, et) = Nh X i=1 Nt X j=1 pijeij (10) With the representation of weighted assumption information for the target pair (eh, et), we calculate the binary prediction probability for relation r as: p(r|eh, et) = σ(Wrr + br) (11) where σ(x) = 1/(1 + exp(−x)) and Wr, br are model parameters. 6We remove the target relation to predict if it exists in the assumption set. Rationalizing relation prediction. After fully training the entire model, to recover the most contributing assumptions for predicting the relation between the given target entities (eh, et), we compute the importance scores for all assumptions and select those most important ones as model rationales. In particular, we multiply pij (the weight for association pair (ai h, aj t) in Eqn. 9) with p(rk|ai h, aj t) (the probability of a relation given the pair (ai h, aj t) in Eqn. 5) to score the triple (ai h, rk, aj t). We rank all such triples for ai h ∈A(eh), aj t ∈A(et), rk ∈R and select the top-K triples as model rationales for the final relation prediction. 3.4 Training We now describe how we train our model efficiently for multiple modules. 
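Before the training details below, here is a minimal NumPy sketch of the OWA forward computation in Eqs. 4–11 for one target pair. All toy sizes and randomly initialized parameters are stand-ins for quantities the model learns jointly; this is an illustration of the equations above, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, dp, Nr, Nh, Nt = 8, 6, 5, 3, 3            # toy dimensions and counts

assoc_h = rng.normal(size=(Nh, d))           # embeddings of recalled head associations
assoc_t = rng.normal(size=(Nt, d))           # embeddings of recalled tail associations
R = rng.normal(size=(Nr, d))                 # relation matrix; row k is xi_k
na = rng.normal(size=d)                      # NA relation vector (dynamic threshold)
Wp, bp = rng.normal(size=(3 * d, dp)), np.zeros(dp)
Wa, ba, v = rng.normal(size=(dp, dp)), np.zeros(dp), rng.normal(size=dp)
Wr, br = rng.normal(size=dp), 0.0

def relation_probs(ah, at):
    """Eqs. 4-5: TransE-style scores, kept only when above the NA score."""
    scores = -np.abs(ah + R - at).sum(axis=1)
    s_na = -np.abs(ah + na - at).sum()
    probs = np.zeros(Nr)
    mask = scores > s_na
    if mask.any():
        exp = np.exp(scores[mask] - scores[mask].max())
        probs[mask] = exp / exp.sum()
    return probs

reps, gate_scores = [], []
for ah in assoc_h:
    for at in assoc_t:
        p = relation_probs(ah, at)
        a_ij = p @ R                                              # Eq. 6
        e_ij = np.tanh(np.concatenate([ah, at, a_ij]) @ Wp + bp)  # Eq. 7
        reps.append(e_ij)
        gate_scores.append(v @ np.tanh(Wa @ e_ij + ba))           # Eq. 8

gate_scores = np.array(gate_scores)
weights = np.exp(gate_scores - gate_scores.max())
weights /= weights.sum()                                          # Eq. 9
r_vec = (weights[:, None] * np.array(reps)).sum(axis=0)           # Eq. 10
prob = 1.0 / (1.0 + np.exp(-(Wr @ r_vec + br)))                   # Eq. 11
print(round(float(prob), 3))
```

Ranking the products of the pair weights and their per-relation probabilities, as in the rationalization paragraph above, would then recover the top-K rationale triples.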
For relational learning to estimate the conditional probability p(rk|ai h, aj t), we utilize training data as the seed set of triples for all relations as correct triples denoted as (h, r, t) ∈P. The scoring function in Eqn. 4 is expected to score higher for correct triples than the corrupted ones in which we denote N(?, r, t) (N(t, r, ?)) as the set of corrupted triples by replacing the head (tail) entity randomly. Instead of using margin-based loss function, we adopt a more efficient training strategy from (Kadlec et al., 2017; Toutanova and Chen, 2015) with a negative log likelihood loss function as: Lr = −P (h,r,t)∈P log p (h|t, r) −P (h,r,t)∈P log p (t|h, r) (12) where the conditional probability p(h|t, r) is defined as follows (p(t|h, r) is defined similarly): p (h|t, r) = exp(f (h, r, t)) P h′∈N(?,r,t) exp(f (h′, r, t)) (13) For our binary relation prediction task, we define a binary cross entropy loss function with Eqn. 11 as follows: Lp = −PM i=1(yi · log(p(r|ei h, ei t)) + (1 −yi) · log(1 −p(r|ei h, ei t))) (14) where M is the number of samples, yi is the label showing whether eh, et holds a certain relation. The above three loss functions, i.e., Ln for global association recall, Lr for relational learning and Lp for relation prediction, are all jointly optimized. All three of them share the entity embeddings and Lp will reuse the relation matrix from Lr to conduct the rationale generation. 8083 4 Experiments In this section, we first introduce our experimental setup, e.g, the corpus-level co-occurrence statistics and datasets used for our experiments, and then compare our model with a list of comprehensive competitive baselines in terms of predictive performance. Moreover, we conduct expert evaluations as well as case studies to demonstrate the usefulness of our model rationales. 4.1 Dataset We directly adopt a publicly available medical cooccurrence graph for our experiments (Finlayson et al., 2014). The graph was constructed in the following way: Finlayson et al. (2014) first used an efficient annotation tool (LePendu et al., 2012) to extract medical terms from 20 million clinical notes collected by Stanford Hospitals and Clinics, and then computed the co-occurrence counts of two terms based on their appearances in one patient’s records within a certain time frame (e.g., 1 day, 7 days). We experiment with their biggest dataset with the largest number of nodes (i.e., the per-bin 1-day graph here7) so as to have sufficient training data. The co-occurrence graph contains 52,804 nodes and 16,197,319 edges. To obtain training labels for relation prediction, we utilize the mapping between medical terms and concepts provided by Finlayson et al. (2014). To be specific, they mapped extracted terms to UMLS concepts with a high mapping accuracy by suppressing the least possible meanings of each term (see Finlayson et al. (2014) for more details). We utilize such mappings to automatically collect relation labels from UMLS. For term ea and eb that are respectively mapped to medical concept cA and cB, we find the relation between cA and cB in UMLS, which will be used as the label for ea and eb. Following Wang and Fan (2014) that studied distant supervision in medical text and identified several crucial relations for clinical decision making, we select 5 important medical relations with no less than 1,000 relation triples in our dataset. Each relation is mapped to UMLS semantic relations, e.g., relation CAUSES corresponds to cause of, induces, causative agent of in UMLS. 
A full list of mapping is in the appendix. We sample an equal number of negative pairs by randomly pairing head and tail entities with the correct argument types (Wang 7https://datadryad.org/stash/dataset/ doi:10.5061/dryad.jp917 Med Relations Train Dev Test Symptom of 14,326 3,001 3,087 May treat 12,924 2,664 2,735 Contraindicates 10,593 2,237 2,197 May prevent 2,113 440 460 Causes 1,389 305 354 Total 41.3k 8.6k 8.8k Table 1: Dataset Statistics. et al., 2016). We split all samples into train/dev/test sets with a ratio of 70/15/15. Only relation triples in the training set are used to optimize relational parameters. The statistics of the positive samples for relations are summarized in Table 1. 4.2 Predictive Performance Evaluation Compared Methods. There are a number of advanced neural methods (Tang et al., 2015; Qu et al., 2018; Wang et al., 2018) that have been developed for the link prediction task, i.e., predicting the relation between two nodes in a co-occurrence graph. At the high level, their frameworks comprise of an entity encoder and a relation scoring function. We adapt various existing methods for both the encoder and the scoring functions for comprehensive comparison. Specifically, given the co-occurrence graph, we employ existing distributional representation learning methods to learn entity embeddings. With the entity embeddings as input features, we adapt various models from the knowledge base completion literature as a binary relation classifier. More specifically, for the encoder, we select one word embedding method, Word2vec (Mikolov et al., 2013; Levy and Goldberg, 2014), two graph embedding methods, random-walk based DeepWalk (Perozzi et al., 2014), edge-sampling based LINE (Tang et al., 2015), and one distributional approach REPEL-D (Qu et al., 2018) for weakly-supervised relation extraction that leverages both the co-occurrence graph and training relation triples to learn entity representations. For the scoring functions, we choose DistMult (Yang et al., 2015), RESCAL (Nickel et al., 2011) and NTN (Socher et al., 2013). Note that one can apply more complex encoders or scoring functions to obtain higher predictive performance; however, in this work, we emphasize more on model interpretability than predictive performance, and unfortunately, all such frameworks are hard to interpret as they provide little or no 8084 Methods MAY TREAT CONTRAIN. SYMPTOM OF MAY PREVENT CAUSES Avg. 
explanations on how predictions are made. We also show the predictive performance of our framework under the CWA setting, in which the CWA rationales are existing triples in a "closed" knowledge base (i.e., UMLS). We first adopt the pre-trained association recall module to retrieve associative contexts for head and tail entities, then formulate the assumptions using top-ranked triples (that exist in our relation training data), where the rank is based on the product of their retrieval probabilities (pij = p(ei|eh) × p(ej|et)). We keep the rest of our model the same as in the OWA setting.

Table 2: Comparison of model predictive performance. We run all methods five times and report the averaged F1 scores with standard deviations.
  Methods               MAY TREAT        CONTRAIN.        SYMPTOM OF       MAY PREVENT      CAUSES           Avg.
  Word2vec + DistMult   0.767 (±0.008)   0.777 (±0.013)   0.815 (±0.005)   0.649 (±0.018)   0.671 (±0.015)   0.736
  Word2vec + RESCAL     0.743 (±0.010)   0.767 (±0.003)   0.808 (±0.009)   0.658 (±0.023)   0.659 (±0.039)   0.727
  Word2vec + NTN        0.693 (±0.013)   0.758 (±0.005)   0.808 (±0.004)   0.605 (±0.022)   0.631 (±0.017)   0.699
  DeepWalk + DistMult   0.740 (±0.003)   0.776 (±0.004)   0.805 (±0.003)   0.608 (±0.014)   0.650 (±0.018)   0.716
  DeepWalk + RESCAL     0.671 (±0.010)   0.778 (±0.003)   0.800 (±0.003)   0.600 (±0.023)   0.708 (±0.011)   0.711
  DeepWalk + NTN        0.696 (±0.006)   0.778 (±0.005)   0.787 (±0.005)   0.614 (±0.016)   0.674 (±0.024)   0.710
  LINE + DistMult       0.767 (±0.003)   0.783 (±0.002)   0.795 (±0.003)   0.621 (±0.015)   0.641 (±0.024)   0.721
  LINE + RESCAL         0.725 (±0.003)   0.771 (±0.002)   0.801 (±0.001)   0.613 (±0.013)   0.694 (±0.015)   0.721
  LINE + NTN            0.733 (±0.002)   0.773 (±0.003)   0.800 (±0.001)   0.601 (±0.015)   0.706 (±0.013)   0.723
  REPEL-D + DistMult    0.784 (±0.002)   0.797 (±0.002)   0.809 (±0.003)   0.681 (±0.010)   0.694 (±0.022)   0.751
  REPEL-D + RESCAL      0.726 (±0.003)   0.780 (±0.002)   0.776 (±0.002)   0.685 (±0.010)   0.708 (±0.003)   0.737
  REPEL-D + NTN         0.736 (±0.004)   0.780 (±0.002)   0.773 (±0.001)   0.667 (±0.015)   0.694 (±0.024)   0.731
  Ours (w/ CWA)         0.709 (±0.005)   0.751 (±0.009)   0.744 (±0.007)   0.667 (±0.008)   0.661 (±0.032)   0.706
  Ours                  0.805 (±0.017)   0.811 (±0.006)   0.816 (±0.004)   0.676 (±0.020)   0.684 (±0.017)   0.758

Results. We compare the predictive performance of different models in terms of F1 score on each relation prediction task. As shown in Table 2, our model obtains very competitive performance compared with a comprehensive list of baseline methods. Specifically, on the prediction tasks of MAY TREAT and CONTRAINDICATES, our model achieves a substantial improvement (1∼2 points in F1) and very competitive performance on the tasks of SYMPTOM OF and MAY PREVENT. The small amount of training data might partly explain why our model does not perform as well on the CAUSES task. This comparison shows the effectiveness of predicting relations based on associations and their relational interactions. Moreover, compared with baseline models that encode graph structure into latent vector representations, our model utilizes the co-occurrence graph more explicitly by leveraging the associative contexts symbolically to generate human-understandable rationales, which can assist medical experts as we will see shortly. In addition, we observe that our model consistently outperforms the CWA setting: although the CWA rationales are true statements on their own, they tend to have low coverage of possible rationales and thus may not be very relevant to the target relation prediction, which leads to poorer predictive performance.
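The CWA-rationale ranking described above, which orders candidate context pairs by the product of their retrieval probabilities pij = p(ei|eh) × p(ej|et), can be sketched as follows; the function and variable names are illustrative rather than taken from the released code, and known triples are simplified to a set of entity pairs.

```python
from itertools import product

def top_cwa_pairs(head_ctx_probs, tail_ctx_probs, known_pairs, k=5):
    """Rank (head-context, tail-context) pairs by p(e_i|e_h) * p(e_j|e_t),
    keeping only pairs that already appear as triples in the closed KB."""
    scored = []
    for (e_i, p_i), (e_j, p_j) in product(head_ctx_probs.items(),
                                          tail_ctx_probs.items()):
        if (e_i, e_j) in known_pairs:        # CWA: rationale must be a known triple
            scored.append((p_i * p_j, e_i, e_j))
    return sorted(scored, reverse=True)[:k]
```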
4.3 Model Rationale Evaluation
To measure the quality of our model rationales (i.e., the OWA rationales), as well as to conduct an ablation study of our model, we conduct an expert evaluation of the OWA rationales and also compare them with the CWA rationales. We first collaborate with a physician to explore how much a model's rationales help them better trust the model's predictions, following recent work on evaluating model interpretability (Singh et al., 2019; Mullenbach et al., 2018; Atutxa et al., 2019; Jin et al., 2020). Then, we present some case studies to show what kind of rationales our model has learnt. Note that compared with evaluation by human annotators for open-domain tasks (without expertise requirements), evaluation by medical experts is more challenging in general. The physician in our study (an M.D. with 9 years of clinical experience, currently a fellow trained in clinical informatics), who is able to understand the context of terms and the basics of the compared algorithms and can dedicate time, is qualified for our evaluation.

Expert Evaluation. We first explained to the physician the recall and recognition process in our framework and how the model rationales are developed. They endorsed such a reasoning process as one possible way to gain their trust in the model. Next, for each target pair for which our model correctly makes the prediction, they were shown the top-5 rationales produced by our framework and were asked whether each rationale helps them better trust the model prediction. For each rationale, they were asked to score it from 0 to 3, where 0 is not helpful, 1 is a little helpful, 2 is helpful, and 3 is very helpful. In addition to the individual rationale evaluation, we further compare the overall quality of the CWA and OWA rationales by letting the expert rank them based on the helpfulness of each set of rationales (the rationale set ranked higher gets a ranking score of 1, and both get 0 if they have the same rank). We refer readers to the appendix for more details of the evaluation protocol. We randomly select 30 cases of the MAY TREAT relation, and the overall evaluation results are summarized in Table 3.

Table 3: Human evaluation on the quality of rationales.
  Metric                OWA Rationales   CWA Rationales
  Ranking Score         17               5
  Avg. Sum Score/Case   6.14             2.24
  Avg. Max Score/Case   2.04             0.77

Out of the 30 cases, OWA wins in 17 and gets higher scores on individual rationales per case on average. There are 8 cases where the two sets of rationales are ranked the same (of which 7 are judged equally unhelpful) and 5 cases where CWA is better. To get a better idea of how the OWA model obtains more trust, we calculate the average sum score per case, which shows that the OWA model gets a higher overall score per case. Considering that in some cases only a few rationales receive non-zero scores, we also calculate the average max score per case, which shows that our OWA model generally provides one helpful rationale (score > 2) per case. Overall, as we can see, the OWA rationales are more helpful for gaining expert trust.
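For concreteness, the aggregate numbers in Table 3 can be computed from the per-case rationale scores roughly as sketched below; `case_scores` and `rank_wins` are hypothetical data structures, not part of the released evaluation code.

```python
def aggregate_rationale_scores(case_scores):
    """case_scores: one list of 0-3 rationale scores per evaluated case."""
    avg_sum = sum(sum(s) for s in case_scores) / len(case_scores)  # Avg. Sum Score/Case
    avg_max = sum(max(s) for s in case_scores) / len(case_scores)  # Avg. Max Score/Case
    return avg_sum, avg_max

def ranking_score(rank_wins, model):
    """rank_wins: per-case winner labels, e.g. ['OWA', 'CWA', 'tie', ...];
    a model earns one point for every case it wins."""
    return sum(1 for winner in rank_wins if winner == model)
```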
Case Study. Table 4 shows two concrete examples demonstrating what kind of model rationales our framework bases its predictions on. We highlight the rationales that receive high scores from the physician for being especially useful for trusting the prediction. As we can see, our framework is able to make correct predictions based on reasonable rationales. For instance, to predict that "cephalosporins" may treat "bacterial infection", our model relies on the rationale that "cefuroxime" may treat "infectious diseases". We also note that not all rationales are clinically established facts or even make sense, due to the unsupervised rationale learning and the probabilistic assumption formation process, which leaves space for future work to further improve the quality of rationales. Nevertheless, such model rationales can provide valuable information or new insights for clinicians. For another example, as pointed out by the physician, different medications possibly having the same treatment response, as shown in Case 2, could be clinically useful. That is, if three medications are predicted to possibly treat the same condition and a physician is only aware of two doing so, one might get insights into trying the third one. To summarize, our model is able to provide reasonable rationales and help users understand how model predictions are made in general.

Table 4: Case studies for rationalizing medical relation prediction. For each case, the first line is the target pair and the following lines are the top-5 rationales, each linking a context of the head term to a context of the tail term through a relational association (rationales scored highly by the physician were marked as useful in the original table).
  Case 1: cephalosporins may treat bacterial infection
    cefuroxime may treat viral syndrome
    cefuroxime may treat low grade fever
    cefuroxime may treat infectious diseases
    cefuroxime may prevent low grade fever
    sulbactam may treat low grade fever
  Case 2: azelastine may treat perennial allergic rhinitis
    astepro may treat perennial allergic rhinitis
    pseudoephedrine may treat perennial allergic rhinitis
    ciclesonide may treat perennial allergic rhinitis
    overbite may treat perennial allergic rhinitis
    diclofenac may treat perennial allergic rhinitis

5 Related Work
Relation Extraction (RE) typically focuses on predicting relations between two entities based on their text mentions, and has been well studied in both the open domain (Mintz et al., 2009; Zeng et al., 2015; Riedel et al., 2013; Lin et al., 2016; Song et al., 2019; Deng and Sun, 2019) and the biomedical domain (Uzuner et al., 2011; Wang and Fan, 2014; Sahu et al., 2016; Lv et al., 2016; He et al., 2019). Among them, most state-of-the-art work develops various powerful neural models by leveraging human annotations, linguistic patterns, distant supervision, etc. More recently, an increasing amount of work has been proposed to improve model transparency and interpretability. For example, Lee et al. (2019) visualize self-attention weights learned from BERT (Devlin et al., 2019) to explain relation prediction. However, such text-based interpretable models tend to provide explanations within a local context (e.g., words in a single sentence mentioning the target entities), which may not capture a holistic view of all entities and their relations stored in a text corpus. We believe that such a holistic view is important for interpreting relations and can be provided to some degree by the global statistics from a text corpus. Moreover, global statistics have been widely used in the clinical domain as they can better preserve patient privacy (Finlayson et al., 2014; Wang et al., 2019).
On the other hand, in recent years, graph embedding techniques (Perozzi et al., 2014; Tang et al., 2015; Grover and Leskovec, 2016; Yue et al., 2019) have been widely applied to learn node representations based on graph structure. Representation learning based on global statistics from a text corpus (i.e., co-occurrence graph) has also been studied (Levy and Goldberg, 2014; Pennington et al., 2014). After employing such methods to learn entity embeddings, a number of relation classifiers (Nickel et al., 2011; Bordes et al., 2013; Socher et al., 2013; Yang et al., 2015; Wang et al., 2018) can be adopted for relation prediction. We compare our method with such frameworks to show its competitive predictive accuracy. However, such frameworks tend to be difficult to interpret as they provide little or no explanations on how decisions are made. In this paper, we focus more on model interpretability than predictive accuracy, and draw inspirations from existing cognitive theories of recall and recognition to develop a new framework, which is our core contribution. Another line of research related to interpreting relation prediction is path-based knowledge graph (KG) reasoning (Gardner et al., 2014; Neelakantan et al., 2015; Guu et al., 2015; Xiong et al., 2017; Stadelmaier and Pad´o, 2019). In particular, existing paths mined from millions of relational links in knowledge graphs can be used to provide justifications for relation predictions. For example, to explain Microsoft and USA may hold the relation CountryOfHeadquarters, by traversing a KG, one can extract the path Microsoft IsBasedIn −−−−−→Seattle CountryLocatedIn −−−−−−−−−→USA as one explanation. However, such path-finding methods typically require largescale relational links to infer path patterns, and cannot be applied to our co-occurrence graph as the co-occurrence links are unlabeled. In addition, our work is closely related to the area of rationalizing machine decision by generating justifications/rationales accounting for model’s prediction. In some scenarios, human rationales are provided as extra supervision for more explainable models (Zaidan et al., 2007; Bao et al., 2018). However, due to the high cost of manual annotation, model rationales are desired to be learned in an unsupervised manner(Lei et al., 2016; Bouchacourt and Denoyer, 2019; Zhao et al., 2019). For example, Lei et al. (2016) select a subset of words as rationales and Bouchacourt and Denoyer (2019) provide an explanation based on the absence or presence of “concepts”, where the selected words and “concepts” are learned unsupervisedly. Different from text-based tasks, in this paper, we propose to rationalize relation prediction based on global cooccurrence statistics and similarly, model rationales in our work are captured without explicit manual annotation either, via a joint training framework. 6 Conclusion In this paper, we propose an interpretable framework to rationalize medical relation prediction based on corpus-level statistics. Our framework is inspired by existing cognitive theories on human memory recall and recognition, and can be easily understood by users as well as provide reasonable explanations to justify its prediction. Essentially, it leverages corpus-level statistics to recall associative contexts and recognizes their relational connections as model rationales. Compared with a comprehensive list of baseline models, our model obtains competitive predictive performances. 
Moreover, we demonstrate its interpretability via expert evaluation and case studies. Acknowledgments We thank Srinivasan Parthasarathy, Ping Zhang, Samuel Yang and Kaushik Mani for valuable discussions. We also thank the anonymous reviewers for their hard work and constructive feedback. This research was sponsored in part by the PatientCentered Outcomes Research Institute Funding ME-2017C1-6413, the Army Research Office under cooperative agreements W911NF-17-1-0412, NSF Grant IIS1815674, and Ohio Supercomputer Center (Center, 1987). The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S.Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. 8087 References Accountability Act. 1996. Health insurance portability and accountability act of 1996. Public law, 104:191. Aitziber Atutxa, Arantza D´ıaz de Ilarraza, Koldo Gojenola, Maite Oronoz, and Olatz Perez-de Vi˜naspre. 2019. Interpretable deep learning to map diagnostic texts to icd-10 codes. International Journal of Medical Informatics, 129:49–59. Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1903–1913. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26, pages 2787–2795. Diane Bouchacourt and Ludovic Denoyer. 2019. Educe: Explaining model decisions through unsupervised concepts extraction. arXiv preprint arXiv:1905.11852. Raluca Budiu. 2014. Memory recognition and recall in user interfaces. Nielsen Norman Group. Ohio Supercomputer Center. 1987. Ohio supercomputer center. Ismail Ilkan Ceylan, Adnan Darwiche, and Guy Van den Broeck. 2016. Open-world probabilistic databases. In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning. Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. 2017. Chains of reasoning over entities, relations, and text using recurrent neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 132–141. Xiang Deng and Huan Sun. 2019. Leveraging 2-hop distant supervision from table entity pairs for relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 410–420. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Samuel G Finlayson, Paea LePendu, and Nigam H Shah. 2014. Building the graph of medicine from millions of clinical narratives. Scientific data, 1:140032. Matt Gardner, Partha Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating vector space similarity in random walk inference over knowledge bases. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 397–406. Gary Gillund and Richard M Shiffrin. 1984. A retrieval model for both recognition and recall. Psychological review, 91(1):1. Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, page 855–864. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318–327. Bin He, Yi Guan, and Rui Dai. 2019. Classifying medical relations in clinical text via convolutional neural networks. Artificial Intelligence in Medicine, 93:43– 49. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38. Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, and Xiang Ren. 2020. Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models. In International Conference on Learning Representations. Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69–74. Michael Kahana, Marc Howard, and Sean Polyn. 2008. Associative retrieval processes in episodic memory. Psychology. D. P. Kingma and J. Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR) 2015. Joohong Lee, Sangwoo Seo, and Yong Suk Choi. 2019. Semantic relation classification via bidirectional lstm networks with entity-aware attention using latent entity typing. Symmetry, 11(6):785. 8088 Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Paea LePendu, Srinivasan V Iyer, C´edrick Fairon, and Nigam H Shah. 2012. Annotation analysis for testing drug safety signals using unstructured clinical notes. In Journal of biomedical semantics, volume 3, page S5. BioMed Central. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems 27, pages 2177–2185. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124–2133. Ryan Lowe, Nissan Pow, Iulian Serban, Laurent Charlin, and Joelle Pineau. 2015. Incorporating unstructured textual knowledge sources into neural dialogue systems. In Neural information processing systems workshop on machine learning for spoken language understanding. Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior research methods, instruments, & computers, 28(2):203–208. Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30, pages 4765–4774. 
Xinbo Lv, Yi Guan, Jinfeng Yang, and Jiawei Wu. 2016. Clinical relation extraction with deep learning. International Journal of Hybrid Information Technology, 9(7):237–248. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119. Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1–38. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101–1111. W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. 2019. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44):22071–22080. Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 156–166. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, page 809–816. Peter A Nobel and Richard M Shiffrin. 2001. Retrieval processes in recognition and cued recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(2):384. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, et al. 2017. Automatic differentiation in pytorch. In NIPS-W. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, page 701–710. Meng Qu, Xiang Ren, Yu Zhang, and Jiawei Han. 2018. Weakly-supervised relation extraction by patternenhanced embedding learning. In Proceedings of the 2018 World Wide Web Conference, WWW ’18, page 1257–1266. Raymond Reiter. 1981. On closed world data bases. In Readings in artificial intelligence, pages 119–140. Elsevier. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, page 1135–1144. 8089 Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. 
In Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part III, ECML PKDD’10, page 148–163. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215. Sunil Sahu, Ashish Anand, Krishnadev Oruganty, and Mahanandeeshwar Gattu. 2016. Relation extraction from clinical texts using domain invariant convolutional neural network. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing, pages 206–215. Chandan Singh, W. James Murdoch, and Bin Yu. 2019. Hierarchical interpretations for neural network predictions. In International Conference on Learning Representations. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems 26, pages 926–934. Linfeng Song, Yue Zhang, Daniel Gildea, Mo Yu, Zhiguo Wang, and Jinsong Su. 2019. Leveraging dependency forest for neural medical relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 208–218. Donald P Spence and Kimberly C Owens. 1990. Lexical co-occurrence and association strength. Journal of Psycholinguistic Research, 19(5):317–330. Josua Stadelmaier and Sebastian Pad´o. 2019. Modeling paths for explainable knowledge base completion. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 147–157. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15, page 1067–1077. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66. ¨Ozlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552–556. Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across nlp tasks. arXiv preprint arXiv:1909.11218. Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 872–884. Chang Wang, Liangliang Cao, and James Fan. 2016. Building joint spaces for relation extraction. In IJCAI, pages 2936–2942. Chang Wang and James Fan. 2014. Medical relation extraction with manifold models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 828–838. Yanjie Wang, Rainer Gemulla, and Hui Li. 2018. 
On multi-relational link prediction with bilinear models. In Thirty-Second AAAI Conference on Artificial Intelligence. Zhen Wang, Xiang Yue, Soheil Moosavinasab, Yungui Huang, Simon Lin, and Huan Sun. 2019. Surfcon: Synonym discovery on privacy-aware clinical data. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, page 1578–1586. Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. DeepPath: A reinforcement learning method for knowledge graph reasoning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 564–573. Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on Freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2326–2336. Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In 8090 Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 764–777. Xiang Yue, Zhen Wang, Jingong Huang, Srinivasan Parthasarathy, Soheil Moosavinasab, Yungui Huang, Simon M Lin, Wen Zhang, Ping Zhang, and Huan Sun. 2019. Graph embedding on biomedical networks: methods, applications and evaluations. Bioinformatics, 36(4):1241–1251. Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using “annotator rationales” to improve machine learning for text categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260–267. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753– 1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335–2344. Jie Zhao, Ziyu Guan, and Huan Sun. 2019. Riker: Mining rich keyword representations for interpretable product question answering. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, page 1389–1398. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 207–212. A Appendices A.1 Implementation Details. 
We implemented our model in PyTorch (Paszke et al., 2017) and optimized it with the Adam optimizer (Kingma and Ba, 2015). The dimension of term/node embeddings is set to 128. The number of negative triples for relational learning is set to 100. The number of association contexts used for assumption formation, Nc, is 32. Early stopping is applied when the performance on the dev set does not increase for 10 consecutive epochs. We augment the relation triples for optimizing Lr (Eqn. 12) by adding their reverse relations for better training. We obtain DeepWalk and LINE (2nd) embeddings with OpenNE (https://github.com/thunlp/OpenNE) and word2vec embeddings by SVD decomposition of the shifted PPMI co-occurrence matrix (Levy and Goldberg, 2014). Code, the dataset, and more implementation details are available online at https://github.com/zhenwang9102/X-MedRELA.

A.2 Training Algorithm

Algorithm 1: CogStage Training Algorithm
INPUT: corpus statistics G, gold triples P, binary relation data {(hk, tk), yk}, k = 1, ..., M
OUTPUT: model parameters
1:  repeat
2:    Sample {ei}, i = 1, ..., b1, with gold contexts from G
3:    for i = 1 to b1 do
4:      Calculate p(ej|ei) and p̂(ej|ei)
5:      Optimize Ln by Eqn. 3
6:    Sample {(hi, ri, ti)}, i = 1, ..., b2, from P
7:    for i = 1 to b2 do
8:      Generate Nn corrupted triples
9:      Optimize Lr by Eqn. 12
10:   Sample {(hi, ti), yi}, i = 1, ..., b3
11:   for i = 1 to b3 do
12:     Calculate p(ej|hi) and p(ej|ti)
13:     Get contexts {a_h^m}, m = 1, ..., Nc, and {a_t^n}, n = 1, ..., Nc
14:     Optimize Lp by Eqn. 14
15: until convergence

[Figure 4: Evaluation interface for expert evaluation. For each case, the interface shows the target pair (e.g., "unfractionated heparin" and "myocardial infarction (MI)") predicted to hold the may_treat relation, asks whether the expert is familiar with the two terms, asks the expert to score every rationale in each model's rationale set from 0 (not helpful) to 3 (very helpful), and finally asks the expert to rank the rationale sets by how much they help trust the prediction (ties are allowed if both are unhelpful).]

Table 5: Relations in our dataset and their mapped UMLS semantic relations. (The UMLS relation "Treats" does not exist in our dataset and hence is not mapped to the "May treat" relation.)
  May treat        -> may treat
  May prevent      -> may prevent
  Contraindicates  -> has contraindicated drug
  Causes           -> cause of; induces; causative agent of
  Symptom of       -> disease has finding; disease may have finding; has associated finding; has manifestation; associated condition of; defining characteristic of
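Appendix A.1 mentions obtaining word2vec-style embeddings via SVD over the shifted PPMI co-occurrence matrix (Levy and Goldberg, 2014). A minimal, dense-matrix sketch of that construction is given below; variable names are illustrative, and a shift of 1 (i.e., plain positive PMI) is assumed by default.

```python
import numpy as np

def shifted_ppmi_svd(cooc, dim=128, shift=1.0):
    """SVD factorization of the shifted PPMI matrix.
    cooc: (V, V) symmetric co-occurrence count matrix (dense, for illustration)."""
    total = cooc.sum()
    p_ij = cooc / total
    p_i = cooc.sum(axis=1, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_ij / (p_i * p_i.T))
    sppmi = np.maximum(pmi - np.log(shift), 0.0)     # shifted positive PMI
    sppmi[~np.isfinite(sppmi)] = 0.0                 # zero out -inf / nan entries
    u, s, _ = np.linalg.svd(sppmi, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])             # (V, dim) term embeddings
```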
2020
719
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 789–799 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 789
Interactive Construction of User-Centric Dictionary for Text Analytics
Ryosuke Kohita, Issei Yoshida, Hiroshi Kanayama, Tetsuya Nasukawa
IBM Research
[email protected], {issei,hkana,nasukawa}@jp.ibm.com
Abstract We propose a methodology to construct a term dictionary for text analytics through an interactive process between a human and a machine. The interactive approach helps the creation of flexible dictionaries with the precise granularity required in text analysis. This paper introduces the first formulation of interactive dictionary construction to address this issue. To optimize the interaction, we propose a new algorithm that effectively captures an analyst's intention starting from only a small number of sample terms. Along with the algorithm, we also design an automatic evaluation framework that provides a systematic assessment of any interactive method for the dictionary creation task. Experiments using real-scenario-based corpora and dictionaries show that our algorithm outperforms baseline methods and works even with a small number of interactions. Also, we provide our dataset for future studies (https://github.com/kohilin/IDC-evalset.git).
1 Introduction Since the emergence of practical interest in text analytics that finds insights from massive documents (Nasukawa and Nagano, 2001), there have been several requirements for enhancing valuable discoveries. The critical issue we tackle in this paper is the effective construction of a term dictionary (Godbole et al., 2010). The term dictionary, which is an arbitrary set of terms, is used in text analytics to represent interesting analysis perspectives (Nasukawa and Nagano, 2001; Nasukawa, 2009); for example, dictionaries of "product names" and "evaluative description" are required for mining customer reputations about products. The motivation of this paper is how to reduce the human workload for the dictionary construction as much as possible. To this end, we establish a methodology of interactive dictionary construction that incrementally captures an analyst's intention starting from a small number of sample terms and enables him/her to effortlessly expand the terms in the intended dictionary through suggestions by a machine.
[Figure 1: Typical dictionaries in previous works (upper; e.g., synonyms of "U.S.A." and medicine names) and flexible, fine-grained dictionaries in this work (lower; e.g., evaluative, functional, and appearance descriptions with vague, overlapping boundaries).]
A term dictionary for text analytics is expensive to construct because we need to focus on terms with flexible granularity for in-depth analysis (Takeuchi et al., 2009; Godbole et al., 2010; Mostafa, 2013). For instance, if the analyst wants to examine product evaluation from both its function and appearance, he/she then needs to separately create those dictionaries, whose boundaries are vague and overlapping (Figure 1). In short, we need to group together whatever terms the analyst wants, depending on the documents and the objective of the analysis, which forces an ad hoc construction of the term dictionary.
This situation is rather severe in real-world tasks because the vocabulary size for an exhaustive search of the texts is vast, and the analyst will go through repeated trial and error in creating dictionaries until he/she reaches findings. At present, there is a demand for a machine that decreases the cost of such ad hoc dictionary construction. As dictionary construction can be considered a type of term collection, there is a related research field, set expansion, which expands a small set of terms by means of bootstrapping (Pantel and Pennacchiotti, 2006). This approach automatically finds new terms for the given set from documents in accordance with a predefined exploration strategy (Pantel et al., 2009; He and Xin, 2011). Although such an automatic procedure is advantageous for reducing the human workload, the quality of the collected terms is questionable for a term dictionary. For example, a good analysis requires more fine-grained dictionaries than the original targets in set expansion such as distinct ontological terms (e.g., country names; Shen et al. 2017, 2018). Several studies have incorporated a human in the term collection process (Godbole et al., 2010; Coden et al., 2012). Specifically, dictionaries are built in an interactive process where the human gives feedback to the machine and the machine suggests candidates based on the given feedback (Alba et al., 2017, 2018). Such a human-in-the-loop approach has been an active topic in other fields as well, for instance, image classification (Cui et al., 2016), dialogue systems (Li et al., 2017), and audio annotation (Kim and Pardo, 2018). We can generally expect that reliable feedback provided by a human makes a system more accurate. With respect to dictionary construction, however, experimental results in this vein are limited due to empirical evaluation by just a few participants and the use of coarse dictionaries as test items. In short, it is still an open question what the critical issues are for interactive construction of fine-grained term dictionaries for text analytics. Moving in the same promising direction of leveraging both a human and a machine, we establish a well-defined and effective methodology for constructing the term dictionary. In summary, our contribution in this paper is fourfold: (i) We formulate the interactive process of term collection, which brings clarity to the problem to be solved (§2). (ii) We develop a method that captures an analyst's intention from a small number of samples with our formulation as the basis (§3). (iii) We propose an automatic evaluation framework that provides a systematic assessment for interactive methods (§4). (iv) Our experimental results show that the proposed method surpasses baseline methods such as set expansion, word embeddings, and a linear classifier on a crowdsourced dataset. The dataset emulates the real-world scenario of flexible and fine-grained dictionary construction, and we distribute the dataset to the public (§5).
[Figure 2: Interactive dictionary construction. The user asks for terms related to "Formal"; the system suggests candidates ("Nice", "Traditional"); the user accepts "Traditional" and rejects "Nice"; the system then suggests new candidates ("Stretchy", "Yellow").]
2 Task Definition In this section, we provide the definitions and notations used throughout this paper. First, a term is a string representation of a certain notion, such as "apple" and "New York". A dictionary is a collection of terms.
A user denotes the person who wants to construct a dictionary, and system denotes the machine that helps the user. Let W be the whole set of terms in the documents. Our objective is to rapidly find as many terms of the user's interest U ⊂ W as possible. As seen in Figure 2, interactive dictionary construction is defined as an iterative process in which each iteration consists of the following steps: 1) User feedback, in which the user selects terms for the dictionary from the current candidate terms, and 2) Candidate selection, in which the system finds candidate terms for the next user feedback. For the i-th iteration (i = 0, 1, 2, ...), let Ci be the set of terms that the system finds in the candidate selection step and Ui be the set of terms that the user selects from Ci−1 in the user feedback step as positive examples. Here, U0 is a special feedback we call seed terms, which are directly given by the user first. Note that, because we wish to expand the dictionary, each term in Ci should be new to the user in the (i + 1)-th iteration. In the i-th step of the user feedback (i ≥ 1), we assume that the user can annotate which terms in Ci−1 are in U without being aware of the whole U. So, Ui ⊂ U for each i. Let eUi := U0 ∪ U1 ∪ ... ∪ Ui be the set of terms of the user's interest that has been found by the end of the i-th iteration. However, it is impractical to define our objective as an optimization problem for the asymptotic convergence of eUi because the user feedback is done by a human, and i cannot be large. Hence, we try to maximize |Ci ∩ U|, the number of suggested terms that match the user's interest. Also, since Ci is manually reviewed by a human user, the proper size of Ci is practically limited to 5∼10.
[Figure 3: Task definition. Starting from the seed terms (the initial user feedback), the system performs candidate selection, the user gives feedback, and the selected terms accumulate into eUi.]
Figure 3 shows the steps from setting the seed terms to giving the first feedback on the first candidates. Using the example in Figure 2, U0 is {Formal}, C0 is {Nice, Traditional}, U1 is {Traditional}, and C0\U is {Nice}. The system then selects C1 based on U0 and U1 (i.e., eU1) from W except for the already shown terms C0 ∪ eU1. It is important that we design the system to be effective so that the overlap between Ci and U becomes larger. There are two major challenges for this problem: one is the number of seed terms, and the other is the term overlap between different dictionaries. In terms of the first issue, we have only a few seed terms for the target dictionary at the first iteration. If the system requires more seed terms, the advantage of the system drops because it contradicts our purpose of decreasing the human workload in constructing the dictionary. Therefore, we need a method that captures the user's intention from a smaller number of samples. In terms of the second issue, identifying terms of the user's interest is difficult because the boundaries between dictionaries often overlap in text analytics, as seen in Figure 1. In other words, the system needs to be more sensitive to subtle semantic differences with only a few feedback rounds.
3 Method In this section, we first describe a previous candidate selection model, the SetExpan algorithm (Shen et al., 2017), which inspired our method (§3.1). Subsequently, we introduce our method as a weighted version of SetExpan with improvements for dealing with interactive settings (§3.2∼). Throughout this section, we discuss the i-th step of candidate selection for a certain i. For simplicity, Ci and eUi are denoted as C and eU, respectively.
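A minimal sketch of the interaction loop defined in §2 (and of the oracle-based emulation used later in §4.1) is shown below; the `system` object and its `suggest`/`update` methods are placeholders for any candidate-selection model, not part of the paper's released code.

```python
def interactive_construction(system, seeds, oracle, n_iter=30, n_cand=10):
    """Emulate the loop of Section 2: the system suggests candidates and
    the user (here an oracle dictionary U*) marks which ones are relevant."""
    found = set(seeds)          # eU: accumulated terms of the user's interest
    shown = set(seeds)          # terms already presented, never re-suggested
    for _ in range(n_iter):
        candidates = system.suggest(found, exclude=shown, k=n_cand)   # C_i
        positives = [t for t in candidates if t in oracle]            # U_i
        negatives = [t for t in candidates if t not in oracle]        # N_i
        found.update(positives)
        shown.update(candidates)
        system.update(positives=found, negatives=negatives)           # optional training step
    return found
```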
3.1 Candidate Selection: Similarity Scoring based on Feature Collection
As we stated in §2, the objective of the task is to suggest a C that contains as many terms in U as possible. Recall that eU is a set of positive examples for terms of the user's interest that have been found in previous steps. Following the strategy taken in set expansion (Shen et al., 2017), a straightforward and reasonable approach to determine C is to define Sim(e, e′|F), which returns a similarity score for two terms e and e′ based on a set of features F, and then to select the terms that are most similar to the positive terms in eU. The issue is how to obtain the ideal F that assigns a higher score to terms potentially included in U. Shen et al. (2017) formulate this feature selection problem as choosing a fixed number Q of features so that the positive terms are most similar to each other:

$$F^* = \arg\max_{|F|=Q} \sum_{1 \le i \le j \le n} \mathrm{Sim}(e_i, e_j \mid F), \quad (1)$$

where $\widetilde{U} := \{e_1, \ldots, e_n\}$ (i.e., the positive set eU). They propose using the Jaccard coefficient for Sim(ei, ej|F), which narrows the optimization problem to a binary decision on whether to use each feature. This combinatorial problem is NP-hard; hence, they use heuristics to choose an approximation of F*.

3.2 From Feature Selection to Feature Weighting with Predefined Similarity
Instead of explicitly choosing features to use in the similarity calculation, we consider using all of the possible features {f1, . . . , fL} with a weight wk ∈ R for each feature fk. In addition, we define our optimization problem as finding the best wk for fk (k = 1, . . . , L). Let us develop a formula that extends (1) and takes wk into consideration. First, in such a formula, Sim(ei, ej|F) should be a weighted sum of the similarity scores for each feature fk, denoted as Sim(ei, ej|fk). By replacing F with w in the expression of the similarity function, we have

$$\mathrm{Sim}(e_i, e_j \mid w) = \sum_{k=1}^{L} w_k \cdot \mathrm{Sim}(e_i, e_j \mid f_k). \quad (2)$$

Next, to define the similarity between a term e and eU, we assume that the similarity is the average of the similarities between e and each ei ∈ eU, that is,

$$\mathrm{Sim}(e, \widetilde{U} \mid w) := \frac{1}{n} \sum_{i=1}^{n} \mathrm{Sim}(e, e_i \mid w). \quad (3)$$

The initial formulation of our optimization problem is thus as follows:

$$w^* = \arg\max_{w} \sum_{1 \le i \le n} \mathrm{Sim}(e_i, \widetilde{U} \mid w). \quad (4)$$

We show in the Appendix that our formulation (4) can be considered the weighted version of (1) under the natural condition that Sim(ei, ei|fk) = Sim(ej, ej|fk) for any i, j, and k, and that $\sum_{k=1}^{L} w_k = 1$. It is easy to set Sim(e, e′|fk) satisfying this condition. For a feature fk, we define a vector vfk(e) for each term e and define Sim(e, e′|fk) as the standard inner product of vfk(e) and vfk(e′). Then, by normalizing all these vectors, Sim(ei, ei|fk) = ∥vfk(ei)∥ = 1 holds for any i; hence, the condition is satisfied, and this is the conventional cosine similarity of word vectors (Levy et al., 2015). Thus, any mapping from W to a vector space is available as a feature, such as the tf-idf of terms and discrete features (Manning et al., 2008), word2vec (Mikolov et al., 2013), or GloVe (Pennington et al., 2014). Note that the dimension of the vector space may differ among the features. Hence, we assume vfk(e) is defined for each feature fk and any e. When we use Sim(e, e′|fk) = vfk(e) · vfk(e′), (2) is computed by

$$\mathrm{Sim}(e_i, e_j \mid w) = \sum_{k=1}^{L} w_k \cdot v_{f_k}(e_i) \cdot v_{f_k}(e_j), \quad (5)$$

and by a simple calculation, (3) is equal to

$$\mathrm{Sim}(e, \widetilde{U} \mid w) = \sum_{k=1}^{L} w_k \cdot v_{f_k}(e) \cdot v_{f_k}(\widetilde{U}), \quad (6)$$

where $v_{f_k}(\widetilde{U}) := \frac{1}{n} \sum_{i=1}^{n} v_{f_k}(e_i)$ is the centroid vector of $\{v_{f_k}(e_i)\}_{i=1,\ldots,n}$ in the feature space of fk. We simply call vfk(eU) the centroid of eU.
[Figure 4: Feature weighting puts weights on feature spaces by placing terms of the user's interest nearby.]
Formulas (5) and (6) demonstrate that the similarity between any two terms can be measured by combining the characteristics of the L different feature spaces. We "select" the feature spaces in which the terms in eU become similar to each other by adjusting the weights, as shown in Figure 4.
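A small sketch of the weighted similarity in Eqs. (5)-(6), assuming each feature space is given as a matrix of L2-normalized term vectors; the function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def weighted_similarity(term_vecs, positive_ids, weights):
    """Score every term against the centroid of the positive set eU in each
    feature space, then combine the per-space cosine scores with weights w.
    term_vecs: list of L arrays, each (V, d_k) with L2-normalized rows.
    positive_ids: indices of terms in eU.  weights: array of length L."""
    scores = np.zeros(term_vecs[0].shape[0])
    for w_k, V_k in zip(weights, term_vecs):
        centroid = V_k[positive_ids].mean(axis=0)   # v_fk(eU)
        scores += w_k * (V_k @ centroid)            # w_k * v_fk(e) . v_fk(eU)
    return scores

# Candidate selection then takes the highest-scoring terms not shown yet, e.g.:
# candidates = [t for t in np.argsort(-scores) if t not in shown][:10]
```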
When we use Sim(e, e′|fk) = vfk(e) · vfk(e′), (2) is computed by Sim(ei, ej|w) = L X k=1 wk · vfk(ei) · vfk(ej), (5) and by a simple calculation, (3) is equal to Sim(e, eU|w) = L X k=1 wk · vfk(e) · vfk(eU), (6) where vfk(eU) := 1 n Pn i=1 vfk(ei) is the centroid vector for {vfk(ei)}i=1,...,n in the feature space of !𝑈= { } 𝑓' 𝑓( !𝑈= { } Weight more 𝑓' Weight more 𝑓( Figure 4: Feature weighting puts weights on feature spaces by placing terms of user’s interest nearby. fk. We simply call vfk(eU) the centroid of eU. Formulas (5) and (6) demonstrates that the similarity between any two terms can be measured by combining the characteristics of the L different feature spaces. We “select” the feature spaces in which terms in eU become similar to each other by adjusting the weights, as shown in Figure 4. Note that our feature weighting formulation is categorized as a conventional linear regression that finds fk characterizing eU via the weights. Instead of calculating the weights for bare features of each term, our method estimates those for differently predefined feature spaces (i.e., the similarity scores in these spaces). It aims to mitigate the difficulty of finding optimal weights for the vast number of features only from few labeled samples. However, the drawback is that this sacrifices a model’s degree of freedom; therefore, we test the effectiveness of our proposed model compared to an ordinary linear classifier in the experiment. 3.3 Optimization by User Feedback Although the initial formulation (4) proved to be a natural extension of the discrete version of feature selection, it does not always work as expected. In this section, we discuss the reason for this and how we can improve the initial formulation of our optimization problem. By substituting (2) and (3) into (4), the objective P 1≤i≤n Sim(ei, eU|w) is a linear function of w. Assuming that PL k=1 wk = 1, the optimal w is determined by putting all the weight values on a particular feature space which has the highest score in the averaged similarity between the terms in eU and the centroid of eU. This is equivalent to selecting only one feature space for the similarity computation. Such extreme optimization is not suitable for our interactive setting because the target dictionary is obscure, especially in earlier iter793 ations. We want the system to diversify the candidate terms to broadly cover the user’s interests and allow the user to discover related vocabularies for a customized dictionary. To address this issue, we modify our formulation of (4) as w∗= arg max w min 1≤i≤n Sim(ei, eU|w). (7) We maximize the minimum similarity score between a term in eU and the centroid of eU. The idea here is to reduce the distance between the farthest positive term and the centroid. This strategy is analogous to those used in active learning, where examples near the separating hyperplane are actively leveraged (Schohn and Cohn, 2000). Our objective function min1≤i≤n Sim(ei, eU|w) is a concave function of w (see Appendix); therefore, we can solve it by (for example) gradient descent. We can also leverage negative feedback, i.e., unselected terms in C, to make the system more sophisticated. Let N := C \U = {z1, . . . , zm}, then we can extend (7) by w∗ = arg max w n min 1≤i≤n Sim(ei, eU|w) − max 1≤j≤m Sim(zj, eU|w) o . (8) The second term on the right-hand side of (8) increases the distance between the closest negative term and the centroid of eU. 
Again the objective function of (8) is a concave function of w; thus, the information of both positive and negative examples is taken into consideration to learn the optimal w∗. 3.4 Feedback Denoising Although our min-maximize optimization strategy diversifies candidates, it may be disadvantageous in terms of the system being affected by outliers. It happens that several terms in eU (especially for manually fed terms such as seeds) distribute differently in possessing feature spaces compared to the rest of the positive terms. Such a case holds up the learning because the maximum similarity score of the outliers to the centroid is low. The left side of Figure 5 shows an example of this problem: specifically, the system cannot put a higher weight value on f1 because the optimization target, which is the most distant one from the centroid (“watermelon” in this case), is biased to f2. Feedback denoising is a simple solution to this problem. We apply a clustering algorithm (e.g., 𝑓" 𝑓# 𝑓" 𝑓# without feedback denoising Centroid Centroid Outlier with feedback denoising $𝑈('∗) Figure 5: The difference in terms used in learning (blue) with/without feedback denoising. K-Means) to terms in eU, and obtain K term sets eU(0), eU(1), ..., eU(K). Then, we conduct the optimization by replacing eU in (7) and (8) with eU(K∗) where K∗= arg maxK|eU(K)|, that is, the majority class among terms in eU as shown in the right side of Figure 5. This is effective for denoising irregular terms with respect to feature distribution, and for guiding the system to a promising w∗. 4 Evaluation Framework In this section, we explain an automatic evaluation framework for interactive dictionary construction. By using a predefined dictionary as the oracle dictionary U ∗, we emulate the manual feedback process and apply a new evaluation metric to estimate the effectiveness of building a dictionary with consideration of the human interaction. 4.1 Human Emulation We describe the emulation process with U ∗, and the entire flow of the emulation procedure is in Algorithm 1. At the beginning of the emulation process, a small number of seed terms are randomly chosen from U ∗, and U0 is initialized with them (l.1). The number of iterations I (l.2) and the number of suggested terms per iteration |C| (l.3) are also determined. The iteration consisting of user feedback and candidate selection is then launched. In every i-th iteration, the system first suggests the Ci based on the known positive terms eUi−1 (l.5). After receiving the suggested Ci, the automatic evaluation process takes the intersection of Ci and U ∗, and records the overlapped terms as Ui (l.6). It also takes the difference set of Ci and U ∗as the negative terms Ni (l.7). If the system is trainable, its training process runs before moving to the next iteration (l.8 −10). 794 Algorithm 1 Human emulation with oracle dictionary 1: SET seed terms U0 from U ∗ 2: SET number of iterations I 3: SET number of suggested terms per iteration |C| 4: for i = 1 to I do 5: Ci ←Suggest from eUi−1 6: Ui ←Ci ∩U ∗ 7: Ni ←Ci\U ∗ 8: if System is trainable then 9: Run training with eUi (and e Ni) 10: end if 11: end for 4.2 A Metric for Effectiveness Estimation In addition to the automatic evaluation process, we introduce a new metric that takes the interaction quality into account when evaluating the accuracy of the candidate selection. 
The final goal of dictionary construction is to obtain a complete set of terms consistent with U ∗; however, there is a limitation stemming from a user’s workload in real scenarios. Given that an effective system should suggest terms of user’s interest in earlier iterations, we propose weighted coverage per iteration (WCpI) as the evaluation metric for interactive dictionary construction: WCpI = PI i=1 (1 −α)i−1 |eUi| min{i|C|,|U∗|} PI i=1 (1 −α)i−1 , (9) where α is the hyperparameter to adjust the importance of the iteration number. We illustrate the intuition of WCpI in Figure 6. WCpI is an area ratio of accumulated positive terms from system suggestions to its upper bound in each iteration. In short, it measures how many correct suggestions the system can provide in the comparison with a “perfect” system that never suggests unrelated terms. We can also regulate the importance of iteration number by adjusting α. Specifically, a larger value of α underestimates the importance of terms found in the later iterations, in other words, it attaches importance to terms found in the earlier iterations. As an intuitive explanation based on an actual scenario, α is like representing a constant probability for the user to quit dictionary construction midway through. The graphs in Figure 6 compare the calculation of WCpI for the same system suggestions. The right one with α = 0.1, in which we as0 20 40 60 80 100 1 2 3 4 5 6 7 8 9 101 2 3 4 5 6 7 8 9 10 |"𝑈| 𝑖 𝑊𝐶𝑝𝐼= U-bound System 𝛼= 0.0 𝛼= 0.1 𝑊𝐶𝑝𝐼= 67.5 𝑊𝐶𝑝𝐼= 71.7 Figure 6: Weighted coverage per iteration WCpI. TheThe x- and y-axes are the number of iterations and accumulated positive terms, respectively. The blue and red areas represent the upper bound and system performance, respectively. The left side is for α = 0.0 and the right side is for α = 0.1 when |C| = 10, |U ∗| > 100, |Ui| = 10 −i, and I = 10. sume the user quit creating a dictionary with 10% probability at every iteration, has a higher WCpI than the left one with α = 0.0. 5 Experiments We conduct an experiment following the automatic evaluation framework by using public datasets and oracle dictionaries created through crowdsourcing. In the experiment, we compare several methods in addition to our proposed method. As emulation parameters, we set number of seed terms (|U0| ), the number of terms in one suggestion (|C|), and number of total iterations (I) to 3, 10, and 30, respectively. Note that we tried different numbers of seeds (1 and 5), but the overall tendencies were the same. 5.1 Dataset We used crowdsourcing to create oracle dictionaries on the Amazon review corpus (Blitzer et al., 2007), which is publicly available.2 First, we explain the corpus processing and the procedure to construct the oracle dictionaries. We then describe the evaluation items. Our evaluation items will be publicly available for the system evaluation in future research. Corpus. The corpus originally consists of sub corpora from 25 domains. Given that size and domain vary, we pick five domains; apparel (APP), baby (BAB), camera & photo (CAM), health & personal care (HEL), and sports & outdoors (SPO). We process the raw texts with spaCy 3 and its dis2https://www.cs.jhu.edu/ mdredze/datasets/sentiment/ 3https://spacy.io/ 795 tributed English model 4. We then construct the vocabulary with words and noun chunks that appeared more than five times except for standard stopwords. Note that all terms in the vocabulary are identified after lemmatization by spaCy. Oracle Dictionaries. 
For each selected corpus, we create oracle dictionaries through crowdsourcing.5 In the task, we provide predefined dictionaries and ask the worker to choose one or more dictionaries to which a given term belongs. For example, we prepared three independent nursery-item dictionaries for sleeping, movement, and safety in the BAB corpus, and asked a worker to judge which dictionary includes the term "car seat". For each corpus, we define multiple dictionaries and request three workers to make judgments for every term in the vocabulary. We determine that a term is included in a dictionary when at least one of the three workers chooses that dictionary for the term. Note that we filter noisy users and their answers beforehand according to the reliability score estimated by the crowdsourcing service.6 Finally, we also manually clean each dictionary. Excluding dictionaries consisting of fewer than 15 terms or containing too much noise, we eventually obtain 22 dictionaries. We list the dictionaries and example terms in Table 1.

Evaluation Item. We generate ten evaluation items per dictionary, for 220 items in total. An evaluation item consists of a unique set of seed terms (U_0) and the remaining terms in the corresponding dictionary as the oracle (U* := U \ U_0). We argue that a small number of seed terms is appropriate for evaluating an interactive dictionary construction method, because the purpose is to gather terms with minimal human effort, as mentioned in §2.

5.2 Methods

We compare four methods: Word2Vec, SetExpan, logistic regression, and our proposed method with several configurations. All methods share the same vocabulary W, and all methods except Word2Vec use the same feature spaces: tf-idf weighted bags of words, unigrams, bigrams, and word embeddings. Any feature space is applicable, though.

4 https://spacy.io/models/en#en_core_web_sm
5 https://www.figure-eight.com/
6 Although we also tried other thresholds, such as requiring agreement among the three workers, this criterion provided the best balance of data cleanliness and size.

Word2Vec: Word2Vec is a popular and promising method for representing word meanings in a continuous vector space, and vector similarity is naturally applicable to interactive dictionary construction (Alba et al., 2018). We use two computation methods for candidate selection based on Word2Vec. The first, w2v(avg), simply takes the cosine similarity with the averaged vector of the terms in Ũ. The second, w2v(rank), calculates the mean reciprocal rank with respect to the terms in Ũ. Both select candidates in order of their estimated scores. The embeddings are learned for each corpus with the gensim implementation using the default parameters.7

SetExpan: We implement SetExpan (SE; Shen et al., 2017), a feature-selection method for conventional set expansion. The original version does not involve the user in the iteration and updates Ũ_i according to its own criteria for filtering incorrect terms. In our scenario, we provide the correct terms in the update phase of Ũ_i. We use the same input features as the other methods and set the hyperparameters to those Shen et al. (2017) reported as best.
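For reference, the two Word2Vec baselines above can be sketched over plain NumPy vectors as follows. The vector lookup is ours, and the reciprocal-rank variant reflects one plausible reading of w2v(rank) (rank candidates against each known positive, then average reciprocal ranks), not necessarily the authors' exact implementation:

```python
import numpy as np

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def w2v_avg_scores(vectors, positives, candidates):
    """Score candidates by cosine similarity to the averaged vector of known positives."""
    centroid = np.mean([vectors[t] for t in positives], axis=0)
    return {c: _cos(vectors[c], centroid) for c in candidates}

def w2v_rank_scores(vectors, positives, candidates):
    """Score candidates by mean reciprocal rank over the known positives."""
    candidates = list(candidates)
    recip_ranks = {c: [] for c in candidates}
    for p in positives:
        order = sorted(candidates, key=lambda c: _cos(vectors[c], vectors[p]), reverse=True)
        for rank, c in enumerate(order, start=1):
            recip_ranks[c].append(1.0 / rank)
    return {c: float(np.mean(r)) for c, r in recip_ranks.items()}

# In both cases, candidates are suggested in decreasing order of their scores.
```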
Logistic Regression: We include logistic regression in our comparison because feature weighting is one of the conventional forms of linear discriminant analysis. The logistic regression version, LR, takes a word representation and predicts the probability of the word appearing in the current dictionary. For the word representation, we concatenate the vectors from each feature space (explained in §5.2) and compress the result into 300 dimensions with singular value decomposition. In every iteration, we train LR from scratch with the positive and negative terms. For the negative terms at the first iteration (i.e., N_0), however, we randomly select |U_0| negative words from the entire vocabulary, excluding dictionary terms. We select candidates following the order of the estimated probabilities. While we tried other models (SVM and Random Forest) and other dimensions for the input vector (no compression, 50, 100, 200, 500, and 1000), the above condition was the best configuration.

7 https://radimrehurek.com/gensim/models/word2vec.html

Table 1: Examples of dictionaries.

Corpus | Dictionary name | Size | Examples
APP | Accessory | 63 | flower, watch, glove, ring, case, scarf, garter, holder
APP | Wearables for upper body | 92 | visor, tuxedo, sweatshirt, tank, pajama, blanket, glass, outer
APP | Wearables for lower body | 83 | gown, robe, harness, sandal, jersey, boot, loafer, nightgown
APP | Items for outdoor | 39 | sweatshirt, glove, trunk, bike, backpack, coat, hat
BAB | Nursery items for transport | 37 | carrier, stroller, leash, walker, backpack, strap, seat cover, sunshade
BAB | Nursery items for safety | 74 | cover, sterilizer, infant car seat, seat belt, beeping, monitor, sunshade
BAB | Nursery items for sleeping | 62 | cover, hammock, bedding, mattress, sleep sack, bumper, cushion, lamp
BAB | Enjoyments for baby | 134 | car, playmat, crayon, ring, dad, bell, bird, toy box
BAB | Wearables for baby | 52 | shoe, bouncer, towel, cloth, comforter, fleece, diaper, head support
CAM | Scene of photograph | 84 | summer, space, excursion, cruise, pool, face, wildlife, land
CAM | Subject of photograph | 71 | space, performer, ocean, magic, young, garden, action, snow
CAM | Functions of camera | 108 | remote switch, waterproof, trigger, telephoto, interface, portrait
CAM | External accessories | 93 | station, remote switch, polarizer, trigger, case, battery, microphone
HEL | Health equipment or product | 54 | bathtub, air bed, vitamin, heater, read glass, flosser, supplement, pillow
HEL | Appearance description | 58 | clear, oily, tint, sharp, handy, masculine, cheap, small
HEL | Functional description | 86 | naturally, refill, powerful, rapid, oily, sharp, smooth shave, handy
HEL | Beauty equipment or product | 88 | mirror, nivea, eyebrow, vanity, straightner, dryer, vitamin, fragrance
SPO | Body | 57 | blood, knuckle, eye, nose, chin, nail, knee, face, bone, palm
SPO | Wearables | 70 | pedometer, roll, strap, vest, rattle, cloth, boat, tent, slip, altimeter
SPO | Items for exercise | 148 | fanny pack, dumbell, pod, rower, bottle, pedometer, knee pad, rack, towel
SPO | Items for outdoors | 132 | bicycle, opener, bottle, rack, towel, strap, guitar, fanny pack
SPO | Movements in exercise | 86 | sit, twist, pull, stand, situp, running, roll, rowing, swing setter, punch

Feature Weighting with Predefined Similarity: We test six versions of our proposed method:

• FWPS: Our base model without optimization, where w is the uniform distribution (→ §3)
• +PickOne: Selecting only the one feature space with the highest similarity scores among positive terms (→ §3.3)
• +Op(p): With optimization using positive feedback (→ Eq. (7))
• +Op(p/n): With optimization using both positive and negative feedback (→ Eq. (8))
• +Fd(p): +Op(p) with feedback denoising (→ §3.4)
• +Fd(p/n): +Op(p/n) with feedback denoising (→ §3.4)

We use the K-means algorithm for +Fd(p) and +Fd(p/n) with K = 3, though the overall trend was almost the same with K = 2 and 5.
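To make the FWPS family concrete, the following sketch scores candidates by the weighted per-feature-space similarity Sim(e, Ũ | w) = Σ_k w_k · v_{f_k}(e) · v_{f_k}(Ũ) used in the appendix derivation. The feature extraction and centroid construction are placeholders of ours, and vectors are assumed to be unit-normalized per feature space:

```python
import numpy as np

def fwps_score(term_vecs, centroid_vecs, w):
    """Weighted similarity of one term to the current dictionary centroid.

    term_vecs, centroid_vecs: lists of L unit-normalized vectors, one per feature
        space (e.g., tf-idf bag of words, unigrams, bigrams, word embeddings).
    w: weight vector over the L feature spaces (non-negative, sums to 1).
    """
    sims = np.array([tv @ cv for tv, cv in zip(term_vecs, centroid_vecs)])
    return float(np.asarray(w) @ sims)

def suggest(candidates, features, centroid_vecs, w, k=10):
    """Return the top-k candidate terms by FWPS score.

    candidates: iterable of candidate terms not yet in the dictionary.
    features[t]: list of per-feature-space vectors for term t.
    """
    scored = {t: fwps_score(features[t], centroid_vecs, w) for t in candidates}
    return sorted(scored, key=scored.get, reverse=True)[:k]

# The base FWPS model uses a uniform w; +PickOne puts all mass on one feature
# space; +Op(p)/+Op(p/n) instead learn w with the optimization in Eqs. (7)/(8).
```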
Hybrid: We also introduce a joint method, HB, that combines LR and an FWPS version. The strategy is simple: HB first uses the FWPS mechanism to broadly cover candidate terms, and then switches to LR when the amount of feedback increases. This naturally addresses LR's problems of requiring negative feedback from the beginning and demanding a moderate number of labels for training. Any of the FWPS versions can be combined with LR; we chose the best-performing one for our experiment. The switch timing is empirically set to the 5th iteration.

5.3 Results and Discussion

Table 2 lists the WCpI scores for each method across the five corpora with α = 0.0. In all domains, HB outperforms the others. The scores of LR are second highest, which implies that combining LR with an FWPS model boosts performance. Among the versions of FWPS, +PickOne shows a large drop in score, which indicates the importance of the min-maximize optimization strategy for this task (see §3.3). However, at least when α = 0.0, which assumes the user never quits the process midway through, the performance of FWPS and of its versions with optimized w does not differ much. In particular, negative feedback tends to degrade performance. In contrast, SE, w2v(avg), and w2v(rank) perform poorly. SE may not be suitable for gathering arbitrary terms from a corpus of modest size because it was originally designed and tested for collecting ontological terms from large-scale data (Shen et al., 2017). We also find that leveraging embeddings in a straightforward manner is not sufficient, especially for interactive dictionary construction.

Table 2: WCpI scores across corpora (α = 0.0).

Methods | APP | BAB | CAM | HEL | SPO
SE | 21.20 | 18.83 | 11.34 | 17.01 | 16.79
w2v(avg) | 18.51 | 12.77 | 10.36 | 14.60 | 14.28
w2v(rank) | 24.39 | 12.71 | 10.29 | 18.63 | 17.04
LR | 51.99 | 36.76 | 31.18 | 38.17 | 37.59
FWPS | 46.90 | 34.32 | 27.61 | 38.17 | 35.56
+PickOne | 18.51 | 12.79 | 10.36 | 14.40 | 14.28
+Op(p) | 45.60 | 33.73 | 26.13 | 36.43 | 35.29
+Op(p/n) | 43.42 | 30.88 | 23.13 | 33.19 | 31.91
+Fd(p) | 46.17 | 34.92 | 26.76 | 37.60 | 36.03
+Fd(p/n) | 46.33 | 32.01 | 25.36 | 37.04 | 34.63
HB (+Fd(p)) | 53.07 | 37.38 | 32.31 | 42.22 | 39.74

Table 3: Change in WCpI scores when increasing α. The scores are averaged over all evaluation items.

Methods | α = 0.0 | α = 0.1 | α = 0.3 | α = 0.5
SE | 17.05 | 14.36 | 14.26 | 15.46
w2v(avg) | 14.10 | 10.83 | 9.39 | 9.63
w2v(rank) | 16.61 | 12.42 | 9.95 | 9.68
LR | 39.14 | 30.06 | 23.39 | 22.45
FWPS | 36.51 | 32.02 | 30.49 | 31.51
+PickOne | 14.07 | 10.93 | 9.64 | 9.95
+Op(p) | 35.44 | 31.79 | 30.72 | 31.58
+Op(p/n) | 32.50 | 29.17 | 28.89 | 30.32
+Fd(p) | 36.30 | 32.42 | 31.10 | 31.95
+Fd(p/n) | 35.07 | 30.92 | 29.56 | 30.53
HB (+Fd(p)) | 40.95 | 34.14 | 30.97 | 31.62

Let us now discuss the changes when adjusting the α of WCpI, listed in Table 3. Ignoring corpus differences, we take the average scores over all evaluation items. The most crucial change is found in LR, whose score drops significantly as α increases. When α = 0.1, the score of LR already becomes inferior to most of the FWPS versions. In addition, the scores of FWPS tend to be higher with larger values of α. When α ≥ 0.3, +Fd(p) performs the best among all methods. In short, LR suggests correct terms in later iterations, while FWPS, in particular its trainable versions (+Op(p), +Fd(p)), suggests correct terms in earlier iterations. Figure 7 directly illustrates the score differences under different values of α by showing the hit ratio, defined as |U_i|/|C|, at each iteration for LR, +Fd(p), and HB.
Regardless of the number of seed terms, LR suggests fewer correct terms in earlier iterations, but its hit ratio consistently surpasses +Fd(p) after it obtains a moderate number of training labels (around five iterations, i.e., fifty labels). On the other hand, +Fd(p) performs better than LR by a large margin in earlier iterations. In short, our method using predefined term similarities overcomes the small-sample issue from which a conventional linear classifier suffers and contributes to quick dictionary construction. This result is practically important because the analyst will go through repeated trial and error, observing documents from various points of view, by creating many small dictionaries.

Figure 7: Hit ratio (|U_i|/|C|) at each iteration for LR, +Fd(p), and HB. The upper and lower graphs start with one and three seeds, respectively.

In addition, such contrasts are much stronger when we give only one seed term (the upper graph), which is also meaningful because the user often starts dictionary construction with only one seed term in a real situation. HB enjoys both the coverage of LR and the quickness of +Fd(p). In other words, a conventional classifier and our method are complementary: LR is favorable when the user prioritizes coverage over quickness, and +Fd(p) when the priorities are reversed. As a possible use case of HB, the analyst may quickly find interesting perspectives by creating various dictionaries with one of the FWPS methods and, once such perspectives are found, switch to a linear classifier to further expand the promising dictionaries.

6 Conclusion

To the best of our knowledge, this paper proposes the first formulation of interactive dictionary construction for text analytics, which clarifies the critical issues to resolve. In response to those issues, we provide the method, the evaluation framework, and the experimental dataset. Our experimental results also show the promising performance of our method with respect to real text analytics scenarios. Our systematic study will pave the way for future research on the effective construction of dictionaries for text analytics.

Acknowledgement

We thank the anonymous reviewers for their insightful comments. We are also grateful to Tadayuki Yoshida and Ryuki Tachibana for helpful discussions based on practical use cases.

References

Alfredo Alba, Anni Coden, Anna Lisa Gentile, Daniel Gruhl, Petar Ristoski, and Steve Welch. 2017. Multi-lingual concept extraction with linked data and human-in-the-loop. In Proceedings of the Knowledge Capture Conference, pages 1–8. Alfredo Alba, Daniel Gruhl, Petar Ristoski, and Steve Welch. 2018. Interactive dictionary expansion using neural language models. In Proceedings of the 2nd International Workshop on Augmenting Intelligence with Humans-in-the-Loop, pages 7–15. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447. Anni Coden, Daniel Gruhl, Neal Lewis, Michael Tanenblatt, and Joe Terdiman. 2012. Spot the drug! An unsupervised pattern matching method to extract drug names from very large clinical corpora.
In Proceedings of the 2012 IEEE Second International Conference on Healthcare Informatics, Imaging and Systems Biology, pages 33–39. Yin Cui, Feng Zhou, Yuanqing Lin, and Serge J. Belongie. 2016. Fine-grained categorization and dataset bootstrapping using deep metric learning with humans in the loop. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 1153–1162. Shantanu Godbole, Indrajit Bhattacharya, Ajay Gupta, and Ashish Verma. 2010. Building re-usable dictionary repositories for real-world text mining. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, pages 1189–1198. Yeye He and Dong Xin. 2011. Seisa: set expansion by iterative similarity aggregation. In Proceedings of the 2011 World Wide Web Conference, pages 427– 436. Bongjun Kim and Bryan Pardo. 2018. A human-inthe-loop system for sound event detection and annotation. ACM Trans. Interact. Intell. Syst., 8(2). Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2017. Dialogue learning with human-in-the-loop. In Proceedings of the 5th International Conference on Learning Representations. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schtze. 2008. Introduction to Information Retrieval. Cambridge University Press. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of the 1st International Conference on Learning Representations. Mohamed M. Mostafa. 2013. More than words: Social networks’ text mining for consumer brand sentiments. Expert Systems with Applications, 40(10):4241–4251. Tetsuya Nasukawa. 2009. Text analysis and knowledge mining. In Proceedings of the 8th International Symposium on Natural Language Processing, pages 1–2. Tetsuya. Nasukawa and Tohru. Nagano. 2001. Text analysis and knowledge mining system. IBM Systems Journal, 40(4):967–984. Patrick Pantel, Eric Crestan, Arkady Borkovsky, AnaMaria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 938–947. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 113–120. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proceedings of the 17th International Conference on Machine Learning, pages 839–846. Jiaming Shen, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, and Jiawei Han. 2017. Setexpan: Corpus-based set expansion via context feature selection and rank ensemble. In Proceedings of the 2017 Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 288–304. Jiaming Shen, Zeqiu Wu, Dongming Lei, Chao Zhang, Xiang Ren, Michelle T. Vanni, Brian M. Sadler, and Jiawei Han. 2018. 
Hiexpan: Task-guided taxonomy construction by hierarchical tree expansion. In Proceedings of the 24th International Conference on Knowledge Discovery & Data Mining, pages 2180–2189. Hironori Takeuchi, L. Venkata Subramaniam, Tetsuya Nasukawa, and Shourya Roy. 2009. Getting insights from the voices of customers: Conversation mining at a contact center. Information Sciences, 179(11):1584–1591.

A Appendix

A.1 Proof

We prove that our formulation of the optimization problem is a natural extension of that of SetExpan, assuming a reasonable normalization constraint for entity vectors and their weights. Notation follows the main paper. Recall that the formulation of SetExpan is

F^* = \arg\max_{|F| = Q} \sum_{1 \le i < j \le n} \mathrm{Sim}(e_i, e_j \mid F),    (1)

where Q is the number of features in F and is a fixed integer value. Building on this, our formulation is

w^* = \arg\max_{w} \sum_{i=1}^{n} \mathrm{Sim}(e_i, \tilde{U} \mid w).    (4)

Substituting (2) and (3) from the main paper into the above formulation, we obtain

w^* = \arg\max_{w} \sum_{i=1}^{n} \Bigl\{ \frac{1}{n} \sum_{j=1}^{n} \mathrm{Sim}(e_i, e_j \mid w) \Bigr\}
    = \arg\max_{w} \frac{1}{n} \Bigl( 2 \sum_{1 \le i < j \le n} \mathrm{Sim}(e_i, e_j \mid w) + \sum_{i=1}^{n} \mathrm{Sim}(e_i, e_i \mid w) \Bigr)
    = \arg\max_{w} \Bigl\{ \frac{2}{n} \sum_{1 \le i < j \le n} \mathrm{Sim}(e_i, e_j \mid w) + \frac{1}{n^{2}} \sum_{k=1}^{L} w_k \Bigl( \sum_{i=1}^{n} \lVert v_{f_k}(e_i) \rVert \Bigr) \Bigr\}.

In the right-hand side of the last equation, the second term is a constant when all of the vectors \{v_{f_k}(e_i)\}_{i=1,\dots,n} have the same norm and \sum_{k=1}^{L} w_k = 1. Then our optimization problem is equivalent to

w^* = \arg\max_{w} \sum_{1 \le i < j \le n} \mathrm{Sim}(e_i, e_j \mid w),

which is a continuous version of (1).

Next, let us prove that in the modified version of our optimization problem ((7) in the main paper),

\min_{1 \le i \le n} \mathrm{Sim}(e_i, \tilde{U} \mid w)    (10)

is a concave function of w. Hence we can apply standard techniques of convex optimization to solve (7). First, let us rewrite (10) as follows:

\min_{1 \le i \le n} \mathrm{Sim}(e_i, \tilde{U} \mid w) = \min_{1 \le i \le n} \sum_{k=1}^{L} w_k \bigl( v_{f_k}(e_i) \cdot v_{f_k}(\tilde{U}) \bigr) = \min_{1 \le i \le n} \sum_{k=1}^{L} w_k x_{ik},

where x_{ik} = v_{f_k}(e_i) \cdot v_{f_k}(\tilde{U}). Then it is sufficient to prove the following lemma.

Lemma 1. The following function is concave in w when w is defined on a convex set:

g(w) := \min_{1 \le i \le n} \sum_{k=1}^{L} w_k x_{ik}.    (11)

Proof. The claim follows directly from the definition of concavity. For any w_1 = (w_{11}, \dots, w_{1L}), w_2 = (w_{21}, \dots, w_{2L}), and \lambda \in (0, 1), we need to prove that

g((1-\lambda) w_1 + \lambda w_2) \ge (1-\lambda) g(w_1) + \lambda g(w_2).

We compute the left-hand side using (11):

g((1-\lambda) w_1 + \lambda w_2) = \min_{1 \le i \le n} \sum_{k=1}^{L} \bigl( (1-\lambda) w_{1k} + \lambda w_{2k} \bigr) x_{ik}
 = \min_{1 \le i \le n} \sum_{k=1}^{L} \bigl( (1-\lambda) w_{1k} x_{ik} + \lambda w_{2k} x_{ik} \bigr)
 \ge \min_{1 \le i \le n} \sum_{k=1}^{L} (1-\lambda) w_{1k} x_{ik} + \min_{1 \le i \le n} \sum_{k=1}^{L} \lambda w_{2k} x_{ik}
 = (1-\lambda) \min_{1 \le i \le n} \sum_{k=1}^{L} w_{1k} x_{ik} + \lambda \min_{1 \le i \le n} \sum_{k=1}^{L} w_{2k} x_{ik}
 = (1-\lambda) g(w_1) + \lambda g(w_2).

Here we use the inequality \min_{1 \le i \le n}(A_i + B_i) \ge \min_{1 \le i \le n} A_i + \min_{1 \le i \le n} B_i, which holds for any sequences of real numbers \{A_i\}_{i=1,\dots,n} and \{B_i\}_{i=1,\dots,n}. Since \{w : \sum_{k=1}^{L} w_k = 1,\ 0 \le w_k\ (k = 1, \dots, L)\} is a convex set, we can apply this lemma to our objective function.
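Because the objective in (7) is a concave, piecewise-linear function of w over the probability simplex, it can be maximized with standard tools. The following is a minimal projected-subgradient sketch over the precomputed similarities x_{ik} from the proof; the simplex projection and the step-size schedule are illustrative choices of ours, not the paper's solver:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def maximize_min_similarity(X, steps=500, lr=0.1):
    """Maximize  min_i  sum_k w_k * X[i, k]  over the simplex (Eq. (7)).

    X: (n, L) matrix of per-feature-space similarities x_ik.
    A subgradient of the min at w is the row X[i*, :] attaining the minimum.
    """
    n, L = X.shape
    w = np.full(L, 1.0 / L)             # start from the uniform weights of base FWPS
    for t in range(1, steps + 1):
        scores = X @ w                   # similarity of each positive term under w
        i_star = int(np.argmin(scores))  # the term the min-maximize objective targets
        w = project_to_simplex(w + (lr / np.sqrt(t)) * X[i_star])
    return w
```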
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8093–8104 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8093 Sources of Transfer in Multilingual Named Entity Recognition David Mueller1,2 Nicholas Andrews2 Mark Dredze1,2 1Center for Language and Speech Processing, Johns Hopkins University 2Human Language Technology Center of Excellence, Johns Hopkins University dam,[email protected] [email protected] Abstract Named-entities are inherently multilingual, and annotations in any given language may be limited. This motivates us to consider polyglot named-entity recognition (NER), where one model is trained using annotated data drawn from more than one language. However, a straightforward implementation of this simple idea does not always work in practice: naive training of NER models using annotated data drawn from multiple languages consistently underperforms models trained on monolingual data alone, despite having access to more training data. The starting point of this paper is a simple solution to this problem, in which polyglot models are fine-tuned on monolingual data to consistently and significantly outperform their monolingual counterparts. To explain this phenomena, we explore the sources of multilingual transfer in polyglot NER models and examine the weight structure of polyglot models compared to their monolingual counterparts. We find that polyglot models efficiently share many parameters across languages and that fine-tuning may utilize a large number of those parameters. 1 Introduction Multilingual learning—using data from multiple languages to train a single model—can take many forms, such as adapting a model from a highresource to low-resource language (Xie et al., 2018; Ni et al., 2017; Mayhew et al., 2017; Cotterell and Duh, 2017; Wu and Dredze, 2019; M`arquez et al., 2003), taking advantage of beneficial multilingual features or datasets (Kim et al., 2012; Ehrmann et al., 2011; T¨ackstr¨om, 2012), and unsupervised representation learning (Devlin et al., 2018a). We adopt the term “Polyglot” from Tsvetkov et al. (2016) to refer to models that are trained on and applied to multiple languages. There are several advantages to training a single polyglot model across languages. Single models ease production requirements; only one model need be maintained. They can be more efficient, using fewer parameters than multiple monolingual models. Additionally, they can enable multilingual transfer (Devlin, 2018; Wu and Dredze, 2019; Pires et al., 2019). However, a key goal of polyglot learning concerns producing a single model that does better on each language than a monolingual model. In the context of named entity recognition, we may expect aspects of the task to transfer across languages. For example, since entity names tend to be transliterated or directly used across languages, even distant languages may see benefit from training a single model, e.g. “Apple” (company) is rendered as such in French rather than as “Pomme.” Intuitively, the more similar and the larger the set of languages, the more we should expect to see a benefit from considering them jointly. These polyglot models can take advantage of different sets of labeled corpora in different languages (Gillick et al., 2016; Mulcaire et al., 2019). Nevertheless, progress towards this goal remains mixed; polyglot models often do not improve results in each language (Mulcaire et al., 2019; Kondratyuk and Straka, 2019; Upadhyay et al., 2018; Conneau et al., 2019). 
Models trained across all languages come close but typically fail to outperform monolingual models. Thus, while multilingual learning can benefit low resource languages through transfer and simplify models by sharing one across all languages, it fails to realize a key goal: improving results in each language. Our experiments in §4 confirm this negative result in two different multilingual settings for 4 different neural NER models. Our first contribution is a technique in which a polyglot NER model can be adapted to a target language by fine-tuning on monolingual data. 8094 A similar continued training approach to transfer has been explored for domain adaptation in neural machine translation (Luong and Manning, 2015; Khayrallah et al., 2018); we show that it works with polyglot models for NER, improving performance by up to 3 F1 over monolingual baselines. Our second contribution is an explanation of the surprising effectiveness of this technique through an extensive empirical study of polyglot models for NER. We compare several types of neural NER models, including three character (or byte) level architectures, and evaluate transfer across a small (4) and large (10) set of languages. In particular, we find that: • §4 Other than Byte-to-Span (BTS; Gillick et al., 2016), most NER architectures do not benefit from polyglot training. Still, simpler models than BTS, with more inductive bias, can outperform BTS in both monolingual and polyglot settings. • §5.2 Polyglot models are more efficient than monolingual models in that for a given level of performance, they require vastly fewer parameters. This suggests that many parameters are shared cross-lingually. • §4.2 Polyglot weights transfer to unseen languages with mixed results. In particular, transfer can occur when there is high lexical overlap or closely related languages in the polyglot training set. • §5.3 Languages share a large number of important parameters between each other in polyglot models, and fine-tuning may utilize those parameters to strengthen it’s performance. To our knowledge, ours is the first systematic study of polyglot NER models. 2 Related Work There is a long history of multilingual learning for NER (Kim et al., 2012; Ehrmann et al., 2011; T¨ackstr¨om, 2012). This work has is driven by an interest in learning NER models for many languages (Cucerzan and Yarowsky, 1999; Pan et al., 2017a) and the relative lack of data for many languages of interest (Das et al., 2017). Polyglot Models Johnson et al. (2017) and Lee et al. (2017) showed that a single neural MT model could benefit from being trained in a multilingual setting. Gillick et al. (2016) showed similar results for NER, presenting a model that benefited from learning to perform NER on 4 languages at once. We find that other polyglot NER models are rarely better than monolingual models in terms of absolute performance. Mulcaire et al. (2019) showed that polyglot language model pretraining can help improve performance on NER tasks, although polyglot NER training hurts. However, multilingual BERT (Devlin et al., 2018b), when compared to monolingual BERT performance on NER, shows that polyglot pretraining is not always beneficial for downstream tasks. Finally, most recently, Kondratyuk and Straka (2019) showed how to train a single model on 75 languages for dependency parsing while retaining competitive performance or improving performance, mostly on low-resource languages. 
This work is closely related to ours, although we are predominantly interested in how we can leverage polyglot learning to improve performance across all languages. Cross-lingual Models Cross-lingual transfer leverages labeled data from different source languages to augment data for a target language. Rahimi et al. (2019) do this on a massive scale for NER, leveraging over 40 languages for crosslingual transfer. Xie et al. (2018) employed selfattention to combat word-order differences when transferring parameters from high-resource languages to low-resource. Much work in this space has looked at how to leverage a mixture of shared features and language-specific features (Kim et al., 2017), similar to domain adaptation techniques Daum´e III (2007). Recently, a lot of this work has focused on using adversarial models to force models to learn language-agnostic feature spaces (Chen et al., 2019; Huang et al., 2019). These works show, similar to our work, that it is possible to leverage multilingual data to increase performance across languages. 3 Models We evaluate three polyglot NER neural models.1 3.1 Word Level CRF The Neural (BiLSTM) CRF is a standard model for sequence labeling tasks (Ma and Hovy, 2016; Durrett and Klein, 2015). Our implementation 1We release the code for these models at https:// github.com/davidandym/multilingual-NER 8095 Model Eng Deu Nld Spa Avg Amh Ara Fas Hin Hun Ind Som Swa Tgl Vie Avg Character CRF Monolingual 84.91 71.39 78.96 82.60 79.45 60.62 43.22 45.11 62.12 60.47 62.14 61.75 68.04 84.13 47.31 59.49 Polyglot 83.38 70.86 79.38 81.64 77.85 59.39 43.25 43.20 62.88 60.86 64.59 65.45 68.32 84.80 49.71 59.87 Finetuned 86.49 72.95 80.91 82.72 80.82 59.86 44.69 46.85 68.30 65.21 67.15 66.11 70.07 87.03 51.80 62.71 Byte CRF Monolingual 85.75 71.42 78.36 81.19 79.18 59.13 44.95 44.76 65.89 57.91 61.46 61.05 67.09 84.46 48.73 59.54 Polyglot 83.79 71.54 79.43 80.25 78.75 57.03 42.88 41.88 65.10 60.46 61.07 62.22 68.40 82.75 47.27 58.90 Finetuned 86.68 73.02 80.09 82.95 80.69 59.37 42.69 45.25 67.68 63.91 64.38 64.92 70.78 86.25 51.14 61.64 CharNER Monolingual 83.83 69.30 79.60 79.46 78.05 54.33 36.31 40.68 62.03 53.04 58.05 56.88 63.70 81.04 39.64 54.53 Polyglot 84.14 69.19 78.94 79.39 77.92 49.64 36.98 37.41 60.02 49.37 55.51 58.56 63.49 79.36 44.50 53.48 Finetuned 85.23 70.60 81.00 82.00 79.70 53.46 40.15 39.20 65.57 59.84 60.70 59.09 68.85 84.61 45.47 57.70 Byte To Span Monolingual 87.91 63.92 71.34 73.07 74.06 48.23 39.41 26.76 19.01 44.51 54.32 58.81 54.27 71.76 26.90 44.50 Polyglot 86.43 71.10 76.11 74.26 76.98 46.41 41.59 40.09 55.69 60.53 57.58 62.30 54.78 74.52 43.95 53.64 Multilingual BERT Monolingual 90.94 81.50 88.62 88.16 87.31 48.36 56.42 72.52 66.99 78.32 62.69 72.18 86.13 54.18 66.75 Polyglot 90.67 80.96 87.48 87.04 86.53 48.33 56.92 74.81 68.16 77.56 59.29 71.92 87.59 57.06 66.84 Finetuned 91.08 81.27 88.74 86.87 86.99 49.94 54.67 76.83 69.52 80.14 62.70 73.16 88.05 56.74 69.97 Table 1: Performance for monolingual, multilingual, and finetuned models trained on either CoNLL (left) or LORELEI (right) data sets. The results are taken from the best model out of 5 random seeds, as measured by dev performance. Almost every model achieves the best performance in the finetuned setting, indicating that multilingual pretraining is learning transferable parameters, but multilingual models are not able to use them effectively across all languages simultaneously. 
Note that we do not evaluate Amharic with mBERT, because the Amharic script is not a part of mBERT’s vocabulary. broadly follows the description in Lample et al. (2016), and we consider three different variants of this model. The first two are character- and byte-level models.2 We consider these since Gillick et al. (2016) showed that multilingual transfer could occur across byte-level representations and we were interested in whether characters produced similar results when more diverse languages were involved. Each word passes through a multi-layer BiLSTM as a sequence of characters or bytes to produce word-level representations. Word-level representations feed into a sentence-level BiLSTM, which outputs, for each time step, logits for all possible labels. The logits are then fed into a CRF model (Lafferty et al., 2001) trained to maximize the loglikelihood of the gold label sequences. The third variant of this model uses contextualized representations from multilingual BERT (mBERT) (Devlin et al., 2018b). This model is similar to the one described above, with the key difference being that word-level representation are obtained using a pretrained subword-level BERT model, as opposed to being built from raw characters/bytes. As is done in the original BERT paper, 2Early experiments found these models suffered much less from multilingual training than subword/word models. we treat the representation of the first subword of each word as a representation for that word, and take the concatenation of the outputs of the last 4 layers at that subword position as our final word representation. 3.2 CharNER CharNER (Kuru et al., 2016) is a deep neural sequence labeling architecture which operates strictly at the character level during training, but uses word-level boundaries during inference. The model runs a 5-layer BiLSTM over sequences of characters, and is trained to predict the NER tag for each character of the sequence (without BIO labels). During inference a Viterbi decoder with untrained transition parameters enforces consistent character level tags across each word; no heuristics and little post-processing is necessary to obtain word-level BIO labels. To compare with the other architectures, we apply this model to bytes and evaluate its polyglot performance. Intuitively, we expect this model to do better than a word-level CRF at seeing beneficial transfer across languages, as it is closer to the model of Gillick et al. (2016): a deep, byte-level model that performs inference at the level of individual bytes. 8096 3.3 Byte to Span (BTS) BTS is a sequence-to-sequence model operating over byte sequences (Gillick et al., 2016). The input consists of a window of UTF-8 bytes, and the output is sequences with sufficient statistics of labeled entity spans occurring in the input sequence.3 Because byte sequences are long BTS operates over a sliding window of 60 bytes, treating each segment independently; the model’s entire context is always limited to 60 bytes. By consuming bytes and producing byte annotations, it has the attractive quality of being truly languageagnostic, without any language specific preprocessing. Despite obviating the need for languagespecific preprocessing, BTS achieves comparable results to more standard model architectures with no pretraining information. Additionally, it showed significant improvement in monolingual CoNLL performance after being trained on all 4 CoNLL languages. 
In this paper, we find that this trend holds in our multilingual settings, although our results show lower overall numbers to those reported in Gillick et al. (2016).4 3.4 Hyperparameters All experiments are run on GeForce RTX 2080 Ti GPUs, using Tensorflow (Abadi et al., 2016). CRF The character- and byte-level neural CRF use a sub-token BiLSTM encoder with 2-layers and 256 hidden units. The sentence-level BiLSTM has 1-layer with 256 hidden units. All characters and bytes have randomly initialized embeddings of size 256. We optimized these parameters with grid-search over 1-3 layers at each level and hidden sizes of {128, 256, 512}. We train using Adam with a learning rate of 0.001 and tune the early stop parameter for each model based on development set F1 performance. CharNER Our CharNER model operates over bytes rather than characters. It uses the same hyperparameters reported in Kuru et al. (2016), (5 3For a PER span at bytes 5-10, the correct output sequence is y = S:5, L:5, PER, STOP 4We reimplemented BTS based on correspondence with the model authors. We matched the published results on CoNLL English, and the same overall trends, but could not match the other three CoNLL languages. Despite significant effort, two differences remained: the authors could not share their proprietary implementation or deep learning library, and reported using more byte segments than is available in our CoNLL dataset. Language Code Family Genus Script # Train Sent. CoNLL English eng Indo-European Germanic Latin 11,663 Spanish spa Indo-European Romance Latin 8,323 German deu Indo-European Germanic Latin 12,152 Dutch nld Indo-European Germanic Latin 15,806 LORELEI Amharic amh Afroasiatic Semitic Ge’ez 4,923 Arabic ara Afroasiatic Semitic Arabic 4,990 Farsi fas Indo-Iranian Arabic 3,849 Hindi hin Indo-European Indo-Aryan Devanagari 4,197 Hungarian hun Uralic Ugric Latin 4,846 Indonesian ind Austronesian Malayo-Polynesian Latin 4,605 Somali som Afroasiatic Cushitic Latin 3,253 Swahili swa Niger-Congo Bantu Latin 3,318 Tagalog tgl Austronesian Latin 4,780 Vietnamese vie Austroasiatic Vietic Latin (Viet.) 4,042 LORELEI - held out for zeroshot Russian rus Indo-European Slavic Cyrillic 6,480 Bengali ben Indo-European Indo-Aryan Bengali 7,538 Uzbek uzb Turkic Arabic 11,323 Yoruba yor Niger-Congo Latin 1,753 Table 2: Different sets of languages we used, their sources, family and genus, script, and training set size. layers with hidden size 128, Adam Optimizer) with a byte dropout of 0.2, and dropout rates of 0.8 on the final layer, and 0.5 on the other layers. We also train our models using a learning rate of 0.001 and early stop based on development set F1 performance. BTS For BTS we use the same training scheme and hyperparameters reported in Gillick et al. (2016).5 Since we do not have document-level information in LORELEI, we treat each separate language dataset as its a whole document and slide a window across the entire dataset at once. We train using SGD (Adam performed much worse), with a learning rate of 0.3, and similarly, early stop based on development set F1 performance. 4 Experiments Each LORELEI language has less than half the data of a CoNLL language, but in total, the two datasets are roughly equal in size. The CoNLL setting consists of European languages in the same alphabet, and prior work has shown beneficial transfer in this setting (Gillick et al., 2016). LORELEI is more challenging because it contains more distantly related languages. 
We train a monolingual NER model for each language (14 models) and two polyglot models: CoNLL and LORELEI. For polyglot training, we concatenate the annotated language-specific datasets into one combined corpus. Because our language-specific datasets are comparable in size, we do not correct for minor size differences.6 All models were trained over 5 random seeds, with the best model selected by development performance. For polyglot models, we select the best model using the average development performance across all languages.

5 4 layers with 320 hidden units, byte dropout of 3.0 and layer dropout of 5.0.
6 A uniform sampling strategy is recommended for language combinations with significant size discrepancies.

Results. Table 1 reports test performance. With few exceptions, polyglot training does worse than monolingual training. In some cases the two settings perform nearly the same (such as the Character and mBERT CRFs on LORELEI), but we do not see improved results from a polyglot model. Murthy et al. (2018) found that languages with different label distributions transfer worse. We find large label distribution changes in CoNLL, but not in LORELEI. To determine whether this could explain polyglot NER failures in CoNLL, we allow our CRF models to learn language-specific label distributions via language-specific CRF transition parameters. However, we saw little difference in the results for either CoNLL or LORELEI (no more than 0.5 F1 on any language). This suggests that other factors are preventing more language transfer. The exception to these observations is the BTS model, which showed significant improvements in the polyglot settings, matching the conclusion of Gillick et al. (2016). However, our implementation failed to match the higher numbers of the original paper, and so the model is significantly worse overall compared to the other NER models. Perhaps the unique architecture of BTS enables it to improve in the polyglot setting. However, if BTS requires more training data to achieve results similar to the other models, the polyglot improvements may not hold up.

Conclusion. Polyglot NER models fail to improve over their monolingual counterparts, despite using 4 (CoNLL) or 10 (LORELEI) times more labeled data. Discrepancies in label priors between languages do not, by themselves, account for this.

4.1 Target Language Polyglot Adaptation

While polyglot models perform worse than monolingual models, they are competitive. This suggests that polyglot models may be successfully learning multilingual representations, but that the optimization procedure is unable to find a global minimum for all languages. To test this theory, we fine-tune the polyglot model separately for each language.

Language | Monoling. | Poly. (Zero-shot) | Poly. (Fine-tuned)
Russian | 43.97 | 1.61 | 41.55
Bengali | 76.10 | 2.08 | 76.63
Uzbek | 65.39 | 14.54 | 61.10
Yoruba | 62.66 | 29.02 | 64.95

Table 3: F1 of a byte-level CRF on 4 different LORELEI language datasets, compared to the performance of the multilingual model that was not trained on any of these 4 languages, as well as the multilingual model after fine-tuning. The results are mixed; moreover, zero-shot performance does not seem to be a good indicator of transferability.
We treat the parameters of the polyglot NER models as initializations for monolingual models of each language, and we train these models in the same fashion as the monolingual models, with the exception of using a different initial step size.7 With few exceptions, fine-tuned polyglot models surpass their monolingual counterparts (Table 1), improving by up to 3 F1 over monolingual baselines.

7 We use the Adam optimizer settings saved from multilingual training.

Conclusion. This demonstrates that the polyglot models are in fact learning more from observing multiple languages, and that this information can transfer to each language. Additionally, this indicates that the ideal optimum for a monolingual model may not be achievable using standard training objectives without observing other languages; we found that more regularization did not help the monolingual models. However, naively optimizing all languages jointly may provide too challenging an optimization landscape to obtain that optimum for each language simultaneously.

4.2 Novel Language Transfer

Finally, since the polyglot models demonstrate the ability to transfer information between languages, we ask: can these models generalize to unseen languages? We consider a similar approach to the previous section, except we now fine-tune the polyglot model on a novel language for which we have supervised NER data. In this setting, we only consider byte-level models, since byte vocabularies mean we can use the same parameters on unseen languages with different character sets. We select 4 additional LORELEI languages: Russian, Yoruba, Bengali, and Uzbek. For comparison, we train monolingual Byte CRF models (from scratch), following the same optimization protocols as described above.

Table 3 shows results for the monolingual model, the fine-tuned polyglot model, and the polyglot model evaluated without any fine-tuning (zero-shot). Unsurprisingly, the polyglot model does poorly in the zero-shot setting, as it has never seen the target language. However, sharing a script with some languages in the polyglot training set can lead to significantly better than random performance (as in the case of Yoruba and Uzbek). In the fine-tuning setting, the results are mixed. Yoruba, which enjoys high script overlap with the polyglot training set, sees a large boost in performance from utilizing the polyglot parameters, whereas Uzbek, which has moderate script overlap but no family overlap, is hurt by it. Russian and Bengali have no script overlap with the polyglot training set, but Bengali, which is closely related to Hindi (sharing family and genus), sees a moderate amount of transfer, while Russian, which is not closely related to any language in the training set, is negatively impacted by using the polyglot weights.

Conclusion. The transferability of the polyglot parameters to unseen languages depends on a variety of factors. We conjecture that these factors are partially connected to relatedness to the languages in the original polyglot training set.

5 How do Polyglot Models Learn?

We now turn our attention towards understanding how polyglot models transfer information across languages. We examine the types of errors made in each setting, as well as how polyglot models efficiently use parameters and how parameter weights are shared across languages.

5.1 Error Analysis

We broadly examine the types of errors made across each of our regimes, focusing on results from the Byte-CRF model.
To explore what kinds of errors polyglot fine-tuning targets, we plot in Figure 1 the counts of recall errors (including O-tags) on validation data made by the monolingual and polyglot models, compared to the fine-tuned model. We find that polyglot models tend to make more errors on O-tags, indicating a tendency towards making precision errors, but that fine-tuning tends to correct this trend back towards monolingual performance. We additionally find that, compared to monolingual models, fine-tuned models do much better on PER and ORG tags (in both the LORELEI and CoNLL settings). However, the same is not true for polyglot LORELEI models, indicating that some of this transfer comes from the combination of polyglot and fine-tune training.

Figure 1: (a) The count of errors made by the LORELEI Byte-CRF monolingual and polyglot models, compared to the fine-tuned (FT) models (across all languages); (b) shows the CoNLL setting. Deltas (errors minus FT errors) are displayed on top. Polyglot models tend to make more errors on O-tagged tokens (precision errors) than monolingual models. However, fine-tuning tends to recover these errors to nearly monolingual performance. In the CoNLL regime, polyglot models make fewer errors on PER and ORG tags, and fine-tuned models generally maintain that error rate.

One reason that fine-tuned polyglot models may perform better than monolingual models is the larger number of entities they see during training. Many languages contain entities in their validation set which appear in the training sets of other languages. We identify such "common entities" as entities in the validation set of a language l that share some level of surface-form overlap (either n-gram or exact match)8 and type with an entity appearing in the training set of a language l' ≠ l. We plot the average error rate (defined as the harmonic mean between the rate of precision errors and the rate of recall errors) of the CoNLL Byte-CRF model in Figure 2. We find that polyglot models have a lower error rate on "common entities" than monolingual models, indicating that such entities are a source of transfer in polyglot NER.

8 We explore n-gram overlap with n = 4, 5, 6, 7, 8 and exact name overlap. We report the average rate across each granularity.

Figure 2: The rate of errors containing surface forms that overlap with an entity of the same type in other languages' training sets. We report the harmonic mean between the rates in precision and recall errors, for the monolingual, polyglot, and fine-tuned Byte-CRF models. We find that polyglot models have a lower rate of errors on entities which appear in other languages' training sets, indicating that they are benefiting from the higher quantity of entities seen.
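As a concrete illustration of this "common entity" bookkeeping, the sketch below flags error spans whose surface form and type match an entity seen in another language's training data; the character n-gram rule follows footnote 8, but the data structures and helper names are ours, and the exact matching criterion of the paper may differ:

```python
def char_ngrams(s, n):
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def is_common(entity, etype, other_train_entities, n=4):
    """True if (surface form, type) overlaps with a same-typed entity from another
    language's training set, via exact match or shared character n-grams."""
    grams = char_ngrams(entity, n)
    for surface, t in other_train_entities:
        if t != etype:
            continue
        if surface.lower() == entity.lower() or (grams & char_ngrams(surface, n)):
            return True
    return False

def common_entity_error_rate(errors, other_train_entities, n=4):
    """Fraction of erroneous spans (precision or recall errors) that are 'common entities'.

    errors: list of (surface form, entity type) pairs taken from one model's
            mistakes on a single language's validation set.
    """
    if not errors:
        return 0.0
    hits = sum(is_common(s, t, other_train_entities, n) for s, t in errors)
    return hits / len(errors)
```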
We also see that language-specific fine-tuning tends to increase this error rate, either due to forgetting or simply due to decreasing errors on "non-common entities" during fine-tuning.

5.2 Polyglot Parameter Efficiency

Many studies have demonstrated that modern neural models have enormous capacity, and that not all parameters are needed to model the target function (LeCun et al., 1990; Hinton et al., 2015; Frankle and Carbin, 2019; Sanh et al., 2019). Let us assume that it takes M_l parameters to learn a monolingual NER model for language l. If we sought to train monolingual models for each language in L, we would need M̂ = Σ_{l∈L} M_l parameters. Does a polyglot model trained on these languages need M̂ parameters? Perhaps the polyglot NER model is partitioning its parameters by language, and little sharing occurs across languages, so the full M̂ parameters are needed. In this case, the negative results for polyglot learning could be explained by the under-parameterization of the model. Conversely, the model could be sharing parameters across many languages, effectively learning cross-lingual representations. In this case, we would expect the model to need far fewer than M̂ parameters, and the over-sharing of parameters across languages could explain the poor polyglot performance.

Model Compression. To explore polyglot model behavior, we utilize model compression techniques, which aim to compress a large number of parameters into a smaller number with minimal loss in overall model accuracy. We use magnitude weight pruning (Han et al., 2015) to answer two questions: (1) How many more parameters do polyglot models require than monolingual models? (2) Does fine-tuning learn a solution as compact as that of monolingual training? We analyze the byte-level CRF because it is stronger than, or comparable to, all other models with no pretraining, and has the same number of parameters across all languages and settings (monolingual, polyglot, and fine-tuned). We perform our analysis on models without pretraining, as we wish to isolate the effects of polyglot learning from those of external polyglot resources. We prune the lowest-magnitude weights of each model in 10% increments and plot the average9 performance as pruning increases in Figure 3. Additionally, we define "over-pruning" to occur for language l and model m when pruning causes the performance of model m on language l to decrease by more than 1 F1 from model m's original performance. We plot the pruning threshold for each language and model10 before "over-pruning" occurs in Figure 3 as well.

9 Averaged across all CoNLL or LORELEI languages.
10 For polyglot models we report the percentage required to maintain performance on each individual language using the same model.

We find that polyglot models require more parameters than monolingual models to maintain their performance, but are significantly more efficient, i.e., they need far fewer than M̂ parameters. For example, the CoNLL polyglot model needs 60% of its parameters to maintain performance on all languages; English, Spanish, and Dutch require fewer parameters still. Compared to the total number of parameters needed by the four individual monolingual models combined (M̂), the polyglot model needs only 30% of that, although this is paid for by an average decrease of 0.33 F1. This suggests that polyglot performance suffers due to over-sharing parameters, rather than under-sharing, during joint optimization. Additionally, we find that fine-tuning the polyglot models does not recover as sparse a solution as monolingual training. This finding suggests that either fine-tuning utilizes polyglot parameters to learn a denser solution than monolingual models, or fine-tuning retains several high-magnitude polyglot weights that are not crucial to the target language. In the latter case, more sophisticated pruning criteria may be better suited to determining the sparsity of fine-tuned models, despite recent evidence indicating the strength of simple magnitude pruning (Gale et al., 2019).
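The pruning procedure itself is straightforward. Below is a minimal NumPy sketch of global magnitude pruning and of locating the "over-pruning" threshold in 10% steps; it is one plausible reading of the setup (the paper does not state whether pruning is global or per-layer), and `evaluate` is a hypothetical hook that loads weights into the model and returns dev F1:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the lowest-magnitude `fraction` of weights across the whole model.

    weights: dict mapping parameter name -> np.ndarray.
    Returns a new dict with pruned copies of the arrays.
    """
    if fraction <= 0:
        return {k: v.copy() for k, v in weights.items()}
    all_mags = np.concatenate([np.abs(v).ravel() for v in weights.values()])
    threshold = np.quantile(all_mags, fraction)
    return {k: np.where(np.abs(v) < threshold, 0.0, v) for k, v in weights.items()}

def max_safe_pruning(weights, evaluate, base_f1, tol=1.0):
    """Largest pruning level (in 10% steps) whose F1 stays within `tol` of the original.

    evaluate: callable that loads the given weights into the model and returns dev F1.
    Mirrors the over-pruning thresholds plotted in Figure 3 (c & d).
    """
    best = 0.0
    for frac in np.arange(0.1, 1.0, 0.1):
        f1 = evaluate(magnitude_prune(weights, frac))
        if base_f1 - f1 > tol:       # over-pruning: drop of more than 1 F1
            break
        best = float(frac)
    return best
```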
Figure 3: (a & b) Average F1 of Byte-CRF models (CoNLL and LORELEI) as the pruning threshold increases. We find that monolingual models learn much sparser solutions than polyglot models. Interestingly, fine-tuning does not recover the sparsity of the monolingual models. (c & d) The pruning thresholds before language performance drops by more than 1 F1 for each model. In the CoNLL setting, languages share nearly equally sparse solutions. However, in the LORELEI setting, the sparsity across languages exhibits high variance, even in the fully shared polyglot model.

5.3 Important Weights Across Languages

In addition to measuring the parameter efficiency of the polyglot models, we are interested in knowing how much overlap exists between the parameters that are most important for different languages, and how those parameters change during fine-tuning. This answers two important questions: (1) How do languages utilize shared polyglot parameters? (2) Does fine-tuning benefit from many or few polyglot weights? To measure the overlap between the important weights for each language in a polyglot model, we compare the language-specific Fisher information matrix diagonals of the polyglot model. The Fisher information matrix has been used in this way to measure individual parameter importance on a specific task, and has been shown to be effective for retaining important information across tasks during sequential learning (Kirkpatrick et al., 2016; Thompson et al., 2019). For a given language l with N training examples, we estimate the Fisher information matrix F^l with the empirical Fisher information matrix F̄^l, computed via11

\bar{F}^{l} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{y \sim p_\theta}\left[ \nabla_\theta \log p_\theta(y \mid x_i)\, \nabla_\theta \log p_\theta(y \mid x_i)^{\top} \right].

We take the diagonal values F̄_{i,i} as an assignment of importance to θ_i. To compute the overlap of the important weights shared between two tasks, we take the top 5%, 25%, and 50% of weights from each layer for each task (given by the tasks' Fishers) and calculate the percentage overlap between them. We do this for two settings: first, we consider the percentage of weights shared between a specific language and all other languages in a polyglot model; second, we examine the percentage of weights that remain important to a particular language after fine-tuning.
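The two computations just described can be sketched as follows. Gradient extraction from the actual model and the sampling of y ∼ p_θ (footnote 11) are glossed over, and the array layout is an assumption of ours:

```python
import numpy as np

def fisher_diagonal(per_example_grads):
    """Empirical Fisher diagonal from per-example gradients of log p_theta(y | x).

    per_example_grads: iterable of dicts, one per (sampled y, example x),
                       mapping parameter name -> gradient array.
    Returns a dict mapping parameter name -> mean squared gradient (diagonal of F).
    """
    fisher, count = {}, 0
    for grads in per_example_grads:
        for name, g in grads.items():
            fisher[name] = fisher.get(name, 0.0) + np.square(g)
        count += 1
    return {name: val / count for name, val in fisher.items()}

def topk_overlap(fisher_a, fisher_b, top_frac=0.05):
    """Per-layer overlap of the most important weights for two languages."""
    overlaps = {}
    for name in fisher_a:
        a, b = fisher_a[name].ravel(), fisher_b[name].ravel()
        k = max(1, int(top_frac * a.size))
        top_a = set(np.argsort(a)[-k:].tolist())   # indices of the top-k weights for language a
        top_b = set(np.argsort(b)[-k:].tolist())
        overlaps[name] = len(top_a & top_b) / k
    return overlaps
```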
We plot the average overlap across all languages for each setting with our LORELEI Byte-CRF models in Figure 4.

11 The expectation over y ∼ p_θ is approximated by sampling exactly from the posterior of each x_i. We take 1,000 samples for each example.

Figure 4: (a) Percentage of important-weight overlap between a single language and all other languages in the polyglot Byte-CRF LORELEI model (averaged over all languages). The top 5% of parameters for each language share little overlap with other languages, implying that the most important weights for each language are uniquely important to that language. (b) Overlap of important weights between the polyglot and fine-tuned Byte-CRF LORELEI models, for a given language (averaged over all languages). Only 30% of the top 5% of weights important to a given language are retained after fine-tuning, suggesting that fine-tuning targets the most important parameters for a language.

We find that languages share a high number of important weights with each other in the polyglot model (40% overlap in the top 25% of weights of the LSTM layers), which helps explain how polyglot models remain competitive with multiple monolingual models while using fewer parameters. Interestingly, however, we find that the most important weights (top 5%) for each language share little overlap, implying that in polyglot learning each language acquires parameters that are uniquely important to that language. We additionally find that fine-tuning does not shift the importance of a significant number of weights (more than half of the top 25% important weights for a language in the polyglot model remain similarly important after fine-tuning). Surprisingly, the parameters that were most important to a language in the polyglot model are the parameters most affected during fine-tuning for that language. Thus, we see that language-specific fine-tuning retains the importance of many shared parameters, but the weights most important to that language are significantly affected.12

6 Conclusions

We explore the benefits of polyglot training for NER across a range of models. We find that, while not all models benefit in performance from polyglot training, the parameters learned by those models can be leveraged in a language-specific way to consistently outperform monolingual models. We probe properties of polyglot NER models and find that they are much more efficient than monolingual models in terms of the parameters they require, while generally maintaining competitive performance across all languages. We show that the high amount of parameter sharing in polyglot models partially explains this, and additionally find that language-specific fine-tuning may use a large portion of those shared parameters. In future work, we will explore whether the observed trends hold in much larger polyglot settings, e.g., the Wikiann NER corpus (Pan et al., 2017b). Finally, regarding the sharing of weights between languages in polyglot models, our key conclusion is that standard training objectives are unable to find an optimum which simultaneously achieves high task performance across all languages.
With this in mind, exploring different training strategies, such as multi-objective optimization, may prove beneficial (Sener and Koltun, 2018). On the other hand, when the objective is to maximize performance on a single target language it may be possible to improve the proposed fine-tuning approach further using methods such as elastic weight consolidation (Kirkpatrick et al., 2016). Acknowledgments We would like to thank the anonymous reviewers for their helpful comments. 12Note that typically it is not reasonable to compare the weights of two different neural networks, as they are unidentifiable (Goodfellow and Vinyals, 2015). However, since one model is initialized from the other, we believe it is reasonable to characterize how weights shift during language-specific fine-tuning. 8102 References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multisource cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098–3112, Florence, Italy. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. Ryan Cotterell and Kevin Duh. 2017. Lowresource named entity recognition with crosslingual, character-level neural conditional random fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 91–96. Asian Federation of Natural Language Processing. Silviu Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. Arjun Das, Debasis Ganguly, and Utpal Garain. 2017. Named entity recognition with word embeddings and wikipedia categories for a low-resource language. ACM Transactions on Asian and LowResource Language Information Processing (TALLIP), 16(3):18. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263, Prague, Czech Republic. Association for Computational Linguistics. Jacob Devlin. 2018. Multilingual bert readme document. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018a. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018b. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the Association for Computational Linguistics, Beijing, China. Association for Computational Linguistics. Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. Building a multilingual named entityannotated corpus using annotation projection. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 118–124, Hissar, Bulgaria. 
Association for Computational Linguistics. Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations. Trevor Gale, Erich Elsen, and Sara Hooker. 2019. The state of sparsity in deep neural networks. ArXiv, abs/1902.09574. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296–1306. Association for Computational Linguistics. Ian J. Goodfellow and Oriol Vinyals. 2015. Qualitatively characterizing neural network optimization problems. CoRR, abs/1412.6544. Song Han, Jeff Pool, John Tran, and William J. Dally. 2015. Learning both weights and connections for efficient neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 1135–1143, Cambridge, MA, USA. MIT Press. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Lifu Huang, Heng Ji, and Jonathan May. 2019. Crosslingual multi-level adversarial transfer to enhance low-resource name tagging. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3823–3833, Minneapolis, Minnesota. Association for Computational Linguistics. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Huda Khayrallah, Brian Thompson, Kevin Duh, and Philipp Koehn. 2018. Regularized training objective for continued training for domain adaptation in neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 36–44. 8103 Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2832–2838, Copenhagen, Denmark. Association for Computational Linguistics. Sungchul Kim, Kristina Toutanova, and Hwanjo Yu. 2012. Multilingual named entity recognition using parallel data and metadata from wikipedia. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 694–702, Stroudsburg, PA, USA. Association for Computational Linguistics. James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114 13:3521–3526. Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2779–2795, Hong Kong, China. Association for Computational Linguistics. Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. Charner: Character-level named entity recognition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 911–921, Osaka, Japan. The COLING 2016 Organizing Committee. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics. Yann LeCun, John S. Denker, and Sara A. Solla. 1990. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 598–605. Morgan-Kaufmann. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365–378. Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation, pages 76–79. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Llu´ıs M`arquez, Adri`a de Gispert, Xavier Carreras, and Llu´ıs Padr´o. 2003. Low-cost named entity classification for Catalan: Exploiting multilingual resources and unlabeled data. In Proceedings of the ACL 2003 Workshop on Multilingual and Mixedlanguage Named Entity Recognition, pages 25–32, Sapporo, Japan. Association for Computational Linguistics. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap Translation for Cross-Lingual Named Entity Recognition. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912–3918, Minneapolis, Minnesota. Association for Computational Linguistics. Rudra Murthy, Anoop Kunchukuttan, and Pushpak Bhattacharyya. 2018. Judicious selection of training data in assisting language for multilingual neural ner. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 401–406. Association for Computational Linguistics. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1470– 1480, Vancouver, Canada. Association for Computational Linguistics. 
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017a. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1946–1958. 8104 Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017b. Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164, Florence, Italy. Association for Computational Linguistics. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Ozan Sener and Vladlen Koltun. 2018. Multi-task learning as multi-objective optimization. In Advances in Neural Information Processing Systems, pages 527–538. Oscar T¨ackstr¨om. 2012. Nudging the envelope of direct transfer methods for multilingual named entity recognition. In Proceedings of the NAACLHLT Workshop on the Induction of Linguistic Structure, pages 55–63, Montr´eal, Canada. Association for Computational Linguistics. Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062–2068, Minneapolis, Minnesota. Association for Computational Linguistics. Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In North American Chapter of the Association for Computational Linguistics (NAACL). Shyam Upadhyay, Nitish Gupta, and Dan Roth. 2018. Joint multilingual supervision for cross-lingual entity linking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2486–2495, Brussels, Belgium. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. arXiv preprint arXiv:1904.09077. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369–379, Brussels, Belgium. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8105–8117 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8105 ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages Colin Lockard University of Washington [email protected] Prashant Shiralkar Amazon [email protected] Xin Luna Dong Amazon [email protected] Hannaneh Hajishirzi University of Washington, Allen Institute for AI [email protected] Abstract In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color. Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template. In this work, we propose a solution for “zero-shot” open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals. Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates. Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical. 1 Introduction Semi-structured websites offer rich sources of highquality data across many areas of knowledge (Dong et al., 2014). These websites present information via text that is accompanied by rich visual and layout features that can be generalized beyond a single website. However, most prior work on information extraction (IE) from websites has largely ignored most of these features, instead relying only on HTML features specific to an individual website (Ferrara et al., 2014). This requires training data for every website targeted for extraction, an approach that cannot scale up if training data must be manually created. To circumvent manual data annotation, previous work used a distant supervision process requiring a knowledge base aligned to the website targeted Figure 1: Our zero-shot open-domain information extraction process learns generalizable graph-based representations of how relations are visually presented on semi-structured websites, allowing for training on one vertical (such University sites) and extraction from another (such as Movie sites). for extraction (Gentile et al., 2015; Lockard et al., 2018), including for OpenIE extraction (Banko et al., 2007; Bronzi et al., 2013; Lockard et al., 2019). These methods, however, can only learn a website-specific model based on seed knowledge for the site, but cannot be generalized to the majority of websites with knowledge from new verticals, by long-tail specialists, and in different languages. In this paper, we introduce the task of zero-shot relation extraction from semi-structured websites, in which a learned model is applied to extract from a website that was not represented in its training data (Figure 1). Moreover, we introduce ZEROSHOTCERES, a graph neural network model that encodes semantic textual and visual patterns common across different training websites and can generalize to extract information from documents with never-before-seen templates and topics. 
8106 Unlike unstructured text, which can be modeled as a sequence, or images, which can be modeled as a two-dimensional grid of pixels, it is not obvious how to operate over the many shapes and sizes of text fields on a semi-structured webpage. We illustrate our intuition using the webpage snippets in Figure 1: Despite their differences, each site uses alignment of relation and object strings, either vertically or horizontally, to help indicate relationships; in addition, relation strings are often more prominent than their objects, either in size or boldness. Such features are semantically meaningful to readers and often consistent from site to site; thus, encoding them into the representation of webpages will allow us to generalize to unseen sites. Our model, ZEROSHOTCERES, encodes these diverse feature types in a graph representation in which each text field becomes a node in a graph, connected by edges indicating layout relationships on the page. This abstracts away the details of the page while maintaining the core visual structure presented to the reader. A graph neural network is then applied to produce a new representation of each text field, informed by the surrounding page context. This representation is then used to extract entities and relationships from the document. This allows us to extract not only in the closed-domain setting, but also allows us to conduct OpenIE on websites about entirely new subject verticals not seen during training. Our contributions are threefold: (a) We introduce a graph neural network model for webpage representation that integrates multi-modal information including visual, layout, and textual features, enabling generalization for IE from never-beforeseen websites. (b) We propose the first approach to enable Open Information Extraction from semistructured websites without prior knowledge or training data in the subject vertical. (c) Our method works in both OpenIE and ClosedIE settings. We conduct evaluations showing the effectiveness of the technique and exploring the challenges of zeroshot semi-structured IE, achieving a 31% improvement in F1 compared to an OpenIE baseline. The graph model gives a 26% F1 boost when extracting according to a defined schema (ClosedIE). 2 Related Work DOM-based ClosedIE: The conventional approach to extraction from semi-structured websites is wrapper induction (Kushmerick et al., 1997), in which training data for documents from a given template is used to learn a rule-based extractor based on DOM (i.e., HTML) features to apply to other documents of the same template, extracting relations according to a pre-defined ontology (“ClosedIE”). Since this approach requires training data for each template targeted for extraction, recent work has focused on reducing the manual work needed per site. Fonduer (Wu et al., 2018) provides an interface for easily creating training data, Vertex (Gulhane et al., 2011) uses semi-supervision to minimize the number of labels needed, LODIE (Gentile et al., 2015) and Ceres (Lockard et al., 2018) automatically generate training data based on distant supervision, and DIADEM (Furche et al., 2014) identifies matching rules for specific entity types. DOM-based OpenIE: WEIR (Bronzi et al., 2013) and OpenCeres (Lockard et al., 2019) offer OpenIE approaches to DOM extraction. 
The latter method uses visual features in a semi-supervised learning setting to identify candidate pairs that are visually similar to known (relation, object) pairs; however, the ultimate extraction model learned is still sitespecific and based on DOM features rather than the more generalizable visual or textual features. Pasupat and Liang (2014) present a zero-shot method for extraction from semi-structured webpages, but limit their work to extraction of entities rather than relationships and do not consider visual elements of the page. Multi-modal extraction: The incorporation of visual information into IE was proposed by Aumann et al. (2006), who attempted to learn a fitness function to calculate the visual similarity of a document to one in its training set to extract elements like headlines and authors. Other recent approaches that attempt to address the layout structure of documents are CharGrid (Katti et al., 2018), which represents a document as a two-dimensional grid of characters, RiSER, an extraction technique targeted at templated emails (Kocayusufoglu et al., 2019), and that by Liu et al. (2018), which presents an RNN method for learning DOM-tree rules. However, none of these address the OpenIE setting, which requires understanding the relationship between different text fields on the page. The approaches most similar to ours are GraphIE (Qian et al., 2019) and the approach by Liu et al. (2019). Both approaches involve constructing a graph of text fields with edges representing 8107 Figure 2: A depiction of the web page representation module (left) and relation classifiers (right). horizontal and vertical adjacency, followed by an application of a GCN. However, neither approach makes use of visual features beyond text field adjacency nor DOM features, and both only consider extraction from a single text field rather than OpenIE. In addition, they show only very limited results on the ability of their model to generalize beyond the templates present in the training set. 3 Problem and Approach Overview 3.1 Zero-shot relation extraction from semi-structured websites We address the problem of extracting entities and the relationships between them as expressed by never-before-seen semi-structured websites. A semi-structured website typically belongs to a subject vertical V , where V is a general field of knowledge such as movies, finance, or sports. A semistructured website consists of a set of detail pages sharing a similar template, each of which contains a set of facts about a page topic entity etopic. The HTML document w defines a set of text fields T, which the web browser renders as a webpage according to the instructions defined in the HTML and any referenced auxiliary files such as CSS or Javascript. The text fields have both textual and visual features, described in Section 4.2.1. 3.1.1 Relation Extraction Our goal is to extract (subject, relation, object) knowledge triples, where the subject is etopic, the object is a text field t ∈T containing the name of an entity (or atomic attribute value), and the relation indicates the relationship between the two entities. For this work, we assume the page topic entity has already been identified, (such as by the method proposed by Lockard et al. (2018) or by using the HTML title tag) and thus limit ourselves to identifying the objects and corresponding relations. We consider the following two settings: Relation Extraction (ClosedIE): Let R define a closed set of relation types, including a special type indicating “No Relation”. 
Relation Extraction is the assignment of each text field t to one ri ∈R, which indicates the relationship between the entity eobject mentioned in t and etopic. Open Relation Extraction (OpenIE): Given a pair of text fields (i, j), Open Relation Extraction is a binary prediction of whether i is a relation string indicating a relationship between the entity eobject mentioned in j and etopic. 3.1.2 Zero-shot Extraction Unlike prior work that requires the learning of a model specific to the semi-structured website targeted for extraction, we look at zero-shot extraction. Given a semi-structured website W targeted for extraction, zero-shot extraction is the learning of a model without any use of pages from W during training. We consider two zero-shot settings: Unseen-Website Zero-shot Extraction is the learning of a model without any use of pages from W, but with pages from some other website(s) from vertical V during training. Unseen-Vertical Zero-shot Extraction is the learning of a model without any use of pages from W or of pages from any website with vertical V during training. 3.2 Approach Overview Figure 2 depicts our approach for zero-shot relation extraction (detailed in Section 5) leveraging a web page representation that will capture the 8108 similarities in visual and textual semantics across websites (Section 4). Our web page representation module first converts each page into a layout graph (Section 4.1) that abstracts away the details of the page structure while maintaining the adjacency relationships between text fields. We represent each text field with an initial feature vector of visual and textual attributes. This input is passed into a graph neural network that allows for information to flow between nodes, producing a new text field representation that captures contextual information (Section 4.2). To obtain a web page encoding, we leverage a pre-training step with auxilliary loss function Lpre that encourages the model to produce an intermediary representation useful for IE. This is performed via a three-way classification that determines if a text field contains a relation name, the object of some relation, or irrelevant text (Section 4.3). After pre-training, the weights of this GNN are frozen and it can be applied to new pages, with its output used as input into a relation extraction module, optimized with task-specific loss function Ltask, where the task is either OpenIE or ClosedIE, described in Section 5. The resulting approach minimizes our overall loss LZSCERES, with: LZSCERES = Lpre + Ltask (1) 4 Web Page Encoder The key idea behind our solution is to train webpage representations to capture the fundamental similarities in visual and textual semantics across websites to express relations, objects, and their relationships. The fundamental characteristics we capture, generalizable across templates and verticals, thus allow us to carry over our knowledge across websites and enable zero-shot extraction. There are two key parts in our solution. First, we build a graph to capture the layout relationships in a more abstract form that allows us to more easily learn the common features across different sites such as the fact that relation strings are often to the left or above their objects (Section 4.1). 
Second, we apply a Graph Neural Network (GNN) to learn representations for each node capturing contextual information about its neighborhood on the webpage (Section 4.2), allowing information to flow through the nodes and provide context (e.g., flowing through "Cast" to a far-away node "Uma Thurman" via the closer node "Ethan Hawke" in Figure 3). This representation will be useful for relation extraction as described in Section 5.

Figure 3: A cropped portion of the detail page from allmovie.com for the film Tape, with arrows overlaid showing the constructed page graph, consisting of edges for each horizontal (purple), vertical (yellow), and DOM (green) relationship between text fields.

4.1 Page graph construction

We encode the layout relationships between text fields in the form of a graph, G, consisting of a set of nodes N, each corresponding to a text field, and a set of edges E corresponding to relationships between the text fields. The edges capture three forms of adjacency, as shown in the example in Figure 3:

Horizontal: Edges are added when two text fields are horizontal neighbors on the page; that is, they have a shared vertical location and there are no other text fields between them.

Vertical: Edges are added when two text fields are vertical neighbors on the page; that is, they have an overlapping horizontal location and there are no other text fields between them.

DOM: Edges are added when two text fields are siblings or cousins in the DOM tree; that is, the absolute XPaths identifying their locations differ only at a single index value.

4.2 Graph Neural Network (GNN)

To build a representation of each text field that incorporates the surrounding page context, we use Graph Attention Networks (GAT) (Veličković et al., 2018). The feature vector for each text field (described below) and the page graph form the input to a GAT, which then produces a new representation for each text field based on the surrounding context in the graph. Specifically, for each text field i, GAT layer l computes a representation h_i^l as follows:

h_i^l = \sigma\left( \sum_{j \in N_i} \alpha_{ij} W_G^l h_j^{l-1} \right), \quad (2)

where N_i is the set of neighbors of node i in the graph, and h_j^{l-1} is the representation of node j from the preceding layer; h_j^0 indicates the input features for the node. (For each node, we add a self loop to the graph; that is, we include i in N_i.) W_G^l is a learned weight matrix applied to the node features for layer l-1, and \sigma is a non-linear function, in our case a ReLU. The attention weight \alpha_{ij} determines how much a node's representation is influenced by each of its neighbors, and is calculated as follows:

\alpha_{ij} = \frac{\exp\left( \sigma\left( a^\top [W_G^l h_i^{l-1}; W_G^l h_j^{l-1}] \right) \right)}{\sum_{k \in N_i} \exp\left( \sigma\left( a^\top [W_G^l h_i^{l-1}; W_G^l h_k^{l-1}] \right) \right)}, \quad (3)

where a is a weight vector applied against the concatenation (represented by ";") of the two nodes' features as transformed by W_G^l, and \sigma is a ReLU. This produces a new contextualized set of features for each node that are informed by the surrounding page context. We describe the original input features for each text field in the next section.

4.2.1 Initial text field features

For each text field on the page, we produce an initial feature vector containing both a visual feature vector V and a textual feature vector T. We define the input feature vector h_i^0 for text field i as:

h_i^0 = [T(i); V(i)], \quad (4)

where ";" represents concatenation.
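Before detailing the individual input features, the following minimal sketch shows a single GAT layer implementing Eqs. (2)-(3) in plain PyTorch with a dense adjacency matrix. The class name and interface are illustrative assumptions; the paper's own graph functions are implemented with DGL (Section 6.2).

```python
import torch
import torch.nn as nn

class GATLayer(nn.Module):
    """Single graph-attention layer (Eqs. 2-3): each text field's new
    representation is an attention-weighted sum of its neighbors'
    (self-loop included) linearly transformed features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W_G = nn.Linear(in_dim, out_dim, bias=False)   # W_G^l
        self.a = nn.Linear(2 * out_dim, 1, bias=False)       # attention vector a

    def forward(self, h, adj):
        # h: [num_nodes, in_dim]; adj: [num_nodes, num_nodes], self-loops included
        Wh = self.W_G(h)                                      # [N, out_dim]
        n = Wh.size(0)
        # pairwise concatenation [W_G h_i ; W_G h_j] for the attention scores
        pairs = torch.cat([Wh.unsqueeze(1).expand(n, n, -1),
                           Wh.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = torch.relu(self.a(pairs)).squeeze(-1)        # [N, N]
        scores = scores.masked_fill(adj == 0, float("-inf"))  # restrict to neighbors
        alpha = torch.softmax(scores, dim=-1)                 # Eq. (3)
        return torch.relu(alpha @ Wh)                         # Eq. (2)
```

Stacking two such layers with a hidden size of 25 (OpenIE) or 200 (ClosedIE) would mirror the configuration reported in Section 6.2; this dense-adjacency version is only for exposition.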
Visual Features: A numeric feature vector is constructed representing the bounding box coordinates of the text field, the height and width of the bounding box, and the font size, along with one-hot features representing the typeface, font weight, font style, color, and text alignment. Textual Features: In ClosedIE, to capture its semantics, the textual content of the text field is processed with a pre-trained BERT (Devlin et al., 2018) model. To produce a representation of the entire text field, we simply average the BERT-Base output for each token in the text field. For OpenIE, since the goal is to generalize to entirely new subject verticals that may contain text not seen during training, only a single textual feature is used1: the percent of pages on the site on which the string in the text field appears. This frequency measure helps differentiate relation strings, which are likely to be common, from object strings, which are more likely to be rare. 4.3 Pre-Training Web Page Encoder To encourage the GNN weights to capture the features necessary to represent relationships on the page, we use a pre-training step to learn the GNN representation before incorporating it into the extraction model. The pre-training task is a simplified form of the OpenIE task. To speed up training by avoiding the pairwise decisions necessary for OpenIE, we instead perform a multi-class classification of each text field into a class c in the set {Relation, Object, Other}: p  c|hl i; θ  = softmax  Wprehl i  (5) where hl i is the output of the GNN for the text field, Wpre is a weight matrix, and θ comprises WG and Wpre. Given a training set with T text fields, each with a ground truth class ypre i , we minimize the cross-entropy loss Lpre: Lpre = − T X i=1 log p  ypre i |hl i, θ  (6) To discourage overfitting to spurious details in the small number of websites in our training set, we freeze the GNN weights after pre-training and do not update them during the full OpenIE training. After pre-training we discard the linear layer Wpre since it is not needed for subsequent steps; instead, we directly use the GNN output hl. 5 Relation Extraction Model Once we have the new representation hl t of each text field t produced by the above GNN process, we can perform our final classification. 5.1 OpenIE For OpenIE, the classification decision must be made over a pair of text fields, i and j, the first containing the candidate relation string and the second containing the candidate object string. To 1This feature is also used during ClosedIE 8110 avoid examining all possible pairs of fields, we first apply the candidate pair identification algorithm from Lockard et al. (2019), which filters down to a set of potential pairs based on physical and layout distance between text fields. For each candidate pair, we concatenate the GNN-produced contextual features hl for both text fields with the original features h0 for both text fields (since some information can be diluted in the GNN), as well as a pairwise feature vector that simply contains the horizontal and vertical distance between the two text fields, and pass them into a binary classifier: rOIE i = FNN  [h0 i ; h0 j; hl i; hl j; pairwisei,j], θOIE (7) where FNN is a feed-forward neural network with parameters θOIE, “;” indicates concatenation, and rOIE i is the predicted probability that the two text fields constitute a (relation, object) pair. 
We then optimize for cross-entropy loss across training examples T with yOIE i = 1 if the pair is positive: LOIE = T X i=1 yOIE i log rOIE i + 1 −yOIE i  log 1 −rOIE i  , (8) 5.2 ClosedIE For ClosedIE, we perform a multi-class classification using the contextual representation produced by the GNN (hl i) along with the original features (h0 i ) for text field i: rCIE i = FNN  [h0 i ; hl i], θCIE (9) where FNN is a feed-forward neural network parameterized by θCIE, “;” indicates concatenation, and rCIE i is the predicted probability of relation r in set R. We optimize for cross entropy loss LCIE: LCIE = − T X i=1 log p  yCIE i |h0 i , hl i, θCIE (10) where yCIE i is the true class for example i. For both ClosedIE and OpenIE we use one hidden layer in the feed-forward network. 6 Experimental Setup 6.1 Dataset For both OpenIE and ClosedIE, our primary dataset is the extended version (Lockard et al., 2019) of the SWDE dataset (Hao et al., 2011), which contains gold labels for OpenIE extractions for 21 Englishlanguage websites (each with one template) in three subject verticals (Movie, NBA, and University), with between 400 and 2,000 pages per site. We generated ClosedIE labels by converting the OpenIE labels to ClosedIE labels via manual alignment of OpenIE relations between websites, giving a set of 18 relations for the Movie vertical, 14 for NBA, and 13 for University. More information on training data creation and a complete listing of ClosedIE relations is available in the Appendix. We used three SWDE Movie sites (AMCTV, AllMovie, and IMDb) as a development set and did not evaluate on them for the reported results. 6.2 Experimental Settings For each model tested (both our own and the baselines), we classify the training setting into the following categories indicating the level of vertical or site-specific knowledge used, in decreasing level of difficulty. • Level I–Unseen-Vertical Zero-shot (OpenIE only): A model is trained on sites from two of the three verticals (e.g. NBA and University) and applied to sites from the other vertical (Movie). This is the hardest case and is important when we wish to extract knowledge from new verticals where we do not have any prior knowledge or annotations. • Level II–Zero-shot with Vertical Knowledge: A model is trained on all sites but one (spanning Movie, NBA, and University) and then applied to the held-out site. As in cross-validation, experiments are repeated with each site having a turn being held out. It is easier than Level I but is still important for a new website that may not have data overlapping with other websites in the same vertical. For the ClosedIE setting, we train only on in-vertical sites. • Level III–Site-specific Knowledge: This is the traditional setting used by two of our baselines where we have seed knowledge overlapping with the website data to allow training a specific model for the website. Whereas Level I-II are both zero-shot settings, Level III is not, as it allows site-specific training data via weak supervision. (We do not present results using full supervision from manual annotations since it is known from prior work (e.g., Gulhane et al. (2011)) that full 8111 supervision from the target website yields highly accurate semi-structured extractors; we note that ZSCERES also achieves comparable results (∼ 0.95 F1) in this setting. We repeated our experiments 10 times and we report the results averaged across the runs. For OpenIE, we follow the “lenient” scoring method for SWDE introduced by Lockard et al. 
(2019), scoring an extraction as correct if the relation string matches any of acceptable surface forms listed by the ground truth for that object. Models are constructed in PyTorch (Paszke et al., 2017), with graph functions implemented in DGL (Wang et al., 2019) and optimization performed using Adam (Kingma and Ba, 2014) and a batch size of 20. For OpenIE, we use a hidden layer size of 25 for the GAT and 100 for the feed-forward layer. For ClosedIE, we use a hidden layer size of 200 for all layers. We use a 2-layer GAT and dropout of 0.25. We obtain visual features by rendering the page using the headless Chrome browser and querying the values using Selenium2. Extraction Threshold: Since our zero-shot setting means we cannot use a development set of pages from the target site to tune the decision threshold, we instead set the threshold for each experiment to the value that attains the optimal F1 on the experiments where other sites were held-out. OpenIE Postprocessing Rules: To ensure consistency among the extracted values, we keep only the highest confidence extraction in the case that the same text field is extracted as both a relation and object, or if multiple relations are extracted for the same object. In addition, some pages in the dataset contain relational tables, from which we sometimes extract the column headers as relations with the column contents as objects. While we believe a post-processing step could potentially recover these relational contents from our extractions, the SWDE data does not contain ground truth for such facts. Instead, we apply the heuristics described by (Cafarella et al., 2008) to identify these tables and remove them from our extractions. 6.3 Baselines and Models We compare against several baselines: Colon Baseline (OpenIE) This is a heuristic technique that identifies all text fields ending in a colon 2https://www.seleniumhq.org (“:”) and assumes they are relation strings, then extracts the text field to the right or below, whichever is closer, as the object. We consider it as Level I knowledge since it requires no training. WEIR (OpenIE) This approach by Bronzi et al. (2013) discovers relations by aligning multiple pages about the same entity. Because it requires sites to be grouped by vertical and uses a gazetteer list of entity names for the alignment, it has Level III knowledge. OpenCeres (OpenIE) This applies the model by Lockard et al. (2019), which requires a knowledge base matching some facts presented on the target website, using Level III knowledge. ZSCERES-FFNN (Feed-forward neural network): This model takes the same features and training data as the full ZSCERES model but removes the GNN component, with versions tested with both Level I (ZSCERES-FFNN UnseenVertical) and Level II (ZSCERES-FFNN UnseenWebsite) knowledge. ZSCERES-GNN: This applies the full model described in Section 4.2, with versions tested with both Level I (ZSCERES-GNN Unseen-Vertical) and Level II (ZSCERES-GNN Unseen-Website) knowledge. 7 Experimental Results 7.1 OpenIE Level-I Knowledge: Table 1 shows that ZSCERES is able to extract facts in entirely new subject verticals 31% more accurately than the colon baseline. Across all SWDE sites (micro-averaging across all extractions), ZSCERES-GNN achieves an F1 of 0.45, in comparison with 0.43 for ZSCERESFFNN, showing that the additional information provided by the page encoder allows for a better representation of the relationships between text fields. 
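For reference, a minimal sketch of the colon baseline described in Section 6.3 is given below. The TextField structure with rendered bounding-box coordinates and the fixed alignment tolerance are assumptions for illustration, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TextField:
    text: str
    x: float  # left edge of the rendered bounding box
    y: float  # top edge of the rendered bounding box

def colon_baseline(fields, tol=5.0):
    """Treat every text field ending in ':' as a relation string and pair it
    with the nearest field to its right or below, whichever is closer."""
    pairs = []
    for rel in fields:
        if not rel.text.rstrip().endswith(":"):
            continue
        right = [f for f in fields if f is not rel and abs(f.y - rel.y) <= tol and f.x > rel.x]
        below = [f for f in fields if f is not rel and abs(f.x - rel.x) <= tol and f.y > rel.y]
        candidates = []
        if right:
            candidates.append(min(right, key=lambda f: f.x - rel.x))
        if below:
            candidates.append(min(below, key=lambda f: f.y - rel.y))
        if candidates:
            obj = min(candidates, key=lambda f: ((f.x - rel.x) ** 2 + (f.y - rel.y) ** 2) ** 0.5)
            pairs.append((rel.text.rstrip().rstrip(":"), obj.text))
    return pairs
```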
By successfully learning general patterns of relational presentation on webpages, ZSCERES-GNN is able to train solely on a set of 16 websites about Movies and NBA players, and then extract from University websites more accurately than the WEIR and OpenCeres systems, which take advantage of Level III knowledge to learn models specific to those University sites. While OpenCeres’s rich vertical knowledge allows it to attain better results in Movie and NBA, ZSCERES-GNN still posts much stronger results than the other baselines in these two verticals. 8112 System Site-specific Level Movie NBA University Average Model P R F1 P R F1 P R F1 F1 OpenCeres Yes III 0.71 0.84 0.77 0.74 0.48 0.58 0.65 0.29 0.40 0.58 WEIR Yes III 0.14 0.10 0.12 0.08 0.17 0.11 0.13 0.18 0.15 0.13 ZSCERES-FFNN Unseen-Website No II 0.37 0.5 0.45 0.35 0.49 0.41 0.47 0.59 0.52 0.46 ZSCERES-GNN Unseen-Website No II 0.49 0.51 0.50 0.47 0.39 0.42 0.50 0.49 0.50 0.47 Colon Baseline No I 0.47 0.19 0.27 0.51 0.33 0.40 0.46 0.31 0.37 0.35 ZSCERES-FFNN Unseen-Vertical No I 0.42 0.38 0.40 0.44 0.46 0.45 0.50 0.45 0.48 0.44 ZSCERES-GNN Unseen-Vertical No I 0.43 0.42 0.42 0.48 0.49 0.48 0.49 0.45 0.47 0.46 Table 1: With no vertical knowledge, ZSCERES-GNN achieves 65% higher recall and comparable precision in all verticals compared to the colon baseline. Even in comparison to approaches that use vertical knowledge to learn site-specific OpenIE models, ZSCERES achieves an F1 seven points higher in the University vertical. Figure 4: For OpenIE, using the full SWDE set (except the test site), including in-vertical training data (i.e. Level II knowledge), allows for 5-10 point gains in precision at equivalent recall compared to using only outof-vertical training data (Level I). System Knowledge Level P R F1 ZSCERES-FFNN II 0.45 0.49 0.46 ZSCERES-GNN II 0.62 0.55 0.58 Table 2: For ClosedIE, using the pre-trained GNN adds 12 F1 points in comparison to the baseline lacking contextual information. Level-II Knowledge: Figure 4 shows that adding the in-vertical sites to the training set (but still withholding the test site) allows the model to achieve performance better than the Level I training set that uses only out-of-vertical data. 7.2 ClosedIE Table 2 shows the results for ClosedIE extraction. ZSCERES-GNN attains an overall F1 of 0.58 averaged across the three verticals. This significantly outperforms the feed-forward model that did not use the GNN, which attained an F1 of 0.46. While our performance on this dataset is far below Figure 5: Performance on the ClosedIE Movie vertical increases significantly as more sites are added to the training data. OpenIE F1 ClosedIE F1 Full Model 0.71 0.73 No GNN 0.68 (0.03 ↓) 0.63 (0.10 ↓) No pre-training 0.66 (0.05 ↓) 0.73 No DOM edges 0.65 (0.06 ↓) 0.58 (0.15 ↓) No spatial edges 0.65 (0.06 ↓) 0.62 (0.11 ↓) No visual features 0.55 (0.16 ↓) 0.73 No BERT features – 0.10 (0.63 ↓) Add BERT features 0.68 (0.03 ↓) – Table 3: Ablations on the Movie development set. the state-of-the-art for semi-structured ClosedIE (above 0.9 for all verticals), prior systems all learn site-specific models based on manual labeling or prior knowledge aligned to the website, while we have only Level II Knowledge available. Figure 5 shows how adding additional training data improves performance in the Movie vertical. It appears that adding additional training sites would further improve the performance. 
8113 7.3 Ablation Study Table 3 shows the contributions of different elements of the model in the OpenIE and ClosedIE settings as calculated on the development set of three sites in the Movie vertical. These ablations show that the GNN helps in both settings, with a larger effect in ClosedIE, which is likely due to sharing the rich information about the text of nearby text fields. Pre-training is important in OpenIE but does not have a significant effect for ClosedIE. This is not surprising given that the pre-training task is closely related to the OpenIE task. Both DOM and spatial adjacency edges contribute to the success of the page layout graph for the GNN. In the ClosedIE setting, the text and layout relationships alone will generally contain sufficient information to make an extraction, while in OpenIE the visual elements (such as whether text is bold or underlined) are a strong source of consistency across websites. 7.4 Error Analysis OpenIE: To understand what cases our ZSCERESGNN model is missing, we sampled 100 error cases in each vertical from the Unseen-Vertical experiment and manually examined them. Some examples of both erroneous and correct extractions are shown in Table 4 in the Appendix. False positives were largely due to the presence of two different types of n-ary relationships on the page. The first class of errors involving n-ary relationships, making up 43% of all false positives, were where several facts have a multi-way relationship with the page topic, but individually the fields are not meaningful. For example, the NBA site USAToday includes a “Latest notes” section with links to several articles relevant to the page topic entity, mentioning the date, headline, and summary. We extract all of these objects with the “Latest notes” relation, but to obtain meaningful knowledge it would be necessary to additionally associate the correct date, headline, and summary with each other. While we can envision methods for doing this via post-processing, the SWDE benchmark considers these to be errors. In the second class, ZSCERES correctly extracted (relation, object) pairs, but from page sections that contain facts about entities other than the page topic. For example, on the MatchCollege site, a section of “Similar Local Colleges” contains some of the same relations presented for the page topic, in similar formatting. These types of errors made up another 6% of false positives. Of the remaining errors, 33% were due to the extraction of pairs where the extracted relation did not represent a relationship, while another 14% were due to the extraction of pairs with a correct relation string and incorrect object. Most false negatives occurred in long vertical lists, where some values were extracted, but not all. ClosedIE: False negatives were most likely to occur on long lists of values (such as cast lists), where values toward the bottom of the list were sometimes missed. Recall also suffered on relations where the relation name varied significantly from site to site, or where ambiguity existed. For example, the string “Produced by” is used by some sites to indicate the producer of the film, while on other sites it indicates the production company. 8 Conclusion We have introduced a zero-shot method for learning a model for relation extraction from semistructured documents that generalizes beyond a single document template. Moreover, this approach enables OpenIE extraction from entirely new subject verticals where no prior knowledge is available. 
By representing a webpage as a graph defined by layout relationship between text fields, with text fields associated with both visual and textual features, we attain a 31% improvement over the baseline for new-vertical OpenIE extraction. Future extensions of this work involve a more general pre-training objective allowing for the learned representations to be useful in many tasks as well as distantly or semi-supervised approaches to benefit from more data. Acknowledgments We would like to acknowledge grants from ONR N00014- 18-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), Allen Distinguished Investigator Award, and Sloan Fellowship. References Yonatan Aumann, Ronen Feldman, Yair Liberzon, Binyamin Rosenfeld, and Jonathan Schler. 2006. Visual information extraction. Knowledge and Information Systems, 10:1–15. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew G Broadhead, and Oren Etzioni. 8114 2007. Open information extraction from the web. In IJCAI. Mirko Bronzi, Valter Crescenzi, Paolo Merialdo, and Paolo Papotti. 2013. Extraction and integration of partially overlapping web sources. PVLDB, 6:805– 816. Michael J. Cafarella, Alon Y. Halevy, Yang Zhang, Daisy Zhe Wang, and Eugene Wu. 2008. Uncovering the relational web. In WebDB. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Xin Luna Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: a web-scale approach to probabilistic knowledge fusion. In KDD. Emilio Ferrara, Pasquale De Meo, Giacomo Fiumara, and Robert Baumgartner. 2014. Web data extraction, applications and techniques: A survey. KnowledgeBased Systems, 70:301–323. Tim Furche, Georg Gottlob, Giovanni Grasso, Xiaonan Guo, Giorgio Orsi, Christian Schallhart, and Cheng Wang. 2014. Diadem: Thousands of websites to a single database. PVLDB, 7:1845–1856. Anna Lisa Gentile, Ziqi Zhang, and Fabio Ciravegna. 2015. Early steps towards web scale information extraction with lodie. AI Magazine, 36:55–64. Pankaj Gulhane, Amit Madaan, Rupesh R. Mehta, Jeyashankher Ramamirtham, Rajeev Rastogi, Sandeepkumar Satpal, Srinivasan H. Sengamedu, Ashwin Tengli, and Charu Tiwari. 2011. Web-scale information extraction with vertex. ICDE, pages 1209–1220. Qiang Hao, Rui Cai, Yanwei Pang, and Lei Zhang. 2011. From one tree to a forest: a unified solution for structured web data extraction. In SIGIR. Anoop R. Katti, Christian Reisswig, Cordula Guder, Sebastian Brarda, Steffen Bickel, Johannes H¨ohne, and Jean Baptiste Faddoul. 2018. Chargrid: Towards understanding 2d documents. In EMNLP. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. ICLR. Furkan Kocayusufoglu, Ying Sheng, Nguyen Vo, James Bradley Wendt, Qi Zhao, Sandeep Tata, and Marc Najork. 2019. Riser: Learning better representations for richly structured emails. In WWW. Nicholas Kushmerick, Daniel S. Weld, and Robert B. Doorenbos. 1997. Wrapper induction for information extraction. In IJCAI. Shengpeng Liu, Ying Li, and Binbin Fan. 2018. Hierarchical RNN for few-shot information extraction learning. In ICPCSEE. Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph convolution for multimodal information extraction from visually rich documents. In NAACL-HLT. Colin Lockard, Xin Luna Dong, Prashant Shiralkar, and Arash Einolghozati. 2018. 
Ceres: Distantly supervised relation extraction from the semi-structured web. PVLDB. Colin Lockard, Prashant Shiralkar, and Xin Luna Dong. 2019. OpenCeres: When open information extraction meets the semi-structured web. In NAACLHLT. Panupong Pasupat and Percy Liang. 2014. Zero-shot entity extraction from web pages. In ACL. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NeurIPS Autodiff Workshop. Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, and Regina Barzilay. 2019. GraphIE: A graph-based framework for information extraction. In NAACLHLT. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. ICLR. Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J Smola, and Zheng Zhang. 2019. Deep graph library: Towards efficient and scalable deep learning on graphs. ICLR Workshop on Representation Learning on Graphs and Manifolds. Sen Wu, Luke Hsiao, Xiao Cheng, Braden Hancock, Theodoros Rekatsinas, Philip Levis, and Christopher R´e. 2018. Fonduer: Knowledge base construction from richly formatted data. SIGMOD, 2018:1301– 1316. 8115 A Appendix A.1 ClosedIE Label Mappings SWDE provides OpenIE labels for all binary relations between the objects mentioned on the page and the page topic entity. These labels include the relation string used to indicate the relationship, sometimes including multiple acceptable surface forms if there is more than one applicable string for the relation (usually due to more or less specific versions of the relation). The original SWDE data only includes ClosedIE labels for a small subset of relation types. To create ClosedIE ground truth for all relations on the sites, we examined all OpenIE relations across the SWDE sites and grouped them into a set of relations that each represented the same fundamental idea. In some cases, we chose to map relations into a somewhat more general category, such as mapping “Associate Producer” and “Executive Producer” into the same “Producer” concept. After obtaining this set, we eliminated all relations that appeared on fewer than 3 websites in the dataset. The set of relations used for the ClosedIE experiments is given in Table 5. The full mapping of OpenIE to ClosedIE relations can be found at https: //github.com/cdlockard/expanded_swde. A.2 Training Data Creation The Extended SWDE dataset provides ground truth extractions of OpenIE predicate and object strings for the webpages it contains. However, it does not specify which text fields on the page were the source of the extractions. To create training data, we need to label a specific text field. It is usually the case that each ground truth string matches only one text field, so there is no ambiguity, but in cases where multiple text fields have the same value, we must disambiguate which one to use. We did this by identifying all matching text fields for the ground truth predicate and object and chose the pair in which the predicate and object strings have the closest Euclidean distance on the rendered page. While this is generally a safe assumption, there are still occasional errors in the training data. 
In particular, we observed that the NBA vertical had considerably more ambiguous cases since most relations are numerical and the pages often contained large tables of numbers. We hypothesize that this may explain why performance on the NBA vertical is lower when using Unseen-Website training data compared to the Unseen-Vertical setting (Table 1). During testing, we applied the same standard used by prior work on the dataset and accepted an answer as correct if it matched the ground truth string, regardless of which text field produced the extraction. 8116 Vertical Site Extraction Correct Notes Page Topic Relation Object Movie Hollywood Spanish Fly Costume Designer Jose Maria de Cossio Yes Movie Metacritic Saving Face Reviewed by Maitland McDonagh Yes NBAPlayer ESPN Jameer Nelson Birth Place Chester, PA Yes NBAPlayer MSNCA Matt Bonner College Florida Yes University CollegeProwler Spring Arbor University Admission Difficulty Average Yes University MatchCollege Menlo College College Credits Accepted AP Credit Yes Movie RottenTomatoes Slow Burn Tomatometer Percentage 97% No Subject of relation is not page topic but is an unrelated recently released film Movie RottenTomatoes Ginger Snaps 2 WHAT’S HOT ON RT Trailer: Santa has a bloody Xmas No Extracted relation string is not a relation Movie Metacritic The Constant Gardener User Panel Options The Constant Gardener No Extracted relation string is not a relation University CollegeProwler Minnesota School of Business CP Top 10 Lists Best Performance Venues No Link to article not related to page topic, but is a “Similar School” University MatchCollege Maric College Highest Degree Associate’s No Subject of relation is not page topic NBAPlayer FoxSports Tony Parker Latest News Mon. Dec 6, 2010 No n-ary object NBAPlayer MSNCA Gilbert Arenas Birthplace 215 No Erroneous extraction of weight for birthplace (both text fields are nearby) Table 4: Selected OpenIE Extractions from ZSCERES-GNN with Level I training (no knowledge of the subject vertical). 8117 Vertical Relation movie movie.aka movie movie.box office movie movie.budget movie movie.country movie movie.directed by movie movie.distributor movie movie.genre movie movie.language movie movie.produced by movie movie.production company movie movie.rating movie movie.release date movie movie.runtime movie movie.starring movie movie.synopsis movie movie.written by movie movie.year nbaplayer nbaplayer.age nbaplayer nbaplayer.assists nbaplayer nbaplayer.birthdate nbaplayer nbaplayer.birthplace nbaplayer nbaplayer.college nbaplayer nbaplayer.draft nbaplayer nbaplayer.experience nbaplayer nbaplayer.field goal percentage nbaplayer nbaplayer.height nbaplayer nbaplayer.points nbaplayer nbaplayer.position nbaplayer nbaplayer.rebounds nbaplayer nbaplayer.weight university university.application fee university university.calendar system university university.control university university.enrollment university university.in state tuition university university.out state tuition university university.phone university university.religious affiliation university university.setting university university.tuition university university.undergraduate enrollment university university.website Table 5: A listing of ClosedIE relation types mapped from OpenIE labels in SWDE
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8118–8123 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8118 Soft Gazetteers for Low-Resource Named Entity Recognition Shruti Rijhwani, Shuyan Zhou, Graham Neubig, Jaime Carbonell Language Technologies Institute Carnegie Mellon University {srijhwan,shuyanzh,gneubig,jgc}@cs.cmu.edu Abstract Traditional named entity recognition models use gazetteers (lists of entities) as features to improve performance. Although modern neural network models do not require such handcrafted features for strong performance, recent work (Wu et al., 2018) has demonstrated their utility for named entity recognition on English data. However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages. To address this problem, we propose a method of “soft gazetteers” that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking. Our experiments on four low-resource languages show an average improvement of 4 points in F1 score.1 1 Introduction Before the widespread adoption of neural networks for natural language processing tasks, named entity recognition (NER) systems used linguistic features based on lexical and syntactic knowledge to improve performance (Ratinov and Roth, 2009). With the introduction of the neural LSTM-CRF model (Huang et al., 2015; Lample et al., 2016), the need to develop hand-crafted features to train strong NER models diminished. However, Wu et al. (2018) have recently demonstrated that integrating linguistic features based on part-of-speech tags, word shapes, and manually created lists of entities called gazetteers into neural models leads to better NER on English data. Of particular interest to this paper are the gazetteer-based features – binary-valued features determined by whether or not an entity is present in the gazetteer. 1Code and data are available at https://github. com/neulab/soft-gazetteers. Although neural NER models have been applied to low-resource settings (Cotterell and Duh, 2017; Huang et al., 2019), directly integrating gazetteer features into these models is difficult because gazetteers in these languages are either limited in coverage or completely absent. Expanding them is time-consuming and expensive, due to the lack of available annotators for low-resource languages (Strassel and Tracey, 2016). As an alternative, we introduce “soft gazetteers”, a method to create continuous-valued gazetteer features based on readily available data from highresource languages and large English knowledge bases (e.g., Wikipedia). More specifically, we use entity linking methods to extract information from these resources and integrate it into the commonlyused CNN-LSTM-CRF NER model (Ma and Hovy, 2016) using a carefully designed feature set. We use entity linking methods designed for low-resource languages, which require far fewer resources than traditional gazetteer features (Upadhyay et al., 2018; Zhou et al., 2020). Our experiments demonstrate the effectiveness of our proposed soft gazetteer features, with an average improvement of 4 F1 points over the baseline, across four low-resource languages: Kinyarwanda, Oromo, Sinhala, and Tigrinya. 
2 Background Named Entity Recognition NER identifies named entity spans in an input sentence, and classifies them into predefined types (e.g., location, person, organization). A commonly used method for doing so is the BIO tagging scheme, representing the Beginning, the Inside and the Outside of a text segment (Ratinov and Roth, 2009). The first word of a named entity is tagged with a “B-”, subsequent words in the entity are “I-”, and non-entity words are “O”. For example: [Mark]B-PER [Watney]I-PER [visited]O [Mars]B-LOC 8119 Application to each word in the span Nuveli Zelande n'igihugu muri Oseyaniya translation: New Zealand country in Oceania "Nuveli Zelande" ˆ = New Zealand 0.95 LOC New Caledonia 0.05 LOC Candidates with scores and types Feature vector for top-1 score "Nuveli" = Œ~ "Zelande" = Œ~ LOC PER ORG 0.95 0.00 0.00 LOC PER ORG BI0.0 0.0 0.0 0.95 0.0 0.0 BILOC PER ORG 0.95 0.0 0.0 0.0 0.0 0.0 Figure 1: An example in Kinyarwanda to demonstrate soft gazetteer feature creation for each span s using candidate lists. The feature vector is applied to each word wi in the span, depending on the position (“B-” or “I-”). Binary Gazetteer Features Gazetteers are lists of named entities collected from various sources (e.g., nation-wide census, GeoNames, etc.). They have been used to create features for NER models, typically binary features indicating whether the corresponding n-gram is present in the gazetteer. Entity Linking Entity linking (EL) is the task of associating a named entity mention with its corresponding entry in a structured knowledge base (KB) (Hachey et al., 2013). For example, linking the entity mention “Mars” with its Wikipedia entry. In most entity linking systems (Hachey et al., 2013; Sil et al., 2018), the first step is shortlisting candidate KB entries, which are further processed by an entity disambiguation algorithm. Candidate retrieval methods, in general, also score each candidate with respect to the input mention. 3 Soft Gazetteer Features As briefly alluded to in the introduction, creating binary gazetteer features is challenging for lowresource languages. The soft gazetteer features we propose instead take advantage of existing limited gazetteers and English knowledge bases using lowresource EL methods. In contrast to typical binary gazetteer features, the soft gazetteer feature values are continuous, lying between 0 and 1. Given an input sentence, we calculate the soft gazetteer features for each span of n words, s = wi, . . . , wi+n−1, and then apply the features to each word in the span. We assume that we have an EL candidate retrieval method that returns candidate KB entries C = (c1, c2...) for the input span. c1 is the highest scoring candidate. As a concrete example, consider a feature that represents the score of the top-1 candidate. Figure 1 shows an example of calculating this feature on a sentence in Kinyarwanda, one of the languages used in our experiments. The feature vector f has an element corresponding to each named entity type in the KB (e.g., LOC, PER, and ORG). For this feature, the element corresponding to the entity type of the highest scoring candidate c1 is updated with the score of the candidate. That is, f type(c1) = score(c1). This feature vector is applied to each word in the span, considering the position of the specific word in the span according to the BIO scheme; we use the “B-” vector elements for the first word in the span, “I-” otherwise. 
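As a rough sketch of how the top-1 score feature for a single span might be assembled and applied word by word (the candidate representation and helper names here are illustrative, not the authors' implementation):

    import numpy as np

    ENTITY_TYPES = ["LOC", "PER", "ORG"]  # named entity types in the KB

    def top1_feature(candidates):
        """candidates: list of (kb_type, score) pairs sorted by score, descending.
        Returns a vector with one element per entity type, holding the score of
        the top-1 candidate in the slot for its type."""
        vec = np.zeros(len(ENTITY_TYPES))
        if candidates:
            top_type, top_score = candidates[0]
            vec[ENTITY_TYPES.index(top_type)] = top_score
        return vec

    def apply_to_span(span_words, candidates):
        """Apply the span-level feature to each word in the span, filling the B- block
        for the first word and the I- block for the rest (vector = [B- types | I- types])."""
        span_vec = top1_feature(candidates)
        zeros = np.zeros_like(span_vec)
        per_word = []
        for position, _ in enumerate(span_words):
            if position == 0:
                per_word.append(np.concatenate([span_vec, zeros]))  # B- slots filled
            else:
                per_word.append(np.concatenate([zeros, span_vec]))  # I- slots filled
        return per_word

    # e.g. apply_to_span(["Nuveli", "Zelande"], [("LOC", 0.95), ("LOC", 0.05)])
    # gives "Nuveli" a B-LOC value of 0.95 and "Zelande" an I-LOC value of 0.95,
    # matching the example in Figure 1.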
For a word wi, we combine features from different spans by performing an element-wise addition over vectors of all spans of length n that contain wi. The cumulative vector is then normalized by the number of spans of length n that contain wi, so that all values lie between 0 and 1. Finally, we concatenate the normalized vectors for each span length n from 1 to N (N = 3 in this paper). We experiment with different ways in which the candidate list can be used to produce feature vectors. The complete feature set is: 1. top-1 score: This feature takes the score of the highest scoring candidate c1 into account. f type(c1) = score(c1) 2. top-3 score: Like the top-1 feature, we additionally create feature vectors for the second and third highest scoring candidates. 3. top-3 count: These features are type-wise counts of the top-3 candidates. Instead of adding the score to the appropriate feature element, we add 1.0 to the current value. For a candidate type t, such as LOC, PER or ORG, f t = X c∈{c1,c2,c3} 1.0 × 1type(c)=t 8120 NER CRF Auto-encoder Soft gazetteer features Word-level BiLSTM Input sentence Soft gazetteer features Word embeddings Character representation Figure 2: NER Model Architecture. The proposed soft gazetteer features are highlighted and the autoencoder reconstructs these features, indicated by a dotted line. 1type(c)=t is an indicator function that returns 1.0 when the candidate type is the same as the feature element being updated, 0.0 otherwise. 4. top-30 count: This feature computes typewise counts for the top-30 candidates. 5. margin: The margin between the scores of consecutive candidates within the top-4. These features are not computed type-wise. For example the feature value for the margin between the top-2 candidates is, f c1,c2 = score(c1) −score(c2) We experiment with different combinations of these features by concatenating their respective vectors. The concatenated vector is passed through a fully connected neural network layer with a tanh non-linearity and then used in the NER model. 4 Named Entity Recognition Model As our base model, we use the neural CRF model of Ma and Hovy (2016). We adopt the method from Wu et al. (2018) to incorporate linguistic features, which uses an autoencoder loss to help retain information from the hand-crafted features throughout the model (shown in Figure 2). We briefly discuss the model in this section, but encourage readers to refer to the original papers for a more detailed description. NER objective Given an input sequence, we first calculate a vector representation for each word by concatenating the character representation from a CNN, the word embedding, and the soft gazetteer features. The word representations are then used as input to a bidirectional LSTM (BiLSTM). The hidden states from the BiLSTM and the soft gazetteer features are input to a Conditional Random Field Lang. Dataset size Frac. of NIL Gaz. size kin 951 0.41 912 orm 2958 0.36 313 sin 1068 0.29 2738 tir 2202 0.28 92 Table 1: NER dataset and Wikipedia gazetteer sizes. (CRF), which predicts a sequence of NER labels. The training objective, LCRF , is the negative loglikelihood of the gold label sequence. Autoencoder objective Wu et al. (2018) demonstrate that adding an autoencoder to reconstruct the hand-crafted features leads to improvement in NER performance. The autoencoder takes the hidden states of the BiLSTM as input to a fully connected layer with a sigmoid activation function and reconstructs the features. 
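A rough PyTorch-style sketch of this reconstruction head follows; the paper's implementation uses DyNet, so the module names and dimensions here are illustrative assumptions only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureAutoencoder(nn.Module):
        """Reconstructs the soft gazetteer feature vector from the BiLSTM hidden state."""
        def __init__(self, hidden_dim, feature_dim):
            super().__init__()
            self.reconstruct = nn.Linear(hidden_dim, feature_dim)

        def forward(self, bilstm_hidden, gold_features):
            # Fully connected layer with a sigmoid activation predicts the feature
            # values (which lie in [0, 1]) at each word position.
            predicted = torch.sigmoid(self.reconstruct(bilstm_hidden))
            # Reconstruction loss against the original soft gazetteer features.
            return F.binary_cross_entropy(predicted, gold_features)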
This forces the BiLSTM to retain information from the features. The crossentropy loss of the soft gazetteer feature reconstruction is the autoencoder objective, LAE. Training and inference The training objective is the joint loss: LCRF + LAE. The losses are given equal weight, as recommended in Wu et al. (2018). During inference, we use Viterbi decoding to obtain the most likely label sequence. 5 Experiments In this section, we discuss our experiments on four low-resource languages and attempt to answer the following research questions: 1) “Although gazetteer-based features have been proven useful for neural NER on English, is the same true in the low-resource setting?” 2) “Do the proposed soft-gazetteer features outperform the baseline?” 3) “What types of entity mentions benefit from soft gazetteers?” and 4) “Does the knowledge base coverage affect performance?”. 5.1 Experimental setup NER Dataset We experiment on four lowresource languages: Kinyarwanda (kin), Oromo (orm), Sinhala (sin), and Tigrinya (tir). We use the LORELEI dataset (Strassel and Tracey, 2016), which has text from various domains, including news and social media, annotated for the NER task. Table 1 shows the number of sentences annotated. The data is annotated with four named entity 8121 types: locations (LOC), persons (PER), organizations (ORG), and geopolitical entities (GPE). Following the CoNLL-2003 annotation standard, we merge the LOC and GPE types (Tjong Kim Sang and De Meulder, 2003). Note that these datasets are very low-resource, merely 4% to 13% the size of the CoNLL-2003 English dataset. These sentences are also annotated with entity links to a knowledge base of 11 million entries, which we use only to aid our analysis. Of particular interest are “NIL” entity mentions that do not have a corresponding entry in the knowledge base (Blissett and Ji, 2019). The fraction of mentions that are NIL is shown in Table 1. Gazetteer Data We also compare our method with binary gazetteer features, using entity lists from Wikipedia, the sizes of which are in Table 1. Implementation Our model is implemented using the DyNet toolkit (Neubig et al., 2017), and we use the same hyperparameters as Ma and Hovy (2016). We use randomly initialized word embeddings since we do not have pretrained vectors for low-resource languages.2 Evaluation We perform 10-fold cross-validation for all experiments because of the small size of our datasets. Our primary evaluation metric is spanlevel named entity F1 score. 5.2 Methods Baselines We compare with two baselines: • NOFEAT: The CNN-LSTM-CRF model (section 4) without any features. • BINARYGAZ: We use Wikipedia entity lists (Table 1) to create binary gazetteer features. Soft gazetteer methods We experiment with different candidate retrieval methods designed for lowresource languages. These are trained only with small bilingual lexicons from Wikipedia, of similar size as the gazetteers (Table 1). • WIKIMEN: The WikiMention method is used in several state-of-the-art EL systems (Sil et al., 2018; Upadhyay et al., 2018), where 2A note on efficiency: our method involves computing entity linking candidates for each n-gram span in the dataset. The most computationally intensive candidate retrieval method (PBEL, discussed in subsection 5.2) takes ≈1.5 hours to process all spans on a single 1080Ti GPU. Note that this is a preprocessing step and once completed, it does not add any extra computational cost to the NER training process. 
bilingual Wikipedia links are used to retrieve the appropriate English KB candidates. • Pivot-based-entity-linking (Zhou et al., 2020): This method encodes entity mentions on the character level using n-gram neural embeddings (Wieting et al., 2016) and computes their similarity with KB entries. We experiment with two variants and follow Zhou et al. (2020) for hyperparameter selection: 1) PBELSUPERVISED: trained on the small number of bilingual Wikipedia links available in the target low-resource language. 2) PBELZERO: trained on some high-resource language (“the pivot”) and transferred to the target language in a zero-shot manner. The transfer languages we use are Swahili for Kinyarwanda, Indonesian for Oromo, Hindi for Sinhala, and Amharic for Tigrinya. Oracles As an upper-bound on the accuracy, we compare to two artificially strong systems: • ORACLEEL: For soft gazetteers, we assume perfect candidate retrieval that always returns the correct KB entry as the top candidate if the mention is non-NIL. • ORACLEGAZ: We artificially inflate BINARYGAZ by augmenting the gazetteer with all the named entities in our dataset. 5.3 Results and Analysis Results are shown in Table 2. First, comparing BINARYGAZ to NOFEAT shows that traditional gazetteer features help somewhat, but gains are minimal on languages with fewer available resources.3 Further, we can see that the proposed soft gazetteer method is effective, some variant thereof achieving the best accuracy on all languages. For the soft gazetteer method, Table 2 shows the performance with the best performing features (which were determined on a validation set): top-1 features for Kinyarwanda, Sinhala and Tigrinya, 3We note that binary gazetteer features usually refer to simply using the gazetteer as a lookup (Ratinov and Roth, 2009). However, we also attempt to use WIKIMEN and PBEL for retrieval, with scores converted to binary values at a threshold of 0.5. BINARYGAZ in Table 2 is the best F1 score among these methods–this turns out to be the string lookup for all four languages. This is expected because, for low-resource languages, the other candidate retrieval methods are less precise than their high-resource counterparts. Binary-valued features are not fine-grained enough to be robust to this. 8122 Model kin orm sin tir NOFEAT 67.16 71.07 49.68 75.44 BINARYGAZ 69.05 71.24 54.08 75.84 WIKIMEN 68.36 71.58 51.34 75.69 PBELSUPER. 68.94 71.61 60.95 76.49 PBELZERO 69.92 71.75 51.69 76.99 ORACLEEL 82.89 87.69 81.98 89.85 ORACLEGAZ 93.38 94.71 94.00 94.43 Table 2: 10-fold cross-validation NER F1 score. The best performing feature combination is shown here. Bold indicates the best non-oracle system. and top-30 features for Oromo. Although Sinhala (sin) has a relatively large gazetteer (Table 1), we observe that directly using the gazetteer as recommended in previous work with BINARYGAZ, does not demonstrate strong performance. On the other hand, with the soft gazetteer method and our carefully designed features, PBELSUPERVISED works well for Sinhala (sin) and improves the NER performance. PBELZERO is the best method for the other three languages, illustrating how our proposed features can be used to benefit NER by leveraging information from languages closely related to the target. The improvement for Oromo (orm) is minor, likely because of the limited crosslingual links available for training PBELSUPERVISED and the lack of suitable transfer languages for PBELZERO (Rijhwani et al., 2019). 
Finally, we find that both ORACLEGAZ and ORACLEEL improve by a large margin over all nonoracle methods, indicating that there is substantial headroom to improve low-resource NER through either the development of gazetteer resources or the creation of more sophisticated EL methods. How do soft-gazetteers help? We look at two types of named entity mentions in our dataset that we expect to benefit from the soft gazetteer features: 1) non-NIL mentions with entity links in the KB that can use EL candidate information, and 2) mentions unseen in the training data that have additional information from the features as compared to the baseline. Table 3 shows that the soft gazetteer features increase the recall for both types of mentions by several points. Knowledge base coverage Table 3 indicates that the soft gazetteer features benefit those entity menNon-NIL Recall Unseen Recall Lang. Baseline SoftGaz Baseline SoftGaz kin 66.5 73.3 35.4 43.9 orm 72.0 72.8 49.5 51.9 sin 57.3 69.8 20.3 35.3 tir 79.2 80.9 38.9 41.5 Avg. 68.7 74.2 36.0 43.1 Table 3: Recall for non-NIL mentions and mentions unseen in the training data. SoftGaz represents the best soft gazetteer model as seen in Table 2. kin orm sin tir Orig. KB 69.92 71.71 60.95 76.58 NIL augment 76.28 76.50 70.87 83.07 Table 4: NER F1 score of the best performing soft gazetteer model with the original KB and with augmenting NIL-clustered entity mentions. tions that are present in the KB. However, our dataset has a significant number of NIL-clustered mentions (Table 1). The ability of our features to add information to NIL mentions is diminished because they do not have a correct candidate in the KB. To measure the effect of KB coverage, we augment the soft gazetteer features with ORACLEGAZ features, applied only to the NIL mentions. Large F1 increases in Table 4 indicate that higher KB coverage will likely make the soft gazetteer features more useful, and stresses the importance of developing KBs that cover all entities in the document. 6 Conclusion We present a method to create features for lowresource NER and show its effectiveness on four low-resource languages. Possible future directions include using more sophisticated feature design and combinations of candidate retrieval methods. Acknowledgements Shruti Rijhwani is supported by a Bloomberg Data Science Ph.D. Fellowship. Shuyan Zhou is supported by the DARPA Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114. We also thank Samridhi Choudhary for help with the model implementation and Deepak Gopinath for feedback on the paper. 8123 References Kevin Blissett and Heng Ji. 2019. Cross-lingual NIL entity clustering for low-resource languages. In Proceedings of the Second Workshop on Computational Models of Reference, Anaphora and Coreference, pages 20–25, Minneapolis, USA. Association for Computational Linguistics. Ryan Cotterell and Kevin Duh. 2017. Lowresource named entity recognition with crosslingual, character-level neural conditional random fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 91–96, Taipei, Taiwan. Asian Federation of Natural Language Processing. Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal, and James R Curran. 2013. Evaluating entity linking with wikipedia. Artificial intelligence, 194:130–150. Lifu Huang, Heng Ji, and Jonathan May. 2019. 
Crosslingual multi-level adversarial transfer to enhance low-resource name tagging. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3823–3833, Minneapolis, Minnesota. Association for Computational Linguistics. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155, Boulder, Colorado. Association for Computational Linguistics. Shruti Rijhwani, Jiateng Xie, Graham Neubig, and Jaime Carbonell. 2019. Zero-shot neural transfer for cross-lingual entity linking. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6924–6931. Avirup Sil, Gourab Kundu, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual entity linking. In Thirty-Second AAAI Conference on Artificial Intelligence. Stephanie Strassel and Jennifer Tracey. 2016. LORELEI language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3273–3280, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Shyam Upadhyay, Nitish Gupta, and Dan Roth. 2018. Joint multilingual supervision for cross-lingual entity linking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2486–2495, Brussels, Belgium. Association for Computational Linguistics. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1504–1515. Minghao Wu, Fei Liu, and Trevor Cohn. 2018. Evaluating the utility of hand-crafted features in sequence labelling. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2850–2856, Brussels, Belgium. Association for Computational Linguistics. Shuyan Zhou, Shruti Rijhwani, John Wieting, Jaime Carbonell, and Graham Neubig. 2020. Improving candidate generation for low-resource cross-lingual entity linking. Transactions of the Association of Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8124–8137 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8124 A Prioritization Model for Suicidality Risk Assessment Han-Chin Shing Computer Science University of Maryland College Park, MD [email protected] Philip Resnik Linguistic/UMIACS University of Maryland College Park, MD [email protected] Douglas W. Oard iSchool/UMIACS University of Maryland College Park, MD [email protected] Abstract We reframe suicide risk assessment from social media as a ranking problem whose goal is maximizing detection of severely at-risk individuals given the time available. Building on measures developed for resource-bounded document retrieval, we introduce a well founded evaluation paradigm, and demonstrate using an expert-annotated test collection that meaningful improvements over plausible cascade model baselines can be achieved using an approach that jointly ranks individuals and their social media posts. 1 Introduction Mental illness is one of the most significant problems in healthcare: in economic terms alone, by 2030 mental illness worldwide is projected to cost more than cardiovascular disease, and more than cancer, chronic respiratory diseases, and diabetes combined (Bloom et al., 2012). Suicide takes a terrible toll: in 2016 it became the second leading cause of death in the U.S. among those aged 10-34, fourth among those aged 35-54 (Hedegaard et al., 2018). Prevalence statistics suggest that roughly 141 of the 3,283 people who attended ACL 2019 have since had serious thoughts of suicide, 42 have made a plan, and 19 have actually made attempts.1 The good news is that NLP and machine learning are showing strong promise for impact in mental health, just as they are having large impacts everywhere else. Traditional methods for predicting suicidal thoughts and behaviors have failed to make progress for fifty years (Franklin et al., 2017), but with the advent of machine learning approaches (Linthicum et al., 2019), including text analysis methods for psychology (Chung and Pennebaker, 2007) and the rise of research on mental 1Approximately: ACL is international, but these figures use prevalence statistics for U.S. adults (SAMHSA, 2019). health using social media (Choudhury, 2013), algorithmic classification has reached the point where it can now dramatically outstrip performance of prior, more traditional prediction methods (Linthicum et al., 2019; Coppersmith et al., 2018). Further progress is on the way as the community shows increasing awareness and enthusiasm in this problem space (e.g., Milne et al., 2016; Losada et al., 2020; Zirikly et al., 2019). The bad news is that moving these methods from the lab into practice will create a major new challenge: identifying larger numbers of people who may require clinical assessment and intervention will increase stress on a severely resource-limited mental health ecosystem that cannot easily scale up.2 This motivates a reformulation of the technological problem from classification to prioritization of individuals who might be at risk, for clinicians or other suitably trained staff as downstream users. Perhaps the most basic way to do prioritization is with a single priority queue that the user scans from top to bottom. This “ranked retrieval” paradigm is common for Information Retrieval (IR) tasks such as document retrieval. 
The same approach has been applied to ranking people based on their expertise (Balog et al., 2012), or more generally to ranking entities based on their characteristics (Balog, 2018). Rather than evaluating categorical accuracy, ranked retrieval systems are typically evaluated by some measure of search quality that rewards placing desired items closer to the top (Voorhees, 2001). Most such measures use only item position, but we find it important to also model the time it takes to recognize desired items, since in our setting the time of qualified users is the most limited resource. In this paper, we do so by building on Time2120M Americans live in areas with mental healthcare provider shortages (Bureau of Health Workforce, 2020). That number reflects an increase of about 7 million people between September 30, 2019 and March 31, 2020. 8125 individual document overview ..I do n’t want ** be alive a**e ** **.. ..I <**> ** ** s**g ** ** ** <**> f**r.. ..If there ’s s**e h**e ** p**e h**p ** **.. ... ** h**s b**n ** a**l <**> <**> weeks ... ..I ’m suffocating I used ** think depression w**s **.. ..I ’**e fallen into serious depression a**d ** ** n**t.. ... I ’ve been depressed for ** l**g ** I.. ..w**h ** c**d p**t t**s w**e ** l**d o**s c**d.. ..I really want to do it . ** w**d **.. individual ranking doc ranking Figure 1: Illustration of an assessment framework in which individuals are ranked by predicted suicide risk based on social media posts, posts are ranked by expected usefulness for downstream review by a clinician, and word-attention highlighting helps foreground important information for risk assessment. Real Reddit posts, obfuscated and altered for privacy. Biased Gain (TBG, Smucker and Clarke, 2012), an IR evaluation measure that models the expected number of relevant items a user can find in a ranked list given a time budget. We observe that in many risk assessment settings (e.g., Yates et al. (2017); Coppersmith et al. (2018); Zirikly et al. (2019)), the available information comprises a (possibly large and/or longitudinal) set of documents, e.g. social media posts, associated with each individual, of which possibly only a small number contain a relevant signal.3 This gives rise to a formulation of our scenario as a nested, or hierarchical, ranking problem, in which individuals are ordered by priority, but each individual’s documents must also be ranked (Figure 1). Accordingly, we introduce hierarchical Time-Biased Gain (hTBG), a variant of TBG in which individuals are the top level ranked items, and expected reading time is modeled for the ranked list of documents that provides evidence for each individual’s assessment. In addition, we introduce a prioritization model that uses a three-level hierarchical attention network to jointly optimize the nested ranking task; this model also addresses the fact that in our scenario, as in many other healthcare-related scenarios, relevance obtains at the level of individuals rather than individual documents (cf. Shing et al., 2019). Using a test collection of Reddit-posting individuals who have been assessed for suicide risk by clinicians based on their posts (Shing et al., 2018), we use hTBG to model prioritization of individuals and demonstrate that our joint model substantially outperforms cascade model baselines in which the nested rankings are produced independently. 3Our dataset, for example, has one severe risk individual with 1,326 postings, of which only two are ”signal” posts identified by the experts. 
See Table 2 for detailed statistics. 2 Related Work NLP for Risk Assessment. Calvo et al. (2017) survey NLP for mental health applications using non-clinical texts such as social media. Several recent studies and shared tasks focus on risk assessment of individuals in social media using a multi-level scale (Milne et al., 2016; Yates et al., 2017; Losada et al., 2020). Shing et al. (2018) introduce the dataset we use, and Zirikly et al. (2019) describe a shared task in which 11 teams tackled the individual-level classification that feeds into our prioritization model (their Task B). Our work contributes by modeling the downstream users’ prioritization task as taking a key step closer to the real-world problem. Hierarchical Attention Attention, especially in the context of NLP, has two main advantages: it allows the network to attend to likely-relevant parts of the input (either words or sentences), often leading to improved performance, and it provides insight into which parts of the input are being used to make the prediction. These characteristics have made attention mechanisms a popular choice for deep learning that requires human investigation, such as automatic clinical coding (Baumel et al., 2018; Mullenbach et al., 2018; Shing et al., 2019). Although concerns about using attention for interpretation exist (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Wallace, 2019), Shing et al. (2019) show hierarchical document attention can align well with human-provided ground truth. Our prediction model, 3HAN, is a variant of Hierarchical Attention Networks (HAN, Yang et al., 2016). Yang et al. use a two-level attention mechanism that learns to pay attention to specific words in a sentence to form a sentence representation, and at the next higher level to weight specific sentences in 8126 a document in forming a document representation. Adapting this approach to suicide assessment of at-risk individuals, our model moves a level up the representational hierarchy, learning also to weight documents to form representations of individuals. This allows us to jointly model ranking individuals and ranking their documents as potentially relevant evidence, without document-level annotations. Evaluating rankings. There is an extensive IR literature on quality measures for ranked lists (J¨arvelin and Kek¨al¨ainen, 2002; Chapelle et al., 2009; Smucker and Clarke, 2012; Sakai, 2019), which generally reward placing highly relevant items near the top of the list, and are often relatively insensitive to mistakes made near the bottom. In the setting of suicidality risk assessment, we care about how much gain (number of at-risk individuals found) can be achieved for a given time budget. Time-biased gain (TBG, Smucker and Clarke, 2012) measures this by assuming a determined user working down a ranked list, with the discount being a function of the time it takes to reach that position. However, neither TBG nor other ranking measures, to the best of our knowledge, can measure the hierarchical ranking found in the scenario that motivates our work: ranking items (i.e. individuals) when each item itself contains a ranked list of potential evidence (their posts). In this paper, we design a new metric, hierarchical time-biased gain (hTBG), to measure the hierarchical ranking by incorporating the cascading user model found in Expected Reciprocal Rank (ERR, Chapelle et al., 2009) into TBG. 
3 A Measure for Risk Prioritization Section 1 argued for formulating risk assessment as a prioritization process where the assessor has a limited time budget. This leads to four desired properties in an evaluation measure:4 • Risk-based: Individuals with high risk should be ranked above others. • Head-weighted: Ranking quality near the top of the list, where assessors are more likely to assess, should matter more than near the bottom. • Speed-biased: For equally at-risk individuals, the measure should reward ranking the one who can be assessed more quickly closer to 4Throughout, assessor or user signify a clinician or other human assessor, and individual is someone being assessed. Figure 2: User model for Time-Biased Gain (TBG) the top, so that more people at risk can be identified within a given time budget. • Interpretable: The evaluation score assigned to a system should be meaningful to assessors. Among many rank-based measures that satisfy the risk-based and head-weighted criteria, TBG directly accounts for assessment time in a way that also satisfies the speed-biased criterion (see Theorem 3.1). Furthermore, the numeric value of TBG is a lower bound on the expected number of relevant items — in our case, high-risk individuals — found in a given time budget (Smucker and Clarke, 2012), making it interpretable. After introducing TBG, in Section 3.2 we develop hierarchical TimeBiased Gain (hTBG), an extension of TBG, to account for specific properties of risk assessment using social media posts.5 3.1 Time-Biased Gain TBG was originally developed in IR for the case of a user seeking to find a relevant document, but here we frame it in the context of risk assessment (Figure 2). TBG assumes a determined user (say a clinician) examining a ranked list of individuals in the order presented by the system. For each individual, the clinician first examines a summary and then decides whether to check relevance via more detailed examination, or to move on. Checking requires more time to make an assessment of whether the individual is indeed at-risk. TBG is a weighted sum of gain, gk, and discount, D(·), a function of time: TBG = ∞ X k=1 gkD (T (k)). (1) 5TBG and hTBG code: https://github.com/sidenver/hTBG 8127 Parameter Description Value Pcheck(reli) Prob. to check, given the relevance of summary 0.64, if reli = 1 0.39, if reli = 0 Pflag(reli) Prob. to flag, given the relevance of individual 0.77, if reli = 1 0.27, if reli = 0 Ts Seconds to evaluate a summary 4.4 TαW + Tβ Seconds to judge W words 0.018W + 7.8 Table 1: Parameters used for TBG and hierarchical TBG. T(k) is the expected amount of time it takes a user to reach position k: T(k) = k−1 X i=1 t (i) (2) t(i) = Ts + Pcheck (reli) Ei (3) where t(i) is expected time spent at position i. Breaking down t(i), Ts is the time it takes to read a summary and decide whether to check the individual; if yes (probability Pcheck(reli)), Ei is expected time for detailed assessment, calculated as a function of the individual’s total word count Wi: Ei = TαWi + Tβ (4) where Tα and Tβ scales words to time. The discount function D(t) decays exponentially with halflife h: D(t) = 2−t h (5) where h is the time at which half of the clinicians will stop, on average. The expected stop time (or mean-life) is h ln(2). Finally, the gain, gk is: gk = Pcheck(relk)Pflag(relk)1[relk=1] (6) where Pcheck(relk) is the probability of checking the individual after reading the summary at position k, and Pflag(relk) is the probability of then flagging that individual as high risk. 
Gain thus accrues only if a clinician actually finds a high-risk individual. The decay function in Equation 5 monotonically decreases with increasing time (and thus rank), so TBG satisfies the head-weighted criterion. Table 1 shows the parameters used in Smucker and Clarke (2012), which were estimated from user studies using data from TREC 2005 Robust track. Particularly of interest in a time-limited assessment, we can prove that TBG is speed-biased: Theorem 3.1 (TGB satisfies the speed-biased criterion). Swapping an at-risk individual of longer Figure 3: hTBG’s model for calculating expected assessment time for an individual, replacing shaded box in Figure 2. assessment time ranked at k with an equally atrisk individual of shorter assessment time ranked at k + r, where r > 0, always increases TBG. Proof. See Appendix B.1 3.2 Hierarchical Time-Biased Gain TBG assumes that detailed assessment involves looking at all available evidence (Equation 4). However, in our setting, an individual may have a large or even overwhelming number of social media posts. One severe risk individual in the SuicideWatch dataset, for example, has 1,326 posts in Reddit, the vast majority of which would provide the assessor with no useful information. Therefore we need to prioritize the documents to be read, and a way of estimating when the user will have read enough to make a decision. In general, clinicians engage in a sensemaking process as they examine evidence, and modeling the full complexity of that process would be difficult. We therefore make two simplifying assumptions: (1) that there is a high-signal document that suffices, once read, to support a positive relevance judgment, and (2) that the clinician will not read more than some maximum number of documents. These assumptions align well with those of Expected Reciprocal Rank (ERR), whose cascading user model assumes that as the user works down a ranked list (in our case, the ranked documents posted by a single individual), they are more likely to stop after viewing a highly relevant document than after viewing an irrelevant one, as their information need is more likely to have been satisfied (Chapelle et al., 2009). This results in a cascade model of user behavior: ERR = P∞ k=1 1 kP (stop at k), in which P (stop at k) = Rk Qk−1 i=1 (1 −Ri), where Rk = f(relk) is the probability of stopping at position k as a function of relevance. 8128 This suggests replacing Equation 4 with the following expected time estimate for detailed assessment of an individual: Ei = Tα L X l=1 Wi,l l−1 Y m=1 (1 −Ri,m) ! + Tβ (7) where Ri,l is the probability of stopping at the lth document for individual i, and Wi,l > 0 is the cost (in our case, word count) of reading the l-th document for individual i. Note that for the special case of ∀i, l ∈N, Ri,l = 0, hTBG reduces to TBG. See Figure 3 for an illustration of Ei of hTBG. For derivation of Equation 7 from ERR’s cascading user model, see Appendix B.3. 3.3 Optimal Values for TBG and hTBG Calculation of the optimal value for a measure is often important for normalization, though not always easy; in some cases it can be NP-hard (Agrawal et al., 2009, ERR-IA). Another popular approach is to normalize by calculating the metric with an ideal collection. For example, Smucker and Clarke (2012) calculate the normalization factor of TBG by assuming a collection with an infinite number of relevant documents, each of which lack any content. 
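To make this cascading user model concrete before it is folded into the time estimate, the following small sketch computes the expected number of words read over one individual's ranked posts; the variable names are illustrative, not taken from the released hTBG code.

    def expected_words_read(word_counts, stop_probs):
        """word_counts[l]: length of the l-th ranked post; stop_probs[l]: probability
        that the assessor stops (is satisfied) after reading post l. Under the cascade
        model, post l is read only if no earlier post triggered a stop."""
        expected = 0.0
        prob_still_reading = 1.0
        for w, r in zip(word_counts, stop_probs):
            expected += prob_still_reading * w
            prob_still_reading *= (1.0 - r)
        return expected

    # e.g. expected_words_read([120, 80, 300], [0.9, 0.5, 0.0])
    # the 300-word post contributes only (1 - 0.9) * (1 - 0.5) * 300 = 15 expected words.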
In our case, however, we are actually interested in an optimal value achievable for a given test collection: the optimal values of TBG and hTBG are properties of the bottleneck that occurs due to the user’s limited time-budget. We find that: Theorem 3.2 (Optimal TBG). The optimal value of TBG under binary relevance is obtained if and only if (1) all at-risk individuals are ranked above not-at-risk individuals, and (2) within the at-risk individuals, they are sorted based on time spent in ascending order. Proof. See Appendix B.1 Theorem 3.2 makes sense, as any time spent on assessing a not-at-risk individual is time not spent on assessing other potentially at-risk individuals. Preference in assessing individuals with shorter assessment time also increased the chance of assessing more individuals in the given time budget. Minimum Individual Assessment Time. To calculate optimal hTBG, we need to minimize individual assessment time. A natural question to ask, then, is whether a result similar to Theorem 3.2 holds for the individual assessment time of hTBG in Equation 7. By swapping paired documents, we can use proof by contradiction to show that: Theorem 3.3. Minimum individual assessment time is obtained if the documents are sorted in descending order by Ri,l Wi,l . Proof. See Appendix B.2 Theorem 3.3 shows a surprisingly intuitive tradeoff between how relevant a document might be, and how much time (proportional to word counts) the expert needs to take to read it: highly relevant documents with short reading time are preferred. Observe that Theorem 3.1 (speed-biased criterion) and Theorem 3.2 both apply to hTBG, as the two theorems only concern the ranking of individuals, not documents, and hTBG is an extension of TBG to measure the document ranking. Using Theorem 3.3 and Theorem 3.2, calculation of optimal TBG and hTBG values is simply a matter of sorting. For TBG, time complexity is O(n log(n)), where n ≤K is the number of at-risk individuals in the test collection. For hTBG, worst-case time complexity is O(n log(n) + nm log(m)), where m ≤L is the maximum number of relevant documents per individual. 4 Classification Model We began by motivating risk assessment via social media as a person-centered, time-limited prioritization problem, in which the technological goal is to support downstream clinicians or other assessors in identifying as many people at risk as possible. This led to the conclusion that systems should not only rank individuals but, for each individual, rank their posts, and we introduced an evaluation framework that involves an abstraction of the user’s process of identifying people at risk given a nested ranking. Next, we need a system that can produce such nested rankings of individuals and their posts. Ideally such a system should be able to train on only individual-level, not document-level, labels, since suicide risk is a property of individuals, not documents, and document labels are more difficult to obtain. In addition, such a system should ideally produce additional information to help the downstream user — if not justification of its output, then at least highlighting potentially useful information. To address this need, we introduce 3HAN, a hierarchical attention network (Yang et al., 2016) that extends up to the level of individuals, who are 8129 represented as sequences of documents. This architecture is similar to the network we proposed in Shing et al. 
(2019) for coding clinical encounters; it obtained good predictive performance and we also showed that, despite concerns about the interpretation of network attention (Jain and Wallace, 2019), hierarchical document-level attention succeeded in identifying documents containing relevant evidence. The architecture here differs in that it builds representations hierarchically from the word level, as opposed to pre-extracted conceptual features, and takes document ordering into account using a bi-directional GRU (Bahdanau et al., 2015). Specifically, our model has five layers (Figure 4). The first is a word-embedding layer that turns a one-hot word vector into a dense vector. The second to fourth layers are three Seq2Vec layers with attention that learn to aggregate, respectively, a sequence of word vectors into a sentence vector, a sequence of sentence vectors into a document vector, and a sequence of document vectors into an individual vector (hence 3HAN). The final layer is a fully connected layer followed by softmax. We detail our Seq2Vec layer in the context of aggregating a sequence of document vectors to an individual’s vector, though the three Seq2Vec layers are the same. See Figure 4b for an illustration. Document vectors {di,j}m j=1 are first passed through a bi-directional GRU layer. The outputs, after passing through a fully-connected layer and a non-linear layer, are then compared to a learnable attention vector, vattention. Specifically, gi,j = Bi-GRU(di,j) (8) ri,j = tanh (Wgi,j + b) (9) ai,j = er⊤ i,jvattention Pm j′=1 er⊤ i,j′vattention (10) ui = Xm j=1 ai,jgi,j (11) where ai,j is the normalized document attention score for the j-th vector, and ui is the final aggregated individual vector. As shown in Equation 10, the transformed vector ri,j is compared with the learnable attention vector vattention using a dot product, and further normalized for the weighted averaging step in Equation 11. Once we have the individual vector ui, we can predict the risk label of the individual by passing it through a fully-connected layer and a softmax. Specifically, P( ˆyi) = softmax (WFCui + bFC) (12) Finally, we compare with the ground truth label yi of individual i using negative log-likelihood to calculate a loss: lossi = −log (P ( ˆyi = yi)) . (13) 5 Experimentation We first introduce the test collection and then show how we can evaluate 3HAN and the cascade model baselines on the test collection using hTBG. To demonstrate the effectiveness of the 3HAN model, which jointly learns to rank individuals and, within each individual, their posts as evidence, we compare it with different combinations of individual-level rankers and document-level rankers. Training details for all the models can be found in Appendix C. 5.1 Test Collection In our experimentation, we use the University of Maryland Reddit Suicidality Dataset, v.2 (Shing et al., 2018; Zirikly et al., 2019).6 This Englishlanguage dataset, derived from the 2015 Full Reddit Submission Corpus (2006-2015), includes 11,129 potentially at-risk individuals who posted on r/SuicideWatch (a subreddit dense in self-reports about suicidality, henceforth SW), as well as 11,129 control individuals who never posted on any mental-health related subreddit. Entire posting histories (not just from SW, but all Reddit forums) were collected.7 An individual’s number of posts can range from 10 to 1,326. See Table 2 for a detailed breakdown of number of posts per individual across datasets and risk categories. 
The full dataset has three subsets with disjoint individuals. The first, which we term the WEAK SUPERVISION dataset, includes 10,263 individuals who posted in SW and 10,263 control individuals who did not; they are respectively considered to be indirectly positively and negatively labeled, very noisily since posting on SW does not necessary imply suicidal ideation.8 The second set is the CROWDSOURCE dataset, including 621 individuals annotated by crowdsourcers with four risk levels: No Risk, Low Risk, Moderate Risk, and Severe Risk. 6See Appendix A for IRB and ethical considerations. 7See Gaffney and Matias (2018) for caveats. 8E.g. seeking help for a friend, or offering support. 8130 (a) 3HAN (b) Seq2Vec with Attention Figure 4: An illustration of the three-level Hierarchical Attention Network (3HAN) model # Posts 10-20 20-40 40-60 60-100 100-200 200-500 500-1,000 1,000-1,500 CrowdSource No Risk 31 42 25 27 18 12 4 0 Low Risk 19 22 5 11 2 4 0 0 Moderate Risk 46 45 19 14 9 7 1 0 Severe Risk 80 79 37 19 28 12 3 0 Expert No Risk 3 7 2 5 7 8 3 0 Low Risk 6 11 5 11 8 7 1 1 Moderate Risk 23 19 12 26 13 14 5 3 Severe Risk 7 2 5 9 10 4 4 1 Table 2: Number of individuals with the number (range) of posts, by dataset and risk category. The last is the EXPERT dataset, including 242 individuals with the same four-level annotation, by four suicide risk assessment experts.9 Along with the level of risk for each individual, the expert annotators also designated the single post that most strongly supported each of their low, moderate, or severe risk labels. 5.2 Evaluating with hTBG As TBG and hTBG are measures designed for binary relevance judgements, we map the Severe Risk category to at-risk, and everything else to not-atrisk.10 For word counts, we directly use the token counts in documents. We use the parameters that Smucker and Clarke (2012) estimated for TBG in user studies (Table 1). As discussed in Section 3.2, we assume there exists a maximum number of documents the clinician can read for each individual. 9Shing et al. (2018) report reliable expert annotation, Krippendorff’s α = .81. The original EXPERT dataset had 245 individuals; we exclude three owing to errors in processing. 10Since the label definitions distinguish severe from moderate by focusing on the risk of an attempt in the near future, this binary distinction is aligned with recent work in suicidology that focuses specifically on characterizing “the acute mental state that is associated with near-term suicidal behavior” (Schuck et al., 2019). We set that number to 50 for the calculation of hTBG; if no relevant document exists in the top 50 documents, we consider that individual a miss and set the gain to zero.11 To rank individuals using our classification models, we use a standard conversion method to convert four-class probability to a single score: R X reli P (ˆyi = reli) scorereli (14) where R is {No, Low, Moderate, Severe}, and scorereli is the real number that maps to the risklevel of the individual i. We use {No = 0, Low = 1, Moderate = 2, Severe = 4} as our mapping — No Risk can plausibly be treated the same as a post with no annotation (e.g. a control individual), and exponential scaling also seems plausible although just one of many possibilities, which we leave for future work. The hTBG metric also requires a stopping probability for each document, Ri,l. Assuming that the more severe the risk associated with a document is, the more likely the assessor is to stop and flag the 11All parameters were frozen prior to testing. 
We plan to estimate hyperparameters in our own user studies in the future. 8131 individual, on the EXPERT dataset where we have document-level annotations, we can estimate the expected stopping probability as: Ri,l = 1 − C Y c=1  1 −scorereli,l,c scoremax  (15) where C annotators annotated the post as most strongly supporting their judgment. Scorereli,l,c is a mapping from the document-level risk by annotator c to a real number, with the same mapping used in Equation 14. Scoremax = 4 is the maximum in that mapping. To reflect different time budgets, we report results with the half-life parameter ranging from 1 to 6 hours, which corresponds to expected reading time budgets from 1.4 to 8.7 hours. 5.3 Models for Ranking Individuals 3HAN. 3HAN is first pretrained on the binary WEAK SUPERVISION dataset. The model is then further tuned on the four-class CROWDSOURCE dataset by transferring the weights (except the last fully-connected prediction layer) over. We initialized and fixed the word embedding using the 200-dimensional Glove embedding trained on Twitter (Pennington et al., 2014).12 3HAN Av. 3HAN Average is trained the same way as 3HAN, except that the last Seq2Vec layer (the layer that aggregates a sequence of document vectors to an individual vector) is averaged instead of using attention, which can be achieved by fixing ai,j = 1 m in Equation 10. This is similar to the HN-AVE baseline in Yang et al. (2016). Note that 3HAN AV cannot rank documents, as it lacks document attention. LR. A logistic regression model is trained on the CROWDSOURCE dataset. The feature vector for an individual is computed by converting documents into document-level feature vectors, and then averaging them to obtain an individual-level feature vector. For each document, we concatenate four feature sets: (1) bag-of-words for vocabulary count larger than three, (2) Glove embedding summing over words, (3) 194 features representing emotional topics from Empath (Fast et al., 2016), 12We experimented with trainable Glove embedding as well as BERT, but saw little to no improvement in performance using cross-validation. We plan to explore fine-tuning BERT on Reddit in future work. and (4) seven scores measuring document readability.13 This model is included as a conventional baseline in suicide risk assessment, similar to the baseline found in Shing et al. (2018). 5.4 Models for Ranking Documents 3HAN Att. Document attention learned jointly with 3HAN. As a side effect to training our 3HAN model, we learn document attention scores, see Equation 10. This score can then be used to rank documents in terms of their relevance to the judgement. This availability of document ranking, despite a lack of document annotations, is a significant advantage of hierarchical attention networks, since fine-grained document annotations are difficult to obtain on a large scale. Sentence- and wordlevel attention are a further advantage, in terms of potentially facilitating user review (see Figure 1), although exploring that awaits future work. Forward and Backward. Ranking an individual’s documents in either chronological order or reverse chronological order is an obvious default in the absence of a trained model for document ranking, important baselines for testing whether a document ranking model actually adds value. 
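Before turning to results, the following minimal sketch illustrates how these pieces fit together: the expected-score conversion of Equation 14 for ranking individuals, and the two document-ranking strategies (learned attention versus the Backward baseline). The data structures and names are illustrative assumptions, not the authors' code.

    RISK_SCORES = {"No": 0, "Low": 1, "Moderate": 2, "Severe": 4}

    def individual_score(class_probs):
        """class_probs: dict mapping each risk level to its predicted probability.
        Implements the expected-score conversion of Equation 14."""
        return sum(class_probs[level] * RISK_SCORES[level] for level in RISK_SCORES)

    def rank_individuals(predictions):
        """predictions: dict mapping an individual id to its four-class probabilities.
        Returns individual ids sorted from highest to lowest expected risk score."""
        return sorted(predictions, key=lambda ind: individual_score(predictions[ind]),
                      reverse=True)

    def rank_documents(posts, attention_scores=None):
        """Rank one individual's posts: by learned document attention when available
        (as in 3HAN Att.), otherwise in reverse chronological order (Backward baseline).
        posts: list of (timestamp, text); attention_scores: optional list aligned with posts."""
        if attention_scores is not None:
            order = sorted(range(len(posts)), key=lambda l: attention_scores[l], reverse=True)
        else:
            order = sorted(range(len(posts)), key=lambda l: posts[l][0], reverse=True)
        return [posts[l] for l in order]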
6 Results and Discussion Our model, 3HAN+3HAN ATT, the only joint model, achieves the best performance on hTBG compared to all other combinations of individual rankers and document rankers across three different time budgets (Table 3). The result is significant except when compared to 3HAN AV+3HAN ATT.14 However, using 3HAN ATT to rank documents implies that you have already trained 3HAN. Therefore, a more reasonable combination to compare with is 3HAN AV+BACKWARD, which we outperform by a significant margin. Overall, the effect of document ranking is larger than the effect of individual ranking. Notably, the FORWARD document ranker always yields the worst performance. BACKWARD, on the other hand, is surprisingly competitive. We hypothesize that this may be an indication that suicidal ideation worsens over time, or perhaps of the unfortunate 13Flesch-Kincaid Grade Level, Flesch Reading Ease, Dale Chall Readability, Automated Readability Index (ARI), Coleman Liau Index, Gunning Fog Index, and Linsear Write. 14Paired bootstrap resampling test, repeated 1000 times, p < 0.05. 8132 Individual Document Half-life h Ranker Ranker 1 hr 3 hrs 6 hrs LR FORWARD 7.51 10.05 10.89 3HAN AV FORWARD 7.76 10.15 10.94 3HAN FORWARD 7.40 9.98 10.84 LR BACKWARD 8.75 11.70 12.68 3HAN AV BACKWARD 9.65 12.09 12.89 3HAN BACKWARD 9.73 12.17 12.95 LR 3HAN ATT 9.44 12.05 12.88 3HAN AV 3HAN ATT 10.16 12.35 13.04 3HAN 3HAN ATT 10.39 12.49 13.12 Optimal hTBG 19.78 20.39 20.54 Table 3: hTBG scores with three different time budgets, all combinations of individual and document rankers. event of suicide attempts following posting a Severe Risk document. This motivates the importance of prioritizing the reading order of documents: being able to find evidence early in suicide assessment leaves more time for other individuals, and will reduce probability of misses. Document ranking alone does not decide everything, as 3HAN+BACKWARD outperforms LR+3HAN ATT. It is the combination of 3HAN and its document attentions that produce our best model. This makes sense, as 3HAN, while learning to predict the level of risk, also learns which documents are important to make the prediction. Figure 1 shows the top 3 documents in a summary-style view for each of the highest ranked 3 individuals, with word-level attention shown using shading. Words without attention are obfuscated; others are altered to preserve privacy. Previously Existing Measures. For previously existing measures, e.g. TBG and NDCG@20, document ranking has no effect, and thus these are not suitable measures in our scenario. However, we include results here for reference (Table 4). Since 3HAN AV. and LR cannot rank documents, it is impossible to calculate hTBG, so we report results on the chronologically backward ranking strategy. NDCG@20 is NDCG score cut off at 20, chosen based on the optimal hTBG value. 7 Conclusions and Future Work We introduced hTBG, a new evaluation measure, as a step toward moving beyond risk classification to a paradigm in which prioritization is the focus, and where time matters. Like TBG, the hTBG score is interpretable as a lower bound on the expected Ranker hTBG TBG NDCG@20 3HAN+3HAN ATT. 12.49 11.46 70.90 3HAN AV.+BACKWARD 12.09 11.40 68.28 LR+BACKWARD 11.70 10.98 69.44 Optimal 20.39 19.75 100.00 Table 4: TBG and NDCG@20 listed to compare with hTBG. Both hTBG’s and TBG’s half lives are set at 3 hrs, and maximum document cutoff is set at 50. number of relevant items found in a ranking, given a time budget. 
In our experiment, a “relevant item” is a person classified by experts as being at risk of attempting suicide in the near future. Measured at an expected reading time budget of about half a day (4hr20min, half-life 3hrs), our joint ranking approach achieved hTBG of 12.49 compared with 11.70 for a plausible baseline from prior art: using logistic regression to rank individuals, and then looking at a individual’s posts in backward chronological order. That increase is just a bit short of identifying one more person in need of immediate help in the experiment’s population of 242 individuals. There are certainly limitations in our study and miles to go before validating our approach in the real world, but our framework should make it easy to integrate and explore other individual rankers, document rankers and explanation mechanisms, and to actually build user interfaces like the schematic in Figure 1. Acknowledgments This work has been supported in part by a University of Maryland Strategic Partnership (MPower) seed grant, an AWS Machine Learning Research Award, and an AI + Medicine for High Impact (AIM-HI) Challenge Award. We are immensely grateful to Glen Coppersmith, Michelle Colder Carras, April Foreman, Michelle Kuchuk, Beau Pinkham, Rebecca Resnik, Katherine Musacchio Schafer, Jonathan Singer, Raymond Tucker, Tony Wood, Ayah Zirikly, members of the UMIACS CLIP lab, and participants at the Workshops on Computational Linguistics and Clinical Psychology for valuable discussions related to this work. References Rakesh Agrawal, Sreenivas Gollapudi, Alan Halverson, and Samuel Ieong. 2009. Diversifying search results. In Proceedings of the Second ACM International Conference on Web Search and Data Mining, 8133 WSDM ’09, page 5–14, New York, NY, USA. Association for Computing Machinery. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Krisztian Balog. 2018. Entity-Oriented Search, volume 39 of The Information Retrieval Series. Springer. Krisztian Balog, Yi Fang, Maarten de Rijke, Pavel Serdyukov, and Luo Si. 2012. Expertise retrieval. Foundations and Trends in Information Retrieval, 6(2–3):127–256. Tal Baumel, Jumana Nassour-Kassis, Raphael Cohen, Michael Elhadad, and No´emie Elhadad. 2018. Multi-label classification of patient notes: Case study on ICD code assignment. In The Workshops of the The Thirty-Second AAAI Conference on Artificial Intelligence, pages 409–416. AAAI Press. Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical research protocols for social media health research. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, EthNLP@EACL, pages 94–102. Association for Computational Linguistics. David E. Bloom, Elizabeth Cafiero, Eva Jan´e-Llopis, Shafika Abrahams-Gessel, Lakshmi Reddy Bloom, Sana Fathima, Andrea B. Feigl, Tom Gaziano, Ali Hamandi, Mona Mowafi, Danny O’Farrell, and Emre. 2012. The Global Economic Burden of Noncommunicable Diseases. PGDA Working Papers 8712, Program on the Global Demography of Aging. Bureau of Health Workforce. 2020. Designated health professional shortage areas: Statistics, second quarter of fiscal year 2020, designated HPSA quarterly summary. Rafael A. Calvo, David N. Milne, M. Sazzad Hussain, and Helen Christensen. 2017. 
Natural language processing in mental health applications using nonclinical texts. Nat. Lang. Eng., 23(5):649–685. Stevie Chancellor, Michael L. Birnbaum, Eric D. Caine, Vincent M. B. Silenzio, and Munmun De Choudhury. 2019. A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media. In Proceedings of the Conference on Fairness, Accountability, and Transparency. Olivier Chapelle, Donald Metlzer, Ya Zhang, and Pierre Grinspan. 2009. Expected reciprocal rank for graded relevance. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, pages 621–630. ACM. Munmun De Choudhury. 2013. Role of social media in tackling challenges in mental health. In Proceedings of the 2nd international workshop on Socially-aware multimedia, SAM@ACM Multimedia 2013, pages 49–52. ACM. Cindy Chung and James W Pennebaker. 2007. The psychological functions of function words. Social communication, 1:343–359. Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural Language Processing of Social Media as Screening for Suicide Risk. Biomedical Informatics Insights, 10:117822261879286. Ethan Fast, Binbin Chen, and Michael S. Bernstein. 2016. Empath: Understanding topic signals in largescale text. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 4647–4657. ACM. Joseph C. Franklin, Jessica D. Ribeiro, Kathryn R. Fox, Kate H. Bentley, Evan M. Kleiman, Xieyining Huang, Katherine M. Musacchio, Adam C. Jaroszewski, Bernard P. Chang, and Matthew K. Nock. 2017. Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychological Bulletin, 143(2):187–232. Devin Gaffney and J. Nathan Matias. 2018. Caveat emptor, computational social science: Large-scale missing data in a widely-published reddit corpus. PLOS ONE, 13(7):1–13. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6. Association for Computational Linguistics. Holly Hedegaard, Sally C Curtin, and Margaret Warner. 2018. Suicide rates in the United States continue to increase. National Center for Health Statistics. Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1373–1378. The Association for Computational Linguistics. Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, pages 3543–3556. Association for Computational Linguistics. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. 8134 Kathryn P. Linthicum, Katherine Musacchio Schafer, and Jessica D. Ribeiro. 2019. Machine learning in suicide science: Applications and ethics. Behavioral Sciences & the Law, 37(3):214–222. David E. Losada, Fabio Crestani, and Javier Parapar. 2020. eRisk 2020: Self-harm and depression challenges. 
In Advances in Information Retrieval - 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14-17, 2020, Proceedings, Part II, volume 12036 of Lecture Notes in Computer Science, pages 557–563. Springer. David N. Milne, Glen Pink, Ben Hachey, and Rafael A. Calvo. 2016. CLPsych 2016 shared task: Triaging content in online peer-support forums. In Proceedings of the 3rd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, CLPsych@NAACL-HLT 2016, pages 118–127. The Association for Computational Linguistics. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 1101–1111. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pages 1532–1543. ACL. Tetsuya Sakai. 2019. Graded relevance assessments and graded relevance measures of NTCIR: A survey of the first twenty years. CoRR, abs/1903.11272. SAMHSA. 2019. National Survey on Drug Use and Health, 2017 and 2018. Center for Behavioral Health Statistics and Quality. Table 8.58B. Allison Schuck, Raffaella Calati, Shira Barzilay, Sarah Bloch-Elkouby, and Igor Galynker. 2019. Suicide Crisis Syndrome: A review of supporting evidence for a new suicide-specific diagnosis. Behavioral sciences & the law, 37(3):223–239. Han-Chin Shing, Suraj Nair, Ayah Zirikly, Meir Friedenberg, Hal Daum´e III, and Philip Resnik. 2018. Expert, crowdsourced, and machine assessment of suicide risk via online postings. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, CLPsych@NAACL-HTL, pages 25–36. Association for Computational Linguistics. Han-Chin Shing, Guoli Wang, and Philip Resnik. 2019. Assigning medical codes at the encounter level by paying attention to documents. In ML4H, Machine Learning for Health Workshop at NeurIPS. Mark D. Smucker and Charles L. A. Clarke. 2012. Time-based calibration of effectiveness measures. In The 35th International ACM SIGIR conference on research and development in Information Retrieval, SIGIR ’12, pages 95–104. ACM. Ellen M. Voorhees. 2001. The philosophy of information retrieval evaluation. In Evaluation of CrossLanguage Information Retrieval Systems, Second Workshop of the Cross-Language Evaluation Forum, CLEF 2001, volume 2406 of Lecture Notes in Computer Science, pages 355–370. Springer. Byron C. Wallace. 2019. Thoughts on ”attention is not not explanation”. Medium, Accessed: December, 2019. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLPIJCNLP 2019, pages 11–20. Association for Computational Linguistics. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. 
The Association for Computational Linguistics. Andrew Yates, Arman Cohan, and Nazli Goharian. 2017. Depression and Self-Harm Risk Assessment in Online Forums. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2968–2978. Michael Zimmer. 2010. “But the data is already public”: on the ethics of research in Facebook. Ethics and Information Technology, 12(4):313–325. Ayah Zirikly, Philip Resnik, ¨Ozlem Uzuner, and Kristy Hollingshead. 2019. CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 24–33. Association for Computational Linguistics. 8135 A Appendix: Ethical Considerations Our research involving the University of Maryland Reddit Suicide Dataset has undergone review by the University of Maryland Institutional Review Board with a determination of Category 4 Exempt status under U.S. federal regulations. For this dataset, (a) the original data are publicly available, and (b) the originating site (Reddit) is intended for anonymous posting. In addition, since Reddit is officially anonymous, but that is not enforced on the site, the dataset has undergone automatic de-identification using named entity recognition aggressively to identify and mask out potential personally identifiable information such as personal names and organizations, in order to create an additional layer of protection (Zirikly et al., 2019). In an assessment of de-identification quality, we manually reviewed a sample of 200 randomly selected posts (100 from the SuicideWatch subreddit and 100 from other subreddits), revealing zero instances of personally identifiable information. Following Benton et al. (2017), we treat the data (even though de-identified) as sensitive and restrict access to it, we use obfuscated and minimal examples in papers and presentations, and we do not engage in linkage with other datasets. The dataset is available to other researchers via an application process put in place with the American Association of Suicidology that requires IRB or equivalent ethical review, a commitment to appropriate data management, and, since ethical research practice is not just a matter of publicly available data or even IRB approval (Zimmer, 2010; Benton et al., 2017; Chancellor et al., 2019), a commitment to following additional ethical guidelines. Interested researchers can find information at http://umiacs.umd.edu/∼resnik/umd reddit suicidality dataset.html. B Appendix: Proofs B.1 Time-Biased Gain In order to prove that TBG statisfies the speedbiased criterion, consider two individuals ranked at consecutive positions k and k + 1; if we swap the two individual, the change in TBG score is: ∆TBG = (gk+1 −gk)D(T(k)) + gkD (T(k) + t(k + 1)) −gk+1D (T(k) + t(k)) (16) This leads to Lemma B.1-B.3: Lemma B.1. Swapping a not-at-risk individual ranked at k with an at-risk individual ranked at k + 1 always increases TBG. Proof. Let gk = 0 and gk+1 > 0. Equation 16 simplifies to ∆TBG = gk+1 (D(T(k)) −D(T(k) + t(k))) (17) which is always positive because the decay function monotonically decreases, and each assessment of an individual requires at least Ts seconds. Lemma B.2 (Risk-based Criterion). The optimal value of TBG under binary relevance is obtained only if all not-at-risk individuals are ranked below all at-risk individuals. Proof. Let π be a ranking of individuals that yields the optimal value of TBG. 
Assume that in π there exist not-at-risk individuals ranked before at-risk individuals. Let the k-position be the lowest ranked not-at-risk individual that is at least in front of one at-risk individual, we can then apply Lemma B.1 to increase TBG. This leads to a contradiction. Lemma B.3. Swapping an at-risk individual of longer assessment time ranked at k of with an atrisk individual of shorter assessment time ranked at k + n, where k + n is the closest at-risk individual ranked lower than k, always increases TBG. Proof. Let gk = gk+n > 0, and ∀i ∈{i|k < i < k + n}, gi = 0. We have ∆TBG = gk(D(T(k + n) + t(k + n) −t(k)) −D(T(k + n))) (18) which is always positive because the decay function monotonically decreases, and t(k+n) < t(k) from the assumption that the individual at k + n has shorter assessment time. Lemma B.3 naturally leads to a proof for the speed-biased property of TBG: Proof for Theorem 3.1. Applying Lemma B.3, we know that swapping k and k + r leads to a positive gain between the two. Now, consider all 8136 at-risk individuals ranked between k and k +r: ∀u, s.t. k < u < k + r, the difference is: gu(D(T(u) + t(k + r) −t(k)) −D(T(u))) (19) which is always greater than or equal to zero due to the fact that the decay function monotonically decrease, and t(k + r) < t(k). Thus, the net difference is always larger than zero, thus satisfying the speed-biased criterion. Finally, combing previous results, we can easily show: Proof for Theorem 3.2. A direct consequence of Theorem 3.1 is that if the at-risk individuals are sorted by assessment time in ascending order, no swapping between any two individuals can increase TBG. This, combined with Lemma B.2, that all at-risk individuals are on top of not-at-risk individuals, leads to the necessary condition. Because any swapping within the not-at-risk individuals does not change TBG when no at-risk individuals are ranked lower, this implies that ranking according to Theorem 3.2 gives us a unique and optimal value, which satisfies the sufficient condition of Theorem 3.2. B.2 Hierarchical Time-Biased Gain The assessment time of an individual ranked at k, t(k), is monotonic with Ei, thus showing minimal value of Ei suffices. Recall that Ei is calculated as: Ei = Tα L X l=1 Wi,l l−1 Y m=1 (1 −Ri,m) ! + Tβ (20) Consider, again, swapping a document at rank l with a document at rank l + 1 belonging to the same individual i. The change in Ei is: ∆Ei = κi,l (Wi,l+1Ri,l −Wi,lRi,l+1) (21) where κi,l = Tα Ql−1 j=1 (1 −Ri,j) ≥0 is a fixed term that is not affected by the swap. Equation 21 also points to an important observation: Lemma B.4. If Wi,l+1Ri,l −Wi,lRi,l+1 < 0 and Ri,j < 1 for all j < l, then swapping document l with document l + 1 will decrease Ei. Proof. This follows directly from Equation 21. Lemma B.5. If Ri,j < 1 for all j, then minimum individual assessment time is obtained if and only if the documents are sorted in descending order by Ri,l Wi,l . (22) Proof. Let τ be a document ranking that yields the minimum individual assessment time, and for the sake of contradiction, not a ranking that can be obtained by ranking according to Ri,l Wi,l . We can, thus, find two neighboring documents, without loss of generality, l and l + 1, such that: Ri,l Wi,l < Ri,l+1 Wi,l+1 (23) this leads to: Ri,lWi,l+1 −Ri,l+1Wi,l < 0 (24) since all W > 0. Lemma B.4 together with the prerequisite that Ri,j < 1 for all j then suggest that swapping the two leads to a decrease of Ei. This contradicts with the assumption that τ is an optimal ranking. 
This proves that to achieve minimum individual assessment time, it is necessary to sort by Ri,l Wi,l . The sufficient condition follows by the fact that swapping tied documents does not lead to change in Ei, as shown in Equation 21 Proof for Theorem 3.3. Let τ be a document ranking according to Ri,l Wi,l . Let m be the document such that Ri,m = 1 and is ranked closer to the top then any other document with Ri,: = 1 (i.e. with the shortest Wi,:). Now, consider using m to cut the documents into two partitions: the first partition of documents are ones ranked before m. Applying Lemma B.5, this partition of documents are already in optimal sorted order, since there’s no Ri,: = 1. The second partition, documents ranked lower than m, the ranking simply does not matter, as Equation 20 shows, the (1 −Ri,m) term will make everything zero afterwards. Now, let’s consider moving a document from the second partition to the first partition. Since any documents in the second partition has a Ri,j Wi,j that is smaller than any documents in the first partition, after you move the document, the optimal ranking for the first partition will put the document at the bottom, right next to m. And since Ri,m Wi,m ≥Ri,j Wi,j due to the original ordering, we can apply Lemma B.4, which can swap the document back below m. Next, 8137 consider moving the lowest ranked document of the first partition (the one ranked at m −1) to the second partition. This will always increase Ei, as shown from Lemma B.4. Moving any other document in the first partition will also increase Ei as least as much as before, since the process is equivalent to swapping with (and thus potentially increase Ei) any intermediate documents in between. Combine these two together, we show that Ei is at a minimum value when sorted in descending order according to Ri,l Wi,l . B.3 Relationship between ERR and hTBG Here we show the derivation from the cascading user model in ERR to the individual assessment time estimation (Ei) in hTBG. ERR assumes a stopping probability (written in hTBG terms): P(stop at l) = Ri,l l−1 Y j=1 (1 −Ri,j) (25) The expected words read, can then be calculated as: L X l=1 P(stop at l) l X d=1 Wi,l ! = L X l=1  Ri,l l−1 Y j=1 (1 −Ri,j) l X d=1 Wi,l !  (26) This can be rearranged to the formula we used in hTBG: L X l=1 Wi,l l−1 Y m=1 (1 −Ri,m) ! (27) by letting Ri,L = 1 (the user has to stop reading at the last document). To show this, observe that Wi,1 appears in all L terms of the summation, thus the coefficient for Wi,1 is simply PL l=1(Ri,l Ql−1 j=1(1 −Ri,j)) = 1, from both simple manipulation and the fact that we are summing over probability. Similarly, Wi,2 appears in all L terms except with l = 1, thus (1 −Ri,1). For Wi,3 it is (1−Ri,1)−Ri,2(1−Ri,1) = Q2 j=1(1−Ri,j). The rest follows. C Appendix: Training Details All models are built using AllenNLP (Gardner et al., 2018). Tokenization and sentence splitting are done using spaCy (Honnibal and Johnson, 2015). The CROWDSOURCE dataset is split into a training set (80%) and a validation set (20%) during model development. We did not test on the EXPERT dataset until all parameters of the models were fixed. Cross validation on the training set is used for hyperparameter tuning. For 3HAN, we used ADAM with learning rate 0.003, trained for 100 epochs with early stopping on the validation dataset, with patience set to 30. For 3HAN AV, the same hyperparameters are used. 
For LR, we used SGD with learning rate 0.003, trained for 100 epochs with early stopping on the validation dataset, with patience set to 30. Both 3HAN's and 3HAN AV's Seq2Vec layers use a bi-directional GRU with attention. The word-to-sentence layer has an input dimension of 200, a hidden dimension of 50, and an output dimension of 100, since the GRU is bi-directional. The sentence-to-document and document-to-individual layers, similarly, have an input dimension of 100, a hidden dimension of 50, and an output dimension of 100. Hyperparameters were selected using cross-validation on the training set split of the CROWDSOURCE dataset.
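For illustration, the following PyTorch sketch shows one Seq2Vec layer with the dimensions stated above (input 200, bi-directional GRU with hidden size 50, 100-dimensional attention-pooled output). It is not the AllenNLP implementation used in the paper, and the additive attention parameterization is an assumption.

```python
import torch
import torch.nn as nn

class AttentiveGRUSeq2Vec(nn.Module):
    """Bi-directional GRU followed by attention pooling: input 200 ->
    hidden 50 per direction -> 100-dimensional output vector. The exact
    attention form is an assumption made for illustration."""
    def __init__(self, input_dim=200, hidden_dim=50):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        out_dim = 2 * hidden_dim  # 100, since the GRU is bi-directional
        self.attn = nn.Sequential(nn.Linear(out_dim, out_dim), nn.Tanh(),
                                  nn.Linear(out_dim, 1, bias=False))

    def forward(self, x):                                  # x: (batch, seq_len, input_dim)
        states, _ = self.gru(x)                            # (batch, seq_len, 100)
        weights = torch.softmax(self.attn(states), dim=1)  # (batch, seq_len, 1)
        return (weights * states).sum(dim=1)               # (batch, 100)

# Word-to-sentence layer: 200-dim word vectors in, 100-dim sentence vector out.
word_to_sentence = AttentiveGRUSeq2Vec(input_dim=200, hidden_dim=50)
# Sentence-to-document and document-to-individual layers: 100-dim in, 100-dim out.
sentence_to_document = AttentiveGRUSeq2Vec(input_dim=100, hidden_dim=50)
print(word_to_sentence(torch.randn(2, 30, 200)).shape)  # torch.Size([2, 100])
```

The 3HAN AV variant described earlier corresponds to replacing the learned attention weights with a uniform 1/m average over the sequence.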
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8138–8150 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8138 CluHTM - Semantic Hierarchical Topic Modeling based on CluWords Felipe Viegas1, Washington Cunha1, Christian Gomes1, Antˆonio Pereira2, Leonardo Rocha2, Marcos Andr´e Gonc¸alves1 1 Universidade Federal de Minas Gerais - Brazil 2 Universidade Federal de S˜ao Jo˜ao del Rei - Brazil {frviegas, washingtoncunha}@dcc.ufmg.br {christianreis, mgoncalv}@dcc.ufmg.br {antoniopereira, lcrocha}@ufsj.edu.br Abstract Hierarchical Topic modeling (HTM) exploits latent topics and relationships among them as a powerful tool for data analysis and exploration. Despite advantages over traditional topic modeling, HTM poses its own challenges, such as (1) topic incoherence, (2) unreasonable (hierarchical) structure, and (3) issues related to the definition of the “ideal” number of topics and depth of the hierarchy. In this paper, we advance the stateof-the-art on HTM by means of the design and evaluation of CluHTM, a novel nonprobabilistic hierarchical matrix factorization aimed at solving the specific issues of HTM. CluHTM’s novel contributions include: (i) the exploration of richer text representation that encapsulates both, global (dataset level) and local semantic information – when combined, these pieces of information help to solve the topic incoherence problem as well as issues related to the unreasonable structure; (ii) the exploitation of a stability analysis metric for defining the number of topics and the “shape” the hierarchical structure. In our evaluation, considering twelve datasets and seven stateof-the-art baselines, CluHTM outperformed the baselines in the vast majority of the cases, with gains of around 500% over the strongest state-of-the-art baselines. We also provide qualitative and quantitative statistical analyses of why our solution works so well. 1 Introduction Topic Modeling (TM) is the task of automatically extracting latent topics (e.g., a concept or a theme) from a collection of textual documents. Such topics are usually defined as a probability distribution over a fixed vocabulary (a set of words) that refers to some subject and describes the latent topic as a whole. Topics might be related to each other, and if they are defined at different semantic granularity levels (more general or more specific), this naturally induces a hierarchical structure. Although traditional TM strategies are of great importance to extract latent topics, the relationships among them are also extremely valuable for data analysis and exploration. In this context, Hierarchical Topic Modeling (HTM) aims to achieve – to induce latent topics from text data while preserving the inherent hierarchical structure (Teh et al., 2006). Relevant scenarios have been shown to enjoy the usefulness of HTM, such as (i) hierarchical categorization of Web pages (Ming et al., 2010), (ii) extracting aspects hierarchies in reviews (Kim et al., 2013) and (iii) discovering research topics hierarchies in academic repositories (Paisley et al., 2014). Despite its practical importance and potential advantages over traditional TM, HTM poses its own challenges, the main ones being: (i) topic incoherence and (ii) unreasonable hierarchical structure. Topic Incoherence has to do with the need to learn meaningful topics. That is, the top words that represent a topic have to be semantically consistent with each other. 
Unreasonable structure is related to the extracted hierarchical topic structure. Topics near the root should be more general, while topics close to the leaves should be more specific. Furthermore, child topics must be coherent with their corresponding parent topics, guaranteeing a reasonable hierarchical structure. Finally, (iii) the number of topics in each hierarchy level is usually unknown and cannot be previously set to a predefined value since it directly depends on the latent topical distribution of the data. Both supervised and unsupervised approaches have been applied to HTM. Supervised methods use prior knowledge to build the hierarchical tree structure, such as labeled data or linking relationships among documents (Wang et al., 2015). Those strategies are unfeasible when there is no explicit taxonomy or hierarchical scheme to associate with documents or when such an association (a.k.a., labeling) is very cumbersome or costly to obtain. Unsupervised HTM (uHTM) 8139 deals with such limitations. uHTM methods do not rely on previous knowledge (such as taxonomies or labeled hierarchies), having the additional challenge of discovering the hierarchy of topics based solely on the data at hand. HTM solutions can also be roughly grouped into non-probabilistic and probabilistic models. In probabilistic strategies, textual data is considered to be “ruled” by an unknown probability distribution that governs the relationships between documents and topics, hierarchically. The major drawback in this type of approach has to do with the number of parameters in the model, which rapidly grows with the number of documents. This leads to learning inefficiencies and proneness to over-fitting, mainly for short textual data (Tang et al., 2014). To overcome these drawbacks, non-probabilistic models aim at extracting hierarchical topic models through matrix factorization techniques instead of learning probability distributions. Such strategies also pose challenges. They are usually limited to just local information (i.e., data limitation) as they go deeper into the hierarchy when extracting the latent topics. That is, as one moves more in-depth in the hierarchical structure representing the latent topics, the available data rapidly reduces in size, directly impacting the quality of extracted topics (in terms of both coherence and structure reasonableness). Probabilistic models mitigate this phenomenon as they rely on global information when handling the probability distributions(Xu et al., 2018). Because of that, the current main HTM methods are built based on probabilistic methods (Griffiths et al., 2004; Mimno et al., 2007). In this paper, we aim at exploring the best properties of both non-probabilistic and probabilistic strategies while mitigating their main drawbacks. Up to our knowledge, the only work to explore this research venue is (Liu et al., 2018). In that work, the authors explore NMF for solving HTM tasks by enforcing three optimization constraints during matrix factorization: global independence, local independence, and information consistency. Those constraints allow their strategy, named HSOC, to produce hierarchical topics that somehow preserve topic coherence and reasonable hierarchical structures. However, as we shall see in our experiments, HSOC is still not capable of extracting coherent topics when applied to short text data, which is currently prominent on the Web, especially on social network environments. 
We here propose a distinct approach, taking a data engineering perspective, instead of focusing on the optimization process. More specifically, we explore a matrix factorization solution properly designed to explore global information (akin to probabilistic models) when learning hierarchical topics while ensuring proper topic coherence and structure reasonableness. This strategy allows us to build a data-efficient HTM strategy, less prone to over-fitting that also enjoys the desired properties of topic coherence and reasonable (hierarchical) structure. We do so by applying a matrix factorization method over a richer text representation that encapsulates both, global and semantic information when extracting the hierarchical topics. Recent non-probabilistic methods (Shi et al., 2018; Viegas et al., 2019) have produced top-notch results on traditional TM tasks by taking advantage of semantic similarities obtained from distances between words within an embedding space (Mikolov et al., 2013; Pennington et al., 2014). Our critical insight for HTM was to note that the richer (semantic) representation offered by distributional word embeddings can be readily explored as a global1 source of information in more profound levels of the hierarchical structure of topics. This insight gives us an essential building block to overcome the challenges of matrix factorization strategies for HTM without the need for additional optimization constraints. In (Viegas et al., 2019), the authors exploit the nearest words of a given “pre-trained” word embedding to generate “meta-words”, aka Cluwords, able of expanding and enhancing the document representation in terms of syntactic and semantic information. Such an improved representation is capable of mitigating the drawbacks of using the projected space of word embeddings as well as extracting cohesive topics when applying nonnegative matrix factorization for topic modeling. Motivated by this finding, we here advance the state-of-the-art in HTM, by designing, developing and evaluating an unsupervised non-probabilistic HTM method that exploits CluWords as a key building block for TM when capturing the latent hierarchical structure of topics. We focus on the NMF method for uncovering the latent hierarchy as it is the most effective matrix factorization method for our purposes. Finally, the last aspect needed 1Distances in the embeddings space are global as they do consider the whole vocabulary and interactions among words in specific contexts. 8140 to be addressed for the successful use of NMF for HTM is the definition of the appropriate number of topics k to be extracted. Choosing just a few topics will produce overly broad results while choosing too many will result in over-clustering the data into many redundant, highly-similar topics. Thus, our proposed method uses a stability analysis concept to automatically select the best number of topics for each level of the hierarchy. As we shall see, our approach outperforms HSOC and hLDA (current state-of-the-art) for both small and large text datasets, often by large margins. 
To summarize, our main contributions are: (i) a novel non-probabilistic HTM strategy – CluHTM – based on NMF and CluWords that excels on HTM tasks (in both short and large text data) while ensuring topic coherence and reasonable topic hierarchies; (ii) the exploitation in an original way of a cross-level stability analysis metric for defining the number of topics and ultimately ‘the shape’ of the hierarchical structure; as far as we know this metric has never been applied with this goal; (iii) an extensive empirical analysis of our proposal considering twelve datasets and seven state-of-the-art baselines. In our experimental evaluation, CluHTM outperformed the baselines in the vast majority of the cases (In case of NPMI, in all cases), with gains of 500% when compared to hLDA and 549% when compared to HSOC, some of the strongest baselines; and finally, (iv) qualitative and quantitative statistical analyses of the individual components of our solution. 2 Related Work Hierarchical Topic Modeling (HTM) can be roughly grouped into supervised and unsupervised methods. Considering the supervised HTM strategies, we here highlight some relevant supervised extensions to the traditional Latent Dirichlet Allocation (LDA) (Blei et al., 2003), a widely used strategy for the topic modeling (TM). LDA assumes a Dirichlet probability distribution over textual data to estimate the probabilities of words for each topic. In (Mcauliffe and Blei, 2008), the authors propose SLDA, a supervised extension of LDA that provides a statistical model for labeled documents. SLDA allows connecting each document to a regression variable to find latent topics that will best predict the response variables for future unlabeled documents. Based on SLDA, Hierarchical Supervised LDA (HSLDA) (Perotte et al., 2011) incorporates the hierarchy of multilabel and pre-labeled data into a single model, thus providing extended prediction capabilities w.r.t., the latent hierarchical topics. The Supervised Nested LDA (SNLDA) (Resnik et al., 2015), also based on SLDA, implements a generative probabilistic strategy where topics are sampled from a probability distribution. SNLDA extends SLDA by assuming that the topics are organized into a tree structure. Although our focus is on unsupervised solutions, we include SLDA, HSLDA and SNLDA as baselines in our experimental evaluation. We now turn our attention to unsupervised HTM strategies, in which a hierarchical structure is learned during topic extraction. In (Mimno et al., 2007) the authors propose Hierarchical Pachinko Allocation Model (hPAM), an extension of Pachinko Allocation (PAM) (Li and McCallum, 2006). In PAM, documents are a mix of distributions over an individual topic set, using a directed acyclic graph to represent the co-occurrences of topics. Each node in such a graph represents a Dirichlet distribution. At the highest level of PAM, there is only a single node, where the lowest levels represent a distribution between nodes of the next higher level. In hPAM, each node is associated with a distribution over the vocabulary of documents. In (Griffiths et al., 2004), the authors propose the hLDA algorithm, which is also an expansion of LDA, being considered state-of-the-art in HTM. In hLDA, in addition to using the text Dirichlet distribution, the nested Chinese Restaurant Process (nCRP) is used to generate a hierarchical tree. NCRP needs two parameters: the tree level and a γ parameter. 
At each node of the tree, a document can belong to a path or create a new tree path with probability controlled by γ. More recently, in (Xu et al., 2018), the authors propose the unsupervised HTM strategy named a knowledge-based hierarchical topic model (KHTM). This method is based on hLDA and, as such, models a generative process whose parameter estimation strategy is based on Gibbs sampling. KHTM is able to uncover prior knowledge (such as the semantic correlation among words), organizing them into a hierarchy, consisting of knowledge sets (k-sets). More specifically, the method first generates, through hLDA, an initial set of topics. After comparing pairs of topics, those topics with similarity higher than α (a.k.a., k-sets) are then filtered so that the first 20 words of each topic are kept, and the remaining are just discarded. Those 8141 extracted k-sets are then used as an extra weight when extracting the final topics. All these methods are used as baselines in our experimentation. Probably the most similar work to ours is the HSOC strategy, proposed in (Liu et al., 2018), which proposes to use NMF for solving HTM tasks. In order to mitigate the main drawbacks of NMF in the HTM setting2, HSOC relies on three optimization constraints to properly drive the matrix factorization operations when uncovering the hierarchical topic structure. Such constraints are global independence, local independence, and information consistency, and allow HSOC to derive hierarchical topics that somehow preserve topic coherence and reasonable hierarchical structures. As it can be observed, almost all models, supervised or unsupervised, are based on LDA. As discussed in Section 1, though matrix factorization strategies normally present better results than Dirichlet strategies in TM tasks, for HTM, the situation is quite different. In fact, matrix factorization methods face difficult challenges in HTM, mainly regarding data size as ones go deeper into the hierarchy. More specifically, at every hierarchical level, a matrix factorization needs to be applied to increasingly smaller data sets, ultimately leading to insufficient data at lower hierarchy levels. These approaches also do not exploit semantics nor any external enrichment, relying only on the statistical information extracted from the dataset. Contrarily, here we propose a new HTM approach, called CluHTM, which exploits externally built word embedding models to incorporate global semantic information into the hierarchical topic tree creation. This brings some important advantages to our proposal in terms of effectiveness, topic coherence, and hierarchy reasonableness altogether. 3 Background 3.1 CluWords Representation Cluwords (Viegas et al., 2019) combine the traditional Bag of Words (BoW) statistical representation with semantic information related to the words present in the documents. The semantic context is obtained employing a “pre-trained” word representation, such as Fasttext (Mikolov et al., 2018). Figure 1 presents the process of transforming each original word into a Cluword 2Namely, the incoherence of topics and unreasonable hierarchical structure caused by the lack of a learned probability distribution that governs the document/topics relationships (cluster of words) representation. First, the strategy uses the information about the dataset, as well as pre-trained word embedding (i.e. Fasttext) to build semantic relationships between a word and its neighbors (described in Section 3.1.1). 
Next, statistical information on words (e.g., term frequency, document frequency) is extracted from the dataset. Then, both semantic and statistical information are combined to measure the importance of each Cluword as explained in Section 3.1.2. Cluwords enjoy the best of “two worlds”: it conjugates statistical information on the dataset, which has demonstrated to be very effective, efficient and robust in text applications, enriched with semantic contextual information captured by distributional word embeddings adapted to the dataset by the clusterization process described next. Figure 1: Diagram showing the steps for building the CluWords representation. 3.1.1 Cluwords Generation Let W be the set of vectors representing each word t in the dataset vocabulary (represented as V). Each word t ∈V has a corresponding vector u ∈W. The CluWords representation is defined as in Figure 1. The semantic matrix in the Figure 1 is defined as C ∈R|V|×|V|, where each dimension has the size of the vocabulary (|V|), t′ represents the rows of C while t represents the columns. Finally, each index Ct′,t is computed according to Eq. 1. Ct′,t =  ω(ut′, ut) if ω(ut′, ut) ≥α 0 otherwise, (1) where ω(ut′, ut) is the cosine similarity and α is a similarity threshold that acts as a regularizer for the representation. Larger values of α lead sparser representations. In this notation each column t of the semantic matrix C will be forming a CluWord t and each value of the matrix Ct′,t may receive the cosine similarity between the vectors ut′ and ut in the embedding space W if it is greater than 8142 or equal to α . Otherwise, the Ct′,t receives zero, according to the Eq. 1. 3.1.2 TFIDF Weight for CluWords In Figure 1, the CluWords representation is defined as the product between the statistical matrix (a.k.a. term-frequency matrix) and semantic matrix C. The statistical matrix (TF) can be represented as a TF ∈R|D|×|V|, where each position TFd,t relates to the frequency of a word t in document d. Thus, given a CluWord (CW) t for a document d, its data representation corresponds to CWd,t = −−→ TFd × −→ C,t, where −−→ TFd has the term-frequencies of document d, and −→ C,t is the semantic scores for the CluWord t, according to Eq. 1. The TFIDF weighting for a CluWord t in a document d is defined as CWd,t = CWd,t × idft. The IDF component is defined as idft = log  |D| P 1≤d≤|D| µt,d  , where D is the number of documents and µt,d is the average of semantic weights of the semantic matrix C for the CluWord t ( −→ C,t) that occurs in the vocabulary Vd. The average µt,d is defined as µt,d = 1 |Vd,t′| ·P t′∈(Vd∩−→ C,t) Ct′,t. 3.2 Stability Measure The Stability measure is motivated by the termcentering approach generally taken in topic modeling strategies, where topics are usually summarized as a truncated set of top words (Greene et al., 2014). The intuition behind this strategy is, given some K topics, to measure whether running multiple random samplings for a topic modeling strategy results in Stability, in terms of p top words extracted from the topics. Given a range of topics [Kmin, Kmax], and some topic modeling strategy (on our case, Non-negative Factorization Matrix method), the strategy proceeds as follows. First, it learns a topic model considering the complete data set representation D, which will be used as a reference point (WD) for analyzing the Stability afforded by the K topics. Note that the p top words represent each topic. 
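As an illustration of the CluWords construction in Section 3.1, the sketch below builds the thresholded semantic matrix of Equation 1 and applies the TF-IDF weighting described above. The value of α, the dense NumPy implementation, and the handling of documents with no linked terms are simplifying assumptions, not the authors' exact code.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def cluword_representation(tf, word_vectors, alpha=0.4):
    """tf: (n_docs, vocab) raw term-frequency matrix; word_vectors:
    (vocab, dim) pre-trained embeddings for the same vocabulary;
    alpha: similarity threshold of Eq. 1. Returns the CluWords TF-IDF matrix."""
    # Semantic matrix C (Eq. 1): thresholded cosine similarities between words.
    C = cosine_similarity(word_vectors)
    C[C < alpha] = 0.0

    # CluWords term frequencies: CW = TF x C.
    cw = tf @ C

    # mu_{t,d}: average semantic weight of CluWord t over the words present in d.
    present = (tf > 0).astype(float)                       # (n_docs, vocab)
    overlap = present @ (C > 0).astype(float)              # |V_{d,t'}| counts
    weights_sum = present @ C
    mu = np.divide(weights_sum, overlap,
                   out=np.zeros_like(weights_sum), where=overlap > 0)

    # IDF component: idf_t = log(|D| / sum_d mu_{t,d}).
    idf = np.log(tf.shape[0] / np.maximum(mu.sum(axis=0), 1e-12))
    return cw * idf                                        # broadcast over documents

# Toy usage: 3 documents, 5 vocabulary words with random 300-d embeddings.
rng = np.random.default_rng(0)
tfidf_cw = cluword_representation(rng.integers(0, 3, size=(3, 5)).astype(float),
                                  rng.normal(size=(5, 300)))
print(tfidf_cw.shape)  # (3, 5): documents x CluWords
```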
Subsequently, S samples of the data are randomly drawn from D without replacement, forming a subset of D′ documents. Then, |S| topic models are generated, one for each subsampling (WSi). To measure the quality of K topics, the Stability computes the mean agreement among each pair of (WD, WSi). The goal is to find the best match between the p top words of the compared topics. The agreement is defined as agree(Wx, Wy) = 1 p Pp i=1 AJ(wxi, ρ(wxi)), where AJ(·) is the average Jaccard coefficient used to compare the similarity among the words w and ρ(·) is the optimal permutation of the words in WSi that can be found in O(p3) time by solving the minimal weight bipartite matching problem using the Hungarian method (Kuhn, 2010). 4 Proposed Solution CluHTM is an iterative method able to automatically define the best number of topics in each hierarchy, given a range of possible number of topics [Kmin, Kmax]. CluHTM explores Cluwords and Non-negative Matrix Factorization (NMF) (Lee and Seung, 2001), one of the main non-probabilistic strategies. Finally, the Stability method (described in Section 3) is used to select NMF k parameters (a.k.a number of topics). CluHTM has five inputs (Algorithm 1), (i) Dmax corresponds to the depth down to which we want to extract the hierarchical structure. (ii) Kmin and Kmax control the range of some topics, such range will be used in all levels of the hierarchy; (iii) T is the input text data; and (iv) W is the “pre-trained” word embedding vector space used in the CluWords generation. The output is the hierarchical structure H of p top words for each topic. Algorithm 1: CluHTM Input: Dmax - Hierarchy Depth; Kmin - Number of minimum topics; Kmax - Number of maximum topics; T - Term-frequency representation; W - Word embedding vectors ∈T ; Output: H - Hierarchical Structure. 1 parent ←−1; 2 queue.push(0, T ); 3 while queue ̸= ∅do 4 depth, T ′ ←queue.pop(); 5 Clu ←GenerateCluwords(T ′, W); 6 K ←Stability(Kmin, Kmax, Clu) 7 O ←NMF(Clu, K) 8 topics ←ExtractTopics(O) 9 foreach topic ∈topics do 10 parent ←parent ∪topic; 11 H ←H ∪topic; 12 if depth + 1 ≤Dmax then 13 T ′ ←ExtractDocs(topic); 14 queue.push(depth + 1, T ′) 15 return H The method starts by getting the root topic (line 2-3 of Algorithm 1), which is composed of all documents in T . Since the method is iterative, each iteration is controlled by a queue schema to build a hierarchical structure. Thus, at each iteration (line 3), the algorithm produces the 8143 CluWords representation for the documents ∈T ′ (line 5), chooses the number of topics, exploiting the Stability measure (line 6), and runs the NMF method (line 7) to extract the p words for each topic in O (line 8). Then, in the loop of line 9, each topic is stored in the queue, as well as the respective documents of each topic. Summarizing, our solution exploits global semantic information (captured by CluWords) within local factorizations, limited by a stability criterion that defines the ‘shape’ of the hierarchical structure. Though simple (and original), the combination of these ideas is extremely powerful for solving the HTM task, as we will see next. 5 Experimental Results 5.1 Experimental Setup The primary goal of our solution is to effectively perform hierarchical topic modeling so that more coherent topics can be extracted. To evaluate topic model coherence, we consider 12 real-world datasets as reference. All of them were obtained from previous works in the literature. 
For all datasets, we performed stopwords removal (using the standard SMART list) and removed words such as adverbs, using the VADER lexicon dictionary (Hutto and Gilbert, 2014), as the vast majority of the essential words for identifying topics are nouns and verbs. These procedures improved both the efficiency and effectiveness of all analyzed strategies. Table 1 provides a summary of the reference datasets, reporting the number of features (words) and documents, as well as the mean number of words per document (density) and the corresponding references. Table 1: Dataset characteristics Dataset #Feat #Doc Density Angrybirds 1,903 1,428 7.135 Dropbox 2,430 1,909 9.501 Evernote 6,307 8,273 11.002 InfoVis-Vast 3 6,104 909 86.215 Pinterest 2,174 3,168 4.478 TripAdvisor 3,152 2,816 8.532 Tweets 8,029 12,030 4.450 WhatsApp 1,777 2,956 3.103 20NewsGroup 4 29,842 15,411 76.408 ACM 16,811 22,384 30.428 Uber 5,517 11,541 7.868 Facebook 5,168 12,297 6.427 3https://www.cc.gatech.edu/gvu/ii/jigsaw/datafiles.html 4http://qwone.com/∼jason/20Newsgroups/ We compare the HTM strategies using representative topic quality metrics in the literature (Nikolenko, 2016; Nikolenko et al., 2017). We consider three classes of topic quality metrics based on three criteria: (a) coherence, (b) mutual information, and (c) semantic representation. In this paper, we focus on these three criteria since they are the most used metrics in the literature (Shi et al., 2018). We consider three topic lengths (5, 10 and 20 words) for each parameter in our evaluation, since different lengths may bring different challenges. Regarding the metrics, coherence captures easiness of interpretation by co-occurrence. Words that frequently co-occur in similar contexts in a corpus are easier to correlate since they usually define a more well-defined “concept” or “topic”. We employ an improved version of regular coherence (Nikolenko, 2016), called Coherence, defined as c(t, Wt) = X w1,w2∈Wt log d(w1, w2) + ε d(w1) , (2) where d(w1) denotes the number of occurrences of w1, d(w1, w2) is the number of documents that contain both w1 and w2 together, and ε is a smoothing factor used for preventing log(0). Another class of topic quality metrics is based on the notion of pairwise pointwise mutual information (PMI) between the top words in a topic. It captures how much one “gains” in the information given the occurrence of the other word, taking dependencies between words into consideration. Following a recent work (Nikolenko, 2016), we here compute a normalized version of PMI (NPMI) where, for a given ordered set of top words Wt = (w1, ..., wN) in a topic: NPMIt = X i<j log p(wi,wj) p(wi)p(wj) −logp(wi, wj). (3) Finally, the third class of metrics is based on the distributed word representations introduced in (Nikolenko, 2016). The intuition is that, in a well-defined topic, the words should be semantically similar, or at least related, to be easily interpreted by humans. In a d-dimensional vector space model in which every vocabulary word w ∈W has been assigned to a vector vw ∈Rd, the vectors corresponding to the top words in a topic should be close to each other. In (Nikolenko, 2016), the authors define topic quality as the average distance between the top words in the topic, as follows: 8144 W2V −L1 = 1 |Wt|(|Wt| −1) X w1̸=w2∈Wt dcos(vw1, vw2). (4) Generally speaking, let d(w1, w2) be a distance function in Rd. In this case, larger d(w1, w2) corresponds to worse topics (with words not as localized as in topics with smaller average distances). 
In (Nikolenko, 2016), the authors suggest four different distance metrics, with cosine distance achieving the best results. We here also employ the cosine distance, defined as dcos(x, y) = 1 −xT y. We compare our approach described in Section 4, with seven hierarchical topic model strategies marked in bold in Section 2. For the input parameters of CluHTM (Algorithm 1), we set Kmin = 5, Kmax=25, R = 10 and Dmax = 3. We define Kmin through empirical experiments, and the Kmax was defined according to the number of topics exploited in (Viegas et al., 2019). For the baseline methods, we adopt the parameters suggested by their own works. We assess the statistical significance of our results employing a paired t-test with 95% confidence and Holm-Bonferroni correction to account for multiple tests. 5.2 Experimental Results We start by comparing CluHTM against four state-of-the-art uHTM baselines considering the twelve reference datasets. Three hierarchical levels for each strategy are used in this comparison. In Figures 2, 4 and 3 we contrast the results of our proposed CluHTM and the reference strategies, considering the NPMI, W2V-L1, and Coherence metrics. Figure 2: uHTM Comparative Results (NPMI). Figure 3: uHTM Comparative Results (Coherence) Figure 4: uHTM Comparative Results (W2V-L1) Note that each strategy extracted a different number of topics in its hierarchical structure. Considering NPMI, the most important metric to evaluate the quality of topics (Nikolenko, 2016), we can see in Figure 2 that our strategy outperforms all baselines in all datasets by large margins, with gains over 500% against some of the strongest ones. Some of these results are the highest in terms of NMPI ever reported for several of these datasets. Considering the Coherence scores (Figure 3), our strategy achieves the single best results in 2 out of 12 datasets, with gains up to 58% and 92% against the most robust baseline (hPAM), tying in 8 out 12 and losing two times for hLDA and hPAM. Similar results can be observed for the W2V-L1 metric (Figure 4) – CluHTM ties in 10 out of 12 results, with one win and one loss for KHTM. As we will see, even with very few losses in these metrics, our method proves to be 8145 Dataset CluHTM SLDA SNLDA HSLDA Coherence 20News −62.6898 ± 21.0606 ▲ −403.3413 ± 90.2313 −410.0020 ± 71.2366 −309.9041 ± 132.5511 ACM −32.3371 ± 29.5853 ▲ −539.6660 ± 115.2125 −507.4476 ± 108.6966 −486.4835 ± 104.9369 W2V-L1 20News 1.1863 ± 0.1176 ▼ 0.3093 ± 0.2006 0.3456 ± 0.2051 0.0952 ± 0.1094 ACM 1.0489 ± 0.6506 • 0.6347 ± 0.2617 0.6803 ± 0.2243 0.2816 ± 0.1567 NPMI 20News 0.9351 ± 0.0365 ▲ 0.2714 ± 0.1157 0.2205 ± 0.0752 0.4383 ± 0.2162 ACM 0.9641 ± 0.0416 ▲ 0.2071 ± 0.0579 0.2064 ± 0.0529 0.2761 ± 0.0978 Table 2: Comparing the results achieved by each supervised HTM strategy for Coherence, W2V-L1 and NPMI. Table 3: Number of times each strategy was the top performer. CluHTM is the best performer in most cases. Method Metric P NPMI W2V-L1 Coherence (Sum) CluHTM 12 11 10 33 hPAM 0 9 8 17 hLDA 0 2 9 11 HSOC 0 9 0 9 KHTM 0 6 2 8 SNLDA 0 2 0 2 HSLDA 0 2 0 2 textbfSLDA 0 1 0 1 more consistent than the baselines. We now turn our attention to the effectiveness of our proposal when compared to the supervised HTM strategies. We consider the 20News and ACM datasets for which have a ground truth for supervised strategies. Table 2 presents the results considering Coherence, W2V-L1, and NPMI. The statistical significance tests ensure that the best results, marked in ▲, are superior to others. 
The statistically equivalent results are marked in • while statistically significant losses are marked in ▼. Once again, in Table 2, our proposed strategy achieves the best results in 4 out of 6 cases, tying with SNLDA and HSLDA in ACM and loosing only to SLDA in 20News, both considering the W2V-L1 metric. It is important to remind that, differently from these supervised baselines, our method does not use any privileged class information to build the hierarchical structure nor to extract topics. We provide a comparative table with all experimental results5, including the results for each extracted level of the hierarchical structure. We summarize our findings regarding the behavior of all analyzed strategies in the 12 datasets, counting the number of times each strategy figured out as a top performer6. The summarized results can be seen in Table 3. Our proposal is in considerable advantage over the other explored baselines, being 5see Appendix, Section Supplementary Results for detailed results 6If two approaches are statistically tied as top performers in the same dataset, both will be counted. the strategy of choice in the vast majority of cases. Overall, considering a universe of 36 experimental results (the combination of 3 evaluation metrics over 12 datasets), we obtained the best results (33 best performances), with the most robust baseline – hPAM – coming far away, with just 17 top performances. Another interesting observation is that, in terms of NPMI, CluHTM wins in all cases. Details of this analysis are summarized in the Appendix. 5.3 Impact of the Factors One important open question remains to be answered: To what extent the characteristics of the dataset impact the quality of the topics generated by our strategy? To answer this question, we provide a quantitative analysis regarding the hierarchical topic modeling effectiveness, measured by the NPMI score. We start our analysis by quantifying the effects of the parameters of interest (i.e., factors). Those factors might affect the performance of the system under study, while also determining whether the observed variations are due to significant effects (e.g., measurement errors, the inherent variability of the process being analyzed (Jain, 1991)). To this end, we adopt a full factorial design, which uses all the possible combinations of the levels of the factors in each complete experiment. The first factor is the dataset. The idea is to analyze the impact of textual properties such as dataset size, density, dimensionality, etc. Thus, each level of this factor is a dataset in Table 1. The second factor is the HTM strategies evaluated in the previous Section. In this factor, we intend to assess the impact of the extracted topics, as well as the hierarchical structure. Each level of this factor is an evaluated HTM strategy. All the possible combination between these two factors will be measured by the average of NMPI among topics of the hierarchical structure. Results are shown in Table 4. 
In the Table, we highlight the average NPMI and the effects of each 8146 Dataset-Algorithm CluHTM hLDA hPAM HSOC KHTM Row Sum Row Mean Row Effect Angry Birds 0.8934 0.5593 0.3604 0.2120 0.4940 2.5191 0.5038 0.0507 Dropbox 0.9002 0.5806 0.2529 0.1703 0.4022 2.3062 0.4612 0.0082 Evernote 0.9374 0.5668 0.1534 0.1222 0.4426 2.2224 0.4445 -0.0086 Facebook 0.8686 0.5998 0.1517 0.1128 0.4791 2.2120 0.4424 -0.0107 InfoVis-Vast 0.9935 0.1650 0.1191 0.1632 0.1459 1.5867 0.3173 -0.1357 Pinterest 0.8482 0.5614 0.3028 0.1865 0.3912 2.2901 0.4580 0.0049 Trip Advisor 0.9265 0.5769 0.2745 0.1477 0.5007 2.4263 0.4853 0.0322 Tweets 0.8950 0.5966 0.2130 0.1928 0.4759 2.3733 0.4747 0.0216 Uber 0.9116 0.5829 0.1403 0.1006 0.4168 2.1522 0.4304 -0.0226 Whatsapp 0.8594 0.5881 0.3976 0.2031 0.5172 2.5654 0.5131 0.0600 Col Sum 9.0338 5.3774 2.3657 1.6112 4.2656 22.6537 Col Mean 0.9034 0.5377 0.2366 0.1611 0.4266 0.4531 Col effect 0.4503 0.0847 -0.2165 -0.2920 -0.0265 Table 4: Overview of the factorial desgin Component Sum of Degrees % Variation Degrees of Freedom Mean Square F-Computed F-Table (0.99) y 14.0670 50 y.. 10.2638 1 y −y.. 3.8032 100.00% 49 A 3.4276 90.12% 4 0.8569 127.9197 3.8903 B 0.1345 3.92% 9 0.0149 2.2303 2.9461 e 0.2412 6.34% 36 0.0067 Table 5: ANOVA Test with 99% confidence to measure the impact of each factor. factor. From the effects, we can observe that the CluHTM impact in the NPMI value is 99.38% higher than the overall average. We can also see that hLDA has an NPMI score higher than the overall average (18.67%) and HSOC has an NPMI score of approximately 64.44% smaller than overall NMPI. Concerning the datasets’ effects, the full factorial design experiment tells us that they have a small impact on the variation concerning the obtained average NPMI scores. We can also observe that the dataset with the most variation of NPMI is InfoVis-Vast, with a score of 29.97% smaller than the overall NPMI. We perform a ANOVA test to assess whether the studied factors are indeed statistically significant and conclude, with 99% confidence according to the F-test, that the choice of algorithm (factor B) explains approximately 90% of the obtained NPMI values. We can also conclude that the investigated properties of the textual data (factor A), as well as the experimental errors, have a small influence on the experimental results. Summarizing, we can conclude that the characteristics of the datasets have a lower impact on the results and that the impact of CluHTM is consistent across all of them. The ANOVA test details are presented in Table 5. 6 Conclusion We advanced the state-of-the-art in hierarchical topic modeling (HTM) by designing, implementing and evaluation a novel unsupervised nonprobabilistic method – CluHTM. Our new method exploits a more elaborate (global) semantic data representation – CluWords – as well as an original application of a stability measure to define the “shape” of the hierarchy. CLUHTM excelled in terms of effectiveness, being around two times more effective than the strongest state-of-the-art baselines, considering all tested datasets and evaluation metrics. The overall gains over some of these strongest baselines are higher than 500% in some datasets. We also showed that CluHTM results are consistent across most datasets, independently of the data characteristics and idiosyncrasies. As future work, we intend to apply CluHTM in other representative applications on the Web, such as hierarchical classification by devising a supervised version of CluHTM. 
We also intend to incorporate some type of attention mechanism into our methods to better understand which Cluwords are more important to define certain topics. 7 Acknowledgments This work is partially supported by CAPES, CNPq, Finep, Fapemig, Mundiale, Astrein, projects InWeb and MASWeb. References David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Derek Greene, Derek O’Callaghan, and P´adraig Cunningham. 2014. How many topics? stability analysis for topic models. CoRR. Thomas L Griffiths, Michael I Jordan, Joshua B Tenenbaum, and David M Blei. 2004. Hierarchical topic 8147 models and the nested chinese restaurant process. In Advances in neural information processing systems, pages 17–24. Clayton J. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In ICWSM’14. Raj Jain. 1991. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. Suin Kim, Jianwen Zhang, Zheng Chen, Alice Oh, and Shixia Liu. 2013. A hierarchical aspect-sentiment model for online reviews. In Twenty-Seventh AAAI Conference on Artificial Intelligence. Harold W. Kuhn. 2010. The hungarian method for the assignment problem. In 50 Years of Integer Programming. Daniel D Lee and H Sebastian Seung. 2001. Algorithms for non-negative matrix factorization. In Advances in neural information processing systems. Wei Li and Andrew McCallum. 2006. Pachinko allocation: Dag-structured mixture models of topic correlations. In Proceedings of the 23rd international conference on Machine learning, pages 577– 584. ACM. Rui Liu, Xingguang Wang, Deqing Wang, Yuan Zuo, He Zhang, and Xianzhu Zheng. 2018. Topic splitting: a hierarchical topic model based on nonnegative matrix factorization. Journal of Systems Science and Systems Engineering, 27(4):479–496. Jon D Mcauliffe and David M Blei. 2008. Supervised topic models. In Advances in neural information processing systems, pages 121–128. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In LREC’18. David Mimno, Wei Li, and Andrew McCallum. 2007. Mixtures of hierarchical topics with pachinko allocation. In Proceedings of the 24th ICML, pages 633– 640. ACM. Zhao-Yan Ming, Kai Wang, and Tat-Seng Chua. 2010. Prototype hierarchy based clustering for the categorization and navigation of web collections. In Proceedings of the 33rd ACM SIGIR, pages 2–9. ACM. Sergey I Nikolenko. 2016. Topic quality metrics based on distributed word representations. In SIGIR’16. Sergey I Nikolenko, Sergei Koltcov, and Olessia Koltsova. 2017. Topic modelling for qualitative studies. Journal of Information Science. John Paisley, Chong Wang, David M Blei, and Michael I Jordan. 2014. Nested hierarchical dirichlet processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):256–270. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Adler J Perotte, Frank Wood, Noemie Elhadad, and Nicholas Bartlett. 2011. Hierarchically supervised latent dirichlet allocation. In Advances in neural information processing systems, pages 2609–2617. 
Philip Resnik, William Armstrong, Leonardo Claudino, Thang Nguyen, Viet-An Nguyen, and Jordan BoydGraber. 2015. Beyond lda: exploring supervised topic modeling for depression-related language in twitter. In Proc. of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 99–107. Tian Shi, Kyeongpil Kang, Jaegul Choo, and Chandan K. Reddy. 2018. Short-text topic modeling via non-negative matrix factorization enriched with local word-context correlations. In WWW ’18, pages 1105–1114. Jian Tang, Zhaoshi Meng, XuanLong Nguyen, Qiaozhu Mei, and Ming Zhang. 2014. Understanding the limiting factors of topic modeling via posterior contraction analysis. In Proceedings of the 31st ICML’14, pages I–190–I–198. JMLR.org. Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2006. Hierarchical dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Felipe Viegas, S´ergio Canuto, Christian Gomes, Washington Luiz, Thierson Rosa, Sabir Ribas, Leonardo Rocha, and Marcos Andr´e Gonc¸alves. 2019. Cluwords: Exploiting semantic word clustering representation for enhanced topic modeling. In Proceedings of WSDM ’19, pages 753–761. Chi Wang, Jialu Liu, Nihit Desai, Marina Danilevsky, and Jiawei Han. 2015. Constructing topical hierarchies in heterogeneous information networks. Knowledge and Information Systems, 44(3):529– 558. Yueshen Xu, Jianwei Yin, Jianbin Huang, and Yuyu Yin. 2018. Hierarchical topic modeling with automatic knowledge mining. Expert Systems with Applications, 103:106–117. 8148 A Appendix Supplementary Results The Tables below expand on the results of Section 5. Datasets CluNMF SLDA SNLDA HSLDA 20News -62.68 ± 21.06 -403.34 ± 90.23 -410.00 ± 71.23 -309.90 ± 132.55 ACM -32.33 ± 29.58 -539.66 ± 115.21 -507.44 ± 108.69 -486.48 ± 104.93 Table 6: Overall Coherence results compared with supervised HTM strategies. Datasets CluNMF SLDA SNLDA HSLDA 20News 0.9351 ± 0.0365 0.2714 ± 0.1157 0.2205 ± 0.0752 0.4383 ± 0.2162 ACM 0.9641 ± 0.0416 0.2071 ± 0.0579 0.2064 ± 0.0529 0.2761 ± 0.0978 Table 7: Overall NPMI results compared with supervised HTM strategies. Datasets CluNMF SLDA SNLDA HSLDA 20News 1.1863 ± 0.1176 0.3093 ± 0.2006 0.3456 ± 0.2051 0.0952 ± 0.1094 ACM 1.0489 ± 0.6506 0.6347 ± 0.2617 0.6803 ± 0.2243 0.2816 ± 0.1567 Table 8: Overall W2V-L1 results compared with supervised HTM strategies. Datasets Level CluNMF SLDA SNLDA HSLDA 20News 1 Level -12.65 ± 0.00 -403.34 ± 90.23 -428.59 ± 0.00 -317.08 ± 0.00 2 Level -45.37 ± 22.72 -426.88 ± 103.53 -194.91 ± 0.00 3 Level -39.03 ± 22.94 -403.75 ± 56.71 -313.75 ± 135.41 ACM 1 Level -34.15 ± 16.73 -539.66 ± 115.21 -451.24 ± 0.00 -594.07 ± 0.00 2 Level -34.15 ± 16.7377 -431.38 ± 55.64 -467.37 ± 0.00 3 Level -27.26 ± 6.48 -516.92 ± 110.93 -483.32 ± 106.59 Table 9: Coherence results by level of hierarchy compared with supervised HTM strategies. 
Datasets CluNMF hLDA hPAM HSOC KHTM 20News -62.68 ± 21.06 -339.45 ± 186.20 -62.56 ± 9.81 -393.40 ± 76.70 -397.19 ± 143.59 ACM -32.33 ± 29.58 -219.85 ± 159.25 -77.20 ± 9.92 -577.08 ± 102.33 -341.39 ± 98.87 AngryBirds -77.39 ± 41.17 -107.70 ± 65.08 -46.80 ± 16.44 -492.40 ± 33.12 -148.41 ± 94.66 Dropbox -69.56 ± 33.86 -119.76 ± 103.54 -65.52 ± 12.96 -529.34 ± 28.97 -285.62 ± 88.38 Evernote -54.45 ± 32.81 -190.48 ± 123.60 -87.33 ± 8.80 -634.04 ± 51.01 -292.90 ± 93.97 Facebook -110.57 ± 51.62 -151.63 ± 130.89 -98.11 ± 12.53 -689.19 ± 51.92 -297.75 ± 96.46 InfoVis-Vast -4.46 ± 11.36 -407.05 ± 74.89 -61.9381 ± 10.1260 -416.49 ± 42.25 -464.38 ± 60.96 Pinterest -111.90 ± 48.77 -106.95 ± 87.06 -58.00 ± 16.46 -533.79 ± 42.19 -267.71 ± 84.46 TripAdvisor -55.60 ± 27.54 -126.96 ± 90.43 -68.95 ± 15.60 -584.30 ± 29.14 -147.53 ± 97.18 Tweets -93.11 ± 31.38 -114.24 ± 71.03 -92.52 ± 9.71 -665.23 ± 59.39 -266.22 ± 77.54 Uber -75.25 ± 39.94 -185.50 ± 136.32 -94.72 ± 9.24 -681.78 ± 55.40 -391.70 ± 108.40 Whatsapp -105.28 ± 38.64 -60.81 ± 62.04 -48.48 ± 15.55 -552.77 ± 45.50 -204.83 ± 87.73 Table 10: Overall Coherence results compared with uHTM strategies. Datasets CluNMF hLDA hPAM HSOC KHTM 20News 0.9351 ± 0.0365 0.4603 ± 0.1498 0.2176 ± 0.0622 0.2875 ± 0.0782 0.4433 ± 0.1223 ACM 0.9641 ± 0.0416 0.5781 ± 0.1021 0.1758 ± 0.0432 0.1889 ± 0.0490 0.4631 ± 0.0769 AngryBirds 0.8934 ± 0.0514 0.5593 ± 0.0565 0.3604 ± 0.1005 0.2120 ± 0.0306 0.4940 ± 0.0711 Dropbox 0.9002 ± 0.0454 0.5806 ± 0.0864 0.2529 ± 0.0877 0.1703 ± 0.0325 0.4022 ± 0.0615 Evernote 0.9374 ± 0.0334 0.5668 ± 0.0819 0.1534 ± 0.0564 0.1222 ± 0.0232 0.4426 ± 0.0763 Facebook 0.8686 ± 0.0531 0.5998 ± 0.0734 0.1517 ± 0.0765 0.1128 ± 0.0458 0.4791 ± 0.0744 InfoVis-Vast 0.9935 ± 0.0190 0.1650 ± 0.0732 0.1191 ± 0.0533 0.1632 ± 0.0504 0.1459 ± 0.0793 Pinterest 0.8482 ± 0.0535 0.5614 ± 0.0664 0.3028 ± 0.0988 0.1865 ± 0.0414 0.3912 ± 0.0559 TripAdvisor 0.9265 ± 0.0344 0.5769 ± 0.0745 0.2745 ± 0.0906 0.1477 ± 0.0306 0.5007 ± 0.0744 Tweets 0.8950 ± 0.0323 0.5966 ± 0.0381 0.2130 ± 0.0453 0.1928 ± 0.0534 0.4759 ± 0.0472 Uber 0.9116 ± 0.0424 0.5829 ± 0.0863 0.1403 ± 0.0582 0.1006 ± 0.0305 0.4168 ± 0.0861 Whatsapp 0.8594 ± 0.0456 0.5881 ± 0.0326 0.3976 ± 0.0750 0.2031 ± 0.0385 0.5172 ± 0.0634 Table 11: Overall NPMI results compared with uHTM strategies. 8149 Datasets CluNMF hLDA hPAM HSOC KHTM 20News 1.1863 ± 0.1176 1.4423 ± 0.1412 1.1318 ± 0.0860 0.3201 ± 0.2085 0.2153 ± 0.1757 ACM 1.0489 ± 0.6506 1.4741 ± 0.0915 1.1296 ± 0.0987 0.6408 ± 0.2712 0.1544 ± 0.1522 AngryBirds 1.1489 ± 0.1157 1.3236 ± 0.0327 1.2286 ± 0.0779 1.1816 ± 0.0388 1.2603 ± 0.0285 Dropbox 1.1454 ± 0.0918 1.3388 ± 0.0402 1.1794 ± 0.0873 1.1687 ± 0.0417 1.3032 ± 0.0421 Evernote 1.0999 ± 0.1247 1.3828 ± 0.0524 1.1447 ± 0.0643 1.1825 ± 0.0453 1.3272 ± 0.0525 Facebook 1.1909 ± 0.1224 1.4152 ± 0.0411 1.1598 ± 0.0541 1.1767 ± 0.0561 1.3008 ± 0.0390 InfoVis-Vast 1.1047 ± 0.0867 1.1939 ± 0.0717 1.1651 ± 0.0651 1.1919 ± 0.0509 1.1720 ± 0.0510 Pinterest 1.2101 ± 0.0963 1.2912 ± 0.0263 1.2147 ± 0.0712 1.1760 ± 0.0495 1.2255 ± 0.0257 TripAdvisor 1.1081 ± 0.1082 1.3686 ± 0.0464 1.1814 ± 0.0685 1.1470 ± 0.0318 1.3161 ± 0.0327 Tweets 1.0493 ± 0.1086 1.4315 ± 0.0314 1.2142 ± 0.0654 1.2242 ± 0.0687 1.3285 ± 0.0372 Uber 1.1323 ± 0.1328 1.3758 ± 0.0419 1.1370 ± 0.0664 1.1677 ± 0.0381 1.3018 ± 0.0518 Whatsapp 1.1239 ± 0.1087 1.2254 ± 0.0141 1.2162 ± 0.0656 1.1732 ± 0.0423 1.2952 ± 0.0268 Table 12: Overall W2V-L1 results compared with uHTM strategies. 
Datasets Level CluNMF SLDA SNLDA HSLDA 20News 1 Level 0.9863 ± 0.0000 0.2714 ± 0.1157 0.1829 ± 0.0000 0.6622 ± 0.0000 2 Level 0.9386 ± 0.0311 0.1753 ± 0.0644 0.5728 ± 0.0000 3 Level 0.9495 ± 0.0319 0.2368 ± 0.0732 0.4255 ± 0.2179 ACM 1 Level 0.9552 ± 0.0185 0.2071 ± 0.0579 0.1107 ± 0.0000 0.3060 ± 0.0000 2 Level 0.9552 ± 0.0185 0.1472 ± 0.0319 0.3909 ± 0.0000 3 Level 0.9682 ± 0.0071 0.2155 ± 0.0483 0.2709 ± 0.0986 Table 13: NPMI results by level of hierarchy compared with supervised HTM strategies. Datasets Level CluNMF SLDA SNLDA HSLDA 20News 1 Level 1.0232 ± 0.0000 0.3093 ± 0.2006 0.2961 ± 0.0000 *** 2 Level 1.1925 ± 0.1096 0.2625 ± 0.1157 0.3296 ± 0.0000 3 Level 1.2060 ± 0.1183 0.3750 ± 0.2231 0.0902 ± 0.1025 ACM 1 Level 1.0955 ± 0.1047 0.6347 ± 0.2617 0.5365 ± 0.0000 0.1488 ± 0.0000 2 Level 1.0955 ± 0.1047 0.5060 ± 0.0302 0.0074 ± 0.0000 3 Level 1.0320 ± 0.0716 0.7025 ± 0.2296 0.2961 ± 0.1510 Table 14: W2V-L1 results by level of hierarchy compared with supervised HTM strategies. Datasets Level CluNMF hLDA hPAM HSOC KHTM 20News 1 Level -12.6556 ± 0.00 -323.30 ± 0.00 -64.17 ± 0.00 -389.41 ± 69.27 -334.20 ± 0.00 2 Level -45.37 ± 22.72 -448.93 ± 166.05 -57.03 ± 8.52 -395.88 ± 88.74 -595.39 ± 139.97 3 Level -39.03 ± 22.94 -328.58 ± 184.81 -63.10 ± 9.81 -394.92 ± 72.10 -389.23 ± 137.96 ACM 1 Level -34.15 ± 16.73 -368.20 ± 0.00 -69.87 ± 0.00 -529.64 ± 107.93 -371.89 ± 0.00 2 Level -34.15 ± 16.73 -544.64 ± 111.50 -80.91 ± 9.70 -566.92 ± 103.18 -708.12 ± 146.29 3 Level -27.26 ± 6.48 -210.03 ± 149.94 -76.90 ± 9.89 -594.03 ± 95.85 -336.04 ± 87.40 AngryBirds 1 Level -20.27 ± 0.00 -514.71 ± 0.00 -68.27 ± 0.00 -528.23 ± 17.38 -546.60 ± 0.00 2 Level -40.73 ± 16.47 -168.89 ± 66.22 -14.35 ± 10.30 -510.20 ± 22.09 -549.78 ± 151.09 3 Level -80.55 ± 40.95 -98.05 ± 57.18 -49.83 ± 13.05 -474.54 ± 28.33 -133.16 ± 46.61 Dropbox 1 Level -12.78 ± 0.00 -487.63 ± 0.00 -65.27 ± 0.00 -536.63 ± 34.81 -527.14 ± 0.00 2 Level -60.01 ± 25.01 -247.70 ± 81.57 -42.53 ± 18.08 -537.71 ± 23.30 -569.05 ± 36.06 3 Level -70.81 ± 34.32 -100.89 ± 91.65 -67.82 ± 9.78 -523.33 ± 28.47 -270.74 ± 61.27 Evernote 1 Level -20.85 ± 0.00 -489.14 ± 0.00 -91.59 ± 0.00 -608.34 ± 44.38 -513.78 ± 0.00 2 Level -29.64 ± 7.40 -364.60 ± 106.17 -76.91 ± 12.32 -620.48 ± 50.72 -634.88 ± 28.08 3 Level -57.31 ± 33.23 -177.60 ± 114.68 -88.33 ± 7.67 -647.25 ± 48.42 -286.84 ± 83.09 Facebook 1 Level -57.62 ± 16.87 -589.33 ± 0.00 -85.66 ± 0.00 -663.14 ± 53.09 -607.85 ± 0.00 2 Level -82.49 ± 39.87 -547.12 ± 113.61 -84.63 ± 18.77 -684.94 ± 55.76 -748.77 ± 13.93 3 Level -115.51 ± 51.69 -138.70 ± 109.50 -99.58 ± 10.82 -697.83 ± 46.95 -291.21 ± 80.64 InfoVis-Vast 1 Level 0.20 ± 0.00 -257.31 ± 0.00 -33.92 ± 0.00 -387.50 ± 38.85 -334.82 ± 0.00 2 Level 0.08 ± 0.08 -310.04 ± 41.24 -59.86 ± 4.40 -403.23 ± 39.74 -425.13 ± 62.71 3 Level -4.91 ± 11.81 -432.98 ± 54.56 -62.42 ± 10.16 -430.37 ± 38.30 -481.76 ± 47.85 Pinterest 1 Level -70.83 ± 18.60 -533.33 ± 0.00 -71.06 ± 0.00 -576.22 ± 30.36 -572.82 ± 0.00 2 Level -99.54 ± 42.53 -233.10 ± 75.69 -23.58 ± 18.07 -551.13 ± 30.74 -597.43 ± 29.30 3 Level -130.69 ± 50.72 -92.76 ± 74.58 -61.31 ± 11.70 -514.51 ± 37.95 -255.41 ± 56.51 TripAdvisor 1 Level -19.15 ± 0.00 -457.78 ± 0.00 -74.75 ± 0.00 -583.20 ± 32.28 -493.97 ± 0.00 2 Level -33.33 ± 11.39 -224.06 ± 80.71 -33.34 ± 22.71 -590.83 ± 22.31 -651.94 ± 13.39 3 Level -58.03 ± 27.52 -113.90 ± 82.96 -72.45 ± 8.90 -581.31 ± 30.75 -132.31 ± 43.91 Tweets 1 Level -80.04 ± 0.00 -826.50 ± 0.00 -94.64 ± 0.00 -683.25 ± 68.75 -832.73 ± 0.00 2 Level -68.88 ± 24.80 -251.61 ± 64.09 -79.45 ± 9.09 
-673.96 ± 66.61 -805.36 ± 25.10 3 Level -98.40 ± 30.67 -106.99 ± 62.55 -93.81 ± 8.80 -656.36 ± 50.74 -260.31 ± 53.26 Uber 1 Level -34.86 ± 0.00 -555.34 ± 0.00 -94.14 ± 0.00 -658.81 ± 52.17 -577.00 ± 0.00 2 Level -40.37 ± 10.30 -576.02 ± 92.22 -85.52 ± 17.19 -678.75 ± 57.32 -673.40 ± 21.48 3 Level -79.09 ± 40.06 -172.80 ± 117.45 -95.64 ± 7.48 -689.03 ± 53.47 -386.45 ± 102.42 Whatsapp 1 Level -56.74 ± 0.00 -597.47 ± 0.00 -55.41 ± 0.00 -604.01 ± 14.66 -686.37 ± 0.00 2 Level -59.06 ± 15.53 -147.96 ± 84.02 -23.31 ± 14.63 -577.14 ± 27.50 -571.32 ± 90.15 3 Level -110.30 ± 37.06 -47.66 ± 43.13 -50.93 ± 13.31 -527.78 ± 40.16 -188.80 ± 38.45 Table 15: Coherence results by level of hierarchy compared with uHTM strategies. 8150 Datasets Level CluNMF hLDA hPAM HSOC KHTM 20News 1 Level 0.9863 ± 0.0000 0.1577 ± 0.0000 0.2338 ± 0.0000 0.2786 ± 0.0870 0.1059 ± 0.0000 2 Level 0.9386 ± 0.0311 0.3358 ± 0.1395 0.3014 ± 0.0861 0.3041 ± 0.1177 0.2082 ± 0.0739 3 Level 0.9495 ± 0.0319 0.4735 ± 0.1443 0.2090 ± 0.0526 0.2799 ± 0.0298 0.4542 ± 0.1123 ACM 1 Level 0.9552 ± 0.0185 0.1701 ± 0.0000 0.2192 ± 0.0000 0.1979 ± 0.0516 0.1960 ± 0.0000 2 Level 0.9552 ± 0.0185 0.3193 ± 0.1018 0.1943 ± 0.0238 0.1875 ± 0.0507 0.0771 ± 0.0540 3 Level 0.9682 ± 0.0071 0.5861 ± 0.0911 0.1735 ± 0.0442 0.1874 ± 0.0473 0.4690 ± 0.0607 AngryBirds 1 Level 0.9729 ± 0.0000 0.0749 ± 0.0000 0.2516 ± 0.0000 0.1708 ± 0.0211 0.0530 ± 0.0000 2 Level 0.9486 ± 0.0135 0.4849 ± 0.0657 0.5608 ± 0.0777 0.2006 ± 0.0232 0.2076 ± 0.1048 3 Level 0.8887 ± 0.0504 0.5711 ± 0.0406 0.3415 ± 0.0783 0.2280 ± 0.0226 0.5055 ± 0.0346 Dropbox 1 Level 0.9819 ± 0.0000 0.0522 ± 0.0000 0.2795 ± 0.0000 0.1385 ± 0.0262 0.0565 ± 0.0000 2 Level 0.9184 ± 0.0312 0.4441 ± 0.0768 0.3911 ± 0.0873 0.1589 ± 0.0263 0.2048 ± 0.0490 3 Level 0.8980 ± 0.0459 0.6009 ± 0.0645 0.2389 ± 0.0753 0.1839 ± 0.0289 0.4131 ± 0.0378 Evernote 1 Level 0.9760 ± 0.0000 0.0522 ± 0.0000 0.1485 ± 0.0000 0.1105 ± 0.0225 0.0698 ± 0.0000 2 Level 0.9659 ± 0.0092 0.4177 ± 0.0706 0.2694 ± 0.0918 0.1190 ± 0.0247 0.0398 ± 0.0116 3 Level 0.9341 ± 0.0334 0.5780 ± 0.0704 0.1419 ± 0.0348 0.1268 ± 0.0213 0.4499 ± 0.0544 Facebook 1 Level 0.9330 ± 0.0141 0.0167 ± 0.0000 0.1863 ± 0.0000 0.1035 ± 0.0601 0.0448 ± 0.0000 2 Level 0.9017 ± 0.0402 0.3331 ± 0.0908 0.2471 ± 0.1057 0.1100 ± 0.0463 -0.0237 ± 0.0082 3 Level 0.8627 ± 0.0527 0.6086 ± 0.0530 0.1418 ± 0.0660 0.1166 ± 0.0406 0.4865 ± 0.0434 InfoVis-Vast 1 Level 1.0001 ± 0.0000 0.0379 ± 0.0000 0.0353 ± 0.0000 0.1610 ± 0.0516 0.0128 ± 0.0000 2 Level 1.0000 ± 0.0001 0.0441 ± 0.0037 0.1949 ± 0.0486 0.1576 ± 0.0521 0.0666 ± 0.0430 3 Level 0.9929 ± 0.0198 0.1938 ± 0.0475 0.1124 ± 0.0472 0.1666 ± 0.0489 0.1750 ± 0.0660 Pinterest 1 Level 0.9042 ± 0.0258 0.0161 ± 0.0000 0.2502 ± 0.0000 0.1391 ± 0.0376 0.0051 ± 0.0000 2 Level 0.8634 ± 0.0488 0.4311 ± 0.0594 0.5187 ± 0.1105 0.1700 ± 0.0362 0.1688 ± 0.0343 3 Level 0.8246 ± 0.0503 0.5762 ± 0.0440 0.2818 ± 0.0669 0.2065 ± 0.0298 0.4001 ± 0.0303 TripAdvisor 1 Level 0.9775 ± 0.0000 0.0701 ± 0.0000 0.1002 ± 0.0000 0.1147 ± 0.0159 0.0656 ± 0.0000 2 Level 0.9561 ± 0.0109 0.4652 ± 0.0746 0.4475 ± 0.1408 0.1350 ± 0.0193 0.1549 ± 0.0250 3 Level 0.9232 ± 0.0342 0.5920 ± 0.0585 0.2589 ± 0.0599 0.1623 ± 0.0288 0.5115 ± 0.0420 Tweets 1 Level 0.9036 ± 0.0000 0.0149 ± 0.0000 0.1987 ± 0.0000 0.1756 ± 0.0715 0.0180 ± 0.0000 2 Level 0.9176 ± 0.0311 0.4764 ± 0.0442 0.2763 ± 0.0422 0.1850 ± 0.0566 0.1019 ± 0.0236 3 Level 0.8902 ± 0.0310 0.6029 ± 0.0231 0.2068 ± 0.0407 0.2010 ± 0.0441 0.4801 ± 0.0254 Uber 1 Level 0.9600 ± 0.0000 0.0195 ± 0.0000 0.1076 ± 
0.0000 0.0845 ± 0.0319 0.0232 ± 0.0000 2 Level 0.9544 ± 0.0091 0.2844 ± 0.0916 0.2504 ± 0.0871 0.0953 ± 0.0313 0.0090 ± 0.0160 3 Level 0.9069 ± 0.0419 0.5927 ± 0.0658 0.1297 ± 0.0408 0.1073 ± 0.0275 0.4246 ± 0.0657 Whatsapp 1 Level 0.9241 ± 0.0000 0.1105 ± 0.0000 0.3840 ± 0.0000 0.1574 ± 0.0222 0.0533 ± 0.0000 2 Level 0.9208 ± 0.0201 0.5428 ± 0.0525 0.5285 ± 0.0741 0.1893 ± 0.0307 0.2394 ± 0.0770 3 Level 0.8528 ± 0.0426 0.5951 ± 0.0166 0.3846 ± 0.0618 0.2214 ± 0.0324 0.5295 ± 0.0161 Table 16: NPMI results by level of hierarchy compared with uHTM strategies. Datasets Level CluNMF hLDA hPAM HSOC KHTM 20News 1 Level 1.0232 ± 0.0000 0.9866 ± 0.0000 1.0314 ± 0.0000 0.3025 ± 0.2030 0.0346 ± 0.0000 2 Level 1.1925 ± 0.1096 1.3775 ± 0.1562 1.1515 ± 0.0843 0.3221 ± 0.1925 0.2698 ± 0.1107 3 Level 1.2060 ± 0.1183 1.4500 ± 0.1361 1.1308 ± 0.0857 0.3356 ± 0.2301 0.2137 ± 0.1775 ACM 1 Level 1.0955 ± 0.1047 0.9653 ± 0.0000 1.1710 ± 0.0000 0.6783 ± 0.2517 0.4138 ± 0.0000 2 Level 1.0955 ± 0.1047 1.3447 ± 0.1152 1.1661 ± 0.0721 0.6292 ± 0.2627 0.6458 ± 0.2348 3 Level 1.0320 ± 0.0716 1.4782 ± 0.0873 1.1255 ± 0.1006 0.6373 ± 0.2791 0.1470 ± 0.1382 AngryBirds 1 Level 0.9416 ± 0.0000 1.2087 ± 0.0000 1.2447 ± 0.0000 1.1833 ± 0.0379 1.2514 ± 0.0000 2 Level 1.1148 ± 0.1406 1.2806 ± 0.0502 1.2878 ± 0.0569 1.1797 ± 0.0343 1.1949 ± 0.0341 3 Level 1.1532 ± 0.1120 1.3300 ± 0.0229 1.2225 ± 0.0777 1.1822 ± 0.0410 1.2626 ± 0.0256 Dropbox 1 Level 0.9895 ± 0.0000 1.1067 ± 0.0000 1.2108 ± 0.0000 1.1641 ± 0.0264 1.1113 ± 0.0000 2 Level 1.1191 ± 0.0966 1.2912 ± 0.0507 1.2193 ± 0.0613 1.1590 ± 0.0426 1.1794 ± 0.0472 3 Level 1.1489 ± 0.0905 1.3459 ± 0.0318 1.1751 ± 0.0889 1.1747 ± 0.0432 1.3099 ± 0.0288 Evernote 1 Level 0.9707 ± 0.0000 1.1173 ± 0.0000 1.1561 ± 0.0000 1.1660 ± 0.0490 1.0959 ± 0.0000 2 Level 1.0048 ± 0.0658 1.3185 ± 0.0671 1.1936 ± 0.0806 1.1725 ± 0.0385 1.1449 ± 0.0305 3 Level 1.1109 ± 0.1249 1.3876 ± 0.0475 1.1397 ± 0.0606 1.1916 ± 0.0452 1.3306 ± 0.0464 Facebook 1 Level 1.0722 ± 0.1251 1.1125 ± 0.0000 1.2397 ± 0.0000 1.1611 ± 0.0444 1.1435 ± 0.0000 2 Level 1.1471 ± 0.1405 1.2787 ± 0.0575 1.1766 ± 0.0346 1.1707 ± 0.0522 1.1124 ± 0.0168 3 Level 1.1993 ± 0.1170 1.4197 ± 0.0314 1.1573 ± 0.0550 1.1836 ± 0.0594 1.3036 ± 0.0319 InfoVis-Vast 1 Level 1.0569 ± 0.0000 1.0817 ± 0.0000 1.0517 ± 0.0000 1.1783 ± 0.0507 1.1137 ± 0.0000 2 Level 1.0604 ± 0.0589 1.0673 ± 0.0066 1.2105 ± 0.0526 1.1824 ± 0.0490 1.1466 ± 0.0570 3 Level 1.1091 ± 0.0881 1.2228 ± 0.0440 1.1617 ± 0.0639 1.2000 ± 0.0504 1.1820 ± 0.0457 Pinterest 1 Level 1.1468 ± 0.1059 1.1385 ± 0.0000 1.1699 ± 0.0000 1.1522 ± 0.0531 1.1640 ± 0.0000 2 Level 1.1850 ± 0.0991 1.2690 ± 0.0321 1.2962 ± 0.0771 1.1617 ± 0.0495 1.1918 ± 0.0331 3 Level 1.2467 ± 0.0772 1.2938 ± 0.0236 1.2070 ± 0.0655 1.1892 ± 0.0441 1.2269 ± 0.0244 TripAdvisor 1 Level 0.9554 ± 0.0000 1.0859 ± 0.0000 1.1188 ± 0.0000 1.1275 ± 0.0247 1.1113 ± 0.0000 2 Level 1.0465 ± 0.0663 1.2975 ± 0.0563 1.2573 ± 0.0611 1.1441 ± 0.0320 1.1783 ± 0.0327 3 Level 1.1155 ± 0.1087 1.3782 ± 0.0345 1.1744 ± 0.0646 1.1534 ± 0.0310 1.3204 ± 0.0206 Tweets 1 Level 0.9102 ± 0.0000 1.0165 ± 0.0000 1.2447 ± 0.0000 1.2161 ± 0.0827 1.1532 ± 0.0000 2 Level 1.0231 ± 0.0463 1.3655 ± 0.0525 1.2616 ± 0.0339 1.2282 ± 0.0679 1.1491 ± 0.0227 3 Level 1.0592 ± 0.1151 1.4350 ± 0.0246 1.2092 ± 0.0661 1.2242 ± 0.0650 1.3304 ± 0.0323 Uber 1 Level 1.0136 ± 0.0000 1.1040 ± 0.0000 1.1916 ± 0.0000 1.1716 ± 0.0340 1.1249 ± 0.0000 2 Level 1.0353 ± 0.0867 1.2679 ± 0.0557 1.2252 ± 0.0630 1.1684 ± 0.0371 1.1468 ± 0.0195 3 Level 1.1431 ± 0.1329 1.3794 ± 
0.0360 1.1277 ± 0.0600 1.1663 ± 0.0396 1.3048 ± 0.0474 Whatsapp 1 Level 0.9615 ± 0.0000 1.1816 ± 0.0000 1.2917 ± 0.0000 1.1685 ± 0.0421 1.1781 ± 0.0000 2 Level 0.9975 ± 0.0661 1.2235 ± 0.0270 1.2748 ± 0.0427 1.1675 ± 0.0401 1.2057 ± 0.0394 3 Level 1.1380 ± 0.1030 1.2257 ± 0.0110 1.2095 ± 0.0644 1.1773 ± 0.0430 1.2991 ± 0.0178 Table 17: W2V-L1 results by level of hierarchy compared with uHTM strategies.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8151–8160 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8151 Empower Entity Set Expansion via Language Model Probing Yunyi Zhang1, Jiaming Shen1, Jingbo Shang2, Jiawei Han1 1 University of Illinois at Urbana-Champaign, IL, USA 2 University of California San Diego, CA, USA 1{yzhan238, js2, hanj}@illinois.edu 2 [email protected] Abstract Entity set expansion, aiming at expanding a small seed entity set with new entities belonging to the same semantic class, is a critical task that benefits many downstream NLP and IR applications, such as question answering, query understanding, and taxonomy construction. Existing set expansion methods bootstrap the seed entity set by adaptively selecting context features and extracting new entities. A key challenge for entity set expansion is to avoid selecting ambiguous context features which will shift the class semantics and lead to accumulative errors in later iterations. In this study, we propose a novel iterative set expansion framework that leverages automatically generated class names to address the semantic drift issue. In each iteration, we select one positive and several negative class names by probing a pre-trained language model, and further score each candidate entity based on selected class names. Experiments on two datasets show that our framework generates high-quality class names and outperforms previous state-of-the-art methods significantly. 1 Introduction Entity set expansion aims to expand a small set of seed entities (e.g., {“United States”, “China”, “Canada”}) with new entities (e.g., “United Kingdom”, “Australia”) belonging to the same semantic class (i.e., Country). The entities so discovered may benefit a variety of NLP and IR applications, such as question answering (Wang et al., 2008), query understanding (Hua et al., 2017), taxonomy construction (Shen et al., 2018a), and semantic search (Xiong et al., 2017; Shen et al., 2018b). Most existing entity set expansion methods bootstrap the initial seed set by iteratively selecting context features (e.g., co-occurrence words (Pantel et al., 2009), unary patterns (Rong et al., 2016), and coordinational patterns (Mamou et al., 2018)), Entities Hearst Pattern [NP0] such as [NP1],[NP2], and [NP3] {USA, China, Canada} [MASK] such as USA, China, and Canada Class-probing Query Class Name Entity Hearst Pattern [NP0], [NP1],or other [NP2] countries Canada Entity-probing Query Canada, [MASK], or other countries Retrieved Class Names countries states large countries … cities … Retrieved Entities Japan United Kingdom Mexico … Toronto … LanguageModel (e.g. BERT/XLNet) Figure 1: Examples of class-probing and entityprobing queries generated based on Hearst patterns. while extracting and ranking new entities. A key challenge to set expansion is to avoid selecting ambiguous patterns that may introduce erroneous entities from other non-target semantic classes. Take the above class Country as an example, we may find some ambiguous patterns like “* located at” (which will match more general Location entities) and “match against *” (which may be associated with entities in the Sports Club class). Furthermore, as bootstrapping is an iterative process, those erroneous entities added at early iterations may shift the class semantics, leading to inferior expansion quality at later iterations. 
Addressing such “semantic drift” issue without requiring additional user inputs (e.g., mutually exclusive classes (Curran et al., 2007) and negative example entities (Jindal and Roth, 2011)) remains an open research problem. In this study, we propose to empower entity set expansion with class names automatically generated from pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019). Intuitively, knowing the class name is “country”, instead of “state” or “city”, can help us identify unambiguous patterns and eliminate erroneous entities like “Europe” and “New York”. Moreover, we can acquire such knowledge (i.e., positive and negative class names) by probing a pre-trained language 8152 model automatically without relying on human annotated data. Motivated by the above intuition, we propose a new iterative framework for entity set expansion that consists of three modules: (1) The first, class name generation module, constructs and submits class-probing queries (e.g., “[MASK] such as USA, China, and Canada.” in Fig. 1) to a language model for retrieving a set of candidate class names. (2) The second, class name ranking module, builds an entity-probing query for each candidate class name and retrieves a set of entities. The similarity between this retrieved set and the current entity set serves as a proxy for the class name quality, based on which we rank all candidate class names. An unsupervised ensemble technique (Shen et al., 2017) is further used to improve the quality of final ranked list from which we select one best class name and several negative class names. (3) The third, class-guided entity selection module, scores each entity conditioned on the above selected class names and adds top-ranked entities into the currently expanded set. As better class names may emerge in later iterations, we score and rank all entities (including those already in the expanded set) at each iteration, which helps alleviate the semantic drift issue. Contributions. In summary, this study makes the following contributions: (1) We propose a new set expansion framework that leverages class names to guide the expansion process and enables filtration of the entire set in each iteration to resolve the semantic drift issue; (2) we design an automatic class name generation algorithm that outputs highquality class names by dynamically probing pretrained language models; and (3) experiments on two public datasets from different domains demonstrate the superior performance of our approach compared with state-of-the-art methods. 2 Background In this section, we provide background on language models and define the entity set expansion problem. 2.1 Language Model A standard language model (LM) inputs a word sequence w = [w1, w2, . . . , wn] and assigns a probability P(w) to the whole sequence. Recent studies (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019) found that language models, simply trained for next word or missing word prediction, can generate high quality contextualized word representations which benefit many downstream applications. Specifically, these language models will output an embedding vector for each word appearance in a specific context that is usually the entire sentence where the target word occurs, rather than just words appearing before the target word. Therefore, we can also view a LM as a model that inputs a word sequence w and outputs a probability P(wi) = P(wi|w1, . . . , wi−1, wi+1, . . . , wn) to any position 1 ≤i ≤n. Currently, Devlin et al. 
(2019) propose BERT and train the language model with two objectives: (1) a cloze-filling objective which randomly substitutes some words with a special [MASK] token in the input sentence and forces LM to recover masked words, and (2) a binary classification objective that guides LM to predict whether one sentence directly follows another (sentence). BERT leverages Transformer (Vaswani et al., 2017) architecture and is learned on English Wikipedia as well as BookCorpus. More LM architectures are described in Section 5. 2.2 Problem Formulation We first define some key concepts and then present our problem formulation. Entity. An entity is a word or a phrase that refers to a real-world instance. For example, “U.S.” refers to the country: United States. Class Name. A class name is a text representation of a semantic class. For instance, country could be a class name for the semantic class that includes entities like “United States” and “China”. Probing Query. A probing query is a word sequence containing one [MASK] token. In this work, we utilize Hearst patterns (Hearst, 1992) to construct two types of probing queries: (1) A classprobing query aims to predict the class name of some given entities (e.g., “[MASK] such as United States and China”), and (2) an entity-probing query aims to retrieve entities that fit into the mask token (e.g., “countries such as [MASK] and Japan”). Problem Formulation. Given a text corpus D and a seed set of user-provided entities, we aim to output a ranked list of entities that belong to the same semantic class. Example 1. Given a seed set of three countries {“United States”, “China”, “Canada”}, we aim to return a ranked list of entities belonging to the same country class such as “United Kingdom”, “Japan”, and “Mexico”. 8153 3 Class-Guided Entity Set Expansion We introduce our class-guided entity set expansion framework in this section. First, we present our class name generation and ranking modules in Sections 3.1 and 3.2, respectively. Then, we discuss how to leverage class names to guide the iterative expansion process in Section 3.3. 3.1 Class Name Generation The class name generation module inputs a small collection of entities and generates a set of candidate class names for these entities. We build this module by automatically constructing classprobing queries and iteratively querying a pretrained LM to obtain multi-gram class names. First, we notice that the class name generation goal is similar to the hypernymy detection task which aims to find a general hypernym (e.g., “mammal”) for a given specific hyponym (e.g., “panda”). Therefore, we leverage the six Hearst patterns (Hearst, 1992)1, widely used for hypernymy detection, to construct the class-probing query. More specifically, we randomly select three entities in the current set as well as one Hearst pattern (out of six choices) to construct one query. For example, we may choose entities {“China”, “India”, “Japan”} and pattern “NPy such as NPa, NPb, and NPc” to construct the query “[MASK] such as China, India, and Japan”. By repeating such a random selection process, we can construct a set of queries and feed them into pre-trained language models to obtain predicted masked tokens which are viewed as possible class names. The above procedure has one limitation—it can only generate unigram class names. To obtain multi-gram class names, we design a modified beam search algorithm to iteratively query a pretrained LM. 
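To make a single probing step concrete before describing the beam search, the sketch below builds one class-probing query from the "such as" Hearst pattern and retrieves candidate class words from a pre-trained masked LM. It is a simplified illustration that assumes the Hugging Face transformers library; the model choice, the entity triple, and the example output are assumptions for illustration, not the authors' exact implementation.

```python
# Simplified illustration (assumes the Hugging Face `transformers` library;
# not necessarily the authors' implementation) of one class-probing step:
# fill the [MASK] of a Hearst-pattern query with a pre-trained masked LM.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def class_probe(entities, top_k=3):
    """Build a '[MASK] such as NP_a, NP_b, and NP_c' query and return the
    top-k predicted words for the masked hypernym slot."""
    query = (f"{fill_mask.tokenizer.mask_token} such as {entities[0]}, "
             f"{entities[1]}, and {entities[2]}.")
    return [pred["token_str"].strip() for pred in fill_mask(query, top_k=top_k)]

print(class_probe(["China", "India", "Japan"]))
# e.g. ['countries', 'nations', ...]; the iterative beam search described
# next re-inserts each predicted word after the mask ("[MASK] countries
# such as ...") to grow multi-gram class names.
```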
Specifically, after we query a LM for the first time and retrieve top K most likely words (for the masked token), we construct K new queries by adding each retrieved word after the masked token. Taking the former query “[MASK] such as China, India, and Japan” as an example, we may first obtain words like “countries”, “nations”, and then construct a new query “[MASK] countries such as China, India, and Japan”. Probing the LM again with this new query, we can get words like “Asian” or “large”, and obtain more fine-grained class names like “Asian countries” or “large coun1For example, the pattern “NPy such as NPa” indicates that noun phrase y is a hypernym of noun phrase a. tries”. We repeat this process for maximum three times and keep all generated class names that are noun phrases2. As a result, for each Hearst pattern and randomly selected three entities from the current set, we will obtain a set of candidate class names. Finally, we use the union of all these sets as our candidate class name pool, denoted as C. Note that in this module, we focus on the recall of candidate class name pool C, without considering its precision, since the next module will further rank and select these class names based on the provided text corpus. 3.2 Class Name Ranking In this module, we rank the above generated candidate class names to select one best class name that represents the whole entity set and some negative class names used in the next module to filter out wrong entities. A simple strategy is to rank these class names based on the number of times it has been generated in the previous module. However, such a strategy is sub-optimal because short unigram class names always appear more frequently than longer multi-gram class names. Therefore, we propose a new method below to measure how well each candidate class name represents the entity set. First, we introduce a corpus-based similarity measure between an entity e and a class name c. Given the class name c, we first construct 6 entityprobing queries by masking the hyponym term in six Hearst patterns3, and query a pre-trained LM to obtain the set of six [MASK] token embeddings, denoted as Xc. Moreover, we use Xe to denote the set of all contextualized representations of the entity e in the given corpus. Then, we define the similarity between e and c, as: M k(e, c) = 1 k max X⊆Xe,|X|=k X x∈X max x′∈Xc cos(x, x′), (1) where cos(x, x′) is the cosine similarity between two vectors x and x′. The inner max operator finds the maximum similarity between each occurrence of e and the set of entity-probing queries constructed based on c. The outer max operator identifies the top-k most similar occurrences of e with the queries and then we take their average as the final similarity between the entity e and the class name c. This measure is analogous to finding 2Therefore, class names likes “and countries” and “, countries” are filtered out. 3For example, a query for class name “countries” is “countries such as [MASK]”. 
8154 China India Canada United States Korea Canada United States China Korea China India United States Canada India [MASK] such as United States, China, and Canada [MASK] such as China, India, and Korea [MASK] such as Canada, India, and United States … Europe United Kingdom Japan , countries … countries states large countries and states countries nations Asian countries developing countries countries states … … … , nations large countries commonwealth countries … , states Candidate Class Names …… commonwealth countries developing countries nations Asian countries states countries large countries (1) Class Name Generation Module United States … … 0.765 0.825 … 0.819 0.728 states cities … countries large countries China … … 0.760 0.861 … 0.848 0.753 states territories … Asian countries countries … Rank list L1 Current Set E L|E| Rank list … (2) Class Name Ranking Module Positive Class Name: Negative Class Names: …… territories states cities countries countries such as [MASK] {China, United States, Canada} + countries, including [MASK] {China, India, Korea} + [MASK] and other countries {United States, India, Korea} + Queries and sampled entities subsets … … Japan Thailand Singapore … Malaysia Singapore Europe Filter Filter Filter … United Kingdom Japan … Japan Thailand Singapore … Malaysia Singapore (3) Class-Guide Entity Selection Module …… United Kingdom Japan Singapore … … Selected New Entities … … Figure 2: Overview of one iteration in CGExpan framework. k best occurrences of entity e that matches to any of the probing queries of class c, and therefore it improves the previous similarity measures that utilize only the context-free representations of entities and class names (e.g., Word2Vec). After we define the entity-class similarity score, we can choose one entity in the current set and obtain a ranked list of candidate class names based on their similarities with this chosen entity. Then, given an entity set E, we can obtain |E| ranked lists, L1, L2, . . . , L|E|, one for each entity in E. Finally, we follow (Shen et al., 2017) and aggregate all these lists to a final ranked list of class names based on the score s(c) = P|E| i=1 1 ric , where ri c indicates the rank position of class name c in ranked list Li. This final ranked list shows the order of how well each class name can represent the current entity set. Therefore, we choose the best one that ranks in the first position as the positive class , denoted as cp. Aside from choosing the positive class name cp, we also select a set of negative class names for the target semantic class to help bound its semantics. To achieve this goal, we assume that entities in the initial user-provided seed set E0 definitely belong to the target class. Then, we choose those class names that rank lower than cp in all lists corresponding to entities in E0, namely {Li|ei ∈E0}, and treat them as the negative class names. We refer to this negative set of class names as CN and use them to guide the set expansion process below. 3.3 Class-Guided Entity Selection In this module, we leverage the above selected positive and negative class names to help select new entities to add to the set. We first introduce two entity scoring functions and then present a new rank ensemble algorithm for entity selection. The first function utilizes the positive class name cp and calculates each entity ei’s score : scoreloc i = M k(ei, cp), (2) where Mk is defined in Eq. (1). 
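In code, Eq. (1) amounts to a max over probing-query vectors for each occurrence of the entity, followed by an average over the k best occurrences; the local score in Eq. (2) is simply this quantity computed against the positive class name. A minimal numpy sketch, assuming the contextualized vectors have already been extracted from the corpus and from the probing queries (function and variable names are illustrative, not the authors' code):

```python
# Minimal sketch of the entity-class similarity M^k(e, c) in Eq. (1) and of
# the local score in Eq. (2); contextualized vectors are assumed precomputed.
import numpy as np

def mk_similarity(entity_vecs, class_vecs, k=5):
    """entity_vecs: (n_occurrences, d) contextualized embeddings of entity e
       in the corpus; class_vecs: (n_queries, d) [MASK] embeddings of the six
       entity-probing queries built from class name c."""
    e = entity_vecs / np.linalg.norm(entity_vecs, axis=1, keepdims=True)
    c = class_vecs / np.linalg.norm(class_vecs, axis=1, keepdims=True)
    # inner max of Eq. (1): best-matching probing query for each occurrence
    per_occurrence = (e @ c.T).max(axis=1)
    # outer max of Eq. (1): average over the k most similar occurrences
    # (falls back to all occurrences if the entity appears fewer than k times)
    return np.sort(per_occurrence)[-k:].mean()

def local_score(entity_vecs, positive_class_vecs, k=5):
    # Eq. (2): the local score is M^k against the positive class name c_p
    return mk_similarity(entity_vecs, positive_class_vecs, k)
```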
We refer to this score as a local score because it only looks at top-k best occurrences in the corpus where the contextualized representation of entity ei is most similar to the representation of class name cq. The second scoring function calculates the similarity between each candidate entity and existing entities in the current set, based on their contextfree representations. For each entity e, we use the average of all its contextualized embedding vectors as its context-free representation, denoted as ve. Given the current entity set E, we first sample several entities from E, denoted as Es, and calculate the score for each candidate entity ei as: scoreglb i = 1 |Es| X e∈Es cos(vei, ve). (3) Note here we sample a small set Es (typically of size 3), rather than using the entire set E. Since the current entity set E may contain wrong entities introduced in previous steps, we do not use all the entities in E and compute the candidate entity score only once. Instead, we randomly select multiple subsets of entities from the current set E, namely Es, obtain a ranked list of candidate entities for each sampled subset, and aggregate all ranked lists to select the final entities. Such a sampling strategy can reduce the effect of using wrong entities in E, as they are unlikely to be sampled multiple times, and thus can alleviate potential errors that are introduced in previous iterations. We refer to this score as a global score because it utilizes context-free representations which better reflect entities’ overall positions in the embedding space and measure the entity-entity similarity in a more global sense. Such a global score complements the above local score and we use their geometric mean to finally rank all candidate entities: scorei = q scoreloc i × scoreglb i . (4) As the expansion process iterates, wrong entities 8155 may be included in the set and cause semantic drifting. We develop a novel rank ensemble algorithm that leverages those selected class names to improve the quality and robustness of entity selection. First, we repeatedly sample Es (used for calculating scoreglb i in Eq. (3)) T times from current entity set E, and obtain T entity ranked lists {Rm}T m=1. Second, we follow the class name ranking procedure in Section 3.2 to obtain |E| class ranked lists {Ln}|E| n=1, one for each entity ei ∈E. Note here each Ln is actually a ranked list over {cp} ∪CN, namely the set of selected one positive class name and all negative class names. Intuitively, an entity belonging to our target semantic class should satisfy two criteria: (1) it appears at the top positions in multiple entity ranked lists, and (2) within its corresponding class ranked list, the selected best class name cp should be ranked above any one of the negative class name in CN. Combining these two criteria, we define a new rank aggregation score as follows: S(ei) = T X t=1 1(ei ∈E) + st(ei)  × 1(ri cp < min c′∈CN ri c′), (5) where 1(·) is an indicator function, ri c is the rank of class name c in entity ei’s ranked list Li c, and st(ei) the individual aggregation score of ei deduced from the ranked list Rt, for which we test two aggregation methods: (1) mean reciprocal rank, where st(ei) = 1 rt i (6) and rt i is the rank of entity ei in the t-th ranked list Rt; and (2) the combination of scores (CombSUM), where st(ei) = scoret i −minej∈Rt scoret j maxej∈Rt scoret j −minej∈Rt scoret j (7) is the ranking score of ei in the ranked list Rt after min-max feature scaling. To interpret Eq. 
5, the first summation term reflects our criterion (1) and its inner indicator function ensuring an entity in the current set E prone to have a large rank aggregation score if not been filtered out below. The second term reflects our criterion (2) by using an indicator function that filters out all entities which are more similar to a negative class name than the positive class name. Note here we calculate the aggregation score for all entities in Dataset # Test Queries # Entities # Sentences Wiki 40 33K 1.50M APR 15 76K 1.01M Table 1: Datasets statistics the vocabulary list, including those already in the current set E, and it is possible that some entity in E will be filtered out because it has 0 value in the second term. This makes a huge difference comparing with previous iterative set expansion algorithms which all assume that once an entity is included in the set, it will stay in the set forever. Consequently, our method is more robust to the semantic drifting issue than previous studies. Summary. Starting with a small seed entity set, we iteratively apply the above three modules to obtain an entity ranked list and add top-ranked entities into the set. We repeat the whole process until either (1) the expanded set reaches a pre-defined target size or (2) the size of the set does not increase for three consecutive iterations. Notice that, by setting a large target size, more true entities belonging to the target semantic class will be selected to expand the set, which increases the recall, but wrong entities are also more likely to be included, which decreases the precision. However, as the output of the set expansion framework is a ranked list, the most confident high-quality entities will still be ranked high in the list. 4 Experiments 4.1 Experiment Setup Datasets. We conduct our experiments on two public benchmark datasets widely used in previous studies (Shen et al., 2017; Yan et al., 2019): (1) Wiki, which is a subset of English Wikipedia articles, and (2) APR, which contains all news articles published by Associated Press and Reuters in 2015. Following the previous work, we adopt a phrase mining tool, AutoPhrase (Shang et al., 2018), to construct the entity vocabulary list from the corpus, and select the same 8 semantic classes for the Wiki dataset as well as 3 semantic classes for the APR dataset. Each semantic class has 5 seed sets and each seed set contains 3 entities. Table 1 summarizes the statistics for these datasets. Compared methods. We compare the following corpus-based entity set expansion methods. 1. Egoset (Rong et al., 2016): This is a multifaceted set expansion system using context features and Word2Vec embeddings. The original 8156 Methods Wiki APR MAP@10 MAP@20 MAP@50 MAP@10 MAP@20 MAP@50 Egoset (Rong et al., 2016) 0.904 0.877 0.745 0.758 0.710 0.570 SetExpan (Shen et al., 2017) 0.944 0.921 0.720 0.789 0.763 0.639 SetExpander (Mamou et al., 2018) 0.499 0.439 0.321 0.287 0.208 0.120 CaSE (Yu et al., 2019b) 0.897 0.806 0.588 0.619 0.494 0.330 MCTS (Yan et al., 2019) 0.980∇ 0.930∇ 0.790∇ 0.960∇ 0.900∇ 0.810∇ CGExpan-NoCN 0.968 0.945 0.859 0.909 0.902 0.787 CGExpan-NoFilter 0.990 0.975 0.890 0.979 0.962 0.892 CGExpan-Comb 0.991 0.974 0.895 0.983 0.984 0.937 CGExpan-MRR 0.995 0.978 0.902 0.992 0.990 0.955 Table 2: Mean Average Precision on Wiki and APR. “∇” means the number is directly from the original paper. framework aims to expand the set in multiple facets. Here we treat all expanded entities as in one semantic class due to little ambiguity in the seed set. 2. 
SetExpan (Shen et al., 2017): This method iteratively selects skip-gram context features from the corpus and develops a rank ensemble mechanism to score and select entities. 3. SetExpander (Mamou et al., 2018): This method trains different embeddings based on different types of context features and leverages additional human-annotated sets to build a classifier on top of learned embeddings to predict whether an entity belongs to the set. 4. CaSE (Yu et al., 2019b): This method combines entity skip-gram context feature and embedding features to score and rank entities once from the corpus. The original paper has three variants and we use the CaSE-W2V variant since it is the best model claimed in the paper. 5. MCTS (Yan et al., 2019): This method bootstraps the initial seed set by combing the Monte Carlo Tree Search algorithm with a deep similarity network to estimate delayed feedback for pattern evaluation and to score entities given selected patterns. 6. CGExpan: This method is our proposed Class-Guided Set Expansion framework, using BERT (Devlin et al., 2019) as the pre-trained language model. We include two versions of our full model, namely CGExpan-Comb and CGExpan-MRR, that use the combination of score and mean reciprocal rank for rank aggregation, respectively. 7. CGExpan-NoCN: An ablation of CGExpan that excludes the class name guidance. Therefore, it only incorporates the average BERT representation to select entities. 8. CGExpan-NoFilter: An ablation of CGExpan CGExpan vs. Other MAP@10 MAP@20 MAP@50 vs. SetExpan 100% 94.5% 87.3% vs. CGExpan-NoFilter 100% 94.5% 58.2% vs. CGExpan-NoCN 100% 94.5% 70.9% Table 3: Ratio of seed entity set queries on which the first method reaches better or the same performance as the second method. that excludes the negative class name selection step and uses only the single positive class name in the entity selection module. Evaluation Metric. We follow previous studies and evaluate set expansion results using Mean Average Precision at different top K positions (MAP@K) as below: MAP@K = 1 |Q| X q∈Q APK(Lq, Sq), where Q is the set of all seed queries and for each query q, we use APK(Lq, Sq) to denote the traditional average precision at position K given a ranked list of entities Lq and a ground-truth set Sq. Implementation Details. For CGExpan, we use BERT-base-uncased4 as our pre-trained LM. For parameter setting, in the class name generation module (Sec. 3.1), we take top-3 predicted tokens in each level of beam search and set the maximum length of generated class names up to 3. When calculating the similarity between an entity and a class name (Eq. 1), we choose k = 5, and will later provide a parameter study on k in the experiment. Also, since MAP@K for K = 10, 20, 50 are typically used for set expansion evaluations, we follow the convention and choose 50 as the target set size in our experiments.5 4In principle, other masked LMs such as RoBERTa and XLNet can also be used in our framework. 5The code and data are available at https://github. com/yzhan238/CGExpan 8157 Methods Wiki APR MAP@{10/20/50} MAP@{10/20/50} Oracle-Full 0.991/0.976/0.891 1.000/1.000/0.964 Oracle-NoFilter 0.994/0.983/0.887 0.988/0.966/0.894 CGExpan 0.995/0.978/0.902 0.992/0.990/0.955 Table 4: Compared to oracle models knowing ground truth class names, CGExpan automatically generates class names and achieves comparative performances. 4.2 Experiment Results Overall Performance. Table 2 shows the overall performance of different entity set expansion methods. 
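(As a side note on evaluation, the MAP@K figures in Table 2 and the following tables can be computed from each ranked list with a few lines of code; the sketch below is an illustrative implementation of the metric defined above, using one common normalization convention, and is not the authors' evaluation script.)

```python
# Illustrative implementation of MAP@K (not the authors' evaluation script);
# normalizing AP by min(K, |gold set|) is one common convention and may
# differ from the exact convention used in the paper.
def average_precision_at_k(ranked, gold, k):
    hits, precision_sum = 0, 0.0
    for rank, entity in enumerate(ranked[:k], start=1):
        if entity in gold:
            hits += 1
            precision_sum += hits / rank   # precision at each hit position
    return precision_sum / min(k, len(gold)) if gold else 0.0

def map_at_k(ranked_lists, gold_sets, k):
    # mean over all seed-set queries Q
    aps = [average_precision_at_k(r, g, k)
           for r, g in zip(ranked_lists, gold_sets)]
    return sum(aps) / len(aps)
```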
We can see that CGExpan along with its ablations in general outperform all the baselines by a large margin. Comparing with SetExpan, the full model CGExpan achieves 24% improvement in MAP@50 on the Wiki dataset and 49% improvement in MAP@50 on the APR dataset, which verifies that our class-guided model can refine the expansion process and reduce the effect of erroneous entities on later iterations. In addition, CGExpan-NoCN outperforms most baseline models, meaning that the pre-trained LM itself is powerful to capture entity similarities. However, it still cannot beat CGExpan-NoFilter model, which shows that we can properly guide the set expansion process by incorporating generated class names. Moreover, by comparing our full model with CGExpan-NoFilter, we can see that negative class names indeed help the expansion process by estimating a clear boundary for the target class and filtering out erroneous entities. Such an improvement is particularly obvious on the APR dataset. The two versions of our full model overall have comparable performance, but CGExpan-MRR consistently outperforms CGExpan-Comb. To explain such a difference, empirically we observe that highquality entities tend to rank high in most of the ranked lists. Therefore, we use the MRR version for the rest of our experiment, denoted as CGExpan. Fine-grained Performance Analysis. Table 3 reports more fine-grained comparison results between two methods. Specifically, we calculate the ratio of seed entity set queries (out of total 55 queries) on which one method achieves better or the same performance as the other method. We can see that CGExpan clearly outperforms SetExpan and its two variants on the majority of queries. In Table 4, we further compare CGExpan with two “oracle” models that have the access to ground truth class names. Results show that CGExpan can 1 2 3 4 5 6 7 8 9 k 0.86 0.88 0.90 0.92 0.94 0.96 0.98 1.00 MAP scores MAP@10 MAP@20 MAP@50 1 2 3 4 5 6 7 8 9 k 0.86 0.88 0.90 0.92 0.94 0.96 0.98 1.00 MAP scores MAP@10 MAP@20 MAP@50 Figure 3: Performance for different k values on Wiki (left) and APR (right). achieve comparative results as those oracle models, which indicates the high quality of generated class names and effectiveness of CGExpan. Parameter Study. In CGExpan, we calculate the similarity between an entity and a class name based on its k occurrences that are most similar to the class name (cf. Eq. (1)). Figure 3 studies how this parameter k would affect the overall performance. We find that the model performance first increases when k increases from 1 to 5 and then becomes stable (in terms of MAP@10 and MAP@20) when k further increases to 10. Overall, we find k = 5 is enough for calculating entity-class similarity and CGExpan is insensitive to k as long as its value is larger than 5. 4.3 Case Studies Class Name Selection. Table 5 shows some results of our class name ranking module for several queries from different semantic classes in the Wiki dataset. We see that CGExpan is able to select the correct class name and thus injects the correct semantics in later entity selection module. Moreover, as shown in the last column, CGExpan can identify several negative class names that provide a tight boundary for the target semantic class, including sports and competition for sport league class, as well as city and country for Chinese province class. These negative class names help CGExpan avoid adding those related but erroneous entities into the set. 
From Table 5 we can see that it happens when the predicted positive class name is not exactly the ground true class name in the original dataset. However, since we use both the generated class names and currently expanded entities as guidance and select new entities according to the context features in the provided corpus, those imperfect class names can still guide the set expansion process and perform well empirically. Also, in principle, synonyms of the positive class name can be wrongly selected as negative class names, which also happens but very rarely in our experiments. However, since these synonyms con8158 Seed Entity Set Ground True Class Name Positive Class Name Negative Class Names {“Intel”, “Microsoft”, “Dell”} company company product, system, bank, ... {“United States”, “China”, “Canada”} country country state, territory, island, ... {“ESPNews”, “ESPN Classic”, “ABC”} tv channel television network program, sport, show, ... {“NHL”, “NFL”, “American league”} sports league professional league sport, competition, ... {“democratic”, “labor”, “tories”} party political party organization, candidate, ... {“Hebei”, “Shandong”, “Shanxi”} Chinese province chinese province city, country, state, ... {“tuberculossi”, “Parkinson’s disease”, “esophageal cancer”} disease chronic disease symptom, condition, ... {“Illinois”, “Arizona”, “California”} US state state county, country, ... Table 5: Class names generated for seed entity sets. The 2nd column is the ground true class name in the original dataset. The 3rd and 4th columns are positive and negative class names predicted by CGExpan, respectively. sistently rank lower than the positive one for the initial seeds based on the given corpus, they are indeed not good class names for this specific corpus. Thus, misclassifying them will not have much influence on the performance of our model. Entity Selection. Table 6 shows expanded entity sets for two sample queries. After correctly predicting true positive class names and selecting relevant negative class names, CGExpan utilizes them to filter out those related but erroneous entities, including two TV shows in television network class and three entities in political party class. As a result, CGExpan can outperform CGExpan-NoFilter. 5 Related Work Entity Set Expansion. Traditional entity set expansion systems such as Google Sets (Tong and Dean, 2008) and SEAL (Wang and Cohen, 2007, 2008) typically submit a query consisting of seed entities to a general-domain search engine and extract new entities from retrieved web pages. These methods require an external search engine for online seed-oriented data collection, which can be costly. Therefore, more recent studies propose to expand the seed set by offline processing a corpus. These corpus-based set expansion methods can be categorized into two general approaches: (1) onetime entity ranking which calculates entity distributional similarities and ranks all entities once without back and forth refinement (Mamou et al., 2018; Yu et al., 2019b), and (2) iterative bootstrapping which aims to bootstrap the seed entity set by iteratively selecting context features and ranking new entities (Rong et al., 2016; Shen et al., 2017; Yan et al., 2019; Zhu et al., 2019; Huang et al., 2020). Our method in general belongs to the later category. 
Finally, there are some studies that incorporate extra knowledge to expand the entity set, including negative examples (Curran et al., 2007; McIntosh and Curran, 2008; Jindal and Roth, 2011), semistructured web table (Wang et al., 2015), and external knowledge base (Yu et al., 2019a). Particularly, Wang et al. (2015) also propose to use a class name to help expand the target set. However, their method requires a user-provided class name and utilizes web tables as additional knowledge, while our method can automatically generate both positive and negative class names and utilize them to guide the set expansion process. Language Model Probing. Traditional language models aim at assigning a probability for an input word sequence. Recent studies have shown that by training on next word or missing word prediction task, language models are able to generate contextualized word representations that benefit many downstream applications. ELMo (Peters et al., 2018) proposes to learn a BiLSTM model that captures both forward and backward contexts. BERT (Devlin et al., 2019) leverages the Transformer architecture and learns to predict randomly masked tokens in the input word sequence and to classify the neighboring relation between pair of input sentences. Based on BERT’s philosophy, RoBERTa (Liu et al., 2019) conducts more careful hyper-parameter tuning to improve the performance on downstream tasks. XLNet (Yang et al., 2019) further combines the ideas from ELMo and BERT and develops an autoregressive model that learns contextualized representation by maximizing the expected likelihood over permutations of the input sequence. Aside from generating contextualized representations, pre-trained language models can also serve as knowledge bases when being queried appropriately. Petroni et al. (2019) introduce the language model analysis probe and manually define probing queries for each relation type. By submitting those probing queries to a pre-trained LM, they show that we can retrieve relational knowledge and achieve competitive performance on various NLP tasks. More recently, Bouraoui et al. (2020) further analyze BERT’s ability to store relational knowledge by using BERT to automatically select high8159 Seed Entity Set CGExpan CGExpan-NoCN CGExpan-NoFilter 1 “Pb” 1 “NBC” 1 “Pb” 2 “ABC” 2 “CBS” 2 “Mtv” 3 “CBS” 3 “Disney Channel” 3 “ABC” ... ... ... 35 “Telemundo” 35 “ESPN Radio”* 35 “MyNetworkTV” 36 “Fox Sports Net” 36 “BBC America” 36 “ESPN2” 37 “Dateline NBC” 37 “G4” 37 “the Today Show”* 38 “Channel 4” 38 “Sirius Satellite Radio”* 38 “Access Hollywood”* 39 “The History Channel” 39 “TNT” 39 “Cartoon Network” {“ESPN”, “Discovery Channel”, “Comedy Central”} ... ... ... 1 “republican” 1 “national party” 1 “republican” 2 “likud” 2 “labour party” 2 “likud” 3 “liberal democrats” 3 “gop establishment”* 3 “liberal democrats” ... ... ... 40 “komeito” 40 “republican jewish coalition”* 40 “young voters”* 41 “centrist liberal democrats” 41 “british parliament”* 41 “bjp” 42 “aipac”* 42 “tea party patriots”* 42 “religious”* 43 “aam aadmi party” 43 “centrist liberal democrats” 43 “congress”* 44 “ennahda” 44 “federal government”* 44 “lib dem” {“democratic party”, “republican party”, “labor party”} ... ... Table 6: Expanded entity sets for two sample queries, with erroneous entities colored red and marked with a “*”. quality templates from text corpus for new relation prediction. 
Comparing with previous work, in this paper, we show that probing pre-trained language model works for entity set expansion task, and we propose a new entity set expansion framework that combines corpus-independent LM probing with corpus-specific context information for better expansion performance. 6 Conclusions In this paper, we propose a new entity set expansion framework that can use a pre-trained LM to generate candidate class names for the seed set, rank them according to the provided text corpus, and guide the entity selection process with the selected class names. Extensive experiments on the Wiki and APR datasets demonstrate the effectiveness of our framework on both class name prediction and entity set expansion. In the future, we plan to expand the method scope from expanding concrete entity sets to more abstract concept sets. For example, we may expand the set {“machine translation”, “information extraction”, “syntactic parsing”} to acquire more NLP task concepts. Another interesting direction is to generate a class name hierarchy via language model probing. Acknowledgments Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and SocialSim Program No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. We thank anonymous reviewers for valuable feedback. References Zied Bouraoui, Jose Camacho-Collados, and Steven Schockaert. 2020. Inducing relational knowledge from bert. In AAAI. James R. Curran, Tara Murphy, and Bernhard Scholz. 2007. Minimising semantic drift with mutual exclusion bootstrapping. In PAACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING. Wen Hua, Zhongyuan Wang, Haixun Wang, Kai Zheng, and Xiaofang Zhou. 2017. Understand short texts by harvesting and analyzing semantic knowledge. In TKDE. Jiaxin Huang, Yiqing Xie, Yu Meng, Jiaming Shen, Yunyi Zhang, and Jiawei Han. 2020. Guiding corpus-based set expansion by auxiliary sets generation and co-expansion. In WebConf. 8160 Prateek Jindal and Dan Roth. 2011. Learning from negative examples in set-expansion. In ICDM. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, and Daniel Korat. 2018. Term set expansion based nlp architect by intel ai lab. In EMNLP. Tara McIntosh and James R. Curran. 2008. Weighted mutual exclusion bootstrapping for domain independent lexicon and template acquisition. In ALTA. Patrick Pantel, Eric Crestan, Arkady Borkovsky, AnaMaria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. 
Deep contextualized word representations. In NAACL-HLT. Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In EMNLP. Xin Rong, Zhe Chen, Qiaozhu Mei, and Eytan Adar. 2016. Egoset: Exploiting word ego-networks and user-generated ontology for multifaceted set expansion. In WSDM. Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R. Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. TKDE. Jiaming Shen, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, and Jiawei Han. 2017. Setexpan: Corpus-based set expansion via context feature selection and rank ensemble. In ECML/PKDD. Jiaming Shen, Zeqiu Wu, Dongming Lei, Chao Zhang, Xiang Ren, Michelle T. Vanni, Brian M. Sadler, and Jiawei Han. 2018a. Hiexpan: Task-guided taxonomy construction by hierarchical tree expansion. In KDD. Jiaming Shen, Jinfeng Xiao, Xinwei He, Jingbo Shang, Saurabh Sinha, and Jiawei Han. 2018b. Entity set search of scientific literature: An unsupervised ranking approach. In SIGIR. Simon Tong and Jeff Dean. 2008. System and methods for automatically creating lists. US Patent 7,350,187. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Chi Wang, Kaushik Chakrabarti, Yeye He, Kris Ganjam, Zhimin Chen, and Philip A. Bernstein. 2015. Concept expansion using web tables. In WWW. Richard C. Wang and William W. Cohen. 2007. Language-independent set expansion of named entities using the web. In ICDM. Richard C. Wang and William W. Cohen. 2008. Iterative set expansion of named entities using the web. In ICDM. Richard C. Wang, Nico Schlaefer, William W. Cohen, and Eric Nyberg. 2008. Automatic set expansion for list question answering. In EMNLP. Chenyan Xiong, Russell Power, and James P. Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In WWW. Lingyong Yan, Xianpei Han, Le Sun, and Ben He. 2019. Learning to bootstrap for entity set expansion. In EMNLP. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurlPS. Jifan Yu, Chenyu Wang, Gan Luo, Lei Hou, Juan-Zi Li, Zhiyuan Liu, and Jie Tang. 2019a. Course concept expansion in moocs with external knowledge and interactive game. In ACL. Puxuan Yu, Zhiqi Huang, Razieh Rahimi, and James D Allan. 2019b. Corpus-based set expansion with lexical features and distributed representations. In SIGIR. Wanzheng Zhu, Hongyu Gong, Jiaming Shen, Chao Zhang, Jingbo Shang, Suma Bhat, and Jiawei Han. 2019. Fuse: Multi-faceted set expansion by coherent clustering of skip-grams. ArXiv, abs/1910.04345.
Feature Projection for Improved Text Classification
Qi Qin1,2,∗, Wenpeng Hu3,∗, Bing Liu2,†
1 Center for Data Science, AAIS, Peking University
2 Wangxuan Institute of Computer Technology, Peking University
3 Department of Information Science, Peking University
{qinqi, wenpeng.hu, dcsliub}@pku.edu.cn
∗ Equal contribution. † Corresponding author.

Abstract

In classification, there are usually some good features that are indicative of class labels. For example, in sentiment classification, words like good and nice are indicative of the positive sentiment and words like bad and terrible are indicative of the negative sentiment. However, there are also many common features (e.g., words) that are not indicative of any specific class (e.g., voice and screen, which are common to both sentiment classes and are not discriminative for classification). Although deep learning has made significant progress in generating discriminative features through its powerful representation learning, we believe there is still room for improvement. In this paper, we propose a novel angle to further improve this representation learning, i.e., feature projection. This method projects existing features into the orthogonal space of the common features. The resulting projection is thus perpendicular to the common features and more discriminative for classification. We apply this new method to improve CNN, RNN, Transformer, and Bert based text classification and obtain markedly better results.

1 Introduction

Text classification is an important task in natural language processing and text mining. It has a very wide range of applications, such as sentiment classification (Liu, 2012), question classification (Li and Roth, 2002), and deception detection (Liu, 2012; Feng et al., 2012). In recent years, deep learning models have been shown to outperform traditional classification methods (Kim, 2014; Iyyer et al., 2015; Tang et al., 2015; Dai and Le, 2015; Jin et al., 2016; Joulin et al., 2017; Shen et al., 2018). Given the input document, the system applies a mapping function (e.g., averaging or summation, a convolutional neural network (CNN), a recurrent neural network (RNN), and so on) to learn a dense representation of the document and then uses this representation to perform the final classification. Representation learning is one of the key strengths of deep learning. In this paper, we propose to further improve the representation learning, i.e., to make the representation more discriminative for classification. Note that throughout the paper we will use sentence sentiment classification as an example to explain different ideas, but in our experiments, non-sentiment classification datasets are also used to show the generality of the proposed method.

For text classification, many neural networks and embedding techniques have been devised and applied, e.g., RNN, CNN, Transformer (Vaswani et al., 2017) and Bert (Devlin et al., 2018). For example, RNN can model the whole sentence and also capture the long-term dependencies within the sentence. However, modeling the entire sequence may neglect some key local contexts that are important for classification (Yin et al., 2017). CNN is able to extract more local and position-invariant features (Scherer et al., 2010; Collobert et al., 2011).
However, these methods may not give enough weight to some special or discriminative words. To solve this problem, the attention mechanism was introduced. For example, by exploiting attention, Transformer and Bert (which maximizes Transformer's ability to extract sentence semantic information) can achieve even better results than both CNN and RNN on many tasks. We will see some other related methods to produce effective representations in the related work section.

Although the existing models are already able to produce excellent representations, we will show that these representations can still be improved. This paper explores an entirely different direction, i.e., feature projection. In a typical sentence or document, there are usually some words or features that are correlated with some class labels, but there are also many other common features that cannot distinguish different classes. For example, in sentiment classification, words like Good and Nice are indicative of the positive sentiment, and words like Bad and Terrible are indicative of the negative sentiment. Words like picture, price, and battery are not indicative of any sentiment, i.e., they are not discriminative. However, they may still interfere with representation learning and lead to sub-optimal feature representations for the final classification. The attention mechanism can alleviate this problem to some extent by giving higher weights to words associated with classes and lower weights to words that are not indicative of any specific class. However, due to the idiosyncrasy of the data and the inaccuracy of the attention mechanism, the problem remains.

In this paper, we propose a novel feature projection method to improve feature representation learning and make it more discriminative for classification. The proposed method is called Feature Purification Network (FP-Net). Specifically, FP-Net consists of two sub-networks, a common feature learning network referred to as the C-net and a projection network referred to as the P-net. C-net uses a Gradient Reverse Layer (GRL) (Ganin and Lempitsky, 2014; Zhang et al., 2019) to extract common features $\vec{b}$ (i.e., invariant features (Zhang et al., 2019)) that are shared by multiple classes and have little discriminative power for classification. At the same time, P-net uses a traditional feature extractor to learn the feature vector $\vec{a}$ for the input sentence or document. The feature (or representation) vector $\vec{a}$ is then projected onto the common feature vector $\vec{b}$ to get a projection vector $\vec{c}$, which represents the input sentence's own common features. Then, we project the feature vector $\vec{a}$ onto the orthogonal direction of the common feature vector $\vec{c}$ to produce the final, purer features for classification. Intuitively, this orthogonal projection gets rid of the common features and makes the system focus on the discriminative features only. We will explain why two projections are used in Section 3.

In summary, the key contribution of this paper is the improvement of representation learning through feature vector projection. To the best of our knowledge, this is the first such technique. Specifically, an Orthogonal Projection Layer (OPL) is proposed to map the features obtained by a traditional feature extractor to a classification-specific semantic space that is orthogonal to the common features, such that we obtain a more relevant and discriminative (or purer) feature representation from the original document for classification.
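To make the two projection steps concrete before the formal treatment in Section 3, here is a minimal NumPy sketch of projecting a feature vector onto a common-feature vector and then keeping only the orthogonal component. The vector values and the names a and b are illustrative placeholders, not values from the paper.

```python
import numpy as np

def project(x, y):
    # Project vector x onto the direction of vector y.
    return (np.dot(x, y) / np.dot(y, y)) * y

# a: full feature vector of the input; b: common-feature vector (made-up values).
a = np.array([2.0, 1.0, 0.5])
b = np.array([1.0, 0.0, 0.0])

c = project(a, b)        # the input's own common features
purified = a - c         # component of a orthogonal to the common features

print(purified, np.dot(purified, b))  # the dot product is ~0: purified is orthogonal to b
```

The purified vector keeps only the directions that the common-feature vector cannot explain, which is exactly the intuition behind FP-Net's orthogonal projection.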
Extensive experiments have been conducted to verify the effectiveness of the proposed method on two sentence sentiment classification datasets, MR and SST2, a natural language inference dataset, SNLI, and a question classification dataset, TREC. The results show that the proposed method can markedly improve the classification accuracy of RNN, CNN, Transformer and Bert based classification methods, which shows that feature projection is a highly promising direction to explore.

2 Related Work

It is well known that one of the key strengths of deep neural networks is their superb ability to learn highly effective representations or features from the raw data, which have been shown to be very successful for all kinds of applications, including natural language processing tasks such as text classification (Jin et al., 2016), machine translation (Bahdanau et al., 2014; Vaswani et al., 2017), dialogue (Wang and Jiang, 2016), etc. Previous work on learning representations broadly falls into two main categories: supervised and unsupervised methods. Our work focuses on improving the representation of text for supervised classification.

Supervised methods: These methods improve data utilization efficiency and discriminative feature distillation as they can obtain better training signals from the labeled data. Sequence models such as recurrent neural networks (RNN), long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Chung et al., 2014) networks are suitable for handling text because a sentence or document can be regarded as a sequence. Therefore, a large amount of work based on RNN and its variants for feature extraction and downstream tasks has been done (Tang et al., 2015; Wang and Tian, 2016; He et al., 2016). Unlike RNN's sequence modeling approach, CNN (Convolutional Neural Network) uses different sized windows to capture local correlations and position-invariant information (Kim, 2014; Conneau et al., 2016; Lai et al., 2015; Xiao and Cho, 2016; Wang, 2018). A common approach of these methods is to create an instance-level representation by using the final hidden state of the RNN, the maximum (or average) pooling of the RNN hidden states, or convolutional n-grams. However, they may ignore the importance of special words that are highly discriminative for classification. After Bahdanau et al. (2014) introduced the attention mechanism in machine translation, the attention mechanism has been exploited in many natural language processing tasks, including text classification, to solve the above problem. For example, Yang et al. (2016) introduced attention as an integral part of the model for text classification. Lin et al. (2017) proposed a new model for extracting interpretable sentence embeddings using self-attention. Ma et al. (2018) showed that the attention mechanism is also effective for sentiment classification. Vaswani et al. (2017) further illustrated that they can get a stronger sentence-level representation by stacking multiple blocks of self-attention. Bert (Devlin et al., 2018) combines Transformer and a large corpus to produce an even more complete and better sentence-level representation.
Some other studies improved the representation of sentences from the perspective of language structures (e.g., parse trees and dependency trees) (Tai et al., 2015; Mou et al., 2015). Subramanian et al. (2018) utilized a single multi-task framework to combine the benefits of diverse sentence representation learning objectives. However, to the best of our knowledge, these existing works and others have not used feature projection to improve (or purify) representations for supervised learning, which we believe is a promising direction to explore. Unsupervised methods: These methods utilize a large unlabeled text corpus to learn word representations which are then composed into sentence and document representations. For example, Kiros et al. (2015) constructed sentence representations by trying to reconstruct neighbouring sentences. Hill et al. (2016) proposed a log-linear bag-of-words models for sentence representation. The unsupervised smooth inverse frequency method in (Ethayarajh, 2018) built on this but used a weighted average of word embeddings and principal component removal for sentence representations. Our work is again clearly different from these unsupervised methods as the proposed method works under supervised learning. Existing unsupervised methods also do not use feature projection. Some other works have also been done for semisupervised representation learning (Kevin Clark, 2018) and transfer learning (Tamaazousti et al., 2018). Jason Phang (2019) also proposed to use some data-rich intermediate supervised tasks for pre-training to help produce better representation for the end task. To the best of our knowledge, all these previous studies tried to improve representations using external data or knowledge, which are quite different from our method as we don’t use any external information. Also, the philosophy of our approach is entirely different as we try to eliminate commonalities among classes through feature projection, which is orthogonal to existing representation learning approaches. Finally, our work is related to several other works. Ganin and Lempitsky (2014) introduced the gradient reverse layer (GRL) for extracting common features in the context of domain adaptation. It embeds domain adaptation into the process of learning representations so that the final classification decision has more discriminative and invariant characteristics for domain changes. We also use GRL to extract irrelevant or common features. However, we do not work on domain adaptation and they do not use feature projection. Belinkov et al. (2019) used adversarial learning to encourage models to learn representations free of hypothesis-only biases in the SNLI dataset. Zhang et al. (2019) combined GRL and aspect attention to study cross-domain sentiment classification. They found common features across domains and then extracted information from the aspects (which are product features) with the help of common features to do classifications. Our work is clearly different because none of these existing works improve representation learning through feature projection. 3 Feature Purification Network The overall framework of our model is shown in Figure 1. The whole model consists of two parts, the first part is the projection network (i.e., P-net) and the other is the common feature learning network (i.e., C-net). 
As mentioned earlier, the goal of C-net is to extract common features, and the goal of P-net is to compute the purified features for classification, which is done by projecting the learned full information vector of the input document into a more discriminative semantic space to eliminate the influence of the common features.

Figure 1: The architecture of FP-Net.

P-net consists of four parts: the input layer X, the feature extractor $F_p$, the Orthogonal Projection Layer (OPL), and the final classification layer $C_p$. C-net is also composed of four parts: the input layer X, the feature extractor $F_c$ ($F_p$ and $F_c$'s parameters are not shared)1, the Gradient Reverse Layer (GRL) and the classification layer $C_c$. The key idea of the proposed technique is as follows: the feature vector $f_p$ computed by the feature extractor $F_p$ is projected onto the orthogonal direction of the feature vector $f_c$ extracted by $F_c$ of the C-net. That is, $f_p$ (the full information extracted from the input document) is projected into the discriminative semantic space to be purified for the final classification. However, in order to perform the orthogonal projection, two operations are required, which we will explain shortly. Next, we use CNN as an example feature extractor to detail each component of the proposed FP-Net.

1 The feature extractor can be any existing extractor. In this work, we verified the effectiveness of our purification network using CNN, RNN, Transformer, and Bert as feature extractors, as we will see in the experiment section.

CNN Extractor: Given a dataset $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is an input document of length $L$ (after padding or cutting) and $y_i$ is the label corresponding to the sample $x_i$, let $V_{ij} \in \mathbb{R}^k$ be the word vector corresponding to the $j$-th word of the document $x_i$, so that $X_i \in \mathbb{R}^{L \times k}$ is the embedding matrix of $x_i$. Recall that our FP-Net model consists of two sub-networks, i.e., P-net and C-net, with the same input $x_i$. The two sub-networks also have the same structure for the feature extractor CNN, but there are no shared parameters between them. The feature extractors of P-net and C-net are $F_p$ and $F_c$. We use $F_c$ as an example to introduce the working of the CNN. When the feature extractor $F_c$ receives $X_i$ from the input layer, $F_c$ extracts the advanced features $f_c$ from $X_i$ in the form of n-grams:

$f_c = [c_1, c_2, \ldots, c_{l-n+1}] = [c_j]_{j=1}^{l-n+1}$, (1)

where $c_j$ represents the output produced by the CNN's filter on $X_i[j : j+n-1, :]$. Mathematically, a convolution operation consists of a filter $W \in \mathbb{R}^{n \times k}$ and a bias $b \in \mathbb{R}$. Then $c_j$ can be expressed as:

$c_j = g(W \cdot X_i[j : j+n-1, :] + b)$, (2)

where $g$ is a nonlinear activation function such as ReLU. We use a max-pooling operation over the feature map and take the maximum value $f_c = \max\{f_c\}$ as the feature corresponding to this particular filter. The same feature extractor $F_p$ will also get the advanced features $f_p$ from the input layer. We refer to the features of the P-net and C-net respectively as

$f_p = \mathrm{CNN}_p(X)$, (3)
$f_c = \mathrm{CNN}_c(X)$. (4)

Other details of C-net will be introduced in the C-net Module, and likewise, additional details about P-net will be introduced in the P-net Module.
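As a rough illustration of the twin CNN extractors $F_p$ and $F_c$ described by Eqs. 1-4, the following PyTorch sketch builds two convolutional extractors with identical structure but unshared parameters. The layer sizes and the single filter size are simplifying assumptions made here for brevity; they are not the authors' exact configuration, which uses several filter sizes (see Section 4.3).

```python
import torch
import torch.nn as nn

class CNNExtractor(nn.Module):
    """One convolutional feature extractor (Eqs. 1-2), followed by max-pooling."""
    def __init__(self, emb_dim=200, num_filters=100, filter_size=3):
        super().__init__()
        # Conv1d over the token dimension plays the role of the n-gram filter W.
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=filter_size)
        self.act = nn.ReLU()

    def forward(self, X):                            # X: (batch, seq_len, emb_dim)
        c = self.act(self.conv(X.transpose(1, 2)))   # (batch, num_filters, l - n + 1)
        return c.max(dim=2).values                   # max-pooling -> (batch, num_filters)

# Two extractors with the same structure but unshared parameters (Eqs. 3-4).
F_p, F_c = CNNExtractor(), CNNExtractor()
X = torch.randn(8, 45, 200)      # a dummy batch of embedded documents
f_p, f_c = F_p(X), F_c(X)
```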
C-net Module: The goal of C-net is to extract the common features, which are the semantic information of the input example that is not discriminative for the classification task. As mentioned earlier, common features are those shared by all classes of the problem. The classifier $C_c$ should not use them to distinguish different classes. To obtain common features, we add a Gradient Reverse Layer (GRL) (Ganin and Lempitsky, 2014; Ganin et al., 2016) after the feature extractor $F_c$ to reverse the gradient direction. Through this training module, we can obtain the common features that are shared among classes. Without loss of generality, we can think of the gradient reverse layer as a "pseudo-function" defined by two incompatible equations describing its forward and back-propagation behaviors:

$\mathrm{GRL}(x) = x$, (5)
$\frac{\partial\, \mathrm{GRL}(x)}{\partial x} = -\lambda I$, (6)

where $\lambda$ is a hyper-parameter. We process the feature vector $f_c$ through GRL as $\mathrm{GRL}(f_c) = \tilde{f}_c$, which is then fed to the classifier $C_c$:

$Y_{GRL} = \mathrm{softmax}(\tilde{f}_c \cdot W_c + b_c)$, (7)
$Loss_c = \mathrm{CrossEntropy}(Y_{truth}, Y_{GRL})$, (8)

where $W_c$ and $b_c$ are the weights and bias of $C_c$, respectively. By optimizing the objective function $Loss_c$, the feature extractor $F_c$ is able to extract the common features of different classes.

P-net Module: The goal of P-net is to first extract the full semantic information from the input example and then project it into the semantic space purified for classification. In order to achieve this, we project the feature $f_p$ extracted by the feature extractor $F_p$ onto the orthogonal direction of the common feature $f_c$ extracted by $F_c$. The feature space orthogonal to the common feature vector should contain features that are pure and highly effective for classification (e.g., sentiment related information in sentiment classification). Projecting the traditional feature vector $f_p$ to this orthogonal feature space preserves the discriminative information and removes those common features of the classes that are unhelpful and even confusing to the classification task. The Orthogonal Projection Layer (OPL) helps us accomplish this goal. Figure 2 illustrates the idea of OPL using a two-dimensional example.

Figure 2: Working of the Orthogonal Projection Layer. The example here is in a 2-dimensional space. $f_p$ represents the traditional feature vector; $f_c$ represents the common feature vector; $f_p^*$ is the projected feature vector; $\tilde{f}_p$ is our final orthogonal projection feature vector.

Mathematically, we first project the traditional feature vector $f_p$ onto the common feature vector $f_c$:

$f_p^* = \mathrm{Proj}(f_p, f_c)$, (9)

where Proj is a projection function:

$\mathrm{Proj}(x, y) = \frac{x \cdot y}{|y|} \frac{y}{|y|}$, (10)

where $x$ and $y$ are vectors. We then project in the orthogonal direction of the projected feature $f_p^*$ to get the purer classification feature vector:

$\tilde{f}_p = \mathrm{Proj}(f_p, (f_p - f_p^*))$. (11)

Clearly, it is easy to show that the feature vector $\tilde{f}_p$ obtained by Eq. 11 is equivalent to $f_p - f_p^*$. Using the traditional feature vector $f_p$ and the projected feature vector $f_p^*$, we can build a plane (in three dimensions). The intersection of this plane and the plane orthogonal to the projected feature vector $f_p^*$ is our pure feature vector. In other words, the projection in Eq. 9 is a constraint on the common feature vector. That is to say, the modulus of the common feature vector is limited by projecting the traditional feature vector of the input $x_i$ onto the common feature vector, so the semantic information of the new common feature vector (i.e., the projected feature $f_p^*$) contains only the common semantic information in $x_i$. This makes the final purified feature vector $\tilde{f}_p$ come from the traditional feature vector $f_p$ rather than from an arbitrary vector in some plane orthogonal to the common feature vector $f_c$.
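A minimal PyTorch sketch of the two layers just described: the gradient reverse layer of Eqs. 5-6 and the orthogonal projection of Eqs. 9-11. This is generic illustrative code, not the authors' released implementation; the epsilon term and the batched tensor shapes are our own assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass (Eq. 5); scales gradients by -lambda backwards (Eq. 6)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the incoming gradient; lambd itself gets no gradient.
        return -ctx.lambd * grad_output, None

def grl(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

def orthogonal_projection(f_p, f_c, eps=1e-8):
    """Eqs. 9-11: remove the component of f_p that lies along the common features f_c."""
    # f_p, f_c: (batch, dim)
    scale = (f_p * f_c).sum(dim=1, keepdim=True) / (f_c.norm(dim=1, keepdim=True) ** 2 + eps)
    f_p_star = scale * f_c          # projection of f_p onto f_c (Eq. 9)
    return f_p - f_p_star           # equivalent to Eq. 11
```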
Finally, we use the purified feature vector $\tilde{f}_p$ to do the classification:

$Y_{OPL} = \mathrm{softmax}(\tilde{f}_p \cdot W_p + b_p)$, (12)
$Loss_p = \mathrm{CrossEntropy}(Y_{truth}, Y_{OPL})$. (13)

Note that $Loss_p$ and $Loss_c$ are trained simultaneously, and they use different optimizers. $Loss_p$ uses the Adam optimizer. Since Ganin and Lempitsky (2014) used momentum SGD as the domain classifier's optimizer, our C-net loss function $Loss_c$ also uses the momentum SGD optimizer.2 Gradients are also passed back through the feature $f_c$ when optimizing $Loss_p$. Although the two losses are opposite to each other in terms of the optimization targets of the feature extractor $F_c$, the effect of $Loss_p$ on $F_c$ is in the orthogonal direction of $f_c$. A balance will be found that makes the extracted feature $f_c$ closer to the real common features. The complete training algorithm of the proposed FP-Net is given in Algorithm 1, which is self-explanatory.

2 We have conducted experiments using the Adam optimizer for both C-net and P-net. The results are about the same as using two different optimizers.

Algorithm 1 Feature Purification Network
1: Input: Dataset $D = \{(x_i, y_i)\}_{i=1}^{N}$; $x_i$'s embedding matrix $X_i \in \mathbb{R}^{L \times k}$; randomly initialized FP-Net parameters $\theta$.
2: for each iteration b = 1, 2, ..., M do
3:   Sample one batch $X_b$ from D
4:   C-net part:
5:   Generate common features (CFs) (Eq. 4)
6:   CFs go through GRL (Eq. 5)
7:   Perform classification (Eq. 7)
8:   P-net part:
9:   Generate traditional features (TFs) (Eq. 3)
10:  TFs projection (Eq. 9)
11:  Get the purified features (Eq. 11)
12:  Perform classification (Eq. 12)
13:  Update parameters:
14:  C-net and P-net's parameters are updated together (Eq. 8 & Eq. 13)
15: end for

4 Experiments

We now evaluate the proposed FP-Net3 using four text classification datasets and compare it with baselines without the purification capability. Our goal is to verify whether the proposed feature purification is general and effective for different deep learning classification models (or, more precisely, feature extractors) on diverse datasets.

3 https://github.com/Qqinmaster/FP-Net/

4.1 Experimental Datasets

We carried out experiments on four diverse benchmark datasets:
MR: This is a movie review dataset for sentiment classification. It has two classes: positive and negative (Pang and Lee, 2005).4
SST2: This is the Stanford Sentiment Treebank dataset.5 Each sample is marked as negative or positive.
TREC: This is a question classification dataset, where the task is to classify a question into one of six question types (Li and Roth, 2002).6
SNLI: This is a popular text entailment dataset. It contains 570k human annotated sentence pairs, in which the premises are drawn from the captions of the Flickr 30 corpus and the hypotheses are manually annotated (Bowman et al., 2015). For this SNLI dataset, we created the following settings to suit our needs: (1) we concatenated the two sentences (in a pair) as a single sample; (2) when using Bert as a feature extractor, we reduced the number of training set samples to 25,000 to speed up the training process. For other feature extractors (see below), the complete data is used.

4 http://www.cs.cornell.edu/people/pabo/movie-review-data/
5 http://nlp.stanford.edu/sentiment/
6 http://cogcomp.cs.illinois.edu/Data/QA/QC/

The dataset statistics are given in Table 1.

| Data | c | l | Train | Test | |V| |
| MR | 2 | 45 | 8,529 | 1,066 | 17,884 |
| SNLI | 3 | 40 | 54,936 | 9,824 | 33,944 |
| SST2 | 2 | 35 | 6,920 | 1,821 | 16,789 |
| TREC | 6 | 15 | 5,000 | 952 | 8,834 |

Table 1: Dataset statistics. c: number of classes. l: average length of sentences, after padding and cutting. Train, Test: number of training and testing examples. |V|: vocabulary size.
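Complementing Algorithm 1, here is a hedged sketch of one training iteration with the two losses (Eqs. 8 and 13) and the two optimizers described above. It reuses the CNNExtractor, grl and orthogonal_projection helpers sketched earlier; the classifier heads, the hyper-parameter values and the single combined backward pass are simplifying assumptions rather than the authors' exact training code.

```python
import torch
import torch.nn as nn

# Reuses CNNExtractor, grl and orthogonal_projection from the sketches above.
num_classes, feat_dim = 2, 100
F_p, F_c = CNNExtractor(), CNNExtractor()
C_p, C_c = nn.Linear(feat_dim, num_classes), nn.Linear(feat_dim, num_classes)
ce = nn.CrossEntropyLoss()

opt_p = torch.optim.Adam(list(F_p.parameters()) + list(C_p.parameters()), lr=1e-3)
opt_c = torch.optim.SGD(list(F_c.parameters()) + list(C_c.parameters()), lr=0.01, momentum=0.9)

def train_step(X, y, lambd=0.1):
    f_p, f_c = F_p(X), F_c(X)                              # Eqs. 3-4
    loss_c = ce(C_c(grl(f_c, lambd)), y)                   # Eqs. 7-8 (gradient reversed for F_c)
    loss_p = ce(C_p(orthogonal_projection(f_p, f_c)), y)   # Eqs. 11-13
    opt_p.zero_grad(); opt_c.zero_grad()
    (loss_p + loss_c).backward()   # both losses are optimized simultaneously
    opt_p.step(); opt_c.step()
    return loss_p.item(), loss_c.item()
```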
4.2 Baselines

Since our goal is to perform feature purification so that the purified features are more conducive to classification, to verify the validity of the proposed FP-Net model we compare the classification results with and without purification using the following popular feature extractors:
LSTM: The long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997), which addresses the vanishing gradient problem of the traditional RNN.
CNN: We use the convolutional neural network in (Kim, 2014) as the feature extractor to generate representations.
Transformer: We use the encoder part of the model proposed by Vaswani et al. (2017) as the feature extractor, followed by a classifier.
Bert: We fine-tune the pre-trained Bert base model (Devlin et al., 2018). Bert base has 12 layers, a hidden size of 768, 12 attention heads and 110M parameters. In particular, we use Bert-base Uncased, where Uncased means that the text has been lower-cased before WordPiece tokenization.
Note that the existing feature learning or feature enhancement approaches discussed in Section 2 are not compared, as they are entirely different from our approach. They mainly rely on external data or information to improve representation learning. Our method does not use any external data or information. However, we do include Bert as a baseline, as it is perhaps one of the most successful feature learning methods using external data. Our method can improve on top of Bert.

| Model | MR | SNLI | SST2 | TREC |
| LSTM | 77.46 (±0.41) | 76.98 (±0.07) | 80.41 (±0.20) | 87.19 (±0.58) |
| FP+LSTM | 78.13 (±0.18) | 77.92 (±0.10) | 81.60 (±0.17) | 88.83 (±0.40) |
| CNN | 76.18 (±0.45) | 72.92 (±0.19) | 80.47 (±0.59) | 90.86 (±0.51) |
| FP+CNN | 78.74 (±0.36) | 74.38 (±0.14) | 82.02 (±0.11) | 92.78 (±0.26) |
| Trans | 75.18 (±0.57) | 66.71 (±0.58) | 76.93 (±0.39) | 87.33 (±0.23) |
| FP+Trans | 76.83 (±0.66) | 73.34 (±0.43) | 78.42 (±0.49) | 89.51 (±0.79) |
| Bert | 87.45 (±0.51) | 80.78 (±0.42) | 90.38 (±0.10) | 96.67 (±0.22) |
| FP+Bert | 90.56 (±0.35) | 81.47 (±0.26) | 92.24 (±0.29) | 98.33 (±0.24) |

Table 2: Results of our FP-Net against baseline methods. In each block, FP+X is a model obtained by our FP-Net using X as the feature extractor. Accuracy (%) is the evaluation metric. Each result in the table is the average accuracy of five experiments with the standard deviation in parentheses.

4.3 Implementation Details

First, all the word embeddings in our experiments are randomly initialized as 200-dimensional vectors and then modified during training (except for Bert). For each type of feature extractor, we have the following configuration:
1) For the RNN-based models, we use a two-layer LSTM for feature extraction and the hidden state of each layer is set to 256.
2) For the CNN-based models, in order to obtain more fine-grained features, we use filter sizes of [2, 3, 4, 5, 6] with 100 feature maps each.
3) For the Transformer-based models, we use the Transformer's encoder as the feature extractor, specifically with a single head and 3 blocks.
4) For the Bert-based models, we fine-tuned the pre-trained Bert-base parameters.
These settings are exactly the same in the baselines as in our FP-Net. In the training of the C-net module, we use stochastic gradient descent with momentum 0.9 and the following annealing learning rate (Ganin and Lempitsky, 2014):

$l_p = \frac{l_0}{(1 + \alpha \cdot p)^{\beta}}$,

where $p$ is the training progress linearly changing from 0 to 1, $l_0 = 0.01$, $\alpha = 10$ and $\beta = 0.75$. In GRL, the hyper-parameter $\lambda$ was swept over [0.05, 0.1, 0.2, 0.4, 0.8, 1.0].

4.4 Experiment Results

In our experiments, we adopt the classification accuracy as the evaluation metric.
We summarize the experimental results in Table 2, where FP+X denotes the model trained by the proposed FP-Net using X as the feature extractor. Each pair of lines compares the experimental results of the traditional model and our proposed model on the four datasets. From Table 2, we can make the following observations.
1. Our FP-Net model consistently improves the results of the baseline feature extractors (i.e., LSTM, CNN, Transformer and Bert) using the proposed feature projection. This verifies the effectiveness of the proposed feature purification method of projecting the traditional feature vectors onto the orthogonal direction of the common features.
2. Compared with the traditional CNN, the FP+CNN model increases the accuracy by 2.56% on the MR dataset and 1.46% on the SNLI dataset. The improvement of FP+LSTM is smaller: 0.67% and 0.94% on the MR and SNLI datasets, respectively. This shows that the way CNN extracts input features (concatenating the features obtained with different sliding window sizes for extracting local features) is quite effective in extracting more complete semantic information, which also leads to more irrelevant features being used. That is why the projection on the CNN features brings larger improvements compared to the RNN-based model.
3. By comparing the experimental results of the attention-based models (i.e., Transformer and Bert), we can see that our FP-Net can improve the feature representation capabilities of these feature extractors. For example, in the Bert-based experiment, our FP+Bert increases the accuracy by 3.11% on MR and 1.66% on TREC. That is to say, our orthogonal projection method can give attention-based representations higher discriminative power for classification. Outperforming Bert is particularly significant because Bert is perhaps one of the best feature extractors, if not the best.

| Model | MR | SNLI | SST2 | TREC |
| FP+CNN | 78.74 (±0.36) | 74.38 (±0.14) | 82.02 (±0.11) | 92.78 (±0.26) |
| FP+CNN-G | 77.71 (±0.44) | 72.85 (±0.62) | 81.09 (±0.17) | 91.89 (±0.10) |
| FP+CNN-O | 76.64 (±0.39) | 73.11 (±0.22) | 81.25 (±0.11) | 90.76 (±0.37) |
| FP+CNN-G-O (plus) | 76.38 (±0.45) | 73.08 (±0.19) | 80.67 (±0.52) | 90.89 (±0.41) |
| FP+CNN-G-O (concat) | 76.18 (±0.51) | 72.91 (±0.26) | 81.02 (±0.18) | 91.16 (±0.41) |

Table 3: Ablation experiments. The first block contains the results of FP+CNN with GRL (-G) or OPL (-O) removed and the results with both GRL and OPL (-G-O) removed and the features of the two modules (or sub-networks) summed. The second block contains the results with both GRL (-G) and OPL (-O) removed and the features of the two modules concatenated.

| Model | MR | SNLI | SST2 | TREC |
| CNN | 76.18 (±0.45) | 72.92 (±0.19) | 80.47 (±0.59) | 90.86 (±0.51) |
| CNN Dp | 76.72 (±0.50) | 73.49 (±0.14) | 80.67 (±0.40) | 90.91 (±0.41) |
| FP+CNN | 78.74 (±0.36) | 74.38 (±0.14) | 82.02 (±0.11) | 92.78 (±0.26) |
| Trans | 75.18 (±0.57) | 66.71 (±0.58) | 76.93 (±0.39) | 87.33 (±0.23) |
| Trans Dp | 75.75 (±0.31) | 68.36 (±0.25) | 77.10 (±0.48) | 88.16 (±0.34) |
| FP+Trans | 76.83 (±0.66) | 73.34 (±0.43) | 78.42 (±0.49) | 89.51 (±0.79) |

Table 4: Experimental results with doubled parameter size on the four datasets. For example, Trans Dp shows the increase of the number of blocks of the Transformer from 3 to 6.

4.5 Ablation Experiments and Analysis

In order to analyze the effectiveness of each component of FP-Net, we performed the following two ablation experiments. First, in Table 3, we report the results of the ablation test of each component of FP-Net, where FP+CNN-G (or -O, or -G-O) represents FP-Net with the GRL (or OPL, or both GRL and OPL) removed while using CNN as the feature extractor.
The parameters of all the experiments compared in the first block are exactly the same. In order to keep the parameter size consistent, we performed elementwise summation of the features of FP-Net’s two sub-networks fp and fc in the FP+CNN-G-O experiment. By comparing the experimental results of the first block, we observe the following: 1) Whether GRL or OPL is removed or both GRL and OPL are removed at the same time, the accuracy will drop significantly compared with the complete FP-Net. For example, for the MR dataset, when we remove the GRL and keep the OPL (i.e., FP+CNN-G), the accuracy decreases by 1.03%; When we remove both GRL and OPL, and then execute fp + fc (i.e., FP+CNN-G-O(plus)), the accuracy decreases by 2.36%, etc. These results show that each component in FP-Net is important, and the absence of any one component will lead to decline in accuracy. 2) In the experiment of FP+CNN-O, we remove OPL and keep GRL, which means that we use fp − fc instead of the orthogonal projection (i.e., fp − fp∗). As stated in P-Net module of Section 3, such a replacement will give up a constraint that gets the common feature fp∗of the current input xi from the base common feature fc. The results showed that the accuracy decreases by 2.10% on MR and decreases by 1.27% on SNLI, which mean that the projection operation (i.e., Eq. 9) is necessary. 3) Clearly, adding fp and fc of FP-Net is not the only way to connect the two sub-networks of FP+CNN-G-O. We can do fp ⊕fc, where ⊕is the concatenation operator. Although this method has more parameters in the P-net classifier, we can still observe that the accuracy of FP+CNN-G-O is not as good as the accuracy of FP+CNN. For example, FP+CNN-G-O reduced the accuracy by 2.36% on MR and 1.30% on SNLI, which can also prove the effectiveness of GRL and OPL in our FP-Net. Second, we show that the improvement in accuracy by FP-Net is not due to the increase in the number of parameters. We doubled the parameters of traditional CNN and Transformer and compared with our FP+CNN, FP+Trans. The results of this part of the experiments are shown in Table 4, where the index ’Dp’ means the Doubled parameter size. For example, Tans Dp increases the number 8169 of blocks of Transformer in the baseline from 3 to 6. All experimental results show that increasing the number of parameters of the baseline models will improve classification accuracy slightly, but there is still a large gap with the proposed model. 5 Conclusion In this paper, we proposed a novel Feature Purification Network (FP-Net) to improve the representation for text classification. The method is based on feature projection. The proposed model uses two sub-networks, one for identifying common features that are not discriminative for classification, and the other for feature projection that projects the traditional features to the orthogonal direction of the common features. To the best of our knowledge, this is the first method that uses feature projection to improve text classification. Through a large number of comparative experiments, we showed the effectiveness of the proposed feature projection method. Our current method is designed only for traditional text classification methods such as LSTM, CNN, and Transformer. In our future work, we will consider extending it to graph-based methods such as GCN for graph data, and to generation-based methods such as GAN for adversarial learning. Acknowledgments The project was funded by Peking University. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yonatan Belinkov, Adam Poliak, Stuart M Shieber, Benjamin Van Durme, and Alexander M Rush. 2019. On adversarial removal of hypothesis-only bias in natural language inference. arXiv preprint arXiv:1907.04389. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493–2537. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2016. Very deep convolutional networks for text classification. arXiv preprint arXiv:1606.01781. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Kawin Ethayarajh. 2018. Unsupervised random walk sentence embeddings: A strong but simple baseline. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 91–100. Song Feng, Ritwik Banerjee, and Yejin Choi. 2012. Syntactic stylometry for deception detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short PapersVolume 2, pages 171–175. Association for Computational Linguistics. Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1681–1691. 8170 Samuel R. Bowman Jason Phang, Thibault F´evry. 2019. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv:1811.01088 [cs.CL]. Peng Jin, Yue Zhang, Xingyuan Chen, and Yunqing Xia. 2016. Bag-of-embeddings for text classification. In IJCAI, volume 16, pages 2824–2830. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. 
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Christopher D. Manning Quoc V. Le Kevin Clark, Minh-Thang Luong. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2002 conference on Empirical methods in natural language processing. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Twenty-ninth AAAI conference on artificial intelligence. Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1–7. Association for Computational Linguistics. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1):1–167. Yukun Ma, Haiyun Peng, and Erik Cambria. 2018. Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive lstm. In Thirty-Second AAAI Conference on Artificial Intelligence. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2015. Natural language inference by tree-based convolution and heuristic matching. arXiv preprint arXiv:1512.08422. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 115–124. Association for Computational Linguistics. Dominik Scherer, Andreas M¨uller, and Sven Behnke. 2010. Evaluation of pooling operations in convolutional architectures for object recognition. In International conference on artificial neural networks, pages 92–101. Springer. Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. arXiv preprint arXiv:1804.00079. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Y. Tamaazousti, H. Le Borgne, C. Hudelot, M. E. A. Seddik, and M. Tamaazousti. 2018. Learning more universal representations for transfer-learning. arXiv:1712.09708v5 [cs.CV]. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1422–1432. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information processing systems, pages 5998–6008. Baoxin Wang. 2018. Disconnected recurrent neural networks for text categorization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2311–2320. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Yiren Wang and Fei Tian. 2016. Recurrent residual learning for sequence classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 938–943. Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367. 8171 Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480–1489. Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch¨utze. 2017. Comparative study of cnn and rnn for natural language processing. arXiv preprint arXiv:1702.01923. Kai Zhang, Hefu Zhang, Qi Liu, Hongke Zhao, Hengshu Zhu, and Enhong Chen. 2019. Interactive attention transfer network for cross-domain sentiment classification.
A Negative Case Analysis of Visual Grounding Methods for VQA
Robik Shrestha1, Kushal Kafle1,2, Christopher Kanan1,3,4
Rochester Institute of Technology1 Adobe Research2 Paige3 Cornell Tech4
{rss9369, kk6055, kanan}@rit.edu

Abstract

Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons. To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains. However, we show that the performance improvements are not a result of improved visual grounding, but a regularization effect which prevents over-fitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2.1

1 https://github.com/erobic/negative_analysis_of_grounding

1 Introduction

Visual Question Answering (VQA) (Antol et al., 2015), the task of answering questions about visual content, was proposed to facilitate the development of models with human-like visual and linguistic understanding. However, existing VQA models often exploit superficial statistical biases to produce responses, instead of producing the right answers for the right reasons (Kafle et al., 2019). The VQA-CP dataset (Agrawal et al., 2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have endeavored to enforce proper visual grounding, where the goal is to make models produce answers by looking at relevant visual regions (Gan et al., 2017; Selvaraju et al., 2019; Wu and Mooney, 2019), instead of exploiting linguistic priors.

Figure 1: We find that existing visual sensitivity enhancement methods improve performance on VQA-CPv2 through regularization as opposed to proper visual grounding.

These approaches rely on additional annotations/cues such as human-based attention maps (Das et al., 2017), textual explanations (Huk Park et al., 2018) and object label predictions (Ren et al., 2015) to identify relevant regions, and train the model to base its predictions on those regions, showing large improvements (8-10% accuracy) on the VQA-CPv2 dataset. Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set.
To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that the differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant. Third, we show that these methods degrade performance when the priors remain intact and instead work on VQA-CPv2 by hurting its train accuracy. Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors to improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether the predictions are correct or incorrect. We find that this approach also achieves near state-of-the-art performance (48.9% on VQA-CPv2), providing further support for our claims. While we agree that visual grounding is a useful direction to pursue, our experiments show that the community requires better ways to test if systems are actually visually grounded. We make some recommendations in the discussion section.

2 Related Work

2.1 Biases in VQA

As expected of any real world dataset, VQA datasets also contain dataset biases (Goyal et al., 2017). The VQA-CP dataset (Agrawal et al., 2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nearly impossible for models that rely upon linguistic correlations to perform well on the test set (Agrawal et al., 2018; Shrestha et al., 2019).

2.2 Bias Mitigation for VQA

VQA algorithms without explicit bias mitigation mechanisms fail on VQA-CP, so recent works have focused on the following solutions:

2.2.1 Reducing Reliance on Questions

Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization (Grand and Belinkov, 2019; Ramakrishnan et al., 2018) or to re-scale the loss based on the difficulty of the question (Cadene et al., 2019). However, when these ideas are applied to the UpDn model (Anderson et al., 2018), which attempts to learn correct visual grounding, these approaches achieve 4-7% lower accuracy compared to the state-of-the-art methods.

2.2.2 Enhancing Visual Sensitivities

Both Human Importance Aware Network Tuning (HINT) (Selvaraju et al., 2019) and Self Critical Reasoning (SCR) (Wu and Mooney, 2019) train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. HINT proposes a ranking loss between human-based importance scores (Das et al., 2016) and the gradient-based sensitivities. In contrast, SCR does not require exact saliency ranks. Instead, it penalizes the model if correct answers are more sensitive towards non-important regions as compared to important regions, and if incorrect answers are more sensitive to important regions than correct answers.

3 Existing VQA Methods

Given a question $Q$ and an image $I$, e.g., represented by bottom-up region proposals $v$ (Anderson et al., 2018), a VQA model is tasked with predicting the answer $a$:

$P(a|Q, I) = f_{VQA}(v, Q)$. (1)
(1) 3.1 Baseline VQA Methods Without additional regularization, existing VQA models such as the baseline model used in this work: UpDn (Anderson et al., 2018), tend to rely on the linguistic priors: P(a|Q) to answer questions. Such models fail on VQA-CP, because the priors in the test set differ from the train set. 3.2 Visual Sensitivity Enhancement Methods To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions. Following (Wu and Mooney, 2019), we define the sensitivity of an answer a with respect to a visual region vi as: S(a, vi) := (∇viP(a|I, Q))T 1. (2) Existing methods propose the following training objectives to improve grounding using S: • HINT uses a ranking loss, which penalizes the model if the pair-wise rankings of the sensitivities of visual regions towards ground truth answers agt are different from the ranks computed from the human-based attention maps. 8174 • SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) S(agt) of a non-influential region is higher than an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers. Both methods improve baseline accuracy by 8-10%. Is this actually due to better visual grounding? 4 Why Did the Performance Improve? We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3) and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating the performance on VQACPv2’s train split (Sec. 4.5) and the behavior on a dataset without changing priors (Sec. 4.6). We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5. 4.1 Experimental Setup We compare the baseline UpDn model with HINT and SCR-variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pretrained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR respectively. Further training details are provided in the Appendix. 4.2 Training on Irrelevant Visual Cues In our first experiment we studied how irrelevant visual cues performed compared to relevant ones. We fine-tune the model with irrelevant cues defined as: Sirrelevant := (1 −Sh), where, Sh represents the human-based importance scores. As shown in the ‘Grounding using irrelevant cues’ section of Table 1, both HINT and SCR are within 0.3% of the results obtained from looking at relevant regions, which indicates the gains for HINT and SCR are not necessarily from looking at relevant regions. 4.3 Training on Random Visual Cues In our next experiment we studied how random visual cues performed with HINT and SCR. We assign random importance scores to the visual regions: Srand ∼uniform(0, 1). We test two variants of randomness: Fixed random regions, where Table 1: Results on VQA-CPv2 and VQAv2 datasets for the baseline UpDn, visual sensitivity enhancement methods (HINT and SCR) and our own regularization method, including the published (pub.) numbers. VQA-CPv2 VQAv2 Train Test Train Val Baseline - Without visual grounding UpDn 84.0 40.1 83.4 64.4 Grounding using human-based cues HINTpub. N/A 46.7 N/A 63.41 SCRpub. 
N/A 49.5 N/A 62.2 HINT 73.9 48.2 75.7 61.3 SCR 75.9 49.1 77.9 61.3 Grounding using irrelevant cues HINT 71.2 48.0 73.5 60.3 SCR 75.7 49.2 74.1 59.1 Grounding using fixed random cues HINT 72.0 48.1 73.0 59.5 SCR 70.0 49.1 78.0 61.4 Grounding using variable random cues HINT 71.9 48.1 72.9 59.4 SCR 69.6 49.2 78.1 61.5 Regularization by zeroing out answers Ours1% fixed 78.0 48.9 80.1 62.6 Ours1% var. 77.6 48.5 80.0 62.6 Ours100% 75.7 48.2 79.9 62.4 1 The published number is a result of fine-tuning HINT on the entire training set, but as described in Sec. 4.6, other published numbers and our experiments fine-tune only on the instances with cues. Srand are fixed once chosen, and Variable random regions, where Srand are regenerated every epoch. As shown in Table 1, both of these variants obtain similar results as the model trained with human-based importance scores. The performance improves even when the importance scores are changed every epoch, indicating that it is not even necessary to look at the same visual regions. 4.4 Significance of Statistical Differences To test if the changes in results were statistically significant, we performed Welch’s t-tests (Welch, 1938) on the predictions of the variants trained on relevant, irrelevant and random cues. We pick Welch’s t-test over the Student’s t-test, because the latter assumes equal variances for predictions from different variants. To perform the tests, we first randomly sample 5000 subsets of non-overlapping test instances. We then average the accuracy of each subset across 5 runs, obtaining 5000 values. Next, we run the t-tests for HINT and SCR separately on the subset accuracies. As shown in Table 2, the p-values across the variants of HINT and SCR are 8175 Table 2: p-values from the Welch’s t-tests and the percentage of overlap between the predictions (Ovp.) of different variants of HINT and SCR. Methods p Ovp.(%) HINT variants against Baseline Default vs. Baseline 0.0 83.6 Irrelevant vs. Baseline 0.0 82.4 Fixed Random vs. Baseline 0.0 82.0 Variable Random vs. Baseline 0.0 81.5 Among HINT variants Default vs Irrelevant 0.3 89.7 Default vs Fixed random 0.7 90.9 Default vs Variable random 0.6 91.9 Irrelevant vs Fixed random 0.5 95.6 Irrelevant vs Variable random 0.7 93.9 Fixed random vs Variable random 0.9 96.9 SCR variants against Baseline Default vs. Baseline 0.0 85.6 Irrelevant vs. Baseline 0.0 84.2 Fixed Random vs. Baseline 0.0 80.7 Variable Random vs. Baseline 0.0 80.6 Among SCR variants Default vs Irrelevant 0.6 92.0 Default vs Fixed random 0.8 89.3 Default vs Variable random 0.6 89.5 Irrelevant vs Fixed random 0.4 91.7 Irrelevant vs Variable random 1.0 91.6 Fixed random vs Variable random 0.4 96.7 greater than or equal to 0.3. Using a confidence level of 95% (α = 0.05), we fail to reject the null hypothesis that the mean difference between the paired values is 0, showing that the variants are not statistically significantly different from each other. We also compare the predictions of HINT/SCR against baseline, and find that p-values are all zeros, showing that the differences have statistical significance. Percentage of Overlaps: To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2’s test set. 
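Both checks in this section can be made concrete with a short sketch (array names are hypothetical, and the released analysis scripts may organize this differently): the subset-level Welch's t-test just described, and the prediction-overlap percentage that is defined formally in the next paragraph.

```python
import numpy as np
from scipy import stats

def welch_p_value(subset_acc_a, subset_acc_b):
    """Welch's t-test over per-subset accuracies (variances not assumed equal)."""
    # Each array holds 5000 values: the accuracy of one sampled test subset,
    # averaged over the 5 training runs of one model variant.
    _, p = stats.ttest_ind(subset_acc_a, subset_acc_b, equal_var=False)
    return p

def prediction_overlap(correct_a, correct_b):
    """Percentage of test instances where both variants are right or both are wrong."""
    correct_a, correct_b = np.asarray(correct_a), np.asarray(correct_b)
    n_same = int((correct_a == correct_b).sum())
    return 100.0 * n_same / len(correct_a)

# Toy usage with synthetic numbers standing in for real subset accuracies.
rng = np.random.default_rng(0)
p = welch_p_value(rng.normal(48.2, 0.5, 5000), rng.normal(48.1, 0.5, 5000))
overlap = prediction_overlap([True, True, False, False], [True, False, False, True])
print(p, overlap)  # overlap here is 50.0
```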
The percentage of overlap is defined as: % Overlap = nsame ntotal × 100%, where, nsame denotes the number of instances where either both variants were correct or both were incorrect and ntotal denotes the total number of test instances. As shown in Table 2, we compare %Overlap between different variants of HINT/SCR with baseline and against each other. Epochs Accuracy on VQAv2 60 61 62 63 64 65 0 1 2 3 4 5 6 7 8 HINT (full) SCR (full) HINT (subset with cues) SCR (subset with cues) Figure 2: Accuracies for HINT and SCR on VQAv2’s val set, when fine-tuned either on the full train set or on the subset containing visual cues. We find 89.7 −91.9% and 89.5 −92.0% overlaps for different variants of HINT and SCR respectively. These high overlaps suggest that the variants are not working in fundamentally different manners. 4.5 Drops in Training Accuracy We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause 6.0 −14.0% and 3.3−10.5% drops in the training accuracy on VQACPv2 and VQAv2, respectively. We hypothesize that degrading performance on the train set helps forget linguistic biases, which in turn helps accuracy on VQA-CPv2’s test set but hurts accuracy on VQAv2’s val set. 4.6 Drops in VQAv2 Accuracy As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2. 4.7 Assessment of Proper Grounding In order to quantitatively assess visual grounding, we propose a new metric called: Correctly Predicted but Improperly Grounded (CPIG): %CPIG = Ncorrect ans, improper grounding Ncorrect ans × 100%, which is the number instances for which the most sensitive visual region used to correctly predict the 8176 answer is not within top-3 most relevant ground truth regions, normalized by the total number of correct predictions. HINT and SCR trained on relevant regions obtained lower CPIG values that other variants (70.24% and 80.22% respectively), indicating they are better than other variants at finding relevant regions. However, these numbers are still high, and show that only 29.76% and 19.78% of the correct predictions for HINT and SCR were properly grounded. Further analysis is presented in the Appendix. 5 Embarrassingly Simple Regularizer The usage of visual cues and sensitivities in existing methods is superfluous because the results indicate that performance improves through degradation of training accuracy. We hypothesize that simple regularization that does not rely on cues or sensitivities can also achieve large performance gains for VQA-CP. To test this hypothesis, we devise a simple loss function which continuously degrades the training accuracy by training the network to always predict a score of zero for all possible answers i.e. produce a zero vector (0). The overall loss function can be written as: L := BCE(P(A), Agt) + λBCE(P(A), 0), where, BCE refers to the binary cross entropy loss and P(A) is a vector consisting of predicted scores for all possible answers. 
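A minimal PyTorch-style sketch of this loss is shown below; the tensor names are illustrative rather than taken from the released code, and whether a sigmoid is applied inside or outside the BCE is an implementation detail.

```python
import torch
import torch.nn.functional as F

def zero_out_loss(scores, target, lam=1.0):
    """BCE against the ground-truth answer vector plus BCE against an all-zero vector."""
    # scores: (batch, num_answers) unnormalized scores over the answer vocabulary, P(A).
    # target: (batch, num_answers) ground-truth answer vector A_gt (binary or soft).
    vqa_loss = F.binary_cross_entropy_with_logits(scores, target)
    # Regularizer: always push every answer score towards zero, so the model is
    # penalized whether its predictions are correct or incorrect.
    reg_loss = F.binary_cross_entropy_with_logits(scores, torch.zeros_like(scores))
    return vqa_loss + lam * reg_loss

# Toy usage with a hypothetical answer vocabulary of 3000 entries.
scores = torch.randn(4, 3000, requires_grad=True)
target = torch.zeros(4, 3000)
target[:, 0] = 1.0
zero_out_loss(scores, target).backward()
```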
The first term is the binary cross entropy loss between model predictions and ground truth answer vector (Agt), and the second term is our regularizer with a coefficient of λ = 1. Note that this regularizer continually penalizes the model during the course of the training, whether its predictions are correct or incorrect. As shown in Table 1, we present results when this loss is used on: a) Fixed subset covering 1% of the dataset, b) Varying subset covering 1% of the dataset, where a new random subset is sampled every epoch and c) 100% of the dataset. Confirming our hypothesis, all variants of our model achieve near state-of-the-art results, solidifying our claim that the performance gains for recent methods come from regularization effects. It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons. 6 Discussion on Proper Grounding While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented in this paper. We recommend that both train and test accuracy be reported, because a model truly capable of visual grounding would not cause drastic drops in training accuracy to do well on the test sets. Finally, we advocate for creating a dataset with ground truth grounding available for 100% of the instances using synthetically generated datasets (Kafle et al., 2017; Kafle and Kanan, 2017; Kafle et al., 2018; Acharya et al., 2019b; Hudson and Manning, 2019; Johnson et al., 2017), enabling the community to evaluate if their methods are able to focus on relevant information. Another alternative is to use tasks that explicitly test grounding, e.g., in visual query detection an agent must output boxes around any regions of a scene that match the natural language query (Acharya et al., 2019a). 7 Conclusion Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-theart accuracy. Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation. Acknowledgement. This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper. 8177 References Manoj Acharya, Karan Jariwala, and Christopher Kanan. 2019a. VQD: Visual query detection in natural scenes. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1955–1961, Minneapolis, Minnesota. Association for Computational Linguistics. Manoj Acharya, Kushal Kafle, and Christopher Kanan. 2019b. Tallyqa: Answering complex counting questions. In Association for the Advancement of Artificial Intelligence (AAAI). Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Dont just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4971–4980. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In The IEEE International Conference on Computer Vision (ICCV). Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. 2019. Rubi: Reducing unimodal biases for visual question answering. In Advances in Neural Information Processing Systems (NeurIPS), pages 839–850. Abhishek Das, Harsh Agrawal, C Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2016. Human attention in visual question answering: Do humans and deep networks look at the same regions? In Conference on Empirical Methods on Natural Language Processing (EMNLP). Abhishek Das, Harsh Agrawal, Larry Zitnick, Devi Parikh, and Dhruv Batra. 2017. Human attention in visual question answering: Do humans and deep networks look at the same regions? Computer Vision and Image Understanding (CVIU), 163:90–100. Chuang Gan, Yandong Li, Haoxiang Li, Chen Sun, and Boqing Gong. 2017. Vqs: Linking segmentations to questions and answers for supervised attention in vqa and question-focused semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1811–1820. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, page 3. Gabriel Grand and Yonatan Belinkov. 2019. Adversarial regularization for visual question answering: Strengths, shortcomings, and side effects. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, pages 1–13, Minneapolis, Minnesota. Association for Computational Linguistics (ACL). Drew A Hudson and Christopher D Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6700–6709. Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multimodal explanations: Justifying decisions and pointing to the evidence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8779–8788. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988–1997. IEEE. Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1983–1991. IEEE. Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. DVQA: Understanding data visualizations via question answering. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5648–5656. Kushal Kafle, Robik Shrestha, and Christopher Kanan. 2019. Challenges and prospects in vision and language research. Frontiers in Artificial Intelligence. Kushal Kafle, Mohammed Yousefhussien, and Christopher Kanan. 2017. Data augmentation for visual question answering. In Proceedings of the 10th International Conference on Natural Language Generation (INLG), pages 198–202. Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. In Advances in Neural Information Processing Systems (NeurIPS), pages 1541–1551. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS). 8178 Ramprasaath R Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, and Devi Parikh. 2019. Taking a hint: Leveraging explanations to make vision and language models more grounded. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2591–2600. Robik Shrestha, Kushal Kafle, and Christopher Kanan. 2019. Answer them all! toward universal visual question answering models. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Bernard L Welch. 1938. The significance of the difference between two means when the population variances are unequal. Biometrika, 29(3/4):350–362. Jialin Wu and Raymond Mooney. 2019. Self-critical reasoning for robust visual question answering. In Advances in Neural Information Processing Systems (NeurIPS), pages 8601–8611. A Appendix A.1 Training Details We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pretrained UpDn, which was trained on either VQACPv2 or VQAv2 for 40 epochs with a learning rate of 10−3. When fine-tuning with HINT, SCR or our method, we also use the main binary cross entropy VQA loss, whose weight is set to 1. The batch size is set to 384 for all of the experiments. HINT Following (Selvaraju et al., 2019), we train HINT on the subset with human-based attention maps (Das et al., 2017), which are available for 9% of the VQA-CPv2 train and test sets. The same subset is used for VQAv2 too. The learning rate is set to 2 × 10−5 and the weight for the HINT loss is set to 2. SCR Since (Wu and Mooney, 2019) reported that humanbased textual explanations (Huk Park et al., 2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. 
For the first phase, which strengthens the influential objects, we use a learning rate of 5 × 10−5, loss weight of 3 Table A3: Results on VQA-CPv2 and VQAv2 datasets for the baseline UpDn, visual sensitivity enhancement methods (HINT and SCR) and our own regularization method, including the published (pub.) numbers. VQA-CPv2 VQAv2 Baseline - Without visual grounding UpDn 0.0110 0.0155 Grounding using human-based cues HINT 0.1020 0.1350 SCR 0.0340 -0.0670 Grounding using irrelevant cues HINT -0.0048 -0.0200 SCR 0.0580 -0.0100 Grounding using fixed random cues HINT 0.0510 0.0620 SCR -0.0250 -0.0350 Grounding using variable random cues HINT 0.0570 0.0623 SCR -0.0380 0.0246 Regularization by zeroing out answers Ours1% fixed -0.1050 -0.1200 Ours100% -0.0750 -0.0100 and train the model to a maximum of 12 epochs. Then, following (Wu and Mooney, 2019), for the second phase, we use the best performing model from the first phase to train the second phase, which criticizes incorrect dominant answers. For the second phase, we use a learning rate of 10−4 and weight of 1000, which is applied alongside the loss term used in the first phase. The specified hyperparameters worked better for us than the values provided in the original paper. Our Zero-Out Regularizer Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to: 2×10−6 r , where r is the ratio of the training instances used for fine-tuning. The weight for the loss is set to 2. We report the performance obtained at the 8th epoch. A.2 Results Correlation with Ground Truth Visual Cues Following (Selvaraju et al., 2019), we report Spearman’s rank correlation between network’s sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT 8179 trained on human attention maps has the highest correlation coefficients for both datasets. However, compared to baseline, HINT variants trained on random visual cues also show improved correlations. For SCR, we obtain surprising results, with the model trained on irrelevant cues obtaining higher correlation than that trained on relevant visual cues. As expected, applying our regularizer does not improve rank correlation. Since HINT trained on relevant cues obtains the highest correlation values, it does indicate improvement in visual grounding. However, as we have seen, the improvements in performance cannot necessarily be attributed to better overlap with ground truth localizations. A Note on Qualitative Examples Presentation of qualitative examples in visual grounding models for VQA suffers from confirmation bias i.e., while it is possible to find qualitative samples that look at relevant regions to answer questions properly, it is also possible to find samples that produce correct answers without looking at relevant regions. We present examples for such cases in Fig. A3. We next present a quantitative assessment of visual grounding, which does not suffer from the confirmation bias. Quantitative Assessment of Grounding In order to truly assess if existing methods are using relevant regions to produce correct answers, we use our proposed metric: Correctly Predicted but Improperly Grounded (CPIG). 
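A short sketch of how %CPIG (Sec. 4.7) can be computed is given below, assuming that for every test instance we already have the index of the model's most sensitive region, the set of top-3 most relevant ground-truth regions, and whether the answer was correct (all variable names are hypothetical).

```python
def percent_cpig(most_sensitive_region, top3_gt_regions, is_correct):
    """Correctly Predicted but Improperly Grounded, in percent (lower is better)."""
    n_correct = 0
    n_improper = 0
    for region, gt_regions, correct in zip(most_sensitive_region,
                                           top3_gt_regions, is_correct):
        if not correct:
            continue                      # only correctly answered instances count
        n_correct += 1
        if region not in gt_regions:      # correct answer, but grounded elsewhere
            n_improper += 1
    return 100.0 * n_improper / max(n_correct, 1)

# Toy usage: 3 correct answers, 2 of them improperly grounded -> 66.7%.
print(percent_cpig([5, 1, 7, 2],
                   [{0, 1, 2}, {1, 3, 4}, {0, 2, 5}, {2, 6, 7}],
                   [True, True, True, False]))
```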
If the CPIG values are large, then it implies that large portion of correctly predicted samples were not properly grounded. Fig. A4 shows % CPIG for different variants of HINT trained on human attention-based cues, whereas Fig. A5 shows the metric for different variants of SCR trained on textual explanationbased cues. We observe that HINT and SCR trained on relevant regions have the lowest % CPIG values (70.24% and 80.22% respectively), indicating that they are better than other variants in finding relevant regions. However, only a small percentage of correctly predicted samples were properly grounded (29.76% and 19.78% for HINT and SCR respectively), even when trained on relevant cues. Breakdown by Answer Types Table A4 shows VQA accuracy for each answer type on VQACPv2’s test set. HINT/SCR and our regularizer show large gains in ‘Yes/No’ questions. Table A4: VQA accuracy per answer-type on VQACPv2 test set. Overall Yes/No Num Other Baseline - Without visual grounding UpDn 40.1 41.1 12.0 47.2 Grounding using human-based cues HINT 48.2 65.2 13.8 47.5 SCR 49.1 70.3 11.5 48.0 Grounding using irrelevant cues HINT 48.0 67.2 13.5 47.1 SCR 49.2 73.4 11.5 46.4 Grounding using fixed random cues HINT 48.1 66.9 13.8 46.9 SCR 49.1 74.7 12.2 45.1 Grounding using variable random cues HINT 48.1 67.1 13.9 46.9 SCR 49.2 74.7 12.2 45.1 Regularization by zeroing out answers Ours1% fixed 48.9 69.8 11.3 47.8 Ours100% 48.2 66.7 11.7 47.9 We hypothesize that the methods help forget linguistic priors, which improves test accuracy of such questions. In the train set of VQACPv2, the answer ‘no’ is more frequent than the answer ‘yes’, tempting the baseline model to answer ‘yes/no’ questions with ‘no’. However, in the test set, answer ‘yes’ is more frequent. Regularization effects caused by HINT/SCR and our method cause the models to weaken this prior i.e., reduce the tendency to just predict ‘no’, which would increase accuracy at test because ‘yes’ is more frequent in the test set. Next, all of the methods perform poorly on ‘Number (Num)’ answer type, showing that methods find it difficult to answer questions that are most reliant on correct visual grounding such as: localizing and counting objects. Finally, we do not observe large improvements in ‘Other’ question type, most likely due to the large number of answers present under this answer type. Accuracy versus Size of Train Set We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to 1 −100% of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance occurring when our loss is applied to 1% of the training instances. These results support our claims that it is possible to improve performance without actually performing visual grounding. 8180 Q: Is this food sweet? A: yes Remarks: The most sensitive regions for irrelevant/random variants do not contain food, yet their answers are correct. Ground Truth Localization HINT trained on relevant cues HINT trained on irrelevant cues HINT trained on random cues Q: Has the boy worn out his jeans? A: yes Remarks: All of the variants look at both relevant and irrelevant regions to produce correct answer. Q: Is the sport being played tennis or volleyball? A: tennis Remarks: None of the variants look at relevant regions, and yet produce correct answer. Q: What is the swimmer doing? 
A: surfing. Remarks: Models trained on irrelevant/random cues do not look at the swimmer at all, yet produce correct answer.

Figure A3: Visualizations of most sensitive visual regions used by different variants of HINT to make predictions. We pick samples where all variants produce correct response to the question. The first column shows ground truth regions and columns 2-4 show visualizations from HINT trained on relevant, irrelevant and fixed random regions respectively.

Figure A4: % CPIG for baseline and different variants of HINT and our method, computed using ground truth relevant regions taken from human attention maps (lower is better). [Bar values: Baseline (HAT) 71.56%, HINT (relevant) 70.24%, HINT (irrelevant) 79.51%, HINT (random) 75.22%, HINT (var random) 75.90%, Ours (1% fixed) 82.48%, Ours (100%) 80.75%.]

Figure A5: % CPIG for baseline and different variants of SCR and our method, computed using ground truth relevant regions taken from textual explanations (txt). [Bar values: Baseline (txt) 81.28%, SCR (relevant) 80.22%, SCR (irrelevant) 84.30%, SCR (random) 86.67%, SCR (var random) 86.81%, Ours (1% fixed) 87.82%, Ours (100%) 87.19%.]

Figure A6: The regularization effect of our loss is invariant with respect to the dataset size. [Accuracy vs. percentage of the training subset used for fine-tuning (1%, 2%, 3%, 4%, 5%, 20%, 40%, 60%, 80%, 100%): Train 78.0, 76.5, 77.4, 77.1, 76.8, 76.5, 76.5, 76.7, 76.8, 76.9; Test 48.9, 48.7, 48.5, 48.5, 48.4, 48.3, 48.3, 48.2, 48.2, 48.1.]
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8182–8197 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8182 History for Visual Dialog: Do we really need it? Shubham Agarwal1∗, Trung Bui2, Joon-Young Lee2, Ioannis Konstas1 and Verena Rieser1 1The Interaction Lab, Heriot-Watt University, Edinburgh, UK 2Adobe Research, San Jose, CA, US {sa201, i.konstas, v.t.rieser}@hw.ac.uk {bui, jolee}@adobe.com Abstract Visual Dialog involves “understanding” the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response. In this paper, we show that co-attention models which explicitly encode dialog history outperform models that don’t, achieving state-ofthe-art performance (72 % NDCG on val set). However, we also expose shortcomings of the crowd-sourcing dataset collection procedure by showing that history is indeed only required for a small amount of the data and that the current evaluation metric encourages generic replies. To that end, we propose a challenging subset (VisDialConv) of the VisDial val set and provide a benchmark of 63% NDCG. 1 Introduction Recently, there has been an increased interest in visual dialog, i.e. dialog-based interaction grounded in visual information (Chattopadhyay et al., 2017; De Vries et al., 2017; Seo et al., 2017; Guo et al., 2018; Shekhar et al., 2018; Kottur et al., 2019; Haber et al., 2019). One of the most popular test beds is the Visual Dialog Challenge (VisDial) (Das et al., 2017), which involves an agent answering questions related to an image, by selecting the answer from a list of possible candidate options. According to the authors, nearly all interactions (98%) contain dialog phenomena, such as co-reference, that can only be resolved using dialog history, which makes this a distinct task from previous Visual Question Answering (VQA) challenges, e.g. (Antol et al., 2015). For example, in order to answer the question “About how many?” in Figure 1, we have to infer from what was previously said, that the conversation is about the skiers. ∗This work was carried out during the internship at Adobe Research. A group of skiers racing up a mountain Caption About how many? Current Question Q1 Is 1 winning? A1 no. Q2 Do they have numbers? A2 yes. Conversational History / Context not really maybe 5 or 6, hard to see all of him 0 of those either few of them looks about 7 7 (GT answer) ...... Answer options 0.0 0.6 0.0 0.4 0.8 0.4 .... Relevance Figure 1: Visual Dialog task according to (Das et al., 2017) as a ranking problem, where for the current question (blue), the agent ranks list of 100 candidate answers (yellow). Relevance weights for each candidate were collected via crowd-sourcing. Previous dialog history (red) together with the caption (green) forms the contextual information for the current turn. In the original paper, Das et al. (2017) find that models which structurally encode dialog history, such as Memory Networks (Bordes et al., 2016) or Hierarchical Recurrent Encoders (Serban et al., 2017) improve performance. However, “naive” history modelling (in this case an encoder with late fusion/concatenation of current question, image and history encodings) might actually hurt performance. Massiceti et al. (2018) take this even further, claiming that VisDial can be modeled without taking history or even visual information into account. Das et al. 
(2019) rebutted by showing that both features are still needed to achieve state-of-the8183 art (SOTA) results and an appropriate evaluation procedure has to be used. In this paper, we show that competitive results on VisDial can indeed be achieved by replicating the top performing model for VQA (Yu et al., 2019b) – and effectively treating visual dialog as multiple rounds of question-answering, without taking history into account. However, we also show that these results can be significantly improved by encoding dialog history, as well as by fine-tuning on a more meaningful retrieval metric. Finally, we show that more sophisticated dialog encodings outperform naive fusion on a subset of the data which contains “true” dialog phenomena according to crowd-workers. In contrast to previous work on the VisDial dataset, e.g. (Kottur et al., 2018; Agarwal and Goyal, 2018; Gan et al., 2019; Guo et al., 2019; Kang et al., 2019), we are the first to conduct a principled study of dialog history encodings. Our contributions can thus be summarized as follows: • We present SOTA results on the VisDial dataset using transformer-based Modular CoAttention (MCA) networks. We further show that models encoding dialog history outperform VQA models on this dataset. • We show that curriculum fine-tuning (Bengio et al., 2009) on annotations of semantically equivalent answers further improves results. • We experiment with different dialog history encodings and show that early fusion, i.e. dense interaction with visual information (either via grounding or guided attention) works better for cases where conversational historical context is required. • We release a crowd-sourced subset containing verified dialog phenomena and provide benchmark results for future research. 2 Visual Dialog Models In this section, we extend Modular Co-Attention Networks, which won the VQA challenge 2019 (Yu et al., 2019b) and adapt it to visual dialog. Different from previous co-attention networks (Kim et al., 2018; Nguyen and Okatani, 2018), MCA networks use guided attention to model dense relations between the question and image regions for better visual grounding. In the following, we explore MCA networks with different input encodings following a ‘[model]-[input]’ convention to refer to our MCA model variants; see Figure 3 for an overview. Whenever unspecified, images are represented as a bag of bottom-up features, i.e. object level representations (see Section 3). 2.1 Modular Co-Attention networks The MCA module with multi-modal fusion as depicted in Figure 2, is common to all our architectures. Inspired by the transformers (Vaswani et al., 2017), the MCA network (Yu et al., 2019b) is a modular composition of two basic attention units: self-attention and guided attention. These are arranged in an encoder-decoder composition in the MCA module (Figure 2), which performed best for VQA (Yu et al., 2019b). 2.1.1 Self-Attention and Guided-Attention The Self-Attention (SA) unit in transformers (Vaswani et al., 2017) is composed of a multihead attention layer followed by a feed-forward layer. When applied to vision, the SA unit can be viewed as selecting the most relevant object-level image features for the downstream task. Specifically, the scaled dot product attention takes as input key, query and value (usually same modality’s embedded representations) and outputs a self-attended vector (Eq.1). 
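As a simplified single-head illustration (not the authors' implementation), the sketch below shows scaled dot product attention and how the SA and GA units described in this section differ only in where the key/value sequence comes from.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Single-head Att(Q, K, V) = softmax(Q K^T / sqrt(d_K)) V."""
    d_k = k.size(-1)
    weights = F.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return weights @ v

# Self-Attention (SA): query, key and value all come from the same modality,
# e.g. the per-word question features X.
x = torch.randn(1, 20, 512)        # (batch, question words, dim)
sa_out = scaled_dot_product_attention(x, x, x)

# Guided-Attention (GA): the query comes from one modality (the question) while
# key and value come from another, e.g. 36 object-level image features Y.
y = torch.randn(1, 36, 512)        # (batch, image regions, dim)
ga_out = scaled_dot_product_attention(x, y, y)   # f = Att(X, Y, Y)
```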
Multi-head attention provides multiple representation spaces to capture different linguistic/grounding phenomena, which are otherwise lost by averaging using a single head.

$$\mathrm{Att}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_K}}\right)V$$
$$\mathrm{MHAtt}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_n)W^O$$
$$\mathrm{head}_i = \mathrm{Att}(QW_i^Q, KW_i^K, VW_i^V) \qquad (1)$$

The Guided-Attention (GA) unit conditions the attention on different sequences. The key and value come from one modality, while the query comes from a different modality similar to the decoder architecture in Transformers (Vaswani et al., 2017). Similar to Eq. 1, the GA unit outputs features $f_i = \mathrm{Att}(X, Y, Y)$ where $X \in \mathbb{R}^{m \times d_x}$ comes from one modality and $Y \in \mathbb{R}^{n \times d_y}$ from the other. Residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) are applied to the output of both the attention and feed-forward layers similar to (Vaswani et al., 2017; Yu et al., 2019b) in both the SA and GA units.

2.1.2 Modular Co-Attention Module

The following description of the MCA module is based on the question and the image, but can be extended analogously to model the interaction between the question and history. First, the input (i.e. the question) is passed through multiple multi-head self-attention layers $L$, in order to get self-aware representations before acting as conditional signal to different modalities (visual or contextual history) similar to the auto-encoding procedure of Transformers. Then the final representation $X^L$ is used as the input for GA units to model cross-modal dependencies and learn the final conditioned representation $Y^L$.

Figure 2: Modular Co-Attention (MCA) module with MCA-I (Section 2.1) as an example.

2.1.3 Multi-modal fusion

The learned representations $X^L \in \mathbb{R}^{m \times d}$ and $Y^L \in \mathbb{R}^{n \times d}$ contain the contextualized and conditioned representations over the word and image regions, respectively. We apply attention reduction (Yu et al., 2019b) with a multi-layer perceptron (MLP) for $X^L$ (analogously for $Y^L$). We obtain the final multi-modal fused representation $z$:

$$\alpha^x = \mathrm{softmax}(\mathrm{MLP}^x(X^L)), \quad \tilde{x} = \sum_{i=1}^{m} \alpha^x_i x^L_i, \quad z = \mathrm{LayerNorm}(W_x^T \tilde{x} + W_y^T \tilde{y}) \qquad (2)$$

where $\alpha^x = [\alpha^x_1 \ldots \alpha^x_m] \in \mathbb{R}^m$ are learned attention weights (same process for $\alpha^y$ and $\tilde{y}$) and $W_x \in \mathbb{R}^{d \times d_z}$, $W_y \in \mathbb{R}^{d \times d_z}$ are linear projection matrices (dimensions are the same for simplicity). We call this model MCA with Image component only (MCA-I), since it only encodes the question and image features and therefore treats each question in Visual Dialog as an independent instance of VQA, without conditioning on the historical context of the interaction.

2.2 Variants with Dialog History

In the following, we extend the above framework to model dialog history. We experiment with late/shallow fusion of history and image (MCA-I-H), as well as modelling dense interaction between conversational history and the image representation (i.e. MCA-I-VGH, MCA-I-HGuidedQ).

History guided Question (MCA-I-HGuidedQ): The network in Figure 3a is designed to model coreference resolution, which can be considered as the primary task in VisDial (Kottur et al., 2018).
We first enrich the question embedding by conditioning on historical context using guided attention in the MCA module. We then use this enriched (coreference resolved) question to model the visual interaction as described in Section 2.1.

Visually grounded history with image representation (MCA-I-VGH): Instead of considering conversational history and the visual context as two different modalities, we now ground the history with the image first, see Figure 3b. This is similar in spirit to maintaining a pool of visual attention maps (Seo et al., 2017), where we argue that different questions in the conversation attend to different parts of the image. Specifically, we pass the history to attend to object-level image features using the MCA module to get visually grounded contextual history. We then embed the question to pool the relevant grounded history using another MCA module. In parallel, the question embedding is also used to ground the current visual context. At the final step, the respective current image and historical components are fused together and passed through a linear layer before decoding. Note, this model is generic enough to potentially handle multiple images in a conversation and thus could be extended for tasks e.g. conversational image editing, which is one of the target applications of visual dialog (Kim et al., 2017; Manuvinakurike et al., 2018a,b; Lin et al., 2018; El-Nouby et al., 2018).

Figure 3: All models incorporating dialog history described in Section 2.2.

Two-stream Image and History component (MCA-I-H): Figure 3c shows the model which maintains two streams of modular co-attention networks – one for the visual modality and the other for conversational history. We follow a similar architecture for the visual component as MCA-I and duplicate the structure for handling conversational history. At the final step, we concatenate both the embeddings and pass them through a linear layer.

2.3 Decoder and loss function

For all the models described above, we use a discriminative decoder which computes the similarity between the fused encoding and RNN-encoded answer representations which is passed through a softmax layer to get the probability distribution over the candidate answers. We train using cross entropy over the ground truth answer:

$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{n=1}^{N=100} y_n \log P(x_n, \theta) \qquad (3)$$

$N$ denotes the number of candidate answers which is set to 100 for this task, $y_n$ is the (ground truth) label which is 0 or 1 during the training procedure, or a relevance score of the options during fine-tuning (casting it as multi-label classification).

3 Implementation

We use PyTorch1 (Paszke et al., 2017) for our experiments2. Following Anderson et al. (2018), we use bottom-up features of 36 proposals from images using a Faster-RCNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017) to get a bag of object-level 2048-d image representations.
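Tying these object-level features back to Sections 2.1.3 and 2.3, the sketch below illustrates the attention-reduction fusion (Eq. 2) and the discriminative scoring over 100 candidate answers (Eq. 3); dimensions and module names are hypothetical, and the real encoders (MCA layers, answer LSTM) are replaced by random tensors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionReduce(nn.Module):
    """Collapse a (batch, n, d) sequence into (batch, d) with MLP attention weights."""
    def __init__(self, d, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, seq):
        alpha = F.softmax(self.mlp(seq), dim=1)   # attention over positions
        return (alpha * seq).sum(dim=1)           # weighted sum -> (batch, d)

d = d_z = 512
reduce_x, reduce_y = AttentionReduce(d), AttentionReduce(d)
proj_x, proj_y, norm = nn.Linear(d, d_z), nn.Linear(d, d_z), nn.LayerNorm(d_z)

x_l = torch.randn(2, 20, d)   # contextualized question words X^L (stand-in)
y_l = torch.randn(2, 36, d)   # conditioned image regions Y^L (stand-in)
z = norm(proj_x(reduce_x(x_l)) + proj_y(reduce_y(y_l)))   # fused embedding z

# Discriminative decoder: dot product between z and the encoded candidate answers,
# then a (log-)softmax over the 100 options for the cross-entropy loss of Eq. 3.
answer_enc = torch.randn(2, 100, d_z)   # stand-in for LSTM-encoded answer options
log_probs = F.log_softmax((answer_enc @ z.unsqueeze(-1)).squeeze(-1), dim=-1)
```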
Input question and candidate options are tokenized to a maximum length of 20 while the conversational history to 200. Token embeddings in text are initialized with 300-d GloVe vectors (Pennington et al., 2014) and shared among all text-based encoders. The RNN encodings are implemented using LSTMs (Hochreiter and Schmidhuber, 1997). 1https://pytorch.org/ 2Code available at https://github.com/ shubhamagarwal92/visdial_conv 8186 We use the Adam optimizer (Kingma and Ba, 2015) both for training and fine-tuning. More training details can be found in Appendix A. 4 Task Description 4.1 Dataset We use VisDial v1.0 for our experiments and evaluation.3 The dataset contains 123K/2K/8K dialogs for train/val/test set respectively. Each dialog is crowd-sourced on a different image, consisting of 10 rounds of dialog turns, totalling approx. 1.3M turns. Each question has also been paired with a list of 100 automatically generated candidate answers which the model has to rank. To account for the fact that there can be more than one semantically correct answer (e.g. “Nope”, “No”, “None”, “Cannot be seen”), “dense annotations” for 2k/2k turns of train/val of the data have been provided, i.e. a crowd-sourced relevance score between 0 and 1 (1 being totally relevant) for all 100 options. 4.2 Evaluation protocol As the Visual Dialog task has been posed as a ranking problem, standard information retrieval (IR) metrics are used for evaluation, such as Recall@{1,5,10} to measure performance in the top N results (higher better), mean reciprocal rank (MRR) of the Ground-Truth (GT) answer (higher better), and Mean rank of the GT answer (lower better). Normalized Discounted Cumulative Gain (NDCG) is another measure of ranking quality, which is commonly used when there is more than one correct answer (provided with their relevance). 4.3 Training details Sparse Annotation Phase: We first train on sparse annotations, i.e. only 1 provided groundtruth answer, which is available for the whole training set. Here the model learns to select only one relevant answer. Curriculum Fine-tuning Phase: Dense annotations, i.e. crowd-sourced relevance weights, are provided for 0.16% of training set, which we use to fine-tune the model to select multiple semantically equivalent answers. This acts like a curriculum learning setup (Elman, 1993; Bengio et al., 2009), 3Following the guidelines on the dataset page we report results only on v1.0, instead of v0.9. VisDial v1.0 has been consistently used for Visual Dialog Challenge 2018 and 2019. where selecting one answer using sparse annotation is an easier task and fine-tuning more difficult.4 4.4 Baselines MCA-I-HConcQ and MCA-H: MCA-IHConcQ is a naive approach of concatenating raw dialog history to the question while keeping the rest of the architecture the same as MCA-I. MCA-H on the other hand considers this task as only conversational (not visual) dialog with MCA module on history instead of image. RvA: We reproduce the results of Niu et al. (2019)’s Recursive Visual Attention model (RvA), which won the 2019 VisDial challenge. Their model browses the dialog history and updates the visual attention recursively until the model has sufficient confidence to perform visual co-reference resolution. We use their single model’s opensource implementation and apply our fine-tuning procedure on the val set in Table 1. When reporting on the test set results in Table 2, we use the leaderboard scores published online which contains further unpublished enhancements based on ensembling (MReaL-BDAI). 
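For reference, one common form of the NDCG computation described in Sec. 4.2 is sketched below (function and variable names are illustrative; the official evaluation code may differ in details such as the cut-off and gain function).

```python
import numpy as np

def ndcg_for_turn(relevance, predicted_ranking):
    """NDCG for one question turn, truncated at k = number of relevant candidates."""
    # relevance: crowd-sourced score in [0, 1] for each of the 100 candidate answers.
    # predicted_ranking: candidate indices sorted from highest to lowest model score.
    rel = np.asarray(relevance, dtype=float)
    k = max(int((rel > 0).sum()), 1)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = float((rel[np.asarray(predicted_ranking)[:k]] * discounts).sum())
    ideal = float((np.sort(rel)[::-1][:k] * discounts).sum())
    return dcg / ideal if ideal > 0 else 0.0

# Toy example with 5 candidates: a partially relevant answer is ranked first.
print(round(ndcg_for_turn([0.0, 1.0, 0.4, 0.0, 0.8], [4, 1, 2, 0, 3]), 3))  # ~0.957
```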
5 Results In the following, we report results on the VisDial v1.0 val set, (Table 1), as well as the test-std set,5 (Table 2). For measuring significance (reported on p ≤0.05), we use Kruskal-Wallis (Kruskal and Wallis, 1952) and Wilcoxon signed rank test (Wilcoxon, 1992) with Bonferroni correction (Bonferroni, 1936). We report results in terms of NDCG, which is the main metric of the challenge. MCA-I-H is our best performing model. It achieves state-of-the-art performance: It outperforms the RvA baseline by almost 5 NDCG points on the val set and by over 7 points on the test set. On the official challenge test set, MCA-I-H ranks 2nd: it improves over 7 NDCG over the best single model but loses by 2 points against a 6-strong RvA ensemble model (2019 winning entry). 4While ‘instance-level’ curriculum learning is defined in terms of ‘harder dialogs’, in our work, we used ‘dataset/tasklevel’ curriculum finetuning. Our suggested method is a combination of curriculum learning and fine tuning (pre-training and adjusting to a specific downstream task). As such, we use the term ‘curriculum fine-tuning’ i.e. adaptation by NDCG aware curriculum during fine-tuning. 5We only report results for our best preforming models as the number of allowed submissions to the challenge is limited. 8187 Model Sparse annotation Phase Curriculum Fine-tuning NDCG ↑ MRR ↑ R@1 ↑ R@5 ↑ R@10 ↑ Mean ↓ NDCG ↑ MRR ↑ R@1 ↑ R@5 ↑ R@10 ↑ Mean ↓ RvA (Challenge winners; single model) 55.86 64.42 50.71 81.50 90.15 4.06 67.90 51.92 36.57 70.69 83.61 5.85 MCA-H 51.67 59.65 45.21 77.01 86.79 4.92 64.06 38.16 22.86 54.99 71.24 9.19 MCA-I 59.94 59.67 45.95 76.15 86.24 5.24 70.82 37.34 21.22 56.13 72.74 9.23 MCA-I-HConcQ 60.65 64.08 50.83 80.74 89.62 4.22 70.81 40.75 24.53 60 75.11 8.13 MCA-I-HGuidedQ 60.17 64.36 50.99 80.95 89.93 4.17 71.32 44.1 28.44 61.74 76.53 7.83 MCA-I-VGH 62.44 61.25 47.5 78.16 87.8 4.74 72.0 40.22 24.38 58.8 73.77 8.44 MCA-I-H 60.27 64.33 51.12 80.91 89.65 4.24 72.22 42.38 26.94 60.17 75.2 8.2 MCA-I-H-GT 60.27 64.33 51.12 80.91 89.65 4.24 72.18 46.92 32.09 63.85 78.06 7.37 Table 1: Results on VisDial v1.0 val set. Here ‘I’ denotes image modality while ‘H’ refers to the use of dialog history. Our baseline models are defined in Section 2.1 and 4.4. MCA variants with dialog history follow the same order as Section 2.2. MCA-I-H-GT refers to the model with corrected dense annotations (see Section 6.2) Model NDCG ↑ MRR ↑ R@1 ↑ R@5 ↑ R@10 ↑ Mean ↓ RvA 55.59 63.03 49.03 80.40 89.83 4.18 MS-D365-AI (Ensemble-2nd) 64.78 54.23 42.88 65.38 76.12 6.50 MReaL-BDAI (Ensemble-1st) 74.57 53.37 40.96 66.45 79.70 6.60 MCA-I 70.97 35.65 19.32 54.57 71.39 9.51 MCA-I-VGH 71.33 38.92 22.35 58.42 74.5 8.69 MCA-I-H 72.47 37.68 20.67 56.67 72.12 8.89 Table 2: Evaluation on test-std set with results from the online leaderboard. Winners are picked on NDCG. MReaL-BDAI (2019 winning entry) is an ensemble of 6 RvA models. Runner-up MS-D365AI (unpublished) also used ensembling. Note all our submitted MCA models use curriculum fine-tuning and no ensembling. Compared to MCA-I, which treats the task as multiple rounds of VQA, encoding history improves results, but only significantly for MCAI-VGH in the sparse annotation phase. After fine-tuning, MCA-I-VGH and MCA-I-H perform equally. MCA-I-H implements a late/shallow fusion of history and image. Architectures which model dense interaction between the conversational history and the image representations (i.e. 
MCAI-VGH, MCA-I-HGuidedQ) perform comparably; only MCA-HConcQ performs significantly worse. Note that MCA-I also outperforms the baselines and current SOTA by a substantial margin (both in the sparse annotation phase and curriculum finetuning phase), while, counter-intuitively, there is not a significant boost by adding conversational history. This is surprising, considering that according to Das et al. (2017), 38% of questions contain a pronoun, which would suggest that these questions would require dialog history in order to be “understood/grounded” by the model. Furthermore, curriculum fine-tuning significantly improves performance with an average improvement of 11.7 NDCG points, but worsens performance in terms of the other metrics, which only consider a single ground truth (GT) answer. 6 Error Analysis In the following, we perform a detailed error analysis, investigating the benefits of dialog history encoding and the observed discrepancy between the NDCG results and the other retrieval based metrics. 6.1 Dialog History We performed an ablation study whereby we did not include the caption as part of historical context and compare with the results in Table 1. The performance dropped from (NDCG 72.2, MRR 42.3) to (NDCG 71.6, MRR 40.7) using our best performing MCA-I-H model after finetuning. Since the crowd-sourced conversation was based on the caption, the reduced performance was expected. In order to further verify the role of dialog history, we conduct a crowd-sourcing study to understand which questions require dialog history, in order to be understood by humans. We first test our history-encoding models on a subset (76 dialogs) of the recently released VisPro dataset (Yu et al., 2019a) which focuses on the task of Visual Pronoun Resolution.6 Note that VisPro also contains non-referential pleonastic pronouns, i.e. pronouns used as “dummy subjects” when e.g. talking about the weather (“Is it sunny?”). We thus create a new crowd-sourced dataset7, which we call VisDialConv. This is a subset of the VisDial val-set consisting of 97 dialogs, where the crowd-workers identified single turns (with dense annotations) requiring historical information. In particular, we asked crowd-workers whether they could provide an answer to a question given an image, without showing them the dialog history, and select one of the categories in Table 4 (see further details in Appendix B). In order to get reliable results, we recruited 3 crowd-workers per image-question pair and only kept instances where at least 2 people agreed. Note that we only had to discharge 14.5% of the origi6We use the intersection of dialogs in VisDial val set and VisPro to create this subset. 7Data collection code available at https://github. 
com/shubhamagarwal92/visdialconv-amt 8188 Model Sparse annotation Phase Curriculum Fine-tuning NDCG ↑ MRR ↑ R@1 ↑ R@5 ↑ R@10 ↑ Mean ↓ NDCG ↑ MRR ↑ R@1 ↑ R@5 ↑ R@10 ↑ Mean ↓ VisPro subset dataset MCA-I 59.80 57.88 45.39 72.24 82.76 5.84 69.82 36.2 20 54.08 70.92 10.02 MCA-I-HConcQ 61.08 61.79 48.95 77.5 86.58 4.72 68.44 38 22.24 55.79 71.71 9.17 MCA-I-HGuidedQ 61.35 60.13 47.11 75.26 86.18 5.23 68.29 36.59 21.05 53.29 70.13 9.76 MCA-I-VGH 61.68 59.33 46.18 75.53 86.71 5.07 68.97 39.21 23.68 57.11 70.53 8.83 MCA-I-H 61.72 59.62 45.92 77.11 86.45 4.85 70.87 39.8 25.39 55.13 70.39 9.42 VisDialConv (Crowd-sourced subset) dataset MCA-I 52.07 55.55 41.65 72.47 83.81 5.92 58.65 36.2 20.52 53.3 68.25 10.32 MCA-I-HConcQ 54.84 62.06 47.42 80.1 88.87 4.37 61.42 37.92 21.86 55.67 73.3 9.01 MCA-I-HGuidedQ 53.81 62.29 48.35 80.1 88.76 4.42 62.92 38.07 22.58 54.74 70.82 9.5 MCA-I-VGH 55.48 58.45 44.54 74.95 86.19 5.18 60.63 38.1 22.89 53.71 70.31 9.49 MCA-I-H 53.01 61.24 47.63 79.07 87.94 4.77 59.89 39.73 25.15 56.49 71.86 9.53 Table 3: Automatic evaluation on the subsets of VisPro and VisDialConv dataset. We found history based MCA models to outperform significantly compared to the MCA-I model. On VisDialConv, MCA-I-VGH still outperform all other models in spare annotation phase while MCA-I-HGuidedQ performs the best after fine-tuning. Annotation Count Percentage VQA turns 594 67.12% History required 97 10.96% Common Sense 94 10.62% Guess 59 6.67% Cant tell 34 3.84% Not relevant 7 0.79% Table 4: Results of crowd-sourcing study to understand whether humans require dialog history to answer the question. ‘VQA turns’ indicate that humans could potentially answer correctly without having access to the previous conversation while ‘History required’ are the cases identified requiring dialog context. We also identified the cases requiring world knowledge/ common sense, guessing and questions not relevant to the image. nal 1035 image-question pairs, leaving us with 885 examples. The results in Table 4 show that only 11% required actual dialog historical context according to the crowd-workers. Most of the time (67% cases), crowd-workers said they can answer the question correctly without requiring history. The results in Table 3 are on the subset of 97 questions which the crowd-workers identified as requiring history.8 They show that history encoding models (MCA-I-HGuidedQ / MCA-I-HConcQ / MCA-I-H / MCA-I-VGH) significantly outperform MCA-I, suggesting that this data cannot be modelled as multiple rounds of VQA. It can also be seen that all the models with dense (early) interaction of the historical context outperform the one with late interaction (MCA-I-H) in terms of NDCG. Models with dense interactions appear to be more reliable in choosing other correct relevant answers because of the dialog context. 8We took care to only include examples from Visdial val set in both Vispro and VisDialConv subsets. Also note, there are only 8 overlapping instances between Vispro and VisdialConv subsets. Our best performing model on VisDialConv is MCA-I-HGuidedQ and achieves a NDCG value of 62.9 after curriculum fine-tuning. However, on the VisPro subset, we observe that MCA-I-H still outperforms the other models. Interestingly, on this set, MCA-I also outperforms other history encoding models (except for MCA-I-H). In sum, our analysis shows that only a small subset of the VisDial dataset contains questions which require dialog history, and for those, models which encode history lead to better results. 
We posit that this is due to the fact that questions with pleonastic pronouns such as “Is it sunny/daytime/day. .. ” are the most frequent according to our detailed analysis in Appendix C about the dialog phenomena. Relevance of GT Train Val Count Percent Count Percent 1 1057 52.85% 643 31.15% 0.8 397 19.23% 0.6 330 15.99% 0.5 526 26.30% 0.4 281 13.61% 0.2 227 11.00% 0 417 20.85% 186 9.01% Total 2000 100% 2064 100% Table 5: Relevance score (dense annotation) provided for 2k/2k train/val QA turns. We find that 20% of the ground truth answers were marked as irrelevant (0 score) and partially relevant (0.5 score) by the human annotators for train set. This can be attributed to human errors made while collecting the original data as well as when crowd-sourcing the dense annotations. 6.2 Dense Annotations for NDCG Here, we investigate the discrepancy between the NDCG results and the other retrieval-based methods. First, we find that the annotation scales differs: while there is a 3-way annotation on the train set, the val set defines 6 possible relevance classes, see Table 5. This affects the evaluation results of our 8189 Image Dialog MCA-I-H MCA-I-VGH A bag of chips and a apple and orange. NRel: 15 Q What kind of chips are they? A Chili cheese corn chips. Q Is the bag open or still sealed? A Sealed. Q Is it next to the apple and orange? A Yes. Q Are they all on a table? GT: Yes. Rel: 1.0 ♣ RGT :1 ; NDCG: 65.56 (1.0) Yes. (1.0) Yes they are on a table. (0.0) Maybe , it’s a close up. (0.0) Can’t see a table. (0.2) I think so, it is a close up. ♦ RGT :2 ; NDCG: 69.94 (0.8) I think so. (1.0) Yes. (0.2) It appears to be. (0.4) I would think so. (0.2) I think so, it is a close up. ♣ RGT :1 ; NDCG: 83.93 (1.0) Yes. (1.0) Yes they are on a table. (0.0) Yes they are. (0.0) Can’t see a table. (0.2) I think so, it is a close up. ♦ RGT :4 ; NDCG: 84.15 (0.8) I think so. (0.8) They appear to be. (0.4) Probably. (1.0) Yes. (1.0) Yes they are. A remote controller is hidden in a console inside of an arm rest. NRel: 8 Q Can you see the remote? A Yes i can. Q What color is it? A It is black. Q Can you tell what it is for? A It appears to be a phone. Q What kind of furniture is it in? GT: Looks like a car console. Rel: 0.4 ♣ RGT :1 ; NDCG: 63.19 (0.4) Looks like a car console. (0.4) It looks like a chair on a train or a bus. (0.0) There are tables. (0.0) Looks like an outdoor space. (0.2) It’s a cubicle with shelves. ♦ RGT :3 ; NDCG: 79.2 (0.4) I cannot tell. (0.4) I can’t tell. (0.4) Looks like a car console. (0.2) Not sure. (0.4) Can’t tell. ♣ RGT :2; NDCG: 58.99 (0.0) A cell phone, i can’t see it close up. (0.4) Looks like a car console. (0.4) It looks like a chair on a train or a bus. (0.2) It’s a cubicle with shelves. (0.0) The picture does not show 1. ♦ RGT :7 ; NDCG: 82.22 (0.4) I cannot tell. (0.4) Can’t tell. (0.4) I can’t tell. (0.2) Not sure. (0.0) The picture does not show 1. Figure 4: Top-5 ranked predictions (relevance in parentheses) of MCA-I-H and MCA-I-VGH after both sparse annotation and curriculum fine-tuning phase. RGT defines the rank of Ground Truth (GT) predicted by the model. We also calculate NDCG of rankings for current question turn. NRel denotes number of candidate answer options (out of 100) with non-zero relevance (dense annotations). Here ♣and ♦represents predictions after sparse annotation and curriculum fine-tuning respectively. model, for which we can’t do much. 
Next, a manual inspection reveals that the relevance weight annotations contain substantial noise: We find that ground truth answers were marked as irrelevant for about 20% of train and 10% of val set. Thus, our models seem to get “confused” by fine-tuning on this data. We, therefore, manually corrected the relevance of only these GT answers (in dense annotations of train set only, but not in val set). Please see Appendix D for further details. The results in Table 1 (for MCA-I-H-GT) show that the model fine-tuned on the corrected data still achieves a comparable NDCG result, but substantially improves stricter (single answer) metrics, which confirms our hypothesis. Finally, due to the noisy signal they receive during fine-tuning, our models learn to select “safe” answers9, such as “I can’t tell” (see examples in 9We show the statistics of top-ranked predictions by our MCA-I-H model on our VisdialConv subset (i.e. 97 dialogs of the Visdial val set). Read as: (Response, count, %) (Yes, 14, 14%) (No, 11, 11.34%) (I cannot tell, 9, 9.27%) (Nope, 3, 3%) (Not that I see, 2, 2.06%) (Red and white, 2, 2.06%) (Not sure, 2, 2.06%) (I can’t tell, 2, 2.06%). This shows that Figure 4), which rank high according to (the more forgiving) NDCG, but perform poorly for stricter metrics like MRR and Recall. 7 Discussion and Related Work Our results suggest that the VisDial dataset only contains very limited examples which require dialog history. Other visual dialog tasks, such as GuessWhich? (Chattopadhyay et al., 2017) and GuessWhat?! (De Vries et al., 2017) take place in a goal-oriented setting, which according to Schlangen (2019), will lead to data containing more natural dialog phenomena. However, there is very limited evidence that dialog history indeed matters for these tasks (Yang et al., 2019). As such, we see data collection to capture visual dialog phenomena as an open problem. Nevertheless, our results also show that encoding dialog history still leads to improved results. This is in contrast with early findings that a) “naive” encoding will harm performance (Das et al. (2017); at least 13.3% of answers are non-commital (I cannot tell, Not sure, I can’t tell). 8190 see MCA-I-HConcQ in Table 1), or that b) history is not necessary (Massiceti et al., 2018). Furthermore, we find that our model learns to provide generic answers by taking advantage of the NDCG evaluation metric. Learning generic answers is a well-known problem for open-domain dialog systems, e.g. (Li et al., 2016). While the dialog community approaches these phenomena by e.g. learning better models of coherence (Xu et al., 2018), we believe that evaluation metrics also need to be improved for this task, as widely discussed for other generation tasks, e.g. (Liu et al., 2016; Novikova et al., 2017; Reiter, 2018). As a first step, BERT score (Zhang et al., 2019) could be explored to measure ground-truth similarity replacing the noisy NDCG annotations of semantic equivalence. 8 Conclusion and Future Work In sum, this paper shows that we can get SOTA performance on the VisDial task by using transformerbased models with Guided-Attention (Yu et al., 2019b), and by encoding dialog history and finetuning we can improve results even more. Of course, we expect pre-trained visual BERT models to show even more improvements on this task, e.g. Vilbert (Lu et al., 2019), LXMert (Tan and Bansal, 2019), UNITER (Chen et al., 2019) etc. However, we also show the limitations of this shared task in terms of dialog phenomena and evaluation metrics. 
We, thus, argue that progress needs to be carefully measured by posing the right task in terms of dataset and evaluation procedure. Acknowledgments We thank the anonymous reviewers for their insightful comments. Shubham would like to thank Raghav Goyal for the discussions during ‘Pikabot’ submission to Visual Dialog Challenge 2018. This work received continued support by Adobe Research gift funding for further collaboration. This research also received funding from Adeptmind Inc., Toronto, Canada and the EPSRC project MaDrIgAL (EP/N017536/1). We would also like to acknowledge the AWS Cloud Credits for Research programme. References Shubham Agarwal and Raghav Goyal. 2018. Ensemble based discriminative models for visual dialog challenge 2018. In Visual Dialog Challenge. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In ICCV. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. In CoRR abs/1607.06450. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ACM. Carlo Bonferroni. 1936. Statistical class theory and probability calculation. Publications of the Higher Institute of Economic and Commercial Sciences of Florence. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. In CoRR abs/1605.07683. Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating visual conversational agents via cooperative humanAI games. In CoRR abs/1708.05122. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. UNITER: Learning universal image-text representations. In CoRR abs/1909.11740. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In CVPR. Abhishek Das, Devi Parikh, and Dhruv Batra. 2019. Response to “visual dialogue without vision or dialogue”(massiceti et al., 2018). CoRR abs/1901.05531. Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Alaaeldin El-Nouby, Shikhar Sharma, Hannes Schulz, Devon Hjelm, Layla El Asri, Samira Ebrahimi Kahou, Yoshua Bengio, and Graham W. Taylor. 2018. Tell, draw, and repeat: Generating and modifying images based on continual linguistic instruction. In ICCV. 8191 Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition. Zhe Gan, Yu Cheng, Ahmed EI Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao. 2019. Multi-step reasoning via recurrent dual attention for visual dialog. In ACL. Dalu Guo, Chang Xu, and Dacheng Tao. 2019. Imagequestion-answer synergistic network for visual dialog. In CVPR. Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, Gerald Tesauro, and Rogerio Feris. 2018. Dialog-based interactive image retrieval. In NeurIPS. 
Janosch Haber, Tim Baumg¨artner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, and Raquel Fern´andez. 2019. The photobook dataset: Building common ground through visually-grounded dialogue. In ACL. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples. In ACL. Gi-Cheon Kang, Jaeseo Lim, and Byoung-Tak Zhang. 2019. Dual attention networks for visual reference resolution in visual dialog. In EMNLP. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In NeurIPS. Jin-Hwa Kim, Devi Parikh, Dhruv Batra, Byoung-Tak Zhang, and Yuandong Tian. 2017. Codraw: Visual dialog for collaborative drawing. In CoRR abs/1712.05558. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Satwik Kottur, Jos´e MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module networks. In ECCV. Satwik Kottur, Jos´e MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2019. CLEVR-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. In NAACL. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision. William H Kruskal and W Allen Wallis. 1952. Use of ranks in one-criterion variance analysis. Journal of the American statistical Association. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL. Tzu-Hsiang Lin, Trung Bui, Doo Soon Kim, and Jean Oh. 2018. A multimodal dialogue system for conversational image editing. In NeurIPS Conv AI workshop. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP. Sharid Lo´aiciga, Liane Guillou, and Christian Hardmeier. 2017. What is it? disambiguating the different readings of the pronoun ‘it’. In EMNLP. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. VilBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In CoRR abs/1908.02265. Ramesh Manuvinakurike, Jacqueline Brixey, Trung Bui, Walter Chang, Doo Soon Kim, Ron Artstein, and Kallirroi Georgila. 2018a. Edit me: A corpus and a framework for understanding natural language image editing. In LREC. Ramesh Manuvinakurike, Trung Bui, Walter Chang, and Kallirroi Georgila. 2018b. Conversational image editing: Incremental intent identification in a new dialogue task. In SIGDial. Daniela Massiceti, Puneet K Dokania, N Siddharth, and Philip HS Torr. 2018. Visual dialogue without vision or dialogue. CoRR abs/1812.06417. Duy-Kien Nguyen and Takayuki Okatani. 2018. Improved fusion of visual and language representations by dense symmetric co-attention for visual question answering. In CVPR. Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, and Ji-Rong Wen. 2019. 
Recursive visual attention in visual dialog. In CVPR. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In EMNLP. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NeurIPS-W. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. 8192 Ehud Reiter. 2018. A structured review of the validity of bleu. Computational Linguistics. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS. David Schlangen. 2019. Grounded agreement games: Emphasizing conversational grounding in visual dialogue settings. In CoRR abs/1908.11279. Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, and Leonid Sigal. 2017. Visual reference resolution using attention memory for visual dialog. In NeurIPS. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI. Ravi Shekhar, Tim Baumgartner, Aashish Venkatesh, Elia Bruni, Raffaella Bernardi, and Raquel Fern´andez. 2018. Ask no more: Deciding when to guess in referential visual dialogue. In COLING. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In EMNLP. Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In CVPR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Frank Wilcoxon. 1992. Individual comparisons by ranking methods. Breakthroughs in statistics. Xinnuo Xu, Ondˇrej Duˇsek, Ioannis Konstas, and Verena Rieser. 2018. Better conversations by modeling, filtering, and optimizing for coherence and diversity”. In EMNLP. Tianhao Yang, Zheng-Jun Zha, and Hanwang Zhang. 2019. Making history matter: Gold-critic sequence training for visual dialog. CoRR abs/1902.09326. Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, and Changshui Zhang. 2019a. What you see is what you get: Visual pronoun coreference resolution in dialogues. In EMNLP. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019b. Deep modular co-attention networks for visual question answering. In CVPR. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with bert. In CoRR abs/1904.09675. A More implementation details We built our implementation upon starter code in PyTorch which the VisDial organisers kindly provided.10 We follow the guidelines of Teney et al. (2018) and used static 36 as the number of object proposals in our experiments (though our model can handle dynamic number of proposals). We experimentally determined the learning rates of 0.0005 for training MCA models and 0.0001 for fine-tuning and reducing it by 1/10 after every 7 and 10 epochs out of a total of 12 epochs for training and 1/5 after 2 epochs for fine-tuning. We use pytorch’s LambdaLR scheduler while training and ReduceLROnPlateau for the finetuning procedure. 
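For concreteness, the snippet below sketches one reading of this schedule in PyTorch. The use of Adam, the placeholder module, and the exact milestone and patience values are assumptions on our part rather than the authors' released configuration.

```python
# A minimal sketch of the learning-rate schedules described above, under one
# reading of the text: decay by 10x at epochs 7 and 10 during training
# (LambdaLR), and decay by 5x when the tracked metric stops improving during
# fine-tuning (ReduceLROnPlateau). Adam and the toy module are placeholders.
import torch

model = torch.nn.Linear(512, 512)  # placeholder module

# Training phase: base LR 0.0005, multiplied by 0.1 after epochs 7 and 10.
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
def lr_lambda(epoch):
    if epoch < 7:
        return 1.0
    return 0.1 if epoch < 10 else 0.01
train_sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lr_lambda)

# Fine-tuning phase: base LR 0.0001, reduced by 1/5 after ~2 epochs without
# improvement in val NDCG (mode="max" because higher NDCG is better).
ft_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ft_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    ft_opt, mode="max", factor=0.2, patience=2)

for epoch in range(12):
    # ... forward/backward passes and opt.step() for one epoch of training ...
    train_sched.step()

val_ndcg = 0.60  # placeholder validation metric
ft_sched.step(val_ndcg)
```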
Dropout of 0.2 is used for regularization and we perform early stopping and saved the best model by tracking the NDCG value on val set. Layer normalisation (Ba et al., 2016) is used for stable training following (Vaswani et al., 2017; Yu et al., 2019b). Attention reduction consisted of 2 layer MLP (fc(d)-ReLU-Dropout(0.2)-fc(1)). We also experimented with different contextual representations, including BERT (Devlin et al., 2019); However we didn’t observe any improvement, similar to the observation by (Tan and Bansal, 2019). For the results on the validation set, only the training split is used. To report results on test-std set, both the training and val set are used for training. For curriculum fine-tuning we use multi-class cross entropy loss where weighted by the relevance score. All our MCA modules have 6 layers and 8 heads, which we determined via a hyper parameter search. Table 7 shows more details. Annotation Text VQA turns I can confidently tell the correct answer just seeing the image. History required I want to know what was discussed before to answer confidently. Cannot answer with just the question and image. Need more information (context) from previous conversation. Common Sense I can answer it but by inferring using common sense. Guess I can only guess the answer. Cant tell I can’t tell the answer. Not relevant Not relevant question for this image. Table 6: Mapping of human annotation with the actual text shown to the user. 10https://github.com/batra-mlp-lab/ visdial-challenge-starter-pytorch. 8193 Model Training Curriculum Fine-tuning NDCG MRR R@1 R@5 R@10 Mean NDCG MRR R@1 R@5 R@10 Mean MCA-I-H (L6 H8) 60.27 64.33 51.12 80.91 89.65 4.24 72.22 42.38 26.94 60.17 75.2 8.2 MCA-I-H (L2 H4) 58.99 64.46 51.14 81.03 89.91 4.19 70.57 42.48 26.3 61.3 76.05 8.06 MCA-I-H (L6 H2) 60.13 60.63 46.7 77.55 87.47 4.8 70.42 39.17 23.3 57.64 73.48 8.69 Table 7: Hyper-parameter tuning for number of layers and number of heads. The results in the main paper are reported with 6 Layers(L6) and 8 Heads (H8) for all MCA models. B AMT Interface Here, we provide more details on the crowdsourcing study described in Section 6.1. Figure 6 shows the instructions shown to the turkers. We also setup a qualification test consisting of 2 test images (in Figure 7) to assess whether turkers understood the task properly. This allowed us to have an automated quality check for the annotations. Each HIT consisted of 15 images. For the actual task (e.g. Fig. 8), users were shown just the image and the current question – without any previous historical context – and asked to choose one of the answers as shown in Table 6. Our AMT interface11 used AWS boto3 library in python. C Diversity and dialog phenomena in VisDial dataset We also did an analysis of the top-20 questions (Figure 9) and answers (Figure 10) in the training set. ‘Yes’/‘No’ binary answers form the major chunk (19.15% and 21.2% respectively) of ground truth answers. Color related answers (such as White, Brown in the top-20 answers) form 4% of all the ground truth answers. Numbered answers (such as 0, 1, 2 ,3) form 1.3% while ‘Can’t tell’ form another 1.2%. As evident in the top-20 questions, weather related questions (such as ‘Is it sunny/daytime/day/night?’), color related (‘What color is it/his hair/the table?’) and basic conversational-starters (‘Can you see any people?’) form the major portion. We also tried to analyze the top-20 answers (Figure 11) which had non-zero relevance in the dense annotations. 
Specifically, we took all 2k example turns of training set with dense annotations for each of 100 options. We find that generic answers such as ‘Can’t tell’, binary answers ‘Yes/No’ and their semantically equivalent answers ‘Not that i can see’ are mostly given non-zero relevance by crowd-workers. 11We built upon the repo: https://github.com/ jcjohnson/simple-amt. We tried to calculate the statistics of the pronouns and ellipsis which we consider essential (but not complete) phenomena in a dialog dataset. Figure 12 shows the number of pronouns in a dialog. We find that major chunk consisted of 2-6 pronouns in all the 10 questions across the dialog. We tried to distinguish between the usage of ‘it’ as pleonastic and non-pleonastic pronouns (discussed in (Lo´aiciga et al., 2017)). For e.g. in the sentence: ‘It is raining’. Here, though, ‘it’ would be identified as a pronoun, but it doesn’t refer to anything. Notice the drift in distribution of the number of pronouns (All pronouns vs Non-pleonastic). We also tried to identify the cases of ellipsis (methodology explained further) and found that majority questions (82%) doesn’t contain any case of ellipsis in the dialog. We define simple heuristics to identify dialog phenomena. Specifically, our heuristics can be listed as: • We use constituency parser (Joshi et al., 2018) 12 to parse each question. If the parsed tree doesn’t contain ‘Sentence’ as the root (‘S’, ‘SQ’, ‘SBARQ’, ‘SINV’), we consider it a case of ellipsis. • We use spaCy 13 to extract the pronouns in all the questions of a dialog. • To distinguish between different usage of ‘it’, we mark all the co-occurrences of manually defined weather identifiers (‘rainy’, ‘sunny’, ‘daytime’, ‘day’, ‘night’) as pleonastic. • Though ‘other’ is a pronoun, it is not tagged by standard taggers. We explicitly deal with these cases to tag ‘other’ as a case of pronoun. For e.g. ‘What about the other?’ D Corrected dense annotations We maintain the whole relevance list, however we change the relevance of only the ground truth (GT) to 1 instead of 0/0.5 in the train annotations (only 943 values). This was done to avoid extra gradient 12https://github.com/allenai/allennlp/ blob/master/allennlp/pretrained.py 13https://spacy.io/usage/ linguistic-features 8194 Image Dialog MCA-I-H MCA-I-VGH A surfer crouches as they ride a cresting wave. NRel: 15 Q Is the photo in color? A Yes. Q Any other people? GT: No. Rel: 0.8 ♣ RGT :1 ; NDCG 83.32 (0.8) No. (0.2) 0. (1.0) Nope. (0.8) No there’s not. (0.4) Just the 1. ♦ RGT :2; NDCG 91.2 (1.0) Nope. (0.8) No. (0.8) Not that i can see. (0.8) Not that i see. (0.8) No there’s not. ♣ RGT :1 ; NDCG 74.98 (0.8) No. (0.2) 0. (1.0) Nope. (0.2) 0 at all. (0.8) Not that i can see. ♦ RGT :1 ; NDCG 85.24 (0.8) No. (0.8) Not that i can see. (0.8) Not that i see. (0.8) No there’s not. (1.0) Nope. An apple and orange are sitting in a white box with size measurements. NRel: 4 Q What color is the apple? A It is red and yellow. Q What color is the orange? A It is dark orange. Q What is the size of the box? A Can’t tell. Q Where is the box? GT: Can’t tell. Rel: 1.0 ♣ RGT :9 ; NDCG 0.0 (0.0) On the table. (0.0) In a container by the window. (0.0) On table. (0.0) Use no paper here. (1.0) I cannot tell. ♦ RGT :5 ; NDCG 73.59 (1.0) I cannot tell. (0.0) On the table. (1.0) I can’t tell. (0.6) Not sure. (1.0) Can’t tell. ♣ RGT :9 ; NDCG 0.0 (0.0) On table. (0.0) On the table. (0.0) In a container by the window. (0.0) In a kitchen. (1.0) I cannot tell. ♦ RGT :4 ; NDCG 98.84 (1.0) I cannot tell. 
(1.0) I can’t tell. (0.6) Not sure. (1.0) Can’t tell. (0.0) In a container by the window. A clock with a rose on its corner sits on the wall. NRel: 3 Q Is it a real rose? A I don’t think so. Q Is it an old fashion clock? A No, not really. Q Is it a digital clock? A No. Q Is it hanging on the wall or leaning? GT: It’s hanging. Rel: 1.0 ♣ RGT :1 ; NDCG: 81.55 (1.0) It’s hanging. (0.0) Yes, it’s attached to the side of the building. (0.0) Yes. (0.0) It is cut out, but it is definitely sitting on something. (0.0) It looks like. ♦ RGT :2 ; NDCG 51.45 (0.0) It looks like. (1.0) It’s hanging. (0.0) Can’t tell. (0.0) Unclear. (0.0) I think so. ♣ RGT :2 ; NDCG 51.45 (0.0) No it is not mounted on the wall. (1.0) It’s hanging. (0.0) It is cut out, but it is definitely sitting on something. (0.0) Yes, it’s attached to the side of the building. (0.0) On the rail. ♦ RGT :3 ; NDCG 40.78 (0.0) No it is not mounted on the wall. (0.0) Not sure. (1.0) It’s hanging. (0.0) Can’t tell. (0.0) I can’t tell. Figure 5: Top-5 ranked predictions (relevance in parentheses) of MCA-I-H and MCA-I-VGH after both sparse annotation and curriculum fine-tuning phase. RGT defines the rank of Ground Truth (GT) predicted by the model and NDCG of rankings for current question turn. NRel denotes number of candidate answer options (out of 100) with non-zero relevance (dense annotations). Here ♣and ♦represents predictions after sparse annotation and curriculum fine-tuning respectively. information that the model will receive because of noise in the dataset, since these examples were already seen during the spare annotation phase. Val annotations remains unaffected for fair comparison. As expected, this simple correction increase the ground truth related metrics such as R{1,5,10} drastically. 8195 Figure 6: Instructions for the AMT task. Figure 7: Qualification test consisting of 2 test images to allow the turkers to actually attempt the task 8196 Figure 8: Sample task. Percent (of Train) Top-20 ques is it sunny is it daytime is the photo in color can you see the sky are there any people what color is it any people is it day or night is it sunny out is this in color do you see any people what color are the walls can you see any is it a sunny day any trees how old is the man are there trees what is he wearing what color is the table what color is his hair Unique ques 0.00% 10.00% 20.00% 30.00% 40.00% Figure 9: Top-20 questions in the training set. Of all the questions in the training set, only 30% questions are unique while weather related questions (like sunny, daytime, rainy) top the charts. Percent (of Train) Top-20 ans no yes white black brown can't tell i can't tell 2 yes it is 1 0 not that i can see nope blue not really 3 red day i think so green Unique ans 0.00% 10.00% 20.00% 30.00% Figure 10: Top-20 answers in the training set. Yes/No forms a major chunk in top 20 answers. 8197 Percent Non-zero relevance no nope yes not that i can see i think so i can't tell can't tell not sure yes it is not really maybe 0 yes, it is i don't think so appears to be looks like it not visible white i don't see any it is 0.00% 5.00% 10.00% 15.00% Figure 11: Top-20 answers with non-zero relevance in the dense annotations of training set. Generic and yes/no semantically equivalent answers mostly constitute the list. Percentage is calculated out of total 3652 unique answers which have non-zero relevance in train dense annotations set. 
Figure 12: Number of pronouns in 10 questions of a dialog (distributions shown for all pronouns and for non-pleonastic pronouns only).
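The sketch below approximates the pronoun-counting heuristics of Appendix C with spaCy (assuming the small English model is installed); the constituency-parse check for ellipsis is omitted, and this is not the authors' exact implementation.

```python
# Rough approximation of the Appendix C heuristics: count pronouns across the
# questions of a dialog, treating "it" as pleonastic when it co-occurs with a
# weather identifier, and treating "other" as a pronoun.
import spacy

nlp = spacy.load("en_core_web_sm")
WEATHER_WORDS = {"rainy", "sunny", "daytime", "day", "night"}

def count_pronouns(questions):
    """Return (all pronouns, non-pleonastic pronouns) over a dialog's questions."""
    total, non_pleonastic = 0, 0
    for question in questions:
        doc = nlp(question.lower())
        tokens = {t.text for t in doc}
        for tok in doc:
            is_pronoun = tok.pos_ == "PRON" or tok.text == "other"
            if not is_pronoun:
                continue
            total += 1
            # Mark "it" as pleonastic when a weather word co-occurs, e.g. "is it sunny?"
            if tok.text == "it" and tokens & WEATHER_WORDS:
                continue
            non_pleonastic += 1
    return total, non_pleonastic

print(count_pronouns(["is it sunny?", "what color is his hair?"]))
```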
2020
728
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8198–8210 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8198 Mapping Natural Language Instructions to Mobile UI Action Sequences Yang Li Jiacong He Xin Zhou Yuan Zhang Jason Baldridge Google Research, Mountain View, CA, 94043 {liyang,zhouxin,zhangyua,jasonbaldridge}@google.com Abstract We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. For full task evaluation, we create PIXELHELP, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the language and action data by (a) annotating action phrase spans in HowTo instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces. We use a Transformer to extract action phrase tuples from long-range natural language instructions. A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions. Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PIXELHELP. 1 Introduction Language helps us work together to get things done. People instruct one another to coordinate joint efforts and accomplish tasks involving complex sequences of actions. This takes advantage of the abilities of different members of a speech community, e.g. a child asking a parent for a cup she cannot reach, or a visually impaired individual asking for assistance from a friend. Building computational agents able to help in such interactions is an important goal that requires true language grounding in environments where action matters. An important area of language grounding involves tasks like completion of multi-step actions in a graphical user interface conditioned on language instructions (Branavan et al., 2009, 2010; Liu et al., 2018; Gur et al., 2019). These domains matter for accessibility, where language interfaces could help visually impaired individuals perform tasks with open the app drawer. navigate to settings > network & internet > Wifi. click add network, and then enter starbucks for SSID. Action Phrase Extraction Model Screen Operation Object Argument Screen_1 CLICK OBJ_2 Screen_2 CLICK OBJ_6 Screen_3 CLICK OBJ_5 ... Screen_6 INPUT OBJ_9 [Starbucks] Instructions Operation_Desc Object_Desc Argument_Desc [open] [app drawer] [navigate to] [settings] [navigate to] [network & internet] [navigate to] [wifi] [click] [add network] [enter] [ssid] [starbucks] Executable actions based on the screen at each step Action Phrase Tuples Grounding Model Transition to next screen … Mobile User Interface at each step Figure 1: Our model extracts the phrase tuple that describe each action, including its operation, object and additional arguments, and grounds these tuples as executable action sequences in the UI. interfaces that are predicated on sight. This also matters for situational impairment (Sarsenbayeva, 2018) when one cannot access a device easily while encumbered by other factors, such as cooking. We focus on a new domain of task automation in which natural language instructions must be interpreted as a sequence of actions on a mobile touchscreen UI. 
Existing web search is quite capable of retrieving multi-step natural language instructions for user queries, such as “How to turn on flight mode on Android.” Crucially, the missing piece for fulfilling the task automatically is to map the returned instruction to a sequence of actions that can be automatically executed on the device with 8199 little user intervention; this our goal in this paper. This task automation scenario does not require a user to maneuver through UI details, which is useful for average users and is especially valuable for visually or situationally impaired users. The ability to execute an instruction can also be useful for other scenarios such as automatically examining the quality of an instruction. Our approach (Figure 1) decomposes the problem into an action phrase-extraction step and a grounding step. The former extracts operation, object and argument descriptions from multi-step instructions; for this, we use Transformers (Vaswani et al., 2017) and test three span representations. The latter matches extracted operation and object descriptions with a UI object on a screen; for this, we use a Transformer that contextually represents UI objects and grounds object descriptions to them. We construct three new datasets 1. To assess full task performance on naturally occurring instructions, we create a dataset of 187 multi-step English instructions for operating Pixel Phones and produce their corresponding action-screen sequences using annotators. For action phrase extraction training and evaluation, we obtain English How-To instructions from the web and annotate action description spans. A Transformer with spans represented by sum pooling (Li et al., 2019) obtains 85.56% accuracy for predicting span sequences that completely match the ground truth. To train the grounding model, we synthetically generate 295k single-step commands to UI actions, covering 178K different UI objects across 25K mobile UI screens. Our phrase extractor and grounding model together obtain 89.21% partial and 70.59% complete accuracy for matching ground-truth action sequences on this challenging task. We also evaluate alternative methods and representations of objects and spans and present qualitative analyses to provide insights into the problem and models. 2 Problem Formulation Given an instruction of a multi-step task, I = t1:n = (t1, t2, ..., tn), where ti is the ith token in instruction I, we want to generate a sequence of automatically executable actions, a1:m, over a sequence of user interface screens S, with initial screen s1 1Our data pipeline is available at https://github. com / google-research / google-research / tree/master/seq2act. and screen transition function sj=τ(aj−1, sj−1): p(a1:m|s1, τ, t1:n) = m Y j=1 p(aj|a<j, s1, τ, t1:n) (1) An action aj = [rj, oj, uj] consists of an operation rj (e.g. Tap or Text), the UI object oj that rj is performed on (e.g., a button or an icon), and an additional argument uj needed for oj (e.g. the message entered in the chat box for Text or null for operations such as Tap). Starting from s1, executing a sequence of actions a<j arrives at screen sj that represents the screen at the jth step: sj = τ(aj−1, τ(...τ(a1, s1))): p(a1:m|s1, τ, t1:n) = m Y j=1 p(aj|sj, t1:n) (2) Each screen sj = [cj,1:|sj|, λj] contains a set of UI objects and their structural relationships. cj,1:|sj| = {cj,k | 1 ≤k ≤|sj|}, where |sj| is the number of objects in sj, from which oj is chosen. λj defines the structural relationship between the objects. 
This is often a tree structure such as the View hierarchy for an Android interface2 (similar to a DOM tree for web pages). An instruction I describes (possibly multiple) actions. Let ¯aj denote the phrases in I that describes action aj. ¯aj = [¯rj, ¯oj, ¯uj] represents a tuple of descriptions with each corresponding to a span—a subsequence of tokens—in I. Accordingly, ¯a1:m represents the description tuple sequence that we refer to as ¯a for brevity. We also define ¯A as all possible description tuple sequences of I, thus ¯a ∈¯A. p(aj|sj, t1:n) = X ¯ A p(aj|¯a, sj, t1:n)p(¯a|sj, t1:n) (3) Because aj is independent of the rest of the instruction given its current screen sj and description ¯aj, and ¯a is only related to the instruction t1:n, we can simplify (3) as (4). p(aj|sj, t1:n) = X ¯ A p(aj|¯aj, sj)p(¯a|t1:n) (4) 2https : / / developer . android . com / reference/android/view/View.html 8200 We define ˆa as the most likely description of actions for t1:n. ˆa = arg max ¯a p(¯a|t1:n) = arg max ¯a1:m m Y j=1 p(¯aj|¯a<j, t1:n) (5) This defines the action phrase-extraction model, which is then used by the grounding model: p(aj|sj, t1:n) ≈p(aj|ˆaj, sj)p(ˆaj|ˆa<j, t1:n) (6) p(a1:m|t1:n, S) ≈ m Y j=1 p(aj|ˆaj, sj)p(ˆaj|ˆa<j, t1:n) (7) p(ˆaj|ˆa<j, t1:n) identifies the description tuples for each action. p(aj|ˆaj, sj) grounds each description to an executable action given the screen. 3 Data The ideal dataset would have natural instructions that have been executed by people using the UI. Such data can be collected by having annotators perform tasks according to instructions on a mobile platform, but this is difficult to scale. It requires significant investment to instrument: different versions of apps have different presentation and behaviors, and apps must be installed and configured for each task. Due to this, we create a small dataset of this form, PIXELHELP, for full task evaluation. For model training at scale, we create two other datasets: ANDROIDHOWTO for action phrase extraction and RICOSCA for grounding. Our datasets are targeted for English. We hope that starting with a high-resource language will pave the way to creating similar capabilities for other languages. 3.1 PIXELHELP Dataset Pixel Phone Help pages3 provide instructions for performing common tasks on Google Pixel phones such as switch Wi-Fi settings (Fig. 2) or check emails. Help pages can contain multiple tasks, with each task consisting of a sequence of steps. We pulled instructions from the help pages and kept ones that can be automatically executed. Instructions that requires additional user input such as Tap the app you want to uninstall are discarded. 3https://support.google.com/pixelphone Figure 2: PIXELHELP example: Open your device’s Settings app. Tap Network & internet. Click Wi-Fi. Turn on Wi-Fi.. The instruction is paired with actions, each of which is shown as a red dot on a specific screen. Also, instructions that involve actions on a physical button such as Press the Power button for a few seconds are excluded because these events cannot be executed on mobile platform emulators. We instrumented a logging mechanism on a Pixel Phone emulator and had human annotators perform each task on the emulator by following the full instruction. The logger records every user action, including the type of touch events that are triggered, each object being manipulated, and screen information such as view hierarchies. 
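As an illustration only (not the released data schema), a single logged item can be organized as in the sketch below, with hypothetical field names that follow the formulation aj = [rj, oj, uj].

```python
# Hypothetical schema for one logged item: an instruction paired with, for each
# step, the screen's view hierarchy and the action a_j = [operation, object, argument].
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UIObject:
    name: str                    # e.g. "Wi-Fi"
    obj_type: str                # e.g. "Switch", "Button"
    bbox: tuple                  # (top, left, bottom, right) screen coordinates
    children: List["UIObject"] = field(default_factory=list)  # view hierarchy subtree

@dataclass
class Action:
    operation: str               # "CLICK", "TEXT" or "SWIPE"
    target_index: int            # index of the target object on the screen
    argument: Optional[str] = None  # e.g. the text typed for a TEXT operation

@dataclass
class Episode:
    instruction: str             # the full multi-step natural language instruction
    screens: List[UIObject]      # root of each step's view hierarchy
    actions: List[Action]        # the action performed on each screen

example = Episode(
    instruction="Open your device's Settings app. Tap Network & internet. "
                "Click Wi-Fi. Turn on Wi-Fi.",
    screens=[],                        # view hierarchies recorded by the logger
    actions=[Action("CLICK", target_index=2)],
)
```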
Each item thus includes the instruction input, t1:n, the screen for each step of task, s1:m, and the target action performed on each screen, a1:m. In total, PIXELHELP includes 187 multi-step instructions of 4 task categories: 88 general tasks, such as configuring accounts, 38 Gmail tasks, 31 Chrome tasks, and 30 Photos related tasks. The number of steps ranges from two to eight, with a median of four. Because it has both natural instructions and grounded actions, we reserve PIXELHELP for evaluating full task performance. 3.2 ANDROIDHOWTO Dataset No datasets exist that support learning the action phrase extraction model, p(ˆaj|ˆa<j, t1:n), for mobile UIs. To address this, we extracted English instructions for operating Android devices by processing web pages to identify candidate instructions for how-to questions such as how to change the input method for Android. A web crawling service scrapes instruction-like content from various websites. We then filter the web contents using both heuristics and manual screening by annotators. Annotators identified phrases in each instruction that describe executable actions. They were given a tutorial on the task and were instructed to skip instructions that are difficult to understand or label. For each component in an action description, they 8201 select the span of words that describes the component using a web annotation interface (details are provided in the appendix). The interface records the start and end positions of each marked span. Each instruction was labeled by three annotators: three annotators agreed on 31% of full instructions and at least two agreed on 84%. For the consistency at the tuple level, the agreement across all the annotators is 83.6% for operation phrases, 72.07% for object phrases, and 83.43% for input phrases. The discrepancies are usually small, e.g., a description marked as your Gmail address or Gmail address. The final dataset includes 32,436 data points from 9,893 unique How-To instructions and split into training (8K), validation (1K) and test (900). All test examples have perfect agreement across all three annotators for the entire sequence. In total, there are 190K operation spans, 172K object spans, and 321 input spans labeled. The lengths of the instructions range from 19 to 85 tokens, with median of 59. They describe a sequence of actions from one to 19 steps, with a median of 5. 3.3 RICOSCA Dataset Training the grounding model, p(aj|ˆaj, sj) involves pairing action tuples aj along screens sj with action description ˆaj. It is very difficult to collect such data at scale. To get past the bottleneck, we exploit two properties of the task to generate a synthetic command-action dataset, RICOSCA. First, we have precise structured and visual knowledge of the UI layout, so we can spatially relate UI elements to each other and the overall screen. Second, a grammar grounded in the UI can cover many of the commands and kinds of reference needed for the problem. This does not capture all manners of interacting conversationally with a UI, but it proves effective for training the grounding model. Rico is a public UI corpus with 72K Android UI screens mined from 9.7K Android apps (Deka et al., 2017). Each screen in Rico comes with a screenshot image and a view hierarchy of a collection of UI objects. Each individual object, cj,k, has a set of properties, including its name (often an English phrase such as Send), type (e.g., Button, Image or Checkbox), and bounding box position on the screen. 
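A hypothetical sketch of this kind of template-based synthesis is shown below; the verbs and referring expressions are simplified stand-ins for the Name-Type and location-based strategies described next.

```python
# Hypothetical single-step command synthesis from an object's properties:
# each operation maps to several verbs, and the object is referred to by a
# simplified Name-Type expression.
OPERATION_PHRASES = {"CLICK": ["tap", "click", "press"], "TEXT": ["enter", "type"]}

def synthesize_commands(op, name, obj_type, argument=None):
    commands = []
    for verb in OPERATION_PHRASES[op]:
        referent = f"the {name} {obj_type}".strip()   # Name-Type reference
        if op == "TEXT" and argument is not None:
            commands.append(f"{verb} {argument} into {referent}")
        else:
            commands.append(f"{verb} {referent}")
    return commands

print(synthesize_commands("CLICK", "OK", "button"))
# e.g. ['tap the OK button', 'click the OK button', 'press the OK button']
```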
We manually removed screens whose view hierarchies do not match their screenshots by asking annotators to visually verify whether the bounding boxes of view hierarchy leaves match each UI object on the corresponding screenshot image. This filtering results in 25K unique screens. For each screen, we randomly select UI elements as target objects and synthesize commands for operating them. We generate multiple commands to capture different expressions describing the operation ˆrj and the target object ˆoj. For example, the Tap operation can be referred to as tap, click, or press. The template for referring to a target object has slots Name, Type, and Location, which are instantiated using the following strategies: • Name-Type: the target’s name and/or type (the OK button or OK). • Absolute-Location: the target’s screen location (the menu at the top right corner). • Relative-Location: the target’s relative location to other objects (the icon to the right of Send). Because all commands are synthesized, the span that describes each part of an action, ˆaj with respect to t1:n, is known. Meanwhile, aj and sj, the actual action and the associated screen, are present because the constituents of the action are synthesized. In total, RICOSCA contains 295,476 single-step synthetic commands for operating 177,962 different target objects across 25,677 Android screens. 4 Model Architectures Equation 7 has two parts. p(ˆaj|ˆa<j, t1:n) finds the best phrase tuple that describes the action at the jth step given the instruction token sequence. p(aj|ˆaj, sj) computes the probability of an executable action aj given the best description of the action, ˆaj, and the screen sj for the jth step. 4.1 Phrase Tuple Extraction Model A common choice for modeling the conditional probability p(¯aj|¯a<j, t1:n) (see Equation 5) are encoder-decoders such as LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017). The output of our model corresponds to positions in the input sequence, so our architecture is closely related to Pointer Networks (Vinyals et al., 2015). Figure 3 depicts our model. An encoder g computes a latent representation h1:n∈Rn×|h| of the tokens from their embeddings: h1:n=g(e(t1:n)). A decoder f then generates the hidden state qj=f(q<j, ¯a<j, h1:n) which is used to compute a query vector that locates each phrase of a tuple (¯rj, ¯oj, ¯uj) at each step. ¯aj=[¯rj, ¯oj, ¯uj] and they 8202 open the app Transformer Encoder EOS … drawer . navigate to settings … Span Encoding Instruction Encoder START Transformer Decoder … … Span Query Decoder Hidden Span Input Decoder ⌀ ⌀ ⌀ ⌀ ⌀ ⌀ … r j q o j q u j q b:d h it j q Figure 3: The Phrase Tuple Extraction model encodes the instruction’s token sequence and then outputs a tuple sequence by querying into all possible spans of the encoded sequence. Each tuple contains the span positions of three phrases in the instruction that describe the action’s operation, object and optional arguments, respectively, at each step. ∅indicates the phrase is missing in the instruction and is represented by a special span encoding. are assumed conditionally independent given previously extracted phrase tuples and the instruction, so p(¯aj|¯a<j, t1:n)= Q ¯y∈{¯r,¯o,¯u} p(¯yj|¯a<j, t1:n). Note that ¯yj ∈{¯rj, ¯oj, ¯uj} denotes a specific span for y ∈{r, o, u} in the action tuple at step j. 
We therefore rewrite ¯yj as yb:d j to explicitly indicate that it corresponds to the span for r, o or u, starting at the bth position and ending at the dth position in the instruction, 1≤b<d≤n. We now parameterize the conditional probability as: p(yb:d j |¯a<j, t1:n) = softmax(α(qy j , hb:d)) y ∈{r, o, u} (8) As shown in Figure 3, qy j indicates task-specific query vectors for y∈{r, o, u}. They are computed as qy j =φ(qj, θy)Wy, a multi-layer perceptron followed by a linear transformation. θy and Wy are trainable parameters. We use separate parameters for each of r, o and u. Wy ∈R|φy|×|h| where |φy| is the output dimension of the multi-layer perceptron. The alignment function α(·) scores how a query vector qy j matches a span whose vector representation hb:d is computed from encodings hb:d. Span Representation. There are a quadratic number of possible spans given a token sequence (Lee et al., 2017), so it is important to design a fixed-length representation hb:d of a variablelength token span that can be quickly computed. Beginning-Inside-Outside (BIO) (Ramshaw and Marcus, 1995)–commonly used to indicate spans in tasks such as named entity recognition–marks whether each token is beginning, inside, or outside a span. However, BIO is not ideal for our task because subsequences for describing different actions can overlap, e.g., in click X and Y, click participates in both actions click X and click Y. In our experiments we consider several recent, more flexible span representations (Lee et al., 2016, 2017; Li et al., 2019) and show their impact in Section 5.2. With fixed-length span representations, we can use common alignment techniques in neural networks (Bahdanau et al., 2014; Luong et al., 2015). We use the dot product between the query vector and the span representation: α(qy j , hb:d)=qy j · hb:d At each step of decoding, we feed the previously decoded phrase tuples, ¯a<j into the decoder. We can use the concatenation of the vector representations of the three elements in a phrase tuple or the sum their vector representations as the input for each decoding step. The entire phrase tuple extraction model is trained by minimizing the softmax cross entropy loss between the predicted and ground-truth spans of a sequence of phrase tuples. 4.2 Grounding Model Having computed the sequence of tuples that best describe each action, we connect them to executable actions based on the screen at each step with our grounding model (Fig. 4). In step-bystep instructions, each part of an action is often clearly stated. Thus, we assume the probabilities of the operation rj, object oj, and argument uj are 8203 open UI Objects app drawer Transformer Encoder … … object [ obj4 ] operation [ CLICK ] obj1 obj2obj3 obj4obj5 obj45 ⌀ argument [ NONE ] navigate to settings Transformer Encoder … object [ obj3 ] operation [ CLICK ] ⌀ argument [ NONE ] Object Embedding Screen Encoder Object Encoding Grounded Actions … EOS object [ NONE ] operation [ STOP ] ⌀ argument [ NONE ] ⌀ Initial Screen Transformer Encoder … User Interface Screen … obj1 obj2obj3 obj4obj5 obj9 Screen 2 … obj1 obj2obj3 obj4obj5 obj20 Final Screen Extracted Phrase Tuples Figure 4: The Grounding model grounds each phrase tuple extracted by the Phrase Extraction model as an operation type, a screen-specific object ID, and an argument if present, based on a contextual representation of UI objects for the given screen. A grounded action tuple can be automatically executed. independent given their description and the screen. 
p(aj|ˆaj, sj) = p([rj, oj, uj]|[ˆrj, ˆoj, ˆuj], sj) = p(rj|ˆrj, sj)p(oj|ˆoj, sj)p(uj|ˆuj, sj) = p(rj|ˆrj)p(oj|ˆoj, sj) (9) We simplify with two assumptions: (1) an operation is often fully described by its instruction without relying on the screen information and (2) in mobile interaction tasks, an argument is only present for the Text operation, so uj=ˆuj. We parameterize p(rj|ˆrj) as a feedforward neural network: p(rj|ˆrj) = softmax(φ(ˆr ′ j, θr)Wr) (10) φ(·) is a multi-layer perceptron with trainable parameters θr. W r∈R|φr|×|r| is also trainable, where |φr| is the output dimension of the φ(·, θr) and |r| is the vocabulary size of the operations. φ(·) takes the sum of the embedding vectors of each token in the operation description ˆrj as the input: ˆr ′ j= Pd k=b e(tk) where b and d are the start and end positions of ˆrj in the instruction. Determining oj is to select a UI object from a variable-number of objects on the screen, cj,k ∈sj where 1≤k≤|sj|, based on the given object description, ˆoj. We parameterize the conditional probability as a deep neural network with a softmax output layer taking logits from an alignment function: p(oj|ˆoj, sj) = p(oj = cj,k|ˆoj, cj,1:|sj|, λj) = softmax(α(ˆo ′ j, c ′ j,k)) (11) The alignment function α(·) scores how the object description vector ˆo ′ j matches the latent representation of each UI object, c ′ j,k. This can be as simple as the dot product of the two vectors. The latent representation ˆo ′ j is acquired with a multi-layer perceptron followed by a linear projection: ˆo ′ j = φ( d X k=b e(tk), θo)Wo (12) b and d are the start and end index of the object description ˆoj. θo and Wo are trainable parameters with Wo∈R|φo|×|o|, where |φo| is the output dimension of φ(·, θo) and |o| is the dimension of the latent representation of the object description. Contextual Representation of UI Objects. To compute latent representations of each candidate object, c ′ j,k, we use both the object’s properties and its context, i.e., the structural relationship with other objects on the screen. There are different ways for encoding a variable-sized collection of items that are structurally related to each other, including Graph Convolutional Networks (GCN) (Niepert et al., 2016) and Transformers (Vaswani et al., 2017). GCNs use an adjacency matrix predetermined by the UI structure to regulate how the latent representation of an object should be affected by its neighbors. Transformers allow each object to carry its own positional encoding, and the relationship between objects can be learned instead. The input to the Transformer encoder is a combination of the content embedding and the positional encoding of each object. The content properties of an object include its name and type. We compute the content embedding of by concatenating the name embedding, which is the average embedding of the bag of tokens in the object name, and the 8204 type embedding. The positional properties of an object include both its spatial position and structural position. The spatial positions include the top, left, right and bottom screen coordinates of the object. We treat each of these coordinates as a discrete value and represent it via an embedding. Such a feature representation for coordinates was used in ImageTransformer to represent pixel positions in an image (Parmar et al., 2018). The spatial embedding of the object is the sum of these four coordinate embeddings. 
To encode structural information, we use the index positions of the object in the preorder and the postorder traversal of the view hierarchy tree, and represent these index positions as embeddings in a similar way as representing coordinates. The content embedding is then summed with positional encodings to form the embedding of each object. We then feed these object embeddings into a Transformer encoder model to compute the latent representation of each object, c ′ j,k. The grounding model is trained by minimizing the cross entropy loss between the predicted and ground-truth object and the loss between the predicted and ground-truth operation. 5 Experiments Our goal is to develop models and datasets to map multi-step instructions into automatically executable actions given the screen information. As such, we use PIXELHELP’s paired natural instructions and action-screen sequences solely for testing. In addition, we investigate the model quality on phrase tuple extraction tasks, which is a crucial building block for the overall grounding quality4. 5.1 Datasets and Metrics We use two metrics that measure how a predicted tuple sequence matches the ground-truth sequence. • Complete Match: The score is 1 if two sequences have the same length and have the identical tuple [ˆrj, ˆoj, ˆuj] at each step, otherwise 0. • Partial Match: The number of steps of the predicted sequence that match the ground-truth sequence divided by the length of the groundtruth sequence (ranging between 0 and 1). We train and validate using ANDROIDHOWTO and RICOSCA, and evaluate on PIXELHELP. During training, single-step synthetic command-action 4Our model code is released at https : / / github . com / google-research / google-research / tree/master/seq2act. Span Rep. hb:d Partial Complete SumPooling Pd k=b hk 92.80 85.56 StartEnd [hb; hd] 91.94 84.56 [hb; hd, ˆeb:d, φ(d −b)] 91.11 84.33 Table 1: ANDROIDHOWTO phrase tuple extraction test results using different span representations hb:d in (8). ˆeb:d= Pd k=b w(hk)e(tk), where w(·) is a learned weight function for each token embedding (Lee et al., 2017). See the pseudocode for fast computation of these in the appendix. examples are dynamically stitched to form sequence examples with a certain length distribution. To evaluate the full task, we use Complete and Partial Match on grounded action sequences a1:m where aj=[rj, oj, uj]. The token vocabulary size is 59K, which is compiled from both the instruction corpus and the UI name corpus. There are 15 UI types, including 14 common UI object types, and a type to catch all less common ones. The output vocabulary for operations include CLICK, TEXT, SWIPE and EOS. 5.2 Model Configurations and Results Tuple Extraction. For the action-tuple extraction task, we use a 6-layer Transformer for both the encoder and the decoder. We evaluate three different span representations. Area Attention (Li et al., 2019) provides a parameter-free representation of each possible span (one-dimensional area), by summing up the encoding of each token in the subsequence: hb:d = Pd k=b hk. The representation of each span can be computed in constant time invariant to the length of the span, using a summed area table. Previous work concatenated the encoding of the start and end tokens as the span representation, hb:d = [hb; hd] (Lee et al., 2016) and a generalized version of it (Lee et al., 2017). We evaluated these three options and implemented the representation in Lee et al. 
(2017) using a summed area table similar to the approach in area attention for fast computation. For hyperparameter tuning and training details, refer to the appendix. Table 1 gives results on ANDROIDHOWTO’s test set. All the span representations perform well. Encodings of each token from a Transformer already capture sufficient information about the entire sequence, so even only using the start and end encodings yields strong results. Nonetheless, area attention provides a small boost over the others. As a new dataset, there is also considerable headroom remaining, particularly for complete match. 8205 Screen Encoder Partial Complete Heuristic 62.44 42.25 Filter-1 GCN 76.44 52.41 Distance GCN 82.50 59.36 Transformer 89.21 70.59 Table 2: PIXELHELP grounding accuracy. The differences are statistically significant based on t-test over 5 runs (p < 0.05). Grounding. For the grounding task, we compare Transformer-based screen encoder for generating object representations hb:d with two baseline methods based on graph convolutional networks. The Heuristic baseline matches extracted phrases against object names directly using BLEU scores. Filter-1 GCN performs graph convolution without using adjacent nodes (objects), so the representation of each object is computed only based on its own properties. Distance GCN uses the distance between objects in the view hierarchy, i.e., the number of edges to traverse from one object to another following the tree structure. This contrasts with the traditional GCN definition based on adjacency, but is needed because UI objects are often leaves in the tree; as such, they are not adjacent to each other structurally but instead are connected through nonterminal (container) nodes. Both Filter-1 GCN and Distance GCN use the same number of parameters (see the appendix for details). To train the grounding model, we first train the Tuple Extraction sub-model on ANDROIDHOWTO and RICOSCA. For the latter, only language related features (commands and tuple positions in the command) are used in this stage, so screen and action features are not involved. We then freeze the Tuple Extraction sub-model and train the grounding sub-model on RICOSCA using both the command and screen-action related features. The screen token embeddings of the grounding sub-model share weights with the Tuple Extraction sub-model. Table 2 gives full task performance on PIXELHELP. The Transformer screen encoder achieves the best result with 70.59% accuracy on Complete Match and 89.21% on Partial Match, which sets a strong baseline result for this new dataset while leaving considerable headroom. The GCN-based methods perform poorly, which shows the importance of contextual encodings of the information from other UI objects on the screen. Distance GCN does attempt to capture context for UI objects that are structurally close; however, we suspect that the distance information that is derived from the view hierarchy tree is noisy because UI developers can construct the structure differently for the same UI.5 As a result, the strong bias introduced by the structure distance does not always help. Nevertheless, these models still outperformed the heuristic baseline that achieved 62.44% for partial match and 42.25% for complete match. 5.3 Analysis To explore how the model grounds an instruction on a screen, we analyze the relationship between words in the instruction language that refer to specific locations on the screen, and actual positions on the UI screen. 
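At its core, this analysis is a cosine-similarity comparison between two sets of embeddings, as sketched below; the array names and grid shape are hypothetical placeholders for weights pulled from the trained model.

```python
# Cosine similarity between a location word's embedding and the embedding of
# each screen position; brighter cells in the resulting grid correspond to
# positions more correlated with the word.
import numpy as np

def cosine_heatmap(word_vec, position_vecs):
    """word_vec: [dim]; position_vecs: [rows, cols, dim] -> [rows, cols] similarities."""
    w = word_vec / (np.linalg.norm(word_vec) + 1e-8)
    p = position_vecs / (np.linalg.norm(position_vecs, axis=-1, keepdims=True) + 1e-8)
    return p @ w

word_vec = np.random.rand(128)              # e.g. the embedding of the word "top"
position_vecs = np.random.rand(8, 6, 128)   # e.g. an 8x6 grid of position embeddings
heatmap = cosine_heatmap(word_vec, position_vecs)
print(heatmap.shape)  # (8, 6)
```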
We first extract the embedding weights from the trained phrase extraction model for words such as top, bottom, left and right. These words occur in object descriptions such as the check box at the top of the screen. We also extract the embedding weights of object screen positions, which are used to create object positional encoding. We then calculate the correlation between word embedding and screen position embedding using cosine similarity. Figure 5 visualizes the correlation as a heatmap, where brighter colors indicate higher correlation. The word top is strongly correlated with the top of the screen, but the trend for other location words is less clear. While left is strongly correlated with the left side of the screen, other regions on the screen also show high correlation. This is likely because left and right are not only used for referring to absolute locations on the screen, but also for relative spatial relationships, such as the icon to the left of the button. For bottom, the strongest correlation does not occur at the very bottom of the screen because many UI objects in our dataset do not fall in that region. The region is often reserved for system actions and the on-screen keyboard, which are not covered in our dataset. The phrase extraction model passes phrase tuples to the grounding model. When phrase extraction is incorrect, it can be difficult for the grounding model to predict a correct action. One way to mitigate such cascading errors is using the hidden state of the phrase decoding model at each step, qj. Intuitively, qj is computed with the access to the encoding of each token in the instruction via the Transformer encoder-decoder attention, which can 5While it is possible to directly use screen visual data for grounding, detecting UI objects from raw pixels is nontrivial. It would be ideal to use both structural and visual data. 8206 Figure 5: Correlation between location-related words in instructions and object screen position embedding. potentially be a more robust span representation. However, in our early exploration, we found that grounding with qj performs stunningly well for grounding RICOSCA validation examples, but performs poorly on PIXELHELP. The learned hidden state likely captures characteristics in the synthetic instructions and action sequences that do not manifest in PIXELHELP. As such, using the hidden state to ground remains a challenge when learning from unpaired instruction-action data. The phrase model failed to extract correct steps for 14 tasks in PIXELHELP. In particular, it resulted in extra steps for 11 tasks and extracted incorrect steps for 3 tasks, but did not skip steps for any tasks. These errors could be caused by different language styles manifested by the three datasets. Synthesized commands in RICOSCA tend to be brief. Instructions in ANDROIDHOWTO seem to give more contextual description and involve diverse language styles, while PIXELHELP often has a more consistent language style and gives concise description for each step. 6 Related Work Previous work (Branavan et al., 2009, 2010; Liu et al., 2018; Gur et al., 2019) investigated approaches for grounding natural language on desktop or web interfaces. Manuvinakurike et al. (2018) contributed a dataset for mapping natural language instructions to actionable image editing commands in Adobe Photoshop. Our work focuses on a new domain of grounding natural language instructions into executable actions on mobile user interfaces. 
This requires addressing modeling challenges due to the lack of paired natural language and action data, which we supply by harvesting rich instruction data from the web and synthesizing UI commands based on a large scale Android corpus. Our work is related to semantic parsing, particularly efforts for generating executable outputs such as SQL queries (Suhr et al., 2018). It is also broadly related to language grounding in the human-robot interaction literature where human dialog results in robot actions (Khayrallah et al., 2015). Our task setting is closely related to work on language-conditioned navigation, where an agent executes an instruction as a sequence of movements (Chen and Mooney, 2011; Mei et al., 2016; Misra et al., 2017; Anderson et al., 2018; Chen et al., 2019). Operating user interfaces is similar to navigating the physical world in many ways. A mobile platform consists of millions of apps that each is implemented by different developers independently. Though platforms such as Android strive to achieve interoperability (e.g., using Intent or AIDL mechanisms), apps are more often than not built by convention and do not expose programmatic ways for communication. As such, each app is opaque to the outside world and the only way to manipulate it is through its GUIs. These hurdles while working with a vast array of existing apps are like physical obstacles that cannot be ignored and must be negotiated contextually in their given environment. 7 Conclusion Our work provides an important first step on the challenging problem of grounding natural language instructions to mobile UI actions. Our decomposition of the problem means that progress on either can improve full task performance. For example, action span extraction is related to both semantic role labeling (He et al., 2018) and extraction of multiple facts from text (Jiang et al., 2019) and could benefit from innovations in span identification and multitask learning. Reinforcement learning that has been applied in previous grounding work may help improve out-of-sample prediction for grounding in UIs and improve direct grounding from hidden state representations. Lastly, our work provides a technical foundation for investigating user experiences in language-based human computer interaction. Acknowledgements We would like to thank our anonymous reviewers for their insightful comments that improved the paper. Many thanks to the Google Data Compute team, especially Ashwin Kakarla and Muqthar Mohammad for their help with the annotations, and Song Wang, Justin Cui and Christina Ou for their help on early data preprocessing. 8207 References Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S¨underhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. S. R. K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1, ACL ’09, pages 82– 90, Stroudsburg, PA, USA. Association for Computational Linguistics. S.R.K. 
Branavan, Luke Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1268– 1277, Uppsala, Sweden. Association for Computational Linguistics. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI’11, pages 859–865. AAAI Press. Howard Chen, Alane Suhr, Dipendra Misra, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Conference on Computer Vision and Pattern Recognition. Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017. Rico: A mobile app dataset for building data-driven design applications. In Proceedings of the 30th Annual Symposium on User Interface Software and Technology, UIST ’17. Izzeddin Gur, Ulrich Rueckert, Aleksandra Faust, and Dilek Hakkani-Tur. 2019. Learning to navigate the web. In International Conference on Learning Representations. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369, Melbourne, Australia. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, and Meng Jiang. 2019. Multi-input multioutput sequence labeling for joint extraction of fact and condition tuples from scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 302–312, Hong Kong, China. Association for Computational Linguistics. Huda Khayrallah, Sean Trott, and Jerome Feldman. 2015. Natural language for human robot interaction. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. Kenton Lee, Tom Kwiatkowski, Ankur P. Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. CoRR, abs/1611.01436. Yang Li, Lukasz Kaiser, Samy Bengio, and Si Si. 2019. Area attention. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3846–3855, Long Beach, California, USA. PMLR. E. Z. Liu, K. Guu, P. Pasupat, T. Shi, and P. Liang. 2018. Reinforcement learning on web interfaces using workflow-guided exploration. In International Conference on Learning Representations (ICLR). Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Ramesh Manuvinakurike, Jacqueline Brixey, Trung Bui, Walter Chang, Doo Soon Kim, Ron Artstein, and Kallirroi Georgila. 2018. 
Edit me: A corpus and a framework for understanding natural language image editing. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 2772–2778. AAAI Press. 8208 Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1004–1015. Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. 2016. Learning convolutional neural networks for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2014–2023, New York, New York, USA. PMLR. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. 2018. Image transformer. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4055–4064, Stockholmsm¨assan, Stockholm Sweden. PMLR. Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora. Zhanna Sarsenbayeva. 2018. Situational impairments during mobile interaction. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, pages 498–503. Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to executable formal queries. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2238–2249, New Orleans, Louisiana. Association for Computational Linguistics. Richard Szeliski. 2010. Computer Vision: Algorithms and Applications, 1st edition. Springer-Verlag, Berlin, Heidelberg. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. A Data We present the additional details and analysis of the datasets. To label action phrase spans for the ANDROIDHOWTO dataset, 21 annotators (9 males and 12 females, 23 to 28 years old) were employed as contractors. They were paid hourly wages that are competitive for their locale. They have standard rights as contractors. They were native English speakers, and rated themselves 4 out of 5 regarding their familiarity with Android (1: not familiar and 5: very familiar). Each annotator is presented a web interface, depicted in Figure 6. The instruction to be labeled is shown on the left of the interface. From the instruction, the annotator is asked to extract a sequence of action phrase tuples on the right, providing one tuple per row. Before a labeling session, an annotator is asked to go through the annotation guidelines, which are also accessible throughout the session. 
To label each tuple, the annotator first indicates the type of operation (Action Type) the step is about by selecting from Click, Swipe, Input and Others (the catch-all category). The annotator then uses a mouse to select the phrase in the instruction for “Action Verb” (i.e., operation description) and for “object description”. A selected phrase span is automatically shown in the corresponding box and the span positions in the instruction are recorded. If the step involves an additional argument, the annotator clicks on “Content Input” and then marks a phrase span in the instruction (see the second row). Once finished with creating a tuple, the annotator moves onto the next tuple by clicking the “+” button on the far right of the interface along the row, which inserts an empty tuple after the row. The annotator can delete a tuple (row) by clicking the “-” button on the row. Finally, the annotator clicks on the “Submit” button at the bottom of the screen to finish a session. The lengths of the instructions range from 19 to 85 tokens, with median of 59, and they describe a sequence of actions from 1 to 19 steps, with a median of 5. Although the description for operations tend to be short (most of them are one to two words), the description for objects can vary dramatically in length, ranging from 1 to 19. The large range of description span lengths requires an efficient algorithm to compute its representation. B Computing Span Representations We evaluated three types of span representations. Here we give details on how each representation is computed. For sum pooling, we use the implementation of area attention (Li et al., 2019) that allows constant time computation of the representation of each span by using summed area tables. The TensorFlow implementation of the representation 8209 Figure 6: The web interface for annotators to label action phrase spans in an ANDROIDHOWTO instruction. is available on Github6. Algorithm 1: Compute the Start-End Concat span representation for all spans in parallel. Input: A tensor H in shape of [L, D] that represents a sequence of vector with length L and depth D. Output: representation of each span, U. 1 Hyperparameter: max span width M. 2 Init start & end tensor: S ←H, E ←H; 3 for m = 1, · · · , M −1 do 4 S ′ ←H[: −m, :] ; 5 E ′ ←H[m :, :] ; 6 S ←[S S ′], concat on the 1st dim; 7 E ←[E E ′], concat on the 1st dim; 8 U ←[S E], concat on the last dim; 9 return U. Algorithm 1 gives the recipe for Start-End Concat (Lee et al., 2016) using Tensor operations. The advanced form (Lee et al., 2017) takes two other features: the weighted sum over all the token embedding vectors within each span and a span length feature. The span length feature is trivial to compute in a constant time. However, computing the weighted sum of each span can be time consuming if not carefully designed. We decompose the computation as a set of summation-based operations (see Algorithm 2 and 3) so as to use summed area tables (Szeliski, 2010), which was been used in Li et al. (2019) for constant time computation of span representations. These pseudocode definitions are designed based on Tensor operations, which are highly optimized and fast. 6https : / / github . com / tensorflow / tensor2tensor/blob/master/tensor2tensor/ layers/area_attention.py Algorithm 2: Compute the weighted embedding sum of each span in parallel, using ComputeSpanVectorSum defined in Algorithm 3. 
Input: Tensors H and E are the hidden and embedding vectors of a sequence of tokens, respectively, each of shape [L, D] with length L and depth D.
Output: weighted embedding sum, X̂.
1. Hyperparameter: max span length M.
2. Compute token weights A: A ← exp(φ(H, θ)W), where φ(·) is a multi-layer perceptron with trainable parameters θ, followed by a linear transformation W; A ∈ R^{L×1}.
3. E′ ← E ⊗ A, where ⊗ is element-wise multiplication; the last dim of A is broadcast.
4. Ê ← ComputeSpanVectorSum(E′).
5. Â ← ComputeSpanVectorSum(A).
6. X̂ ← Ê ⊘ Â, where ⊘ is element-wise division; the last dim of Â is broadcast.
7. Return X̂.

Algorithm 3: ComputeSpanVectorSum.
Input: A tensor G of shape [L, D].
Output: Sum of vectors of each span, U.
1. Hyperparameter: max span length M.
2. Compute the integral image I by cumulative sum along the first dimension of G.
3. I ← [0; I], padding a zero row at the front.
4. For m = 0, ..., M − 1: I1 ← I[m + 1:, :]; I2 ← I[: −m − 1, :]; Ī ← I1 − I2; U ← [U; Ī], concatenated along the first dimension.
5. Return U.

C Details for Distance GCN

Given the structural distance between two objects, based on the view hierarchy tree, we compute the strength with which these objects should affect each other by applying a Gaussian kernel to the distance, as shown in Equation (13):

$\mathrm{Adjacency}(o_i, o_j) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{d(o_i, o_j)^2}{2\sigma^2}\right)$  (13)

where d(o_i, o_j) is the distance between objects o_i and o_j, and σ is a constant. With this definition of soft adjacency, the rest of the computation follows a typical GCN (Niepert et al., 2016).

D Hyperparameters & Training

We tuned all the models over a number of hyperparameters, including the token embedding depth, the hidden size, the number of hidden layers, the learning rate and schedule, and the dropout ratios. We ended up using 128 for the embedding and hidden size for all the models; adding more dimensions does not seem to improve accuracy and slows down training. For the phrase tuple extraction task, we used 6 hidden layers for the Transformer encoder and decoder, with 8-head self- and encoder-decoder attention, for all the model configurations. We used a 10% dropout ratio for attention, layer preprocessing, and ReLU dropout in the Transformer. We followed the learning rate schedule detailed previously (Vaswani et al., 2017), increasing the learning rate to 0.001 over the first 8K steps, followed by an exponential decay. All the models were trained for 1 million steps with a batch size of 128 on a single Tesla V100 GPU, which took 28 to 30 hours. For the grounding task, Filter-1 GCN and Distance GCN used 6 hidden layers with ReLU for non-linear activation and a 10% dropout ratio at each layer. Both GCN models use a smaller peak learning rate of 0.0003. The Transformer screen encoder also uses 6 hidden layers but uses much larger dropout ratios: ReLU dropout of 30%, attention dropout of 40%, and layer preprocessing dropout of 20%, with a peak learning rate of 0.001. All the grounding models were trained for 250K steps on the same hardware.
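To make the constant-time span computation of Appendix B (Algorithms 2 and 3) concrete, below is a minimal NumPy sketch of the summed-area-table trick. It is an illustrative re-implementation rather than the authors' TensorFlow code in the tensor2tensor area-attention module; the function and variable names are our own, and we assume spans are contiguous token ranges of length 1 to M.

```python
import numpy as np

def compute_span_vector_sum(G, max_span_len):
    """Sum of vectors for every span of length 1..max_span_len (cf. Algorithm 3).

    G: [L, D] array. Returns U whose first L rows are spans of length 1,
    the next L-1 rows spans of length 2, and so on.
    """
    I = np.cumsum(G, axis=0)                                     # integral image over tokens
    I = np.concatenate([np.zeros((1, G.shape[1])), I], axis=0)   # pad a zero row at the front
    chunks = []
    for m in range(max_span_len):                                # span length is m + 1
        # sum of G[i : i + m + 1] equals I[i + m + 1] - I[i]
        chunks.append(I[m + 1:, :] - I[:-(m + 1), :])
    return np.concatenate(chunks, axis=0)

def weighted_span_embedding(H, E, max_span_len, mlp, w):
    """Weighted embedding sum for every span (cf. Algorithm 2).

    H, E: [L, D] hidden states and token embeddings; mlp: callable producing
    per-token features; w: projection to a scalar weight per token.
    Both numerator and denominator reuse the span-sum trick above.
    """
    A = np.exp(mlp(H) @ w)                       # token weights, shape [L, 1]
    E_weighted = E * A                           # broadcast over the last dim
    num = compute_span_vector_sum(E_weighted, max_span_len)
    den = compute_span_vector_sum(A, max_span_len)
    return num / den                             # weighted average per span

# Tiny usage example with an identity "MLP" and random inputs.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4)); E = rng.normal(size=(6, 4))
X_hat = weighted_span_embedding(H, E, max_span_len=3, mlp=lambda x: x,
                                w=rng.normal(size=(4, 1)))
print(X_hat.shape)  # (6 + 5 + 4, 4) = (15, 4)
```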
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 800–806 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 800 Tree-Structured Neural Topic Model Masaru Isonuma1 Junichiro Mori1 Danushka Bollegala2 Ichiro Sakata1 1 The University of Tokyo 2 University of Liverpool {isonuma, isakata}@ipr-ctr.t.u-tokyo.ac.jp [email protected] [email protected] Abstract This paper presents a tree-structured neural topic model, which has a topic distribution over a tree with an infinite number of branches. Our model parameterizes an unbounded ancestral and fraternal topic distribution by applying doubly-recurrent neural networks. With the help of autoencoding variational Bayes, our model improves data scalability and achieves competitive performance when inducing latent topics and tree structures, as compared to a prior tree-structured topic model (Blei et al., 2010). This work extends the tree-structured topic model such that it can be incorporated with neural models for downstream tasks. 1 Introduction Probabilistic topic models, such as latent Dirichlet allocation (LDA; Blei et al., 2003), are applied to numerous tasks including document modeling and information retrieval. Recently, Srivastava and Sutton (2017); Miao et al. (2017) have applied the autoencoding variational Bayes (AEVB; Kingma and Welling, 2014; Rezende et al., 2014) framework to basic topic models such as LDA. AEVB improves data scalability in conventional models. The limitation of the basic topic models is that they induce topics as flat structures, not organizing them into coherent groups or hierarchies. Treestructured topic models (Griffiths et al., 2004), which detect the latent tree structure of topics, can overcome this limitation. These models induce a tree with an infinite number of nodes and assign a generic topic to the root and more detailed topics to the leaf nodes. In Figure 1, we show an example of topics induced by our model. Such characteristics are preferable for several downstream tasks, such as document retrieval (Weninger et al., 2012), aspect-based sentiment analysis (Kim et al., 2013) and extractive summarization (Celikyilmaz Root Carry Purchase Cover 1: quality months zipper time back 11: sleeve inside inch protection nice 111: bottom cover top plastic scratches 112: color cover mac keyboard love 12: perfect quality price bought size 121: item return receive amazon money 122: price recommend buy perfect love 13: pockets carry strap shoulder compartment 131: big laptops tablet description hp 132: books school carry bags back Figure 1: Topics inferred by our tree-structured topic model from Amazon reviews of laptop bags. The five most frequent words are shown and manually labeled. and Hakkani-Tur, 2010), because they provide succinct information from multiple viewpoints. For instance, in the case of document retrieval of product reviews, some users are interested in the general opinions about bag covers, while others pay more attention to specific topics such as the hardness or color of the covers. The tree structure can navigate users to the documents with desirable granularity. However, it is difficult to use tree-structured topic models with neural models for downstream tasks. While neural models require a large amount of data for training, conventional inference algorithms, such as collapsed Gibbs sampling (Blei et al., 2010) or mean-field approximation (Wang and Blei, 2009), have data scalability issues. 
It is also desirable to optimize the tree structure for downstream tasks by jointly updating the neural model parameters and posteriors of a topic model. To overcome these challenges, we propose a treestructured neural topic model (TSNTM), which is parameterized by neural networks and is trained using AEVB. While prior works have applied AEVB to flat topic models, it is not straightforward to parameterize the unbounded ancestral and fraternal topic distribution. In this paper, we provide a solution to this by applying doubly-recurrent neural networks (DRNN; Alvarez-Melis and Jaakkola, 2017), which have two recurrent structures over respectively the ancestors and siblings. 801 Experimental results show that the TSNTM achieves competitive performance against a prior work (Blei et al., 2010) when inducing latent topics and tree structures. The TSNTM scales to larger datasets and allows for end-to-end training with neural models of several tasks such as aspect-based sentiment analysis (Esmaeili et al., 2019) and abstractive summarization (Wang et al., 2019). 2 Related Works Following the pioneering work of tree-structured topic models by Griffiths et al. (2004), several extended models have been proposed (Ghahramani et al., 2010; Zavitsanos et al., 2011; Kim et al., 2012; Ahmed et al., 2013; Paisley et al., 2014). Our model is based on the modeling assumption of Wang and Blei (2009); Blei et al. (2010), while parameterizing a topic distribution with AEVB. In the context of applying AEVB to flat document or topic modeling (Miao et al., 2016; Srivastava and Sutton, 2017; Ding et al., 2018), Miao et al. (2017) proposed a model, which is closely related to ours, by applying recurrent neural networks (RNN) to parameterize an unbounded flat topic distribution. Our work infers the topic distributions over an infinite tree with a DRNN, which enables us to induce latent tree structures. Goyal et al. (2017) used a tree-structured topic model (Wang and Blei, 2009) with a variational autoencoder (VAE) to represent video frames as a tree. However, their approach is limited to smaller datasets. In fact, they used only 1,241 videos (corresponding to documents) for training and separately updated the VAE parameters and the posteriors of the topic model by mean-field approximation. This motivates us to propose the TSNTM, which scales to larger datasets and allows for end-to-end training with neural models for downstream tasks. 3 Tree-Structured Neural Topic Model We present the generative process of documents and the posterior inference by our model. As shown in Figure 2, we draw a path from the root to a leaf node and a level for each word. The word is drawn from the multinomial distribution assigned to the topic specified by the path and level. 1. For each document index d ∈{1, . . . , D}: Draw a Gaussian vector: xd ∼N(µ0, σ2 0) (1) Obtain a path distribution: πd = fπ(xd) (2) Obtain a level distribution: θd = fθ(xd) (3) β1 β11 β12 β111 β112 β121 cd,1 cd,2 cd,4 cd,3 zd,1 zd,3 zd,2 zd,4 sampling a path sampling a level wd,1 wd,3 wd,2 wd,4 Figure 2: Sampling process of a topic for each word. 2. For each word index n ∈{1, . . . , Nd} in d: Draw a path: cd,n ∼Mult(πd) (4) Draw a level: zd,n ∼Mult(θd) (5) Draw a word: wd,n ∼Mult(βcd,n[zd,n]) (6) where βcd,n[zd,n] ∈∆V −1 is the word distribution assigned to a topic, cd,n[zd,n]. While Wang and Blei (2009); Blei et al. (2010) draw a path for each document, this constrains a document to be generated from only the topics in the path. 
Hence, we draw a path for each word, enabling a document to be generated from all topics over a tree. Wang and Blei (2009) draw a path and a level distribution via the tree-based stick-breaking construction given by (7) and (8):

$\nu_k \sim \mathrm{Beta}(1, \gamma), \qquad \pi_k = \pi_{par(k)}\, \nu_k \prod_{j=1}^{k-1} (1 - \nu_j)$  (7)

$\eta_l \sim \mathrm{Beta}(1, \alpha), \qquad \theta_l = \eta_l \prod_{j=1}^{l-1} (1 - \eta_j)$  (8)

Here, k ∈ {1, ..., K} and par(k) denote the k-th topic and its parent, respectively, and l ∈ {1, ..., L} denotes the l-th level. See Appendix A.1 for more details. In contrast, we introduce neural architectures, f_π and f_θ, to transform a Gaussian sample into a topic distribution, allowing for posterior inference with AEVB. Specifically, we apply a DRNN to parameterize the path distribution over the tree.

3.1 Parameterizing Topic Distribution

A DRNN is a neural network decoder for generating tree-structured objects from encoded representations (Alvarez-Melis and Jaakkola, 2017). A DRNN consists of two RNNs, over the ancestors and the siblings respectively (see Appendix A.2). We assume that these two recurrent structures can parameterize the unbounded ancestral and fraternal path distribution conditioned on a Gaussian sample x, using a finite number of parameters. The hidden state h_k of topic k is given by:

$h_k = \tanh(W_p h_{par(k)} + W_s h_{k-1})$  (9)

where h_{par(k)} and h_{k−1} are the hidden states of the parent and the previous sibling of the k-th topic, respectively. We alternate the breaking proportions ν in (7) and obtain the path distribution π as:

$\nu_k = \mathrm{sigmoid}(h_k^\top x)$  (10)

Moreover, we parameterize the unbounded level distribution θ by passing a Gaussian vector through an RNN and alternating the breaking proportions η in (8) as:

$h_l = \tanh(W h_{l-1})$  (11)

$\eta_l = \mathrm{sigmoid}(h_l^\top x)$  (12)

3.2 Parameterizing Word Distribution

Next, we explain the word distribution assigned to each topic¹. We introduce the embeddings of the k-th topic, t_k ∈ R^H, and of the words, U ∈ R^{V×H}, to obtain the word distribution β_k ∈ ∆^{V−1} by (13):

$\beta_k = \mathrm{softmax}\big(U \cdot t_k^\top / \tau^{1/l}\big)$  (13)

where τ^{1/l} is a temperature value and produces a sparser probability distribution over words as the level l gets deeper (Hinton et al., 2014). As the number of topics is unbounded, the word distributions must be generated dynamically. Hence, we introduce another DRNN to generate topic embeddings as t_k = DRNN(t_{par(k)}, t_{k−1}). Several neural topic models (Xie et al., 2015; Miao et al., 2017; He et al., 2017) have introduced a diversity regularizer to eliminate redundancy in the topics. While they force all topics to be orthogonal, this is not suitable for tree-structured topic models, which admit correlation between a parent and its children. Hence, we introduce a tree-specific diversity regularizer with $\bar{t}_{ki} = t_i - t_k$:

$\sum_{k \notin Leaf}\; \sum_{i,j \in Chi(k):\, i \neq j} \Big( \frac{\bar{t}_{ki}^\top \bar{t}_{kj}}{\|\bar{t}_{ki}\|\,\|\bar{t}_{kj}\|} - 1 \Big)^{2}$  (14)

where Leaf and Chi(k) denote the set of topics with no children and the children of the k-th topic, respectively. By adding this regularizer to the variational objective, each child topic becomes orthogonal from the viewpoint of its parent, while parent–children correlations are still allowed.

¹β_k can be drawn from another distribution, but here we set it as a model parameter, following Miao et al. (2017).

3.3 Variational Inference with AEVB

Under our proposed probabilistic model, the likelihood of a document is given by (15):

$p(w_d \mid \mu_0, \sigma_0, \beta) = \int_{\pi,\theta} \Big\{ \prod_n \sum_{c_n, z_n} p(w_n \mid \beta_{c_n[z_n]})\, p(c_n \mid \pi)\, p(z_n \mid \theta) \Big\}\, p(\pi, \theta \mid \mu_0, \sigma_0)\, d\pi\, d\theta = \int_{\pi,\theta} \Big\{ \prod_n (\beta \cdot \phi)_{w_n} \Big\}\, p(\pi, \theta \mid \mu_0, \sigma_0)\, d\pi\, d\theta$  (15)

where $\phi \in \Delta^{K-1}$ is the topic distribution and is derived as $\phi_k = \sum_{l=1}^{L} \theta_l \big( \sum_{c:\, c_l = k} \pi_c \big)$.
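As a concrete illustration of the tree-based stick-breaking construction in Eqs. (7)–(8), its neural parameterization in Eqs. (10) and (12), and the mixed topic distribution φ defined above, here is a minimal NumPy sketch. It is a simplified toy under our own assumptions: a fixed three-level tree, random stand-ins for the DRNN hidden states of Eq. (9), and the convention of Appendix A.1 that the last sibling (and last level) absorbs the remaining stick.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stick_break(v):
    """Turn K-1 breaking proportions into K weights that sum to 1;
    the last entry absorbs the remaining stick (cf. Appendix A.1)."""
    weights, remaining = [], 1.0
    for nu in v:
        weights.append(remaining * nu)
        remaining *= (1.0 - nu)
    weights.append(remaining)
    return np.array(weights)

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)                       # Gaussian sample for one document

# A fixed 3-level tree: root -> 3 children -> 2 grandchildren each.
# In the full model the hidden state h_k of each node comes from the DRNN (Eq. 9);
# here random vectors serve as stand-ins.
h_children = rng.normal(size=(2, d))          # 2 proportions -> 3 siblings
h_levels = rng.normal(size=(2, d))            # 2 proportions -> 3 levels

# Path distribution over the tree (Eqs. 7 and 10): children partition the parent mass.
pi_root = 1.0
pi_level2 = pi_root * stick_break(sigmoid(h_children @ x))
pi_level3 = []
for pi_parent in pi_level2:
    h_grand = rng.normal(size=(1, d))         # 1 proportion -> 2 siblings
    pi_level3.append(pi_parent * stick_break(sigmoid(h_grand @ x)))
pi_level3 = np.concatenate(pi_level3)         # 6 leaf nodes

# Level distribution (Eqs. 8 and 12).
theta = stick_break(sigmoid(h_levels @ x))

# phi_k = theta_{level(k)} * (total path mass passing through node k),
# which here equals the node's own stick mass since children partition the parent.
phi = np.concatenate([theta[0] * np.array([pi_root]),
                      theta[1] * pi_level2,
                      theta[2] * pi_level3])
print(round(phi.sum(), 6))   # 1.0: a valid distribution over all tree nodes
```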
As the latent variables c_n and z_n are integrated out in (15), the evidence lower bound for the document log-likelihood is derived as:

$\mathcal{L}_d = \mathbb{E}_{q(\pi,\theta \mid w_d)}\Big[\sum_n \log(\beta \cdot \phi)_{w_n}\Big] - \mathrm{KL}\big[q(\pi,\theta \mid w_d)\,\|\,p(\pi,\theta \mid \mu_0,\sigma_0)\big]$  (16)

where q(π, θ|w_d) is the variational distribution approximating the posteriors. Following the AEVB framework, we introduce multi-layer perceptrons (MLPs) f_μ and f_{σ²} for transforming the bag-of-words vector w_d into the variational Gaussian distribution. The variational distribution of the posteriors is rewritten as:

$q(\pi,\theta \mid w_d) = q(f_\pi(x), f_\theta(x) \mid w_d) = \mathcal{N}(x \mid f_\mu(w_d), f_{\sigma^2}(w_d))$  (17)

We sample $\hat{\pi}$ and $\hat{\theta}$ from q(π, θ|w_d) by sampling $\hat{\epsilon} \sim \mathcal{N}(0, I)$ and computing $\hat{x} = f_\mu(w_d) + \hat{\epsilon} \cdot f_{\sigma^2}(w_d)$. The prior, p(π, θ|μ_0, σ_0²), is also rewritten as N(x|μ_0, σ_0²). To sum up, the evidence lower bound is approximated with the sampled topic distribution $\hat{\phi}$ as:

$\mathcal{L}_d \approx \sum_n \log(\beta \cdot \hat{\phi})_{w_n} - \mathrm{KL}\big[\mathcal{N}(x \mid f_\mu(w_d), f_{\sigma^2}(w_d))\,\|\,\mathcal{N}(x \mid \mu_0, \sigma_0^2)\big]$  (18)

3.4 Dynamically Updating the Tree Structure

To allow an unbounded tree structure, we introduce two heuristic rules for adding and pruning branches. We compute the proportion of the words in topic k as $p_k = \big(\sum_{d=1}^{D} N_d \hat{\phi}_{d,k}\big) / \big(\sum_{d=1}^{D} N_d\big)$. For each non-leaf topic k, if p_k is more than a threshold, a child is added to refine the topic. For each topic k, if the cumulative proportion of topics over its descendants, $\sum_{j \in Des(k)} p_j$, is less than a threshold, the k-th topic and its descendants are removed (Des(k) denotes the set of topic k and its descendants). We also remove topics with no children at the bottom.

4 Experiments

4.1 Datasets

In our experiments, we use the 20NewsGroups and the Amazon product reviews. The 20NewsGroups is a collection of 20 different news groups containing 11,258 training and 7,487 testing documents². For the Amazon product reviews, we use the domain of Laptop Bags provided by Angelidis and Lapata (2018), with 31,943 training, 385 validation, and 416 testing documents³. We use the provided test documents in our evaluations, while randomly splitting the remainder of the documents into training and validation sets.

4.2 Baseline Methods

As baselines, we use a tree-structured topic model based on the nested Chinese restaurant process (nCRP) with collapsed Gibbs sampling (Blei et al., 2010). In addition, we use a flat neural topic model, i.e., the recurrent stick-breaking process (RSB), which constructs an unbounded flat topic distribution via an RNN (Miao et al., 2017).

4.3 Implementation Details

For the TSNTM and RSB, we use 256-dimensional word embeddings, a one-hidden-layer MLP with 256 hidden units, and a one-layer RNN with 256 hidden units to construct the variational parameters. We set the hyper-parameters of the Gaussian prior distribution, μ_0 and σ_0², as a zero mean vector and a unit variance vector with 32 dimensions, respectively. We train the model using AdaGrad (Duchi et al., 2011) with a learning rate of 10⁻², an initial accumulator value of 10⁻¹, and a batch size of 64. We grow and prune the tree with a threshold of 0.05 in Section 3.4 and set the temperature as τ = 10 in Section 3.2⁴. Regarding the nCRP-based model, we set the nCRP parameter as γ = 0.01, the GEM parameters as π = 10, m = 0.5, and the Dirichlet parameter as η = 5. The hyperparameters of each model are tuned based on the perplexity on the validation set of the Amazon product reviews. We fix the number of levels in the tree as 3, with an initial number of 3 branches for both the second and third levels.

²For direct comparison against Miao et al.
(2017), we use the training/testing splits and the vocabulary provided at https://github.com/akashgit/autoencoding_vi_for_topic_models.
³https://github.com/stangelid/oposum
⁴The code to reproduce the results is available at: https://github.com/misonuma/tsntm.

  NPMI                       20News   Amazon
  RSB (Miao et al., 2017)    0.201    0.102
  nCRP (Blei et al., 2010)   0.198    0.112
  TSNTM (Our Model)          0.220    0.121
Table 1: Average NPMI of the induced topics.

  Perplexity                 20News   Amazon
  RSB (Miao et al., 2017)    931      472
  nCRP (Blei et al., 2010)   681      303
  TSNTM (Our Model)          886      460
Table 2: Average perplexity of each model.

4.4 Evaluating Topic Interpretability

Several works (Chang et al., 2009; Newman et al., 2010) pointed out that perplexity is not suitable for evaluating topic interpretability. Meanwhile, Lau et al. (2014) showed that the normalized pointwise mutual information (NPMI) between all pairs of words in each topic closely corresponds to the ranking of topic interpretability by human annotators. Thus, we use NPMI instead of perplexity as the primary evaluation measure, following Srivastava and Sutton (2017); Ding et al. (2018). Table 1 shows the average NPMI of the topics induced by each model. Our model is competitive with the nCRP-based model and the RSB for each dataset. This indicates that our model can induce interpretable topics similar to the other models. As a note, we also show the average perplexity over the documents of each model in Table 2. For the AEVB-based models (RSB and TSNTM), we calculate the upper bound of the perplexity using the ELBO, following Miao et al. (2017); Srivastava and Sutton (2017). In contrast, we estimate it by sampling the posteriors in the nCRP-based model with collapsed Gibbs sampling. Even though it is difficult to compare them directly, the perplexity of the nCRP-based model is lower than that of the AEVB-based models. This tendency corresponds to the results of Srivastava and Sutton (2017); Ding et al. (2018), which report that the model with collapsed Gibbs sampling achieves the lowest perplexity in comparison with the AEVB-based models. In addition, Ding et al. (2018) also reports that there is a trade-off between perplexity and NPMI. Therefore, it is natural that our model is competitive with the other models with regard to NPMI, while there is a significant difference in achieved perplexity.

[Figure 3: Topic specialization scores for each level (1–3) on 20NewsGroups and the Amazon product reviews, for TSNTM and nCRP.]

[Figure 4: Hierarchical affinity scores (child vs. non-child) on 20NewsGroups and the Amazon product reviews, for TSNTM and nCRP.]

4.5 Evaluating Tree-Structure

For evaluating the characteristics of the tree structure, we adopt two metrics, topic specialization and hierarchical affinity, following Kim et al. (2012). Topic specialization: An important characteristic of the tree structure is that the most general topic is assigned to the root, while the topics become more specific toward the leaves. To quantify this characteristic, we measure the specialization score as the cosine similarity of the word distribution between each topic and the entire corpus. As the entire corpus is regarded as the most general topic, more specific topics have lower similarity scores. Figure 3 presents the average topic specialization scores for each level. While the root of the nCRP is more general than that of our model, the tendency is roughly similar for both models.
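The topic specialization score above is a plain cosine similarity between word distributions. A minimal NumPy sketch (with made-up toy distributions rather than our experimental data) is:

```python
import numpy as np

def topic_specialization(topic_word, corpus_word):
    """Cosine similarity between each topic's word distribution and the
    empirical word distribution of the whole corpus; lower = more specialized."""
    t = topic_word / np.linalg.norm(topic_word, axis=-1, keepdims=True)
    c = corpus_word / np.linalg.norm(corpus_word)
    return t @ c

rng = np.random.default_rng(0)
vocab = 1000
corpus_word = rng.dirichlet(np.ones(vocab))           # corpus-level unigram distribution
topic_word = rng.dirichlet(np.ones(vocab), size=5)    # word distributions of 5 topics
print(topic_specialization(topic_word, corpus_word))  # one score per topic
```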
Hierarchical Affinity: It is preferable that a parent topic is more similar to its children than the topics descended from the other parents. To verify this property, for each parent in the second level, we calculate the average cosine similarity of the word distribution to children and non-children respectively. Figure 4 shows the average cosine similarity over the topics. While the nCRP-based model induces child topics slightly similar to their parents, our model infers child topics with more similarity to their parent topics. Moreover, lower scores of the TSNTM also indicate that it induces more diverse topics than the nCRP-based model. Example: In Section 1, an example of the induced topics and the latent tree for the laptop bag reviews is shown in Figure 1. 4.6 Evaluating Data Scalability To evaluate how our model scales with the size of the datasets, we measure the training time until the convergence for various numbers of documents. 0 5,000 10,000 15,000 20,000 25,000 30,000 35,000 Number of documents 0 1,000 2,000 3,000 4,000 5,000 Time (sec.) Amazon product reviews TSNTM nCRP Figure 5: Training time for various number of docs. We randomly sample several number of documents (1,000, 2,000, 4,000, 8,000, 16,000 and all) from the training set of the Amazon product reviews and measure the training time for each number of documents. The training is stopped when the perplexity of the validation set is not improved for 10 consecutive iterations over the entire batches. We measure the time to sample the posteriors or update the model parameters, except for the time to compute the perplexity 5. As shown in Figure 5, as the number of documents increases, the training time of our model does not change considerably, whereas that of the nCRP increases significantly. Our model can be trained approximately 15 times faster than the nCRP-based model with 32,000 documents. 5 Conclusion We proposed a novel tree-structured topic model, the TSNTM, which parameterizes the topic distribution over an infinite tree by a DRNN. Experimental results demonstrated that the TSNTM achieves competitive performance when inducing latent topics and their tree structures, as compared to a prior tree-structured topic model (Blei et al., 2010). With the help of AEVB, the TSNTM can be trained approximately 15 times faster and scales to larger datasets than the nCRPbased model. This allows the tree-structured topic model to be incorporated with recent neural models for downstream tasks, such as aspect-based sentiment analysis (Esmaeili et al., 2019) and abstractive summarization (Wang et al., 2019). By incorporating our model instead of flat topic models, they can provide multiple information with desirable granularity. Acknowledgments We would like to thank anonymous reviewers for their valuable feedback. This work was supported by JST ACT-X Grant Number JPMJAX1904 and CREST Grant Number JPMJCR1513, Japan. 5All computational times are measures on the same machine with a Xeon E5-2683-v4 (2.1 GHz, 16 cores) CPU and a single GeForce GTX 1080 (8GB) GPU. 805 References Amr Ahmed, Liangjie Hong, and Alexander J Smola. 2013. The nested chinese restaurant franchise process: User tracking and document modeling. In Proceedings of the 30th International Conference on Machine Learning, pages 1426–1434. David Alvarez-Melis and Tommi S Jaakkola. 2017. Tree-structured decoding with doubly-recurrent neural networks. In Proceedings of the 5th International Conference on Learning Representations. Stefanos Angelidis and Mirella Lapata. 
2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675–3686. David M Blei, Thomas L Griffiths, and Michael I Jordan. 2010. The nested chinese restaurant process and bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57(2):7. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815–824. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L Boyd-Graber, and David M Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems, pages 288–296. Ran Ding, Ramesh Nallapati, and Bing Xiang. 2018. Coherence-aware neural topic modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 830– 836. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Babak Esmaeili, Hongyi Huang, Byron Wallace, and Jan-Willem van de Meent. 2019. Structured neural topic models for reviews. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, pages 3429–3439. Zoubin Ghahramani, Michael I Jordan, and Ryan P Adams. 2010. Tree-structured stick breaking for hierarchical data. In Advances in Neural Information Processing Systems, pages 19–27. Prasoon Goyal, Zhiting Hu, Xiaodan Liang, Chenyu Wang, and Eric P Xing. 2017. Nonparametric variational auto-encoders for hierarchical representation learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 5094–5102. Thomas L Griffiths, Michael I Jordan, Joshua B Tenenbaum, and David M Blei. 2004. Hierarchical topic models and the nested chinese restaurant process. In Advances in Neural Information Processing Systems, pages 17–24. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 388–397. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2014. Distilling the knowledge in a neural network. In the NIPS 2014 Deep Learning and Representation Learning Workshop. Joon Hee Kim, Dongwoo Kim, Suin Kim, and Alice Oh. 2012. Modeling topic hierarchies with the recursive chinese restaurant process. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 783–792. Suin Kim, Jianwen Zhang, Zheng Chen, Alice Oh, and Shixia Liu. 2013. A hierarchical aspect-sentiment model for online reviews. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 526–533. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. 
Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In Proceedings of the 34th International Conference on Machine Learning, pages 2410–2419. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of the 33rd International Conference on Machine Learning, pages 1727–1736. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Proceedings of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 100–108. John Paisley, Chong Wang, David M Blei, and Michael I Jordan. 2014. Nested hierarchical dirichlet processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):256–270. 806 Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pages 1278–1286. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In Proceedings of the 5th International Conference on Learning Representations. Chong Wang and David M Blei. 2009. Variational inference for the nested chinese restaurant process. In Advances in Neural Information Processing Systems, pages 1990–1998. Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Topic-guided variational auto-encoder for text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 166–177. Tim Weninger, Yonatan Bisk, and Jiawei Han. 2012. Document-topic hierarchies from document graphs. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 635–644. Pengtao Xie, Yuntian Deng, and Eric Xing. 2015. Diversifying restricted boltzmann machine for document modeling. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1315–1324. Elias Zavitsanos, Georgios Paliouras, and George A Vouros. 2011. Non-parametric estimation of topic hierarchies from texts with hierarchical dirichlet processes. Journal of Machine Learning Research, 12:2749–2775. A Appendices A.1 Tree-Based Stick-Breaking Construction Figure 6 describes the process of the tree-based stick-breaking construction (Wang and Blei, 2009). At the first level, the stick length is π1 =1. Then, the stick-breaking construction is applied to the first level stick to obtain the path distribution over the second level. For instance, if the second level contains K =3 topics, the probability of each path is obtained as π11 =π1ν11, π12 =π1ν12(1−ν11) and the remaining stick π13 =π1(1−ν12)(1−ν11). Generally, for any values of K, it satisfies PK k=1 π1k = π1. The same process is applied to each stick proportion of the second level and continues until it reaches to the bottom level. π1 = 1 π11 π12 π111 π112 π121 π122 … … … π1 v11 π1 v12 (1 - v11) π123 … … stick-breaking construction: π1 (1 - v11) Figure 6: Tree-based stick-breaking construction. h1 h11 h12 h111 h122 h112 h121 h123 Ancestral Fraternal … … … Figure 7: Doubly-recurrent neural networks. 
A.2 Doubly-Recurrent Neural Networks

Figure 7 shows the architecture of doubly-recurrent neural networks (Alvarez-Melis and Jaakkola, 2017). It consists of two recurrent neural networks, over the ancestors and the siblings respectively, which are combined in each cell as described in (9).
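To make the doubly-recurrent update concrete, here is a minimal NumPy sketch of the cell in Eq. (9) combined with the breaking proportion in Eq. (10). It is a simplified illustration with our own variable names; the full DRNN of Alvarez-Melis and Jaakkola (2017) additionally predicts tree topology, which we omit here.

```python
import numpy as np

def drnn_cell(h_parent, h_prev_sibling, W_p, W_s):
    """Doubly-recurrent update (Eq. 9): combine ancestral and fraternal states."""
    return np.tanh(W_p @ h_parent + W_s @ h_prev_sibling)

rng = np.random.default_rng(0)
d = 16
W_p, W_s = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
x = rng.normal(size=d)                      # Gaussian sample for one document

h_root = np.zeros(d)
h_prev = np.zeros(d)                        # no previous sibling for the first child
for k in range(3):                          # three children of the root
    h_k = drnn_cell(h_root, h_prev, W_p, W_s)
    nu_k = 1.0 / (1.0 + np.exp(-h_k @ x))   # breaking proportion (Eq. 10)
    print(f"child {k}: nu = {nu_k:.3f}")
    h_prev = h_k                            # the next sibling conditions on this one
```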
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8211–8225 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8211 TVQA+: Spatio-Temporal Grounding for Video Question Answering Jie Lei Licheng Yu Tamara L. Berg Mohit Bansal Department of Computer Science University of North Carolina at Chapel Hill {jielei, licheng, tlberg, mbansal}@cs.unc.edu Abstract We present the task of Spatio-Temporal Video Question Answering, which requires intelligent systems to simultaneously retrieve relevant moments and detect referenced visual concepts (people and objects) to answer natural language questions about videos. We first augment the TVQA dataset with 310.8K bounding boxes, linking depicted objects to visual concepts in questions and answers. We name this augmented version as TVQA+. We then propose Spatio-Temporal Answerer with Grounded Evidence (STAGE), a unified framework that grounds evidence in both spatial and temporal domains to answer questions about videos. Comprehensive experiments and analyses demonstrate the effectiveness of our framework and how the rich annotations in our TVQA+ dataset can contribute to the question answering task. Moreover, by performing this joint task, our model is able to produce insightful and interpretable spatio-temporal attention visualizations.1 1 Introduction We have witnessed great progress in recent years on image-based visual question answering (QA) tasks (Antol et al., 2015; Yu et al., 2015; Zhu et al., 2016b). One key to this success has been spatial attention (Anderson et al., 2018; Shih et al., 2016; Lu et al., 2016), where neural models learn to attend to relevant regions for predicting the correct answer. Compared to image-based QA, there has been less progress on the performance of video-based QA tasks. One possible reason is that attention techniques are hard to generalize to the temporal nature of videos. Moreover, due to the high cost of annotation, most existing video QA datasets only contain QA pairs, without providing labels for the 1Dataset and code are publicly available: http: //tvqa.cs.unc.edu, https://github.com/ jayleicn/TVQAplus Question: What is Sheldon holding when he is talking to Howard about the sword? Correct Answer: A computer. 00:02.314 → 00:06.732 Howard: Sheldon, he’s got Raj. Use your sleep spell. Sheldon! Sheldon! 00:06.902 → 00:10.992 Sheldon: I’ve got the Sword of Azeroth. Question: Who is talking to Howard when he is in the kitchen upset? Correct Answer: Raj is talking to Howard. 00:17.982 → 00:20.532 Howard: That's really stupid advice. 00:20.534 → 00:22.364 Raj: You know that hurts my feelings. Figure 1: Samples from TVQA+. Questions and correct answers are temporally localized to clips, and spatially localized to frame-level bounding boxes. Colors indicate corresponding box-object pairs. Subtitles are shown in dashed blocks. Wrong answers are omitted. key clips or regions needed to answer the question. Inspired by previous work on grounded image and video captioning (Lu et al., 2018; Zhou et al., 2019), we propose methods that explicitly localize video clips as well as spatial regions for answering videobased questions. Such methods are useful in many scenarios, such as natural language guided spatiotemporal localization, and adding explainability to video question answering, which is potentially useful for decision making and model debugging. 
To enable this line of research, we also collect new joint spatio-temporal annotations for an existing video QA dataset. In the past few years, several video QA datasets have been proposed, e.g., MovieFIB (Maharaj et al., 2017), MovieQA (Tapaswi et al., 2016), TGIF-QA (Jang et al., 2017), PororoQA (Kim et al., 2017), MarioQA (Mun et al., 2017), and TVQA (Lei et al., 2018). TVQA is one of the largest video QA datasets, providing a large video QA dataset built on top of 6 famous TV series. Be8212 cause TVQA was collected on television shows, it is built on natural video content with rich dynamics and complex social interactions, where questionanswer pairs are written by people observing both videos and their accompanying dialogues, encouraging the questions to require both vision and language understanding to answer. Movie (Tapaswi et al., 2016; Maharaj et al., 2017) and television show (Lei et al., 2018) videos come with the limitation of being scripted and edited, but they are still more realistic than cartoon/animation (Kim et al., 2017) and game (Mun et al., 2017) videos, and they also come with richer, real-world-inspired inter-human interactions and span across diverse domains (e.g., medical, crime, sitcom, etc.), making them a useful testbed to study complex video understanding by machine learning models. One key property of TVQA is that it provides temporal annotations denoting which parts of a video clip are necessary for answering a proposed question. However, none of the existing video QA datasets (including TVQA) provide spatial annotation for the answers. Actually, grounding spatial regions correctly could be as important as grounding temporal moments for answering a given question. For example, in Fig. 1, to answer the question of “What is Sheldon holding when he is talking to Howard about the sword?”, we need to localize the moment when “he is talking to Howard about the sword?”, as well as look at the region of “What is Sheldon holding”. Hence, in this paper, we first augment a subset of the TVQA dataset with grounded bounding boxes, resulting in a spatio-temporally grounded video QA dataset, TVQA+. It consists of 29.4K multiplechoice questions grounded in both the temporal and the spatial domains. To collect spatial groundings, we start by identifying a set of visual concept words, i.e., objects and people, mentioned in the question or correct answer. Next, we associate the referenced concepts with object regions in individual frames, if there are any, by annotating bounding boxes for each referred concept (see examples in Fig. 1). Our TVQA+ dataset has a total of 310.8K bounding boxes linked with referred objects and people, spanning across 2.5K categories (more details in Sec. 3). With such richly annotated data, we then propose the task of spatio-temporal video question answering, which requires intelligent systems to localize relevant moments, detect referred objects and people, and answer questions. We further design several metrics to evaluate the performance of the proposed task, including QA accuracy, object grounding precision, temporal localization accuracy, and a joint temporal localization and QA accuracy. To address spatio-temporal video question answering, we propose a novel end-to-end trainable model, Spatio-Temporal Answerer with Grounded Evidence (STAGE), which effectively combines moment localization, object grounding, and question answering in a unified framework. 
We find that the QA performance benefits from both temporal moment and spatial region supervision. Additionally, we provide visualization of temporal and spatial localization, which is helpful for understanding what our model has learned. Comprehensive ablation studies demonstrate how each of our annotations and model components helps to improve the performance of the tasks. To summarize, our contributions are: • We collect TVQA+, a large-scale spatiotemporal video question answering dataset, which augments the original TVQA dataset with frame-level bounding box annotations. To our knowledge, this is the first dataset that combines moment localization, object grounding, and question answering. • We design a novel video question answering framework, Spatio-Temporal Answerer with Grounded Evidence (STAGE), to jointly localize moments, ground objects, and answer questions. By performing all three sub-tasks together, our model achieves significant performance gains over the baselines, as well as presents insightful, interpretable visualizations. 2 Related Work Question Answering In recent years, multiple question answering datasets and tasks have been proposed to facilitate research towards this goal, in both vision and language communities, in the form of visual question answering (Antol et al., 2015; Yu et al., 2015; Jang et al., 2017) and textual question answering (Rajpurkar et al., 2016; Weston et al., 2016), respectively. Video question answering (Lei et al., 2018; Tapaswi et al., 2016; Kim et al., 2017) with naturally occurring subtitles are particularly interesting, as it combines both visual and textual information for question answering. Different from 8213 Dataset Origin Task #Clips/#QAs #Boxes Temporal (#Sentences) Annotation MovieFIB (Maharaj et al., 2017) Movie QA 118.5K/349K  MovieQA (Tapaswi et al., 2016) Movie QA 6.8K/6.5K  TGIF-QA (Jang et al., 2017) Tumblr QA 71.7K/165.2K  PororoQA (Kim et al., 2017) Cartoon QA 16.1K/8.9K  DiDeMo (Hendricks et al., 2017) Flickr TL 10.5K/40.5K  Charades-STA (Gao et al., 2017) Home TL -/19.5K  TVQA (Lei et al., 2018) TV Show QA/TL 21.8K/152.5K  ANet-Entities (Zhou et al., 2019) Youtube CAP/TL/SL 15K/52K 158K  TVQA+ TV Show QA/TL/SL 4.2K/29.4K 310.8K  Table 1: Comparison of TVQA+ with other video-language datasets. TL=Temporal Localization, SL=Spatial Localization, CAP=Captioning. existing video QA tasks, where a system is only required to predict an answer, we propose a novel task that additionally grounds the answer in both spatial and temporal domains. Language-Guided Retrieval Grounding language in images/videos is an interesting problem that requires jointly understanding both text and visual modalities. Earlier works (Kazemzadeh et al., 2014; Yu et al., 2017, 2018b; Rohrbach et al., 2016) focused on identifying the referred object in an image. Recently, there has been a growing interest in moment retrieval tasks (Hendricks et al., 2017, 2018; Gao et al., 2017), where the goal is to localize a short clip from a long video via a natural language query. Our work integrates the goals of both tasks, requiring a system to ground the referred moments and objects simultaneously. Temporal and Spatial Attention Attention has shown great success on many vision and language tasks, such as image captioning (Anderson et al., 2018; Xu et al., 2015), visual question answering (Anderson et al., 2018; Trott et al., 2018), language grounding (Yu et al., 2018b), etc. 
However, sometimes the attention learned by the model itself may not agree with human expectations (Liu et al., 2016; Das et al., 2016). Recent works on grounded image captioning and video captioning (Lu et al., 2018; Zhou et al., 2019) show better performance can be achieved by explicitly supervising the attention. In this work, we use annotated frame-wise bounding box annotations to supervise both temporal and spatial attention. Experimental results demonstrate the effectiveness of supervising both domains in video QA. Split #Clips/#QAs #Annotated #Boxes #Categories Images Train 3,364/23,545 118,930 249,236 2,281 Val 431/3,017 15,350 32,682 769 Test 403/2,821 14,188 28,908 680 Total 4,198/29,383 148,468 310,826 2,527 Table 2: Data Statistics for TVQA+ dataset. 3 Dataset In this section, we describe the TVQA+ Dataset, the first video question answering dataset with both spatial and temporal annotations. TVQA+ is built on the TVQA dataset introduced by Lei et al.. TVQA is a large-scale video QA dataset based on 6 popular TV shows, containing 152.5K multiple choice questions from 21.8K, 60-90 second long video clips. The questions in the TVQA dataset are compositional, where each question is comprised of two parts, a question part (“where was Sheldon sitting”), joined via a link word, (“before”, “when”, “after”), to a localization part that temporally locates when the question occurs (“he spilled the milk”). Models should answer questions using both visual information from the video, as well as language information from the naturally associated dialog (subtitles). Since the video clips on which the questions were collected are usually much longer than the context needed for answering the questions, the TVQA dataset also provides a temporal timestamp annotation indicating the minimum span (context) needed to answer each question. While the TVQA dataset provides a novel question format and temporal annotations, it lacks spatial grounding information, i.e., bounding boxes of the concepts (objects and people) mentioned in the QA pair. We hypothesize that object annotations could provide an additional useful training signal for models to learn a deeper understanding 8214 Figure 2: Box distributions for top 60 categories in TVQA+ train set. of visual information. Therefore, to complement the original TVQA dataset, we collect frame-wise bounding boxes for visual concepts mentioned in the questions and correct answers. Since the full TVQA dataset is very large, we start by collecting bounding box annotations for QA pairs associated with The Big Bang Theory. This subset contains 29,383 QA pairs from 4,198 clips. 3.1 Data Collection Identify Visual Concepts To annotate the visual concepts in video frames, the first step is to identify them in the QA pairs. We use the Stanford CoreNLP part-of-speech tagger (Manning et al., 2014) to extract all nouns in the questions and correct answers. This gives us a total of 152,722 words from a vocabulary of 9,690 words. We manually label the non-visual nouns (e.g., “plan”, “time”, etc.) in the top 600 nouns, removing 165 frequent non-visual nouns from the vocabulary. Bounding Box Annotation For the selected The Big Bang Theory videos from TVQA, we first ask Amazon Mechanical Turk workers to adjust the start and end timestamps to refine the temporal annotation, as we found the original temporal annotation were not ideally tight. We then sample one frame every two seconds from each span for spatial annotation. 
For each frame, we collect the bounding boxes for the visual concepts in each QA pair. We also experimented with semi-automated annotation for people with face detection (Zhang et al., 2016) and recognition model (Liu et al., 2017), but they do not work well mainly due to many partial occlusion of faces (e.g., side faces) in the frames. During annotation, we provide the original videos (with subtitles) to help the workers understand the context for the given QA pair. More annotation details (including quality check) are presented in Figure 3: Box/image area ratios (left) and span length distributions (right) in TVQA+. the appendix. 3.2 Dataset Analysis TVQA+ contains 29,383 QA pairs from 4,198 videos, with 148,468 images annotated with 310,826 bounding boxes. Statistics of TVQA+ are shown in Table 2. Note that we follow the same data splits as the original TVQA dataset, supporting future research on both TVQA and TVQA+. Table 1 compares TVQA+ dataset with other videolanguage datasets. TVQA+ is unique as it supports three tasks: question answering, temporal localization, and spatial localization. It is also of reasonable size compared to the grounded video captioning dataset ANetEntities (Zhou et al., 2019). On average, we obtain 2.09 boxes per image and 10.58 boxes per question. The annotated boxes cover 2,527 categories. We show the number of boxes (in log scale) for each of the top 60 categories in Fig. 2. The distribution has a long tail, e.g., the number of boxes for the most frequent category “sheldon” is around 2 orders of magnitude larger than the 60th category “glasses”. We also show the distribution of bounding box area over image area ratio in Fig. 3 (left). The majority of boxes are fairly small compared to the image, which makes object grounding challenging. Fig. 3 (right) shows the distribution of localized span length. While most spans are 8215 Conv Encoder QA Guided Attention Video-Text Fusion Conv Encoder Max Linear Who is at NASA to pick up Howard after he comes home ? Just Bernadette. BERT Linear Span Predictor QA Guided Attention Start and End probabilities Span Proposal Max Max Concat Attention Loss Answer Loss Linear Linear Slice Start and End indices RCNN Linear Conv Encoder Conv Encoder Linear BERT 00:00.243 →00:01.473 Howard: Where are the guys? 00:01.678 →00:02.768 Bernadette: Oh, it's just me. 00:07.350 →00:08.820 Howard: Don't worry. I can act surprised. 
While most spans are less than 10 seconds, the largest spans are up to 20 seconds. The average span length is 7.2 seconds, which is short compared to the average length of the full video clips (61.49 seconds).

Figure 4: Overview of the proposed STAGE framework.

4 Methods

Our proposed method, Spatio-Temporal Answerer with Grounded Evidence (STAGE), is a unified framework for moment localization, object grounding, and video QA. First, STAGE encodes the video and text (subtitles, QA) via frame-wise regional visual representations and neural language representations, respectively. The encoded video and text representations are then contextualized using a convolutional encoder. Second, STAGE computes attention scores from each QA word to object regions and subtitle words. Leveraging the attention scores, STAGE is able to generate QA-aware representations, as well as automatically detect the referred objects and people. The attended QA-aware video and subtitle representations are then fused together to obtain a joint frame-wise representation. Third, taking the frame-wise representation as input, STAGE learns to predict QA-relevant temporal spans, then combines the global and local (span-localized) video information to answer the questions. In the following, we describe STAGE in detail.

4.1 Formulation

In our tasks, the inputs are: (1) a question with 5 candidate answers; (2) a 60-second long video; (3) a set of subtitle sentences. Our goal is to predict the answer and ground it both spatially and temporally. Given the question $q$ and the answers $\{a_k\}_{k=1}^{5}$, we first formulate them as 5 hypotheses (QA pairs) $h_k = [q, a_k]$ and predict their correctness scores based on the video and subtitle context (Onishi et al., 2016). We denote the ground-truth (GT) answer index as $y^{ans}$ and thus the GT hypothesis as $h_{y^{ans}}$. We then extract video frames $\{v_t\}_{t=1}^{T}$ at 0.5 FPS ($T$ is the number of frames for each video). Subtitle sentences are then temporally aligned with the video frames. Specifically, for each frame $v_t$, we pair it with its two neighboring sentences based on the subtitle timestamps. We choose two neighbors since this keeps most of the sentences at our current frame rate and also avoids severe misalignment between the frames and the sentences. The set of aligned subtitle sentences is denoted as $\{s_t\}_{t=1}^{T}$. We denote the number of words in each hypothesis and subtitle as $L_h$ and $L_s$, respectively. We use $N_o$ to denote the number of object regions in a frame, and $d = 128$ as the hidden size.

4.2 STAGE Architecture

Input Embedding Layer. For each frame $v_t$, we use Faster R-CNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017) to detect objects and extract their regional representations as our visual features (Anderson et al., 2018). We keep the top-20 object proposals and use PCA to reduce the feature dimension from 2048 to 300 to save GPU memory and computation. We denote $o_{t,r} \in \mathbb{R}^{300}$ as the $r$-th object embedding in the $t$-th frame. To encode the text input, we use BERT (Devlin et al., 2019), a Transformer-based language model (Vaswani et al., 2017) that achieves state-of-the-art performance on various NLP tasks. Specifically, we first fine-tune the BERT-base model using the masked language model and next sentence prediction objectives on the subtitles and QA pairs from the TVQA+ train set. Then, we fix its parameters and use it to extract 768D word-level embeddings from the second-to-last layer for the subtitles and each hypothesis. Both embeddings are projected into a 128D space using a linear layer with ReLU.
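The following minimal Python sketch (our own illustration; the data structures, timestamps, and function name are hypothetical, not the authors' released code) shows one way to implement the frame-subtitle alignment of Sec. 4.1, pairing each 0.5-FPS frame with its two temporally closest subtitle sentences.

```python
# A minimal sketch (not the authors' code) of how each sampled video frame
# can be paired with its two temporally closest subtitle sentences.
from bisect import bisect_left

def align_subtitles_to_frames(frame_times, subtitles):
    """frame_times: frame timestamps (seconds), sampled at 0.5 FPS.
    subtitles: list of (start_sec, end_sec, text), sorted by start time.
    Returns one concatenated subtitle string per frame."""
    centers = [(s + e) / 2.0 for s, e, _ in subtitles]
    aligned = []
    for t in frame_times:
        i = bisect_left(centers, t)
        # candidate neighbors around the insertion point
        cand = [j for j in (i - 1, i, i + 1) if 0 <= j < len(subtitles)]
        # keep the two sentences whose centers are closest to the frame time
        cand.sort(key=lambda j: abs(centers[j] - t))
        kept = sorted(cand[:2])
        aligned.append(" ".join(subtitles[j][2] for j in kept))
    return aligned

frames = [0.0, 2.0, 4.0, 6.0]  # 0.5 FPS sampling
subs = [(0.2, 1.5, "Howard: Where are the guys?"),
        (1.7, 2.8, "Bernadette: Oh, it's just me."),
        (7.3, 8.8, "Howard: Don't worry. I can act surprised.")]
print(align_subtitles_to_frames(frames, subs))
```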
Convolutional Encoder. Inspired by the recent trend of replacing recurrent networks with CNNs (Dauphin et al., 2016; Yu et al., 2018a) and Transformers (Vaswani et al., 2017; Devlin et al., 2019) for sequence modeling, we use positional encoding (PE), CNNs, and layer normalization (Ba et al., 2016) to build our basic encoding block. As shown in the bottom-right corner of Fig. 4, it is comprised of a PE layer and multiple convolutional layers, each with a residual connection (He et al., 2016) and layer normalization. We use $\mathrm{LayerNorm}(\mathrm{ReLU}(\mathrm{Conv}(x)) + x)$ to denote a single Conv unit and stack $N_{conv}$ such units as the convolutional encoder, where $x$ is the input after PE and Conv is a depthwise separable convolution (Chollet, 2017). We use two convolutional encoders at two different levels of STAGE: one with kernel size 7 to encode the raw inputs, and another with kernel size 5 to encode the fused video-text representation. For both encoders, we set $N_{conv} = 2$.

QA-Guided Attention. For each hypothesis $h_k = [q, a_k]$, we compute its attention scores w.r.t. the object embeddings in each frame and the words in each subtitle sentence, respectively. Given the encoded hypothesis $H_k \in \mathbb{R}^{L_h \times d}$ for the hypothesis $h_k$ with $L_h$ words, and the encoded visual features $V_t \in \mathbb{R}^{N_o \times d}$ for the frame $v_t$ with $N_o$ objects, we compute their matching scores $M_{k,t} = H_k V_t^{\top} \in \mathbb{R}^{L_h \times N_o}$. We then apply softmax over the second dimension of $M_{k,t}$ to obtain the normalized scores $\bar{M}_{k,t}$. Finally, we compute the QA-aware visual representation $V^{att}_{k,t} = \bar{M}_{k,t} V_t \in \mathbb{R}^{L_h \times d}$. Similarly, we compute the QA-aware subtitle representation $S^{att}_{k,t}$.

Video-Text Fusion. The two QA-aware representations above are then fused together as
$$F_{k,t} = [S^{att}_{k,t}; V^{att}_{k,t}; S^{att}_{k,t} \odot V^{att}_{k,t}] W_F + b_F,$$
where $\odot$ denotes the Hadamard product, $W_F \in \mathbb{R}^{3d \times d}$ and $b_F \in \mathbb{R}^{d}$ are trainable weights and bias, and $F_{k,t} \in \mathbb{R}^{L_h \times d}$ is the fused video-text representation. After collecting $F_{k,t}$ from all time steps, we get $F_k \in \mathbb{R}^{T \times L_h \times d}$. We then apply another convolutional encoder with a max-pooling layer to obtain the output $A_k \in \mathbb{R}^{T \times d}$.

Span Predictor. To predict temporal spans, we predict the probability of each position being the start or the end of the span. Given the fused input $A_k \in \mathbb{R}^{T \times d}$, we produce start probabilities $p^1_k \in \mathbb{R}^{T}$ and end probabilities $p^2_k \in \mathbb{R}^{T}$ using two linear layers with softmax, as shown in the top-right corner of Fig. 4. Different from existing works (Seo et al., 2017; Yu et al., 2018a) that used the span predictor for text only, we use it for joint localization over both video and text, which requires properly aligned joint embeddings.

Span Proposal and Answer Prediction. Given the max-pooled video-text representation $A_k$, we use a linear layer to further encode it. We run max-pooling across all time steps to get a global hypothesis representation $G^{g}_k \in \mathbb{R}^{d}$. With the start and end probabilities from the span predictor, we generate span proposals using dynamic programming (Seo et al., 2017). At training time, we combine the proposals having IoU $\geq 0.5$ with the GT spans, together with the GT spans themselves, to form the final proposals $\{st_p, ed_p\}$ (Ren et al., 2015). At inference time, we take the proposals with the highest confidence scores for each hypothesis. For each proposal, we generate a local representation $G^{l}_k \in \mathbb{R}^{d}$ by max-pooling $A_{k, st_p:ed_p}$. The local and global representations are concatenated to obtain $G_k \in \mathbb{R}^{2d}$. We then forward $\{G_k\}_{k=1}^{5}$ through softmax to get the answer scores $p^{ans} \in \mathbb{R}^{5}$.
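To make the shapes concrete, the following NumPy sketch (our own illustration under the shapes defined above, not the authors' released code) computes the QA-guided attention and the video-text fusion for a single frame; the random arrays stand in for encoder outputs.

```python
# A minimal NumPy sketch (assumptions, not the authors' code) of the
# QA-guided attention and video-text fusion in Sec. 4.2. Shapes follow the
# paper: L_h question-answer tokens, N_o object regions, hidden size d.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d, L_h, N_o, L_s = 128, 12, 20, 30
H = np.random.randn(L_h, d)        # encoded hypothesis (question + answer)
V = np.random.randn(N_o, d)        # encoded object regions of one frame
S = np.random.randn(L_s, d)        # encoded subtitle words of one frame

# QA-guided attention: match every QA token against every object region
M = H @ V.T                            # (L_h, N_o) raw matching scores
V_att = softmax(M, axis=1) @ V         # (L_h, d) QA-aware visual representation
S_att = softmax(H @ S.T, axis=1) @ S   # (L_h, d) QA-aware subtitle representation

# Video-text fusion: concatenate [S_att; V_att; S_att * V_att] and project
W_F = np.random.randn(3 * d, d) * 0.01
b_F = np.zeros(d)
F = np.concatenate([S_att, V_att, S_att * V_att], axis=1) @ W_F + b_F  # (L_h, d)
print(F.shape)
```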
Compared with existing works (Jang et al., 2017; Zhao et al., 2017) that use soft temporal attention, we use a more interpretable hard attention, extracting local features (together with global features) for question answering.

4.3 Training and Inference

In this section, we describe the objective functions used in the STAGE framework. Since our spatial and temporal annotations are collected based on the question and the GT answer, we only apply the attention loss and the span loss to the targets associated with the GT hypothesis (question + GT answer), i.e., $M_{k=y^{ans},t}$, $p^1_{k=y^{ans}}$, and $p^2_{k=y^{ans}}$. For brevity, we omit the subscript $k=y^{ans}$ in the following.

Spatial Supervision. While the attention described in Sec. 4.2 can be learned in a weakly supervised, end-to-end manner, we can also train it with supervision from GT boxes. We define a box as positive if it has an IoU $\geq 0.5$ with the GT box. Consider the attention scores $M_{t,j} \in \mathbb{R}^{N_o}$ from a concept word $w_j$ in the GT hypothesis $h_{y^{ans}}$ to the set of proposal box representations $\{o_{t,r}\}_{r=1}^{N_o}$ at frame $v_t$. We expect the attention on positive boxes to be higher than on negative ones, and therefore use the LSE loss (Li et al., 2017) for supervision:
$$\mathcal{L}_{t,j} = \sum_{r_p \in \Omega_p,\, r_n \in \Omega_n} \log\big(1 + \exp(M_{t,j,r_n} - M_{t,j,r_p})\big),$$
where $M_{t,j,r_p}$ is the $r_p$-th element of the vector $M_{t,j}$, and $\Omega_p$ and $\Omega_n$ denote the sets of positive and negative box indices, respectively. The LSE loss is a smoothed alternative to the widely used hinge loss and is easier to optimize (Li et al., 2017). During training, we randomly sample two negatives for each positive box. We use $\mathcal{L}^{att}_i$ to denote the attention loss for the $i$-th example, obtained by summing $\mathcal{L}_{t,j}$ over all the annotated frames $\{v_t\}$ and concepts $\{w_j\}$. We define the overall attention loss as $\mathcal{L}^{att} = \frac{1}{N}\sum_{i=1}^{N}\mathcal{L}^{att}_i$. At inference time, we choose the boxes with scores higher than 0.2 as the predictions.

Temporal Supervision. Given the softmax-normalized start and end probabilities $p^1$ and $p^2$, we apply a cross-entropy loss:
$$\mathcal{L}^{span} = -\frac{1}{2N}\sum_{i=1}^{N}\big(\log p^1_{y^1_i} + \log p^2_{y^2_i}\big),$$
where $y^1_i$ and $y^2_i$ are the GT start and end indices.

Answer Prediction. Similarly, given the answer probabilities $p^{ans}$, our answer prediction loss is:
$$\mathcal{L}^{ans} = -\frac{1}{N}\sum_{i=1}^{N}\log p^{ans}_{y^{ans}_i},$$
where $y^{ans}_i$ is the index of the GT answer. Finally, the overall loss is a weighted combination of the three objectives above: $\mathcal{L} = \mathcal{L}^{ans} + w_{att}\mathcal{L}^{att} + w_{span}\mathcal{L}^{span}$, where $w_{att}$ and $w_{span}$ are set to 0.1 and 0.5 based on validation-set tuning.
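As a rough illustration of these objectives, the sketch below (our own code with toy inputs, not the released implementation) computes the LSE attention loss for one concept word, the span cross-entropy for one example, and their weighted combination with the answer loss.

```python
# A minimal NumPy sketch (our illustration, with hypothetical shapes) of the
# training objectives in Sec. 4.3: the LSE attention loss over positive /
# negative boxes, the span cross-entropy loss, and their weighted combination.
import numpy as np

def lse_attention_loss(scores, pos_idx, neg_idx):
    """scores: (N_o,) attention scores from one concept word to the boxes
    of one frame; pos_idx / neg_idx: indices of positive / negative boxes."""
    loss = 0.0
    for rp in pos_idx:
        for rn in neg_idx:
            loss += np.log1p(np.exp(scores[rn] - scores[rp]))
    return loss

def span_cross_entropy(p_start, p_end, y_start, y_end):
    """p_start, p_end: softmax-normalized probabilities over T positions."""
    return -0.5 * (np.log(p_start[y_start]) + np.log(p_end[y_end]))

# toy example for a single training instance
scores = np.array([2.0, -1.0, 0.3, 0.1])        # 4 candidate boxes
l_att = lse_attention_loss(scores, pos_idx=[0], neg_idx=[1, 2])
T = 6
p1 = np.full(T, 1.0 / T)
p2 = np.full(T, 1.0 / T)
l_span = span_cross_entropy(p1, p2, y_start=1, y_end=3)
l_ans = -np.log(0.7)                             # -log p(correct answer)
total = l_ans + 0.1 * l_att + 0.5 * l_span       # w_att = 0.1, w_span = 0.5
print(l_att, l_span, total)
```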
5 Experiments

As introduced, our task is spatio-temporal video question answering, requiring systems to temporally localize relevant moments, spatially detect referred objects and people, and answer questions. In this section, we first define the evaluation metrics, then compare STAGE against several baselines, and finally provide a comprehensive analysis of our model. Additionally, we also evaluate STAGE on the full TVQA dataset.

Table 3: TVQA+ test set results.
Model | QA Acc. | Grd. mAP | Temp. mIoU | ASA
ST-VQA (Jang et al., 2017) | 48.28 | - | - | -
two-stream (Lei et al., 2018) | 68.13 | - | - | -
STAGE (video) | 52.75 | 26.28 | 10.90 | 2.76
STAGE (sub) | 67.99 | - | 30.16 | 20.13
STAGE | 74.83 | 27.34 | 32.49 | 22.23
Human (Lei et al., 2018) | 90.46 | - | - | -

5.1 Metrics

To measure QA performance, we use classification accuracy (QA Acc.). We evaluate span prediction using temporal mean Intersection-over-Union (Temp. mIoU), following previous work (Hendricks et al., 2017) on language-guided video moment retrieval. Since the span depends on the hypothesis (QA pair), each QA pair provides a predicted span, but we only evaluate the span of the predicted answer. Additionally, we propose Answer-Span joint Accuracy (ASA), which jointly evaluates answer prediction and span prediction. For this metric, we define a prediction to be correct if the predicted span has an IoU $\geq 0.5$ with the GT span and the answer prediction is correct. Finally, to evaluate object grounding performance, we follow the standard metric from the PASCAL VOC challenge (Everingham et al., 2015) and report the mean Average Precision (Grd. mAP) at an IoU threshold of 0.5. We only consider the annotated words and frames when calculating the mAP.

5.2 Comparison with Baseline Methods

We consider the two-stream model (Lei et al., 2018) as our main baseline. In this model, two streams are used to predict answer scores from the subtitles and videos, respectively, and the final answer scores are produced by summing the scores from both streams. We retrain the model on TVQA+ data using the official code (https://github.com/jayleicn/TVQA), with the same features as STAGE. We also consider the ST-VQA (Jang et al., 2017) model, which is primarily designed for question answering on short videos (GIFs). We additionally provide STAGE variants that use only video or only subtitles to study the effect of using a single modality. Table 3 shows the test results of STAGE and the baselines. STAGE outperforms the baseline model (two-stream) by a large margin in QA Acc., with 9.83% relative gains; this also holds when considering the mean (standard deviation) over 5 runs: 74.20 (0.42). Additionally, STAGE localizes the relevant moments with a temporal mIoU of 32.49% and detects referred objects and people with an mAP of 27.34%. However, a large gap is still observed between STAGE and human performance, showing room for further improvement.

Table 4: Ablation study of STAGE on the TVQA+ val set. Each row adds an extra component to the row above it.
Model | QA Acc. | Grd. mAP | Temp. mIoU | ASA
baseline | 65.79 | 2.74 | - | -
+ CNN | 67.25 | 3.16 | - | -
+ Aligned Fusion (backbone) | 68.31 | 7.31 | - | -
+ Temp. Sup. | 71.40 | 10.86 | 30.77 | 20.09
+ Spat. Sup. | 71.99 | 24.10 | 31.16 | 20.42
+ Local Feature (STAGE) | 72.56 | 25.22 | 31.67 | 20.78
STAGE with GT Span | 73.28 | - | - | -

Table 5: QA Acc. by question type on the TVQA+ val set. For brevity, we only show the top-5 question types (percentages in brackets). AF=Aligned Fusion, TS=Temp. Sup., SS=Spat. Sup., LF=Local Feature. Each column adds an extra component to the column before it.
Question type | baseline | +CNN | +AF | +TS | +SS | +LF
what (60.52%) | 65.66 | 66.43 | 67.58 | 70.76 | 71.25 | 72.34
who (10.24%) | 65.37 | 64.08 | 64.72 | 72.17 | 73.14 | 74.11
where (9.68%) | 65.41 | 64.38 | 68.49 | 71.58 | 71.58 | 74.32
why (9.55%) | 74.31 | 78.82 | 77.43 | 79.86 | 78.12 | 76.39
how (9.05%) | 60.81 | 67.03 | 69.23 | 66.30 | 69.96 | 67.03
total (100%) | 65.79 | 67.25 | 68.31 | 71.40 | 71.99 | 72.56

5.3 Model Analysis

Backbone Model. Given the full STAGE model defined in Sec. 4, we define the backbone model as an ablated version in which we remove the span predictor along with the span proposal module, as well as the explicit attention supervision. We further replace the CNN encoders with RNN encoders and remove the aligned fusion from the backbone model to obtain the baseline model. This baseline model uses RNNs to encode the input sequences and lets the QA pairs interact with the subtitles and videos separately; the final confidence score is the sum of the confidence scores from the two modalities. In the backbone model, we align subtitles with video frames from the start, fusing their representations conditioned on the input QA pair, as in Fig. 4.
We believe this aligned fusion is essential for improving QA performance, as the latter part of STAGE has a joint understanding of both video and subtitles. With both changes, our backbone model obtains 68.31% QA Acc., significantly higher than the baseline's 65.79%. The results are shown in Table 4.

Temporal and Spatial Supervision. In Table 4, we also show the results when using temporal and spatial supervision. After adding temporal supervision, the model is able to ground on the temporal axis, which also improves its performance on the other tasks. Adding spatial supervision gives additional improvements, particularly for Grd. mAP, with a 121.92% relative gain.

Span Proposal and Local Feature. In the second-to-last row of Table 4, we show our full STAGE model, which is augmented with local features $G^l$ for question answering. Local features are obtained by max-pooling the span proposal regions, which contain more relevant cues for answering the questions. With $G^l$, we achieve the best performance across all metrics, indicating the benefit of using local features.

Inference with GT Span. The last row of Table 4 shows the results when our model uses GT spans instead of predicted spans at inference time. We observe better QA Acc. with GT spans.

Accuracy by Question Type. In Table 5, we show a breakdown of QA Acc. by question type. We observe a clear increasing trend on "what", "who", and "where" questions as the backbone network is used and the attention/span modules are added in each column. Interestingly, for "why" and "how" questions, our full model does not show a clear advantage, suggesting that a (textual) reasoning module could be incorporated in future work.

Qualitative Examples. We show two correct predictions in Fig. 5, where Fig. 5(a) uses grounded objects to answer the question and Fig. 5(b) uses text. More examples (including failure cases) are provided in the appendix.

Table 6: QA Acc. on the full TVQA dataset.
Model | Temp. Sup. | val | test-public
two-stream (Lei et al., 2018) | | 65.85 | 66.46
PAMN (Kim et al., 2019b) | | 66.38 | 66.77
multi-task (Kim et al., 2019a) | | 66.22 | 67.05
STAGE backbone (GloVe) | no | 66.46 | -
STAGE backbone + Temp. Sup. (GloVe) | yes | 66.92 | -
STAGE backbone | no | 68.56 | 69.67
STAGE backbone + Temp. Sup. | yes | 70.50 | 70.23

TVQA Results. We also conduct experiments on the full TVQA dataset (Table 6), without relying on the bounding boxes and refined timestamps in TVQA+. Without temporal supervision, STAGE backbone is able to achieve a 3.91% relative gain over the best published result (multi-task) on

Figure 5: Example predictions from STAGE. Span predictions are shown on the top; each block represents a frame, and the color indicates the model's confidence for the spans. For each QA, we show grounding examples and scores for one frame in the GT span. GT boxes are in green.
Predicted and GT answers are labeled by Pred and GT, respectively. TVQA test-public set. Adding temporal supervision, performance is improved to 70.23%. For a fair comparison, we also provided STAGE variants using GloVe (Pennington et al., 2014) instead of BERT (Devlin et al., 2019) as text feature. Using GloVe, STAGE models still achieve better results. 6 Conclusion We collected the TVQA+ dataset and proposed the spatio-temporal video QA task. This task requires systems to jointly localize relevant moments, detect referred objects/people, and answer questions. We further introduced STAGE, an end-to-end trainable framework to jointly perform all three tasks. Comprehensive experiments show that temporal and spatial predictions help improve QA performance, as well as providing explainable results. Though our STAGE achieves state-of-the-art performance, there is still a large gap compared with human performance, leaving space for further improvement. Acknowledgement We thank the reviewers for their helpful feedback. This research is supported by NSF Awards #1633295, 1562098, 1405822, DARPA MCS Grant #N66001-19-2-4031, DARPA KAIROS Grant #FA8750-19-2-1004, Google Focused Research Award, and ARO-YIP Award #W911NF-18-10336. References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, and Stephen Gould. 2018. Bottom-up and top-down attention for image captioning and vqa. In CVPR. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV. Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450. Franc¸ois Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In CVPR. Abhishek Das, Harsh Agrawal, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2016. Human attention in visual question answering: Do humans and deep networks look at the same regions? In EMNLP. Yann Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks. In ICML. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Mark Everingham, SM Ali Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. 2015. The pascal visual object classes challenge: A retrospective. IJCV. Jiyang Gao, Chen Sun, Zhenheng Yang, and Ramakant Nevatia. 2017. Tall: Temporal activity localization via language query. In ICCV. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR. Chunhui Gu, Chen Sun, David A. Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. 2018. Ava: A video dataset of spatio-temporally localized atomic visual actions. In CVPR. 8220 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2018. Localizing moments in video with temporal language. In EMNLP. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan C. Russell. 2017. Localizing moments in video with natural language. In ICCV. Drew A Hudson and Christopher D Manning. 2019. 
Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR. Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatiotemporal reasoning in visual question answering. In CVPR. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L. Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In EMNLP. Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, and Chang Dong Yoo. 2019a. Gaining extra supervision via multi-task learning for multi-modal video question answering. IJCNN. Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, and Chang Dong Yoo. 2019b. Progressive attention memory network for movie story question answering. In CVPR. Kyung-Min Kim, Min-Oh Heo, Seong-Ho Choi, and Byoung-Tak Zhang. 2017. Deepstory: Video story qa by deep embedded memory networks. In IJCAI. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV. Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. 2018. Tvqa: Localized, compositional video question answering. In EMNLP. Yuncheng Li, Yale Song, and Jiebo Luo. 2017. Improving pairwise ranking for multi-label image classification. In CVPR. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV. Chenxi Liu, Junhua Mao, Fei Sha, and Alan Loddon Yuille. 2016. Attention correctness in neural image captioning. In AAAI. Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. 2017. Sphereface: Deep hypersphere embedding for face recognition. In CVPR. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In NeurIPS. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2018. Neural baby talk. In CVPR. Tegan Maharaj, Nicolas Ballas, Aaron C. Courville, and Christopher Joseph Pal. 2017. A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering. In CVPR. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL. Jonghwan Mun, Paul Hongsuck Seo, Ilchae Jung, and Bohyung Han. 2017. Marioqa: Answering questions by watching gameplay videos. In ICCV. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David A. McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In EMNLP. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy S. Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In EMNLP. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. TPAMI. Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Grounding of textual phrases in images by reconstruction. In ECCV. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Kevin J. Shih, Saurabh Singh, and Derek Hoiem. 2016. 
Where to look: Focus regions for visual question answering. In CVPR. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In EMNLP. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. Movieqa: Understanding stories in movies through question-answering. In CVPR. 8221 Alexander Trott, Caiming Xiong, and Richard Socher. 2018. Interpretable counting for visual question answering. In ICLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan R. Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018a. Qanet: Combining local convolution with global self-attention for reading comprehension. In ICLR. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L. Berg. 2018b. Mattnet: Modular attention network for referring expression comprehension. In CVPR. Licheng Yu, Eunbyung Park, Alexander C. Berg, and Tamara L. Berg. 2015. Visual madlibs: Fill in the blank description generation and question answering. In ICCV. Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2017. A joint speaker-listener-reinforcer model for referring expressions. In CVPR. Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In CVPR. Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters. Zhou Zhao, Qifan Yang, Deng Cai, Xiaofei He, and Yueting Zhuang. 2017. Video question answering via hierarchical spatio-temporal attention networks. In IJCAI. Luowei Zhou, Yannis Kalantidis, Xinlei Chen, Jason J. Corso, and Marcus Rohrbach. 2019. Grounded video description. In CVPR. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016a. Visual7W: Grounded Question Answering in Images. In CVPR. Yuke Zhu, Oliver Groth, Michael S. Bernstein, and Li Fei-Fei. 2016b. Visual7w: Grounded question answering in images. In CVPR. A Appendices A.1 Timestamp Annotation During our initial analysis, we find the original timestamp annotations from the TVQA (Lei et al., 2018) dataset to be somewhat loose, i.e., around 8.7% of 150 randomly sampled training questions had a span that was at least 5 seconds longer than what is needed. To have better timestamps, we asked a set of Amazon Mechanical Turk (AMT) workers to refine the original timestamps. Specifically, we take the questions that have a localized span length of more than 10 seconds (41.33% of the questions) for refinement while leaving the rest unchanged. During annotation, we show a question, its correct answer, its associated video (with subtitle), as well as the original timestamp to the AMT workers (illustrated in Fig. 6, with instructions omitted). The workers are asked to adjust the start and end timestamps to make the span as small as possible, but need to contain all the information mentioned in the QA pair. 
Figure 6: Timestamp refinement interface.

We show the span length distributions of the original and the refined timestamps from the TVQA+ train set in Fig. 7. The average span length of the original timestamps is 14.41 seconds, while the average for the refined timestamps is 7.2 seconds. In Table 7 we show STAGE performance on the TVQA+ val set using the original timestamps and the refined timestamps. Models trained with the refined timestamps perform consistently better than those with the original timestamps.

Figure 7: Comparison between the original and the refined timestamps in the TVQA+ train set. The refined timestamps are generally tighter than the original ones.

Table 7: STAGE performance comparison between the original and the refined timestamps on the TVQA+ val set. Each row adds an extra component to the row above it.
Model | QA Acc. (Original) | QA Acc. (Refined)
STAGE backbone | 68.31 | 68.31
+ Temp. Sup. | 70.87 | 71.40
+ Spat. Sup. | 71.23 | 71.99
+ Local Feature (STAGE) | 70.63 | 72.56

A.2 Bounding Box Annotation

At each step, we show a question, its correct answer, and the sampled video frames to an AMT worker (illustrated in Fig. 8). We do not annotate the wrong answers, as most of them cannot be grounded in the video: we checked 200 sampled QAs, and only 3.13% of the wrong answers could be grounded, while 46% of the correct answers could be grounded. As each QA pair has multiple visual concepts as well as multiple frames, each task shows one pair of a concept word and a sampled frame. For example, in Fig. 8, the word "laptop" is highlighted, and workers are instructed to draw a box around it. In our MTurk instructions, we required workers to draw boxes for each instance of a plural word; e.g., for the word "everyone", the worker needs to draw a box for each person in the frame. Note that it is possible that the highlighted word is a non-visual word or a visual word that is not present in the frame being shown. In that case, the workers are allowed to check a box indicating that the object is not present.

Figure 8: Bounding box annotation interface. Here, the worker is asked to draw a box around the highlighted word "laptop".

Recent works (Zellers et al., 2019; Gu et al., 2018) suggest the use of pre-trained detectors for semi-automated annotation. However, since TVQA+ has a wide range of categories (see Fig. 2 and Table 1), it is challenging to use off-the-shelf detectors in the annotation process. As face detection and recognition might be easier than recognizing open-set objects, we initially also tried using strong face detection (Zhang et al., 2016) and recognition (Liu et al., 2017) models for character face annotation, but the quality was much poorer than expected. Thus, we decided to invest the required funds to collect boxes manually and ensure their accuracy. After the collection, with the GT labels, we again used the above models to test face retrieval performance for the 12 most frequently appearing characters in TVQA+. To allow the model of Liu et al. (2017) to work, we manually collected 5 GT faces for each character as our gallery set. At test time, we assign each test face the label of its closest neighbor from the gallery set in the learned embedding space. This method achieves 55.6 F1 / 74.4 precision / 44.4 recall. Such performance is not strong enough to support further research. We found the main reason to be the many partial occlusions of faces (e.g., side faces) in TV shows.
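The gallery-based face retrieval described above can be illustrated with the following minimal sketch (our own code; the embeddings and character names are placeholders), which assigns each test face the label of its nearest gallery face by cosine similarity.

```python
# Minimal sketch (our illustration; embeddings and names are hypothetical) of
# gallery-based face retrieval: each test face embedding is assigned the
# character label of its most similar gallery embedding.
import numpy as np

def assign_character(test_emb, gallery_embs, gallery_labels):
    """test_emb: (d,); gallery_embs: (G, d); gallery_labels: list of length G."""
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    t = test_emb / np.linalg.norm(test_emb)
    sims = g @ t                      # cosine similarities to all gallery faces
    return gallery_labels[int(np.argmax(sims))]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 512))              # e.g. 5 faces x 2 characters
labels = ["sheldon"] * 5 + ["leonard"] * 5
print(assign_character(rng.normal(size=512), gallery, labels))
```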
A.3 Quality

To ensure the quality of the collected bounding boxes, we only allow workers from English-speaking countries to participate in the task. Besides, we set high requirements for workers: they needed to have at least 3,000 accepted HITs and a 95% acceptance rate. Qualified workers were well paid. We also kept track of the quality of the data during collection; workers with poor annotations were disqualified from working on our task. After collection, we further conducted an in-house check: 95.5% of 200 sampled QAs were correctly labeled, indicating the high quality of our data.

A.4 Training Details

We optimize our model using Adam with an initial learning rate of 1e-3 and weight decay of 3e-7. A mini-batch contains 16 questions. We train the model for a maximum of 100 epochs with early stopping: if QA Acc. does not improve for 5 consecutive epochs, training is stopped. The CNN hidden size is set to 128.

A.5 Vision-Language Pretrained Features

In addition, we also consider features from LXMERT (Tan and Bansal, 2019). This model is pretrained on a large number of image-text pairs from multiple image captioning (Lin et al., 2014; Krishna et al., 2017) and image question answering (Goyal et al., 2017; Hudson and Manning, 2019; Zhu et al., 2016a) datasets. Specifically, we use video frame-question pairs as input to LXMERT, and use the extracted features to replace the Faster R-CNN object features and BERT question features. For answers and subtitles, we still use the original BERT features. The results are shown in Table 8.

Table 8: TVQA+ test set results with LXMERT features.
Model | QA Acc. | Grd. mAP | Temp. mIoU | ASA
STAGE-LXMERT | 71.46 | 21.01 | 26.31 | 18.04
STAGE | 74.83 | 27.34 | 32.49 | 22.23

We notice that using LXMERT features lowers STAGE's performance. This is not surprising, as the domains on which the LXMERT model is pretrained are very different from TVQA+: (captions/questions + images) vs. (subtitles + QAs + videos). Future work includes more investigation into adapting these pre-trained vision-language models for more challenging video+dialogue domains.

A.6 More Prediction Examples

We show 6 correct prediction examples from STAGE in Fig. 9. As can be seen from the figure, correct examples usually have correct temporal and spatial localization. In Fig. 10 we show 6 incorrect examples. Incorrect object localization is one of the most frequent failure reasons: while the model is able to localize common objects, it has difficulty with unusual objects (Fig. 10(a, d)) and small objects (Fig. 10(b)). Incorrect temporal localization is another frequent failure reason, e.g., Fig. 10(c, f). There are also cases where the referred objects are not present in the sampled frame, as in Fig. 10(e).
A1: No one visited Penny in her house. A2: Howard visited Penny in her house. A3: Raj visited Penny in her house. A4: Sheldon visited Penny in her house. A5: Leonard visited Penny in her house. Pred GT 00:00.141 → 00:01.391 Raj: I don't believe it. 00:01.559 → 00:02.599 Howard: Neither do I. Q: What is Leonard holding when he is listening to Raj? A1: A notepad. A2: A book. A3: A yellow cup . Pred GT A4: A cell phone. A5: A set of keys. (c) (d) 00:50.790 → 00:52.290 Leonard, it's 2 in the morning. 00:53.918 → 00:59.018 - So? - So it's my turn. Q: Where was Leonard when Sheldon walked into the living room at 2am? A1: On the couch. A2: In the time machine. Pred GT A3: In the kitchen. A4: In his room. A5: He wasn't there. Q: Where was Penny when she called to Leonard? A1: Penny was working at a restaurant. Pred GT A2: Penny was at home. A3: Penny was walking in the street. A4: Penny was at bed. A5: Penny was in the kitchen. 00:41.444 → 00:43.274 Leonard: - What's up? - Yeah, well, I'm at work too. 00:43.446 → 00:46.656 Penny: And you'll never guess who's here infecting my entire station. (e) (f) Figure 9: Correct prediction examples from STAGE. The span predictions are shown on the top of each example, each block represents a frame, the color indicates the model’s confidence for the predicted spans. For each QA, we show grounding examples and scores for one frame in GT span, GT boxes are shown in green. Model predicted answers are labeled by Pred, GT answers are labeled by GT. 8225 00:00.343 → 00:03.763 Past Howard: I haven't seen your Oreos! 00:03.972 → 00:07.062 Past Howard: Just take your bath without them! Q: What was Raj doing when Howard was shouting at someone? A1: Raj was playing some music. A2: Raj was seated in the couch. A3: Raj was taking a shower. Pred A4: Raj was not in the room. A5: Raj was eating lots of cookies in his mouth as he watched Howard. GT 00:27,095 → 00:42.315 Leonard: Sheldon? 00:45.072 → 01:08.032 Leonard: Hello? Q: What is Leonard holding when he comes out of the bedroom? A1: Leonard is holding his cell phone. Pred A2: Leonard is holding a baseball bat. A3: Leonard is holding a shovel. A4: Leonard is holding a coat hanger. A5: Leonard is holding a mock lightsaber. GT (a) (b) 00:26.568 → 00:27.818 Leonard: Sounds like a breakthrough. 00:27.986 → 00:30.486 Should I call Science and tell them to hold the cover? Q: What is Leonard wearing when he is talking to Sheldon? A1: A scarf. A2: A hat. A3: A suit. GT A4: A kilt. Pred A5: Jogging pants. 00:17.350 → 00:19.640 Howard: Plus Superman and Godzilla. 00:20.020 → 00:21.690 Leonard: No, no, no. Orcs are magic. Q: Who grab a bottle after Leonard talked? A1: Sheldon. A2: Howard. A3: Penny. A4: Raj. GT A5: Leonard. Pred (c) (d) 00:23.443 → 00:27.403 - You gotta take one for the team. - Yeah. Sack up, dude. 00:28,823 → 00:30.403 Leonard: Fine. Q: What was Leonard 's drink when they are talking about taking one for the team? A1: Fanta. A2: bottle of water. Pred A3: Sprite. A4: Gatorade. A5: Coke Cola. GT 00:03.743 → 00:06.663 Penny: ...something Elton John would drive through the Everglades. 00:12.502 → 00:14.332 Sheldon: It only moves in time. Q: What direction did Sheldon turn to when Penny insulted their time machine ? A1: He looked at his hands. A2: To the left. A3: Up towards the ceiling. Pred A4: He turned to Penny. A5: To the right. GT (e) (f) Figure 10: Wrong prediction examples from STAGE. 
The span predictions are shown on the top of each example, each block represents a frame, the color indicates the model’s confidence for the predicted spans. For each QA, we show grounding examples and scores for one frame in GT span, GT boxes are shown in green. Model predicted answers are labeled by Pred, GT answers are labeled by GT.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8226–8237 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8226 Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting Po-Yao Huang1, Junjie Hu1, Xiaojun Chang2, Alexander Hauptmann1 1Language Technologies Institute, Carnegie Mellon University 2Faculty of Information Technology, Monash University {poyaoh, junjieh, alex}@cs.cmu.edu, [email protected] Abstract Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people speak different languages biologically share similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT). In this paper, we investigate how to utilize visual content for disambiguation and promoting latent space alignment in unsupervised MMT. Our model employs multimodal back-translation and features pseudo visual pivoting in which we learn a shared multilingual visual-semantic embedding space and incorporate visuallypivoted captioning as additional weak supervision. The experimental results on the widely used Multi30K dataset show that the proposed model significantly improves over the state-ofthe-art methods and generalizes well when images are not available at the testing time. 1 Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014) has achieved near human-level performance (Wu et al., 2016). However, its effectiveness strongly relies on the availability of large-scale parallel corpora. Unfortunately, preparing the parallel data remains a challenge as there are more than 6,500 languages in the world, and recruiting translators with bilingual or multilingual knowledge to cover all those languages is impractical. As a result, developing methods alleviating the need of well-annotated large parallel corpora has recently attracted increasing attention in the community. These methods fall into two broad categories. The first type of methods use a third language as the pivot (Firat et al., 2016; Chen et al., 2017; Cheng et al., 2017; Johnson et al., 2017) to enable zero-resource translation. Although the progress is encouraging, pivoting with a third language still demands bilingual knowledge for collecting large-scale parallel source-pivot and pivottarget corpora. The second type of methods explore unsupervised approaches (Conneau et al., 2018a; Artetxe et al., 2018; Lample et al., 2018a) have recently achieved impressive translation quality. These methods rely only on monolingual data and back-translation (Sennrich et al., 2016a). However, as discussed in (Lample et al., 2018b), the alignment of source-target sentences is uncertain and highly subject to proper initialization. Using visual content for unsupervised MT (Chen et al., 2018; Su et al., 2019) is a promising solution for pivoting and alignment based on its availability and feasibility. Abundant multimodal content in various languages are available online (e.g. Instagram and YouTube). It is also easier to recruit monolingual annotators to describe an image than to find multilingual translators to translate sentences. 
Importantly, visual content is eligible to improve the alignments in the language latent spaces since the physical visual perception is similar among people speaking different languages (e.g. similar “blue car” for a German and a French). Based on these insights, we propose a novel unsupervised multimodal MT framework incorporating images as pseudo pivots promoting latent space alignment. In addition to use features of visual objects for multimodal back-translation, we align a shared multilingual visual-semantic embedding (VSE) space via leveraging disjoint image-sentence pairs in different languages. As illustrated in Figure 2, for sentences approximately pivoted by similar images (src-img-tgt), drawing embeddings of corresponding image-sentence pairs closer results in better alignments of semantically equivalent sentences in the language latent spaces. Inspired by back-translation, we further explore another pseudo pivoting strategy, which approximates multilingual 8227 sentence pairs (src-img-tgt) conditioned on a real image via captioning. Instead of using annotation of images for pivoting as in (Chen et al., 2018), we generate sentences in two languages pivoted on the real image, and then approximately pairing them as weak supervision for training unsupervised MT system. This approach is analogous to a cross-modal version of back-translation. We make the following contributions: (1) Building a unified view of employing visual content for pseudo pivoting. (2) We learn and improve the alignments in the shared multilingual multimodal embedding space for unsupervised MMT with disjoint image-text pairs in different languages. (3) Our model achieves state of the art on Multi30K and generalizes well to the text-only scenario. 2 Background Neural Machine Translation Typical NMT models are based on the encoder-decoder framework with attention (Bahdanau et al., 2015). Let x = (x1, · · · , xN) denotes a source sentence and y = (y1, · · · , yM) denotes a target sentence, where (x, y) ∈(X, Y). The encoder-decoder model learns to estimate the following likelihood from the source sentence to the target sentence: px→y(y|x) = M Y i=1 p(yi|y<i, x) (1) When a parallel corpus is available, the maximum likelihood estimation (MLE) is usually adopted to optimize the (source to target language) NMT model by minimizing the following loss: LMT x→y = E(x,y)∼(X,Y) [−log px→y(y|x)] (2) Among all encoder-decoder models, the Transformer (Vaswani et al., 2017) architecture recently achieves state-of-the-art translation quality. Instead of using recurrent or convolutional operations, it facilitates multi-head self-attention (Lin et al., 2017). In this paper, we choose the Transformer as the underlying architecture for both the translation and the captioning modules. Unsupervised Machine Translation While conventional MT systems rely on the availability of a large parallel corpus, translation with zero-resource (unsupervised MT) (Lample et al., 2018a; Artetxe et al., 2018; Lample et al., 2018b) has drawn increasing research attention. Only monolingual sentences are presented at the training and validation phase, i.e., only x ∈X and y ∈Y are available. Successful unsupervised MT systems share several common principles. First, they require the pre-training step to initialize the model and establish strong monolingual language models properly. For example, XLM (Conneau and Lample, 2019) utilizes the masked language model objective in BERT (Devlin et al., 2019). 
MASS (Song et al., 2019) utilizes a span-based sequence-to-sequence masking objective for language model pre-training. Second, these systems transform the unsupervised problem into a weakly or self-supervised one by automatically generating pseudo sentence pairs via back-translation (Sennrich et al., 2016a). The idea is analogous to the cycle-consistency objective in CycleGAN (Zhu et al., 2017) for image-to-image translation with unpaired data. Specifically, let us denote by $h^*(y) = (\hat{x}_1, \cdots, \hat{x}_N)$ the sentence in the source language inferred from $y \in Y$ such that $h^*(y) = \mathrm{argmax}\ p_{y\to x}(x|y)$. Similarly, let us denote by $g^*(x) = (\hat{y}_1, \cdots, \hat{y}_M)$ the sentence in the target language inferred from $x \in X$ such that $g^*(x) = \mathrm{argmax}\ p_{x\to y}(y|x)$. Then the "pseudo" parallel sentences $(h^*(y), y)$ and $(x, g^*(x))$ can be further used to train the two MT models ($X \to Y$ and $Y \to X$) by minimizing the following back-translation loss:
$$\mathcal{L}^{BT}_{x\leftrightarrow y} = \mathbb{E}_{x\sim X}\left[-\log p_{y\to x}(x|g^*(x))\right] + \mathbb{E}_{y\sim Y}\left[-\log p_{x\to y}(y|h^*(y))\right] \quad (3)$$
Although reinforcement learning-based approaches (He et al., 2016a) and Gumbel-softmax reparametrization (Maddison et al., 2017) have been used to handle back-propagation through the non-differentiable "argmax" predictions, in this paper we do not back-propagate through $h^*(y)$ and $g^*(x)$, to simplify the training process.

3 Unsupervised Multimodal Machine Translation

As illustrated in Figure 1, our model is composed of seven modules: two encoder-decoder pairs for translation, two decoders for captioning, and one shared visual encoder. In this section, we first detail our basic MMT model architecture and the unsupervised setup. Then we introduce pseudo visual pivoting: learning a multilingual VSE and pivoted captioning.

Figure 1: The proposed model structure (English↔German). We incorporate visual objects for unsupervised multimodal MT and improve the language latent space alignment with pseudo visual pivoting (§3.3-§3.4).

3.1 Multimodal MT

Multimodal machine translation (MMT; Specia et al., 2016) considers additional images as a complementary information source for MT.
Essentially, the one-head attention in Transformer is implemented as ci = softmax(Qi(Kx)⊤/ √ d)Vx where {Q, Kx, Vx} are the packed d-dimensional Query, Key, Value vectors, which are the mapped and packed version of {hy i , hx, hx}. For decoding with encoded visual and textual inputs, we utilize multimodal attention to compute the context vector ci: cx i = Attn(hy i−1, hx) + λvAttn(hy i−1, hz) (4) In practice we set λv = 1.0. Our multimodal decoder models the likelihood to predict the next token as: p(yi|y<i, x, z) = softmax(f(ci, yi−1, hy i−1), (5) where f(.) denotes the aggregated non-linear feature mapping in Transformer. 3.2 Unsupervised Learning Unsupervised multimodal MT (Nakayama and Nishida, 2017; Chen et al., 2018; Su et al., 2019) poses a new yet challenging problem. On both the source and target sides, only non-overlapping monolingual multimodal data are presented for training and validation. Specifically, the data available are: (x, zx) ∈(X, Z), (y, zy) ∈(Y, Z), such that {x} ∩{y} = φ, {zx} ∩{zy} = φ. Note that there are no parallel translation pairs available (unsupervised), and the images are mutually exclusive for different languages. For multimodal back-translation, the generated pseudo target sentence conditioned on the source sentence and image can be re-written as g∗(x, zx) = argmax pxz→y(y|x, zx), where pxz→y(y|x, z) = QM i=1 p(yi|y<i, x, z). Similar for pyz→x(x|y, z) and h∗(y, zy). For unsupervised multimodal MT, the multimodal back-translation objective can be extended as: LMBT x↔y = E(x,zx) h -log pyz→x (x|g∗(x, zx), zx) i + E(y,zy) h -log pxz→y y|h∗(y, zy), zy) i (6) We simplify the notation of expectation for clarity. Aligning the latent spaces of the source and target languages without supervision is challenging, as discussed in (Lample et al., 2018b). However, as people speak different languages biologically share similar visual systems, we envision that the shared visual space can serve as the pivot for alignment. Unlike most previous work (Chen et al., 2018; Su et al., 2019) treating images merely as a feature, we propose two visual pivoting approaches: (1) Aligning the multilingual VSE space; (2) Image pseudo pivoting via captioning. As illustrated in Figure 2, for (1), we use images as the approximate pivots connecting real non-parallel sentences. (src-imgtgt.) In (2), for each pivoting real image, we gener8229 a dog running in a field ein hund läuft in einer wiese a biker with a white helmet is in midair. ein mann fährt fahrrad a man ride a bike ein mann der stunts auf einem fahrrad ausführt a man doing stunts on a bike a little boy is going to throw a ball on the beach a little toddler is throwing a volleyball ein kleines kleinkind wirft einen volleyball Alignment in the Multilingual VSE space Pivoted Captioning for Paired-Translation Pivoted Captioning for Back-Translation Figure 2: Pseudo visual pivoting: (1) multilingual VSE (src-img-tgt, in fact src-img1, tgt-img2), and (2) pivoted captioning (src-img-tgt). The italic items do not exist and are approximated (pseudo). (src, img, tgt) is colored in (green, yellow, blue). Solid red and black lines indicate captioning and translation without updates. Encoderdecoder are updated with dashed lines to improve the alignments in the multilingual multimodal embedding space. ate captions in both languages to construct “pseudo” source-target sentence pairs. (src-img-tgt), where the italic item is “pseudo”. We collectively term the proposed approach pseudo visual pivoting. 
3.3 Multilingual Visual-Semantic Embedding

We posit that for X, Y, Z, the two language spaces X and Y can be properly associated by respectively aligning the two monolingual VSE spaces X ↔ Z and Y ↔ Z. We leverage the contrastive objective in cross-modal retrieval (Kiros et al., 2014; Huang et al., 2019b) for aligning multimodal inputs in the shared VSE space, where the embeddings are close if they are semantically associated or paired.

Specifically, we generalize the fine-grained (object-level and token-level), monolingual textual-to-visual and visual-to-textual attention (Lee et al., 2018; Huang et al., 2019c) to the multilingual setup. For fine-grained image-sentence alignment, let s_ij = cos(h^x_i, h^z_j) denote the cosine similarity between the i-th encoded token and the j-th encoded visual object. The image-sentence similarity can be measured by averaging the cosine similarities between the visually-attended sentence embeddings and the visual embeddings of the objects. The visually-attended sentence embeddings h^{zx} are the weighted combinations of the encoded tokens h^x. Precisely, we compute h^{zx}_j = Σ_{i=1}^{N} α_{ij} h^x_i, where j = 1 ... K and α_{ij} = softmax_i(s_{ij}). Denoting the image-sentence similarity by S(x, z) = (1/2K) Σ_{j=1}^{K} cos(h^{zx}_j, h^z_j) + (1/2N) Σ_{i=1}^{N} cos(h^{xz}_i, h^x_i), the contrastive triplet loss encouraging image-sentence alignment in the VSE space can be written as:

$$\mathcal{L}_c(x, z) = \max_{\tilde{x}}\left[\gamma - S(x, z) + S(\tilde{x}, z)\right]_+ + \max_{\tilde{z}}\left[\gamma - S(x, z) + S(x, \tilde{z})\right]_+ , \qquad (7)$$

where [·]_+ is the hinge function, and x̃ and z̃ are the non-paired (negative) instances for x and z. Intuitively, as the loss decreases, the matched images and sentences are drawn closer, by at least a margin γ, than the hardest non-paired ones. Formally, we minimize the following objective for cross-modal alignment in the two VSE spaces:

$$\mathcal{L}^{VSE}_{x,y,z} = \mathbb{E}_{(x, z_x)}\left[\mathcal{L}_c(x, z_x)\right] + \mathbb{E}_{(y, z_y)}\left[\mathcal{L}_c(y, z_y)\right] \qquad (8)$$

3.4 Image Captioning for Pseudo Pivoting

Inspired by back-translation with monolingual corpora, we propose a novel cross-modal approach to generate weakly-supervised pairs to guide language space alignment for unsupervised MMT. Precisely, we leverage image captioning to synthesize pseudo sentence pairs (pivoted on and conditioned on the image) for back-translation and paired-translation.

Image Captioning. Image captioning models are akin to MT models except for the non-sequential visual encoder. For example, an image-to-source captioning model estimates the likelihood as p_{z→x}(x|z) = ∏_{i=1}^{N} p(x_i | x_{<i}, z), where z is the encoded image. Essentially, the captioning model learns to minimize the following loss:

$$\mathcal{L}^{CAP}_{z \to x} = \mathbb{E}_{(z_x, x)}\left[-\log p_{z \to x}(x \mid z_x)\right] \qquad (9)$$

As illustrated in Figure 2, we incorporate two captioning models, Z → X and Z → Y, to generate "pseudo" parallel sentences pivoted on the image as additional weak supervision to better align the language latent spaces in unsupervised MMT. For example, with Image → English and Image → German, the generated pseudo (English, German) pair is then pivoted on the Image. Learning captioning models is practical as it is easier to collect large-scale image-text pairs than translation pairs. We pre-train these captioning models and use them to generate sentences in two languages depicting the same image, i.e., c*_x(z_x) = argmax p_{z→x}(x|z_x) and c*_y(z_x) = argmax p_{z→y}(y|z_x).
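To make the pivoting step concrete, here is a minimal sketch of generating an image-pivoted pseudo pair with two captioning models. It is only an illustration under toy assumptions: toy_caption, W_zx, and W_zy are hypothetical stand-ins, and greedy argmax decoding is used; the two objectives introduced next consume such pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, VOCAB, LEN = 4, 10, 6        # toy sizes, not values from the paper

def toy_caption(image, W):
    """Stand-in for a captioning model p(token | image): greedy per-step argmax."""
    logits = np.outer(np.ones(LEN), image @ W) + rng.normal(0.0, 0.1, size=(LEN, VOCAB))
    return logits.argmax(axis=-1)                 # c*(z) = argmax decoding

W_zx = rng.normal(size=(D_IMG, VOCAB))            # toy Image -> "English" captioner
W_zy = rng.normal(size=(D_IMG, VOCAB))            # toy Image -> "German" captioner

z_x = rng.normal(size=D_IMG)                      # one pivoting image

c_x = toy_caption(z_x, W_zx)                      # c*_x(z_x)
c_y = toy_caption(z_x, W_zy)                      # c*_y(z_x)

# The image-pivoted pair (c_x, c_y) acts as a pseudo (source, target) sentence pair.
print("pseudo pair pivoted on z_x:", c_x.tolist(), "<->", c_y.tolist())
```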
The pivoted captions then enable the following two objectives:

Pivoted Captioning for Back-Translation. We utilize the synthetic multilingual captions (i.e., c*_x(z_x), c*_y(z_x) from the source images and c*_x(z_y), c*_y(z_y) from the target images) to reversely reconstruct the synthetic captions from their translations in both directions. Formally, we compute the following caption-based back-translation loss:

$$\begin{aligned} \mathcal{L}^{CBT}_{x \leftrightarrow y} = \; & \mathbb{E}_{z_x}\big[-\log p_{yz \to x}\big(c^*_x(z_x) \mid g^*(c^*_x(z_x), z_x), z_x\big) - \log p_{xz \to y}\big(c^*_y(z_x) \mid g^*(c^*_y(z_x), z_x), z_x\big)\big] \\ + \; & \mathbb{E}_{z_y}\big[-\log p_{yz \to x}\big(c^*_x(z_y) \mid h^*(c^*_x(z_y), z_y), z_y\big) - \log p_{xz \to y}\big(c^*_y(z_y) \mid h^*(c^*_y(z_y), z_y), z_y\big)\big] \end{aligned} \qquad (10)$$

Pivoted Captioning for Paired-Translation. With the synthetic "pseudo" paired (source, target) captions pivoted on an image (e.g., (c*_y(z_x), c*_x(z_x))), the caption-based paired-translation loss is defined as:

$$\mathcal{L}^{CPT}_{x \leftrightarrow y} = \mathbb{E}_{z_x}\big[-\log p_{xz \to y}(c^*_y(z_x) \mid c^*_x(z_x), z_x)\big] + \mathbb{E}_{z_y}\big[-\log p_{yz \to x}(c^*_x(z_y) \mid c^*_y(z_y), z_y)\big] \qquad (11)$$

Note that, similar to the text back-translation, we do not back-propagate through the captioning step for L^CPT_{x↔y} and L^CBT_{x↔y}. For optimization, we sample mini-batches and minimize the following loss:

$$\mathcal{L} = \mathcal{L}^{MBT}_{x \leftrightarrow y} + \mathcal{L}^{VSE}_{x,y,z} + \mathcal{L}^{CBT}_{x \leftrightarrow y} + \mathcal{L}^{CPT}_{x \leftrightarrow y} \qquad (12)$$

Here we drop the weights w of each loss for clarity. In practice, all the weights are set to 1.0 except for w_CPT, for which we employ a decreasing schedule specified in the next section.

4 Experiments and Results

We first describe the implementation details and the experimental setup, and then compare our approach with the baselines with detailed analysis.

4.1 Dataset and Preprocessing

We conduct experiments on the Multi30K (Elliott et al., 2016) dataset, the benchmark dataset for multimodal MT. It contains 29K training, 1K validation, and 1K testing images. Each image has three descriptions in English/German/French, which are translations of each other. To ensure the model never learns from parallel sentences, we randomly split the Multi30K training and validation sets in half for one language and use the complementary half for the other. The resulting M30k-half are two corpora with non-overlapping 14,500 training and 507 validation image-sentence pairs, respectively.

For text pre-processing, we use Moses (Koehn et al., 2007) scripts for tokenization and apply the Byte Pair Encoding (BPE) (Sennrich et al., 2016b) from XLM. To identify and extract features of visual objects in images, we use the Faster-RCNN (Ren et al., 2015) model in (Anderson et al., 2018) to detect up to 36 salient visual objects per image and extract their corresponding 2048-dim regional features.

4.2 Implementation

We use the Transformer as the underlying architecture for the translation and captioning modules. Each encoder/decoder of the translator is a 6-layer stacked Transformer network with 8 heads, 1024 hidden units, and a 4096 feed-forward filter size. The captioner is a 6-layer Transformer decoder with the same configuration. The visual encoder is a 1-layer MLP which maps visual features to the shared 1,024-dim embedding space and then adds positional encodings that encode the spatial locations (normalized top-left and bottom-right coordinates) of the visual objects. Our implementation is based on the codebases of XLM and MASS.

4.3 Experimental Details

We conduct unsupervised MMT experiments on Multi30K-half for two language pairs: English-French and English-German.

Pre-Training. Pre-training is a critical step for unsupervised MT. We follow the setup in UMMT (Su et al., 2019) for a fair comparison.
For each language, we create a text-only pre-training set by combining the shuffled first 10 million sentences of the WMT News Crawl datasets from 2007 to 2017 with 10 copies of M30k-half, resulting in a text-only dataset of 10.145 million non-parallel sentences for English, French, and German, respectively. For text pre-training, we leverage the script and the masked seq-to-seq objective proposed in MASS, which randomly masks a span in a sentence and then encourages the model to decode and reconstruct the masked sequence, as the monolingual language model pre-training. More details can be found in the original paper. Note that there is no fine-tuning (back-translation) on WMT, for a fair comparison with the other baselines.

For multimodal pre-training of the captioning modules, we use the out-of-domain MSCOCO (Lin et al., 2014) dataset. We randomly split the training set into two disjoint subsets; each set contains 56,643 images and 283,215 sentences. We use the translate-train strategy as in XNLI (Conneau et al., 2018b): we leverage Google Translate to translate one set of English sentences into French and German. We pre-train the captioning modules with Eq. 9 and fix them during fine-tuning to avoid overfitting. Note that the captioning modules are trained on non-parallel sentences with disjoint image subsets, which implies no overlap between the English-German or English-French sentences.

Fine-tuning on Multi30K-half. We fine-tune on the training set of Multi30K-half for 18 epochs. We train our model with the Adam optimizer (Kingma and Ba, 2014) with a linear warm-up and a learning rate varying from 10^-7 to 10^-5. We apply a linearly decreasing weight for w_CPT, from 1.0 to 0.1 at the 10th epoch, as we empirically observe that the generated captions are relatively too noisy to serve as good pseudo pairs in the later stage of training. The margin γ in VSE is set to 0.1. Other hyper-parameters in the Transformer follow the default setting in MASS. We use 4 Titan Xp GPUs with 1,000 tokens in each mini-batch for training.

Evaluation and Model Selection. For evaluation, we report BLEU scores computed by multi-bleu.pl (from Moses)1 and METEOR2 scores on the Multi30K testing set. For model selection without a parallel validation corpus, we consider the unsupervised criterion proposed in (Lample et al., 2018a), based on the BLEU scores of "round-trip" translations (source → target → source and target → source → target), which have been empirically shown to correlate well with the testing metrics.

1 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl
2 https://github.com/cmu-mtlab/meteor

4.4 Baseline Models

We compare with recent unsupervised text-only and multimodal MT baselines, listed in the following: (1) MUSE (Conneau et al., 2018a) is a word-to-word MT model with pre-trained Wikipedia embeddings. (2) UNMT (Lample et al., 2018a) sets the tone of using denoising autoencoders and back-translation for unsupervised MT. (3) XLM (Conneau and Lample, 2019) deploys the masked language model from BERT. (4) MASS (Song et al., 2019) uses a masked seq-to-seq pre-training objective and achieves the current state-of-the-art performance in text-only unsupervised MT. (5) Game-MMT (Chen et al., 2018) is a reinforcement learning-based unsupervised MMT model. (6) UMMT (Su et al., 2019) uses visual features for denoising autoencoding and back-translation. UMMT is the current state of the art in unsupervised MMT.
We either use the reported scores in the original papers or run their best scripts, with their publicly available pre-trained language models, for fine-tuning on Multi30K-half.

4.5 Main Results: Unsupervised MMT

4.5.1 Comparison with the Baseline Models

Table 1 presents the benchmark results against other state-of-the-art unsupervised MT and MMT models on the Multi30K testing set. The first four rows show the results of the recent text-only MT models. Game-MMT and UMMT are MMT models using both image and text inputs. Our full model (T+V+VSE+CBT+CPT) yields new state-of-the-art performance in BLEU and METEOR, outperforming the text-only and multimodal baseline models by a large margin. Notably, our full model outperforms UMMT by +5.5∼12.5 BLEU, setting a new state of the art in unsupervised MMT.

Although pre-training plays a vital role in unsupervised MT, comparing Ours-Text only and Ours-Full, the results suggest that multimodal content can further boost the performance for unsupervised MT. Images provide a +2.7∼3.7 BLEU improvement across the four tasks. Note that our model uses different monolingual pre-training corpora from MASS and XLM for the fair comparison with UMMT. With a similar pre-training objective, our text-only model is worse than MASS, while Ours-Full outperforms MASS by +2.3∼3.7 BLEU.

Comparing the multimodal models trained with and without visual content (UMMT-T vs. UMMT-Full and Ours-T vs. Ours-Full), our model achieves +2.5∼3.7 improvements in BLEU, versus +1.4∼2.5 for UMMT. The results imply that, even with a higher text-only baseline (e.g., 49.5 vs. 37.2 in en→fr), the proposed model incorporates visual content more effectively.

Table 1: Results on unsupervised MT. Comparison with benchmarks on the Multi30K testing set. Our full model is with T+V+VSE+CBT+CPT. The best score is marked bold. † means text-only. * is the METEOR score shown in the UMMT paper. Each cell lists BLEU / METEOR; "–" marks scores not reported.

Model | en→fr | fr→en | en→de | de→en
MUSE† (Conneau et al., 2018a) | 8.5 / – | 16.8 / – | 15.7 / – | 5.4 / –
UNMT† (Lample et al., 2018a) | 32.8 / – | 32.1 / – | 22.7 / – | 26.3 / –
XLM† (Conneau and Lample, 2019) | 46.3 / 64.3 | 42.0 / 38.1 | 27.4 / 48.7 | 30.7 / 31.0
MASS† (Song et al., 2019) | 49.8 / 65.8 | 43.7 / 38.7 | 30.2 / 51.3 | 32.5 / 33.4
Game-MMT (Chen et al., 2018) | – | – | 16.6 / – | 19.6 / –
UMMT-T† (Su et al., 2019) | 37.2 / 33.7* | 38.5 / 36.4 | 21.0 / 25.4* | 25.0 / 28.4
UMMT-Full (Su et al., 2019) | 39.8 / 35.5* | 40.5 / 37.2 | 23.5 / 26.1* | 26.4 / 29.7
Ours-Text only† | 49.5 / 65.7 | 43.5 / 38.5 | 30.1 / 51.5 | 32.4 / 33.0
Ours-Full | 52.3 / 67.6 | 46.0 / 39.8 | 33.9 / 54.1 | 36.1 / 34.7

In Figure 3, we provide some qualitative results on the Multi30K testing set. We observe a consistent improvement in unsupervised translation quality from the text-only model to our full model. Without parallel translation pairs as the vital supervision, the proposed pseudo visual pivoting successfully disambiguates word senses within the same syntactic category and results in improved cross-lingual word alignment; for instance, "cafe" vs. "soda" machine in the third French example, and "felsigen" (rocky) vs. "verschneiten" (snowy) in the first German example.
In pseudo visual pivoting, +VSE promotes the alignment in the monolingual VSE spaces and results in an additional +1.3∼2.0 gain in BLEU. This improvement validates our hypothesis that the visual space can effectively serve as the bridge connecting the source and target language latent spaces. Also, synthesizing image-pivoted pseudo caption pairs effectively provides weak supervision for aligning the cross-lingual latent space in unsupervised MMT. We observe that using the pivoted captions for paired translation (CPT) is more effective than treating them as back-translation pairs (CBT). Utilizing generated image-pivoted captions is shown to be a promising approach for weakly supervised or unsupervised MMT. The full model, which employs VSE, CBT, and CPT, achieves +1.9∼3.1 improvements compared to our multimodal baseline (row two, visual features only).

Table 2: Ablation studies. BLEU comparison of different training objectives.

Model (Ours) | en→fr | fr→en | en→de | de→en
Text only | 49.52 | 43.48 | 30.10 | 32.35
T+V | 50.43 | 44.10 | 31.01 | 32.95
T+V+VSE | 51.72 | 45.73 | 32.67 | 34.94
T+V+CPT | 51.64 | 45.55 | 33.04 | 35.02
T+V+CBT | 51.23 | 45.21 | 32.51 | 33.87
T+V+VSE+CBT | 51.81 | 45.83 | 33.01 | 34.38
T+V+CPT+CBT | 51.85 | 45.65 | 33.61 | 35.85
T+V+VSE+CPT | 52.19 | 46.10 | 33.73 | 35.60
Full Model | 52.29 | 45.98 | 33.85 | 36.07

4.5.3 Generalizability

How does our unsupervised MMT model generalize when images are not available at testing time? Table 3 shows the testing results without images. As can be observed, our model generalizes well: the differences are mostly less than 1.0 BLEU. Since our model, when tested without visual content, still outperforms the other unsupervised text-only and multimodal MT models listed in Table 1, the minor drop in BLEU implies that the improved cross-lingual latent space alignment via pseudo visual pivoting is likely to be more critical than using images as an input feature for decoding. Fortunately, such alignment is already preserved in the training phase with the proposed approach.

Figure 3: Qualitative results of the proposed model (SRC: source; T: text-only model; T+V: our full model; GT: ground truth).
(a) English→French
SRC: a young boy is hanging onto a clothing rack . | T: un jeune garçon se tient sur un chariot de vêtements . | T+V: un jeune garçon s'accroche à un poteau de vêtements | GT: un jeune garçon s'accroche à un portant .
SRC: a cat sits on top of a store sign . | T: un chat assis sur le sommet d'un magasin de vêtements | T+V: un chat est assis sur un panneau de magasin . | GT: un chat est assis sur une enseigne de magasin .
SRC: two boys in front of a soda machine . | T: deux garçons en train de faire une machine à café . | T+V: deux garçons devant une machine à soda . | GT: deux garçons devant une machine à soda .
(b) English→German
SRC: a man and a boy on a rocky beach . | T: ein mann und eine junge auf einem verschneiten strand . | T+V: ein mann und ein junge auf einem felsigen strand . | GT: ein mann und ein junge auf einem felsigen strand .
SRC: man jumping with a rock formation in background . | T: mann springt mit einem felsbrocken im hintergrund . | T+V: mann springt vor einer felsformation im hintergrund in die luft | GT: mann springt vor einer felsformation im hintergrund .
SRC: two men playing guitar in front of a large audience . | T: zwei männer spielen gitarre im freien . | T+V: zwei männer spielen gitarre vor einem großen publikum . | GT: zwei männer spielen gitarre vor einem großen publikum .

An interesting question is: how much does the visual content (as a feature) contribute? As in leave-one-feature-out cross-validation, we compare
Model en→fr fr→en en→de de→en UMMT 39.44-0.35 40.30-0.23 23.18-0.34 25.47-0.92 Ours-no VSE 51.60-0.25 45.39-0.26 33.25-0.36 35.15-0.70 Ours-Full 51.64-0.65 45.48-0.50 33.32-0.53 35.04-1.03 Table 3: BLEU with text-only inputs at the testing time. Subscripts are the differences to testing with T+V. the difference of performance between inferencing with and without images. The larger the difference (the subscripts in Table 3) implies a model better utilizes visual content. Compared with UMMT, our model has better utilization. We observe that the key to such difference is the VSE objective. Our model trained without the VSE objective results in worse utilization (smaller difference at the testing time), possibly because the source text-image pairs are distant in the multilingual VSE space. 4.5.4 Real-pivoting & Low-resource Corpora Will our model benefit from “real” pivoting (srcimg1, img1-tgt, overall src-img1-tgt)? We train our models with overlapped images while leaving sentences in the source and target languages unparalleled (use no translation pairs). From the first three rows in Table 4, the performance is improved when training with the overlapped images and their corresponding sentences. Comparing the improvement from 0% to 100% of the text-only model and the full model, a larger gain is observed with the proposed pseudo visual pivoting which aligns and reduces uncertainty in the language latent spaces. Furthermore, under the low-resource setting (3.0K non-parallel data, row six and seven), a substantial improvement over the text-only model is still observed. These results suggest that the proposed pseudo visual pivoting is likely to generalize to the semi-supervised and the low-resource setting, which we consider as our future work. Img overlap % (# imgs/sents) en→fr fr→en en→de de→en 0% (14.5K/14.5K) 52.29 45.98 33.85 36.07 50% (22K/22K) 55.13 47.54 34.61 37.01 100% (29K/29K) 58.34 50.57 35.45 38.55 0% (T only/14.5K) 49.52 43.48 30.10 32.35 100% (T only/29K) 53.35 46.27 31.35 34.06 0% (3.0K/3.0K) 31.48 27.91 23.94 26.60 0% (T only/3.0K) 30.33 26.95 21.65 23.47 Table 4: Testing BLEU of the full T+V model and the text-only model trained with overlapped images or lowresource unpaired corpora. 4.5.5 Supervised Case Although the proposed pseudo visual pivoting targets unsupervised MMT, we are also interested in its performance under the fully supervised setup. To gain insights, we conduct supervised MMT experiments by changing the back-translation objective for unsupervised MT (Eq. 6) to the supervised MT objective (Eq. 2) with additional visual inputs. We benchmark with recent supervised MMT models, including Imagination (Elliott and K´ad´ar, 2017), LIUM-CVC (Caglayan et al., 2017), and VAG (Zhou et al., 2018) on Multi30K. Table 5 shows the testing results. Our model significantly outperforms other baselines and achieves state-of-the-art performance. Comparing to the unsupervised model trained with full Multi30K (Table 4,100% (29K/29K)), the direct supervision from parallel translation pairs results in a +6.5∼7.1 gain in BLEU. Notably, images provide a minor improvement with full supervision from translation pairs. This result implies that, compared to serving as a complementary feature, visual information likely contributes more to improving crosslingual alignment via pseudo visual pivoting for MMT with limited supervision. 
8234 en→fr en→de Model BLEU METEOR BLEU METEOR Imagination 30.2 51.2 LIUM-CVC 52.7 69.5 30.7 52.2 VAG 53.8 70.3 31.6 52.2 Ours (T) 65.2 79.3 42.0 60.5 Ours (T+V) 65.5 79.1 42.3 60.6 Table 5: Supervised MMT results on Multi30K 5 Related Work Unsupervised MT For pivoting with a third language, Firat et al. (2016) pre-train a multi-way multilingual model to generate pseudo pairs to improve zero-shot translation. Chen et al. (2017) use a teacher-student framework and assume parallel sentences share a similar likelihood for generating sentences in the third language while Cheng et al. (2017) maximize the expected likelihood. Our model does not rely on a third language. Our framework is along the line of research in (Lample et al., 2018a,b; Conneau and Lample, 2019), which aims at learning an aligned latent space between the two languages to translate by reconstruction. Nevertheless, we focus on the multimodal setup where the visual space is dissimilar to the language spaces with challenging asymmetric interactions between modalities. Supervised MMT Supervised MMT is introduced in (Specia et al., 2016) as a multi-encoder singledecoder framework with additional image inputs. Huang et al. (2016) encode word sequences with regional visual objects while Calixto and Liu (2017) leverage global visual feature. LIUMCVC (Caglayan et al., 2017) uses element-wise multiplication to model the image-text interaction. Imagination (Elliott and K´ad´ar, 2017) and VAG (Zhou et al., 2018) learns with the auxiliary image reconstruction and source-image-target triplet alignment tasks, respectively. While these methods achieve improvements, their advantage over the text-only models is still minor under the supervised scenario. As analyzed in (Caglayan et al., 2019), visual content is more critical when the textual content is limited or uncertain in MMT. We study the more challenging unsupervised MMT. Unsupervised MMT To our best knowledge, three recent works have generalized MMT to the unsupervised setting. Nakayama and Nishida (2017) learn modal-agnostic fixed length image/sentence embeddings. In contrast, our model promotes finegrained (object-token) varying-length embedding, which better aligns VSE space. Game-MMT (Chen et al., 2018) use a captioning and a translation model maximizing the likelihood of translated captions to original sentences. We synthesize captions for symmetric back-translation and considers no ground truth image annotation in the loop. Empirically, it is preferred to separate real and generated captions. UMMT (Su et al., 2019) uses Transformers, autoencoder loss, and multimodal back-translation. We do not use autoencoder. Our model leverages object detection for multimodal back-translation and equips pseudo visual pivoting. Image Captioning and VSE Our method draws inspiration from captioning and cross-modal retrieval. Recent progress in captioning aims at using reinforcement learning to improve diversity (Dai et al., 2017) or maximize metric (Rennie et al., 2017). We use a vanilla MLE objective. For learning VSE, we leverage the contrastive loss (Kiros et al., 2014) from cross-modal retrieval, which is shown more robust than maximizing canonical correlation among modalities as in (Andrew et al., 2013; Huang et al., 2018). For encoding image and text, we generalize the cross-modality attention from SCAN (Lee et al., 2018) to the multilingual scenario for learning a multilingual VSE space (Gella et al., 2017; Huang et al., 2019a). 
6 Conclusion We have presented a novel approach: pseudo visual pivoting for unsupervised multimodal MT. Beyond features, we use visual content to improve the crosslingual alignments in the shared latent space. Precisely, our model utilizes the visual space as the approximate pivot for aligning the multilingual multimodal embedding space. Besides, it synthesizes image-pivoted pseudo sentences in two languages and pairs them to translate by reconstruction without parallel corpora. The experiments on Multi30K show that the proposed model generalizes well and yields new state-of-the-art performance. Acknowledgments This work is supported by the DARPA grants funded under the AIDA program (FA8750-18-20018), the LWLL program (FA8750-18-2-0501), and the GAILA program (award HR00111990063). Xiaojun Chang is supported by Australian Research Council Discovery Early Career Award (DE190100626). The authors would like to thank the anonymous reviewers for their suggestions and Google Cloud for providing the research credits. 8235 References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. In International Conference on Machine Learning, pages 1247–1255. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes Garcia-Martinez, Fethi Bougares, Lo¨ıc Barrault, Marc Masana, Luis Herranz, and Joost Van de Weijer. 2017. Lium-cvc submissions for wmt17 multimodal translation task. In SECOND CONFERENCE ON MACHINE TRANSLATION, volume 2, pages 432–439. Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Lo¨ıc Barrault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4159–4170, Minneapolis, Minnesota. Association for Computational Linguistics. Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 992–1003. Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925–1935, Vancouver, Canada. Association for Computational Linguistics. Yun Chen, Yang Liu, and Victor OK Li. 2018. Zeroresource neural machine translation with multiagent communication game. In Thirty-Second AAAI Conference on Artificial Intelligence. Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. 
In Proceedings of the TwentySixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 3974–3980. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 7057–7067. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018a. Word translation without parallel data. In International Conference on Learning Representations (ICLR). Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2475–2485. Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. 2017. Towards diverse and natural image descriptions via a conditional gan. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2989–2998. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual englishgerman image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70–74. Association for Computational Linguistics. Desmond Elliott and ´Akos K´ad´ar. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 130–141. Asian Federation of Natural Language Processing. Orhan Firat, Baskaran Sankaran, Yaser Al-onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016. Zero-resource translation with multi-lingual neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268–277, Austin, Texas. Association for Computational Linguistics. 8236 Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 2017. Image pivoting for learning multilingual multimodal representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2839– 2845. Association for Computational Linguistics. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016a. Dual learning for machine translation. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 820–828. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Po-Yao Huang, Xiaojun Chang, and Alexander G. Hauptmann. 2019a. Multi-head attention with diversity for learning grounded multilingual multimodal representations. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLPIJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1461–1467. Association for Computational Linguistics. Po-Yao Huang, Guoliang Kang, Wenhe Liu, Xiaojun Chang, and Alexander G. Hauptmann. 2019b. Annotation efficient cross-modal retrieval with adversarial attentive alignment. In Proceedings of the 27th ACM International Conference on Multimedia, MM 2019, Nice, France, October 21-25, 2019, pages 1758–1767. ACM. Po-Yao Huang, Junwei Liang, Jean-Baptiste Lamare, and Alexander G. Hauptmann. 2018. Multimodal filtering of social media for temporal monitoring and event analysis. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, ICMR 2018, Yokohama, Japan, June 11-14, 2018, pages 450–457. ACM. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multimodal neural machine translation. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany, pages 639–645. The Association for Computer Linguistics. Po-Yao Huang, Vaibhav, Xiaojun Chang, and Alexander G. Hauptmann. 2019c. Improving what crossmodal retrieval models learn through object-oriented inter- and intra-modal attention networks. In Proceedings of the 2019 on International Conference on Multimedia Retrieval, ICMR 2019, Ottawa, ON, Canada, June 10-13, 2019, pages 244–252. ACM. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1700–1709. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. NIPS Workshop. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177–180. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations (ICLR). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. 
arXiv preprint arXiv:1803.08024. Jindrich Libovick´y and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 196–202. Association for Computational Linguistics. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. 8237 Zhouhan Lin, Minwei Feng, C´ıcero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Hideki Nakayama and Noriki Nishida. 2017. Zeroresource machine translation by multimodal encoder–decoder network with multimedia pivot. Machine Translation, 31(1-2):49–64. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1179–1195. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning, pages 5926–5936. Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany, pages 543–553. Yuanhang Su, Kai Fan, Nguyen Bach, C-C Jay Kuo, and Fei Huang. 2019. Unsupervised multi-modal neural machine translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10482–10491. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Processing Systems, pages 5998–6008. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, and Zhou Yu. 2018. A visual attention grounding neural model for multimodal machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3643– 3653, Brussels, Belgium. Association for Computational Linguistics. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 2242–2251.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8238–8247 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8238 A Multitask Learning Approach for Diacritic Restoration Sawsan Alqahtani 1,2 and Ajay Mishra1 and Mona Diab2∗ 1AWS, Amazon AI 2The George Washington University [email protected], [email protected], [email protected] Abstract In many languages like Arabic, diacritics are used to specify pronunciations as well as meanings. Such diacritics are often omitted in written text, increasing the number of possible pronunciations and meanings for a word. This results in a more ambiguous text making computational processing on such text more difficult. Diacritic restoration is the task of restoring missing diacritics in the written text. Most state-of-the-art diacritic restoration models are built on character level information which helps generalize the model to unseen data, but presumably lose useful information at the word level. Thus, to compensate for this loss, we investigate the use of multi-task learning to jointly optimize diacritic restoration with related NLP problems namely word segmentation, part-of-speech tagging, and syntactic diacritization. We use Arabic as a case study since it has sufficient data resources for tasks that we consider in our joint modeling. Our joint models significantly outperform the baselines and are comparable to the state-ofthe-art models that are more complex relying on morphological analyzers and/or a lot more data (e.g. dialectal data). 1 Introduction In contrast to English, some vowels in languages such as Arabic and Hebrew are not part of the alphabet and diacritics are used for vowel specification.1 In addition to pertaining vowels, diacritics can also represent other features such as case marking and phonological gemination in Arabic. Not including diacritics in the written text in such languages increases the number of possible meanings as well as pronunciations. Humans rely on the surrounding ∗*The work was conducted while the author was with AWS, Amazon AI. 1Diacritics are marks that are added above, below, or inbetween the letters to compose a new letter or characterize the letter with a different sound (Wells, 2000). context and their previous knowledge to infer the meanings and/or pronunciations of words. However, computational models, on the other hand, are inherently limited to deal with missing diacritics which pose a challenge for such models due to increased ambiguity. Diacritic restoration (or diacritization) is the process of restoring these missing diacritics for every character in the written texts. It can specify pronunciation and can be viewed as a relaxed variant of word sense disambiguation. For example, the Arabic word ÕΫ Elm2 can mean “flag” or “knowledge”, but the meaning as well as pronunciation is specified when the word is diacritized ( Õ Î « Ealamu means “flag” while ÕΫ Eilomo means “knowledge”). As an illustrative example in English, if we omit the vowels in the word pn, the word can be read as pan, pin, pun, and pen, each of these variants have different pronunciations and meanings if it composes a valid word in the language. 
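To make the ambiguity argument concrete, here is a tiny Python illustration of how omitting vowels inflates the number of readings. The word list is a toy stand-in for a lexicon, not a resource used in the paper.

```python
# Restore one vowel between the consonants of "pn" and keep only candidates
# found in a tiny stand-in lexicon; a diacritic restoration system faces an
# analogous per-character choice, with context as the only disambiguator.
LEXICON = {"pan", "pen", "pin", "pun", "pine", "pane"}
VOWELS = "aeiou"

def restorations(consonant_skeleton):
    c1, c2 = consonant_skeleton            # e.g. "pn" -> ("p", "n")
    candidates = {c1 + v + c2 for v in VOWELS}
    return sorted(candidates & LEXICON)

print(restorations("pn"))                  # ['pan', 'pen', 'pin', 'pun']
```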
The state-of-the-art diacritic restoration models reached a decent performance over the years using recurrent or convolutional neural networks in terms of accuracy (Zalmout and Habash, 2017; Alqahtani et al., 2019; Orife, 2018) and/or efficiency (Alqahtani et al., 2019; Orife, 2018); yet, there is still room for further improvements. Most of these models are built on character level information which help generalize the model to unseen data, but presumably lose some useful information at the word level. Since word level resources are insufficient to be relied upon for training diacritic restoration models, we integrate additional linguistic information that considers word morphology as well as word relationships within a sentence to partially compensate for this loss. 2We use Buckwalter Transliteration encoding http://www.qamus.org/transliteration.htm. 8239 In this paper, we improve the performance of diacritic restoration by building a multitask learning model (i.e. joint modeling). Multitask learning refers to models that learn more than one task at the same time, and has recently been shown to provide good solutions for a number of NLP tasks (Hashimoto et al., 2016; Kendall et al., 2018). The use of a multitask learning approach provides an end-to-end solution, in contrast to generating the linguistic features for diacritic restoration as a preprocessing step. In addition, it alleviates the reliance on other computational and/or data resources to generate these features. Furthermore, the proposed model is flexible such that a task can be added or removed depending on the data availability. This makes the model adaptable to other languages and dialects. We consider the following auxiliary tasks to boost the performance of diacritic restoration: word segmentation, part-of-speech (POS) tagging, and syntactic diacritization. We use Arabic as a case study for our approach since it has sufficient data resources for tasks that we consider in our joint modeling.3 The contributions of this paper are twofold: 1. We investigate the benefits of automatically learning related tasks to boost the performance of diacritic restoration; 2. In doing so, we devise a state-of-the-art model for Arabic diacritic restoration as well as a framework for improving diacritic restoration in other languages that include diacritics. 2 Diacritization and Auxiliary Tasks We formulate the problem of (full) diacritic restoration (DIAC) as follows: given a sequence of characters, we identify the diacritic corresponding to each character in that sequence from the following set of diacritics {a, u, i, o, K, F, N, ∼, ∼a, ∼u, ∼i, ∼F, ∼K, and ∼N}. We additionally consider three auxiliary tasks: syntactic diacritization, partof-speech tagging, and word segmentation. Two of which operate at the word level (syntactic diacritization and POS tagging) and the remaining tasks (diacritic restoration and word segmentation) operate at the character level. This helps diacritic restoration utilize information from both charac3Other languages that include diacritics lack such resources; however, the same multitask learning framework can be applied if data resources become available. ter and word level information, bridging the gap between the two levels. Syntactic Diacritization (SYN): This refers to the task of retrieving diacritics related to the syntactic positions for each word in the sentence, which is a sub-task of full diacritic restoration. 
Arabic is a templatic language where words comprise roots and patterns in which patterns are typically reflective of diacritic distributions. Verb patterns are more or less predictable however nouns tend to be more complex. Arabic diacritics can be divided into lexical and inflectional (or syntactic) diacritics. Lexical diacritics change the meanings of words as well as their pronunciations and their distribution is bound by patterns/templates. In contrast, inflectional diacritics are related to the syntactic positions of words in the sentence and are added to the last letter of the main morphemes of words (word finally), changing their pronunciations.4 Inflectional diacritics are also affected by word’s root (e.g. weak roots) and semantic or morphological properties (e.g. with the same grammatical case, masculine and feminine plurals take different diacritics). Thus, the same word can be assigned a different syntactic diacritic reflecting syntactic case, i.e. depending on its relations to the remaining words in the sentence (e.g. subject or object). For example, the diacritized variants Õ Î « Ealama and Õ Î « Ealamu which both mean “flag” have the corresponding syntactic diacritics: a and u, respectively. That being said, the main trigger for accurate syntactic prediction is the relationships between words, capturing semantic and most importantly, syntactic information. Because Arabic has a unique set of diacritics, this study formulates syntactic diacritization in the following way: each word in the input is tagged with a single diacritic representing its syntactic position in the sentence.5 The set of diacritics in syntactic diacritization is the same as the set of diacritics for full diacritic restoration. Other languages that include diacritics can include syntactic related diacritics but in a different manner and complexity 4Diacritics that are added due to passivization are also syntactic in nature but are not considered in our syntactic diacritization task. That said, they are still considered in the full diacritic restoration model. 5Combinations of diacritics is possible but we combine valid possibilities together as one single unit in our model. For example, the diacritics ∼and a are combined to form an additional diacritic ∼a. 8240 compared to Arabic. Word segmentation (SEG): This refers to the process of separating affixes from the main unit of the word. Word segmentation is commonly used as a preprocessing step for different NLP applications and its usefulness is apparent in morphologically rich languages. For example, the undiacritized word whm Ñëð might be diacritized as waham∼a Ñ ë ð “and concerned”, waham Ñ ë ð “illusion”, where the first diacritized word consists of two segments “wa ham∼a” Ñ ë ð while the second is composed of one word. Word segmentation can be formulated in the following way: each character in the input is tagged following IOB tagging scheme (B: beginning of a segment; I: inside a segment; O: out of the segment) (Diab et al., 2004). Part-Of-Speech Tagging (POS): This refers to the task of determining the syntactic role of a word (i.e. part of speech) within a sentence. POS tags are highly correlated with diacritics (both syntactic and lexical): knowing one helps determine or reduce the possible choices of the other. For instance, the word I. J» ktb in the sentence ktb [someone] means “books” if we know it to be a noun whereas the word would be either I. J » katab “someone wrote” or I. J » kat∼ab “made someone write” if it is known to be a verb. 
POS tagging can be formulated in the following way: each word in the input is assigned a POS tag from the Universal Dependencies tagset (Taji et al., 2017).6 3 Approach We built a diacritic restoration joint model and studied the extent to which sharing information is plausible to improve diacritic restoration performance. Our joint model is motivated by the recent success of the hierarchical modeling proposed in (Hashimoto et al., 2016) such that information learned from an auxiliary task is passed as input to the diacritic restoration related layers.7 6Refer to https://universaldependencies.org/. This tagset is chosen because it includes essential POS tags in the language, and it is unified across different languages which makes it suitable to investigate more languages in the future. 7We also experimented with learning tasks sharing some levels and then diverging to specific layers for each tasks. However, this did not improve the performance compared to the diacritic restoration model when we don’t consider any additional task. 3.1 Input Representation Since our joint model may involve both character and word level based tasks, we began our investigation by asking the following question: how to integrate information between these two levels? Starting from the randomly initialized character embeddings as well as a pretrained set of embeddings for words, we follow two approaches (Figure 1 visually illustrates the two approaches with an example). Figure 1: An example of embedding vectors for the word cat and its individual characters: c,a, and t. (i) A character-based representation for the word cat from its individual characters; (ii) A concatenation for the word embedding with each of its individual characters. (1) Character Based Representation: We pass information learned by character level tasks into word level tasks by composing a word embedding from the word’s characters. We first concatenate the individual embeddings of characters in that word, and then apply a Bidirectional Long Short Term Memory (BiLSTM) layer to generate denser vectors.8 This helps representing morphology and word composition into the model. (2) Word-To-Character Representation: To pass information learned by word level tasks into character level tasks, we concatenate each word with each of its composed characters during each pass, similar to what is described in Watson et al. (2018)’s study. This helps distinguishing the individual characters based on the surrounding context, implicitly capturing additional semantic and syntactic information. 8We also evaluated the use of a feedforward layer and unidirectional Long Short Term Memory (LSTM) but a BiLSTM layer yielded better results. 8241 Figure 2: The diacritic restoration joint model. All Char Embed entities refer to the same randomly initialized character embedding learned during the training process. Pretrained embeddings refer to fixed word embeddings obtained from fastText (Bojanowski et al., 2017). (i) shows the input representation for CharToWord and WordToChar embedding which is the same as in Figure 1. (ii) represents the diacritic restoration joint model; output labels from each task are concatenated with WordToChar embedding and optionally with segmentation hidden. 
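The following is a minimal PyTorch sketch of the two input representations from Section 3.1: a character-based word vector composed by a BiLSTM over character embeddings, and the WordToChar representation that concatenates a word's (pretrained) embedding with each of its character embeddings. The dimensions are toy placeholders, not the paper's configuration, and the fastText vector is simulated with random values.

```python
import torch
import torch.nn as nn

# Toy sizes; the paper uses 300-dim embeddings and 250 hidden units per direction.
CHAR_VOCAB, CHAR_DIM, WORD_DIM, HID = 40, 8, 12, 6

char_embed = nn.Embedding(CHAR_VOCAB, CHAR_DIM)          # randomly initialized, learned
char_bilstm = nn.LSTM(CHAR_DIM, HID, bidirectional=True, batch_first=True)

chars = torch.tensor([[3, 1, 20]])                        # character ids of one word, e.g. "cat"
word_vec = torch.randn(1, WORD_DIM)                       # stand-in for a fixed fastText vector

# (1) Character-based word representation: BiLSTM over the word's characters,
#     using the final states of both directions as the composed word vector.
_, (h_n, _) = char_bilstm(char_embed(chars))              # h_n: (2, 1, HID)
char_to_word = h_n.transpose(0, 1).reshape(1, -1)         # (1, 2*HID)

# (2) Word-to-character representation: concatenate the word's embedding
#     with each of its character embeddings.
c = char_embed(chars)                                     # (1, 3, CHAR_DIM)
word_to_char = torch.cat([word_vec.unsqueeze(1).expand(-1, c.size(1), -1), c], dim=-1)
print(char_to_word.shape, word_to_char.shape)             # (1, 12)  (1, 3, 20)
```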
3.2 The Joint Model For all architectures, the main component is BiLSTM (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997), which preserves the temporal order of the sequence and has been shown to provide the state-of-the-art performance in terms of accuracy (Zalmout and Habash, 2017; Alqahtani et al., 2019). After representing characters through random initialization and representing words using pretrained embeddings obtained from fastText (Bojanowski et al., 2017), the learning process for each batch runs as follows: 1. We extract the two additional input representation described in Section 3.1; 2. We apply BiLSTM for each of the different tasks separately to obtain their corresponding outputs; 3. We pass all outputs from all tasks as well as WordToChar embedding vectors as input to the diacritic restoration model and obtain our diacritic outputs. Figure 2 illustrates the diacritic restoration joint model. As can be seen, SYN as well as POS tagging are trained on top of CharToWord representation which is basically the concatenation of the pretrained embedding for each word with the character-based representations described in Figure 1. SEG is also trained separately on top of the character embeddings. We pass the outputs of all these tasks along with WordToChar representation to train the BiLSTM diacritic restoration model. Omitting a task is rather easy, we just remove the related components for that task to yield the appropriate model. We optionally pass the last hidden layer for SEG along with the remaining input to the diacritic restoration model.9 4 Experimental Setups Dataset: We use the Arabic Treebank (ATB) dataset: parts 1, 2, and 3 and follow the same data division as Diab et al. (2013). Table 1 illustrates the data statistics. For word based tasks, we segment each sentence into space tokenized words. For character based tasks, we, in addition, add the special boundary “<w>” between these words, and then each word is further segmented into its characters, similar to that in (Alqahtani et al., 2019). We pass each word through the model along with a specific number of previous and future words (+/- 10 words). Parameter Settings: For all tasks, we use 250 hidden units in each direction (500 units in both directions combined) and 300 as embedding size. We use 3 hidden layers for tasks except in SEG in 9Passing the last hidden layer for POS tagging and/or SYN did not improve the performance; the pretrained embeddings are sufficient to capture important linguistic signals. 8242 Train Test Dev OOV 502,938 63,168 63,126 7.3% Table 1: Number of words and out of vocabulary (OOV) rate for Arabic. OOV rate indicates the percentage of undiacritized words in the test set that have not been observed during training. which we use only one layer. We use Adam for learning optimization with a learning rate of 0.001. We use 20 for epoch size, 16 for batch size, 0.3 for hidden dropout, and 0.5 for embedding dropout. We initialize the embedding with a uniform distribution [-0.1,0.1] and the hidden layers with normal distribution. The loss scores for all considered tasks are combined and then normalized by the number of tasks in the model. Evaluation metrics: We use accuracy for all tasks except diacritic restoration. For diacritic restoration, the two most typically used metrics are Word Error Rate (WER) and Diacritic Error Rate (DER), the percentages of incorrectly diacritized words and characters, respectively. 
In order to approximate errors in the syntactic diacritics, we use Last Diacritic Error Rate (LER), the percentage of words that have incorrect diacritics in the last positions of words. To evaluate the models’ ability to generalize beyond observed data, we compute WER on OOV (out-of-vocabulary) words.10 Significance testing: We ran each experiment three times and reported the mean score.11 We used the t-test with p = 0.05 to evaluate whether the difference between models’ performance and the diacritic restoration is significant (Dror et al., 2018). 5 Results and Analysis Table 2 shows the performance of joint diacritic restoration models when different tasks are considered. When we consider WordToChar as input to the diacritic restoration model, we observe statistically significant improvements for all evaluation metrics. This is justified by the ability of word embeddings to capture syntactic and semantic information at the sentence level. The same character is disambiguated in terms of the surrounding context 10Words that appear in the training dataset but do not appear in the test dataset. 11Higher number of experiments provide more robust conclusion about the models’ performance. We only considered the minimum acceptable number of times to run each experiment due to limited computational resources. as well as the word it appears in (e.g. the character t in the word cat would be represented slightly different than t in a related word cats or even a different word table). We consider both character based model as well as WordToChar based model as our baselines (BASE). We use WordToChar representation rather than characters for all remaining models that jointly learn more than one task. For all experiments, we observe improvements compared to both baselines across all evaluation metrics. Furthermore, all models except DIAC+SEG outperform WordToChar diacritic restoration model in terms of WER, showing the benefits of considering output distributions for the other tasks. Despite leveraging tasks focused on syntax (SYN/POS) or morpheme boundaries (SEG), the improvements extend to lexical diacritics as well. Thus, the proposed joint diacritic restoration model is also helpful in settings beyond word final syntactic related diacritics. The best performance is achieved when we consider all auxiliary tasks within the diacritic restoration model. Impact of Auxiliary Tasks: We discuss the impact of adding each investigated task towards the performance of the diacritic restoration model. Word segmentation (DIAC+SEG): When morpheme boundaries as well as diacritics are learned jointly, the WER performance is slightly reduced on all and OOV words. This reduction is attributed mostly to lexical diacritics. As Arabic exhibits a non-concatenative fusional morphology, reducing its complexity to a segmentation task might inherently obscure morphological processes for each form. Observing only slight improvement is surprising; we believe that this is due to our experimental setup and does not negate the importance of having morphemes that assign the appropriate diacritics. We speculate that the reason for this is that we do not capture the interaction between morphemes as an entity, losing some level of morphological information. 
For instances, the words waham∼a versus wahum for the undiacritized words whm (bold letters refer to consonants distinguishing it from diacritics) would benefit from morpheme boundary identifications to tease apart wa from hum in the second variant (wahum), emphasizing that these are two words. But on the other hand, it adds an 8243 Task WER DER LER/Lex OOV WER Zalmout and Habash (2017) 8.21 20.2 Zalmout and Habash (2019a) 7.50 Alqahtani and Diab (2019a) 7.6 2.7 32.1 BASE (Char) 8.51 (±0.01) 2.80 5.20/5.54 34.56 BASE (WordToChar) 8.09 (±0.05) 2.73 5.00/5.30 32.10 DIAC+SEG 8.35 (±0.02) 2.82 5.20/5.46 33.97 DIAC+SYN 7.70* (±0.02) 2.60 4.72/5.08 30.94 DIAC+POS 7.86* (±0.14) 2.65 4.72/5.20 32.28 DIAC+SEG+SYN 7.70* (±0.05) 2.59 4.65/5.03 31.33 DIAC+SEG+POS 7.73* (±0.08) 2.62 4.73/5.01 31.31 DIAC+SYN+POS 7.72* (±0.06) 2.61 4.62/5.06 31.05 ALL 7.51* (±0.09) 2.54 4.54/4.91 31.07 Table 2: Performance of the joint diacritic restoration model when different related tasks are considered. Bold numbers represent the highest score per column. Almost all scores are higher than the base model BASE (char). * denotes statistically significant improvements compared to the baselines. Lex refers to the percentage of words that have incorrect lexical diacritics only, excluding syntactic diacritics. additional layer of ambiguity for other cases like the morpheme ktb in the diacritic variants kataba, kutubu, sayakotubo - note that the underlined segment has the same consonants as the other variants in which identifying morphemes increased the number of possible diacritic variants without learning the interactions between adjacent morphemes. Furthermore, we found inconsistencies in the dataset for morphemes which might cause the drop in performance when we only consider SEG. When we consider all tasks together, these inconsistencies are reduced because of the combined information from different linguistic signals towards improving the performance of the diacritic restoration model. Syntactic diacritization (DIAC+SYN): By enforcing inflectional diacritics through an additional focused layer within the diacritic restoration model, we observe improvements on WER compared to the baselines. We notice improvements on syntactic related diacritics (LER score), which is expected given the nature of syntactic diacritization in which it learns the underlying syntactic structure to assign the appropriate syntactic diacritics for each word. Improvements also extend to lexical diacritics, and this is because word relationships are captured during learning syntactic diacritics in which BiLSTM modeling for words is integrated. POS tagging (DIAC+POS): When we jointly train POS tagging with full diacritic restoration, we notice improvements compared to both baselines. Compared to syntactic diacritization, we obtain similar findings across all evaluation metrics except for WER on OOV words in which POS tagging drops. Including POS tagging within diacritic restoration also captures important information about the words; the idea of POS tagging is to learn the underlying syntax of the sentence. In comparison to syntactic diacritization, it involves different types of information like passivization which could be essential in learning correct diacritics. Ablation Analysis: Incorporating all the auxiliary tasks under study within the diacritic restoration model (ALL) provides the best performance across all measures except WER on OOV words in which the best performance was given by DIAC+SYN. 
We discuss the impact of removing one task at a time from ALL and examine whether its exclusion significantly impacts the performance. Excluding SEG from the process drops the performance of diacritic restoration. This shows that even though SEG did not help greatly when it was combined solely with diacritic restoration, the combinations of SEG and the other word based tasks filled in the gaps that were missing from just identifying morpheme boundaries. Excluding either POS tagging or syntactic diacritization also hurts the performance which shows that these tasks complement each other and, taken together, they improve the performance of diacritic restoration model. Input Representation: Impact of output labels: Table 3 shows the different models when we do not pass the labels of the investigated tasks (the input is only WordToChar representation) against the same models when we do. We noticed a drop in performance 8244 across all models. Notice that all models - even when we do not consider the label have better performance than the baselines. This also supports the benefits of WordToChar representation. Tasks With Labels Without Labels DIAC+SYN 7.70 7.99 DIAC+POS 7.86 7.93 DIAC+SEG+SYN 7.70 7.93 DIAC+SEG+POS 7.73 7.99 DIAC+SYN+POS 7.72 7.97 ALL 7.51 7.91 Table 3: WER performance when we do not consider the output labels for the investigated tasks. Bold numbers represent the highest score per row. Last hidden layer of SEG: Identifying morpheme boundaries did not increase accuracy as we expected. Therefore, we examined whether information learned from the BiLSTM layer would help us learn morpheme interactions by passing the output of last BiLSTM layer to the diacritic restoration model along with segmentation labels. We did not observe any improvements towards predicting accurate diacritics when we pass information regarding the last BiLSTM layer. For ALL, the WER score increased by 0.22%. Thus, it is sufficient to only utilize the segment labels for diacritic restoration. Passive and active verbs: Passivation in Arabic is denoted through diacritics and missing such diacritic can cause ambiguity in some cases (Hermena et al., 2015; Diab et al., 2007). To examine its impact, we further divide verbs in the POS tagset into passive and active, increasing the size by one. Table 4 shows the diacritic restoration performance with and without considering passivation. We notice improvements, in some combinations of tasks, across all evaluation metrics compared to the pure POS tagging, showing its importance in diacritic restoration models. Task With Pass Without Pass DIAC+POS 7.65 7.86 DIAC+SEG+POS 7.65 7.73 DIAC+SYN+POS 7.78 7.72 ALL 7.62 7.51 Table 4: WER performance for different diacritic restoration models when passivation is considered. Bold numbers represent the highest score per row. Level of linguistic information: The joint diacritic restoration model were built empirically and tested against the development set. We noticed that to improve the performance, soft parameter sharing in a hierarchical fashion performs better on diacritic restoration. We experimented with building a joint diacritic restoration model that jointly learns segmentation and diacritics through hard parameter sharing. To learn segmentation with diacritic restoration, we shared the embedding layer between the two tasks as well as sharing some or all layers of BiLSTM. We got WER on all words (8.53∼9.35) in which no improvements were shown compared to character based diacritic restoration. 
To learn word based tasks with diacritic restoration, we pass WordToChar representation to the diacritic restoration and/or CharToWord representation for word-based tasks. The best that we could get for both tasks is 8.23%∼9.6%; no statistically significant improvements were found. This shows the importance of hierarchical structure for appropriate diacritic assignments. Qualitative analysis: We compared random errors that are correct in DIAC (character-based diacritic restoration) with ALL in which we consider all investigated tasks. Although ALL provides accurate results for more words, it introduces errors in other words that have been correctly diacritized by DIAC. The patterns of such words are not clear. We did not find a particular category that occurs in one model but not the other. Rather, the types and quantity of errors differ in each of these categories. State-of-the-art Comparison: Table 2 also shows the performance of the state-of-the-art models. ALL model surpass the performance of Zalmout and Habash (2017). However, Zalmout and Habash (2017)’s model performs significantly better on OOV words. Zalmout and Habash (2019a) provides comparable performance to ALL model. The difference between their work and that in (Zalmout and Habash, 2017) is the use of a joint model to learn morphological features other than diacritics (or features at the word level), rather than learning these features individually. Zalmout and Habash (2019a) obtained an additional boost in performance (0.3% improvement over ours) when they add a dialect variant of Arabic in the learning process, sharing information between both languages. Alqahtani and Diab (2019a) provides comparable performance to ALL and better performance on some task combinations in terms of WER on all and OOV words. The difference between their model and our BASE model is the addition of a 8245 CRF (Conditional Random Fields) layer which incorporate dependencies in the output space at the cost of model’s computational efficiency (memory and speed). Zalmout and Habash (2019b) provides the current state-of-the-art performance in which they build a morphological disambiguation framework in Arabic similar to (Zalmout and Habash, 2017, 2019a). They reported their scores based on the development set which was not used for tuning. In the development set, they obtained 93.9% which significantly outperforms our best model (ALL) by 1.4%. Our approach is similar to (Zalmout and Habash, 2019b). We both follow WordToChar as well as CharToWord input representations discussed in Section 3.1, regardless of the specifics. Furthermore, we both consider the morphological outputs as features in our diacritic restoration model. In Zalmout and Habash (2019b), morphological feature space that are considered is larger, making use of all morphological features in Arabic. Furthermore, Zalmout and Habash (2019b) use sequence-to-sequence modeling rather than sequence classification as ours. Unlike Zalmout and Habash (2019b), our model is more flexible allowing additional tasks to be added when sufficient resources are available. We believe that neither the underlying architecture nor the consideration of all possible features were the crucial factor that led to the significant reduction in WER performance. Rather, morphological analyzers is crucial in such significant improvement. As a matter of fact, in Zalmout and Habash (2019b), the performance significantly drops to 7.2 when they, similar to our approach, take the highest probabilistic value as a solution. 
Thus, we believe that the use of morphological analyzers enforces valid word composition in the language and filter out invalid words (a side effect of using characters as input representation). This also justifies the significant improvement on OOV words obtained by (Zalmout and Habash, 2017). Thus, we believe that a global knowledge of words and internal constraints within words are captured. Auxiliary tasks: We compared the base model of the auxiliary tasks to the state-of-the-art (SOTA). For SEG, BiLSTM model has comparable performance to that in (Zalmout and Habash, 2017) (SEG yields 99.88% F1 compared to SOTA 99.6%). For POS, we use a shallower tag set (16 number of tags compared to ∼70) than typically used in previous models hence we do not have a valid comparison set. For SYN, we compare our results with (Hifny, 2018) which uses a hybrid network of BiLSTM and Maximum Entropy to solve syntactic diacritization. The SYN yields results comparable to SOTA (our model performs 94.22 vs. SOTA 94.70). 6 Related Work The problem of diacritization has been addressed using classical machine learning approaches (e.g. Maximum Entropy and Support Vector Machine) (Zitouni and Sarikaya, 2009; Pasha et al., 2014) or neural based approaches for different languages that include diacritics such as Arabic, Vietnamese, and Yoruba. Neural based approaches yield stateof-the-art performance for diacritic restoration by using Bidirectional LSTM or temporal convolutional networks (Zalmout and Habash, 2017; Orife, 2018; Alqahtani et al., 2019; Alqahtani and Diab, 2019a). Arabic syntactic diacritization has been consistently reported to be difficult, degrading the performance of full diacritic restoration (Zitouni et al., 2006; Habash et al., 2007; Said et al., 2013; Shaalan et al., 2009; Shahrour et al., 2015; Darwish et al., 2017). To improve the performance of syntactic diacritization or full diacritic restoration in general, previous studies followed different approaches. Some studies separate lexical from syntactic diacritization (Shaalan et al., 2009; Darwish et al., 2017). Other studies consider additional linguistic features such as POS tags and word segmentation (i.e. tokens or morphemes) (Ananthakrishnan et al., 2005; Zitouni et al., 2006; Zitouni and Sarikaya, 2009; Shaalan et al., 2009). Hifny (2018) addresses syntactic diacritization by building BiLSTM model in which its input embeddings are augmented with manually generated features of context, POS tags, and word segments. Rashwan et al. (2015) use deep belief network to build a diacritization model for Arabic that focuses on improving syntactic diacritization and build subclassifiers based on the analysis of a confusion matrix and POS tags. Regarding incorporating linguistic features into the model, previous studies have either used morphological features as a preprocessing step or as a ranking step for building diacritic restoration models. As a preprocessing step, the words are converted to their constituents (e.g. morphemes, lemmas, or n-grams) and then diacritic restoration 8246 models are built on top of that (Ananthakrishnan et al., 2005; Alqahtani and Diab, 2019b). Ananthakrishnan et al. (2005) use POS tags to improve diacritic restoration at the syntax level assuming that POS tags are known at inference time. As a ranking procedure, all possible analyses of words are generated and then the most probable analysis is chosen (Pasha et al., 2014; Zalmout and Habash, 2017, 2019a,b). 
Zalmout and Habash (2017) develop a morphological disambiguation model to determine Arabic morphological features including diacritization. They train the model using BiLSTM and consult with a LSTM-based language model as well as other morphological features to rank and score the output analysis. Similar methodology can be found in (Pasha et al., 2014) but using Support Vector Machines. This methodology shows better performance on out of vocabulary (OOV) words compared to pure character models. 7 Discussion & Conclusion We present a diacritic restoration joint model that considers the output distributions for different related tasks to improve the performance of diacritic restoration. Our results shows statistically significant improvements across all evaluation metrics. This shows the importance of considering additional linguistic information at morphological and/or sentence levels. Including semantic information through pretrained word embeddings within the diacritic restoration model also helped boosting the diacritic restoration performance. Although we apply our joint model on Arabic, this model provides a framework for other languages that include diacritics whenever resources become available. Although we observed improvements in terms of generalizing beyond observed data when using the proposed linguistic features, the OOV performance is still an issue for diacritic restoration. References Sawsan Alqahtani and Mona Diab. 2019a. Investigating input and output units in diacritic restoration. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE. Sawsan Alqahtani and Mona Diab. 2019b. Investigating input and output units in diacritic restoration. In 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA). Sawsan Alqahtani, Ajay Mishra, and Mona Diab. 2019. Convolutional neural networks for diacritic restoration. In EMNLP. Sankaranarayanan Ananthakrishnan, Shrikanth Narayanan, and Srinivas Bangalore. 2005. Automatic diacritization of arabic transcripts for automatic speech recognition. In Proceedings of the 4th International Conference on Natural Language Processing, pages 47–54. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics. Kareem Darwish, Hamdy Mubarak, and Ahmed Abdelali. 2017. Arabic diacritization: Stats, rules, and hacks. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 9–17. Mona Diab, Mahmoud Ghoneim, and Nizar Habash. 2007. Arabic diacritization in the context of statistical machine translation. In Proceedings of MTSummit. Mona Diab, Nizar Habash, Owen Rambow, and Ryan Roth. 2013. Ldc arabic treebanks and associated corpora: Data divisions manual. arXiv preprint arXiv:1309.5652. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic tagging of arabic text: From raw text to base phrase chunks. In Proceedings of HLT-NAACL 2004: Short papers. Association for Computational Linguistics. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhikers guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1383–1392. Nizar Habash, Ryan Gabbard, Owen Rambow, Seth Kulick, and Mitch Marcus. 2007. Determining case in arabic: Learning complex linguistic behavior requires complex linguistic features. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple nlp tasks. arXiv preprint arXiv:1611.01587. Ehab W Hermena, Denis Drieghe, Sam Hellmuth, and Simon P Liversedge. 2015. Processing of arabic diacritical marks: Phonological–syntactic disambiguation of homographic verbs and visual crowding effects. Journal of Experimental Psychology: Human Perception and Performance, 41(2):494. Yasser Hifny. 2018. Hybrid lstm/maxent networks for arabic syntactic diacritics restoration. IEEE Signal Processing Letters, 25(10):1515–1519. 8247 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7482–7491. Iroro Orife. 2018. Attentive sequence-to-sequence learning for diacritic restoration of yor\ub\’a language text. arXiv preprint arXiv:1804.00832. Arfath Pasha, Mohamed Al-Badrashiny, Mona T Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of arabic. In LREC, volume 14, pages 1094–1101. Mohsen AA Rashwan, Ahmad A Al Sallab, Hazem M Raafat, and Ahmed Rafea. 2015. Deep learning framework with confused sub-set resolution architecture for automatic arabic diacritization. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 23(3):505–516. Ahmed Said, Mohamed El-Sharqwi, Achraf Chalabi, and Eslam Kamal. 2013. A hybrid approach for arabic diacritization. In International Conference on Application of Natural Language to Information Systems, pages 53–64. Springer. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Khaled Shaalan, Hitham M Abo Bakr, and Ibrahim Ziedan. 2009. A hybrid approach for building arabic diacritizer. In Proceedings of the EACL 2009 workshop on computational approaches to semitic languages, pages 27–35. Association for Computational Linguistics. Anas Shahrour, Salam Khalifa, and Nizar Habash. 2015. Improving arabic diacritization through syntactic analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1309–1315. Dima Taji, Nizar Habash, and Daniel Zeman. 2017. Universal dependencies for arabic. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 166–176. Daniel Watson, Nasser Zalmout, and Nizar Habash. 2018. Utilizing character and word embeddings for text normalization with sequence-to-sequence models. arXiv preprint arXiv:1809.01534. JC Wells. 2000. Orthographic diacritics and multilingual computing. Language problems and language planning, 24(3):249–272. Nasser Zalmout and Nizar Habash. 2017. Don’t throw those morphological analyzers away just yet: Neural morphological disambiguation for arabic. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 704– 713. Nasser Zalmout and Nizar Habash. 2019a. Adversarial multitask learning for joint multi-feature and multidialect morphological modeling. 
arXiv preprint arXiv:1910.12702. Nasser Zalmout and Nizar Habash. 2019b. Joint diacritization, lemmatization, normalization, and finegrained morphological tagging. arXiv preprint arXiv:1910.02267. Imed Zitouni and Ruhi Sarikaya. 2009. Arabic diacritic restoration approach based on maximum entropy models. Computer Speech & Language, 23(3):257– 276. Imed Zitouni, Jeffrey S Sorensen, and Ruhi Sarikaya. 2006. Maximum entropy based restoration of arabic diacritics. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 577–584. Association for Computational Linguistics.
2020
732
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8248–8273 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8248 Frugal Paradigm Completion Alexander Erdmann,† Tom Kenter, Markus Becker and Christian Schallhart ∗ Google, UK †Ohio State University, USA [email protected], {tomkenter,mabecker,schallhart}@google.com Abstract Lexica distinguishing all morphologically related forms of each lexeme are crucial to many language technologies, yet building them is expensive. We propose Frugal Paradigm Completion, an approach that predicts all related forms in a morphological paradigm from as few manually provided forms as possible. It induces typological information during training which it uses to determine the best sources at test time. We evaluate our language-agnostic approach on 7 diverse languages. Compared to popular alternative approaches, our Frugal Paradigm Completion approach reduces manual labor by 16-63% and is the most robust to typological variation. 1 Introduction From syntactic parsing (Seeker and Kuhn, 2013) to text-to-speech (Zen et al., 2016; Wan et al., 2019), many linguistic technologies rely on accurate lexica decorated with morphological information. Yet, building such lexica requires much human effort (Buckwalter, 2002; Tadi´c and Fulgosi, 2003; Forsberg et al., 2006; Sagot, 2010; Eskander et al., 2013). We present a language-agnostic method for minimizing the manual labor required to add new paradigms to an existing lexicon. Formally, let each lexicon entry, or realization, be a triple (P, C, f). P marks membership in some paradigm P of morphologically related words, C defines a cell in P as a bundle of morphosyntactic features, and f is the form realizing C in P. Hence, paradigm SING can be expressed (in the UniMorph schema (Kirov et al., 2018)) as a set of realizations: {(SING, NFIN, sing), (SING, 3.SG.PRES, sings), . . . }. For each paradigm to be added to the lexicon, e.g., FLY, we aim to select as few sources as pos∗This work was carried out during the first author’s internship at Google UK in 2019. sible to be manually realized, e.g., {(FLY, NFIN, fly), (FLY, PST, flew)} such that the forms realizing the remaining cells can be predicted, i.e., flies, flying, flown. Here, sources are manually provided realizations. Targets are realizations whose forms must be predicted from sources. Our work differs from traditional paradigm completion (Durrett and DeNero, 2013) in that sources are not given blindly, but the system must strategically select which sources it wants to be given at test time. Paradigm completion from one source is typically non-deterministic due to multiple inflection classes realizing different exponents in some cells, e.g., suffixing +ed generates the past tense for WALK, but not for SING or FLY which are members of different classes. Hence, many works discuss paradigm completion in the context of (implicit) inflection class disambiguation (Ackerman et al., 2009; Montermini and Bonami, 2013; Beniamine et al., 2018). Finkel and Stump (2007) propose three approaches to select the fewest sources required to deterministically identify class. Yet, neural sequence models can often complete paradigms accurately from less sources without fully disambiguating inflection class (Kann and Schütze, 2016; Aharoni and Goldberg, 2017; Wu and Cotterell, 2019). See Elsner et al. (2019) for an overview of the application of neural sequence models to morphological theory. 
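Before moving on, the formal setup above is easy to make concrete; the class and variable names below are our own illustration of the (P, C, f) triples and of the source/target split for FLY, not the authors' implementation:

```python
# Illustrative encoding of realizations as (paradigm, cell, form) triples and of
# a paradigm as a map from cells to forms; names and layout are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Realization:
    paradigm: str   # P, e.g. "FLY"
    cell: str       # C, a bundle of morphosyntactic features, e.g. "PST"
    form: str       # f, the form realizing C in P, e.g. "flew"

FLY = {
    "NFIN": "fly",
    "3.SG.PRES": "flies",
    "PRES.PTCP": "flying",
    "PST": "flew",
    "PST.PTCP": "flown",
}

# Sources are manually provided realizations; targets must be predicted from them.
source_cells = {"NFIN", "PST"}
sources = {Realization("FLY", c, FLY[c]) for c in source_cells}
targets = {Realization("FLY", c, f) for c, f in FLY.items() if c not in source_cells}
```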
We propose Frugal Paradigm Completion (FPC), inspired by work on inflection class disambiguation and neural sequence modeling. We train a source selection agent (SSA) to induce typological knowledge regarding the distribution of complexity in paradigms and use this to request informative source cells to be realized by an oracle. Sources are fed to a predictor to generate target forms. For each paradigm, SSA iteratively requests sources until the oracle confirms all cells have been realized correctly. 8249 We introduce a novel metric, auto-rate, to quantify the manual labour (performed by the oracle) needed to complete each paradigm. Using this metric, we demonstrate that FPC reduces labor by 63% over predicting targets from lemmata, and 47% over predicting them from the smallest set of sources that fully disambiguates inflection class. We propose a new typology for discussing the organization of complexity in paradigms which helps explain why strategies perform better or worse on certain languages while FPC, being sensitive to typological variation, performs robustly. After discussing related paradigm completion approaches in Section 2, we describe FPC in Section 3. Section 4 covers all data and experimental set up details. We discuss results in Section 5 and analyze FPC’s behavior in Section 6. 2 Paradigm Completion Approaches Here we discuss several paradigm completion approaches related to FPC. Lemma-based Paradigm Completion The standard paradigm completion approach does not select sources, but assumes one source: the lemma (Dreyer and Eisner, 2011), whose distinction is ultimately arbitrary. Yet many have shown that more informative sources can be chosen (Finkel and Stump, 2007; Cotterell et al., 2017b; Kann and Schütze, 2018). Most Informative Source For each target form to be predicted, Kann and Schütze (2018) select the source most likely to predict that form. Unlike FPC, they do not attempt to minimize the number of unique sources that must be manually realized. Static Principal Parts To minimize sources required to fully disambiguate inflection class, Finkel and Stump (2007); Stump and Finkel (2013) propose three approaches: static, dynamic, and adaptive. In the static approach, the same sources must be used for every paradigm (these sources are referred to as principal parts in a much older pedagogical tradition dating back to ancient Rome with Varro’s de lingua latina (Grinstead, 1916; Ahern, 1990)). Cotterell et al. (2017b) train a model on static sources and attain near 100% accuracy in Latin verb paradigm completion. However, they do not consider that one paradigm may require fewer sources than another, nor that paradigm completion may require fewer sources than inflection class disambiguation. Dynamic Principal Parts Finkel and Stump (2007)’s dynamic approach selects a minimal set of sources necessary to fully disambiguate inflection class which can be unique to that inflection class. While efficient, this is impractical in that it requires oracular knowledge of class prior to seeing any forms. Adaptive Principal Parts Finkel and Stump (2007)’s adaptive approach, like our FPC method, chooses the same first source cell for each paradigm P. Subsequent sources are selected conditional on the set of inflection classes P could belong to given the sources realized so far. Hence, the number of sources required per paradigm is upper bounded by the static approach and lower bounded by the dynamic. Our FPC approach is a neural update, inspired by their adaptive approach. 
While their implementation tracks viable inflection classes explicitly with rules operating on oracularly segmented affixes, we use sequence models operating on whole words to remove reliance on oracular segmentation and leverage stem-internal phonology known to correlate with inflection class (Aronoff, 1992; Dressler and Thornton, 1996; Dawdy-Hesterberg and Pierrehumbert, 2014). 3 Frugal Paradigm Completion This section describes the interactions of the three FPC components. As illustrated in Figure 1, the predictor takes a source cell and its realizing form as input, e.g., 3.SG.PRES: sings, or cell 2: form 2 in the figure. The predictor is composed of as many sub-predictors as there are cells in the paradigm, each of which is trained to predict the entire paradigm from one source cell’s realization. Cell 2 in the paradigm is grayed out in the figure, as this was provided as input so it does not have to be predicted. The predicted paradigm is evaluated by the oracle. If there are no errors, we are done. Otherwise, based on previous sources, SSA chooses a new cell to be realized by the oracle and gives it to the predictor as the next source. Because cell 3 is chosen in the figure, sub-predictor 3 will be used to predict the paradigm going forward, and cells 2 and 3 will both be grayed out. The process continues like this until all cells have been correctly predicted by at least one sub-predictor. Crucially, during inference, each test paradigm is empty, i.e., no realization has been seen during training and no source is available to inflect from 8250 ✓ sub-predictor 1 sub-predictor n ... sub-predictor 2 predictor cell 1: form 1 cell n: form n cell 2: form 2 paradigm ... oracle done error source selection agent (SSA) cell3: form 3 cell 2: form 2 input cell 3 oracle Figure 1: Schematic representation of the flow of Frugal Paradigm Completion at inference time. a-priori. Our setup aims to minimize the number of sources which the SSA must request from the oracle (typically a human in the loop at inference time) to predict the remaining paradigm slots correctly. 3.1 Predictor The predictor outputs a target form given its cell, a source form and the source form’s cell as input. To train the predictor, for each possible source cell, we train a sub-predictor to predict every possible target form in every paradigm in the training data given the realization of that source cell in that paradigm. Details of all sequence model architectures are provided in Section 4. 3.2 Source Selection Agent SSA’s choice of a cell for a given paradigm depends on all previously selected cells for that paradigm and their corresponding forms. This allows SSA to learn, e.g., that given a previous English PST source, PST.PTCP should only be requested as a subsequent source if the PST form did not take the regular -ed suffix. Otherwise, PST.PTCP is likely to be regular and unlikely to contribute new information. To induce such knowledge, we train SSA on an oracle policy of ideal source selections extracted from the train set (Ross et al., 2011; Ross and Bagnell, 2014; Welleck et al., 2019).1 To extract the oracle policy, we divide the training lexicon into two folds and train one predictor on each, allowing us to cross-validate each predictor on its held out fold. For each training paradigm, we test which target forms can be correctly predicted by which source cells’ sub-predictors. 
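To make this bookkeeping concrete, here is a rough sketch of the held-out-fold prediction table and of a greedy cover extraction over it (the cover construction itself is described in the next paragraph). The `sub_predictors` mapping and its `predict()` signature are hypothetical stand-ins for the trained sub-predictor models, and `tie_break` stands in for the averaged predictiveness scores discussed below:

```python
# Cross-validation table: which target cells each source cell's sub-predictor
# gets exactly right, per training paradigm, plus a greedy approximation of the
# minimum set cover built from such a table. All interfaces here are assumed.

def prediction_table(paradigm, sub_predictors):
    """paradigm: dict cell -> gold form. Returns dict: source cell -> set of
    cells whose forms that source's sub-predictor predicts correctly."""
    table = {}
    for src_cell, src_form in paradigm.items():
        correct = {src_cell}  # the given source itself never needs predicting
        for trg_cell, gold in paradigm.items():
            if trg_cell != src_cell and \
                    sub_predictors[src_cell].predict(src_form, trg_cell) == gold:
                correct.add(trg_cell)
        table[src_cell] = correct
    return table

def greedy_cover(table, tie_break, start_cell=None):
    """Repeatedly pick the source covering the most still-uncovered cells."""
    uncovered = set().union(*table.values())
    cover = []
    if start_cell in table:            # fixed first source (adaptive covers)
        cover.append(start_cell)
        uncovered -= table[start_cell]
    while uncovered:
        best = max(table, key=lambda c: (len(table[c] & uncovered),
                                         tie_break.get(c, 0.0)))
        if not table[best] & uncovered:
            break                      # no source predicts the remaining cells
        cover.append(best)
        uncovered -= table[best]
    return cover
```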
As shown for SING in Figure 2, we use this information to extract minimum set covers, i.e., the fewest source cells such that the union of the subsets they predict correctly covers the entire paradigm. These covers constitute the oracle policy used to train SSA. The minimum set cover problem is NP-complete (Lund and Yannakakis, 1994; Kuhn et al., 2005), but we approximate it in O(log_e |P|) by iteratively selecting the cell whose subset most enlarges the union. We break ties by averaging predictiveness (Equation 1) over both folds, where fold F contains |F| paradigms; P_m, |P_m| cells; and Acc(P_m, C_trg, C_src) returns 1 if using C_src's realization as a source correctly predicts the form realizing cell C_trg in paradigm P_m.

\mathrm{predictiveness}(C_{src}, F) = \frac{\sum_{m=1}^{|F|} \sum_{j=1}^{|P_m|} Acc(P_m, C_j, C_{src})}{\sum_{m=1}^{|F|} |P_m|} \quad (1)

At this stage, paradigm covers are dynamic in that no single cell need be shared by all covers. Yet, when selecting the first source, SSA has no previous sources to condition on, making it impossible to predict the first cell. Thus, we get adaptive minimum set covers by designating the start cell to be that which occurs in the most dynamic covers. Then we re-approximate all covers such that each includes this cell.2 Finally, we rank cells within each cover by the total number of covers in which they appear. For each cell in each cover, we train SSA to predict said cell from all higher ranked cells and their realizing forms (holding out 2% of them for development).

1 While we borrow the term oracle policy from Imitation Learning (Ross et al., 2011; Ross and Bagnell, 2014; Welleck et al., 2019), we mimic the oracle policy with simple sequence learning. Our analysis suggests even this may be more machinery than necessary.
2 We train and test on a single part-of-speech for each language, so each paradigm should contain the start cell. For defective paradigms lacking said cell, we back off to the most frequent cell that exists in the paradigm.

Figure 2: Minimum set cover example for SING, which is {NFIN, PST}.

3.3 Oracle
The oracle represents a human-in-the-loop during inference, providing requested source realizations to the predictor and informing SSA when a paradigm is complete and accurate (Figure 1). In our implementation, the oracle does not specify which individual predictions are incorrect, but it thus must resolve any discrepancies when two sub-predictors disagree after the fact. We do not attempt to model the additional cost this incurs, as it is unclear how to combine it with the presumably more expensive cost of correcting errors, which we model instead. This is worth re-visiting in future work.

4 Experimental Details
We evaluate 4 paradigm completion approaches on 7 languages. Here we discuss implementation, data and evaluation details.

4.1 Prediction Architecture
All sequence models in all implementations of any paradigm completion approach use the Transformer architecture (Vaswani et al., 2017). Here we describe the formatting of input and outputs as well as our hyperparameters.
Input and Output Formats Following Kann and Schütze (2016), input sequences combine characters and morphosyntactic features.
The following is a sample input and output for a single source FPC sub-predictor specializing in the cell NFIN: Input: f l y out_V.PTCP out_PST Output: f l o w n For any inflected-form-predicting sequence model whose input is not limited to realizations of a single cell—as in, e.g., the static principal parts approach—source cell features are prepended to the input as such: Input: in_NFIN f l y out_V.PTCP out_PST Output: f l o w n For multi-source sequence models, the features of each source are inserted into the input and the target features are listed after the first source. We experimented with several different multi-source representations and the Transformer performed fairly similarly with all of them. Input: in_NFIN f l y out_V.PTCP out_PST in_PST f l e w Output: f l o w n The FPC’s SSA predicts not a form, but a cell, conditional on any previously realized sources. To predict the first source, it is given nothing and will thus deterministically select the best starting cell as determined by the oracle policy (see Section 3.2). To predict any subsequent source, it conditions on the realizations of all previously requested sources for that paradigm. The following exemplifies SSA inputs and outputs when predicting the second source for paradigm FLY: Input: in_NFIN f l y Output: in_V.PTCP in_PST Wu et al. (2018) and others have achieved improvements by embedding morphosyntactic features separately and concatenating them to the encoder output prior to feeding it to the decoder. Our error analysis, however, suggests Transformers handle Kann and Schütze (2016)-style input well. More sophisticated feature handling may not be necessary, but should be investigated in future work. Hyperparameters We train all Transformer models for 100 epochs in batches of 64 with 0.1 dropout probability. The final model is restored from the epoch with the highest dev accuracy. We stop early if there is no improvement for 20 8252 Train Dev Test Arabic nouns paradigms 1006 100 100 instances 24160 2260 2352 German verbs paradigms 1031 100 100 instances 27762 2690 2692 English verbs paradigms 2908 200 201 instances 14522 1000 1001 Russian nouns paradigms 3289 100 100 instances 37423 1133 1137 Latin nouns paradigms 1630 100 100 instances 19150 1180 1174 Hungarian nouns paradigms 1405 100 100 instances 47689 3400 3383 Irish nouns paradigms 549 100 100 instances 6460 1197 1195 Table 1: Number of paradigms and instances by split for every language and POS considered. epochs. The only exception is during FPC crossvalidation where sub-predictor models are trained for only 50 epochs with early stopping after 5 epochs without improvement. This is just to reduce computational cost as it is sufficient to induce an oracle policy. The final sub-predictor models however (those used at inference time, not those used to induce the oracle policy), are trained on the full training data set using the full 100 epochs with 20 epochs patience for early stopping. As for Transformer-specific hyperparameters, using the original notation of Vaswani et al. (2017), we set N = 4, dmodel = 128, dff = 512, and h = 8, scaling down the hyperparameters recommended for machine translation as our task is less expensive (Aharoni and Goldberg, 2017; Wu et al., 2018). 4.2 Data Preparation For every language and part of speech (POS) considered, we extract train, dev and test sets from UniMorph (Kirov et al., 2018). Each split contains full paradigms, though the cells realized in each may vary due to defectiveness (Corbett, 2005; Sims, 2015). 
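Looping back to the input formats of Section 4.1 for a moment: the example sequences above are mechanical to build. A small helper sketch, with the space-separated characters and the in_/out_ feature prefixes copied from the examples and the function names our own:

```python
# Builds Kann and Schütze (2016)-style sequences like the Section 4.1 examples.
def format_single_source(src_form, trg_feats):
    # e.g. ("fly", ["V.PTCP", "PST"]) -> "f l y out_V.PTCP out_PST"
    return " ".join(list(src_form) + [f"out_{f}" for f in trg_feats])

def format_with_source_cell(src_feats, src_form, trg_feats):
    # Prepends in_-prefixed source features when the model is not limited to
    # one source cell, e.g. "in_NFIN f l y out_V.PTCP out_PST".
    return " ".join([f"in_{f}" for f in src_feats]
                    + list(src_form)
                    + [f"out_{f}" for f in trg_feats])

print(format_single_source("fly", ["V.PTCP", "PST"]))
print(format_with_source_cell(["NFIN"], "fly", ["V.PTCP", "PST"]))
```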
We filter many gold errors by removing paradigms for which no realization can be attested in actual text. We use Universal Dependencies (UD) (Nivre et al., 2016) to check for attestations. We also filter overabundant realizations (multiple forms realizing one cell), keeping only the most frequent form, as attested in UD. While some languages allow for overabundance (Thornton, 2010, 2011), in UniMorph, this often indicates a gold error. We randomly divide paradigms into splits such that train is maximally large and dev and test contain at least 100 paradigms and 1,000 realizations. Exact quantities are displayed in Table 1. Arabic, German, English, and Russian were used for development, while Irish, Hungarian, and Latin were only evaluated after fixing hyperparameters. The languages considered represent 3 families and 4 diverse Indo-European branches. They exhibit multiple non-canonical behaviors (Corbett, 2005) and present diverse challenges from non-concatenative morphology to complex inflection class systems.

4.3 Evaluation
Paradigm completion is usually evaluated via exact match accuracy on held out target forms (Cotterell et al., 2016, 2017a, 2018; McCarthy et al., 2019). Yet we use as many sources as are necessary to reach 100% accuracy in predicting the remaining slots, so accuracy is not a meaningful metric for the FPC. Some theoretical works focus on the sources required to unambiguously complete a paradigm given some implicit knowledge of viable inflection classes (Finkel and Stump, 2007; Ackerman and Malouf, 2013). Yet these tend not to propose actual paradigm completion models or evaluate their decisions in ambiguous cases. To evaluate our system and bridge these traditions, we propose auto-rate:

\text{auto-rate} = \frac{\sum_{i=1}^{n} \text{auto}(P_i)}{\sum_{i=1}^{n} |P_i|} \quad (2)

where auto(P) denotes the number of realizations correctly predicted while not having been provided as sources for paradigm P by the oracle. Intuitively, auto-rate is like accuracy but it counts oracularly provided sources as additional errors since both errors and sources require labor, i.e., sources require manual input and errors, post-correction. We also report manual cells per paradigm, i.e., sources plus errors. Of course, FPC resolves all errors eventually, but other systems can make errors requiring post-correction.

4.4 Baselines
We compare the FPC method to three baselines. One is a variant of FPC using a random SSA.
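As a brief aside before describing the baselines further, both labor metrics just defined reduce to simple counts once each evaluated paradigm records how many sources the oracle supplied and how many cells were mispredicted; the record layout below is an assumption for illustration:

```python
# auto-rate (Equation 2) and manual cells per paradigm from per-paradigm
# records. Each record is assumed to hold the paradigm size, the number of
# oracle-provided sources, and the number of erroneous predictions (driven to
# 0 for FPC, possibly > 0 for the baselines).
def auto_rate(records):
    auto = sum(r["size"] - r["sources"] - r["errors"] for r in records)
    total = sum(r["size"] for r in records)
    return 100.0 * auto / max(total, 1)

def manual_cells_per_paradigm(records):
    return sum(r["sources"] + r["errors"] for r in records) / max(len(records), 1)
```

Returning to the baselines: the first of the three is the random-SSA variant of FPC mentioned above.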
In theory, any non-plural plus any plural should be sufficient (Brustad et al., 2005; Habash, 2010). For German verbs, we predict present and imperative forms from NFIN and past forms from IND;PST;1SG (Grebe et al., 1966). We predict English present forms from NFIN; PST and V.PTCP;PST predict themselves. For Russian nouns, Zaliznyak (1980) argues for five sources, yet Parker (2016) demonstrates that three are usually sufficient. We follow the latter, predicting all nominative or accusative forms from ACC;SG, all other singulars from INS;SG, and all other plurals from GEN;PL. In preliminary experiments, we found this to match the accuracy of the five source approach, thus achieving a higher autorate. For Latin, we could not evaluate a singlesource static implementation as it is unclear which source cell best predicts each target. The multisource static approach for Latin nouns predicts all forms from NOM;SG and GEN;SG (following the classical grammatical analyses of Varro, Priscian and the Roman ars grammatica). For Irish and Hungarian, we do not evaluate a static approach as we lack the requisite linguistic knowledge to determine the best sources. Accuracy Auto-rate Mcpp Dev Test Dev Test Dev Test Arabic nouns Lemma 62.0 58.8 59.3 56.5 9.6 10.7 Static 95.9 99.4 89.5 93.1 2.9 2.1 Random Ag. 90.2 90.9 2.2 2.2 FPC 90.2 93.6 2.2 1.5 German verbs Lemma 87.6 89.0 84.1 85.8 4.3 4.0 Static 94.1 96.4 86.7 88.9 3.6 3.0 Random Ag. 90.0 92.1 2.4 1.9 FPC 91.8 92.5 2.0 1.8 English verbs Lemma 96.5 94.0 76.7 74.2 1.2 1.3 Static 99.7 98.4 39.7 38.4 3.0 3.0 Random Ag. 76.0 73.3 1.2 1.4 FPC 77.3 74.3 1.1 1.3 Russian nouns Lemma 97.1 95.6 88.3 87.5 1.3 1.5 Static 98.4 98.3 72.6 72.3 3.2 3.2 Random Ag. 86.1 84.3 1.6 1.8 FPC 88.5 89.1 1.3 1.2 Latin nouns Lemma 65.5 51.6 63.6 49.6 5.1 6.7 Static 97.7 96.8 80.8 79.7 2.3 2.4 Random Ag. 85.9 84.7 1.7 1.8 FPC 89.0 87.8 1.3 1.4 Hungarian nouns Lemma 95.6 90.9 92.8 88.0 2.5 4.1 Random Ag. 95.0 94.6 1.7 1.9 FPC 95.5 95.2 1.5 1.6 Irish nouns Lemma 63.5 66.9 56.1 59.6 5.4 5.0 Random Ag. 64.9 68.2 4.2 3.8 FPC 72.1 69.6 3.3 3.6 Table 2: Evaluation of paradigm completion approaches with metrics defined in Section 4. We do not report accuracy for FPC or its random agent variant (Random Ag.), as it is trivially 100% (see Section 4.3). Mcpp stands for Manual cells per paradigm. 5 Results and Discussion As shown in Table 2, FPC always ties or beats the next best approach, while the next best approach varies by language. On average, FPC reduces labor by 63% over the lemma approach, 47% over static, 16% over random agent, and 13% over the next best approach. Its success is mainly due to (1) making predictions from fewer sources than are required for fully disambiguating inflection class and (2) receiving feedback after each source. Surprisingly, training a sophisticated SSA does not improve much over using a random agent. We 8254 argue this is due to an unexpectedly large margin of error in the agent’s source selection task. Despite the complexity of source selection strategies required for inflection class disambiguation, FPC uses lexical frequencies to expect regularity and stem-internal clues to anticipate irregular classes, requiring a median of just one source per paradigm for all languages except under-resourced Irish. Furthermore, inspection of the source selection minimum set covers reveals that it is often the case that a paradigm can be completed correctly from any single source. 
This is surprising in light of the precise strategies required for completely deterministic paradigm completion in Finkel and Stump (2007)'s framework and in light of Albright (2002)'s case for the privileged status of a single form per paradigm, though in our framework with full words and full paradigms for training, it seems that many sources can often serve as good enough singleton principal parts. This supports Bonami and Beniamine (2016)'s proposal of gradient principal part analyses.

6 Analysis
Here, we discuss patterns relating SSA's first and second sources chosen (Figures 3a-b and 4a-b) to the inter-predictability of cells represented by heat maps (3c and 4c). Maps display the average accuracies with which each target (column) can be predicted from each source (row). We analyze specific SSA choices and predictor errors in Arabic and Latin. The maps (for all languages, see the Appendix) suggest complexity can be distributed within paradigms in systematically distinct ways. Ackerman and Malouf (2013) propose integrative (I-) complexity, using average conditional entropy to describe paradigmatic organization, but this has been criticized for obscuring differences in the predictability of sub-paradigm regions (Cotterell et al., 2019; Elsner et al., 2019). To remedy this, we propose a typology for measuring the extent to which I-complexity is realized via different organizational strategies, which is useful for discussing source selection strategies. Our typology describes paradigms in terms of mutual predictability, the correlation of a map and its transpose, and entropy predictiveness, the negative correlation of cells' average predictiveness (see Equation 1) and average predictability, defined here in comparable terms as:

\mathrm{predictability}(C_{trg}, F) = \frac{\sum_{m=1}^{|F|} \sum_{j=1}^{|P_m|} Acc(P_m, C_{trg}, C_j)}{\sum_{m=1}^{|F|} |P_m|} \quad (3)

Intuitively, a paradigm is mutually predictable if the fact that cell A predicts cell B means that B is likely to predict A. Such paradigms often feature regions of mutually predictable cells (as in 3c), such that an optimal strategy avoids picking multiple sources from one region. For entropy predictive paradigms, if A is generally more difficult to predict than B, A is likely to be a better predictor of the remaining cells (following the information theoretic logic that surprisal is informative (Shannon, 1948; Jaeger, 2010)). For such paradigms, the optimal strategy selects the source which would have been the most difficult target to predict. Unlike Sims (2020)'s graph-theoretic typology for describing inflection class structure, our typology is a two-dimensional description of how the optimal paradigm completion strategy is affected by underlying class structure. In this sense, our typology is complementary to hers and future work might investigate the relationship between traits in her typology and mutual predictability or entropy predictiveness. Furthermore, our typology might be updated to consider the impact of type frequency (Sims and Parker, 2016) in a framework where distributional data is available. Figure 5 demonstrates that cross-linguistic variation is vast with respect to our typology, as some languages even exhibit negative entropy predictiveness or mutual predictability. This partly explains why non-FPC approaches perform erratically: if paradigmatic organization varies by language, source selection strategies must be able to adapt to the data.

6.1 Arabic Error Analysis
Arabic nouns are mutually predictable (Figure 5). Any singular or dual form can predict another.
Plural forms also predict each other. Yet, in general, plurals are less predictive/able (Figure 3c) due to several inflection classes varying in the plural. The sound plurals take suffixes while broken plural classes are realized via non-concatenative processes. For example, rAkb, rider, from root r k b, takes the broken plural pattern _ _ A _, becoming rkAb. Yet, having heard only singular realizations, a human might posit a sound plural, i.e., *rAkbwn, realizing the more productive exponent. SSA learns an ideal strategy, requesting a singular source (Figure 3a) and then a plural (3b). Interestingly, 6 of 18 sound feminine plurals (most frequent single class) require multiple sources and 8 of 28 broken plurals do not. Thus, the predictor does not default to regularity, but uses stem-internal phonology to anticipate irregularity. Most errors made from the first source posit a viable broken plural, just not the right one. In future work, modeling semantics can fix such errors, e.g., knowing that rAkb is animate makes plural *rwAkb unlikely, as animate nouns seldom take that inflection class. For future work, we can pre-train on raw corpora to give our model access to such information (Devlin et al., 2019). Indeed Erdmann and Habash (2018) found distributional information to benefit inflectional paradigm clustering in Arabic. Though the benefits should generalize as semantics correlates with inflection class in many languages (Wurzel, 1989; Aronoff, 1992; Harris, 1992; Noyer, 1992; Carstairs-McCarthy, 1994; Corbett and Fraser, 2000; Kastner, 2019).

Figure 3: Arabic analysis. Panels: (a) coverage after 1 source, (b) coverage after 2 sources, (c) inter-predictability heat map. (a) and (b) define how likely each cell is to be a source (white), correctly predicted (gray), or an error (black) after one (a) or two (b) sources. (c) shows the predictiveness/ability of source (rows) and target (columns) cells. Darker cells are less predictive/able. For a more detailed rendering of this graphic, please see the appendix.

Figure 4: Latin analysis. Panels: (a) coverage after 1 source, (b) coverage after 2 sources, (c) inter-predictability heat map. (a) and (b) define how likely each cell is to be a source (white), correctly predicted (gray), or an error (black) after one (a) or two (b) sources. (c) shows the predictiveness/ability of source (rows) and target (columns) cells. Darker cells are less predictive/able. For a more detailed rendering of this graphic, please see the appendix.
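Looking ahead to Figure 5, the two dimensions of the typology introduced at the start of this section can be computed directly from an inter-predictability matrix like those in Figures 3c and 4c; representing the map as a NumPy array, using Pearson correlation, and excluding the diagonal are our own operationalization choices, not details from the paper:

```python
# Mutual predictability: correlation of the inter-predictability map with its
# transpose. Entropy predictiveness: negative correlation between how well a
# cell predicts the others (row means, cf. Equation 1) and how well it is
# predicted (column means, cf. Equation 3).
import numpy as np

def mutual_predictability(M):
    # M[i, j] = accuracy of predicting cell j from cell i
    off_diag = ~np.eye(len(M), dtype=bool)
    return np.corrcoef(M[off_diag], M.T[off_diag])[0, 1]

def entropy_predictiveness(M):
    predictiveness = M.mean(axis=1)   # how well each cell predicts the others
    predictability = M.mean(axis=0)   # how well each cell is predicted
    return -np.corrcoef(predictiveness, predictability)[0, 1]
```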
Remaining ambiguity mostly involves 3rd declension nominative and vocative realizations, which can usually be predicted from the preferred second source cell, VOC;SG. 44 of 100 test paradigms were 3rd declension, which required multiple sources at the highest rate (16 of 44; 2nd masculine declension was next highest at 3 of 15). There was no correlation between declension and second source chosen, yet high auto-rate suggests SSA’s choices may not need to condition on previously realized source forms, but only their cells. While 77 of 100 paradigms were completed from a single source, we found paradigms requiring three sources that might be completable from two using a multi-source FPC implementation. For example, greg¯es, flocks realizes GREX.ACC;PL, but the predictor mistakenly posits ∗gregium for GEN;PL from this source, guessing the wrong 3rd declension variant. While second source VOC;SG grex corrects this, it obscures the underlying stem, as x can be an allophone of g or c. Thus, we still get an error, ∗grecum. A multi-source predictor could avoid forgetting the underlying allophone g after seeing the second source.3 That said, multi-source FPC is not as simple as multi-source static. Heuristic sampling of training instances based on the oracle policy yields predictors that only attend to one source or make bad predictions when only given one. This is worth exploring further in future work as there is more evidence of paradigms that are difficult to handle without jointly encoding sources in the linguistic literature (Corbett, 2005; Bonami and Beniamine, 2016). 7 Conclusion We presented Frugal Paradigm Completion, which reduces the manual labor required to expand a morphological lexicon by 16-63% over competitive approaches across 7 languages. We demonstrated that typologically distinct morphological systems require unique treatment and benefit from our SSA, that learns its strategy from data. We found that inducing this strategy is not as challenging as previously suggested (Finkel and Stump, 2007). Thus, SSA might be replaced with a less costly architecture while our model might be improved by conditioning on semantics and jointly decoding from a variable number of sources. Acknowledgments We are indebted to helpful conversations with Shijie Wu, Ryan Cotterell, Katharina Kann, Andrea Sims, and Olivier Bonami. We would also like to acknowledge insightful feedback from Google’s Word Graph and TTS teams, as well as four anonymous reviewers. 3See rex–regis, king or pax–pacis, peace, which are technically conditioned on preceding vowel quality, though there are probably not enough training examples for the model to learn that. 8257 References Farrell Ackerman, James P Blevins, and Robert Malouf. 2009. Parts and wholes: Implicative patterns in inflectional paradigms. Analogy in grammar: Form and acquisition, pages 54–82. Farrell Ackerman and Robert Malouf. 2013. Morphological organization: The low conditional entropy conjecture. Language, pages 429–464. Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004–2015. Thomas Ahern. 1990. The role of linguistics in Latin instruction. The Classical World, pages 505–510. Adam Albright. 2002. The identification of bases in morphological paradigms. UCLA Ph.D. Ph.D. thesis, dissertation. Mark Aronoff. 1992. Noun classes in Arapesh. In Yearbook of Morphology 1991, pages 21–32. 
Springer. Sacha Beniamine, Olivier Bonami, and Benoît Sagot. 2018. Inferring inflection classes with description length. Journal of Language Modelling, 5(3):465– 525. Olivier Bonami and Sarah Beniamine. 2016. Joint predictiveness in inflectional paradigms. Word Structure, 9(2):156–182. Kristen Brustad, Abbas Al-Tonsi, and Mahmoud AlBatal. 2005. Al-Kitaab Fii Ta’Aallum Al-’Arabiyya: A Textbook for Arabic. Georgetown University Press. Tim Buckwalter. 2002. Arabic morphological analyzer (AraMorph). Linguistic Data Consortium, Philadelphia. Andrew Carstairs-McCarthy. 1994. Inflection classes, gender, and the principle of contrast. Language, pages 737–788. Greville G Corbett. 2005. The canonical approach in typology. Linguistic diversity and language theories, 25:49. Greville G Corbett and Norman M Fraser. 2000. Gender assignment: a typology and a model. Systems of nominal classification, 4:293–325. Ryan Cotterell, Christo Kirov, Mans Hulden, and Jason Eisner. 2019. On the complexity and typology of inflectional morphological systems. Transactions of the Association for Computational Linguistics, 7:327–342. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, et al. 2018. The conll–sigmorphon 2018 shared task: Universal morphological reinflection. Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1–27. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017a. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. CoRR, abs/1706.09031. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. In Proceedings of the Workshop of the Special Interest Group on Computational Morphology and Phonology (SIGMORPHON), Berlin, Germany. Ryan Cotterell, John Sylak-Glassman, and Christo Kirov. 2017b. Neural graphical models over strings for principal parts morphological paradigm completion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 759–765. Lisa Garnand Dawdy-Hesterberg and Janet Breckenridge Pierrehumbert. 2014. Learnability and generalisation of Arabic broken plural nouns. Language, cognition and neuroscience, 29(10):1268–1282. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Wolfgang U Dressler and Anna M Thornton. 1996. Italian nominal inflection. Wiener Linguistische Gazette, pages 55–57. Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 616–627. Association for Computational Linguistics. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1185–1195. Micha Elsner, Andrea D Sims, Alexander Erdmann, Antonio Hernandez, Evan Jaffe, Lifeng Jin, Martha Booker Johnson, Shuan Karim, David L King, Luana Lamberti Nunes, et al. 2019. Modeling morphological learning, typology, and change: What can the neural sequence-to-sequence framework contribute? Journal of Language Modelling, 7(1):53–98. Alexander Erdmann and Nizar Habash. 2018. Complementary strategies for low resourced morphological modeling. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 54–65. 8258 Ramy Eskander, Nizar Habash, and Owen Rambow. 2013. Automatic extraction of morphological lexicons from morphologically annotated corpora. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1032–1043. Raphael Finkel and Gregory Stump. 2007. Principal parts and morphological typology. Morphology, 17(1):39–75. Markus Forsberg, Harald Hammarström, and Aarne Ranta. 2006. Morphological lexicon extraction from raw text data. In International Conference on Natural Language Processing (in Finland), pages 488– 499. Springer. Paul Grebe, Helmut Gipper, and Günther Drosdowski, editors. 1966. Duden Grammatik der deutschen Gegenwartssprache, 2., verm. und verb. aufledition. Bibliogr. Inst., Dudenverl., Mannheim. Wren Jones Grinstead. 1916. An Introduction to the Psychology and Pedagogy of High School Latin: With Special Reference to the Value of a Oneyear Course, volume 1. University of Wisconsin– Madison. Nizar Y Habash. 2010. Introduction to Arabic natural language processing. Synthesis Lectures on Human Language Technologies, 3(1):1–187. James W Harris. 1992. The form classes of Spanish substantives. In Yearbook of morphology 1991, pages 65–88. Springer. T Florian Jaeger. 2010. Redundancy and reduction: Speakers manage syntactic information density. Cognitive psychology, 61(1):23–62. Katharina Kann and Hinrich Schütze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62–70. Katharina Kann and Hinrich Schütze. 2018. Neural transductive learning and beyond: Morphological generation in the minimal-resource setting. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3254– 3264. Itamar Kastner. 2019. Templatic morphology as an emergent property. Natural Language & Linguistic Theory, 37(2):571–619. Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian J Mielke, Arya McCarthy, Sandra Kübler, et al. 2018. UniMorph 2.0: Universal morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). Fabian Kuhn, Pascal Von Rickenbach, Roger Wattenhofer, Emo Welzl, and Aaron Zollinger. 2005. Interference in cellular networks: The minimum membership set cover problem. In International Computing and Combinatorics Conference, pages 188–198. Springer. Carsten Lund and Mihalis Yannakakis. 1994. On the hardness of approximating minimization problems. Journal of the ACM (JACM), 41(5):960–981. 
Arya D McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sebastian Mielke, Jeffrey Heinz, et al. 2019. The SIGMORPHON 2019 shared task: Crosslinguality and context in morphology. In Proceedings of the 16th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. Fabio Montermini and Olivier Bonami. 2013. Stem spaces and predictability in verbal inflection. Lingue e linguaggio, 12(2):171–190. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1659–1666. Robert Rolf Noyer. 1992. Features, positions and affixes in autonomous morphological structure. Ph.D. thesis, Massachusetts Institute of Technology. Jeff Parker. 2016. Inflectional Complexity and Cognitive Processing: An Experimental and Corpusbased Investigation of Russian Nouns. Ph.D. thesis, Ohio State University. Stephane Ross and J Andrew Bagnell. 2014. Reinforcement and imitation learning via interactive noregret learning. arXiv preprint arXiv:1406.5979. Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627– 635. Benoît Sagot. 2010. The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French. In 7th international conference on Language Resources and Evaluation (LREC 2010). Wolfgang Seeker and Jonas Kuhn. 2013. Morphological and syntactic case in statistical dependency parsing. Computational Linguistics, 39(1):23–55. Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell system technical journal, 27(3):379–423. Andrea D Sims. 2015. Inflectional defectiveness, volume 148. Cambridge University Press. Andrea D Sims. 2020. Inflectional networks: Graphtheoretic tools for inflectional typology. Proceedings of the Society for Computation in Linguistics: Vol, 3:10. Andrea D Sims and Jeff Parker. 2016. How inflection class systems work: On the informativity of implicative structure. Word Structure, 9(2):215–239. 8259 Gregory Stump and Raphael A Finkel. 2013. Morphological typology: From word to paradigm, volume 138. Cambridge University Press. Marko Tadi´c and Sanja Fulgosi. 2003. Building the Croatian morphological lexicon. In Proceedings of the 2003 EACL Workshop on Morphological Processing of Slavic Languages, pages 41–46. Association for Computational Linguistics. Anna Thornton. 2011. Overabundance (multiple forms realizing the same cell): a non-canonical phenomenon in Italian verb morphology. Anna M Thornton. 2010. Towards a typology of overabundance. In Décembrettes 7: International Conference on Morphology, University of Toulouse, pages 2–3. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Vincent Wan, Chun an Chan, Tom Kenter, Jakub Vit, and Rob Clark. 2019. CHiVE: Varying prosody in speech synthesis with a linguistically driven dynamic hierarchical conditional variational network. In ICML 2019. 
Sean Welleck, Kianté Brantley, Hal Daumé III, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In International Conference on Machine Learning, pages 6716–6726. Shijie Wu and Ryan Cotterell. 2019. Exact hard monotonic attention for character-level transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1530– 1537. Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425–4438. Wolfgang Ullrich Wurzel. 1989. Inflectional morphology and naturalness, volume 9. Springer Science & Business Media. A. A. Zaliznyak. 1980. The grammar dictionary of the Russian language. Heiga Zen, Yannis Agiomyrgiannakis, Niels Egberts, Fergus Henderson, and Przemysław Szczepaniak. 2016. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices. In Interspeech 2016, 17th Annual Conference of the International Speech Communication Association. A Expanded Results The figures in this appendix demonstrate the coverage after 1 and 2 sources for every language considered as well as their inter-predictability heat maps. Figures are enlarged to show all individual cells for the reader’s convenience. Hence, the Arabic and Latin figures in this appendix correspond to Figures 3 and 4 in Section 6, but show more detail. 8260 Figure 6: Coverage of Arabic target cells after SSA chooses the first two sources. 8261 Figure 7: Inter-predictability heat map of Arabic cells. 8262 Figure 8: Coverage of German target cells after SSA chooses the first two sources. 8263 Figure 9: Inter-predictability heat map of German cells. 8264 Figure 10: Coverage of English target cells after SSA chooses the first two sources. 8265 Figure 11: Inter-predictability heat map of English cells. 8266 Figure 12: Coverage of Russian target cells after SSA chooses the first two sources. 8267 Figure 13: Inter-predictability heat map of Russian cells. 8268 Figure 14: Coverage of Latin target cells after SSA chooses the first two sources. 8269 Figure 15: Inter-predictability heat map of Latin cells. 8270 Figure 16: Coverage of Hungarian target cells after SSA chooses the first two sources. 8271 Figure 17: Inter-predictability heat map of Hungarian cells. 8272 Figure 18: Coverage of Irish target cells after SSA chooses the first two sources. 8273 Figure 19: Inter-predictability heat map of Irish cells.
2020
733
Improving Chinese Word Segmentation with Wordhood Memory Networks Yuanhe Tian♥∗, Yan Song♠† , Fei Xia♥, Tong Zhang♦, Yonggang Wang♠ ♥University of Washington, ♠Sinovation Ventures ♦The Hong Kong University of Science and Technology ♥{yhtian, fxia}@uw.edu ♠[email protected][email protected][email protected] Abstract Contextual features always play an important role in Chinese word segmentation (CWS). Wordhood information, being one of the contextual features, is proved to be useful in many conventional character-based segmenters. However, this feature receives less attention in recent neural models and it is also challenging to design a framework that can properly integrate wordhood information from different wordhood measures to existing neural frameworks. In this paper, we therefore propose a neural framework, WMSEG, which uses memory networks to incorporate wordhood information with several popular encoder-decoder combinations for CWS. Experimental results on five benchmark datasets indicate the memory mechanism successfully models wordhood information for neural segmenters and helps WMSEG achieve state-ofthe-art performance on all those datasets. Further experiments and analyses also demonstrate the robustness of our proposed framework with respect to different wordhood measures and the efficiency of wordhood information in cross-domain experiments.1 1 Introduction Unlike most written languages in the world, the Chinese writing system does not use explicit delimiters (e.g., white space) to separate words in written text. Therefore, Chinese word segmentation (CWS) conventionally serves as the first step in Chinese language processing, especially for many downstream tasks such as text classification (Zeng et al., 2018), question answering (Liu et al., 2018), machine translation (Yang et al., 2018), etc. In the past two decades, the mainstream methodology of CWS treated CWS as a character-based ∗Partially done as an intern at Sinovation Ventures. †Corresponding author. 1WMSEG (code and the best performing models) is released at https://github.com/SVAIGBA/WMSeg. sequence labeling task (Tseng et al., 2005; Song et al., 2006; Sun and Xu, 2011; Pei et al., 2014; Chen et al., 2015; Zhang et al., 2016; Chen et al., 2017; Ma et al., 2018; Higashiyama et al., 2019; Qiu et al., 2019), where various studies were proposed to effectively extract contextual features to help better predicting segmentation labels for each character (Zhang et al., 2013; Zhou et al., 2017; Higashiyama et al., 2019). Among all the contextual features, the ones measuring wordhood for n-grams illustrate their helpfulness in many nonneural CWS models (Sun et al., 1998; Xue and Shen, 2003; Feng et al., 2004; Song and Xia, 2012). Later, following the track of the sequence labeling methodology, recent approaches with neural networks are proved to be powerful in this task (Chen et al., 2015; Ma et al., 2018; Higashiyama et al., 2019). However, since neural networks (e.g., LSTM) is considered to be able to provide a good modeling of contextual dependencies, less attention is paid to the idea of explicitly leveraging wordhood information of n-grams in the context as what had previously been done in non-neural models. Although some studies sidestepped the idea by incorporating contextual n-grams (Pei et al., 2014; Zhou et al., 2017) or word attention (Higashiyama et al., 2019) into the sequence labeling process, they are limited in either concatenating word and character embeddings or requiring a well-defined word lexicon. 
Therefore, the best way of representing contextual information such as wordhood features in neural CWS models has not been fully explored. Moreover, considering that there are various choices of wordhood measures, it is also a challenge to design a framework that can incorporate different wordhood features so that the entire CWS approach can be general while being effective in accommodating the input from any measure. In this paper, we propose WMSEG, a neural framework with a memory mechanism, to improve CWS by leveraging wordhood information. In detail, we utilize key-value memory networks (Miller et al., 2016) to incorporate character n-grams with their wordhood measurements in a general sequence labeling paradigm, where the memory module can be combined with different prevailing encoders (e.g., BiLSTM and BERT) and decoders (e.g., softmax and CRF). For the memory, we map n-grams and their wordhood information to keys and values, respectively, and one can use different wordhood measures to generate such information. Then, for each input character, the memory module addresses all the n-grams in the key list that contain the character and uses their corresponding values to generate an output vector that enhances the decoder in assigning a segmentation label to the character. Experimental results from five widely used benchmark datasets confirm that WMSEG with wordhood information can improve CWS over powerful baseline segmenters and outperform previous studies, with state-of-the-art performance observed on all the datasets. Further experiments and analyses are also performed to investigate different factors affecting WMSEG’s performance.
Figure 1: The architecture of WMSEG. “N” denotes a lexicon constructed by wordhood measures. N-grams (keys) appearing in the input sentence “部分居民生活水平” (some residents’ living standard) and the wordhood information (values) of those n-grams are extracted from the lexicon. Then, together with the output from the text encoder, n-grams (keys) and their wordhood information (values) are fed into the memory module, whose output passes through a decoder to get final predictions of segmentation labels for every character in the input sentence.
2 The Proposed Framework
Following previous studies, we regard CWS as a character-based sequence labeling task. The architecture of WMSEG is illustrated in Figure 1, where the general sequence labeling paradigm is the top part with a memory module inserted between the encoder and the decoder. The model predicts a tag (e.g., tag B for the 1st character in a word) for each character, and the predicted tag sequence is then converted to word boundaries in the system output. The bottom part of the figure starts with a lexicon N, which is simply a list of n-grams and can be built by various methods (see Section 2.1). Given an input sentence X = x1x2...xi...xl, for each character xi in X, our approach uses the lexicon N to generate (keys, values) for xi and sends them to the memory module. Overall, the process by which WMSEG performs CWS can be formalized as
\hat{Y} = \arg\max_{Y \in \mathcal{T}^{l}} p(Y | X, M(X, N))   (1)
where \mathcal{T} denotes the set of all types of segmentation labels, and l stands for the length of the input sentence X. The output Y is the corresponding label sequence for X, with \hat{Y} representing the best label sequence according to the model. M is the memory module proposed in this paper, which consumes X and N and provides the corresponding wordhood information for X to maximize p.
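To make the label-to-boundary conversion above concrete, the following is a minimal sketch (written for this presentation, not taken from the released WMSEG code) of how a predicted per-character label sequence can be turned back into words, assuming the {B, I, E, S} tag set adopted later in Section 3.3; the segmentation of the Figure 1 sentence shown in the comment is our own illustration.

def labels_to_words(chars, labels):
    # Convert per-character segmentation labels into a list of words.
    words, buffer = [], ""
    for ch, lab in zip(chars, labels):
        if lab == "S":              # single-character word
            if buffer:              # defensively flush an unfinished word
                words.append(buffer)
                buffer = ""
            words.append(ch)
        elif lab == "B":            # beginning of a multi-character word
            if buffer:
                words.append(buffer)
            buffer = ch
        elif lab == "I":            # inside a word
            buffer += ch
        else:                       # "E": ending of a word
            buffer += ch
            words.append(buffer)
            buffer = ""
    if buffer:                      # flush trailing characters if labels are malformed
        words.append(buffer)
    return words

# Example (our illustration of one plausible segmentation of the Figure 1 sentence):
# labels_to_words(list("部分居民生活水平"), ["B", "E", "B", "E", "B", "E", "B", "E"])
# -> ["部分", "居民", "生活", "水平"]

Such a routine only covers the post-processing step; the label prediction itself follows Eq. 1.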
In the rest of this section, we describe the construction of the n-gram lexicon, the proposed wordhood memory networks, and how they are integrated with different encoders and decoders, respectively.
2.1 Lexicon Construction
To build the wordhood memory networks, the first step is to construct the lexicon N, because the keys in the memory module are built upon N, where each n-gram in N is stored as a key in the memory.2 In this study, N is simply a list of n-grams and, technically, it can be constructed through many existing resources or automatic methods. Compared to using an off-the-shelf lexicon or the word dictionary from the training data, it is hypothesized that, for the purpose of incorporating wordhood information into the general sequence labeling framework, unsupervised wordhood measures, such as accessor variety (AV) (Feng et al., 2004), pointwise mutual information (PMI) (Sun et al., 1998), and description length gain (DLG) (Kit and Wilks, 1999), would perform better. For example, AV measures the wordhood of an n-gram k by
AV(k) = \min(L_{av}(k), R_{av}(k))   (2)
where L_{av}(k) and R_{av}(k) denote the number of different character types that can precede (left access number) or follow (right access number) the n-gram k. Normally, the higher the AV score is, the more likely the n-gram forms a word.
2 Therefore, n-gram and key are equivalent in the memory.
2.2 Wordhood Memory Networks
To encode both n-grams and the wordhood information they carry for CWS, an appropriate framework is required. Compared with other network structures that can exploit n-grams, such as the attention mechanism, key-value memory networks are more appropriate for modeling such pairwise knowledge via transforms between keys and values. In the memory, we map n-grams and their wordhood information to keys and values, respectively. Following Miller et al. (2016), we illustrate in this subsection how our memory module generates and operates on the (keys, values) pair for each xi.
N-gram Addressing  For each xi in a training/test instance, there are normally many n-grams in N that contain xi. Therefore, the n-gram addressing step generates all n-grams from xi’s context (including xi) and keeps only the ones that appear in N, resulting in Ki = [ki,1, ki,2, ..., ki,j, ..., ki,mi], where xi is a part of each ki,j. For example, in the input sentence shown in Figure 1, the n-grams that contain the character x4 = “民” (people) form the list K4 = [“民” (people), “居民” (resident), “民生” (livelihood), “居民生活” (residents’ life)], which are highlighted in the dashed boxes illustrated at the bottom part of the figure. Then, the memory module activates the corresponding keys, addresses their embeddings (denoted as e^k_{i,j} for each ki,j), and computes the probability distribution over them with
p_{i,j} = \frac{\exp(h_i \cdot e^k_{i,j})}{\sum_{j=1}^{m_i} \exp(h_i \cdot e^k_{i,j})}   (3)
for each key, where hi is the vector for xi, which can be generated by any text encoder.
Wordhood Reading  Values in the memory represent the wordhood information for a given xi and ki,j pair, which is not a straightforward mapping because xi may have different roles in each ki,j. For example, ki,j delivers different wordhood information when xi appears at the beginning or at the ending of ki,j.
Rule | vi,j
xi is the beginning of the key ki,j | VB
xi is inside the key ki,j | VI
xi is the ending of the key ki,j | VE
xi is the single-character key ki,j | VS
Table 1: The rules for assigning different values to xi according to its position in a key ki,j.
Therefore, we set rules in Table 1 to read a value for a key according to different situations of xi in ki,j, where we use a set of values {VB, VI, VE, VS} with embeddings {eVB, eVI, eVE, eVS} (illustrated in different colors in Figure 1) so that all n-grams should map to one of the values based on xi’s position in ki,j. To illustrate that, in the aforementioned example, ngrams in K4 for x4 =“民” (people) are mapped to a value list V4 = [VS, VE, VB, VI] (see Figure 1). As a result, each Ki for xi has a list of values denoted by Vi = [vi,1, vi,2 · · · , vi,j. · · · vi,mi]. Then the total wordhood memory for xi is computed from the weighted sum of all keys and values by oi = mi X j=1 pi,jev i,j (4) where ev i,j is the embedding for vi,j. Afterwards, oi is summed element-wise with hi and the result is passed through a fully connected layer by ai = Wo · (hi + oi) (5) MSR PKU AS CITYU CTB6 TRAIN TEST TRAIN TEST TRAIN TEST TRAIN TEST TRAIN DEV TEST CHAR # 4,050K 184K 1,826K 173K 8,368K 198K 2,403K 68K 1,056K 100K 134K WORD # 2,368K 107K 1,110K 104K 5,500K 123K 1,456K 41K 641K 60K 82K CHAR TYPE # 5K 3K 5K 3K 6K 4K 5K 3K 4K 3K 3K WORD TYPE # 88K 13K 55K 13K 141K 19K 69K 9K 42K 10K 12K OOV RATE 2.7 5.8 4.3 7.2 5.4 5.6 Table 2: Statistics of the five benchmark datasets, in terms of the number of character and word tokens and types in each training and test set. Out-of-vocabulary (OOV) rate is the percentage of unseen word tokens in the test set. where Wo is a trainable parameter and the output ai ∈R|T | is a weight vector with its each dimension corresponding to a segmentation label. 2.3 Text Encoders and Decoders To ensure wordhood memory networks functionalize, one requires to generate hi for each xi by [h1, h2, ..., hi, ..., hl] = Encoder(X) (6) where the Encoder can be different models, e.g., Bi-LSTM and BERT (Devlin et al., 2019), to represent a sequence of Chinese characters into vectors. Once all ai are generated from the memory for each xi, a decoder takes them to predict a sequence of segmentation labels bY = by1by2 · · · byl for X by bY = Decoder(A) (7) where A = a1a2 · · · ai · · · al is the sequence of output from Eq. 5. The Decoder can be implemented by different algorithms, such as softmax: byi = arg max exp(at i) P|T | t=1 exp(at i) (8) where at i is the value at dimension t in ai. Or one can use CRF for the Decoder: byi = arg max yi∈T exp(Wc · ai + bc) P yi−1yi exp(Wc · ai) + bc (9) where Wc ∈R|T |×|T | and bc ∈R|T | are trainable parameters to model the transition for yi−1 to yi. 3 Experimental Settings 3.1 Datasets We employ five benchmark datasets in our experiments: four of them, namely, MSR, PKU, AS, and CITYU, are from SIGHAN 2005 Bakeoff (Emerson, 2005) and the fifth one is CTB6 (Xue et al., 2005). AS and CITYU are in traditional Chinese characters whereas the other three use simplified BC BN MZ NW WEB CHAR # 275K 483K 403K 443K 342K WORD # 184K 287K 258K 260K 210K CHAR TYPE # 3K 3K 4K 3K 4K WORD TYPE # 12K 23K 26K 21K 21K OOV RATE 3.4 6.0 8.9 5.9 7.1 Table 3: Statistics of CTB7 with respect to five different genres. The OOV rate for each genre is computed based on the vocabulary from all the other four genres. ones. Following previous studies (Chen et al., 2015, 2017; Qiu et al., 2019), we convert traditional Chinese characters in AS and CITYU into simplified ones.3 For MSR, AS, PKU, and CITYU, we follow their official training/test data split. For CTB6, we use the same split as that stated in Yang and Xue (2012); Chen et al. (2015); Higashiyama et al. 
(2019), and only use its test set for the final experiment. Table 2 show the statistics of all datasets in terms of the number of characters and words and the percentage of out-of-vocabulary (OOV) words in the dev/test sets with respect to the training set. In addition, we also use CTB7 (LDC2010T07) to perform our cross-domain experiments. There are five genres in CTB7, including broadcast conversation (BC), broadcast news (BN), magazine (MZ), newswire (NW), and weblog (WEB). The statistics of all the genres are reported in Table 3, where the OOV rate for each genre is computed according to the union of all other genres. For example, the OOV rate for BC is computed with respect to the union of BN, MZ, NW, and WEB. 3.2 Wordhood Measures We experiment with three wordhood measures to construct N. The main experiment adopts the aforementioned AV as the measure to rank all ngrams, because AV was shown to be the most effective wordhood measure in previous CWS studies (Zhao and Kit, 2008). Since AV is sensitive to 3The conversion scripts are from https://github. com/skydark/nstools/tree/master/zhtools MSR PKU AS CITYU CTB6 AV 49K 71K 105K 104K 50K PMI 18K 16K 22K 21K 16K DLG 32K 22K 32K 27K 16K Table 4: The size of lexicon N generated from different wordhood measures under our settings. corpus size, in our experiments we use different AV thresholds when building the lexicon for each dataset: the threshold is 2 for PKU, CITYU, CTB6 and CTB7, and 5 for MSR and AS. To test the the robustness of WMSEG, we also try two other wordhood measures, i.e., PMI (Sun et al., 1998) and DLG (Kit and Wilks, 1999). PMI measures pointwise mutual information between two Chinese characters, x′ and x′′, via PMI(x′, x′′) = log p(x′x′′) p(x′)p(x′′) (10) where p computes the probability of an n-gram (i.e., x′, x′′ and x′x′′) in a dataset. A high PMI score indicates that the two characters co-occur a lot in the dataset and are likely to form a word. Hence, we use a threshold to determine whether a word boundary delimiter should be inserted between two adjacent characters in the dataset. In our experiments, we set the threshold to 0, PMI score lower than it will result in a segmentation. In other words, for each dataset, we use PMI to perform unsupervised segmentation and collect the segmented words from it to build the n-gram lexicon N. The other measure, DLG, computes wordhood of an n-gram s according to the change of the description length of a dataset D with and without treating that n-gram as a segment: DLG(s) = DL(D) −DL(D[r →s] ⊕s) (11) where D denotes the original dataset and D[r → s]⊕s represents a new dataset by treating s as a new segment, replacing all the occurrences of s with a new symbol r (which can be seen as an index for newly identified segment s), and then appending s at the end. DL(D) is the Shannon-Fano code length of a dataset D, calculated by DL(D) = − X x∈V c(x)logc(x) |D| (12) where V refers to the vocabulary of D and c(x) the count of segment x. We set the threshold for DLG to 0 and use the n-grams whose DLG is higher than it to build lexicon N for each dataset. Bi-LSTM BERT / ZEN Word Embedding Size 200 Hidden State Size 100 768 Hidden State Layers 1 12 Key Embedding Size 200 768 Value Embedding Size 200 768 Dropout Rate 0.2 0.1 Table 5: The hyper-parameters for our models w.r.t. different encoders, i.e., Bi-LSTM, BERT and ZEN. 
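To make the lexicon construction of Section 3.2 concrete, below is a rough sketch (ours, not the authors' implementation) of how AV scores from Eq. 2 could be computed over the raw, space-stripped text and thresholded into a lexicon N. The maximum n-gram length and the simplified handling of sentence boundaries are our assumptions, since they are not specified here, while the default threshold mirrors the value used for PKU, CITYU, and CTB6 above.

from collections import defaultdict

def build_av_lexicon(raw_text, max_n=4, threshold=2):
    # Collect n-grams whose accessor variety AV(k) = min(Lav(k), Rav(k)) meets the threshold.
    left, right = defaultdict(set), defaultdict(set)
    text = raw_text.replace(" ", "")            # white spaces are removed first, as in the paper
    for n in range(1, max_n + 1):               # max_n is an assumption, not a value from the paper
        for i in range(len(text) - n + 1):
            ngram = text[i:i + n]
            if i > 0:
                left[ngram].add(text[i - 1])    # distinct left accessors
            if i + n < len(text):
                right[ngram].add(text[i + n])   # distinct right accessors
    lexicon = {}
    for ngram in left.keys() | right.keys():
        av = min(len(left[ngram]), len(right[ngram]))
        if av >= threshold:
            lexicon[ngram] = av
    return lexicon

In the same spirit, the PMI and DLG criteria described above could replace the scoring step while the thresholding and lexicon collection remain unchanged.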
All aforementioned measures are conducted on the union of the training and test sets, so that ngrams and their wordhood information are shared in both the learning and prediction phase. We remove all white spaces from the data and use the resulted raw texts to perform these measures. Table 4 shows the sizes of the lexicons created with these wordhood measures on the five datasets. 3.3 Model Implementation Following previous studies (Sun and Xu, 2011; Chen et al., 2015, 2017; Ma et al., 2018; Qiu et al., 2019), we use four segmentation labels in our experiments, i.e., T = {B, I, E, S}. Among them, B, I, and E indicate a character is the beginning, inside, and the ending of a word and S denotes that the character is a single-character word. Since text representation plays an important role to facilitate many tasks (Conneau et al., 2017; Song et al., 2017, 2018; Sileo et al., 2019), we try two effective and well-known encoders, i.e., Bi-LSTM and BERT4. In addition, we test WMSEG on a pretrained encoder for Chinese language, i.e., ZEN5 (Diao et al., 2019), which learns n-gram information in its pre-training from large raw corpora and outperforms BERT on many Chinese NLP tasks. Table 5 shows the hyperparameter settings for all the encoders: for the Bi-LSTM encoder, we follow the setting of Chen et al. (2015) and adopt their character embeddings for ex i , and for BERT and ZEN encoders, we follow the default settings in their papers (Devlin et al., 2019; Diao et al., 2019). For the decoders, we use softmax and CRF, and set their loss functions as cross-entropy and negative log-likelihood, respectively. The memory module can be initialized by random or pre-trained word embeddings for keys and values. In our experiments, we use random initialization for them.6 4We use the Chinese base model from https://s3. amazonaws.com/models.huggingface.co/. 5https://github.com/sinovation/ZEN. 6We tried different initialization methods, and they did not show a significant difference in CWS performance. CONFIG MSR PKU AS CITYU CTB6 EN-DN WM F ROOV F ROOV F ROOV F ROOV F ROOV BL-SM × 95.53 62.96 91.85 48.84 94.52 62.21 93.79 67.26 93.56 67.39 √ 95.61 63.94 91.97 49.00 94.70 64.18 93.88 69.20 93.70 68.52 BL-CRF × 95.80 66.17 92.35 52.04 94.39 61.59 93.96 67.84 93.84 70.81 √ 95.98 68.75 92.43 56.80 95.07 68.17 94.20 69.91 94.03 71.88 BT-SM × 97.84 86.32 96.20 84.43 96.33 77.86 97.51 86.69 96.90 88.46 √ 98.16 86.50 96.47 86.34 96.52 78.67 97.77 86.62 97.13 88.30 BT-CRF × 97.98 85.52 96.32 85.04 96.34 77.75 97.63 86.66 96.98 87.43 √ 98.28 86.67 96.51 86.76 96.58 78.48 97.80 87.57 97.16 88.00 ZEN-SM × 98.35 85.78 96.27 84.50 96.38 77.62 97.78 90.69 97.08 86.20 √ 98.36 85.30 96.49 84.95 96.55 78.02 97.86 90.89 97.22 86.83 ZEN-CRF × 98.36 86.82 96.36 84.81 96.39 77.81 97.81 91.78 97.13 87.08 √ 98.40 84.87 96.53 85.36 96.62 79.64 97.93 90.15 97.25 88.46 Table 6: Experimental results of WMSEG on SIGHAN2005 and CTB6 datasets with different configurations. “ENDN” stands for the text encoders (“BL” for Bi-LSTM and “BT” for BERT) and decoders (“SM” for softmax and “CRF” for CRF). The “WM” column indicates whether the wordhood memories are used (√) or not (×). 4 Results and Analyses In this section, we firstly report the results of WMSEG with different configurations on five benchmark datasets and its comparison with existing models. Then we explore the effect of using different lexicon N and different wordhood measures in WMSEG. 
We also use a cross-domain experiment to illustrate the effectiveness of WMSEG when more OOVs are in the test set. Lastly, a case study is performed to visualize how the wordhood information used in WMSEG helps CWS. 4.1 Results on Benchmark Datasets In the main experiment, we illustrate the validity of the proposed memory module by comparing WMSEG in different configurations, i.e., with and without the memory in integrating with three encoders, i.e., Bi-LSTM, BERT, and ZEN, and two decoders, i.e., softmax and CRF. The experimental results on the aforementioned five benchmark datasets are shown in Table 6, where the overall F-score and the recall of OOV are reported. With five datasets and six encoder-decoder configurations, the table includes results from 30 pairs of experiments, each pair with or without using the memories. There are several observations drawn from the results. First, the overall comparison clearly indicates that, WMSEG (i.e., the model with wordhood memories) outperforms the baseline (i.e., the model without wordhood memories) for all 30 pairs in terms of F-scores and for 25 pairs in terms of ROOV . Second, the proposed memory module works smoothly with different encoders and decoders, where some improvement is pretty significant; for instance, when using Bi-LSTM as the encoder and CRF as the decoder, WMSEG improves the F-score on the AS dataset from 94.39 to 95.07 and ROOV from 61.59 to 68.17. With BERT or ZEN as the encoder, even when the baseline system performs very well, the improvement of WMSEG on F-scores is still decent. Third, among the models with ZEN, the ones with the memory module further improve their baselines, although the context information carried by n-grams is already learned in pre-training ZEN. This indicates that wordhood information provides additional cues (besides the contextual features) that can benefit CWS, and our proposed memory module is able to provide further task-specific guidance to an n-gram integrated encoder. Fourth, the wordhood memory shows its robustness with different lexicon size when we consider WMSEG’s performance with the lexicon statistics reported in Table 4 together. To summarize, the results in this experiment not only confirm that wordhood information is a simple yet effective source of knowledge to help CWS without requiring external support such as a well-defined dictionary or manually crafted heuristics, but also fully illustrate that the design of our model can effectively integrate this type of knowledge. To further illustrate the validity and the effectiveness of WMSEG, we compare our best-performing model with the ones in previous studies on the same benchmark datasets. The comparison is presented in Table 7, where WMSEG (both the one with BERT and ZEN) outperforms all existing models with respect to the F-scores and achieves new state-of-the-art performance on all datasets. MSR PKU AS CITYU CTB6 F ROOV F ROOV F ROOV F ROOV F ROOV ZHANG ET AL. (2013) 97.5 96.1 73.1 PEI ET AL. (2014) 97.2 95.2 MA AND HINRICHS (2015) 96.6 87.2 95.1 76.0 CHEN ET AL. (2015) 97.4 96.5 96.0 XU AND SUN (2016) 96.3 96.1 95.8 ZHANG ET AL. (2016) 97.7 95.7 95.95 CHEN ET AL. (2017) 96.04 71.60 94.32 72.64 94.75 75.34 95.55 81.40 WANG AND XU (2017) 98.0 96.5 ZHOU ET AL. (2017) 97.8 96.0 96.2 MA ET AL. (2018) 98.1 80.0 96.1 78.8 96.2 70.7 97.2 87.5 96.7 85.4 GONG ET AL. (2019) 97.78 64.20 96.15 69.88 95.22 77.33 96.22 73.58 HIGASHIYAMA ET AL. (2019) 97.8 96.4 QIU ET AL. 
(2019) 98.05 78.92 96.41 78.91 96.44 76.39 96.91 86.91 WMSEG (BERT-CRF) 98.28 86.67 96.51 86.76 96.58 78.48 97.80 87.57 97.16 88.00 WMSEG (ZEN-CRF) 98.40 84.87 96.53 85.36 96.62 79.64 97.93 90.15 97.25 88.46 Table 7: Performance (F-score) comparison between WMSEG (BT-CRF and ZEN-CRF with woodhood memory networks) and previous state-of-the-art models on the test set of five benchmark datasets. 4.2 Cross-Domain Performance As domain variance is always an important factor affecting the performance of NLP systems especially word semgenters (Song et al., 2012; Song and Xia, 2013), in addition to the experiments on benchmark datasets, we also run WMSEG on CTB7 across domains (genres in this case) with and without the memory module. To test on each genre, we use the union of the data from the other four genres to train our segmenter and use AV to extract n-grams from the entire raw text from CTB7 in this experiment. Table 8 reports the results in Fscore and OOV recall, which show a similar trend as that in Table 6, where WMSEG outperforms baselines for all five genres. Particularly, for genres with large domain variance (e.g., the ones with high OOV rates such as MZ and WEB), CWS is difficult, and its relatively low F-scores in Table 8 from baseline models confirm that. Yet WMSEG offers a decent way to improve cross-domain CWS performance without any help from external knowledge or complicated model design, which further illustrates the effectiveness of the memory module. The reason could be that many n-grams are shared in both training and test data; these n-grams with their wordhood information present a strong indication to the model on what combinations of characters can be treated as words, even though some of them never appear in the training data. 4.3 Effect of Using Different N To analyze the robustness of WMSEG with respect to the lexicon, we compare four ways (ID: 2-5 in Table 9) of constructing the lexicon (N): the first one simply uses the vocabulary from the training data (marked as GOLD LABEL in Table 9; ID: 2); the other three ways use AV to extract n-grams from the unsegmented training data only (ID: 3), the test data only (ID: 4), and training + test set (ID: 5), respectively.7 Table 9 shows the results of running BERT-CRF on the WEB genre of CTB7 without the wordhood memories (ID: 1) and with the memories (ID: 2-5), following the cross-domain setting in §4.2. While the four methods with memories achieve similar results on the F score, indicating the robustness of our proposed framework, the one that builds N using the raw texts from both training and test sets through unsupervised method (ID: 5) achieves the biggest improvement on ROOV , demonstrating the advantage of including the unlabeled test set by incorporating the results from unsupervised wordhood measures into the models. 4.4 Effect of Different Wordhood Measures WMSEG provides a general way of integrating wordhood information for CWS, we expect other wordhood measures to play the same role in it. Therefore, we test PMI and DLG in our model and compare them with the previous results from AV (see Table 6). Specifically, we use our best performing BERT-based model, i.e., BERT-CRF, with the n-gram lexicons constructed by the aforementioned three measures and run it on all benchmark datasets. We draw the histograms of the F-scores obtained from WMSEG with each measure (red, green, and blue bars for AV, PMI, and DLG, re7One could also use an external corpus to build N, which is not considered in this experiment. 
CONFIG BC BN MZ NW WEB EN-DN WM F ROOV F ROOV F ROOV F ROOV F ROOV BL-SM × 93.73 63.39 93.65 68.88 90.55 66.95 93.70 69.57 90.81 55.50 √ 94.04 63.53 93.91 72.32 90.76 65.65 93.83 72.40 91.22 56.62 BL-CRF × 93.95 65.60 93.87 71.89 90.67 67.13 93.87 72.17 91.12 57.51 √ 94.21 66.81 94.11 74.22 90.95 67.29 93.96 74.38 91.49 58.37 BT-SM × 96.27 80.76 96.88 87.90 94.97 84.45 97.08 89.78 94.82 74.00 √ 96.41 81.15 97.00 89.47 95.10 85.48 97.24 91.96 95.00 75.51 BT-CRF × 96.25 79.04 96.87 89.15 94.94 85.27 96.99 91.34 94.79 75.58 √ 96.43 81.29 97.09 90.29 95.11 85.32 97.21 92.48 95.03 76.30 ZEN-SM × 96.39 79.97 96.95 88.93 95.05 85.14 97.17 91.33 94.03 75.33 √ 96.45 81.34 97.03 89.78 95.06 85.60 97.21 91.73 95.08 75.60 ZEN-CRF × 96.30 80.05 96.97 90.38 94.93 85.64 97.10 91.03 94.90 74.98 √ 96.50 80.44 97.11 90.29 95.13 85.96 97.24 91.68 95.04 75.74 Table 8: Experimental results on five genres of CTB7. Abbreviations follow the same notation in Table 6. ID TRAIN TEST GOLD LABEL F ROOV 1 94.79 75.58 2 × × √ +0.22 +0.21 3 √ × × +0.21 +0.20 4 × √ × +0.23 +0.33 5 √ √ × +0.24 +0.72 Table 9: Comparisons of performance gain on the WEB genre of CTB7 with respect to the baseline BERT-CRF model when the n-gram lexicon N for WMSEG is built upon different sources. √and × refer to if a corresponding data source is used or not, respectively. spectively) in Figure 2, where the F-scores of the baseline model are also presented in orange bars. As shown in the figure, the performances of using the three measures are very similar, which indicates that WMSEG is able to robustly incorporate the wordhood information from various measures, despite that those measures focus on different aspects of n-grams when determining whether the ngrams should be treated as words. Particularly, consider that the lexicons produced by the three measures are rather different in their sizes (as shown in Table 4), the results in Figure 2 strongly demonstrate the effectiveness of our proposed approach in learning with a limited number of n-grams. This observation also reveals the possibility that many n-grams may be redundant for our model, and WMSEG is thus able to identify the most useful ones from them, which is analyzed in the case study. 4.5 Case Study To investigate how the memory learns from the wordhood information carried by n-grams, we conduct a case study with an example input sentence “他/从小/学/电脑/技术” (He learned computer techniques since childhood). In this sentence, the Figure 2: The F-scores of WMSEG (BERT) using three different wordhood measures, namely AV (red), PMI (green), and DLG (blue), on five benchmark datasets. n-gram “从小学” is ambiguous with two possible interpretations: “从小/学” (learn since childhood) and “从/小学” (from primary school). Native Chinese speakers can easily choose the first one with the given context but a word segmenter might incorrectly choose the second segmentation. We feed this case into our BERT-CRF model with the memory module. In Figure 3, we visualize the resulted weights that learned from keys (a) and values (b) of the memory, as well as from the final tagger (c). The heatmaps of all keys and values in the memory with respect to each corresponding input character clearly illustrate that the appropriate n-grams, e.g., “他” (he), “学” (learn), “从小” (from childhood), etc., receive higher weights than others and the corresponding values for them are also emphasized, which further affects final CWS tagging so that the weight distributions from (b) and (c) look alike to each other. 
Therefore, this visualization explains, to some extent, that the proposed memory mechanism can identify and distinguish important n-grams within a certain context and thus improves CWS performance accordingly. Figure 3: Heatmaps of weights learned for (a) keys and (b) values in the memory, and (c) the tags from the decoder, with respect to each character in an input sentence. Higher weights are visualized with darker colors. 5 Related Work As one of the most fundamental NLP tasks for Chinese language processing, CWS has been studied for decades, with two steams of methods, i.e., word-based and character-based ones (Xue and Shen, 2003; Peng et al., 2004; Levow, 2006; Zhao et al., 2006; Zhao and Kit, 2008; Li and Sun, 2009; Song et al., 2009a; Li, 2011; Sun and Xu, 2011; Mansur et al., 2013; Zhang et al., 2013; Pei et al., 2014; Chen et al., 2015; Ma and Hinrichs, 2015; Liu et al., 2016; Zhang et al., 2016; Wang and Xu, 2017; Zhou et al., 2017; Chen et al., 2017; Ma et al., 2018; Higashiyama et al., 2019; Gong et al., 2019; Qiu et al., 2019). Among these studies, most of them follow the character-based paradigm to predict segmentation labels for each character in an input sentence; n-grams are used in some of these studies to enhance model performance, which is also observed in many other NLP tasks (Song et al., 2009b; Xiong et al., 2011; Shrestha, 2014; Shi et al., 2016; Diao et al., 2019). Recently, CWS benefits from neural networks and further progress are made with embeddings (Pei et al., 2014; Ma and Hinrichs, 2015; Liu et al., 2016; Zhang et al., 2016; Wang and Xu, 2017; Zhou et al., 2017), recurrent neural models (Chen et al., 2015; Ma et al., 2018; Higashiyama et al., 2019; Gong et al., 2019) and even adversarial learning (Chen et al., 2017). To enhance CWS with neural models, there were studies leverage external information, such as vocabularies from auto-segmented external corpus (Wang and Xu, 2017; Higashiyama et al., 2019), where Higashiyama et al. (2019) introduced a word attention mechanism to learn from large granular texts during the CWS process. In addition, the studies from Chen et al. (2017) and Qiu et al. (2019) try to improve CWS by learning from data annotated through different segmentation criteria. Moreover, there is a study leveraging auto-analyzed syntactic knowledge obtained from off-the-shelf toolkits to help CWS and part-of-speech tagging (Tian et al., 2020). Compare to these studies, WMSEG offers an alternative solution to robustly enhancing neural CWS models without requiring external resources. 6 Conclusion In this paper, we propose WMSEG, a neural framework for CWS using wordhood memory networks, which maps n-grams and their wordhood information to keys and values in it and appropriately models the values according to the importance of keys in a specific context. The framework follows the sequence labeling paradigm, and the encoders and decoders in it can be implemented by various prevailing models. To the best of our knowledge, this is the first work using key-value memory networks and utilizing wordhood information for neural models in CWS. Experimental results on various widely used benchmark datasets illustrate the effectiveness of WMSEG, where state-of-the-art performance is achieved on all datasets. Further experiments and analyses also demonstrate the robustness of WMSEG in the cross-domain scenario as well as when using different lexicons and wordhood measures. References Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. 
Long Short-Term Memory Neural Networks for Chinese Word Segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197–1206. Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial Multi-Criteria Learning for Chinese Word Segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1193–1203. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2019. ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations. ArXiv, abs/1911.00720. Thomas Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, pages 123–133. Haodi Feng, Kang Chen, Xiaotie Deng, and Weimin Zheng. 2004. Accessor Variety Criteria for Chinese Word Extraction. Computational Linguistics, 30(1):75–93. Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2019. Switch-LSTMs for Multi-Criteria Chinese Word Segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6457–6464. Shohei Higashiyama, Masao Utiyama, Eiichiro Sumita, Masao Ideuchi, Yoshiaki Oida, Yohei Sakamoto, and Isaac Okada. 2019. Incorporating Word Attention into Character-Based Word Segmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2699– 2709. Chunyu Kit and Yorick Wilks. 1999. Unsupervised Learning of Word Boundary with Description Length Gain. In EACL 1999: CoNLL-99 Computational Natural Language Learning, pages 1–6. Gina-Anne Levow. 2006. The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108–117. Zhongguo Li. 2011. Parsing the Internal Structure of Words: A New Paradigm for Chinese Word Segmentation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1405–1414, Portland, Oregon, USA. Zhongguo Li and Maosong Sun. 2009. Punctuation as Implicit Annotations for Chinese Word Segmentation. Computational Linguistics, 35(4):505–512. Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring Segment Representations for Neural Segmentation Models. arXiv preprint arXiv:1604.05499. Ziqing Liu, Enwei Peng, Shixing Yan, Guozheng Li, and Tianyong Hao. 2018. T-Know: a Knowledge Graph-based Question Answering and Infor-mation Retrieval System for Traditional Chinese Medicine. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 15–19, Santa Fe, New Mexico. Ji Ma, Kuzman Ganchev, and David Weiss. 
2018. State-of-the-art Chinese Word Segmentation with Bi-LSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4902–4908. Jianqiang Ma and Erhard Hinrichs. 2015. Accurate Linear-Time Chinese Word Segmentation via Embedding Matching. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1733–1743. Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based Neural Language Model and Chinese Word Segmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1271–1277, Nagoya, Japan. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-Value Memory Networks for Directly Reading Documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin Tensor Neural Network for Chinese Word Segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293–303. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection Using Conditional Random Fields. In Proceedings of the 20th international conference on Computational Linguistics, page 562. Xipeng Qiu, Hengzhi Pei, Hang Yan, and Xuanjing Huang. 2019. Multi-Criteria Chinese Word Segmentation with Transformer. arXiv preprint arXiv:1906.12035. Yangyang Shi, Kaisheng Yao, Le Tian, and Daxin Jiang. 2016. Deep LSTM based Feature Mapping for Query Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1501–1511, San Diego, California. Prajwol Shrestha. 2014. Incremental N-gram Approach for Language Identification in CodeSwitched Text. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 133–138, Doha, Qatar. Damien Sileo, Tim Van De Cruys, Camille Pradel, and Philippe Muller. 2019. Mining Discourse Markers for Unsupervised Sentence Representation Learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3477– 3486, Minneapolis, Minnesota. Yan Song, Dongfeng Cai, Guiping Zhang, and Hai Zhao. 2009a. Approach to Chinese Word Segmentation Based on Character-word Joint Decoding. Journal of Software, 20(9):2236–2376. Yan Song, Jiaqing Guo, and Dongfeng Cai. 2006. Chinese Word Segmentation Based on an Approach of Maximum Entropy Modeling. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 201–204, Sydney, Australia. Yan Song, Chunyu Kit, and Xiao Chen. 2009b. Transliteration of Name Entity via Improved Statistical Translation on Character Sequences. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 57–60, Suntec, Singapore. Yan Song, Prescott Klassen, Fei Xia, and Chunyu Kit. 2012. Entropy-based Training Data Selection for Domain Adaptation. In Proceedings of COLING 2012: Posters, pages 1191–1200, Mumbai, India. Yan Song, Chia-Jung Lee, and Fei Xia. 2017. Learning Word Representations with Regularization from Prior Knowledge. 
In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 143–152, Vancouver, Canada. Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional Skip-Gram: Explicitly Distinguishing Left and Right Context for Word Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2, pages 175–180, New Orleans, Louisiana. Yan Song and Fei Xia. 2012. Using a Goodness Measurement for Domain Adaptation: A Case Study on Chinese Word Segmentation. In LREC, pages 3853– 3860. Yan Song and Fei Xia. 2013. A Common Case of Jekyll and Hyde: The Synergistic Effect of Using Divided Source Training Data for Feature Augmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 623–631, Nagoya, Japan. Maosong Sun, Dayang Shen, and Benjamin K. Tsou. 1998. Chinese Word Segmentation without Using Lexicon and Hand-crafted Training Data. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 1265–1271, Montreal, Quebec, Canada. Weiwei Sun and Jia Xu. 2011. Enhancing Chinese Word Segmentation Using Unlabeled Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 970–979. Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and Yonggang Wang. 2020. Joint Chinese Word Segmentation and Partof-speech Tagging via Two-way Attentions of Autoanalyzed Knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8286–8296, Online. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A Conditional Random Field Word Segmenter for Sighan Bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 168–171. Chunqi Wang and Bo Xu. 2017. Convolutional Neural Network with Word Embeddings for Chinese Word Segmentation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 163–172, Taipei, Taiwan. Deyi Xiong, Min Zhang, and Haizhou Li. 2011. Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1288–1297, Portland, Oregon, USA. Jingjing Xu and Xu Sun. 2016. Dependency-based Gated Recursive Neural Network for Chinese Word Segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567–572, Berlin, Germany. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Nianwen Xue and Libin Shen. 2003. Chinese Word Segmentation as LMR Tagging. In Proceedings of the second SIGHAN workshop on Chinese language processing-Volume 17, pages 176–179. Yaqin Yang and Nianwen Xue. 2012. Chinese Comma Disambiguation for Discourse Analysis. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1, pages 786–794. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1346–1355, New Orleans, Louisiana. Jichuan Zeng, Jing Li, Yan Song, Cuiyun Gao, Michael R. Lyu, and Irwin King. 2018. Topic Memory Networks for Short Text Classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3120– 3131, Brussels, Belgium. Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring Representations from Unlabeled Data with Co-training for Chinese Word Segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 311–321. Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-Based Neural Word Segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 421–431. Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An Improved Chinese Word Segmentation System with Conditional Random Field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 162–165, Sydney, Australia. Hai Zhao and Chunyu Kit. 2008. An Empirical Comparison of Goodness Measures for Unsupervised Chinese Word Segmentation with a Unified Framework. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I, pages 9–16. Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2017. Word-Context Character Embeddings for Chinese Word Segmentation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 760–766, Copenhagen, Denmark.
2020
734
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8286–8296 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8286 Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge Yuanhe Tian♥∗, Yan Song♠† , Xiang Ao♣, Fei Xia♥, Xiaojun Quan△, Tong Zhang♦, Yonggang Wang♠ ♥University of Washington, ♠Sinovation Ventures ♣Chinese Academy of Sciences, △Sun Yat-sen University ♦The Hong Kong University of Science and Technology ♥{yhtian, fxia}@uw.edu ♠[email protected][email protected][email protected][email protected][email protected] Abstract Chinese word segmentation (CWS) and partof-speech (POS) tagging are important fundamental tasks for Chinese language processing, where joint learning of them is an effective one-step solution for both tasks. Previous studies for joint CWS and POS tagging mainly follow the character-based tagging paradigm with introducing contextual information such as n-gram features or sentential representations from recurrent neural models. However, for many cases, the joint tagging needs not only modeling from context features but also knowledge attached to them (e.g., syntactic relations among words); limited efforts have been made by existing research to meet such needs. In this paper, we propose a neural model named TWASP for joint CWS and POS tagging following the character-based sequence labeling paradigm, where a two-way attention mechanism is used to incorporate both context feature and their corresponding syntactic knowledge for each input character. Particularly, we use existing language processing toolkits to obtain the auto-analyzed syntactic knowledge for the context, and the proposed attention module can learn and benefit from them although their quality may not be perfect. Our experiments illustrate the effectiveness of the two-way attentions for joint CWS and POS tagging, where state-of-the-art performance is achieved on five benchmark datasets.1 1 Introduction Chinese word segmentation (CWS) and part-ofspeech (POS) tagging are two fundamental and crucial tasks in natural language processing (NLP) for Chinese. The former one aims to find word ∗Partially done as an intern at Sinovation Ventures. †Corresponding author. 1TWASP (code and the best performing models) is released at https://github.com/SVAIGBA/TwASP. Figure 1: An example sentence with CWS and POS tagging results, where the ambiguous part (in green color) has dependencies from distant words (in yellow color). boundaries in a sentence and the latter, on the top of segmentation results, assigns a POS tag to each word to indicate its syntactical property in the sentence. To effectively perform CWS and POS tagging, combining them into a joint task is proved to have better performance than separately conducting the two tasks in a sequence (Ng and Low, 2004). Therefore, many studies were proposed in the past decade for joint CWS and POS tagging (Jiang et al., 2008, 2009; Sun, 2011; Zeng et al., 2013; Zheng et al., 2013; Kurita et al., 2017; Shao et al., 2017; Zhang et al., 2018). These studies, regardless of whether they used conventional approaches (Jiang et al., 2008, 2009; Sun, 2011; Zeng et al., 2013) or deep learning based approaches (Zheng et al., 2013; Kurita et al., 2017; Shao et al., 2017; Zhang et al., 2018), focused on incorporating contextual information into their joint tagger. 
In addition, it is well known that syntactic structure can capture and provide information about long-distance dependencies among words. For example, Figure 1 shows an example of local ambiguity, where the green highlighted part has two possible interpretations – “报告VV/书NN” (report a book) and “报告书NN” (the report). The ambiguity can be resolved with syntactic analysis; for instance, the dependency structure, if available, would prefer the first interpretation. While the subject and the object of the sentence (highlighted in yellow) are far away from the ambiguous part in the surface word order, they are much closer in the dependency structure (the subject depends on “报告VV” and “书NN” depends on the object). This example shows that syntactic structure provides useful cues for CWS and POS tagging.

Figure 2: The architecture of TWASP for the joint CWS and POS tagging with the two-way attention mechanism, which is presented with example context features and their dependency knowledge (highlighted in yellow) from auto-analyzed results for a character (i.e., “分” (split), highlighted in green) in the given sentence.

Syntactic knowledge can be obtained from manually constructed resources such as treebanks and grammars, but such resources require considerable effort to create and might not be available for a particular language or domain. A more practical alternative is to use syntactic structures automatically generated by off-the-shelf toolkits. Some previous studies (Huang et al., 2007; Jiang et al., 2009; Wang et al., 2011; Zhang et al., 2018) verified this idea for the task by learning from auto-processed corpora. However, these studies treat the auto-processed corpora as gold reference and are thus unable to use them selectively according to their quality (the resulting knowledge is inaccurate in many cases). Therefore, how to effectively leverage such auto-generated knowledge for the joint CWS and POS tagging task has not been fully explored.

In this paper, we propose a neural model named TWASP with a two-way attention mechanism to improve joint CWS and POS tagging by learning from auto-analyzed syntactic knowledge, which is generated by existing NLP toolkits and provides necessary (although not perfect) information for the task. In detail, for each input character, the proposed attention module extracts the context features associated with the character and their corresponding knowledge instances according to the auto-analyzed results, then computes the attentions separately for features and knowledge in each attention way, and finally concatenates the attentions from the two ways to guide the tagging process. In doing so, our model can distinguish important auto-analyzed knowledge based on its contribution to the task and thus avoid being influenced by inferior knowledge instances. Compared to another prevailing model, key-value memory networks (Miller et al., 2016), which can also learn from pairwise-organized information, the two-way attentions are not only able to do so, but also fully leverage the features and their knowledge rather than using one to weight the other (we explain later in the paper that the output of a key-value memory network relies mainly on the value embeddings, where the keys are only used to weight those embeddings). We experiment with three types of knowledge, namely POS labels, syntactic constituents, and dependency relations. The experimental results on five benchmark datasets illustrate the effectiveness of our model, where state-of-the-art performance for the joint task is achieved on all datasets.
We also perform several analyses, which confirm the validity of using two-way attentions and demonstrate that our model can be further improved by synchronously using multiple types of knowledge.

2 The Model

The architecture of TWASP is illustrated in Figure 2. The left part shows the backbone of the model for the joint CWS and POS tagging following the character-based sequence labeling paradigm, where the input is a character sequence X = x1 x2 ... xi ... xl and the output is a sequence of joint labels Y = y1 y2 ... yi ... yl. To enhance the backbone paradigm, the proposed two-way attention module (shown in the right part of Figure 2) takes the syntactic knowledge produced from the input sentence, analyzes it, and then feeds it to the tagging process. In this section, we first introduce the auto-analyzed knowledge, then explain how the two-way attentions consume such knowledge, and finally describe how the joint CWS and POS tagging works with the resulting attentions.

Figure 3: Examples of context features and their corresponding knowledge from (a) POS labels, (b) syntactic constituents and (c) dependency relations. Features and knowledge for the character “分” are highlighted in yellow.

2.1 Auto-analyzed Knowledge

Auto-analyzed knowledge has been demonstrated to be an effective resource for helping NLP systems understand texts (Song et al., 2017; Seyler et al., 2018; Huang and Carley, 2019). One challenge in leveraging external knowledge for the joint task is that gold-standard annotations are extremely rare for text in most domains, especially syntactic annotations. An alternative solution is to use off-the-shelf NLP systems to produce such knowledge, which has proved useful in previous studies (Huang et al., 2007; Jiang et al., 2009; Wang et al., 2011; Zhang et al., 2018). Rather than processing an entire corpus and then extracting features or training embeddings from the resulting corpus as in previous studies, our model does not treat the knowledge as gold reference: it generates auto-analyzed knowledge for each sentence and learns the weights of the corresponding features.

Formally, for a character sequence X, let S and K denote the lists of context features and knowledge for X, respectively. For each character xi in X, let Si = [si,1, si,2, ..., si,j, ..., si,mi] and Ki = [ki,1, ki,2, ..., ki,j, ..., ki,mi] be the sublists of S and K for xi. Here, si,j and ki,j denote a context feature and a knowledge instance, respectively. In this paper, we use three types of syntactic knowledge for the joint task, namely POS labels, syntactic constituents, and dependency relations, where POS labels indicate the syntactic information of individual words, syntactic constituents provide the structural grouping information for a text span, and dependencies offer dependency relations between words. Figure 3 shows an example sentence and the corresponding S and K. For the character “分” (highlighted in green), its Si and Ki are highlighted in yellow. In order to distinguish the same knowledge appearing with different context features, we use a feature-knowledge combination tag to represent each knowledge instance (e.g., “分子NN”, “分子NP”, and “分子dobj” in Figure 3). We explain each type of knowledge below.
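In implementation terms, the aligned lists Si and Ki can be kept as parallel sequences of strings, where each knowledge instance is stored under its feature-knowledge combination tag. The following is a minimal sketch of such a container; the class and function names are illustrative and not taken from the released TwASP code.

```python
from dataclasses import dataclass, field
from typing import List

def combination_tag(feature: str, label: str) -> str:
    """Feature-knowledge combination tag, e.g. ("分子", "NN") -> "分子NN", so that the
    same label attached to different context features remains distinguishable."""
    return feature + label

@dataclass
class CharKnowledge:
    """Aligned lists S_i (context features) and K_i (knowledge instances) for one
    character x_i; knowledge[j] is the knowledge attached to features[j]."""
    features: List[str] = field(default_factory=list)   # S_i
    knowledge: List[str] = field(default_factory=list)  # K_i

    def add(self, feature: str, label: str) -> None:
        self.features.append(feature)
        self.knowledge.append(combination_tag(feature, label))
```

For example, ck = CharKnowledge(); ck.add("分子", "NN") would record one context feature together with its tagged knowledge instance.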
POS Labels. Figure 3 (a) shows that, for each xi (e.g., x6 = “分”), we use a 2-word window on both sides to extract context features from S to form Si (i.e., S6 = [“分子”, “结合”, “成”, “时”]), and then obtain their corresponding knowledge instances of POS labels from K to form Ki (i.e., K6 = [“分子NN”, “结合VV”, “成VV”, “时LC”]).

Syntactic Constituents. As shown in Figure 3 (b), the rule for extracting syntactic constituency knowledge is as follows. We start with the word containing the given character xi, go up the constituency tree to the first ancestor whose label is in a pre-defined syntactic label list (following Chen et al. (2006), the list has 12 syntactic labels, namely ADJP, ADVP, CLP, DNP, DP, DVP, LCP, LST, NP, PP, QP, and VP), then use all the words under this node to select context features from S, and finally combine the words with the syntactic label of the node to select knowledge instances from K. For example, for x6 = “分”, the lowest syntactic node governing “分子” is NP (highlighted in yellow); thus S6 = [“分子”] and K6 = [“分子NP”]. Another example is x5 = “成”: the lowest acceptable node on its syntactic path is VP; therefore, S5 = [“结合”, “成”, “分子”] and K5 = [“结合VP”, “成VP”, “分子VP”].

Dependency Relations. Given a character xi, let wi be the word that contains xi. The context features Si include wi, wi’s governor, and wi’s dependents in the dependency structure; those words combined with their inbound dependency relation labels form Ki. For example, for x6 = “分”, w6 = “分子”, which depends on “结合” with the dependency label dobj. Therefore, S6 = [“分子”, “结合”], and K6 = [“分子dobj”, “结合root”].

2.2 Two-Way Attentions

Attention has been shown to be an effective method for incorporating knowledge into NLP systems (Kumar et al., 2018; Margatina et al., 2019), but it cannot be used directly for features and knowledge in pair-wise form. Previous studies on the joint task normally concatenate the embeddings of context features and knowledge instances directly with the character embeddings (Zhang et al., 2018), which can be problematic for incorporating auto-analyzed, error-prone syntactic knowledge obtained from off-the-shelf toolkits. For the features and their knowledge instances of X, we therefore use a two-way attention design with separate attention over S and K. Particularly, the two ways, namely the feature way and the knowledge way, are identical in architecture, where each way has a feed-forward attention module (Raffel and Ellis, 2015). For each xi, its Si and Ki are first fed into the feature attention way and the knowledge attention way, respectively, then computed within each way, and their final attention vectors are combined and fed back to the backbone model. Taking the feature way as an example, the attention weight for each context feature si,j is computed by

a^{s}_{i,j} = \frac{\exp(h_i^{\top} \cdot e^{s}_{i,j})}{\sum_{j'=1}^{m_i} \exp(h_i^{\top} \cdot e^{s}_{i,j'})}    (1)

where hi is the vector from a text encoder for xi and e^{s}_{i,j} is the embedding of si,j. Then we obtain the weighted embedding a^{s}_{i} over all si,j in Si via

a^{s}_{i} = \sum_{j=1}^{m_i} a^{s}_{i,j} e^{s}_{i,j}    (2)

where the summation is element-wise. For the knowledge way, the same process is applied to get a^{k}_{i} by distinguishing and weighting each knowledge instance ki,j. Finally, the output of the two attention ways is obtained through a concatenation of the two vectors: a_i = a^{s}_{i} ⊕ a^{k}_{i}.

2.3 Joint Tagging with Two-way Attentions

To perform the joint tagging, the two-way attentions interact with the backbone model through the encoded vector hi and its output ai for each xi.
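As a concrete illustration of Eqs. (1) and (2) and of the concatenation a_i = a^{s}_{i} ⊕ a^{k}_{i}, here is a minimal NumPy sketch of one attention way and of the two-way combination; the array and function names are illustrative and this is not taken from the released TwASP code.

```python
import numpy as np

def one_way_attention(h, item_embs):
    """One feed-forward attention way (Eqs. 1-2): h is the encoder vector for
    character x_i, item_embs is an (m_i, d) matrix with one row per context
    feature (or per knowledge instance)."""
    scores = item_embs @ h                      # h^T . e_{i,j} for each item
    weights = np.exp(scores - scores.max())     # softmax over the m_i items
    weights /= weights.sum()
    return weights @ item_embs                  # weighted, element-wise sum

def two_way_attention(h, feature_embs, knowledge_embs):
    """Feature way and knowledge way share the architecture but not the inputs;
    their outputs are concatenated: a_i = a_i^s (+) a_i^k."""
    a_s = one_way_attention(h, feature_embs)    # attends over S_i
    a_k = one_way_attention(h, knowledge_embs)  # attends over K_i
    return np.concatenate([a_s, a_k])
```

In the full model, hi would come from the Bi-LSTM, BERT, or ZEN backbone, and the concatenated vector would be passed to the tagging layers described in Section 2.3.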
For hi, one can apply many prevailing encoders, e.g., Bi-LSTM or BERT (Devlin et al., 2019), to get the vector list [h1h2 · · · hi · · · hl] for X. Once ai is obtained, we concatenate it with hi and send it through a fully connected layer to align the dimension of the output for final prediction: oi = W · (hi ⊕ai) + b (3) where W and b are trainable parameters. Afterwards, conditional random fields (CRF) is used to estimate the probability for yi over all possible joint CWS and POS tags under xi and yi−1 by p(yi|xi) = exp(Wc · oi + bc) P yi−1yi exp(Wc · oi + bc) (4) Here, Wc and bc are the weight matrix and the bias vector, respectively, and they are estimated using the (yi−1, yi) tag pairs in the gold standard. 3 Experiments 3.1 Datasets We employ five benchmark datasets in our experiments, where four of them, namely, CTB5, CTB6, CTB7, and CTB9, are from the Penn Chinese TreeBank4 (Xue et al., 2005) and the fifth one is 4We obtain the Penn Chinese TreeBank data from the official release of Linguistic Data Consortium. The catalog numbers for CTB5, CTB6, CTB7, and CTB9 are LDC2005T01, LDC2007T36, LDC2010T07, and LDC2016T13, respectively. 8290 Datasets Char Word Sent OOV % CTB5 Train 805K 494K 18K Dev 12K 7K 350 8.1 Test 14K 8K 348 3.5 CTB6 Train 1,056K 641K 23K Dev 100K 60K 2K 5.4 Test 134K 82K 3K 5.6 CTB7 Train 1,160K 718K 31K Dev 387K 237K 10K 5.5 Test 399K 245K 10K 5.2 CTB9 (general) Train 2,643K 1,696K 106K Dev 210K 136K 10K 2.9 Test 379K 242K 16K 3.1 UD Train 156K 99K 4K Dev 20K 13K 500 12.1 Test 19K 12K 500 12.4 CTB9 (genres) BC 275K 184K 12K 2.8 BN 483K 287K 10K 5.1 CS 228K 160K 17K 5.5 DF 644K 421K 20K 3.7 MZ 403K 258K 8K 7.5 NW 427K 251K 10K 5.1 SC 430K 304K 44K 4.0 WB 342K 210K 10K 5.3 Table 1: The statistics of all experimental datasets in terms of character, word and sentence numbers. For normal splits, OOV % is computed according to the training set; for each genre in CTB9, OOV % is computed with respect to the union of other seven genres. the Chinese part of Universal Dependencies (UD)5 (Nivre et al., 2016). The CTB datasets are in simplified Chinese characters while the UD dataset is in traditional Chinese. Following Shao et al. (2017), we convert the UD dataset into simplified Chinese6 before conducting experiments on it. CTB uses 33 POS tags, and we split CTB5CTB9 following previous studies (Wang et al., 2011; Jiang et al., 2008; Shao et al., 2017). In addition, because the data in CTB9 come from eight genres – broadcast conversation (BC), broadcast news (BN), conversational speech(CS), discussion forums (DF), magazine articles (MZ), newswire (NW), SMS/chat messages (SC), and weblog (WB) – we also use CTB9 in a crossdomain study (see Section 3.4). UD uses two POS tagsets, namely the universal tagset (15 tags) and language-specific tagset (42 tags for Chinese). We refer to the corpus with the two tagsets as UD1 and UD2, respectively, and use the official splits of train/dev/test in our experiments. The statistics for the aforementioned datasets are in Table 1. 5We use its version 2.4 downloaded from https:// universaldependencies.org/. 6The conversation scripts are from https://github. com/skydark/nstools/tree/master/zhtools CTB5 CTB6 CTB7 CTB9 UD S 20K 23K 24K 41K 7K K SCT POS 22K 25K 27K 46K 7K Syn. 70K 82K 87K 141K 31K Dep. 32K 39K 42K 77K 8K BNP POS 22K 26K 28K 48K 8K Syn. 69K 81K 85K 136K 29K Table 2: Numbers of context features (S) and their corresponding knowledge instances (K) for five benchmark datasets, based on the output of SCT and BNP. 
Note that the K for the UD dataset follows the CTB criteria, because SCT and BNP were trained on CTB. 3.2 Implementation To obtain the aforementioned three types of knowledge, we use two off-the-shelf toolkits, Stanford CoreNLP Toolkit (SCT)7 (Manning et al., 2014) and Berkeley Neural Parser (BNP)8 (Kitaev and Klein, 2018): the former tokenizes and parses a Chinese sentence, producing POS tags, phrase structure and dependency structure of the sentence; the latter does POS tagging and syntactic parsing on a pre-tokenized sentence. Both toolkits were trained on CTB data and thus produced CTB POS tags. To extract knowledge, we firstly use SCT to automatically segment sentences and then run both SCT and BNP for POS tagging and parsing. Table 2 shows the size of S and K for all the datasets. We test the model with three encoders: two of them, namely Bi-LSTM and BERT9 (Devlin et al., 2019), are widely used; the third encoder is ZEN10 (Diao et al., 2019), which is a recently released Chinese encoder pre-trained with n-gram information and outperforms BERT in many downstream tasks. For the Bi-LSTM encoder, we set its hidden state size to 200 and use the character embeddings released by Shao et al. (2017) to initialize its input representations. For BERT and ZEN, we follow their default settings, e.g., 12 layers of selfattentions with the dimension of 768. For the two-way attention module, we randomly initialize the embeddings for all context features and their corresponding knowledge instances, where one can also use pre-trained embeddings (Song et al., 2018; Grave et al., 2018; Zhang et al., 2019; Yamada et al., 2020) for them. For all the 7We use its version 3.9.2 downloaded from https:// stanfordnlp.github.io/CoreNLP/. 8We download the model from https://github. com/nikitakit/self-attentive-parser. 9We use the Chinese base model from https://s3. amazonaws.com/models.huggingface.co/. 10https://github.com/sinovation/ZEN 8291 CTB5 CTB6 CTB7 CTB9 UD1 UD2 Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint SCT 98.02 95.49 96.62 90.85 96.53 92.73 93.63 88.23 80.50* 0.00* 80.50* 36.11* BNP 95.50 94.43 92.95 88.09 0.00* 37.16* Bi-LSTM 97.69 93.73 95.46 90.63 95.46 89.98 96.45 91.80 94.96 88.72 95.01 88.75 + POS (SCT) 98.07 94.68 96.23 91.04 96.32 91.60 96.75 92.36 94.86 88.90 95.08 88.99 + Syn. (SCT) 98.03 95.66 96.06 90.97 95.90 91.90 96.57 92.40 94.88 88.87 94.71 88.90 + Dep. (SCT) 97.84 94.25 95.85 90.70 95.87 91.08 96.63 92.26 94.88 88.93 94.91 89.05 + POS (BNP) 98.06 95.34 96.46 93.31 96.58 92.87 96.73 93.38 95.02 89.27 94.89 89.17 + Syn. (BNP) 98.01 94.82 96.08 92.33 96.06 91.04 96.65 92.97 94.48 88.84 94.86 89.20 BERT 98.28 96.03 97.36 94.65 96.78 93.38 97.33 94.40 97.74 94.82 97.70 94.76 + POS (SCT) 98.77 96.77 97.43 94.82 97.31 94.12 97.75 94.87 98.32 95.60 98.33 95.46 + Syn. (SCT) 98.75 96.66 97.37 94.73 97.07 93.84 97.67 94.83 98.11 95.43 98.10 95.42 + Dep. (SCT) 98.65 96.69 97.35 94.87 97.10 93.89 97.67 94.82 98.10 95.41 98.11 95.36 + POS (BNP) 98.63 96.60 97.34 94.95 97.25 94.21 97.65 94.82 98.16 95.51 98.22 95.23 + Syn. (BNP) 98.75 96.72 97.39 94.99 97.32 94.28 97.69 94.85 98.25 95.42 98.17 95.18 ZEN 98.61 96.60 97.35 94.70 97.09 93.80 97.64 94.64 98.14 95.15 98.02 95.05 + POS (SCT) 98.81 96.92 97.45 94.87 97.27 94.20 97.77 94.88 98.33 95.69 98.18 95.49 + Syn. (SCT) 98.85 96.86 97.42 94.72 97.31 94.32 97.73 94.85 98.17 95.48 98.35 95.50 + Dep. 
(SCT) 98.82 96.85 97.38 94.75 97.25 94.22 97.70 94.85 98.27 95.68 98.28 95.32 + POS (BNP) 98.72 96.83 97.47 95.02 97.24 94.18 97.69 94.82 98.26 95.52 98.22 95.28 + Syn. (BNP) 98.83 96.83 97.44 94.95 97.25 94.18 97.67 94.86 98.22 95.49 98.20 95.45 Table 3: Experimental results (the F-scores for segmentation and joint tagging) of TWASP using different encoders with and without auto-analyzed knowledge on the five benchmark datasets. “Syn.” and “Dep.” refer to syntactic constituents and dependency relations, respectively. The results of SCT and BNP are also reported as references, where * marks that the segmentation and POS tagging criteria from the toolkits and the UD dataset are different. models, we set the maximum character length of the input sequence to 300 and use negative loglikelihood loss function. Other hyper-parameters of the models are tuned on the dev set and the tuned models are evaluated on the test set for each dataset (each genre for CTB9). F-scores for word segmentation and the joint CWS-POS tags are used as main evaluation metrics11 in all experiments. 3.3 Overall Performance In our main experiment, we run our TWASP on the five benchmark datasets using the three encoders, i.e., Bi-LSTM, BERT, and ZEN. The results on the F-scores of word segmentation and joint CWS and POS tagging are in Table 3, which also includes the performance of the baselines without attention and the two toolkits (i.e., SCT and BNP). The results of SCT and BNP on the UD dataset are bad because they were trained on CTB, which used different segmentation and POS tagging criteria. There are several observations. First, for all encoders, the two-way attentions provide consistent enhancement to the baselines with different types of knowledge. Particularly, although the baseline model is well-performed when BERT (or ZEN) serves as the encoder, the attention mod11We use the evaluation script from https://github. com/chakki-works/seqeval. ule is still able to further improve its performance with the knowledge produced by the toolkits even though the toolkits have worse-than-baseline results for the joint task. Second, among different types of knowledge, POS labels are the most effective ones that help the joint task. For instance, among BERT-based models, the one enhanced by POS knowledge from SCT achieves the best performance on most datasets, which is not surprising because such knowledge matches the outcome of the task. In addition, for BERT-based models enhanced by knowledge from BNP (i.e., BERT + POS (BNP) and BERT + Syn. (BNP)), syntactic constituents provide more improvement than POS labels on all CTB datasets. This observation could be explained by that BNP is originally designed for constituency parsing with CTB criteria; the syntactic constituents are complicated while effective when they are accurate. Third, while SCT and BNP were trained on CTB, whose tagset is very different from the two tagsets for UD, TWASP still outperforms the baselines on UD with the knowledge provided by SCT and BNP, indicating that syntactic knowledge is useful even when it uses different word segmentation and POS tagging criteria. Table 4 shows the results of our best models (i.e. BERT and ZEN with POS (SCT)) and previous studies on the same datasets. Our approach 8292 CTB5 CTB6 CTB7 CTB9 UD1 UD2 Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint Jiang et al. (2008) 97.85 93.41 Kruengkrai et al. (2009) 97.87 93.67 Sun (2011) 98.17 94.02 Wang et al. 
(2011) 98.11 94.18 95.79 91.12 95.65 90.46 Qian and Liu (2012) 97.85 93.53 Shen et al. (2014) 98.03 93.80 Kurita et al. (2017) 98.41 94.84 96.23 91.25 Shao et al. (2017) 98.02 94.38 96.67 92.34 95.16 89.75 95.09 89.42 Zhang et al. (2018) 98.50 94.95 96.36 92.51 96.25 91.87 BERT + POS (SCT) 98.77 96.77 97.43 94.82 97.31 94.12 97.75 94.87 98.32 95.60 98.33 95.46 ZEN + POS (SCT) 98.81 96.92 97.45 94.87 97.27 94.20 97.77 94.88 98.33 95.69 98.18 95.49 Table 4: Comparison (in F-scores of word segmentation and joint tagging) of TWASP (with BERT or ZEN encoder) with previous studies. Cells with “-” refer to the results are not reported or they are not applicable. outperforms previous studies on the joint task and achieves new state-of-the-art performance on all datasets. While some of the previous studies use auto-analyzed knowledge (Wang et al., 2011; Zhang et al., 2018), they regard such knowledge as gold reference and consequently could suffer from errors in the auto-analyzed results. In contrast, our proposed model is able to selectively model the input information and to discriminate useful knowledge instances through the two-way attentions. 3.4 Cross-Domain Performance Domain variance is an important factor affecting the performance of NLP systems (Guo et al., 2009; McClosky et al., 2010; Song and Xia, 2013). To further demonstrate the effectiveness of TWASP, we conduct cross-domain experiments on the eight genres of CTB9 using BERT and ZEN as the baseline and their enhanced version with POS knowledge from SCT. In doing so, we test on each genre with the models trained on the data from all other genres. The results on both segmentation and the joint task are reported in Table 5, where the SCT results are also included as a reference. The comparison between the baselines and TWASP with POS knowledge clearly shows the consistency of performance improvement with twoway attentions, where for both BERT and ZEN, TWASP outperforms the baselines for all genres on the joint labels. In addition, similar to the observations from the previous experiment, both accurate and inaccurate POS knowledge are able to help the joint task. For example, although the SCT results on several genres (e.g., CS, DF, SC) are much worse than of the BERT baseline, the POS labels produced by SCT can still enhance TWASP on word segmentation and joint tagging with the proposed two-way attention module. 4 Analysis 4.1 The Effect of Two Attention Ways In the first analysis, we compare our two-way attention with normal attention. For normal attention, we experiment three ways of incorporating context features and knowledge: (1) using context features and knowledge together in the attention, where all features or knowledge instances are equally treated in it; (2) using context features only; and (3) using knowledge only. We run these experiments with BERT encoder and POS knowledge from SCT on CTB5 and report the results in Table 6. Overall, the two-way attentions outperform all three settings for normal attention, which clearly indicates the validity of using two attention ways for features and knowledge, i.e., compared to (1), as well as the advantage of learning from both of them, i.e., compared to (2) and (3). Interestingly, in the three settings, (3) outperforms (1), which could be explained by that, with normal attention, mixed feature and knowledge instances in it may make it difficult to weight them for the joint task. 
There are other methods for using both context features and knowledge in a neural framework, such as key-value memory networks (kvMN) (Miller et al., 2016), which is demonstrated to improve CWS by Tian et al. (2020). Thus we compare our approach with kvMN, in which context features are mapped to keys and knowledge to values. We follow the standard protocol of the kvMN, e.g., addressing keys by Si and reading values from Ki through the corresponding knowledge for each key, computing weights from all key embeddings, and outputting the weighted embeddings from all values. The result from the kvMN is reported at the last row of Table 6, where its performance is not as good as the two-way attentions, and even 8293 Genre SCT BERT BERT+POS ZEN ZEN+POS Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint BC 96.27 93.55 96.29 92.08 96.38 92.34 96.48 92.25 96.63 92.41 BN 96.98 93.98 96.93 93.73 97.20 94.02 97.05 93.91 97.21 94.14 CS 89.83 81.93 95.17 89.18 95.14 89.46 95.10 89.24 95.87 89.67 DF 91.34 84.28 96.79 92.02 96.44 92.44 96.33 92.11 96.55 92.51 MZ 95.69 91.99 95.62 91.97 95.83 92.17 95.69 92.00 95.78 92.18 NW 97.41 94.75 97.55 94.44 97.49 94.64 97.49 94.51 97.57 94.70 SC 84.87 76.55 95.97 91.13 96.27 91.77 96.09 91.47 96.38 91.85 WB 95.99 92.86 95.09 89.59 95.11 89.85 95.10 89.74 95.35 90.10 Table 5: Experimental results (the F-scores for word segmentation and joint tagging) from baselines and TWASP with different encoders on eight genres of CTB9. The incorporated knowledge is the POS labels from SCT. Ways Seg Joint Feature Knowledge F ROOV F √ √ 98.55 87.28 96.62 √ × 98.67 87.38 96.50 × √ 98.71 88.17 96.69 Two-way Attentions 98.77 88.13 96.77 Key-value Memory 98.62 88.51 96.58 Table 6: Performance comparison among different ways of knowledge integration, including normal attention (with respect to what knowledge type is used), the two-way attention, and key-value memory networks. worse than using normal attention with setting (3). The reason could be straightforward: the output of kvMN is built upon value (knowledge) embeddings and therefore information from key (context feature) embeddings does not directly contribute to it other than providing weights for the value. As a result, kvMN acts in a similar yet inferior12 way of setting (3) where only knowledge is used. 4.2 Knowledge Ensemble Since every type of knowledge works well in our model, it is expected to investigate how the model performs when multiple types of knowledge are used together. To this end, we run experiments on CTB5 to test on our BERT-based TWASP with knowledge ensemble, where two ensemble strategies, i.e., averaging and concatenation, are applied with respect to how ai for each knowledge type is combined with others. The results are reported in Table 7. In this table, the first seven rows (ID: 1-7) indicate that different types of knowledge are 12The “inferior” is explained by that, in kvMN, the value weights are inaccurate because they are computed with respect to the contribution of keys rather than knowledge instances. ID SCT BNP Joint F POS Syn. Dep. POS Syn. P L 1 √ √ 96.79 96.80 2 √ √ 96.78 96.81 3 √ √ 96.79 96.80 4 √ √ √ 96.82 96.87 5 √ √ 96.76 96.81 6 √ √ 96.81 96.83 7 √ √ √ 96.82 96.84 8 √ √ √ √ √ 96.87 96.90 Table 7: Comparison of different knowledge ensemble results, which are presented by the joint tagging Fscores from our BERT-based TWASP on CTB5. P and L refer to averaging and concatenation of attentions from different knowledge types, respectively. 
As a reference, the best result on CTB5 for BERTbased model without knowledge ensemble is 96.77% achieved by BERT + POS (SCT) (see Table 3). combined according to whether they come from the same toolkit (ID: 1-5) or belong to the same category (ID: 6 and 7); and the last row (ID: 8) is for the case that all types of knowledge are combined. There are several observations. First, compared to only using one type of knowledge (refer to Table 3), knowledge ensemble improves model performance where more knowledge types contribute to better results. The best model is thus obtained when all knowledge (from each toolkit and from both toolkits) are used. Second, knowledge in the same type from different toolkits may complement to each other and thus enhance model performance accordingly, which is confirmed by the results from the models assembling POS (or Syn+Dep) information from both SCT and BNP. Third, for different ensemble strategies, concatenation tends to perform better than averaging, which is not surprising since concatenation actually turns the model into a multi-way structure for knowledge integration. 8294 Figure 4: Comparison of joint tagging results between BERT and BERT+Dep (SCT) on an example sentence. 4.3 Case Study When the toolkit provides accurate knowledge, it is not surprising that our two-way attention model would benefit from the auto-analyzed knowledge. Interestingly, even when the toolkit provides inaccurate output, our model might still be able to benefit from such output. Figure 4 shows such an example, where our system uses BERT+Dep using SCT and the baseline system is BERT without twoway attention. The sentence contains an ambiguity character bigram “马上”, which has two possible interpretations, “马上AD” (immediately) and “马NN/上LC” (on the horse). The second one is correct, yet the baseline tagger chooses the former because “马上” (immediately) is a very common adverb. Although SCT also chooses the wrong segmentation and thus has an incorrect dependency structure, our system is still able to produce correct segmentation and POS tags. One plausible explanation for this is that the inaccurate dependency structure includes an advmod link between “马上” (immediately) and “很好”(very good). Because such a dependency pair seldom appears in the corpus, the attention from such knowledge is weak and hence encourages our system to choose the correct word segmentation and POS tags. 5 Related Work There are basically two approaches to CWS and POS tagging: to perform POS tagging right after word segmentation in a pipeline, or to conduct the two tasks simultaneously, known as joint CWS and POS tagging. In the past two decades, many studies have shown that joint tagging outperforms the pipeline approach (Ng and Low, 2004; Jiang et al., 2008, 2009; Wang et al., 2011; Sun, 2011; Zeng et al., 2013). In recent years, neural methods started to play a dominant role for this task (Zheng et al., 2013; Kurita et al., 2017; Shao et al., 2017; Zhang et al., 2018), where some of them tried to incorporate extra knowledge in their studies. For example, Kurita et al. (2017) exploited to model n-grams to improve the task; Shao et al. (2017) extended the idea by incorporating pre-trained n-gram embeddings, as well as radical embeddings, into character representations. Zhang et al. (2018) tried to leverage the knowledge from character embeddings, trained on an automatically tagged corpus by a baseline tagger. 
Compared to these previous studies, TWASP provides a simple, yet effective, neural model for joint tagging, without requiring a complicated mechanism of incorporating different features or pre-processing a corpus. 6 Conclusion In this paper, we propose neural approach with a two-way attention mechanism to incorporate autoanalyzed knowledge for joint CWS and POS tagging, following a character-based sequence labeling paradigm. Our proposed attention module learns and weights context features and their corresponding knowledge instances in two separate ways, and use the combined attentions from the two ways to enhance the joint tagging. Experimental results on five benchmark datasets illustrate the validity and effectiveness of our model, where the two-way attentions can be integrated with different encoders and provide consistent improvements over baseline taggers. Our model achieves stateof-the-art performance on all the datasets. Overall, this work presents an elegant way to use autoanalyzed knowledge and enhance neural models with existing NLP tools. For future work, we plan to apply the same methodology to other NLP tasks. Acknowledgement Xiang Ao was partially supported by the National Natural Science Foundation of China under Grant No. 61976204, U1811461, the Natural Science Foundation of Chongqing under Grant No. cstc2019jcyj-msxmX0149 and the Project of Youth Innovation Promotion Association CAS. References Wenliang Chen, Yujie Zhang, and Hitoshi Isahara. 2006. An Empirical Study of Chinese Chunking. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 97–104, Sydney, Australia. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of 8295 Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2019. ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations. ArXiv, abs/1911.00720. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning Word Vectors for 157 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Honglei Guo, Huijia Zhu, Zhili Guo, Xiaoxun Zhang, Xian Wu, and Zhong Su. 2009. Domain Adaptation with Latent Semantic Association for Named Entity Recognition. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 281–289, Boulder, Colorado. Binxuan Huang and Kathleen M Carley. 2019. SyntaxAware Aspect Level Sentiment Classification with Graph Attention Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5472–5480. Zhongqiang Huang, Mary Harper, and Wen Wang. 2007. Mandarin Part-of-Speech Tagging and Discriminative Reranking. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1093– 1102, Prague, Czech Republic. Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic Adaptation of Annotation Standards: Chinese Word Segmentation and POS Tagging – A Case Study. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 522–530, Suntec, Singapore. Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L¨u. 2008. A Cascaded Linear Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging. In Proceedings of ACL-08: HLT, pages 897– 904, Columbus, Ohio. Nikita Kitaev and Dan Klein. 2018. Constituency Parsing with a Self-Attentive Encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun’ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An Error-Driven Word-Character Hybrid Model for Joint Chinese Word Segmentation and POS Tagging. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 513–521, Suntec, Singapore. Abhishek Kumar, Daisuke Kawahara, and Sadao Kurohashi. 2018. Knowledge-Enriched Two-Layered Attention Network for Sentiment Analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 253–258, New Orleans, Louisiana. Shuhei Kurita, Daisuke Kawahara, and Sadao Kurohashi. 2017. Neural Joint Model for Transitionbased Chinese Syntactic Analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1204–1214, Vancouver, Canada. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Katerina Margatina, Christos Baziotis, and Alexandros Potamianos. 2019. Attention-based Conditioning Methods for External Knowledge Integration. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3944– 3951, Florence, Italy. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic Domain Adaptation for Parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 28–36, Los Angeles, California. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-Value Memory Networks for Directly Reading Documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400–1409. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese Part-of-Speech Tagging: One-at-a-Time or All-atOnce? Word-Based or Character-Based? In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 277– 284, Barcelona, Spain. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, 8296 Natalia Silveira, et al. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1659–1666. Xian Qian and Yang Liu. 2012. Joint Chinese Word Segmentation, POS Tagging and Parsing. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 501– 511, Jeju Island, Korea. Colin Raffel and Daniel PW Ellis. 2015. FeedForward Networks with Attention Can Solve Some Long-Term Memory Problems. arXiv preprint arXiv:1512.08756. Dominic Seyler, Tatiana Dembelova, Luciano Del Corro, Johannes Hoffart, and Gerhard Weikum. 2018. A Study of the Importance of External Knowledge in the Named Entity Recognition Task. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 241–246. Yan Shao, Christian Hardmeier, J¨org Tiedemann, and Joakim Nivre. 2017. Character-based Joint Segmentation and POS Tagging for Chinese using Bidirectional RNN-CRF. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 173– 183, Taipei, Taiwan. Mo Shen, Hongxiao Liu, Daisuke Kawahara, and Sadao Kurohashi. 2014. Chinese Morphological Analysis with Character-level POS Tagging. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 253–258, Baltimore, Maryland. Yan Song, Chia-Jung Lee, and Fei Xia. 2017. Learning Word Representations with Regularization from Prior Knowledge. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 143–152, Vancouver, Canada. Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional Skip-Gram: Explicitly Distinguishing Left and Right Context for Word Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2, pages 175–180, New Orleans, Louisiana. Yan Song and Fei Xia. 2013. A Common Case of Jekyll and Hyde: The Synergistic Effect of Using Divided Source Training Data for Feature Augmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 623–631, Nagoya, Japan. Weiwei Sun. 2011. A Stacked Sub-Word Model for Joint Chinese Word Segmentation and Part-ofSpeech Tagging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1385–1394, Portland, Oregon, USA. Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang, and Yonggang Wang. 2020. Improving Chinese Word Segmentation with Wordhood Memory Networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, Washington, USA. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving Chinese Word Segmentation and POS Tagging with Semi-supervised Methods Using Large Auto-Analyzed Data. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 309–317, Chiang Mai, Thailand. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, and Yuji Matsumoto. 2020. Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia. arXiv preprint 1812.06280v3. Xiaodong Zeng, Derek F. Wong, Lidia S. Chao, and Isabel Trancoso. 2013. 
Graph-based Semi-Supervised Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 770–779, Sofia, Bulgaria. Hongming Zhang, Jiaxin Bai, Yan Song, Kun Xu, Changlong Yu, Yangqiu Song, Wilfred Ng, and Dong Yu. 2019. Multiplex Word Embeddings for Selectional Preference Acquisition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5247–5256, Hong Kong, China. Meishan Zhang, Nan Yu, and Guohong Fu. 2018. A Simple and Effective Neural Model for Joint Word Segmentation and POS Tagging. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 26(9):1528–1538. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep Learning for Chinese Word Segmentation and POS Tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647–657, Seattle, Washington, USA.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8297–8307 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8297 Joint Diacritization, Lemmatization, Normalization, and Fine-Grained Morphological Tagging Nasser Zalmout and Nizar Habash Computational Approaches to Modeling Language Lab New York University Abu Dhabi United Arab Emirates {nasser.zalmout,nizar.habash}@nyu.edu Abstract The written forms of Semitic languages are both highly ambiguous and morphologically rich: a word can have multiple interpretations and is one of many inflected forms of the same concept or lemma. This is further exacerbated for dialectal content, which is more prone to noise and lacks a standard orthography. The morphological features can be lexicalized, like lemmas and diacritized forms, or non-lexicalized, like gender, number, and partof-speech tags, among others. Joint modeling of the lexicalized and non-lexicalized features can identify more intricate morphological patterns, which provide better context modeling, and further disambiguate ambiguous lexical choices. However, the different modeling granularity can make joint modeling more difficult. Our approach models the different features jointly, whether lexicalized (on the characterlevel), or non-lexicalized (on the word-level). We use Arabic as a test case, and achieve stateof-the-art results for Modern Standard Arabic with 20% relative error reduction, and Egyptian Arabic with 11% relative error reduction. 1 Introduction Morphological modeling in Semitic languages is challenging. Their optional short vowels (diacritics) increase the overall ambiguity of surface forms; and their morphological richness results in large target spaces, which increase model sparsity. The different morphological features can be modeled through combined feature tags, using a single (but very large) target space, or through having separate models for each of the features. The combined features approach models the relationships between the different features explicitly, but the large target spaces for morphologically rich languages further increase sparsity. On the other hand, separate feature modeling guarantees smaller target spaces for the individual features, but the hard separation between the features prevents modeling any interfeature dependencies. The set of morphological features includes lexicalized and non-lexicalized features, which further exacerbates joint modeling. Non-lexicalized features, like gender, and number, among others, have limited target spaces, and usually modeled as tagging tasks. Lexicalized features, like lemmas and diacritized forms (for Semitic languages), are open-ended, with large target vocabularies. Moreover, non-lexicalized features are modeled on the word level, whereas lexicalized features are optimally modeled on the character level. This difference in the modeling granularity can be challenging for joint models. In this paper we present a model for handling lexicalized and non-lexicalized features jointly. We use a sequence-to-sequence architecture, with different parameter sharing strategies at the encoder and decoder sides for the different features. The non-lexicalized features are handled with a tagger, which shares several parameters with the encoder, and uses a multitask-learning architecture to model the different non-lexicalized features jointly. 
The lexicalized features, on the other hand, are handled with a specific decoder for each feature, sharing the same encoder. Our architecture models the non-lexicalized features on the word level, with a context representation that spans the entire sentence. The lexicalized features are modeled on the character level, with a fixed character context window. The character-level modeling is also suitable for surface form normalization, which is important for the noisy texts common in dialectal content. We use Modern Standard Arabic (MSA) and Egyptian Arabic (EGY) as test cases. Our joint model achieves 20% relative error reduction (1.9% absolute improvement) for MSA, and 11% relative error reduction (2.5% absolute improvement) for EGY, compared to a baseline that models the different morphological features separately.

The rest of the paper is structured as follows. We present a brief background and a survey of related work in Section 2. We introduce the approach and the various models in Section 3, and discuss the experimental setup and results in Section 4. We conclude and provide some directions for future work in Section 5.

2 Background and Related Work

In this section we present a brief linguistic overview of the challenges facing morphological modeling in Semitic and morphologically rich languages. We then discuss related contributions in the literature, and how our model compares to them.

2.1 Linguistic Introduction

Morphologically rich languages (MRLs) tend to have more fully inflected words than other languages, realized through many morphemes that represent several morphological features. The target space for the combined morphological features therefore tends to be large, which increases sparsity. MRLs can also be highly ambiguous, with different interpretations of the same surface forms. Ambiguity is further exacerbated for Semitic languages, like Arabic and Hebrew, where the short vowels (diacritics) can be kept or dropped. The high degree of ambiguity in Arabic results in about 12 analyses per word on average (Pasha et al., 2014); for more information on Arabic natural language processing, see Habash (2010).

Both morphological richness and ambiguity can be modeled with morphological analyzers, or morphological dictionaries, which are used to encode all potential word inflections in the language. Morphological analyzers should ideally return all the possible analyses of a surface word (to model ambiguity), and cover all the inflected forms of a word lemma (to model morphological richness), covering all related features. The best analysis can then be chosen through morphological disambiguation: predicting the different morphological feature values and using them to rank the relevant analyses from the analyzer. The morphological features that we model for Arabic include:

• Lexicalized features: lemmas (lex) and diacritized forms (diac).
• Non-lexicalized features: aspect (asp), case (cas), gender (gen), person (per), part-of-speech (POS), number (num), mood (mod), state (stt), voice (vox).
• Clitics: enclitics, like pronominal and negative-particle enclitics; proclitics, like the article proclitic and preposition, conjunction, and question proclitics.

Table 1 shows an example highlighting the different morphological features.
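In implementation terms, each row of Table 1 is one candidate analysis and can be viewed as a simple map from feature names to values. The sketch below mirrors the first row of the table; the dict layout itself is illustrative, not the paper's data format.

```python
# One candidate analysis for the word "lmthm" (first row of Table 1), represented
# as a feature map; feature names follow the inventory listed above.
analysis = {
    "diac": "lam~at.hum", "lex": "lam~",            # lexicalized features
    "pos": "verb", "per": "3", "asp": "p", "vox": "a", "mod": "i",
    "gen": "f", "num": "s", "stt": "na", "cas": "na",
    "prc3": "0", "prc2": "0", "prc1": "0", "prc0": "0", "enc0": "dobj3mp",
}
```

A morphological analyzer returns a set of such maps for each word, and disambiguation amounts to picking one of them.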
The example presents a subset of the possible analyses for the word ÑîDÖÏ lmthm.2 Disambiguation using the non-lexicalized features only is not conclusive enough, as we see in the last two analyses, where the lemma and diacritization only can disambiguate the right analysis. Dialectal Arabic (DA) includes several dialects of Arabic, like EGY, that vary by the geographical location in the Arab world. DA is also Semitic and an MRL, but it is mainly spoken, and lacks a standard orthography (Habash et al., 2012a). The lack of a standard orthography further increases sparsity and ambiguity, hence requiring explicit normalization. Habash et al. (2012a, 2018) proposed CODA, a Conventional Orthography for Dialectal Arabic, which aims to provide a conventionalized orthography across the various Arabic dialects. We use CODA as the reference for the normalization task. 2.2 Morphological Tagging Arabic morphological tagging and disambiguation have been studied extensively in literature, with contributions for MSA (Khalifa et al., 2016; Abdelali et al., 2016; Habash and Rambow, 2005; Diab et al., 2004), and DA (Habash et al., 2013; Al-Sabbagh and Girju, 2012; Duh and Kirchhoff, 2005). There are also several recent contributions that showed significant accuracy improvement using deep learning models (Zalmout et al., 2018; Inoue et al., 2017; Zalmout and Habash, 2017; Heigold et al., 2016). In addition to other deep learning contributions that showed limited success for Arabic (Shen et al., 2016). Most of these contributions model the different morphological features separately, or focus on a limited feature subset. We elaborate on the contributions with some joint modeling aspects later in the section. 2.3 Diacritization and Lemmatization Diacritization and lemmatization are very useful for tasks like information retrieval, machine translation, and text-to-speech, among others. 2Arabic transliteration is presented in the Habash-SoudiBuckwalter scheme (Habash et al., 2007). 8299 Diacrtization Lemma English POS Prc3 Prc2 Prc1 Prc0 Per Asp Vox Mod Gen Num Stt Cas Enc0 lam∼at.hum lam∼ she collected them verb 0 0 0 0 3 p a i f s na na dobj3mp lum.tahum lAm you [m.s.] blamed them verb 0 0 0 0 2 p a i m s na na dobj3mp lum.tihim lAm you [f.s.] blamed them verb 0 0 0 0 2 p a i f s na na dobj3mp lum.tuhum lAm I blamed them verb 0 0 0 0 1 p a i m s na na dobj3mp lam∼atuhum lam∼a¯h their collection noun 0 0 0 0 na na na na f s c n poss3mp limut∼aham˜ı mut∼aham for a suspect noun 0 0 li (prep) 0 na na na na m s i g 0 limut∼ahim˜ı mut∼ahim for an accuser noun 0 0 li (prep) 0 na na na na m s i g 0 Table 1: A subset of all the possible analyses for the word ÑîDÖÏ lmthm. Notice that in the last two analyses the words are disambiguated through the lemmas and diacritized forms only, and they share all the other features. Diacritization has generally been an active area of research (Darwish et al., 2017; Zitouni et al., 2006; Nelken and Shieber, 2005). More recent contributions use Deep Learning models in different configurations; Belinkov and Glass (2015) model diacritization as a classification task, using Long Short Term Memory (LSTM) cells. And Abandah et al. (2015) use LSTMs to model diacritization as a sequence transcription task, similar to Mubarak et al. (2019) who model diacritization as a sequence-to-sequence task. Early contributions for lemmatization used finite state machines (Schmid et al., 2004; Minnen et al., 2001), which had a limited capacity for modeling unseen words or lemmas. 
There were also several contributions that utilize a joint tagging and lemmatization approach, using CRFs and Maximum Entropy models (Müller et al., 2015; Chrupala et al., 2008). Other contributions approached lemmatization as a lemma selection task (Ezeiza et al., 1998), where the goal is to select the correct lemma from a set of lemmas provided by a morphological analyzer. Many of the lemmatization models for Arabic use a similar approach (Pasha et al., 2014; Roth et al., 2008). More recently, sequenceto-sequence models with attention (Bahdanau et al., 2014) have been shown useful in several NLP tasks, with several lemmatization contributions (Malaviya et al., 2019; Bergmanis and Goldwater, 2018; Pütz et al., 2018). Other contributions use additional morphosyntactic features as part of the modeling architecture (Kanerva et al., 2019; Kondratyuk et al., 2018), somewhat similar to our approach. 2.4 Joint Morphological Modeling in Arabic There are also several contributions for the joint modeling of the different morphological features in Arabic. However, most of these contributions use separate models for each of the features, and usually use a ranking step to select the best overall morphological analysis from an external morphological analyzer (Roth et al., 2008; Habash and Rambow, 2007). MADAMIRA (Pasha et al., 2014) is a popular system for Arabic morphological tagging and disambiguation. It uses SVMs for the different nonlexicalized features, and n-gram language models for the lemmas and diacritized forms. Zalmout and Habash (2017) presented a neural extension of this model, with LSTM taggers for the individual features, and neural language models for the lexicalized features. Inoue et al. (2017) used multi-task learning for fine-grained POS tagging, modeling the different morphological features jointly, but they do not model lemmas or diacritized forms. Zalmout and Habash (2019) also used multitask learning for the different non-lexicalized morphological features, and neural language models for lemmas and diacritized forms. This model currently provides state-of-the-art results for Arabic. In the models that rely on morphological analyzers (Zalmout and Habash, 2019, 2017; Pasha et al., 2014) surface form normalization are byproducts of selecting the correct analysis, rather than being explicitly modeled. 3 Approach Non-lexicalized features are usually modeled on the word level, whereas lexicalized features are better handled through character level models. Moreover, the context representation for morphological tagging of the non-lexicalized features usually spans the entire sentence, using LSTMs for example. The optimal context representation for the lexicalized features, on the other hand, is through a fixed number of characters before and after the target word (Bergmanis and Goldwater, 2018). This difference in modeling granularity, in terms of context representation or word/character level modeling, can be very challenging for joint modeling. We use a modified sequence-to-sequence architecture, where some components of the encoder are shared between a tagger, for the non-lexicalized 8300 features, and the encoder-decoder architecture, for the lexicalized features. We also use separate decoders for the different lexicalized features, that share the same encoder and trained jointly using a shared loss function. The remainder of this section discusses the architecture in more detail. 
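Before the detailed subsections, the parameter-sharing pattern just described can be summarized with a rough PyTorch-style skeleton. This is a sketch only: the module names, sizes, and simplified forward pass are illustrative, it is not the authors' implementation, and it omits the richer tagger input, the attention over encoder states, the conditioning on predicted tags, and the training details discussed below.

```python
import torch
import torch.nn as nn

class JointMorphSketch(nn.Module):
    """Sketch of the sharing layout: word/char embeddings feed (i) a word-level
    multitask tagger for non-lexicalized features and (ii) a character-level
    encoder whose states are shared by separate lemma and diacritization decoders."""
    def __init__(self, n_chars, n_words, tag_sizes, d=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d)
        self.word_emb = nn.Embedding(n_words, d)
        # word-level tagger over the whole sentence (non-lexicalized features)
        self.tagger = nn.LSTM(d, d, num_layers=2, bidirectional=True,
                              batch_first=True)
        self.tag_heads = nn.ModuleDict(
            {feat: nn.Linear(2 * d, n) for feat, n in tag_sizes.items()})
        # character-level encoder over a fixed context window (shared)
        self.encoder = nn.LSTM(2 * d, d, num_layers=2, bidirectional=True,
                               batch_first=True)
        # one decoder per lexicalized feature, both reading the same encoder
        self.lemma_decoder = nn.LSTM(d, 2 * d, num_layers=2, batch_first=True)
        self.diac_decoder = nn.LSTM(d, 2 * d, num_layers=2, batch_first=True)
        self.lemma_out = nn.Linear(2 * d, n_chars)
        self.diac_out = nn.Linear(2 * d, n_chars)

    def forward(self, word_ids, window_char_ids, window_word_ids,
                lemma_in_ids, diac_in_ids):
        # (i) tagger: sentence of words -> per-word logits for each feature
        h_tag, _ = self.tagger(self.word_emb(word_ids))            # (B, L, 2d)
        tag_logits = {f: head(h_tag) for f, head in self.tag_heads.items()}
        # (ii) shared encoder: each window character concatenated with the
        # embedding of the word it belongs to
        c = torch.cat([self.char_emb(window_char_ids),
                       self.word_emb(window_word_ids)], dim=-1)     # (B, T, 2d)
        enc, _ = self.encoder(c)                                    # (B, T, 2d)
        # (iii) both decoders start from the final encoder state (shared encoder);
        # attention and tag conditioning are omitted in this sketch
        h0 = enc[:, -1, :].unsqueeze(0).repeat(2, 1, 1).contiguous()
        c0 = torch.zeros_like(h0)
        lemma_h, _ = self.lemma_decoder(self.char_emb(lemma_in_ids), (h0, c0))
        diac_h, _ = self.diac_decoder(self.char_emb(diac_in_ids), (h0, c0))
        return tag_logits, self.lemma_out(lemma_h), self.diac_out(diac_h)
```

Setting the decoder hidden size to 2d mirrors the design choice of initializing the decoders from the bidirectional encoder's final output, whose dimension is 2d.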
3.1 Tagger

The tagging architecture is similar to the architecture presented by Zalmout and Habash (2019). We use two Bi-LSTM layers on the word level to model the context on each side of the target word. The context in the tagging network spans the entire input sentence. For each sentence {w1, w2, ..., wL} of length L, every word wj is represented by a vector vj, which is the concatenation vj = [wj; sj; aj], where wj is the word embedding vector, sj is a vector representation of the characters within the word, and aj is a vector representing all the candidate morphological tags (from an analyzer) for all the non-lexicalized morphological features.

To obtain the vector sj, we use an LSTM-based model applied to the character sequence of each word separately, and use the last state vector as the embedding representation of the word's characters. To get the aj vector, for each morphological feature f, we use a morphological analyzer to obtain all possible feature values of the word to be analyzed. We then embed each value separately (with separate embedding tensors for each feature, learnt within the model) and sum the resulting vectors to get a^f_j (since these tags are alternatives and do not constitute a sequence) (Zalmout and Habash, 2019). We concatenate the individual a^f_j vectors of each morphological feature f of each word to get a single representation, aj, for all the features:

a^{f}_{j} = \sum_{n=1}^{N_f} a^{f}_{j,n}

a_j = [a^{pos}_{j}; \ldots; a^{num}_{j}; \ldots; a^{vox}_{j}]

where N_f is the number of candidate values for feature f (from the analyzer). The aj vector does not constitute a hard constraint and can be discarded if a morphological analyzer is not used.

Several previous contributions for Arabic showed that pretraining the word embeddings is very useful (Erdmann et al., 2018; Watson et al., 2018; Zalmout and Habash, 2017), including the baselines used in this paper. We therefore pre-train the word embeddings with FastText (Bojanowski et al., 2017), using a large external dataset. The pre-trained embeddings are fixed during model training. The character and tag embeddings are learnt within the model.

We use a multitask learning setup to train the different morphological features jointly, by sharing the parameters of the hidden layers in the Bi-LSTM network. The input is also shared, through the vj vector. The output of the network is then fed to a separate non-linearity, output layer, and softmax for each feature, yielding a probability distribution over each of the features separately. Figure 1 shows the overall tagging architecture.

3.2 Encoder

We share the character and word embeddings from the tagger network in the encoder. The input context is modeled through a sliding window of a fixed number of characters around the target word, as in the Lematus model (Bergmanis and Goldwater, 2018). We also use additional special symbols for the whitespace and target word boundaries. In addition to the character embeddings, we also condition on the word-level embedding of the word containing the characters: each character embedding ci is replaced by the concatenation [ci; wj], where wj is the dw-dimensional word embedding of the word j in which character i appears. Given the characters of an input sentence c and its lemmatized equivalent y, the goal is to model P(yk | ci, wj). We then feed the input vectors to a network of two Bi-LSTM layers for the hidden representation at the encoder.

3.3 Decoders

We use separate decoders for lemmatization and diacritization, with two LSTM layers each. Both decoders share the same input and the parameters of the encoder Bi-LSTM network. For each decoder, we condition on the decoder output of the previous step, along with Luong attention (Luong et al., 2015) over the encoder outputs hi, and the predicted tags from the tagger. We use the last encoder output as the initial state for the decoder layers. We use scheduled sampling (Bengio et al., 2015) during training, and feed the dc-dimensional character embeddings at every time step; however, we found empirically that using a constant sampling probability instead of a schedule provides better results.
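Going back to the tagger input of Section 3.1 for a moment, the candidate-tag vector aj can be sketched as follows. This is a NumPy illustration with hypothetical names; the real model learns the tag embeddings end to end, and features with no analyzer candidates are not handled in this sketch.

```python
import numpy as np

def candidate_tag_vector(candidates_per_feature, tag_embeddings, feature_order):
    """Builds a_j for one word: for each feature f, the embeddings of all candidate
    values proposed by the analyzer are summed (they are alternatives, not a
    sequence), and the per-feature sums are concatenated in a fixed order."""
    parts = []
    for f in feature_order:                        # e.g. ["pos", ..., "num", ..., "vox"]
        vecs = [tag_embeddings[f][v] for v in candidates_per_feature[f]]
        parts.append(np.sum(vecs, axis=0))         # a_j^f = sum_n a_{j,n}^f
    return np.concatenate(parts)                   # a_j = [a_j^pos; ...; a_j^vox]
```

Here candidates_per_feature maps each feature name to the analyzer's candidate values for the word, and tag_embeddings holds one embedding table per feature.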
3.3 Decoders
We use separate decoders for lemmatization and diacritization, with two LSTM layers each. Both decoders share the input and the parameters of the encoder Bi-LSTM network. For each decoder, we condition on the decoder output of the previous step, along with Luong attention (Luong et al., 2015) over the encoder outputs hi, and the predicted tags from the tagger. We use the last encoder output as the initial state for the decoder layers. We use scheduled sampling (Bengio et al., 2015) during training, and feed the dc-dimensional character embeddings at every time step; empirically, however, we found that using a constant sampling probability instead of a schedule provides better results. We also use dropout on the non-recurrent connections of both the encoder and decoder layers during training. The decoder outputs are fed to a softmax layer that maps the vectors to dimension dvoc, followed by an argmax to yield an output sequence y one character at a time.

Conditioning on the Predicted Tags
In addition to the attention distribution and the previous time step, we also condition on the predicted tags from the tagger during decoding. The goal is to provide an additional contextual signal to the decoders, and to disambiguate the possible lexical choices. We use the output of the argmax (over the softmax distribution) for each feature, and concatenate the different tags as in the aj vector:

\hat{t}_j = [\hat{t}^{asp}_j; ...; \hat{t}^{pos}_j; ...; \hat{t}^{vox}_j]

Figure 1: The tagger model, showing the multitask learning architecture for the features. The concatenated predicted tags are used for conditioning at the decoders.

Preventing Backpropagation to Tagger
The decoder produces the lexicalized features at the character level, whereas the predicted tags are on the word level. The different granularities might create some biases, and we found that backpropagating gradients from the decoder to the tagger network leads to instability at the tagger. Therefore, we prevent the decoder from backpropagating gradients to the tagger during training. This is consistent with the model of Kondratyuk et al. (2018).

Figure 2: The sequence-to-sequence architecture for the lexicalized features, with a shared encoder, and separate decoders for lemmatization and diacritization. The figure does not show the fixed context window of 10 characters before and after the target word.

3.4 Surface Form Normalization
We use the term normalization in the sense of enriched normalization introduced by El Kholy and Habash (2012) for MSA, and in the sense of spelling conventionalization (into CODA) for DA as described by Eskander et al. (2013). Both are non-trivial tasks, comparable to true-casing or spelling correction for other languages. The normalization task is particularly important for dialectal content, which lacks a standardized orthography. The training data that we use has the diacritized annotations already in the CODA-normalized form for EGY.
So the output sequence of the diacritization task should be both the diacritized and the CODA-normalized version of the input sequence. This normalization is learnt explicitly in our character-level sequence-to-sequence model. For MSA there is no need for CODA normalization, so the normalized output includes any error correction that might be present in the training dataset. Normalization is assessed as part of the overall diacritization accuracy.

3.5 Training Procedure
We use a small held-out tuning set of about 5% of the training data to save the best model during training. We did not use the development set here, to be consistent with other contributions in the literature, where the development set is primarily used to evaluate high-level design decisions only. We train the model for a fixed number of epochs and select the model that performs best on the tuning set. This method provided the most stable results, compared to early stopping or other methods.
The loss function is based on minimizing the cross entropy H for each feature f. The overall loss is the average of the individual losses for the different features, whether lexicalized or non-lexicalized:

H(\hat{y}, y) = \frac{1}{|F|} \sum_{f \in F} H(\hat{y}_f, y_f)

where F is the set of features that we model, y represents the true feature value, and \hat{y} is the predicted value. We experimented with different optimizers for the lexicalized and non-lexicalized features, and with a weighted average over the different features, where the weights are learnt as part of the end-to-end system; none of these modifications provided any improvement. We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0005, and we run the various models for 50 epochs.

3.6 Full Morphological Disambiguation
Morphological disambiguation involves predicting the right combination of morphological features for each word in context. We can either present the predicted features from the model directly, or use a morphological analyzer to guarantee more consistent feature values. If a morphological analyzer is used, the disambiguation system selects the optimal analysis for the word from the set of analyses returned by the analyzer. We use the predicted tags to rank the analyses, and select the analysis with the highest number of matched feature values. The different features can be assigned different weights during ranking. Refer to other contributions that use a similar approach for more details (Zalmout and Habash, 2019, 2017; Pasha et al., 2014).
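To make the ranking step concrete, the sketch below scores each candidate analysis from the analyzer by the weighted number of features that match the predicted tags and returns the best one. The data layout and the default weight of 1.0 are illustrative assumptions; the paper only specifies that features may receive different weights during ranking.

def rank_analyses(candidate_analyses, predicted_tags, feature_weights=None):
    """Select the analyzer analysis that best matches the predicted tags.

    candidate_analyses: list of dicts, each mapping feature name -> value
    predicted_tags:     dict mapping feature name -> predicted value
    feature_weights:    dict mapping feature name -> weight (assumed; default 1.0)
    """
    feature_weights = feature_weights or {}

    def score(analysis):
        return sum(feature_weights.get(f, 1.0)
                   for f, value in predicted_tags.items()
                   if analysis.get(f) == value)

    return max(candidate_analyses, key=score)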
4 Experiments and Results

4.1 Data
We use the Penn Arabic Treebank (PATB, parts 1, 2, and 3) (Maamouri et al., 2004) for MSA, and the ARZ dataset (Maamouri et al., 2012) from the Linguistic Data Consortium (LDC), parts 1–5, for EGY. We use the same datasets as used in MADAMIRA (Pasha et al., 2014), which involves synchronizing the datasets with morphological analyzers, using the process described by Habash and Rambow (2005). We follow the data splits recommended by Diab et al. (2013) for TRAIN, DEVTEST, and BLINDTEST.3 Both datasets include gold annotations for the diacritized forms, lemmas, and the remaining 14 features. The diacritized forms are normalized following the CODA guidelines for EGY. We use Alif/Ya and Hamza normalization, which is commonly used for morphological modeling in Arabic (Zalmout et al., 2018; Pasha et al., 2014; Habash et al., 2013). Table 2 shows the data sizes.
The TUNE dataset is used during the model training process, for early stopping or to keep the best performing model. TUNE is extracted randomly from the original TRAIN split (almost 5% of TRAIN), so the other splits are consistent with the splits used in the literature. The DEVTEST dataset is used during system development to assess design choices. The BLINDTEST dataset is used to evaluate the system after finalizing the architecture design, and to report the overall performance.

       TRAIN   TUNE   DEVTEST   BLINDTEST
MSA    479K    23K    63K       63K
EGY    127K    6K     21K       20K
Table 2: Word count statistics for MSA and EGY.

We use the same morphological analyzers that were used in MADAMIRA (Pasha et al., 2014) and the other baselines, for both MSA and EGY. For MSA we use SAMA (Graff et al., 2009), and for EGY the combination of SAMA, CALIMA (Habash et al., 2012b), and ADAM (Salloum and Habash, 2014). We use the LDC's Gigaword corpus (Parker et al., 2011) to pretrain the MSA word embeddings, and the BOLT Arabic Forum Discussions corpus (Tracey et al., 2018) for EGY, as used in the reported baselines. We preprocessed both datasets with Alif/Ya and Hamza normalization, as we did for the training dataset.

3 We use the LDC datasets because their annotations cover many of the tasks that are relevant to morphological disambiguation, and they are often used for benchmarking purposes. Other available datasets are usually limited to a particular task, like diacritization or POS tagging (Darwish et al., 2017, 2018; Abandah et al., 2015). Evaluating our model using these datasets is also not straightforward, since they often use different tagsets or representations (especially for diacritization), for which automatic conversion would require extensive post-processing.

4.2 Experimental Setup
Tagger We use a similar setup to that of Zalmout and Habash (2019): two Bi-LSTM hidden layers of size 800, a dropout probability of 0.4, and peephole connections. The LSTM character embedding architecture uses two LSTM layers of size 100, with embedding size 50. We use FastText (Bojanowski et al., 2017) to pretrain the word embeddings, with an embedding dimension of 250 and an embedding window of size two.

Encoder-Decoder We use two LSTM layers of size 400 for both the encoder and the decoder (bidirectional for the encoder), a dropout value of 0.4, and a fixed sampling probability of 0.4 (Bengio et al., 2015). We use the same word and character embeddings as the tagger. We use beam decoding with a beam size of 5, and a context window of 10 characters before and after the target word.

Metrics The evaluation metrics we use include:
• POS accuracy (POS): the accuracy of the POS tags, over a tagset of 36 tags (Habash et al., 2013).
• Non-lexicalized morphological feature accuracy (TAGS): the accuracy of the combined 14 morphological features we model, excluding lemmas and diacritized forms.
• Diacritization accuracy (DIAC): the accuracy of the diacritized forms, for MSA only.
• CODA-based normalization accuracy (CODA): the accuracy of the CODA-normalized and diacritized EGY forms. MSA does not need CODA normalization.
• Lemmatization accuracy (LEMMA): lemma accuracy. The lemmas are also fully diacritized in the LDC datasets, so this metric reflects the fully diacritized lemmas.
• Full analysis accuracy (FULL): accuracy over the full analysis – the strictest metric.

Baselines The first baseline is MADAMIRA (Pasha et al., 2014), which is one of the most commonly used morphological disambiguation models for Arabic.
We also use the model suggested by Zalmout and Habash (2017), which is based on a similar architecture, but uses LSTM taggers instead of the SVM models in MADAMIRA, and LSTMbased language models instead of the n-gram models. The last baseline uses a multitask learning architecture to model the different non-lexicalized features jointly, but neural language models for the lexicalized features (Zalmout and Habash, 2019). We use the same feature weights during the disambiguation process as this baseline. 4.3 Results Table 3 presents the results for the baselines, and the joint modeling architecture. The results show a significant accuracy improvement for the joint modeling approach, compared to all baselines. Diacritization The diacritization task seems to have benefited the most of the joint modeling architecture, with about 16% relative error reduction for MSA. This is probably due to the relatively large target space for diacritized forms when using the language modeling approach in the baseline, compared to lemmatization for example, which has a smaller overall types count. The character level sequence-to-sequence architecture is more suitable to this task, with a small character target space. Normalization In the baseline model normalization is a byproduct of selecting the right analysis, rather than a modeling goal. However, character level models provide for an explicit and direct normalization capability, as the model learns to map the erroneous sequence to the normalized target sequence. Our model results in 12% relative error reduction for EGY. Overall Feature Consistency An analysis is consistent if all the feature values are linguistically acceptable to co-occur with each other. For example, case is undefined for verbs, so if a verb analysis had a defined case value, this analysis is inconsistent. The same applies to consistency between the tags and the corresponding lemma (or diacritized form). The TAGS metric, which represents the accuracy of the combined non-lexicalized features, also shows noticeable improvement for MSA. The fact that TAGS improved, along with FULL, while the POS accuracy remained somewhat similar, indicates that the model is now producing more consistent morphological predictions. This improved consistency is probably the result of enhanced diacritization and lemmatization models, which provide a better signal to the overall analysis ranking. The improvement in TAGS for EGY, on the other hand, is limited. This indicates that the model was probably already producing more consistent non-lexicalized morphological features, and the improvement in the FULL metric is due to improved diacritization and lemmatization only. The Role of Morphological Analyzers Morphological analyzers are also used to guarantee consistency in the predicted features. 
MSA
Model                                                              FULL  TAGS  DIAC  LEX   POS
(a) MADAMIRA (SVM models + analyzer) (Pasha et al., 2014)          85.6  87.1  87.7  96.3  97.1
(b) LSTM models + analyzer (Zalmout and Habash, 2017)              90.4  92.3  92.4  96.9  97.9
(c) + Multitask learning for the tags (Zalmout and Habash, 2019)   90.8  92.7  92.7  96.9  97.9
(d) Joint modeling + analyzer                                      92.3  93.5  93.9  97.6  98.1
(e) Joint modeling without analyzer                                90.3  92.7  92.8  96.3  97.7

EGY
Model                                                              FULL  TAGS  CODA  LEX   POS
(a) MADAMIRA (SVM models + analyzer) (Pasha et al., 2014)          76.2  86.7  82.4  86.4  91.7
(b) LSTM models + analyzer (Zalmout and Habash, 2017)              77.0  88.8  82.9  87.6  92.9
(c) + Multitask learning for the tags (Zalmout and Habash, 2019)   77.2  88.8  82.9  87.6  93.1
(d) Joint modeling + analyzer                                      79.5  89.0  85.0  88.5  93.1
(e) Joint modeling without analyzer                                73.2  84.9  81.5  84.4  91.1

Table 3: The results of the various models on the DEVTEST for MSA and EGY. The first and second baselines, (a) and (b), use separate models for the features, and the third, (c), uses a multitask learning architecture for the non-lexicalized features only.

The baselines and our best performing model all use morphological analyzers, to get the candidate tags at the input, and to produce the best analysis through the ranking process. We train our model without using the analyzer – without the t vector and without ranking – to evaluate its role in the morphological disambiguation task. The results are lower, both for MSA and EGY. However, the result for MSA is very close to the Zalmout and Habash (2017) baseline, which uses separate feature models (with the analyzer). This indicates that our model can match the accuracy of a strong baseline, without relying on expensive external resources. This does not apply to EGY, probably due to the lower training data size and noisier content. Even with a better model, morphological analyzers still provide additional consistency between the different features.

BLINDTEST Results The results for the BLINDTEST dataset were consistent with the DEVTEST. The accuracy for EGY using the strongest baseline is 78.1, based on the multitask learning architecture for the tags. The accuracy of the best system, using the joint modeling architecture along with the morphological analyzer, is 80.3. We also observed the same behavior for MSA, with somewhat similar values to DEVTEST. The strongest baseline had an accuracy of 90.8, whereas the best model had an accuracy of 92.6.

4.4 Error Analysis
The Role of Morphological Analyzers The goal is to assess the role of morphological analyzers in the consistency (following the consistency definition mentioned earlier) of the predicted features. We took a sample of 1000 words from the MSA DEVTEST, ran it through the joint model that does not use a morphological analyzer, and checked the errors in the predictions. There were 110 errors (11% of the sample), for an accuracy of 89%, which is close to the reported accuracy over the entire dataset. About 62% of the errors had consistent feature predictions, but the predicted analysis did not match the gold. Around 13% of the errors are due to gold errors. Around 25% of the errors (2.8% of the sample) had inconsistent predictions. This roughly matches the accuracy gap between the joint model with and without the morphological analyzer, which is also around 2%. This indicates that the accuracy boost that the morphological analyzer provides is to a large extent due to the consistency it conveys.
We also observed that 37% of the inconsistent predictions (1% of the sample) had a correct lemma, but the lemma was inconsistent with the analysis. The remaining 63% (1.7% of sample), had an invalid lemma. Joint Modeling vs Separate Modeling We also investigated the distribution of errors over the different features for the joint model against the baseline of separate feature models, both using the morphological analyzer. We annotated the errors in a 1000-word sample from DEVTEST, for both MSA and EGY, with the main erroneous feature. For example, if the predicted analysis is a verb inflection of a gold noun, the main erroneous feature would be the POS tag, even if other features ended up being wrong as a result. For MSA, the error distribution for the baseline is: case 27%, diacritization 22%, POS 18%, lemmatization 13%, gold errors 11%, and smaller percentages for state, voice, person, and enclitics. Whereas the distribution for the joint model is: case 26%, POS 21%, lemmatization 18%, gold errors 14%, diacritization 13%, and 8305 small percentages for state, voice, and person. In both models, case dominates the error distribution, since identifying the case ending in MSA is particularly challenging. The main difference between the models in terms of error distribution is the diacritization, where we observe a significant boost when we use the joint model. The apparent increase in the error percentages of the other error types at the joint model is due to the drop in the overall errors count, while many have a lower drop rate. For EGY, a notable error pattern is when the prediction matches the MSA-equivalent analysis of the dialectal word, like having an MSA-like diacritization, or having a case ending (DA, like EGY, does not have case ending). This happens due to codeswitching with MSA in the dialectal content, which is also reflected at the analyzer. This error type is not an error per se, but we do include it in the analysis. The error distribution for the separate features baseline is: gold errors 23%, MSA-equivalents 21%, POS 17%, lemmatization 14%, diacritization 12%, and smaller percentages for several other error types. Whereas the distribution for the joint model is: gold errors 27%, MSA-equivalents 21%, lemmatization 18%, POS 14%, diacritization 7%, and smaller frequencies for the other errors. Gold errors are frequent, but this is consistent with other contributions that use the same dataset (Zalmout et al., 2018). Like MSA, the percentage increase of the other error types is due to lower drop rates. 5 Conclusions and Future Work We presented a joint modeling approach for the lexicalized and non-lexicalized features in morphologically rich and Semitic languages. Our model achieves a significant improvement over several baselines for Arabic, and matches the baseline for MSA without having to use an expensive morphological analyzer. The results highlight the benefits of joint modeling, where diacritization seems to have benefitted the most. We observe, however, that further research is needed to enhance the overall consistency of the predicted features, without relying on external morphological analyzers. 6 Acknowledgment The first author was supported by the New York University Abu Dhabi Global PhD Student Fellowship program. The support and resources from the High Performance Computing Center at New York University Abu Dhabi are gratefully acknowledged. References Gheith A Abandah, Alex Graves, Balkees Al-Shagoor, Alaa Arabiyat, Fuad Jamour, and Majid Al-Taee. 2015. 
Automatic diacritization of Arabic text using recurrent neural networks. International Journal on Document Analysis and Recognition (IJDAR), 18(2):183–197. Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for Arabic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11–16, San Diego, California. Rania Al-Sabbagh and Roxana Girju. 2012. A supervised POS tagger for written Arabic social networking corpora. In Proceedings of KONVENS 2012, pages 39–52. OGAI. Main track: oral presentations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yonatan Belinkov and James Glass. 2015. Arabic diacritization with recurrent neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2281– 2285, Lisbon, Portugal. Association for Computational Linguistics. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1, pages 1171–1179. MIT Press. Toms Bergmanis and Sharon Goldwater. 2018. Context sensitive neural lemmatization with lematus. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1391– 1400. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Grzegorz Chrupala, Georgiana Dinu, and Josef van Genabith. 2008. Learning morphology with morfette. LREC 2008, pages 2362–2367. Kareem Darwish, Hamdy Mubarak, and Ahmed Abdelali. 2017. Arabic diacritization: Stats, rules, and hacks. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 9–17. Kareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy, and Laura Kallmeyer. 2018. Multi-dialect Arabic POS tagging: A CRF approach. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). 8306 Mona Diab, Nizar Habash, Owen Rambow, and Ryan Roth. 2013. LDC Arabic treebanks and associated corpora: Data divisions manual. arXiv preprint arXiv:1309.5652. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks. In Proceedings of the 5th Meeting of the North American Chapter of the Association for Computational Linguistics/Human Language Technologies Conference (HLT-NAACL04), pages 149–152, Boston, MA. Kevin Duh and Katrin Kirchhoff. 2005. POS tagging of dialectal Arabic: a minimally supervised approach. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, Semitic ’05, pages 55–62, Ann Arbor, Michigan. Ahmed El Kholy and Nizar Habash. 2012. Orthographic and morphological processing for English– Arabic statistical machine translation. Machine Translation, 26(1-2):25–45. Alexander Erdmann, Nasser Zalmout, and Nizar Habash. 2018. Addressing noise in multidialectal word embeddings. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 558– 565, Melbourne, Australia. Association for Computational Linguistics. Ramy Eskander, Nizar Habash, Owen Rambow, and Nadi Tomeh. 2013. Processing Spontaneous Orthography. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Atlanta, GA. Nerea Ezeiza, Iñaki Alegria, José María Arriola, Rubén Urizar, and Itziar Aduriz. 1998. Combining stochastic and rule-based methods for disambiguation in agglutinative languages. In Proceedings of the 17th international conference on Computational linguistics-Volume 1, pages 380–384. Association for Computational Linguistics. David Graff, Mohamed Maamouri, Basma Bouziri, Sondos Krouna, Seth Kulick, and Tim Buckwalter. 2009. Standard Arabic Morphological Analyzer (SAMA) Version 3.1. Linguistic Data Consortium LDC2009E73. Nizar Habash, Mona Diab, and Owen Rambow. 2012a. Conventional Orthography for Dialectal Arabic: Principles and Guidelines – Egyptian Arabic. Technical Report CCLS-12-02, Columbia University Center for Computational Learning Systems. Nizar Habash, Fadhl Eryani, Salam Khalifa, Owen Rambow, Dana Abdulrahim, Alexander Erdmann, Reem Faraj, Wajdi Zaghouani, Houda Bouamor, Nasser Zalmout, Sara Hassan, Faisal Al-Shargi, Sakhar Alkhereyf, Basma Abdulkareem, Ramy Eskander, Mohammad Salameh, and Hind Saddiki. 2018. Unified guidelines and resources for Arabic dialect orthography. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resource Association. Nizar Habash, Ramy Eskander, and Abdelati Hawwari. 2012b. A Morphological Analyzer for Egyptian Arabic. In Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology, pages 1–9, Montréal, Canada. Nizar Habash and Owen Rambow. 2005. Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proceedings of the 43rd Annual Meeting of the ACL, pages 573–580, Ann Arbor, Michigan. Nizar Habash and Owen Rambow. 2007. Arabic diacritization through full morphological tagging. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 53–56, Rochester, New York. Association for Computational Linguistics. Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskander, and Nadi Tomeh. 2013. Morphological analysis and disambiguation for dialectal Arabic. In Proceedings of NAACL-HLT, pages 426–432, Atlanta, Georgia. Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter. 2007. On Arabic Transliteration. In A. van den Bosch and A. Soudi, editors, Arabic Computational Morphology: Knowledge-based and Empirical Methods. Springer. Nizar Y Habash. 2010. Introduction to Arabic natural language processing, volume 3. Morgan & Claypool Publishers. Georg Heigold, Josef van Genabith, and Günter Neumann. 2016. Scaling character-based morphological tagging to fourteen languages. In 2016 IEEE International Conference on Big Data (Big Data), pages 3895–3902. Go Inoue, Hiroyuki Shindo, and Yuji Matsumoto. 2017. Joint prediction of morphosyntactic categories for fine-grained Arabic part-of-speech tagging exploiting tag dictionary information. In Proceedings of the 21st SIGNLL Conference on Computational Natural Language Learning (CoNLL), Vancouver, Canada. 
Jenna Kanerva, Filip Ginter, and Tapio Salakoski. 2019. Universal lemmatizer: A sequence to sequence model for lemmatizing universal dependencies treebanks. arXiv preprint arXiv:1902.00972. Salam Khalifa, Nasser Zalmout, and Nizar Habash. 2016. Yamama: Yet another multi-dialect Arabic morphological analyzer. In Proceedings of the International Conference on Computational Linguistics (COLING): System Demonstrations, pages 223–227. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Daniel Kondratyuk, Tomáš Gavenˇciak, Milan Straka, and Jan Hajiˇc. 2018. Lemmatag: Jointly tagging and lemmatizing for morphologically rich languages with brnns. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4921–4928. 8307 Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In NEMLAR Conference on Arabic Language Resources and Tools, pages 102–109, Cairo, Egypt. Mohamed Maamouri, Sondos Krouna, Dalila Tabessi, Nadia Hamrouni, and Nizar Habash. 2012. Egyptian Arabic Morphological Annotation Guidelines. Chaitanya Malaviya, Shijie Wu, and Ryan Cotterell. 2019. A simple joint model for improved contextual neural lemmatization. CoRR, abs/1904.02306. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207–223. Hamdy Mubarak, Ahmed Abdelali, Hassan Sajjad, Younes Samih, and Kareem Darwish. 2019. Highly effective Arabic diacritization using sequence to sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2390–2395, Minneapolis, Minnesota. Association for Computational Linguistics. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2268–2274. Rani Nelken and Stuart M. Shieber. 2005. Arabic Diacritization Using Finite-State Transducers. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages, pages 79– 86, Ann Arbor. Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2011. Arabic Gigaword Fifth Edition. LDC catalog number No. LDC2011T11, ISBN 1-58563-595-2. Arfath Pasha, Mohamed Al-Badrashiny, Ahmed El Kholy, Ramy Eskander, Mona Diab, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic. In In Proceedings of LREC, Reykjavik, Iceland. Tobias Pütz, Daniël De Kok, Sebastian Pütz, and Erhard Hinrichs. 2018. Seq2seq or perceptrons for robust lemmatization. an empirical examination. In Proceedings of the 17th International Workshop on Treebanks and Linguistic Theories (TLT 2018), December 13–14, 2018, Oslo University, Norway, 155, pages 193–207. Linköping University Electronic Press. Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic morphological tagging, diacritization, and lemmatization using lexeme models and feature ranking. 
In ACL 2008: The Conference of the Association for Computational Linguistics; Companion Volume, Short Papers, Columbus, Ohio. Association for Computational Linguistics. Wael Salloum and Nizar Habash. 2014. ADAM: Analyzer for Dialectal Arabic Morphology. Journal of King Saud University-Computer and Information Sciences, 26(4):372–378. Helmut Schmid, Arne Fitschen, and Ulrich Heid. 2004. Smor: A german computational morphology covering derivation, composition and inflection. In LREC, pages 1–263. Lisbon. Qinlan Shen, Daniel Clothiaux, Emily Tagtow, Patrick Littell, and Chris Dyer. 2016. The role of context in neural morphological disambiguation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 181–191, Osaka, Japan. The COLING 2016 Organizing Committee. Jennifer Tracey, Haejoong Lee, Stephanie Strassel, and Safa Ismael. 2018. BOLT Arabic Discussion Forum Source Data. LDC catalog number LDC2018T10. Daniel Watson, Nasser Zalmout, and Nizar Habash. 2018. Utilizing character and word embeddings for text normalization with sequence-to-sequence models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 837–843. Nasser Zalmout, Alexander Erdmann, and Nizar Habash. 2018. Noise-robust morphological disambiguation for dialectal Arabic. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 953–964, New Orleans, Louisiana. Association for Computational Linguistics. Nasser Zalmout and Nizar Habash. 2017. Don’t throw those morphological analyzers away just yet: Neural morphological disambiguation for Arabic. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 704– 713, Copenhagen, Denmark. Association for Computational Linguistics. Nasser Zalmout and Nizar Habash. 2019. Adversarial multitask learning for joint multi-feature and multidialect morphological modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL ’19, Stroudsburg, PA, USA. Association for Computational Linguistics. Imed Zitouni, Jeffrey S. Sorensen, and Ruhi Sarikaya. 2006. Maximum entropy based restoration of Arabic diacritics. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 577–584, Sydney, Australia. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8308–8319 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8308 Phonetic and Visual Priors for Decipherment of Informal Romanization Maria Ryskina1 Matthew R. Gormley2 Taylor Berg-Kirkpatrick3 1Language Technologies Institute, Carnegie Mellon University 2Machine Learning Department, Carnegie Mellon University 3Computer Science and Engineering, University of California, San Diego {mryskina,mgormley}@cs.cmu.edu [email protected] Abstract Informal romanization is an idiosyncratic process used by humans in informal digital communication to encode non-Latin script languages into Latin character sets found on common keyboards. Character substitution choices differ between users but have been shown to be governed by the same main principles observed across a variety of languages— namely, character pairs are often associated through phonetic or visual similarity. We propose a noisy-channel WFST cascade model for deciphering the original non-Latin script from observed romanized text in an unsupervised fashion. We train our model directly on romanized data from two languages: Egyptian Arabic and Russian. We demonstrate that adding inductive bias through phonetic and visual priors on character mappings substantially improves the model’s performance on both languages, yielding results much closer to the supervised skyline. Finally, we introduce a new dataset of romanized Russian, collected from a Russian social network website and partially annotated for our experiments.1 1 Introduction Written online communication poses a number of challenges for natural language processing systems, including the presence of neologisms, codeswitching, and the use of non-standard orthography. One notable example of orthographic variation in social media is informal romanization2— speakers of languages written in non-Latin alphabets encoding their messages in Latin characters, for convenience or due to technical constraints (improper rendering of native script or keyboard 1The code and data are available at https://github. com/ryskina/romanization-decipherment 2Our focus on informal transliteration excludes formal settings such as pinyin for Mandarin where transliteration conventions are well established. хорошо xopowo horosho [Phonetic] [Visual] [Cyrillic] [Phonetically romanized] [Visually romanized] [Underlying Cyrillic] [Underlying Cyrillic] [Visually romanized] [Phonetically romanized] Figure 1: Example transliterations of a Russian word horoxo [horošo, ‘good’] (middle) based on phonetic (top) and visual (bottom) similarity, with character alignments displayed. The phoneticvisual dichotomy gives rise to one-to-many mappings such as x /S/ →sh / w. layout incompatibility). An example of such a sentence can be found in Figure 2. Unlike named entity transliteration where the change of script represents the change of language, here Latin characters serve as an intermediate symbolic representation to be decoded by another speaker of the same source language, calling for a completely different transliteration mechanism: instead of expressing the pronunciation of the word according to the phonetic rules of another language, informal transliteration can be viewed as a substitution cipher, where each source character is replaced with a similar Latin character. In this paper, we focus on decoding informally romanized texts back into their original scripts. 
We view the task as a decipherment problem and propose an unsupervised approach, which allows us to save annotation effort since parallel data for informal transliteration does not occur naturally. We propose a weighted finite-state transducer (WFST) cascade model that learns to decode informal romanization without parallel text, relying only on transliterated data and a language model over the original orthography. We test it on two languages, Egyptian Arabic and Russian, collecting our own dataset of romanized Russian from a Russian social network website vk.com. 8309 4to mowet bit’ ly4we? [Romanized] Qto moet byt~ luqxe? [Latent Cyrillic] ˇCto možet byt’ luˇcše? [Scientific] /Sto "moZ1t b1tj "lu>tSS1/ [IPA] What can be better? [Translated] Figure 2: Example of an informally romanized sentence from the dataset presented in this paper, containing a many-to-one mapping  / x →w. Scientific transliteration, broad phonetic transcription, and translation are not included in the dataset and are presented for illustration only. Since informal transliteration is not standardized, converting romanized text back to its original orthography requires reasoning about the specific user’s transliteration preferences and handling many-to-one (Figure 2) and one-to-many (Figure 1) character mappings, which is beyond traditional rule-based converters. Although user behaviors vary, there are two dominant patterns in informal romanization that have been observed independently across different languages, such as Russian (Paulsen, 2014), dialectal Arabic (Darwish, 2014) or Greek (Chalamandaris et al., 2006): Phonetic similarity: Users represent source characters with Latin characters or digraphs associated with similar phonemes (e.g. m /m/ →m, l /l/ →l in Figure 2). This substitution method requires implicitly tying the Latin characters to a phonetic system of an intermediate language (typically, English). Visual similarity: Users replace source characters with similar-looking symbols (e.g. q / > tSj/ →4, u /u/ →y in Figure 2). Visual similarity choices often involve numerals, especially when the corresponding source language phoneme has no English equivalent (e.g. Arabic Š /Q/ →3). Taking that consistency across languages into account, we show that incorporating these style patterns into our model as priors on the emission parameters—also constructed from naturally occurring resources—improves the decoding accuracy on both languages. We compare the proposed unsupervised WFST model with a supervised WFST, an unsupervised neural architecture, and commercial systems for decoding romanized Russian (translit) and Arabic (Arabizi). Our unsupervised WFST outperforms the unsupervised neural baseline on both languages. 2 Related work Prior work on informal transliteration uses supervised approaches with character substitution rules either manually defined or learned from automatically extracted character alignments (Darwish, 2014; Chalamandaris et al., 2004). Typically, such approaches are pipelined: they produce candidate transliterations and rerank them using modules encoding knowledge of the source language, such as morphological analyzers or wordlevel language models (Al-Badrashiny et al., 2014; Eskander et al., 2014). 
Supervised finite-state approaches have also been explored (Wolf-Sonkin et al., 2019; Hellsten et al., 2017); these WFST cascade models are similar to the one we propose, but they encode a different set of assumptions about the transliteration process due to being designed for abugida scripts (using consonant-vowel syllables as units) rather than alphabets. To our knowledge, there is no prior unsupervised work on this problem. Named entity transliteration, a task closely related to ours, is better explored, but there is little unsupervised work on this task as well. In particular, Ravi and Knight (2009) propose a fully unsupervised version of the WFST approach introduced by Knight and Graehl (1998), reframing the task as a decipherment problem and learning cross-lingual phoneme mappings from monolingual data. We take a similar path, although it should be noted that named entity transliteration methods cannot be straightforwardly adapted to our task due to the different nature of the transliteration choices. The goal of the standard transliteration task is to communicate the pronunciation of a sequence in the source language (SL) to a speaker of the target language (TL) by rendering it appropriately in the TL alphabet; in contrast, informal romanization emerges in communication between SL speakers only, and TL is not specified. If we picked any specific Latinscript language to represent TL (e.g. English, which is often used to ground phonetic substitutions), many of the informally romanized sequences would still not conform to its pronunciation rules: the transliteration process is characterlevel rather than phoneme-level and does not take possible TL digraphs into account (e.g. Russian sh /sx/ →sh), and it often involves eclectic visual substitution choices such as numerals or punctua8310 tion (e.g. Arabic  [tHt, ‘under’]3 →ta7t, Russian dl [dlja, ‘for’] →dl9| ). Finally, another relevant task is translating between closely related languages, possibly written in different scripts. An approach similar to ours is proposed by Pourdamghani and Knight (2017). They also take an unsupervised decipherment approach: the cipher model, parameterized as a WFST, is trained to encode the source language character sequences into the target language alphabet as part of a character-level noisy-channel model, and at decoding time it is composed with a word-level language model of the source language. Recently, the unsupervised neural architectures (Lample et al., 2018, 2019) have also been used for related language translation and similar decipherment tasks (He et al., 2020), and we extend one of these neural models to our characterlevel setup to serve as a baseline (§5). 3 Methods We train a character-based noisy-channel model that transforms a character sequence o in the native alphabet of the language into a sequence of Latin characters l, and use it to decode the romanized sequence l back into the original orthography. Our proposed model is composed of separate transition and emission components as discussed in §3.1, similarly to an HMM. However, an HMM assumes a one-to-one alignment between the characters of the observed and the latent sequences, which is not true for our task. One original script character can be aligned to two consecutive Latin characters or vice versa: for example, when a phoneme is represented with a single symbol on one side but with a digraph on the other (Figure 1), or when a character is omitted on one side but explicitly written on the other (e.g. 
short vowels not written in unvocalized Arabic but written in transliteration, or the Russian soft sign ~ representing palatalization being often omitted in the romanized version). To handle those alignments, we introduce insertions and deletions into the emission model and modify the emission transducer to limit the number of consecutive insertions and deletions. In our experiments, we compare the performance of the model with and without the informative phonetic and visual similarity priors described in §3.2.

3 The square brackets following a foreign word show its linguistic transliteration (using the scientific and the Buckwalter schemas for Russian and Arabic respectively) and its English translation.

3.1 Model
If we view the process of romanization as encoding a source sequence o into Latin characters, we can consider each observation l to have originated via o being generated from a distribution p(o) and then transformed to Latin script according to another distribution p(l|o). We can write the probability of the observed Latin sequence as:

p(l) = \sum_o p(o; \gamma) \cdot p(l|o; \theta) \cdot p_{prior}(\theta; \alpha)    (1)

The first two terms in (1) correspond to the probabilities under the transition model (the language model trained on the original orthography) and the emission model respectively. The third term represents the prior distribution on the emission model parameters, through which we introduce human knowledge into the model. Our goal is to learn the parameters θ of the emission distribution, with the transition parameters γ being fixed. We parameterize the emission and transition distributions as weighted finite-state transducers (WFSTs):

Transition WFSA The n-gram weighted finite-state acceptor (WFSA) T represents a character-level n-gram language model of the language in the native script, producing the native alphabet character sequence o with probability p(o; γ). We use the parameterization of Allauzen et al. (2003), with the states encoding conditioning history, arcs weighted by n-gram probabilities, and failure transitions representing backoffs. The role of T is to inform the model of what well-formed text in the original orthography looks like; its parameters γ are learned from a separate corpus and kept fixed during the rest of the training.

Emission WFST The emission WFST S transduces the original script sequence o to a Latin sequence l with probability p(l|o; θ). Since there can be multiple paths through S that correspond to the input-output pair (o, l), this probability is summed over all such paths (i.e. it is a marginal over all possible monotonic character alignments):

p(l|o; \theta) = \sum_e p(l, e|o; \theta)    (2)

We view each path e as a sequence of edit operations: substitutions of original characters with Latin ones (co → cl), insertions of Latin characters (ϵ → cl), and deletions of original characters (co → ϵ). Each arc in S corresponds to one
Rather than attempting to operationalize the notions of phonetic or visual similarity, we choose to read the likely mappings between symbols off humancompiled resources that use the same underlying principle: phonetic keyboard layouts and visually confusable symbol lists. Examples of mappings that we encode as priors can be found in Table 1. Phonetic similarity Since we think of the informal romanization as a cipher, we aim to capture the phonetic similarity between characters based on association rather than on the actual graphemeto-phoneme mappings in specific words. We approximate it using phonetic keyboard layouts, oneto-one mappings built to bring together “similarsounding” characters in different alphabets. We take the character pairs from a union of multiple layouts for each language, two for Arabic4 and four for Russian.5 The main drawback of using keyboard layouts is that they require every character to have a Latin counterpart, so some mappings will inevitably be arbitrary; we compensate for this effect by averaging over several layouts. Visual similarity The strongest example of visual character similarity would be homoglyphs— symbols from different alphabets represented by the same glyph, such as Cyrillic a and Latin a. The fact that homoglyph pairs can be made indistinguishable in certain fonts has been exploited in phishing attacks, e.g. when Latin characters are replaced by virtually identical Cyrillic ones (Gabrilovich and Gontmakher, 2002). This led the Unicode Consortium to publish a list of symbols and symbol combinations similar enough to be po4http://arabic.omaralzabir.com/, https://thomasplagwitz.com/2013/01/06/ imrans-phonetic-keyboard-for-arabic/ 5http://winrus.com/kbd_e.htm Original Latin Phon. Vis. r /r/ r p b /b/ b b, 6 v /v/ v, w b ¤ /w, u:, o:/ w, u —  /x/ k, x — Table 1: Example Cyrillic–Latin and Arabic– Latin mappings encoded in the visual and phonetic priors respectively. tentially confusing to the human eye (referred to as confusables).6 This list contains not only exact homoglyphs but also strongly homoglyphic pairs such as Cyrillic  and Latin lO. We construct a visual prior for the Russian model from all Cyrillic–Latin symbol pairs in the Unicode confusables list.7 Although this list does not cover more complex visual associations used in informal romanization, such as partial similarity (Arabic Alif with Hamza  →2 due to Hamza º resembling an inverted 2) or similarity conditioned on a transformation such as reflection (Russian l →v), it makes a sensible starting point. However, this restrictive definition of visual similarity does not allow us to create a visual prior for Arabic—the two scripts are dissimilar enough that the confusables list does not contain any Arabic–Latin character pairs. Proposing a more nuanced definition of visual similarity for Arabic and the associated prior is left for future work. We incorporate these mappings into the model as Dirichlet priors on the emission parameters: θ ∼Dir(α), where each dimension of the parameter α corresponds to a character pair (co, cl), and the corresponding element of α is set to the number of times these symbols are mapped to each other in the predefined mapping set. 3.3 Learning We learn the emission WFST parameters in an unsupervised fashion, observing only the Latin side of the training instances. 
The marginal likelihood of a romanized sequence l can be computed by 6https://www.unicode.org/Public/ security/latest/confusables.txt 7In our parameterization, we cannot introduce a mapping from one to multiple symbols or vice versa, so we map all possible pairs instead: (, lo) →(, l), (, o). 8312 −2 −1 0 1 2 ✏: ⇤l ⇤o : ✏ ⇤o : ✏ ⇤o : ✏ ⇤o : ✏ ⇤o : ⇤l ⇤o : ⇤l ⇤o : ⇤l ⇤o : ⇤l ✏: ⇤l ✏: ⇤l ✏: ⇤l ⇤o : ⇤l Figure 3: Schematic of the emission WFST with limited delay (here, up to 2) with states labeled by their delay values. ∗o and ∗l represent an arbitrary original or Latin symbol respectively. Weights of the arcs are omitted for clarity; weights with the same inputoutput label pairs are tied. summing over the weights of all paths through a lattice obtained by composing T ◦S ◦A(l). Here A(l) is an unweighted acceptor of l, which, when composed with a lattice, constrains all paths through the lattice to produce l as the output sequence. The expectation–maximization (EM) algorithm is commonly used to maximize marginal likelihood; however, the size of the lattice would make the computation prohibitively slow. We combine online learning (Liang and Klein, 2009) and curriculum learning (Bengio et al., 2009) to achieve faster convergence, as described in §3.3.1. 3.3.1 Unsupervised learning We use a version of the stepwise EM algorithm described by Liang and Klein (2009), reminiscent of the stochastic gradient descent in the space of the sufficient statistics. Training data is split into mini-batches, and after processing each minibatch we update the overall vector of the sufficient statistics µ and re-estimate the parameters based on the updated vector. The update is performed by interpolating between the current value of the overall vector and the vector of sufficient statistics sk collected from the k-th mini-batch: µ(k+1) ←(1 −ηk)µ(k) + ηksk. The stepsize is gradually decreased, causing the model to make smaller changes to the parameters as the learning stabilizes. Following Liang and Klein (2009), we set it to ηk = (k + 2)−β. However, if the mini-batch contains long sequences, summing over all paths in the corresponding lattices could still take a long time. As we know, the character substitutions are not arbitrary: each original alphabet symbols is likely to be mapped to only a few Latin characters, which means that most of the paths through the lattice would have very low probabilities. We prune the improbable arcs in the emission WFST while training on batches of shorter sentences. Doing this eliminates up to 66% and up to 76% of the emission arcs for Arabic and Russian respectively. We discourage excessive use of insertions and deletions by keeping the corresponding probabilities low at the early stages of training: during the first several updates, we freeze the deletion probabilities at a small initial value and disable insertions completely to keep the model locally normalized. We also iteratively increase the language model order as learning progresses. Once most of the emission WFST arcs have been pruned, we can afford to compose it with a larger language model WFST without the size of the resulting lattice rendering the computation impractical. The two steps of the EM algorithm are performed as follows: E-step At the E-step we compute the sufficient statistics for updating θ, which in our case would be the expected number of traversals of each of the emission WFST arcs. 
For ease of bookkeeping, we compute those expectations using finitestate methods in the expectation semiring (Eisner, 2002). Summing over all paths in the lattice is usually performed via shortest distance computation in log semiring; in the expectation semiring, we augment the weight of each arc with a basis vector, where the only non-zero element corresponds to the index of the emission edit operation associated with the arc (i.e. the input-output label pair). This way the shortest distance algorithm yields not only the marginal likelihood but also the vector of the sufficient statistics for the input sequence. To speed up the shortest distance computation, we shrink the lattice by limiting delay of all paths through the emission WFST. Delay of a path is defined as the difference between the number of the epsilon labels on the input and output sides of the path. Figure 3 shows the schema of the emission WFST where delay is limited. Substitutions are performed without a state change, and each deletion or insertion arc transitions to the next or previous state respectively. When the first (last) state is reached, further deletions (insertions) are no longer allowed. M-step The M-step then corresponds to simply re-estimating θ by appropriately normalizing the obtained expected counts. 8313 Arabic Russian Sent. Char. Sent. Char. LM train 49K 935K 307K 111M Train 5K 104K 5K 319K Validation 301 8K 227 15K Test 1K 20K 1K 72K Table 2: Splits of the Arabic and Russian data used in our experiments. All Arabic data comes from the LDC BOLT Phase 2 corpus, in which all sentences are annotated with their transliteration into the Arabic script. For the experiments on Russian, the language model is trained on a section of the Taiga corpus, and the train, validation, and test portions are collected by the authors; only the validation and test sentences are annotated. 3.3.2 Supervised learning We also compare the performance of our model with the same model trained in a supervised way, using the annotated portion of the data that contains parallel o and l sequences. In the supervised case we can additionally constrain the lattice with an acceptor of the original orthography sequence: A(o) ◦T ◦S ◦A(l). However, the alignment between the symbols in o and l is still latent. To optimize this marginal likelihood we still employ the EM algorithm. As this constrained lattice is much smaller, we can run the standard EM without the modifications discussed in §3.3.1. 3.4 Decoding Inference at test time is also performed using finite-state methods and closely resembles the Estep of the unsupervised learning: given a Latin sequence l, we construct the machine T ◦S ◦A(l) in the tropical semiring and run the shortest path algorithm to obtain the most probable path ˆe; the source sequence ˆo is read off the obtained path. 4 Datasets Here we discuss the data used to train the unsupervised model. Unlike Arabizi, which has been explored in prior work due to its popularity in the modern online community, a dataset of informally romanized Russian was not available, so we collect and partially annotate our own dataset from the Russian social network vk.com. 4.1 Arabic We use the Arabizi portion of the LDC BOLT Phase 2 SMS/Chat dataset (Bies et al., 2014; Song et al., 2014), a collection of written informal conversations in romanized Egyptian Arabic annotated with their Arabic script representation. 
To prevent the annotators from introducing orthographic variation inherent to dialectal Arabic, compliance with the Conventional orthography for dialectal Arabic (CODA; Habash et al., 2012) is ensured. However, the effects of some of the normalization choices (e.g. expanding frequent abbreviations) would pose difficulties to our model. To obtain a subset of the data better suited for our task, we discard any instances which are not originally romanized (5% of all data), ones where the Arabic annotation contains Latin characters (4%), or where emoji/emoticon normalization was performed (12%). The information about the splits is provided in Table 2. Most of the data is allocated to the language model training set in order to give the unsupervised model enough signal from the native script side. We choose to train the transition model on the annotations from the same corpus to make the language model specific to both the informal domain and the CODA orthography. 4.2 Russian We collect our own dataset of romanized Russian text from a social network website vk.com, adopting an approach similar to the one described by Darwish (2014). We take a list of the 50 most frequent Russian lemmas (Lyashevskaya and Sharov, 2009), filtering out those shorter than 3 characters, and produce a set of candidate romanizations for each of them to use as queries to the vk.com API. In order to encourage diversity of romanization styles in our dataset, we generate the queries by defining all plausible visual and phonetic mappings for each Cyrillic character and applying all possible combinations of those substitutions to the underlying Russian word. We scrape public posts on the user and group pages, retaining only the information about which posts were authored by the same user, and manually go over the collected set to filter out coincidental results. Our dataset consists of 1796 wall posts from 1681 users and communities. Since the posts are quite long on average (248 characters, longest ones up to 15K), we split them into sentences using the NLTK sentence tokenizer, with manual 8314 correction when needed. The obtained sentences are used as data points, split into training, validation and test according to the numbers in Table 2. The average length of an obtained sentence is 65 characters, which is 3 times longer than an average Arabizi sentence; we believe this is due to the different nature of the data (social media posts vs. SMS). Sentences collected from the same user are distributed across different splits so that we observe a diverse set of romanization preferences in both training and testing. Each sentence in the validation and test sets is annotated by one of the two native speaker annotators, following guidelines similar to those designed for the Arabizi BOLT data (Bies et al., 2014). For more details on the annotation guidelines and inter-annotator agreement, see Appendix A. Since we do not have enough annotations to train the Russian language model on the same corpus, we use a separate in-domain dataset. We take a portion of the Taiga dataset (Shavrina and Shapovalova, 2017), containing 307K comments scraped from the same social network vk.com, and apply the same preprocessing steps as we did in the collection process. 5 Experiments Here we discuss the experimental setup used to determine how much information relevant for our task is contained in the character similarity mappings, and how it compares to the amount of information encoded in the human annotations. 
We compare them by evaluating the effect of the informative priors (described in §3.2) on the performance of the unsupervised model and comparing it to the performance of the supervised model. Methods We compare the performance of our model trained in three different setups: unsupervised with a uniform prior on the emission parameters, unsupervised with informative phonetic and visual priors (§3.2), and supervised. We additionally compare them to a commercial online decoding system for each language (directly encoding human knowledge about the transliteration process) and a character-level unsupervised neural machine translation architecture (encoding no assumptions about the underlying process at all). We train the unsupervised models with the stepwise EM algorithm as described in §3.3.1, performing stochastic updates and making only one pass over the entire training set. The supervised models are trained on the validation set with five iterations of EM with a six-gram transition model. It should be noted that only a subset of the validation data is actually used in the supervised training: if the absolute value of the delay of the emission WFST paths is limited by n, we will not be able to compose a lattice for any data points where the input and output sequences differ in length by more than n (those constitute 22% of the Arabic validation data and 33% of the Russian validation data for n = 5 and n = 2 respectively). Since all of the Arabic data comes annotated, we can perform the same experiment using the full training set; surprisingly, the performance of the supervised model does not improve (see Table 3). The online transliteration decoding systems we use are translit.net for Russian and Yamli8 for Arabic. The Russian decoder is rule-based, but the information about what algorithm the Arabic decoder uses is not disclosed. We take the unsupervised neural machine translation (UNMT) model of Lample et al. (2018) as the neural baseline, using the implementation from the codebase of He et al. (2020), with one important difference: since the romanization process is known to be strictly character-level, we tokenize the text into characters rather than words. Implementation We use the OpenFst library (Allauzen et al., 2007) for the implementation of all the finite-state methods, in conjunction with the OpenGrm NGram library (Roark et al., 2012) for training the transition model specifically. We train the character-level n-gram models with Witten– Bell smoothing (Witten and Bell, 1991) of orders from two to six. Since the WFSTs encoding full higher-order models become very large (for example, the Russian six-gram model has 3M states and 13M arcs), we shrink all the models except for the bigram one using relative entropy pruning (Stolcke, 1998). However, since pruning decreases the quality of the language model, we observe most of the improvement in accuracy while training with the unpruned bigram model, and the subsequent order increases lead to relatively minor gains. Hyperparameter settings for training the transition and emission WFSTs are described in Appendix B. We optimize the delay limit for each language separately, obtaining best results with 2 for Russian and 5 for Arabic. 
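The transition-model side of this setup can be sketched with the OpenGrm NGram command-line tools driven from Python. The tool names are the real OpenGrm/OpenFst binaries; the flag spellings below follow the OpenGrm documentation as we recall it and should be checked against the installed version.

```python
import subprocess

def train_char_lm(corpus="lm_train.txt", order=6, theta=2e-5, out="lm.fst"):
    """Character-level n-gram WFST with Witten-Bell smoothing, then
    relative-entropy pruning (Stolcke, 1998) for orders above bigram."""
    def run(cmd):
        subprocess.run(cmd, check=True)

    # Compile each corpus line into an FST archive of character strings.
    run(["farcompilestrings", "--token_type=utf8", corpus, "chars.far"])
    # Count n-grams up to the requested order.
    run(["ngramcount", f"--order={order}", "chars.far", "counts.fst"])
    # Smooth the counts into a backoff language model.
    run(["ngrammake", "--method=witten_bell", "counts.fst", out])
    # Shrink higher-order models so the decoding lattice stays manageable.
    if order > 2:
        run(["ngramshrink", "--method=relative_entropy",
             f"--theta={theta}", out, out])
    return out
```

The pruning threshold here is the Appendix B value for higher-order models; in the paper the trigram model uses a slightly smaller threshold (10^−5), a distinction this simplified sketch does not make.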
To approximate the monotonic word-level alignment between the original and Latin sequences, we restrict the operations on the space character to only three: insertion, deletion, and substitution with itself. We apply the same to the punctuation marks (with specialized substitutions for certain Arabic symbols, such as ؟ → ?). This substantially reduces the number of arcs in the emission WFST, as punctuation marks make up over half of each of the alphabets.

8 https://www.yamli.com/

Evaluation We use character error rate (CER) as our evaluation metric. We compute CER as the ratio of the character-level edit distance between the predicted original-script sequence and the human annotation to the length of the annotation sequence in characters.

6 Results and analysis

Arabic Russian
Unsupervised: uniform prior 0.735 0.660
Unsupervised: phonetic prior 0.377 0.222
Unsupervised: visual prior — 0.372
Unsupervised: combined prior — 0.212
Supervised 0.225* 0.140
UNMT 0.791 0.242
Commercial 0.206 0.137
Table 3: Character error rate for different experimental setups. We compare unsupervised models with and without informative priors with the supervised model (trained on validation data) and a commercial online system. We do not have a visual prior for Arabic due to the Arabic–Latin visual character similarity not being captured by the restrictive confusables list that defines the prior (see §3.2). Each supervised and unsupervised experiment is performed with 5 random restarts. *The Arabic supervised experiment result is for the model trained on the validation set; training on the 5K training set yields 0.226.

The CER values for the models we compare are presented in Table 3. One trend we notice is that the error rate is lower for Russian than for Arabic in all the experiments, including the uniform prior setting, which suggests that decoding Arabizi is an inherently harder task. Some of the errors of the Arabic commercial system could be explained by the decoder predictions being plausible but not matching the CODA orthography of the reference.

Original Latin
r /r/ r (.93), p (.05)
b /b/ b (.95), 6 (.02)
v /v/ v (.87), 8 (.05), w (.05)
¤ /w, u:, o:/ w (.48), o (.33), u (.06)
 /x/ 5 (.76), k (.24)
Table 4: Emission probabilities learned by the supervised model (compare to Table 1). All substitutions with probability greater than 0.01 are shown.

Effect of priors The unsupervised models without an informative prior perform poorly for either language, which means that there is not enough signal in the language model alone under the training constraints we enforce. The algorithm could possibly have converged to a better local optimum if we did not use the online algorithm and prune both the language model and the emission model; however, that experiment would be infeasibly slow. Incorporating a phonetic prior reduces the error rate by 0.36 and 0.44 for Arabic and Russian respectively, which provides a substantial improvement while maintaining the efficiency advantage. The visual prior for Russian appears to be slightly less helpful, improving CER by 0.29. We attribute the better performance of the model with the phonetic prior to the sparsity and restrictiveness of the visually confusable symbol mappings, though it could also be due to the phonetic substitutions being more popular with users. Finally, combining the two priors for Russian leads to a slight additional improvement in accuracy over the phonetic prior only.
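For reference, the CER metric defined in the Evaluation paragraph above amounts to a character-level Levenshtein distance normalized by the annotation length; a standard implementation (ours, not the paper's code) is:

```python
def edit_distance(pred: str, gold: str) -> int:
    """Character-level Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(gold) + 1))
    for i, p in enumerate(pred, 1):
        curr = [i]
        for j, g in enumerate(gold, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (p != g)))  # substitution
        prev = curr
    return prev[-1]

def cer(pred: str, gold: str) -> float:
    """Edit distance normalized by the length of the human annotation."""
    return edit_distance(pred, gold) / len(gold)
```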
We additionally verify that the phonetic and visual similarity-based substitutions are prominent in informal romanization by inspecting the emission parameters learned by the supervised model with a uniform prior (Table 4). We observe that: (a) the highest-probability substitutions can be explained by either phonetic or visual similarity, and (b) the external mappings we use for our priors are indeed appropriate since the supervised model recovers the same mappings in the annotated data.

Error analysis Figure 4 shows some of the elements of the confusion matrices for the test predictions of the best-performing unsupervised models in both languages. We see that many of the frequent errors are caused by the model failing to disambiguate between two plausible decodings of a Latin character, either mapped to it through different types of similarity ( n /n/ [phonetic] → n ← [visual] p, n [visual] → h ← [phonetic] h /x/ ), or the same one (visual 8 → 8 ← v, phonetic £ /h/ → h ← /è/ ); such cases could be ambiguous for humans to decode as well. Other errors in Figure 4 illustrate the limitations of our parameterization and the resources we rely on. Our model does not allow one-to-many alignments, which leads to digraph interpretation errors such as x /s/ + £ /h/ → sh ← M /S/. Some artifacts of the resources our priors are based on also pollute the results: for example, the confusion between ь and х in Russian is explained by the Russian soft sign ь, which has no English phonetic equivalent, being arbitrarily mapped to the Latin x in one of the phonetic keyboard layouts.

Comparison to UNMT The unsupervised neural model trained on Russian performs only marginally worse than the unsupervised WFST model with an informative prior, demonstrating that with a sufficient amount of data the neural architecture is powerful enough to learn the character substitution rules without the need for the inductive bias. However, we cannot say the same about Arabic: with a smaller training set (see Table 2), the UNMT model is outperformed by the unsupervised WFST even without an informative prior. The main difference in the performance between the two models comes down to the trade-off between structure and power: although the neural architecture captures long-range dependencies better due to having a stronger language model, it does not provide an easy way of enforcing character-level constraints on the decoding process, which the WFST model encodes by design. As a result, we observe that while the UNMT model can recover whole words more successfully (for Russian it achieves a 45.8 BLEU score, while the best-performing unsupervised WFST is at 20.4), it also tends to arbitrarily insert or repeat words in the output, which leads to higher CER.

7 Conclusion
This paper tackles the problem of decoding non-standardized informal romanization used in social media into the original orthography without parallel text. We train a WFST noisy-channel model to decode romanized Egyptian Arabic and Russian to their original scripts with the stepwise EM algorithm combined with curriculum learning and demonstrate that while the unsupervised model by

[Figure 4 appears here; its Arabic fragment covers the characters ه س ع إ ش ح and its Russian fragment the characters н ь х п в.]
Figure 4: Fragments of the confusion matrix comparing test time predictions of the best-performing unsupervised models for Arabic (left) and Russian (right) to human annotations.
Each number represents the count of the corresponding substitution in the best alignment (edit distance path) between the predicted and gold sequences, summed over the test set. Rows stand for predictions, columns correspond to ground truth. itself performs poorly, introducing an informative prior that encodes the notion of phonetic or visual character similarity brings its performance substantially closer to that of the supervised model. The informative priors used in our experiments are constructed using sets of character mappings compiled for other purposes but using the same underlying principle (phonetic keyboard layouts and the Unicode confusable symbol list). While these mappings provide a convenient way to avoid formalizing the complex notions of the phonetic and visual similarity, they are restrictive and do not capture all the diverse aspects of similarity that idiosyncratic romanization uses, so designing more suitable priors via operationalizing the concept of character similarity could be a promising direction for future work. Another research avenue that could be explored is modeling specific user preferences: since each user likely favors a certain set of character substitutions, allowing user-specific parameters could improve decoding and be useful for authorship attribution. Acknowledgments This project is funded in part by the NSF under grants 1618044 and 1936155, and by the NEH under grant HAA256044-17. The authors thank John Wieting, Shruti Rijhwani, David Mortensen, Nikita Srivatsan, and Mahmoud Al Ismail for helpful discussion, Junxian He for help with the UNMT experiments, Stas Kashepava for data annotation, and the three anonymous reviewers for their valuable feedback. 8317 References Mohamed Al-Badrashiny, Ramy Eskander, Nizar Habash, and Owen Rambow. 2014. Automatic transliteration of Romanized dialectal Arabic. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 30– 38, Ann Arbor, Michigan. Association for Computational Linguistics. Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical language models. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 40–47, Sapporo, Japan. Association for Computational Linguistics. Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Ninth International Conference on Implementation and Application of Automata, (CIAA 2007), volume 4783 of Lecture Notes in Computer Science, pages 11–23. Springer. http://www.openfst.org. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, page 41–48, New York, NY, USA. Association for Computing Machinery. Ann Bies, Zhiyi Song, Mohamed Maamouri, Stephen Grimes, Haejoong Lee, Jonathan Wright, Stephanie Strassel, Nizar Habash, Ramy Eskander, and Owen Rambow. 2014. Transliteration of Arabizi into Arabic orthography: Developing a parallel annotated Arabizi-Arabic script SMS/chat corpus. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 93–103, Doha, Qatar. Association for Computational Linguistics. Aimilios Chalamandaris, Athanassios Protopapas, Pirros Tsiakoulis, and Spyros Raptis. 2006. All Greek to me! An automatic Greeklish to Greek transliteration system. 
In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA). Aimilios Chalamandaris, Pirros Tsiakoulis, Spyros Raptis, G Giannopoulos, and George Carayannis. 2004. Bypassing Greeklish! In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04), Lisbon, Portugal. European Language Resources Association (ELRA). Kareem Darwish. 2014. Arabizi detection and conversion to Arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 217–224, Doha, Qatar. Association for Computational Linguistics. Kareem Darwish, Walid Magdy, and Ahmed Mourad. 2012. Language processing for arabic microblog retrieval. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12, page 2427–2430, New York, NY, USA. Association for Computing Machinery. Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 1–8, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ramy Eskander, Mohamed Al-Badrashiny, Nizar Habash, and Owen Rambow. 2014. Foreign words and the automatic processing of Arabic social media text written in Roman script. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 1–12, Doha, Qatar. Association for Computational Linguistics. Evgeniy Gabrilovich and Alex Gontmakher. 2002. The homograph attack. Commun. ACM, 45(2):128. Nizar Habash, Mona Diab, and Owen Rambow. 2012. Conventional orthography for dialectal Arabic. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 711–718, Istanbul, Turkey. European Language Resources Association (ELRA). Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. In International Conference on Learning Representations. Lars Hellsten, Brian Roark, Prasoon Goyal, Cyril Allauzen, Françoise Beaufays, Tom Ouyang, Michael Riley, and David Rybach. 2017. Transliterated mobile keyboard input via weighted finite-state transducers. In Proceedings of the 13th International Conference on Finite State Methods and Natural Language Processing (FSMNLP 2017), pages 10– 19, Umeå, Sweden. Association for Computational Linguistics. Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599–612. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and YLan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representations. 8318 Percy Liang and Dan Klein. 2009. Online EM for unsupervised models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 611–619, Boulder, Colorado. Association for Computational Linguistics. Olga N Lyashevskaya and Sergey A Sharov. 2009. 
Frequency dictionary of modern Russian based on the Russian National Corpus [Chastotnyy slovar’ sovremennogo russkogo jazyka (na materiale Nacional’nogo korpusa russkogo jazyka)]. Azbukovnik, Moscow. Martin Paulsen. 2014. Translit: Computer-mediated digraphia on the Runet. Digital Russia: The Language, Culture and Politics of New Media Communication. Nima Pourdamghani and Kevin Knight. 2017. Deciphering related languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2513–2518, Copenhagen, Denmark. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2009. Learning phoneme mappings for transliteration without parallel data. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 37–45, Boulder, Colorado. Association for Computational Linguistics. Brian Roark, Richard Sproat, Cyril Allauzen, Michael Riley, Jeffrey Sorensen, and Terry Tai. 2012. The OpenGrm open-source finite-state grammar software libraries. In Proceedings of the ACL 2012 System Demonstrations, pages 61–66, Jeju Island, Korea. Association for Computational Linguistics. Tatiana Shavrina and Olga Shapovalova. 2017. To the methodology of corpus construction for machine learning: Taiga syntax tree corpus and parser. In Proc. CORPORA 2017 International Conference, pages 78–84, St. Petersburg. Zhiyi Song, Stephanie Strassel, Haejoong Lee, Kevin Walker, Jonathan Wright, Jennifer Garland, Dana Fore, Brian Gainor, Preston Cabe, Thomas Thomas, Brendan Callahan, and Ann Sawyer. 2014. Collecting natural SMS and chat conversations in multiple languages: The BOLT phase 2 corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 1699–1704, Reykjavik, Iceland. European Language Resources Association (ELRA). Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270––274. Ian H Witten and Timothy C Bell. 1991. The zerofrequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE transactions on information theory, 37(4):1085– 1094. Lawrence Wolf-Sonkin, Vlad Schogol, Brian Roark, and Michael Riley. 2019. Latin script keyboards for South Asian languages with finite-state normalization. In Proceedings of the 14th International Conference on Finite-State Methods and Natural Language Processing, pages 108–117, Dresden, Germany. Association for Computational Linguistics. 8319 A Data collection and annotation Preprocessing We generate a set of 270 candidate transliterations of 26 Russian words to use as queries. However, many of the produced combinations are highly unlikely and yield no results, and some happen to share the spelling with words in other languages (most often other Slavic languages that use Latin script, such as Polish). We scrape public posts on user and group pages, retaining only the information about which posts were authored by the same user, and manually go over the collected set to filter out coincidental results. We additionally preprocess the collected data by normalizing punctuation and removing non-ASCII characters and emoji. We also replace all substrings of the same character repeated more than twice to only two repetitions, as suggested by Darwish et al. 
(2012), since these repetitions are more likely to be a written expression of emotion than to be explained by the underlying Russian sentence. The same preprocessing steps are applied to the original script side of the data (the annotations and the monolingual language model training corpus) as well. Annotation guidelines While transliterating, annotators perform orthographic normalization wherever possible, correcting typos and errors in word boundaries; grammatical errors are not corrected. Tokens that do not require transliteration (foreign words, emoticons) or ones that annotator fails to identify (proper names, badly misspelled words) are removed from the romanized sentence and not transliterated. Although it means that some of the test set sentences will not exactly represent the original romanized sequence, it will help us ensure that we are only testing our model’s ability to transliterate rather than make word-byword normalization decisions. In addition, 200 of the validation sequences are dually annotated to measure the inter-annotator agreement. We evaluate it using character error rate (CER; edit distance between the two sequences normalized by the length of the reference sequence), the same metric we use to evaluate the model’s performance. In this case, since neither of the annotations is the ground truth, we compute CER in both directions and average. Despite the discrepancies caused by the annotators deleting unknown words at their discretion, average CER is only 0.014, which indicates a very high level of agreement. B Hyperparameter settings WFST model The Witten–Bell smoothing parameter for the language model is set to 10, and the relative entropy pruning threshold is 10−5 for the trigram model and 2 · 10−5 for higher-order models. Unsupervised training is performed in batches of size 10 and the language model order is increased every 100 batches. While training with the bigram model, we disallow insertions and freeze all the deletion probabilities at e−100. The EM stepsize decay rate is β = 0.9. The emission arc pruning threshold is gradually decreased from 5 to 4.5 (in the negative log probability space). We perform multiple random restarts for each experiment, initializing the emission distribution to uniform plus random noise. UNMT baseline Our unsupervised neural baseline uses a single-layer LSTM with hidden state size 512 for both the encoder and the decoder. The embedding dimension is set to 128. For the denoising autoencoding loss, we adopt the default noise model and hyperparameters as described by Lample et al. (2018). The autoencoding loss is annealed over the first 3 epochs. We tune the maximum training sequence length (controlling how much training data is used) and the maximum allowed decoding length by optimizing the validation set CER. In our case, the maximum output length is important because the evaluation metric penalizes the discrepancy in length between the prediction and the reference; we observe the best results when setting it to 40 characters for Arabic and 180 for Russian. At training time, we filter out sequences longer than 100 characters for either language, which constitute 1% of the available Arabic training data (both the Arabic-only LM training set and the Latin-only training set combined) but almost 70% of the Russian data. 
Surprisingly, the Russian model trained on the remaining 30% achieves better results than the one trained on the full data; we hypothesize that the improvement comes from having a more balanced training set, since the full data is heavily skewed towards the Cyrillic side (LM training set) otherwise (see Table 2).
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8320–8331 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8320 Active Learning for Coreference Resolution using Discrete Annotation Belinda Z. Li†∗ Gabriel Stanovsky♠♦ Luke Zettlemoyer♠† ♠University of Washington ♦Allen Institute for AI †Facebook [email protected] {gabis,lsz}@cs.washington.edu Abstract We improve upon pairwise annotation for active learning in coreference resolution, by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent. This simple modification, when combined with a novel mention clustering algorithm for selecting which examples to label, is much more efficient in terms of the performance obtained per annotation budget. In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour. Future work can use our annotation protocol to effectively develop coreference models for new domains. Our code is publicly available.1 1 Introduction Coreference resolution is the task of resolving anaphoric expressions to their antecedents (see Figure 1). It is often required in downstream applications such as question answering (Dasigi et al., 2019) or machine translation (Stanovsky et al., 2019). Exhaustively annotating coreference is an expensive process as it requires tracking coreference chains across long passages of text. In news stories, for example, important entities may be referenced many paragraphs after their introduction. Active learning is a technique which aims to reduce costs by annotating samples which will be most beneficial for the learning process, rather than fully labeling a large fixed training set. Active learning consists of two components: (1) a taskspecific learning algorithm, and (2) an iterative sample selection algorithm, which examines the performance of the model trained at the previous iteration and selects samples to add to the annotated ∗*Work done while at the University of Washington. 1https://github.com/belindal/ discrete-active-learning-coref A volcano in Mexico, known to locals as Po-po , just started spewing molten rock. Are the two mentions coreferent? No What is the first appearance of the entity that the yellowhighlighted text refers to? A volcano in Mexico Figure 1: Discrete annotation. The annotator is shown the document, a span (yellow), and the span’s predicted antecedent (blue). In case the answer to the coreference question is negative (i.e., the spans are not coreferring), we present a follow-up question (“what is the first appearance of the entity?”), providing additional cost-effective signal. Our annotation interface can be seen in Figure 5 in the Appendix. training set. This method has proven successful for various tasks in low-resource domains (Garrette and Baldridge, 2013; Kholghi et al., 2015; Syed et al., 2016, 2017). Sachan et al. (2015) showed that active learning can be employed for the coreference resolution task. They used gold data to simulate pairwise human-annotations, where two entity mentions are annotated as either coreferring or not (see first question in Figure 1). In this paper, we propose two improvements to active learning for coreference resolution. 
First, we introduce the notion of discrete annotation (Section 3), which augments pairwise annotation by introducing a simple additional question: if the user deems the two mentions non-coreferring, they are asked to mark the first occurrence of one of the mentions (see second question in Figure 1). We show that this simple addition has several positive implications. The feedback is relatively easy for annotators to give, and provides meaningful signal which dramatically reduces the number of annotations needed to fully label a document. Second, we introduce mention clustering (Section 4). When selecting the next mention to label, we take into account aggregate model predictions 8321 for all antecedents which belong to the same cluster. This avoids repeated labeling that would come with separately verifying every mention pair within the same cluster, as done in previous methods. We conduct experiments across several sample selection algorithms using existing gold data for user labels and show that both of our contributions significantly improve performance on the CoNLL2012 dataset (Pradhan et al., 2012). Overall, our active learning method presents a superior alternative to pairwise annotation for coreference resolution, achieving better performing models for a given annotation budget. 2 Background Our work relies on two main components: a coreference resolution model and a sample selection algorithm. Coreference resolution model We use the span ranking model introduced by Lee et al. (2017), and later implemented in AllenNLP framework (Gardner et al., 2018). This model computes span embeddings for all possible spans i in a document, and uses them to compute a probability distribution P(y = ant(i)) over the set of all candidate antecedents Y(i) = {K previous mentions in the document} ∪{ϵ}, where ϵ is a dummy antecedent signifying that span i has no antecedent. This model does not require additional resources, such as syntactic dependencies or named entity recognition, and is thus well-suited for active learning scenarios for low-resource domains. Sample selection algorithm Previous approaches for the annotation of coreference resolution have used mostly pairwise selection, where pairs of mentions are shown to a human annotator who marks whether they are co-referring (Gasperin, 2009; Laws et al., 2012; Zhao and Ng, 2014; Sachan et al., 2015). To incorporate these binary annotations into their clustering coreference model, Sachan et al. (2015) introduced the notion of must-link and cannot-link penalties, which we describe and extend in Section 4. 3 Discrete Annotation In discrete annotation, as exemplified in Figure 1, we present the annotator with a document where the least certain span i (“Po-po”, in the example) and i’s model-predicted antecedent, A(i) (“locals”), are highlighted. Similarly to pairwise annotation, annotators are first asked whether i and A(i) are coreferent. If they answer positively, we move on to the next sample. Otherwise, we deviate from pairwise sampling and ask the annotator to mark the antecedent for i (“A volcano in Mexico”) as the follow-up question.2 The annotator can abstain from answering the follow-up question in case i is not a valid mention or if it does not have an antecedent in the document. See Figure 5 in the Appendix for more example annotations. In Section 5, we show that discrete annotation is superior to the classic pairwise annotation in several aspects. 
First, it makes better use of human annotation time, as often an annotator needs to resolve the antecedent of the presented mention to answer the first question anyway: for example, identifying that "Po-po" refers to the volcano, and not the locals. Second, we find that discrete annotation is a better fit for mention ranking models (Lee et al., 2017), which assign the most-likely antecedent to each mention, just as an annotator does in discrete annotation.

2 For consistency, we ask annotators to select the first antecedent of i in the document.

4 Mention Clustering
We experiment with three selection techniques by applying popular active learning selectors like entropy or query-by-committee (Settles, 2010) to clusters of spans. Because our model outputs antecedent probabilities and predictions, we would like to aggregate these outputs so that we have only one probability per mention cluster rather than one per antecedent. We motivate this with an example: suppose span i's top two most likely antecedents are y1 and y2. In scenario 1, y1 and y2 are predicted to be clustered together, and in scenario 2, they are predicted to be clustered apart. Span i should have a "higher certainty" in scenario 1 (and thus be less likely to be picked by active learning), because its two most likely antecedents both imply the same clustering, whereas in scenario 2, picking y1 vs. y2 results in a different downstream clustering. Thus, rather than simply using the raw probability that i refers to a particular antecedent, we use the probability that i belongs to a certain cluster. This implies modelling y1 and y2 "jointly" in scenario 1, and separately in scenario 2.

Formally, we compute the probability that a span i belongs in a cluster C by summing P(ant(i) = y) for all y that belong in the cluster C, since i having an antecedent in a cluster necessarily implies that i is also in that cluster. This allows us to convert the predicted antecedent probabilities to in-cluster probabilities:

P(i ∈ C) = Σ_{y ∈ C ∩ Y(i)} P(ant(i) = y)    (1)

Similarly, for query-by-committee, we aggregate predictions such that we have one vote per cluster rather than one vote per antecedent:

V(i ∈ C) = Σ_{y ∈ C ∩ Y(i)} V(A(i) = y)    (2)

where V(A(i) = y) ∈ {0, 1, · · · , M} refers to the number of models that voted y to be the antecedent of i. The cluster information (y ∈ C ∩ Y(i)) we use in Equations 1 and 2 is computed from a combination of model-predicted labels and labels queried through active learning. Antecedents which were not predicted to be in clusters are treated as singleton clusters.

Additionally, to respect user annotations during the selection process, we must keep track of all prior annotations. To do this, we use the concept of must-link (ML; if two mentions are judged coreferent) and cannot-link (CL; if two mentions are judged non-coreferent) relations between mentions introduced by Sachan et al. (2015), and adapt it for our purposes. Specifically, in our discrete setting, we build the links as follows: if the user deems the pair coreferent, it is added to ML. Otherwise, it is added to CL, while the user-corrected pair (from the second question) is always added to ML. In addition, we use these links to guide how we select the next mention to query. For example, if a CL relation exists between spans m1 and m2, we will be less likely to query for m1, since we are slightly more certain about what m1's antecedent should be (not m2).
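A minimal sketch of this aggregation (Equations 1 and 2), assuming the per-antecedent probabilities and committee votes are already available as dictionaries; the data layout is our own illustration rather than the paper's implementation:

```python
from collections import defaultdict
from typing import Dict, Hashable

def cluster_probabilities(
    ant_probs: Dict[Hashable, float],      # P(ant(i) = y) for each y in Y(i)
    cluster_of: Dict[Hashable, Hashable],  # antecedent -> predicted cluster id
) -> Dict[Hashable, float]:
    """Aggregate antecedent probabilities into P(i in C), as in Equation 1."""
    probs = defaultdict(float)
    for y, p in ant_probs.items():
        probs[cluster_of.get(y, y)] += p   # unclustered y acts as a singleton
    return dict(probs)

def cluster_votes(
    ant_votes: Dict[Hashable, int],        # V(A(i) = y), number of committee models
    cluster_of: Dict[Hashable, Hashable],
) -> Dict[Hashable, int]:
    """Aggregate committee votes into V(i in C), as in Equation 2."""
    votes = defaultdict(int)
    for y, v in ant_votes.items():
        votes[cluster_of.get(y, y)] += v
    return dict(votes)
```

Antecedents without a predicted cluster simply map to themselves in cluster_of, which reproduces the singleton-cluster treatment described above.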
Formally, we revise probabilities and votes P(i ∈C) and V (i ∈C) in accordance to our link relations, which affects the selector uncertainty scores.3 Finally, following (Sachan et al., 2015), we impose transitivity constraints, which allow us to model links beyond what has been explicitly 3See Section A.2 in the appendix for more details. pointed out during annotation: ML(mi, mj) ∧ML(mj, mk) →ML(mi, mk) (3) CL(mi, mj) ∧ML(mi, mk) →CL(mj, mk) (4) However, recomputing these closures after each active learning iteration can be extremely inefficient. Instead, we build up the closure incrementally by adding only the minimum number of necessary links to maintain the closure every time a new link is added. We experiment with the following clustered selection techniques: Clustered entropy We compute entropy over cluster probabilities and select the mention with the highest clustered entropy: E(i) = − X C∈all clusters P(i ∈C) · log P(i ∈C) (5) Where P(i ∈C) is defined as in Equation 1. Clustered query-by-committee We train M models (with different random seeds) and select the mention with the highest cluster vote entropy: VE(i) = − X C∈all clusters) V (i ∈C) M · log V (i ∈C) M (6) Using votes counted over clusters, as defined in Equation 2. Least coreferent clustered mentions / Most coreferent unclustered mentions (LCC/MCU) We aim to select a subset of spans for which the model was least confident in its prediction. For each span i which was assigned a cluster Ci, we compute a score sC(i) = P(i ∈Ci), and choose n spans with the smallest sC(i). For each singleton j, we give an “unclustered” score sU(i) = maxC∈all clusters P(j ∈C) and choose m spans with the largest sU(i). P(i ∈Ci) and P(j ∈C) are computed with Equation 1. 5 Evaluation We compare discrete versus pairwise annotation using the English CoNLL-2012 coreference dataset (Pradhan et al., 2012). Following Sachan et al. (2015), we conduct experiments where user judgments are simulated from gold labels. 8323 Figure 2: Comparing various selectors for discrete versus pairwise annotation (dashed orange line). Active learning Set # labels/doc iteration # docs # ?s 20 1st (retrained 0x) 5 15 A 20 7th (retrained 6x) 5 15 200 2nd (retrained 1x) 5 15 200 8th (retrained 7x) 5 15 20 2nd (retrained 1x) 5 15 B 20 8th (retrained 7x) 5 15 200 1st (retrained 0x) 5 15 200 7th (retrained 6x) 5 15 Table 1: Timing experiments sampling. For each of the 2 datasets, we collected 60 total active learning questions from 20 documents. We collected 5 documents and 15 questions for each of the 4 categories: trained with many/few labels per document, and early/late in active learning process. The 15 questions were sampled randomly from within an iteration. Annotation time estimation To compare annotation times between pairwise and discrete questions, we collected eight 30-minute sessions from 7 in-house annotators with background in NLP. Annotators were asked to answer as many instances as they could during those 30 minutes. We additionally asked 1 annotator to annotate only discrete questions for 30 minutes. To be as representative as possible, the active learning queries for these experiments were sampled from various stages of active learning (see Table 1). On average, an annotator completed about 67 questions in a single session, half of which were answered negatively, requiring the additional discrete question. Overall, these estimates rely on 826 annotated answers. Our annotation interface is publicly available,4 see examples in Figure 5 in the Appendix. 
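Concretely, the selectors of §4 score each candidate mention by the entropy of its cluster distribution (Equation 5) or by the committee's cluster vote entropy (Equation 6); a small sketch, reusing the aggregation helpers above (again our own illustration, not the released code):

```python
import math
from typing import Callable, Dict, Hashable, Iterable

def clustered_entropy(cluster_probs: Dict[Hashable, float]) -> float:
    """E(i) = -sum_C P(i in C) * log P(i in C)  (Equation 5)."""
    return -sum(p * math.log(p) for p in cluster_probs.values() if p > 0)

def vote_entropy(cluster_votes: Dict[Hashable, int], num_models: int) -> float:
    """VE(i) = -sum_C (V(i in C)/M) * log(V(i in C)/M)  (Equation 6)."""
    total = 0.0
    for v in cluster_votes.values():
        if v > 0:
            frac = v / num_models
            total -= frac * math.log(frac)
    return total

def pick_next_mention(candidates: Iterable, score_fn: Callable) -> Hashable:
    """Query the mention whose cluster assignment the model is least sure about."""
    return max(candidates, key=score_fn)
```

For LCC/MCU the same cluster probabilities are reused: clustered spans are ranked by P(i ∈ Ci) in ascending order and singletons by their maximum cluster probability in descending order.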
Timing results are shown in Table 2. Answering 4https://belindal.github.io/timing_ experiments Avg. Time per ? Initial question 15.96s Follow-up question 15.57s ONLY Follow-up questions 28.01s Table 2: Average annotation time for the initial pairwise question, the discrete followup question, and the discrete question on its own. the discrete question after the initial pairwise question takes about the same time as answering the first question (about 16s). Furthermore, answering only discrete questions took 28.01s per question, which confirmed that having an initial pairwise question indeed saves annotator time if answered positively. In the following experiments, we use these measurements to calibrate pairwise and discrete followup questions when computing total annotation times. Baselines We implement a baseline for pairwise annotation with entropy selector. We also implement two discrete annotation baselines with random selection. The partially-labelled baseline follows the standard active learning training loop, but selects the next mention to label at random. The fully-labelled baseline creates a subset of the training data by taking as input an annotation time t and selecting at random a set of documents that the user can fully label in t hours using ONLY discrete annotation. By comparing the fully-labelled baseline against our active learning results, we can determine whether active learning is effective over labelling documents exhaustively . Hyperparameters We use the model hyperparameters from the AllenNLP implementation of Lee et al. (2017). We train up to 20 epochs with a patience of 2 before adding labels. After all documents have been added, we retrain from scratch. We use a query-by-committee of M = 3 models, due to memory constraints. For LCC/MCU, given L annotations per document, we split the annotations equally between clusters and singletons. Results Figure 2 plots the performance of discrete annotation with the various selectors from Section 4, against the performance of pairwise annotation, calibrated according to our timing experiments. In all figures, we report MUC, B3, and CEAFe as an averaged F1 score. The three non-random active learning frameworks outperform the fully-labelled baseline, show8324 Figure 3: Mention detection accuracy (in documentmicro F1) for pairwise versus discrete selection per human annotation time. ing that active learning is more effective for coreference resolution when annotation budget is limited. Most notably, Figure 2 shows that every nonrandom discrete selection protocol outperforms pairwise annotation. Where the gap in performance is the largest (> 15 minutes per document), we consistently improve by ∼4% absolute F1 over pairwise selection. 6 Analysis A major reason discrete annotation outperforms the pairwise baseline is that the number of pairwise annotations needed to fully label a document is much larger than the number of discrete annotations. In an average development document with 201 candidates per mention, the number of pairwise queries needed to fully label a document is 15, 050, while the maximum number of discrete queries is only 201 (i.e., asking for the antecedent of every mention). Thus, the average document can be fully annotated via discrete annotation in only 2.6% of the time it takes to fully label it with pairwise annotation, suggesting that our framework is also a viable exhaustive annotation scheme. 
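The 15,050 figure follows from the counting argument spelled out in Appendix A.3: with m = 201 top spans and at most K = 100 candidate antecedents per span, the first K spans contribute K(K − 1)/2 = 4,950 pairs and the remaining m − K spans contribute (m − K) · K = 10,100, giving 15,050 pairwise questions versus at most m = 201 discrete ones. A two-line check:

```python
m, K = 201, 100                            # top spans and max antecedents per span
pairwise = K * (K - 1) // 2 + (m - K) * K  # 4950 + 10100
print(pairwise, m)                         # 15050 201
```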
Further analysis shows that the improvement in discrete selection stems in part from better use of annotation time for mention detection accuracy (Figure 3) and pronoun resolution (Figure 4), in which we measure performance only on clusters with pronouns, as identified automatically by the spaCy tagger (Honnibal and Montani, 2017) . Finally, Table 3 shows ablations on our discrete annotation framework, showing the contribution of each component of our paradigm. Figure 4: Pronoun resolution accuracy (average F1) for pairwise versus discrete selection per human annotation time. F1 score Discrete annotation 57.08 −clustered probabilities 56.49 −incremental link 56.98 closures Pairwise annotation 54.27 Table 3: Ablations over the different model elements, at a single point (∼315 annotation hours). Entropy selector was used for all experiments. 7 Discussion and Conclusion We presented discrete annotation, an attractive alternative to pairwise annotation in active learning of coreference resolution in low-resource domains. By adding a simple question to the annotation interface, we obtained significantly better models per human-annotation hour. In addition, we introduced a clustering technique which further optimizes sample selection during the annotation process. More broadly, our work suggests that improvements in annotation interfaces can elicit responses which are more efficient in terms of the obtained performance versus the invested annotation time. Acknowledgements We would like to thank Christopher Clark, Terra Blevins, and the anonymous reviewers for their helpful feedback, and Aaron Jaech, Mason Kamb, Madian Khabsa, Kaushal Mangipudi, Nayeon Lee, and Anisha Uppugonduri for their participation in our timing experiments. 8325 References Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. CoRR, abs/1803.07640. Dan Garrette and Jason Baldridge. 2013. Learning a part-of-speech tagger from two hours of annotation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 138–147, Atlanta, Georgia. Association for Computational Linguistics. Caroline Gasperin. 2009. Active learning for anaphora resolution. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, HLT ’09, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear. Mahnoosh Kholghi, Laurianne Sitbon, Guido Zuccon, and Anthony Nguyen. 2015. Active learning: a step towards automating medical concept extraction. Journal of the American Medical Informatics Association, 23(2):289–296. Florian Laws, Florian Heimerl, and Hinrich Sch¨utze. 2012. Active learning for coreference resolution. 
In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508–512, Montr´eal, Canada. Association for Computational Linguistics. Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. ArXiv, abs/1707.07045. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1– 40. Association for Computational Linguistics. Mrinmaya Sachan, Eduard Hovy, and Eric P. Xing. 2015. An active learning approach to coreference resolution. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, pages 1312–1318. AAAI Press. Burr Settles. 2010. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11. Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In ACL, page (to appear), Florence, Italy. Association for Computational Linguistics. A. R. Syed, A. Rosenberg, and E. Kislal. 2016. Supervised and unsupervised active learning for automatic speech recognition of low-resource languages. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5320–5324. A. R. Syed, A. Rosenberg, and M. Mandel. 2017. Active learning for low-resource speech recognition: Impact of selection size and language modeling data. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5315–5319. Shanheng Zhao and Hwee Tou Ng. 2014. Domain adaptation with active learning for coreference resolution. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi), pages 21–29, Gothenburg, Sweden. Association for Computational Linguistics. 8326 A Appendix A.1 Timing Experiment Details and Computations. In order to properly calibrate the results from discrete and pairwise querying, we conducted experiments (eight 30-minute sessions) to time how long annotators take to answer discrete and pairwise questions. See Figure 5 for the interface we designed for our experiments. The questions we ask for the experiment are all sampled from real queries from full runs of our active learning simulations. To obtain representative times, we sampled a diverse selection of active learning questions–at various stages of active learning (first iteration before retraining vs. after retraining n times) and various numbers of annotation per document (20 vs. 200). For each document, we randomly selected between 1-5 questions (of the total 20 or 200) to ask the annotator. Full details on how we sampled our queries can be found in Table 1. Note that we divided our samples into two datasets. We ran four 30-minute sessions with Dataset A before Dataset B and four 30-minute sessions with Dataset B before Dataset A–for a total of eight 30-minute sessions across 7 annotators (1 annotator completed a 1-hour session). Since pairwise annotation is the same as answering only the initial question under the discrete setting, we run a single discrete experiment for each annotation session and use the time taken to answer an initial question as a proxy for pairwise annotation time. Our results show that answering the initial question took an average of 15.96s whereas answering the follow-up question took 15.57s. 
Thus, we derive the following formulas to compute the time it takes for pairwise and discrete annotation: t = 15.96p (7) t = 15.96dc + 15.57dnc (8) where p = # of pairwise instances. dc, dnc = # of discrete instances for which the initial pair was “coreferent” (dc) and “not coreferent” (dnc), respectively. We also compute the number of pairwise examples p we can query in the same time it takes to query dc + dnc discrete examples: 15.96p = 15.96dc + 15.57dnc p = dc + 0.976dnc (9) Moreover, we additionally conduct a single 30minute experiment to determine how long it takes to answer only discrete questions (without the initial pairwise step). We find that it takes 28.01s per question under the only-discrete setting. This is longer than the time it takes to answer a pairwise question, thus confirming that having an initial pairwise question indeed saves time if the pair is coreferent. Moreover, this also shows that answering the initial pairwise question significantly helps with answering the follow-up discrete question. A.2 Additional Model Adaptations Adapting Link Relations for our Model We use must-link and cannot-link relations between mentions to guide our active learning selector. We revise probabilities and model outputs (from which the model computes uncertainty scores for entropy, QBC, and LCC/MCU) in accordance to the following rules: 1. Clustered entropy. For every CL(a, b) relationship, we set P(ant(a) = b) = 0 and re-normalize probabilities of all other candidate antecedents. This decreases the probability that the active learning selector chooses a. Moreover, for every ML(a, b) relationship, we set P(ant(a) = b) = 1 and P(ant(a) = c) = 0 for all c ̸= b. If there are multiple ML relationships involving a, we choose only one of a’s antecedent to set to 1 (to maintain the integrity of the probability distribution). This guarantees that the active learning selector will never select a, as any ML link out of a means we have already queried for a. 2. Clustered query-by-committee. To ensure we do not choose a mention we have already queried for, after each user judgment, for every ML(a, b) relation, we set V (A(a) = b) = M, and V (A(a) = c) = 0 for all other c ̸= b. Moreover, for every CL(a, b) relation, we set V (A(a) = b) = 0, which decreases the vote entropy of a, making it less likely for the selector to choose a. 3. LCC/MCU. We revise the probabilities in the same way as in clustered entropy and add the constraint that, when choosing MCU spans j, we disregard those that already have probability 1 (signifying that we have already queried for them). Incremental Closures Algorithm We introduce an algorithm to compute link closures incrementally. Instead of re-computing and re-adding the 8327 Figure 5: Timing experiments interface. Top: The initial pairwise question. Bottom: The user is presented with the discrete question when they click “No”. They are asked to select the appropriate tokens in the text representing the first occurrence of the yellow entity in the text. entire set of closures (based on a set of all prior human annotations that we keep track of) each time we query for a new mention, we add the minimum set of necessary links. See Algorithm 1. To determine how much time our incremental closure algorithm saves over recomputing closures from scratch, we simulated annotations on a single document with 1600 mentions, and recorded how long it took to re-compute the closure after each annotation. 
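For intuition, the incremental bookkeeping can be realized as sketched below. Unlike Algorithm 1, which materializes every implied pairwise link, this sketch keeps the closure implicit by holding ML clusters in a union-find structure and storing CL constraints between cluster representatives; it is an alternative realization of the same idea, not the authors' exact procedure.

```python
class LinkStore:
    """Maintain must-link / cannot-link consistency incrementally."""

    def __init__(self):
        self.parent = {}   # union-find parent pointers over mentions
        self.cl = set()    # CL constraints as frozensets of two cluster reps

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def add_ml(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return
        assert frozenset((ra, rb)) not in self.cl, "contradicts a CL link"
        self.parent[rb] = ra                               # merge ML clusters
        # Re-point CL constraints that mentioned the absorbed representative.
        self.cl = {frozenset(self._find(x) for x in pair) for pair in self.cl}

    def add_cl(self, a, b):
        ra, rb = self._find(a), self._find(b)
        assert ra != rb, "contradicts an ML link"
        self.cl.add(frozenset((ra, rb)))

    def cannot_link(self, a, b):
        return frozenset((self._find(a), self._find(b))) in self.cl
```

The transitivity constraints of Equations 3 and 4 then hold by construction: merging two ML clusters automatically extends their CL constraints to all of their members.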
Our experiments show that recomputing from scratch takes progressively longer as more labels get added: at 1600 labels, our incremental algorithm is 556 times faster than recomputing from scratch (1630ms vs. 2.93ms). Figure 6 plots the runtime of our incremental closure algorithm (“incremental closure”) against the run-time of recomputing closures from scratch (“closure”) using Equations 3 and 4. In the latter case, we keep track of the set of user-added edges which we update after each annotation, and re-compute the closures from that set. A.3 Additional Analysis Computing the time to fully-label a document under discrete and pairwise annotation. First, we compute the maximum number of pairwise questions we can ask. We consider the setup of Lee et al. (2017)’s model. This model considers only spans with highest mention scores (the “top spans”), and only considers at most K antecedents per top span. Thus, for a document with m top spans, we can ask up to K(K −1) 2 + (m −K)K (10) pairwise questions. The first factor K(K−1) 2 comes from considering the first K spans in the document. For each of these spans i = 1 · · · K, we can ask about the first i −1 spans. The second factor (m −K)K comes from considering the spans after the K-th span. For each of these m −K spans in the document, we can only consider up to K antecedents. Using statistics for the average document (m = 201) and the standard hyper-parameter settings (K = 100), we plug into Equation 10 to 8328 Figure 6: Under each closure algorithm, the time to compute the closure after the next annotation is added, as # of existing annotations increases. get 15, 050 overall pairwise questions needed to fully label a document (in worst-case). Meanwhile, the maximum number of discrete questions we can ask is only 201 (i.e., asking for the antecedent of every mention). Using timing Equations 7 and 8, we compute that it takes at most 6337.53s to answer 201 discrete questions in the worst-case scenario, and 240198s to answer 15050 pairwise questions. Thus, in the worst-case scenario for both discrete and pairwise selection, discrete selection will take only 2.64% of the time it takes pairwise selection to fully label a document. Quantifying “Information Gain” from Discrete and Pairwise Annotation. Let DU be the set of training documents we are annotating for in a given round of active learning. To better quantify how much information discrete and pairwise annotation can supply in same amount of time, we define ∆F1 as the change in the F1 score on DU, before and after model predictions are supplemented with user annotation. Figure 7 shows average ∆F1 as annotation time increases for discrete and pairwise annotation. Across the 10 annotation times we recorded, discrete annotation results in an average ∆F1 that more than twice that of pairwise, in the same annotation time. A.4 Hyperparameters Model. We preserve the hyperparameters from the AllenNLP implementation of Lee et al. (2017)’s model. The AllenNLP implementation mostly maintains the original hyperparameters, except it sets the maximum number of antecedents considered to K = 100, and excludes speaker features Figure 7: Comparing F1 score improvement on DU for discrete vs. pairwise annotation. and variational dropout, due to machine memory limitations. Training. We use a 700/2102 fullylabelled/unlabelled initial split of the training data, and actively label 280 documents at a time. We train to convergence each round. 
Before all documents have been added, we train up to 20 epochs with a patience of 2 before we add more training documents. After all documents have been added, we retrain from scratch and use the original training hyperparameters from Lee et al. (2017). Selectors. For query-by-committee, we use a committee of M = 3 models. We were not able to experiment with more due to memory constraints. For LCC/MCU, given L annotations per document, we allocate n annotations to least-coreferent clustered mentions and the remaining m to mostcoreferent unclustered mentions. We use n = min (L/2, number of clustered spans), and m = min(L −n, number of un-clustered spans). A.5 Active Learning Training Setup Full Details In our active learning setup, we begin by training our model on a 700-document subset of the full training set. We discard the labels of the remaining 2102 documents. In each round of active learning, we choose 280 unlabelled documents, and query up to Q annotations per document. We then add these documents to the labelled set and continue training our model on this set (now with new documents). After all documents have been labelled, we retrain our model on the full document set from scratch, resetting all model and trainer parameters. 8329 In Algorithm 2, we show our main training loop for active learning using discrete selection. This is the training loop we use for our clustered entropy and LCC/MCU selectors, and our partially-labelled random baseline. In Algorithm 3, we modify that loop for the clustered query-by-committee selector. In Algorithm 1, we show our incremental closures algorithm, which builds up the transitive closure incrementally by adding only the minimum number of necessary links to maintain the closure each time a new link is added. 8330 Algorithm 1: Incremental Link Closures Algorithm Let (a, b) = link pair being added, A = a’s old cluster before the pair is added, B = b’s old cluster before the pair is added, A = set of element a has a CL relationship to before the pair is added, B = set of elements b has a CL relationship to before the pair is added. 1. If pair (a, b) was added to must-link, both must-link and cannot-link needs to be updated. First, resolve the MLs by adding a ML relationship between every element in A and every element in B: ∀a′, b′ (ML(a, a′) ∧ML(b, b′)) →(ML(a, b′) ∧ML(a′, b) ∧ML(a′, b′)) Next, resolve the CLs by adding a CL relationship between every element of A and B, and every element of B and A: ∀a′,ˆb (ML(a, a′) ∧CL(b,ˆb)) →(CL(a,ˆb) ∧CL(a′,ˆb)) ∀b′, ˆa (ML(b, b′) ∧CL(a, ˆa)) →(CL(b, ˆa) ∧CL(b′, ˆa)) 2. If pair (a, b) was added to cannot-link, only cannot-link needs to be updated. 
Add a CL relationship between every element of A and every element of B: ∀a′, b′ (ML(a, a′) ∧ML(b, b′)) →(CL(a, b′) ∧CL(a′, b) ∧CL(a′, b′)) Algorithm 2: Training loop for active learning DF = {fully-labelled docs}, DU = {unlabelled docs}, DA = {docs labelled through active learning}, M = model, ML = must-link pairs, CL = cannot-link pairs; Init: DF = {first 700 docs}, DU = {remaining docs}, DA = ∅, ML = CL = ∅; while DU is not empty do train M to convergence on data DF ∪DA; DU = 280-document subset of DU; for D ∈DU do PD, LD, CD = run M on D; PD = model-outputted probabilities = {P(y = ant(i))|y ∈Y(i), i ∈top spans(D)} LD = model-outputted antecedent labels = {(i, A(i))|i ∈top spans(D)} CD = model-outputted clusters from LD while num queried < num to query do m = choose-next-mention-to-query(PD, CD); [[Section 4]] a = maxy∈Y(m)\ϵ P(y = ant(m)); if user deems m and a coreferent then ML = ML ∪(a, m); LD = LD ∪(a, m); Add (a, m) to CD; else ˆa = user-selected antecedent for m; CL = CL ∪(a, m); ML = ML ∪(ˆa, m); LD = (LD\(a, m)) ∪(ˆa, m); Remove (a, m) and add (ˆa, m) to CD; end ML, CL = compute-link-closures; [[Algorithm 1]] PD = update-based-on-links(ML, CL); [[Section A.2]] end Label D with CD; end DA = DA ∪DU; DU = DU\DU; end 8331 Algorithm 3: Training loop for active learning with QBC selector (Differences from Algorithm 2 are highlighted) DF = {fully-labelled docs}, DU = {unlabelled docs}, DA = {docs labelled through active learning}, c M = ensemble model of submodels {M1, · · · , MM}, ML = must-link pairs, CL = cannot-link pairs; Init: DF = {first 700 docs}, DU = {remaining docs}, DA = ∅, ML = CL = ∅; while DU is not empty do train all M1, · · · , MM to convergence on data DF ∪DA; DU = 280-document subset of DU; for D ∈DU do {PD,i}, {LD,i}, PD, LD, CD = run c M on D; PD,i = submodel i’s output probabilities LD,i = submodel i’s output antecedent labels PD = ensembled (averaged) output probabilities from each submodel LD = ensembled antecedent labels computed from PD CD = ensembled clusters computed from LD while num queried < num to query do m = choose-next-mention-to-query({LD,i}, CD); [[Section 4]] a = maxy∈Y(m)\ϵ P(y = ant(m)); [[Probabilities from PD]] if user deems m and a coreferent then ML = ML ∪(a, m); Add (a, m) to CD; else ˆa = user-selected antecedent for m; CL = CL ∪(a, m); ML = ML ∪(ˆa, m); Remove (a, m) and add (ˆa, m) to CD; end ML, CL = compute-link-closures(ML, CL); [[Algorithm 1]] LD,i = update-based-on-links(ML, CL); [[Section A.2]] end Label D with CD; end DA = DA ∪DU; DU = DU\DU; end
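The link-closure rules of Algorithm 1 can also be rendered compactly in code. The sketch below is our own illustration, not the authors' implementation: per-mention must-link and cannot-link sets stand in for the ML and CL relations, and the two methods mirror the two cases of the algorithm (merging two must-link clusters also merges their cannot-link sets; a new cannot-link pair is expanded across both clusters).

```python
from collections import defaultdict

class LinkClosure:
    """Sketch of the incremental closure rules in Algorithm 1.

    ml[x]: mentions that must corefer with x (x's cluster, excluding x itself).
    cl[x]: mentions that cannot corefer with x.
    """
    def __init__(self):
        self.ml = defaultdict(set)
        self.cl = defaultdict(set)

    def cluster(self, x):
        return self.ml[x] | {x}

    def add_must_link(self, a, b):
        A, B = self.cluster(a), self.cluster(b)
        # Resolve MLs: every element of A must-links every element of B.
        for x in A:
            for y in B:
                if x != y:
                    self.ml[x].add(y)
                    self.ml[y].add(x)
        # Resolve CLs: A inherits B's cannot-links and vice versa.
        cl_a = set().union(*(self.cl[x] for x in A))
        cl_b = set().union(*(self.cl[y] for y in B))
        for x in A:
            for z in cl_b:
                self.cl[x].add(z)
                self.cl[z].add(x)
        for y in B:
            for z in cl_a:
                self.cl[y].add(z)
                self.cl[z].add(y)

    def add_cannot_link(self, a, b):
        # Only cannot-link needs updating: every element of a's cluster
        # cannot-links every element of b's cluster.
        for x in self.cluster(a):
            for y in self.cluster(b):
                self.cl[x].add(y)
                self.cl[y].add(x)

closure = LinkClosure()
closure.add_must_link("m1", "m2")
closure.add_cannot_link("m2", "m3")
closure.add_must_link("m3", "m4")
print(closure.cl["m1"])  # {'m3', 'm4'} (order may vary): the CL propagated to m4 via m3's cluster
```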
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8332–8341 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8332 Beyond Possession Existence: Duration and Co-Possession Dhivya Chinnappa∗ Center for Cognitive Computing Thomson Reuters Srikala Murugan and Eduardo Blanco Human Intelligence and Language Technologies Lab University of North Texas Abstract This paper introduces two tasks: determining (a) the duration of possession relations and (b) co-possessions, i.e., whether multiple possessors possess a possessee at the same time. We present new annotations on top of corpora annotating possession existence, and experimental results. Regarding possession duration, we derive the time spans we work with empirically from annotations indicating lower and upper bounds. Regarding co-possessions, we use a binary label. Cohen’s kappa coefficients indicate substantial agreement, and experimental results show that text is more useful than the image for solving these tasks. 1 Introduction Relation extraction is a core problem in natural language processing. Extracting relations is generally defined as linking two text chunks with a label. For example, relations such as PRESIDENT OF and MARRIED TO are common in information extraction (Angeli et al., 2015). Within computational semantics, relations capture spatial and temporal knowledge (Kordjamshidi et al., 2018; McDowell et al., 2017), as well as many other meanings (Abend and Rappoport, 2017). Approaches to relation extraction usually only determine the right label—often referred to as relation name or type—between two text chunks. Relation labels are certainly useful, but there is almost always complementary information that can be extracted. For example, relation labels do not give any hint about for how long the relation holds true or whether the relation is one-to-one or oneto-many. Many relations would benefit from having this additional information available, including LOCATED AT (people have many locations over time) and AGENT (some events are carried out by ∗Work done at the University of North Texas Figure 1: Sample tweet with text and an image. The author of the tweet possesses the cup for a few weeks or months. The tweet does not indicate a co-possession. only one person but not all; the additional agents may not be explicitly named in a given text). Possession relations are ubiquitous and understudied from a computational perspective. Possessions are defined as someone (the possessor) possessing something (the possessee), where possessing includes not only ownership but also control, kinship, physical and temporal proximity, and others (Section 2). From a computational perspective, previous work on extracting possessions targets possession existence (i.e., whether a possessor x possesses a possessee y) and limited temporal information using anchors, (e.g., at some point of time before or after an event, Section 2). In this paper, we complement previous work targeting possession existence with two attributes: duration (for how long does the possession hold true?) and co-possession (are there other possessors possessing the possessee concurrently?). Consider the tweet in Figure 1. The possessee is the cup, and from the text we understand that it is reusable. Thus the author of the tweet is likely to have the cup for a few weeks or months. If the 8333 possessee were a paper cup, however, the author would probably have it for at most one hour. 
Similarly, if the possessee were a personal coffee mug, the author would have it for longer—probably years. On the other hand, if either the text or image indicated that the setting was a restaurant, the author most likely would only have the cup for at most a couple hours, and there would be a copossession—the restaurant and the customer. The main contributions of this paper are: (a) strategy to determine sound intervals for possession durations grounded on lower and upper temporal bounds; (b) corpus of possession relations annotated with durations and copossessions;1 (c) detailed corpus analysis; and (d) experimental results showing that both tasks can be automated. While we work with possessions, a similar approach could be used to determine the duration of any relation and distinguish between one-to-one and one-to-many relations. 2 Related Work Most previous work on relation extraction does not identify the temporal bounds during which a relation holds true. There are, however, some exceptions that assign temporal information to relations (Ji et al., 2011; McClosky and Manning, 2012). Unlike these previous efforts, we work with durations that are rarely explicitly stated. Previous works on extracting possession relations primarily fall under efforts to extract large relation inventories. The goal of these efforts is to identify which relation—out of a predefined inventory—holds between two arguments. For example, Tratz and Hovy (2013) investigate semantic relations realized by English possessive constructions, both Nakov and Hearst (2013) and Tratz and Hovy (2010) consider relations realized by noun compounds such as family estate, and Badulescu and Moldovan (2009) extract relations realized by English genitives. Recently, Blodgett and Schneider (2018) present a corpus of web reviews in which the s-genitive and of-genitive are annotated with semantic labels (or supersenses). Regardless of the lexico-syntactic pattern, possession relations are a minority of the relations targeted by these previous works (other relations include THEME, QUANTITY, CAUSE, ORIGINATOR, EXPERIENCER, etc.). In addition, they do not target possession duration or co-possession. 1Available at http://dhivyachinnappa.com To the best of our knowledge, there are three previous works on extracting possession relations. All of them introduce their own annotations and present experimental results. In our previous work (Chinnappa and Blanco, 2018), we consider possession relations between individuals (named entity person and personal pronouns) and concrete objects mentioned within the same sentence in the OntoNotes corpus. Regarding time, we indicate whether the possession held true before, during or after the event in the sentence. Banea and Mihalcea (2018) consider possessions between the author of a weblog (i.e., the possessor is fixed) and the possessees identified in the weblog. Regarding time, they exclusively target possessions that held true when the weblog was written—not before or after. More recently, we investigate the problem of determining whether authors of tweets possess the objects they tweet about, and use tweets consisting of text and images (Chinnappa et al., 2019). All of these previous efforts target possession existence (i.e., whether a possession relation holds true) and very limited temporal information. Unlike them, we go beyond possession existence and target possession duration and co-possession. 
Finally, we note that theoretical works consider having temporary control of something as a type of possession (Tham, 2004). For example, ship captains and plane pilots have control possession of the ships and planes under their command, but usually not ownership or alienable possession. Similarly, office workers have control possession of their work desk and computer, but they do not own them. According to this definition, control possessions indicate co-possession. We note, however, that control possessions are only a subset of possessions thus they are insufficient to determine co-possession. Event Durations. Our methodology to annotate possession durations is heavily inspired by previous work targeting event durations (Pan et al., 2011). The main difference is that we do not target events (e.g., How long did met in John met his advisor on Thursday last?) but possession relations. As we shall see, we derive sound time intervals for possession durations from lower and upper temporal bounds. To the best of our knowledge, we are the first to target the duration in which a semantic relation holds true. Not surprisingly, we find that possession durations tend to be longer than events. For example, events may last only a few seconds 8334 (e.g., turn on a car), but possessions last at least a few minutes and many last over a year. 3 Annotating Possession Duration and Co-possession To the best of our knowledge, we are the first to go beyond possession existence and target possession duration and co-possession. More generally, we are the first to determine for how long a semantic relations holds true, and distinguish between oneto-one and one-to-many relations. Thus, we create a new corpus to tackle these tasks. Source Corpora. Starting from plain text is a straightforward choice. Since existing corpora already annotate possession existence, however, it would be suboptimal. Thus we work with the corpora by Chinnappa and Blanco (2018), Banea and Mihalcea (2018), and Chinnappa et al. (2019), and enhance their possession existence annotations with possession duration and co-possession annotations. These source corpora contain 2,257 possession relations, a relatively small amount. We note, however, that the source corpora are diverse (Section 2) and include possession relations identified in formal (OntoNotes) and informal texts (weblogs, Twitter). Additionally, we work with possessions identified from not only text (OntoNotes and weblogs), but also tweets consisting of text and images. The corpus by Chinnappa and Blanco (2018) contains 979 sentences, and we select the 358 intra-sentential possessions annotated in those sentences. The corpus by Banea and Mihalcea (2018) contains 799 possession relations. The possessor is always the author of a weblog, and the possessee is mentioned in the weblog and can be: (a) a concrete object, e.g., car, notebook; (b) an implicit concrete object associated with an event, e.g., car for driving, cell phone for texting; or (c) an abstract object, e.g., wifi, idea. The corpus by Chinnappa et al. (2019) contains 5,000 tweets (text + image). We select 1,100 tweets in which the author (the possessor) possesses a concrete object mentioned in the tweet (the possessee). 3.1 Annotation Process and Post-Processing The annotations were done by two graduate students who fully annotated the whole corpus. Regarding possession duration, they annotate lower and upper bounds. Then, we post-process their annotations to obtain time intervals for possession durations. 
Regarding co-possession, they use a binary label and no post-processing takes place.

3.1.1 Possession Duration
How long do possession relations hold true for? The answer to this question is not obvious, and previous work has named temporal durations in general a significant issue for temporal reasoning (Allen and Ferguson, 1994). Intuitively, possessors have possession of some possessees for short periods of time (e.g., ice cream, pencils) and of other possessees for long periods of time (e.g., cars). But there are exceptions, e.g., drivers have (relatively) short possessions of rental cars, at least compared to the cars they own. In addition, possession durations are almost never explicitly stated in text (a rare exception: I got rid of this computer 5 years after buying it), even though humans have no trouble inferring some duration information.
To address the inherent difficulties of annotating temporal durations, we follow previous work on determining event durations (Pan et al., 2011). Specifically, we ask annotators to provide lower and upper bounds for the duration of the possession relation between possessor and possessee (recall that we already know whether a possession exists). Lower and upper bounds consist of an integer followed by a unit of time (seconds, minutes, hours, days, weeks, months or years). These annotations are rather open and we do not expect to obtain high agreements. As we shall see, however, a simple post-processing step allows us to obtain sound time intervals for possession duration, where sound means empirically driven and with substantial agreements (Section 3.2).
We argue that any predefined duration intervals (e.g., less than five minutes, between five minutes and a day, more than a day and less than a month, over a month) would be arbitrary, at least to a certain degree. Additionally, we would have to go back and forth annotating and redefining the predefined intervals until we obtain (a) a reasonable distribution of duration intervals (e.g., avoid 95% of possessions assigned to a single interval) and (b) substantial agreements. Asking annotators for lower and upper bounds and applying the proposed post-processing bypasses all these issues.
Post-Processing Possession Durations. We post-process the annotations of lower and upper bounds for possession durations following two steps:
1. Convert lower and upper bounds to minutes and calculate the mean.
2. Calculate the natural logarithm of the mean duration from Step (1).
Figure 2: Distribution of mean possession durations after post-processing (i.e., after converting to minutes and calculating the natural logarithm). We determine duration labels after identifying changes in frequency at 6 (6 hours) and 13 (10 months).
Converting to minutes allows us to measure time with a single unit and facilitates further post-processing and calculating agreements (Section 3.2). We convert to minutes (as opposed to, for example, seconds) because the annotators never chose less than a minute as a lower bound. Calculating the logarithm is useful to account for the fact that temporal differences must be calculated in relative terms. For example, the differences between (a) 5 minutes and 10 minutes and (b) 5 years and 10 years should be roughly the same. On the other hand, the difference between (b) 5 years and 10 years and (c) 5 years plus 5 minutes and 10 years plus 10 minutes should be close to zero. Figure 2 plots the frequency of mean possession durations after post-processing.
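A short sketch of these two post-processing steps, written by us for illustration (it assumes 30-day months and 365-day years, which the paper does not specify):

```python
import math

# Minutes per time unit; the annotators never used a unit smaller than a minute.
UNIT_MINUTES = {
    "minutes": 1,
    "hours": 60,
    "days": 24 * 60,
    "weeks": 7 * 24 * 60,
    "months": 30 * 24 * 60,   # assumption: 30-day months
    "years": 365 * 24 * 60,   # assumption: 365-day years
}

def duration_score(lower, upper):
    """Post-process one (lower, upper) annotation, e.g. (30, "minutes"), (12, "hours").

    Step 1: convert both bounds to minutes and take the mean.
    Step 2: return the natural logarithm of that mean.
    """
    lo = lower[0] * UNIT_MINUTES[lower[1]]
    hi = upper[0] * UNIT_MINUTES[upper[1]]
    return math.log((lo + hi) / 2)

print(duration_score((30, "minutes"), (12, "hours")))  # ~5.9
print(duration_score((1, "years"), (10, "years")))     # ~14.9
```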
The distribution shows a drop at 6 (equivalent to 6 hours) and a rise at 13 (equivalent to 10 months). Based on these observations, we define the following intervals to specify possession durations:
• short: possessions lasting less than 6 hours;
• medium: possessions lasting at least 6 hours and less than 10 months; and
• long: possessions lasting at least 10 months.
The annotations we release include (a) the lower and upper bounds and (b) the 3-way duration label for each possession. Except to discuss agreements, however, in the remainder of this paper we work with the three duration labels.
3.1.2 Co-Possession
Annotating co-possession is relatively straightforward. Knowing that a possession relation exists between a possessor x and a possessee y, annotators use a binary label to indicate whether an additional possessor x' has possession of y concurrently with x. x' must not be named explicitly, as otherwise an explicit possession relation would exist. Co-possession can sometimes be determined based on the possessee. For example, commercial plane pilots have control possession of the planes they fly, but usually there are concurrent possessors (e.g., co-pilot, owner). Determining many co-possessions, however, requires context. For example, consider a blogger writing down I was using the wifi at the coffee shop. There is a possession relation between the author of the blog and wifi, and that is a co-possession because other people are concurrent possessors (e.g., the owners of the coffee shop, other clients).
3.2 Inter-Annotator Agreement
Possession Duration: short, medium and long. We use unweighted Cohen's kappa (κ) to calculate the inter-annotator agreement with the three possession duration labels: short, medium and long. The κ coefficient is 0.63, which is considered substantial. Interpreting the κ coefficient is somewhat subjective, but over 0.8 would be considered nearly perfect (Artstein and Poesio, 2008). We also note that a weighted version of agreement would yield higher agreements.
Possession Duration: Lower and Upper Bounds. Calculating agreement between the lower and upper bounds for possession duration is not straightforward. For example, the agreement between at least 30 minutes and at most 12 hours and at least 1 hour and at most 1 day should be considerable, despite the lower and upper bounds differing by a sizable amount (half and double, respectively).
Figure 3: Observed agreement for the POSSESSION(x, y) in [We]x brought the kids rod and reels from [home]y so they could fish. The first annotator chose 6 months and 50 years as lower and upper bound (steeper curve), and the second annotator chose 1 year and 100 years (flatter curve). Observed agreement is the overlap between both curves, which is 0.64.
Cohen's κ is usually used for categorical labels and is not directly applicable to ranges of durations defined by lower and upper bounds. We follow previous work on event durations to calculate the agreement (Section 2). The formula for Cohen's κ is κ = (P(A) − P(E)) / (1 − P(E)), where P(A) is the observed agreement between annotators and P(E) is the expected agreement. We assume that possession durations follow a normal distribution, and that the lower and upper bounds account for 80% of the distribution. Under these assumptions, the lower (x_lower) and upper (x_upper) bounds are 1.28 standard deviations (σ) from the mean (µ), thus σ = (x_upper − µ)/1.28 = (x_lower − µ)/(−1.28) and µ = (x_upper + x_lower)/2.
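Concretely, these assumptions can be turned into a small numerical check: each (lower, upper) annotation defines a normal curve, and the agreement between two annotations is the area under the pointwise minimum of the two curves. The sketch below is our own illustration (assuming 30-day months, 365-day years, and distributions over raw durations in minutes), not the authors' code, and it need not reproduce the exact 0.64 reported in Figure 3.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bounds_to_normal(lower_minutes, upper_minutes):
    """Map a (lower, upper) duration annotation, in minutes, to a normal distribution.

    The bounds are assumed to cover 80% of the distribution, i.e. to sit 1.28
    standard deviations below and above the mean.
    """
    mu = (lower_minutes + upper_minutes) / 2
    sigma = (upper_minutes - mu) / 1.28
    return mu, sigma

def observed_agreement(ann1, ann2, grid=20000):
    """P(A): overlap (integral of the pointwise minimum) of the two normal curves."""
    (m1, s1), (m2, s2) = bounds_to_normal(*ann1), bounds_to_normal(*ann2)
    lo = min(m1 - 4 * s1, m2 - 4 * s2)
    hi = max(m1 + 4 * s1, m2 + 4 * s2)
    step = (hi - lo) / grid
    return sum(min(normal_pdf(lo + i * step, m1, s1),
                   normal_pdf(lo + i * step, m2, s2)) * step
               for i in range(grid))

# The two annotations from Figure 3, converted to minutes.
MONTH, YEAR = 30 * 24 * 60, 365 * 24 * 60
print(observed_agreement((6 * MONTH, 50 * YEAR), (1 * YEAR, 100 * YEAR)))
```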
We calculate observed agreement between annotations (P(A)) as the overlap between their normal distributions, as exemplified in Figure 3. We calculate expected agreement (P(E)) as the average overlap between each annotation and the global distribution. In other words, the expected agreement would result from annotations that follow perfectly the global normal distribution. The κ coefficient for lower and upper bounds is low, 0.37. We note, however, that (a) it would be larger if we assumed that annotators annotate less than 80% of the duration distribution, and (b) previous work on event durations obtained 0.08 κ under the same assumptions. Additionally, we experiment with the three duration intervals described above (κ: 0.63); our rationale to annotate lower Only text (source: OntoNotes and weblogs) Possession duration short 15.1% medium 6.2% long 78.7% Co-Possession no 56.5% yes 43.5% Text + image (source: Tweets) Possession duration short 4.3% medium 38.0% long 57.7% Co-Possession no 72.7% yes 27.3% Table 1: Label distributions. Top block: possessions identified in text (from OntoNotes and weblogs); bottom block: possessions identified in text and image (from tweets). and upper bounds is to derive sound intervals. Co-Possession. The Cohen’s kappa (κ) coefficient for co-possession (two labels: yes and no) is 0.65, which again is considered substantial. 4 Corpus Analysis Table 1 presents the label distribution in our corpus. We distinguish between possessions identified in text (Chinnappa and Blanco, 2018; Banea and Mihalcea, 2018), and those identified in tweets consisting of text and an image (Chinnappa et al., 2019). Regarding possession duration, most possessions are long (over 10 months, 78.7% and 57.7%). Possessions identified in tweets are much more likely to have medium length (38.0%) than those identified in text (6.2%), and the opposite it true about short durations: 4.3% vs. 15.1%. Regarding co-possession, yes and no are roughly uniformly distributed with possessions identified in text (yes: 56.5% and no: 43.5%). In tweets consisting of text and an image, however, no dominates yes (72.7% vs. 27.3%). We present label distributions based on the WordNet synset and number of the possessee in Table 2. The majority (96.5%) of possessees are nouns. The top 4 most frequent WordNet synsets (container, device, vehicle, and covering) show interesting patterns. First, vehicles (e.g., car, truck) and containers (e.g., handbag, spoon) are most of the times part of long possessions. Second, devices (e.g., comb, cell phone) are twice as likely to be part of a medium length possession. Third, coverings (e.g., jacket, pants, shirt) are (b.1) almost never part of short possessions and (b.2) almost always (80%) part of long possessions Pos8337 WordNet Synsets (top 4 most frequent) Number Container Device Vehicle Covering Singular Plural Not noun % Possession duration short 1.5 2.3 2.5 0.6 6.9 2.1 0.7 9.7 medium 4.8 6.1 4.0 1.7 14.4 7.3 0.4 22.1 long 18.4 9.5 10.8 8.0 49.0 16.8 2.4 68.2 All 24.7 17.9 17.3 10.3 70.3 26.2 3.5 100.0 Co-Poss. no 13.5 11.8 11.2 4.5 43.1 19.5 2.0 64.6 yes 11.2 6.1 6.1 5.8 27.2 6.7 1.5 35.4 All 24.7 17.9 17.3 10.3 70.3 26.2 3.5 100.0 Table 2: Label distribution of duration and co-possession labels depending on the WordNet synset and number of the possessee. All numbers are percentages in the whole corpus (text and text + images). Sentence with possessor x and possessee y Duration Co-Poss. 
1 Everything served cold, with [ice cream]y, fruit salad and strawberry yoghurt pudding for dessert [...] (x: the author of the weblog) short no 2 ”At least [we]x have the decency to drop [bombs]y from airplanes”, he said. medium no 3 I had to get out my [phone]y for a couple pics. (x: the author of the weblog) long no 4 [We]x took a [taxi]y along the path of the highway that heads toward Disney, trying to experience this mysterious park from close by. short yes 5 The first two months of the summer, I drove Andrew’s wrapped car that has his face all over it (lucky me), and then the last two months, the dealership was able to provide me with a [loaner car]y. (x: author of the weblog) medium yes 6 [They]x kept my father’s [car]y for a year without writing a confiscation order. long yes Table 3: Annotation examples on selected possessions identified in text. (x: possessors, y: possessees). sesses not present in WordNet (e.g., Garmin, dupioni) and those not subsumed by the top 4 most frequent synsets have roughly the same distribution than all possessees (Table 1). Regarding copossession, devices (e.g., computer, watch) and vehicles (e.g., plane, truck) follow a similar distribution: co-possession is roughly twice as likely. The distribution of other synsets indicate that possessees are unlikely to have co-possessors, but to a lesser degree. The right-hand side of Table 2 shows the label distributions depending on the possessee number. Plural and singular nouns follow a similar distribution with possession duration, but plural nouns are less likely to have concurrent co-possessors than singular nouns. Examples. Table 3 presents annotation examples on top of possessions identified in text. In Example (1), the possessor is the author of the blog and the possessee is the ice cream. The author is describing a meal, and it is clear that the possession lasted for a short period of time. There is no indication that the author shared the ice cream thus annotators chose no for co-possession. Example (2) belongs to a document describing a war zone were bombs (the possessee) were dropped. Annotators interpreted that the speaker uses we to refer to his nation, and annotated medium duration as bombs are not stored for long periods of time during war. They also decided that there is no co-possession since the possessor we is a collective noun referring to an entire nation. Example (3) is from a weblog. The possessor is the author and the possessee is a phone. It is reasonable to infer from context that the possessee is a cell phone (landline phones do not have cameras) and that the author is the owner. Thus, annotators chose long duration and no co-possession. In Example (4), the possessor we is the client of a taxi driver, and the possessee is the taxi. While not explicitly stated, annotators inferred that (a) the possession lasted for a short period of time and (b) there are concurrent co-possessors (e.g., the taxi driver). Note that the possession duration between the taxi driver and the same possessee is likely to be medium or long, but we only annotate the duration between we and taxi. Example (5) illustrates a rare phenomenon: an explicit temporal interval (i.e., two months) indicating the possession duration. Thus, annotators chose medium duration. Regarding co-possession, the company loaning the car was clearly a copossessor of the loaner car while the author of the blog borrowed the car, so annotators chose yes. 
Finally, Example (6) exemplifies a long pos8338 a) b) c) Possessee: bowl Possessee: pen Possessee: computer Duration Co-Poss. Duration Co-Poss. Duration Co-Poss. short no medium no long no d) e) f) Possessee: hats Possessee: jackets Possessee: shirts Duration Co-Poss. Duration Co-Poss. Duration Co-Poss. short yes medium yes long yes Table 4: Annotation examples on selected possessions identified in tweets consisting of text and an image. The possessors are the authors of the tweets, and the possessees are concrete objects in their tweets. session with co-possession. The context is a law enforcement operation in which They (the police) kept the possessee (car). The duration of the possession is explicit (a year), and during that time my father was still the owner. Thus, annotators chose long and yes for duration and co-possession. Table 4 presents annotation examples using possession relations identified in tweets consisting of text and images. We do not describe these examples in detail as they are self-explanatory. 5 Experiments and Results In order to predict possession duration and copossession, we experiment with Logistic Regression and a neural network ensemble including a text component and two image components. Each possession relation becomes an instance, and we create stratified training (80%) and test (20%) sets. We also reserve 20% of the training as validation set. More specifically, we build two classifiers: one for possession duration (short, medium, or long) and one for co-possession (yes or no). Logistic Regression. We use the implementation by scikit-learn (Pedregosa et al., 2011), and use bag-of-words features for the sentence at hand. Specifically, we use binary flags indicating word presence, and additional flags to indicate the word corresponding to the possessor and possessee. Neural Network. The network architecture is similar to the one in our previous work (Chinnappa et al., 2019). It includes a text component and an image component (Table 4). The latter component is disabled if no image is available. The text component is an LSTM that takes as input the sentence (or tweet) containing the possessee. Words are represented with the concatenation of their 300-dimensional GloVe embedding (Pennington et al., 2014) and an additional embedding indicating whether a token is the possessor, possessee, or neither. We train the additional embeddings from scratch with the rest of the network. The image component uses two pretrained neural networks. First, we concatenate to the softmax output layer the weights from the average pooling layer (second to last layer) of InceptionNet (Szegedy et al., 2015). Second, we obtain the top 5 tags from the Google Cloud Vision API and incorporate them as an additional textual input. 8339 ... LSTM ... pink LSTM ... smoothie LSTM ... material LSTM ... property LSTM ... drink LSTM Wanted some reusable cup today Vision API InceptionNet ... ... ... ... ... ... ... ... ... LSTM LSTM LSTM LSTM ... ... Word embed. Addtl. embed. top 5 tags Word embed. weights from average pooling layer (second to last) Softmax text image ... Figure 4: Neural network architecture to predict possession duration and co-possession. We include a text component (above dotted line) and two image components (below dotted line). Note that the top 5 tags from the Vision API become a textual input, and we use pretrained word embeddings and an LSTM for them. Majority Baseline Log. Regression LSTMword embedings LSTM+addtl. embeds. 
P R F P R F P R F P R F short 0.00 0.00 0.00 1.00 0.35 0.52 0.88 0.35 0.50 0.75 0.60 0.67 medium 0.00 0.00 0.00 0.68 0.35 0.46 0.64 0.21 0.32 0.68 0.49 0.57 long 0.73 1.00 0.84 0.81 0.97 0.88 0.78 0.96 0.86 0.94 0.90 0.82 W. Avg. 0.53 0.73 0.61 0.80 0.80 0.77 0.76 0.73 0.77 0.82 0.83 0.82 yes 0.00 0.00 0.00 0.62 0.58 0.60 0.75 0.58 0.65 0.72 0.62 0.67 no 0.56 1.00 0.72 0.69 0.72 0.70 0.72 0.85 0.78 0.73 0.82 0.77 W. Avg 0.31 0.56 0.40 0.66 0.66 0.66 0.73 0.73 0.72 0.73 0.73 0.73 Table 5: Results obtained with possession relations identified from text (OntoNotes and weblogs). Addtl. embeddings refers to the embeddings indicating whether a token is the possessor, the possessee, or neither one. More specifically, we use GloVe embeddings and an LSTM to process the additional textual input. Note that individual tags identified in the image are sometimes multiple tokens (e.g., coffee mug), so an LSTM is a good choice. We use the implementation by Keras (Chollet et al., 2015) with TensorFlow backend (Abadi et al., 2015). More specifically, we use the Adam optimizer (Kingma and Ba, 2014) and categorical cross entropy as a loss function. We use batch size 32 for up to 200 epochs, but stop earlier if there is no improvements in the validation for 5 epochs. 5.1 Results Table 5 presents the results with instances including only text. Regarding possession duration, the majority baseline (always long) obtains 0.61 Fmeasure. The second baseline, Logistic Regression, obtains 0.77 F-measure. These results are strong, however, Logistic Regression is biased towards the most common label (long, Table 1), and performs poorly with the other labels (short and medium). In fact, Logistic Regression outperforms LSTM+addtl. embeds. with long, but the weighted F-measure is lower (0.77 vs. 0.82). Regarding co-possession, we observe a similar trend, but the LSTM performs similar with both labels. LSTM and Additional Embeddings. Table 5 presents results obtained with the LSTM using (a) only the word embeddings and (b) incorporating the additional embeddings for the possessor and possessee. The LSTM with only word embeddings obtains worse results predicting possession durations (0.77 vs. 0.82 weighted Fmeasure), and virtually the same results predicting co-possessions (0.72 vs. 0.73 weighted Fmeasure). These results lead to the conclusion that the specific possessor and possessee along with context are important to determine how long a possession holds true. On the other hand, determining whether there are concurrent co-possessors does not benefit from the specific possessor and possessee (i.e., events and other information contained in the sentence are sufficient). Table 6 presents results with the tweets (all of them include both text and images). The results indicate that the text is vital to determine possession duration and co-possession, and that the image components do not bring any improvements. Logistic Regression obtains best results for both pos8340 Majority Baseline Log. Regression LSTM+addtl. embeds only image comp. text + image P R F P R F P R F P R F P R F short .00 .00 .00 .00 .00 .00 .50 .11 .18 .00 .00 .00 .12 .10 .11 medium .00 .00 .00 .61 .57 .59 .58 .45 .50 .39 .29 .33 .52 .30 .38 long .58 1.00 .73 .70 .78 .74 .66 .80 .73 .57 .71 .63 .64 .83 .72 W. Avg. .33 .58 .42 .64 .67 .65 .62 .64 .62 .48 .52 .49 .57 .59 .56 yes .00 .00 .00 .41 .29 .34 .37 .32 .34 .30 .14 .19 .33 .15 .21 no .73 1.00 .84 .76 .85 .80 .76 .79 .78 .73 .88 .80 .74 .89 .81 W.Avg. 
.53 .73 .62 .67 .70 .68 .65 .67 .66 .61 .68 .63 .63 .69 .64 Table 6: Results obtained with possession relations identified from text and image (tweets). session duration and co-possession, and obtains similar results than the text component of the neural network (LSTM+addtl embeddings): 0.65 vs. 0.62 F-measure (duration) and 0.68 vs 0.66 F-measure (co-possession). While including the image component slightly decreases the results predicting copossession (0.66 vs. 0.64 F-measure), it heavily decreases results predicting possession duration (0.62 vs. 0.56 F-measure). We attribute these unexpected results to the nature of the tasks. Image tags provide high-level information about the possessee (e.g., cup), and determining possession durations and co-possessions require fine-grained information about the possessee (e.g., reusable, disposable) as well as knowledge about the events that connect the possessor and possessee. 6 Conclusions Standard relation extraction does not provide information about for how long relations hold true or whether relations are one-to-one or one-to-many. In this paper, we tackle both problems and determine possession durations and co-possessions. Possessions are ubiquitous yet understudied from a computational perspective. From a theoretical perspective, they include having control over something (e.g. flying a plane, impounding a vehicle, eating ice cream) thus most objects are actually possessees of one or more possessors. Additionally, as just exemplified, many possessions can be extracted even if prototypical possession verbs (e.g., have, buy, acquire) are missing. We have presented new annotations on top of existing corpora. Regarding durations, we collect lower and upper bounds in order to derive sound duration intervals. The resulting three intervals obtain substantial agreement (0.63 Cohen’s κ). Regarding co-possessions, we obtain slightly better agreement (0.65 Cohen’s κ). We have also presented baseline models and a neural network architecture to solve both tasks. Beyond word embeddings, the LSTM benefits from additional embeddings indicating the tokens that are the possessor and possessee. Information extracted from the image, however, is not helpful. While the work presented here targets possession relations, we believe that a similar approach could be used to to determine for how long any semantic relation holds true. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 77–89, Vancouver, Canada. Association for Computational Linguistics. James F Allen and George Ferguson. 1994. Actions and events in interval temporal logic. Journal of logic and computation, 4(5):531–579. 
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 8341 344–354, Beijing, China. Association for Computational Linguistics. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Comput. Linguist., 34(4):555–596. Adriana Badulescu and Dan Moldovan. 2009. A semantic scattering model for the automatic interpretation of english genitives. NLE. Carmen Banea and Rada Mihalcea. 2018. Possession identification in text. Natural Language Engineering, 24(4):589610. Austin Blodgett and Nathan Schneider. 2018. Semantic Supersenses for English Possessives. In LREC. Dhivya Chinnappa and Eduardo Blanco. 2018. Mining possessions: Existence, type and temporal anchors. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 496–505, New Orleans, Louisiana, USA. Association for Computational Linguistics. Dhivya Chinnappa, Srikala Murugan, and Eduardo Blanco. 2019. Extracting possessions from social media: Images complement language. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 663– 672, Hong Kong, China. Association for Computational Linguistics. Franc¸ois Chollet et al. 2015. Keras. https:// github.com/fchollet/keras. Heng Ji, Ralph Grishman, and Hoa Dang. 2011. Overview of the TAC2011 knowledge base population track. In TAC 2011 Proceedings Papers. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Parisa Kordjamshidi, Archna Bhatia, James Pustejovsky, and Marie-Francine Moens. 2018. Proceedings of the first international workshop on spatial language understanding. New Orleans. Association for Computational Linguistics. David McClosky and Christopher D. Manning. 2012. Learning constraints for consistent timeline extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 873–882, Jeju Island, Korea. Association for Computational Linguistics. Bill McDowell, Nathanael Chambers, Alexander Ororbia II, and David Reitter. 2017. Event ordering with a generalized model for sieve prediction ranking. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 843–853, Taipei, Taiwan. Asian Federation of Natural Language Processing. Preslav I. Nakov and Marti A. Hearst. 2013. Semantic interpretation of noun compounds using verbal and other paraphrases. ACM Trans. Speech Lang. Process., 10(3):13:1–13:51. Feng Pan, Rutu Mulkar-Mehta, and Jerry R. Hobbs. 2011. Annotating and learning event durations in text. Computational Linguistics, 37(4):727–752. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. 
Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9. Shiao Wei Tham. 2004. Representing Possessive Predication: Semantic Dimensions and Pragmatic Bases. Ph.D. thesis, Stanford University. Stephen Tratz and Eduard Hovy. 2010. A taxonomy, dataset, and classifier for automatic noun compound interpretation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 678–687, Stroudsburg, PA, USA. Association for Computational Linguistics. Stephen Tratz and Eduard H. Hovy. 2013. Automatic interpretation of the english possessive. In ACL (1), pages 372–381. The Association for Computer Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 807–812 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 807 Unsupervised FAQ Retrieval with Question Generation and BERT Yosi Mass, Boaz Carmeli, Haggai Roitman and David Konopnicki IBM Research AI Haifa University, Mount Carmel, Haifa, HA 31905, Israel {yosimass,boazc,haggai,davidko}@il.ibm.com Abstract We focus on the task of Frequently Asked Questions (FAQ) retrieval. A given user query can be matched against the questions and/or the answers in the FAQ. We present a fully unsupervised method that exploits the FAQ pairs to train two BERT models. The two models match user queries to FAQ answers and questions, respectively. We alleviate the missing labeled data of the latter by automatically generating high-quality question paraphrases. We show that our model is on par and even outperforms supervised models on existing datasets. 1 Introduction Many websites and online communities publish FAQ to help their users find relevant answers to common questions. An FAQ consists of pairs of questions and answers {(q, a)}. The FAQ retrieval task involves ranking {(q, a)} pairs for a given user query Q.1 Searching over FAQ can leverage multifield indexing and retrieval (Karan and ˇSnajder, 2016). Hence, a user query Q may be matched with either the question field q, the answer field a or the concatenated field q+a (Karan and ˇSnajder, 2016). The association of questions to answers in the FAQ pairs, can be utilized as weak supervision, for training neural models to predict the similarity between user queries and answers (i.e., Q-to-a matching) (Gupta and Carvalho, 2019; Karan and ˇSnajder, 2018; Sakata et al., 2019). However, FAQ pairs by themselves do not provide the required labeled data for training a model to predict the association between user queries and FAQ questions (i.e., Q-to-q matching). Thus, a labeled dataset with user queries Q and their matching {(q, a)} 1Throughout this paper we use the term “question” (q) to denote a question within a given FAQ pair, and “query” (Q) to denote an issued user query. pairs is required for supervised learning (Gupta and Carvalho, 2019; Karan and ˇSnajder, 2018; Sakata et al., 2019). Such a dataset is usually manually generated or obtained from query-log mining. Yet, the construction of such a dataset either requires domain expertise (e.g., enriching the dataset with manually generated question paraphrases (Karan and ˇSnajder, 2018)) or assumes the availability of query-logs (Kim and Seo, 2006, 2008). Whenever such a dataset is unavailable, one must resort to utilizing unsupervised retrieval models for Q-to-q matching. Previous unsupervised FAQ retrieval models (Burke et al., 1997; Brill et al., 2002; Karan et al., 2013; Karan and ˇSnajder, 2018; Wu et al., 2005) have utilized so far “traditional” information retrieval techniques, such as lexical and semantic text matching, query expansion, etc. In this paper we overcome the aforementioned unsupervised gap, by using distant supervision to train neural models. Our method is composed of a combination of three unsupervised methods. Each method is utilized for re-ranking an initial pool of FAQ pairs obtained by a simple BM25 retrieval (Robertson and Zaragoza, 2009). The first method applies a focused-retrieval approach, utilizing passages for answer re-ranking (Bendersky and Kurland, 2008). 
Each one of the two other methods fine-tunes a BERT model (Devlin et al., 2019), one for matching Q-to-a and one for matching Q-to-q. To overcome the lack of training data in the latter’s case, we further implement a novel weaksupervision approach using automatically generated question paraphrases, coupled with smart filtering to ensure high-quality paraphrases. We then combine the outcome of the three methods using an unsupervised late-fusion method. Overall, we show that our unsupervised FAQ retrieval approach is on par and sometimes even outperforms state-ofthe-art supervised models. 808 2 Related work Several previous works have also utilized Deep Neural Networks (DNN) for FAQ retrieval. (Karan and ˇSnajder, 2016) used Convolution Neural Networks (CNN) for matching user queries to FAQ. (Gupta and Carvalho, 2019) used combinations of Long Short-Term Memory (LSTM) to capture Qto-q and Q-to-a similarities. Yet, those works are supervised and use user queries (Q) for training. Following the success of BERT (Devlin et al., 2019) in NLP tasks, (Sakata et al., 2019) have recently used a search engine for Q-to-q matching and then combined its results with a supervised BERT model for Q-to-a matching. We use a similar BERT model for Q-to-a matching, but differently from (Sakata et al., 2019), we use it in an unsupervised way, and we further introduce a second unsupervised BERT model for Q-to-q matching. A somewhat related area of research is Community Question Answering (CQA) (Patra, 2017; Zhou et al., 2015) and the related TREC tracks.23 While CQA shares some common features to FAQ retrieval, in CQA there are additional signals such as votes on questions and answers, or the association of user-answer and user-question. Clearly, in a pure FAQ retrieval setting, such auxiliary data is unavailable. Hence, we refrain from comparing with such works. 3 Unsupervised FAQ Retrieval Approach Our proposed FAQ retrieval approach uses distant supervision to train neural models and is based on an initial candidates retrieval followed by a reranking step. Recall that, the FAQ dataset is composed of {(q, a)} pairs. The initial candidate retrieval is based on indexing {(q, a)} pairs into a search engine index (Section 3.1) and searching against the index. The re-ranking step combines three unsupervised re-rankers. The first one (Section 3.2) is based on a focused-retrieval approach, utilizing passages for answer re-scoring. The two other rerankers fine-tune two independent BERT models. The first BERT model (Section 3.3), inspired by (Sakata et al., 2019), is fine-tuned to match questions (q) to answers (a). At run time, given a user query Q, this model re-ranks top-k {(q, a)} candidate pairs by matching the user query Q to the answers (a) only. 2http://alt.qcri.org/semeval2016/task3/ 3http://alt.qcri.org/semeval2017/task3/ The second BERT model (Section 3.4) is designed to match user queries to FAQ questions. Here, we utilize weak-supervision for generating high quality question paraphrases from the FAQ pairs. The BERT model is fine-tuned on the questions and their generated paraphrases. At run time, given a user query Q, this model gets the topk {(q, a)} candidate pairs and re-ranks them by matching the user query Q to the questions (q) only. The final re-ranking is obtained by combining the three re-rankers using an unsupervised latefusion step (Section 3.5). The components of our method are described in the rest of this section. 
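In code, the overall flow of this section looks roughly as follows. This is our own schematic, not code from the paper: bm25_top_k and the scorer callables are placeholders for the initial retrieval (Section 3.1) and the three re-rankers (Sections 3.2-3.4), and the fusion shown is the max-min normalized sum (CombSUM) variant of Section 3.5.

```python
def min_max(scores):
    """Max-min normalize a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def rerank(query, bm25_top_k, scorers, k=100):
    """Fetch top-k FAQ pairs with BM25, score them with each re-ranker,
    and fuse the normalized scores by summation."""
    candidates = bm25_top_k(query, k)                 # list of (question, answer) pairs
    normalized = [min_max([scorer(query, q, a) for q, a in candidates])
                  for scorer in scorers]
    fused = [sum(scores) for scores in zip(*normalized)]
    ranked = sorted(zip(fused, candidates), key=lambda pair: -pair[0])
    return [qa for _, qa in ranked]

# Hypothetical usage, with placeholder callables for the three re-rankers:
# scorers = [maxpsg_score, bert_q_to_a_score, bert_q_to_q_score]
# results = rerank("how do I delete my account?", es_bm25_search, scorers)
```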
3.1 Indexing and initial candidates retrieval We index the FAQ pairs using the ElasticSearch4 search engine. To this end, we represent each FAQ pair (q, a) as a multifield document having three main fields, namely: question q, answer a, and the concatenated field q+a. Given a user query Q, we match it (using BM25 similarity (Robertson and Zaragoza, 2009)) against the q+a field5 and retrieve an initial pool of top-k FAQ candidates. 3.2 Passage-based re-ranking Our first unsupervised re-ranker applies a focused retrieval approach. To this end, following (Bendersky and Kurland, 2008), we re-rank the candidates using a maximum-passage approach. Such an approach is simply implemented by running a sliding window (i.e., passage) on each candidate’s q+a field text, and scoring the candidate according to the passage with the highest BM25 similarity to Q (Gry and Largeron, 2011). We hereinafter term this first re-ranking method as bm25-maxpsg. 3.3 BERT model for Q-to-a similarity Among the two BERT (Devlin et al., 2019) rerankers, the first one, BERT-Q-a, aims at reranking the candidate FAQ pairs {(q, a)} according to the similarity between a given user query Q and each pair’s answer a. To this end, we fine-tune the BERT model from the FAQ pairs {(q, a)}, using a triplet network (Hoffer and Ailon, 2015). This network is adopted for BERT fine-tuning (Mass et al., 2019) using triplets (q, a, a′), where (q, a) constitutes an FAQ pair and a′ is a negative sampled answer as 4https://www.elastic.co/ 5Searching only the q or a fields obtained inferior results 809 follows. For each question q we have positive answers {ai} from all the pairs {(q, ai)}.6 Negative examples are randomly selected from those FAQ that do not have q as their question. To further challenge the model into learning small nuances between close answers, instead of sampling the negative examples from all FAQ pairs, we run q against the q+a field of the search index (from Section 3.1 above). We then sample only among the top-k (e.g., k = 100) retrieved pairs, that do not have q as their question. Our BERT-Q-a is different from that of (Sakata et al., 2019) in two aspects. First, (Sakata et al., 2019) fine tunes a BERT model for Q-to-a matching using both FAQ (q, a) pairs as well as user queries and their matched answers (Q, a). This is, therefore, a supervised setting, since user queries are not part of the FAQ and thus require labeling efforts. Compared to that, we fine tune the BERT-Q-a using only FAQ (q, a) pairs. Second, unlike (Sakata et al., 2019), which fine-tunes BERT for a classification task (i.e., point-wise training) we train a triplet network (Hoffer and Ailon, 2015) that learns the relative preferences between a question and a pair of answers. Our network thus implements a pair-wise learning-to-rank approach (Li, 2011). At inference time, given a user query Q and the top-k retrieved (q, a) pairs, we re-rank the (q, a) pairs using the score of each (Q, a) pair as assigned by the fine-tuned BERT-Q-a model (Mass et al., 2019). 3.4 BERT model for Q-to-q similarity The second BERT model, BERT-Q-q, is independent from the first BERT-Q-a model (Section 3.3) and is trained to match user queries to FAQ questions. To fine-tune this model, we generate a weakly-supervised dataset from the FAQ pairs. Inspired by (Anaby-Tavor et al., 2019), we fine-tune a generative pre-training (GPT-2) neural network model (Radford, 2018) for generating question paraphrases. 
GPT-2 is pre-trained on huge bodies of text, capturing the natural language structure and producing deeply coherent text paragraphs. Intuitively, we would like to use the FAQ answers to generate paraphrases to questions. Unlike the work of (Anaby-Tavor et al., 2019) which fine 6Usually i = 1, i.e., there is a single answer for each FAQ question q. Yet, it is possible that i > 1. tunes a GPT-2 model given classes, where each class has a title and several examples, here we consider each answer a as a class with only one example which is its question q. We thus concatenate all the FAQ pairs into a long text U = a1 SEP q1 EOS · · · an SEP qn EOS, where answers precede their questions,7 having EOS and SEP as special tokens. The former separates between FAQ pairs and the latter separates answers from their questions inside the pairs. The GPT-2 fine-tuning samples a sequence of l consecutive tokens wj−l, · · · , wj from U and maximizes the conditional probability P(wj|wj−l, . . . , wj−1) of wj to appear next in the sequence. We repeat this process several times. Once the model is fine-tuned, we feed it with the text “a SEP”, (a is an answer in an FAQ pair (q, a)), and let it generate tokens until EOS. We take all generated tokens until EOS, as a paraphrase to a’s question q. By repeating this generation process we may generate any number of question paraphrases. For example, the paraphrase “Is there a way to deactivate my account on Facebook?” was generated for the question “How do I delete my Facebook account?”. One obstacle in using generated text is the noise it may introduce. To overcome this problem we apply a filtering step as follows. The idea is to keep only paraphrases that are semantically similar to their original question (i.e., have similar answers). Let GT(q)={(q, ai)} be the FAQ pairs of question q (i.e., the ground truth answers of q). For each generated paraphrase p of q, we run p as a query against the FAQ index (See section 3.1), and check that among the returned top-k results, there are at least min(n, |GT(q)|) pairs from GT(q) for some n. In the experiments (see Section 4 below) we used k=10 and n=2. To select the best paraphrases for each question q, we further sort the paraphrases that passed the above filter, by the score of their top-1 returned (q, a) pair (when running each paraphrase p as a query against the FAQ index). The motivation is that a higher score of a returned (q, a) for a query p, implies a higher similarity between p and q. 8 Similar to the BERT-Q-a, this model is finetuned using triplets (p, q, q′), where p is a paraphrase of q and q′ is a randomly selected question 7FAQ questions with more than one answer are treated here as different questions. 8The filtered paraphrases can be downloaded from https://github.com/YosiMass/faq-retrieval 810 from the FAQ questions. At inference time, given a user query Q and the top-k retrieved (q, a) pairs, we re-rank the answers (q, a) answers, using the score of each (Q, q) pair as assigned by the finetuned BERT-Q-q model (Mass et al., 2019). 3.5 Re-rankers combination We combine the three re-ranking methods (i.e., bm25-maxpsg and the two fined-tuned BERT models) using two alternative late-fusion methods. The first one, CombSUM (Kurland and Culpepper, 2018), calculates a combined score by summing for each candidate pair the scores that were assigned to it by the three re-ranking methods.9 Following (Roitman, 2018), as a second alternative, we implement the PoolRank method. 
PoolRank first ranks the candidate pairs using CombSUM. The top pairs are then used to induce an unsupervised query expansion step (the RM1 model (Lavrenko and Croft, 2001)), which is used to re-rank the whole candidate pool. (Further following (Roitman, 2018), we use the normalized CombSUM fusion scores as the weak-relevance labels for the RM1 model estimation.)

4 Experiments

4.1 Datasets

We use two FAQ datasets in our evaluation, namely FAQIR (Karan and Šnajder, 2016) and StackFAQ (Karan and Šnajder, 2018). The FAQIR dataset (http://takelab.fer.hr/data/faqir/) was derived from the "maintenance & repair" domain of the Yahoo! Answers community QA (CQA) website. It consists of 4313 FAQ pairs and 1233 user queries. The StackFAQ dataset (http://takelab.fer.hr/data/StackFAQ) was derived from the "web apps" domain of the StackExchange CQA website. It consists of 719 FAQ pairs (resulting from 125 threads; some questions have more than one answer) and 1249 user queries.

4.2 Baselines

On both datasets, we compare against the results of the various methods that were evaluated in (Karan and Šnajder, 2018), namely: RC – an ensemble of three unsupervised methods (BM25, vector space and word embeddings); ListNet and LambdaMART – two (supervised) learning-to-rank methods that were trained over a diverse set of text similarity features; and CNN-Rank – a (supervised) learning-to-rank approach based on a convolutional neural network (CNN). On the StackFAQ dataset, we further report the result of (Sakata et al., 2019), which serves as the strongest supervised baseline. This baseline combines two methods: TSUBAKI (Shinzato et al., 2008) – a search engine for Q-to-q matching; and a supervised fine-tuned BERT model for Q-to-a matching. We include the results of this work (which are available only for the StackFAQ dataset) only to show that our approach can reach the quality of a supervised approach, not to directly compare with it.

4.3 Experimental setup

We used ElasticSearch to index the FAQ pairs. For the first re-ranker (Section 3.2) we used a sliding window of size 100 characters with 10% overlap. For fine-tuning the BERT-Q-a model, we randomly sampled 2 and 5 negative examples for each positive example (q, a) on the FAQIR and StackFAQ datasets, respectively. To fine-tune GPT-2 for generating the question paraphrases (Section 3.4), we segmented U into consecutive sequences of l = 100 tokens each. We used OpenAI's medium-sized GPT-2 English model (24 layers, 1024 hidden units, 16 heads, 345M parameters). We then used the fine-tuned model to generate 100 paraphrases for each question q and selected the top-10 that passed filtering (as described in Section 3.4). Overall, on FAQIR, 22,736 paraphrases passed the filter and enriched 3,532 out of the 4,313 questions. On StackFAQ, 856 paraphrases passed the filter and enriched 109 out of the 125 thread questions. Similar to the BERT-Q-a fine-tuning, we selected 2 and 5 negative examples for each (p, q) (paraphrase-question) pair on FAQIR and StackFAQ, respectively. The two BERT models used the pre-trained BERT-Base-Uncased model (12 layers, 768 hidden units, 12 heads, 110M parameters). Fine-tuning was done with a learning rate of 2e-5 and 3 training epochs.

Similar to previous works, we used the following metrics: precision at 5 (P@5), Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR), calculated on an initial candidate list of 100 FAQ pairs retrieved by the search engine using standard BM25.
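For completeness, the three metrics can be computed as follows. This is a generic sketch of ours (using one common definition of average precision), not code from the paper; `ranked` is the ordered result list for one query and `relevant` is the set of its relevant FAQ pairs.

def precision_at_k(ranked, relevant, k=5):
    return sum(1 for d in ranked[:k] if d in relevant) / k

def average_precision(ranked, relevant):
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

def reciprocal_rank(ranked, relevant):
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def evaluate(run, qrels, k=5):
    """`run`: {query: ranked candidate list}; `qrels`: {query: set of relevant pairs}."""
    queries = list(run)
    p_at_k = sum(precision_at_k(run[q], qrels[q], k) for q in queries) / len(queries)
    map_score = sum(average_precision(run[q], qrels[q]) for q in queries) / len(queries)
    mrr = sum(reciprocal_rank(run[q], qrels[q]) for q in queries) / len(queries)
    return {"P@5": p_at_k, "MAP": map_score, "MRR": mrr}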
4.4 Results

Table 1 reports the results for the two datasets. (Similar to (Karan and Šnajder, 2018), the FAQIR initial retrieval is done against a subset of 789 FAQ pairs that are relevant to at least one user query.) We compare the base BM25 retrieval (bm25 (q+a)), our three proposed unsupervised re-ranking methods (bm25-maxpsg, BERT-Q-a and BERT-Q-q) and their fusion-based combinations (CombSUM and PoolRank) with the state-of-the-art unsupervised and supervised baselines. We also compare to PoolRank+, which is the same as PoolRank except that the two BERT models (i.e., BERT-Q-a and BERT-Q-q) are fine-tuned on the union of the respective training sets of both the FAQIR and StackFAQ datasets.

We observe that, among our three re-rankers, BERT-Q-q was the best. For example, on FAQIR it achieved 0.67, 0.61 and 0.90 for P@5, MAP and MRR, respectively, compared to 0.54, 0.50 and 0.81 obtained by bm25-maxpsg. This confirms previous findings (Karan and Šnajder, 2016) that Q-to-q matching gives the best signal in FAQ retrieval. Furthermore, on both datasets, the fusion methods achieved better results than the individual re-rankers, with the PoolRank variants outperforming CombSUM. An exception is FAQIR, where BERT-Q-q achieved the same results as the CombSUM fusion. As mentioned above, BERT-Q-q performs significantly better on FAQIR than the other two individual re-rankers, and a simple fusion method such as CombSUM cannot handle such cases well. PoolRank, which uses a relevance model, is a better approach and thus gives better fusion results.

Further comparing with the baselines, we can see that, on FAQIR, our unsupervised PoolRank outperformed all other methods, including the supervised ones, on all three metrics. On StackFAQ, PoolRank outperformed all other methods except the supervised TSUBAKI+BERT (Sakata et al., 2019). We note that our unsupervised PoolRank+ achieved 0.75, 0.88 and 0.90 for P@5, MAP and MRR, respectively, which is quite close to the supervised results (0.78, 0.90 and 0.94, respectively) of (Sakata et al., 2019).

Table 1: Evaluation results

FAQIR           P@5    MAP    MRR
bm25 (q+a)      0.48   0.44   0.74
bm25-maxpsg     0.54   0.50   0.81
BERT-Q-a        0.53   0.46   0.81
BERT-Q-q        0.67   0.61   0.90
CombSUM         0.67   0.61   0.90
PoolRank        0.69   0.62   0.88
PoolRank+       0.69   0.62   0.88
RC              0.58   0.53   0.80
ListNet         0.57   0.53   0.80
LambdaMART      0.61   0.57   0.84
CNN-Rank        0.66   0.58   0.85

StackFAQ        P@5    MAP    MRR
bm25 (q+a)      0.56   0.67   0.79
bm25-maxpsg     0.63   0.75   0.81
BERT-Q-a        0.54   0.63   0.81
BERT-Q-q        0.68   0.82   0.80
CombSUM         0.72   0.85   0.91
PoolRank        0.74   0.87   0.88
PoolRank+       0.75   0.88   0.90
RC              0.52   0.63   0.80
ListNet         0.51   0.54   0.70
LambdaMART      0.60   0.74   0.84
CNN-Rank        0.62   0.74   0.84
TSUBAKI+BERT    0.78   0.90   0.94

5 Summary and Conclusions

We presented a fully unsupervised method for FAQ retrieval. The method is based on an initial retrieval of FAQ candidates followed by three re-rankers. The first is based on an IR passage-retrieval approach, and the other two are independent BERT models that are fine-tuned to predict query-to-answer and query-to-question matching. We showed that we can overcome the "unsupervised gap" by generating high-quality question paraphrases and using them to fine-tune the query-to-question BERT model. We experimentally showed that our unsupervised method is on par with, and sometimes even outperforms, existing supervised methods.

References

Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling.
2019. Not enough data? deep learning to the rescue! CoRR, abs/1911.03118. Michael Bendersky and Oren Kurland. 2008. Utilizing passage-based language models for document retrieval. In Proceedings of the IR Research, 30th European Conference on Advances in Information Retrieval, ECIR’08, pages 162–174, Berlin, Heidelberg. Springer-Verlag. 812 Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the askmsr question-answering system. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, EMNLP ’02, pages 257–264, Stroudsburg, PA, USA. Association for Computational Linguistics. Robin Burke, Kristian Hammond, Vladimir Kulyukin, Steven Lytinen, Noriko Tomuro, and Scott Schoenberg. 1997. Question answering from frequently asked question files: Experiences with the FAQ FINDER system. AI Magazine, 18:57–66. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Sparsh Gupta and Vitor R. Carvalho. 2019. FAQ retrieval using attentive matching. In Proceedings of the 42Nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’19, pages 929–932, New York, NY, USA. ACM. Mathias Gry and Christine Largeron. 2011. BM25t: A BM25 extension for focused information retrieval. Knowledge and Information Systems - KAIS, 32:1– 25. Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings. Mladen Karan and Jan ˇSnajder. 2016. FAQIR - a frequently asked questions retrieval test collection. In Text, Speech, and Dialogue (TSD), volume 9924. Springer. Mladen Karan and Jan ˇSnajder. 2018. Paraphrasefocused learning to rank for domain-specific frequently asked questions retrieval. Expert Systems with Applications, 91:418 – 433. Mladen Karan, Lovro ˇZmak, and Jan ˇSnajder. 2013. Frequently asked questions retrieval for Croatian based on semantic textual similarity. In Proceedings of the 4th Biennial International Workshop on BaltoSlavic Natural Language Processing, pages 24–33, Sofia, Bulgaria. Association for Computational Linguistics. Harksoo Kim and Jungyun Seo. 2006. Highperformance faq retrieval using an automatic clustering method of query logs. Information Processing and Management, 42(3):650 – 661. Harksoo Kim and Jungyun Seo. 2008. Cluster-based faq retrieval using latent term weights. IEEE Intelligent Systems, 23(2):58–65. Oren Kurland and J. Shane Culpepper. 2018. Fusion in information retrieval: Sigir 2018 half-day tutorial. In The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’18, pages 1383–1386, New York, NY, USA. ACM. Victor Lavrenko and W. Bruce Croft. 2001. Relevance based language models. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’01, pages 120–127, New York, NY, USA. ACM. Hang Li. 2011. A short introduction to learning to rank. IEICE Transactions, 94-D:1854–1862. Yosi Mass, Haggai Roitman, Shai Erera, Or Rivlin, Bar Weiner, and David Konopnicki. 2019. 
A study of bert for non-factoid question-answering under passage length constraints. CoRR, abs/1908.06780. Barun Patra. 2017. A survey of community question answering. CoRR, abs/1705.04009. Alec Radford. 2018. Improving language understanding by generative pre-training. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3(4):333–389. Haggai Roitman. 2018. Utilizing pseudo-relevance feedback in fusion-based retrieval. In Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR ’18, pages 203–206, New York, NY, USA. ACM. Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. FAQ retrieval using queryquestion similarity and bert-based query-answer relevance. CoRR, abs/1905.02851. Keiji Shinzato, Tomohide Shibata, Daisuke Kawahara, Chikara Hashimoto, and Sadao Kurohashi. 2008. TSUBAKI: An open search engine infrastructure for developing new information access methodology. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I. Chung-Hsien Wu, Jui-Feng Yeh, and Ming-Jun Chen. 2005. Domain-specific faq retrieval using independent aspects. ACM Transactions on Asian Language Information Processing, 4(1):117. Xiaoqiang Zhou, Baotian Hu, Qingcai Chen, Buzhou Tang, and Xiaolong Wang. 2015. Answer sequence learning with neural networks for answer selection in community question answering. CoRR, abs/1506.06490.
2020
74
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8342 Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks Suchin Gururangan† Ana Marasovi´c†♦ Swabha Swayamdipta† Kyle Lo† Iz Beltagy† Doug Downey† Noah A. Smith†♦ †Allen Institute for Artificial Intelligence, Seattle, WA, USA ♦Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA {suching,anam,swabhas,kylel,beltagy,dougd,noah}@allenai.org Abstract Language models pretrained on text from a wide variety of sources form the foundation of today’s NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining indomain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task’s unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multiphase adaptive pretraining offers large gains in task performance. 1 Introduction Today’s pretrained language models are trained on massive, heterogeneous corpora (Raffel et al., 2019; Yang et al., 2019). For instance, ROBERTA (Liu et al., 2019) was trained on over 160GB of uncompressed text, with sources ranging from Englishlanguage encyclopedic and news articles, to literary works and web content. Representations learned by such models achieve strong performance across many tasks with datasets of varying sizes drawn from a variety of sources (e.g., Wang et al., 2018, 2019). This leads us to ask whether a task’s textual domain—a term typically used to denote a distribution over language characterizing a given topic or genre (such as “science” or “mystery novels”)—is still relevant. Do the latest large pretrained models work universally or is it still helpful to build Figure 1: An illustration of data distributions. Task data is comprised of an observable task distribution, usually non-randomly sampled from a wider distribution (light grey ellipsis) within an even larger target domain, which is not necessarily one of the domains included in the original LM pretraining domain – though overlap is possible. We explore the benefits of continued pretraining on data from the task distribution and the domain distribution. separate pretrained models for specific domains? While some studies have shown the benefit of continued pretraining on domain-specific unlabeled data (e.g., Lee et al., 2019), these studies only consider a single domain at a time and use a language model that is pretrained on a smaller and less diverse corpus than the most recent language models. Moreover, it is not known how the benefit of continued pretraining may vary with factors like the amount of available labeled task data, or the proximity of the target domain to the original pretraining corpus (see Figure 1). We address this question for one such highperforming model, ROBERTA (Liu et al., 2019) (§2). 
We consider four domains (biomedical and computer science publications, news, and reviews; §3) and eight classification tasks (two in each domain). For targets that are not already in-domain for ROBERTA, our experiments show that contin8343 ued pretraining on the domain (which we refer to as domain-adaptive pretraining or DAPT) consistently improves performance on tasks from the target domain, in both high- and low-resource settings. Above, we consider domains defined around genres and forums, but it is also possible to induce a domain from a given corpus used for a task, such as the one used in supervised training of a model. This raises the question of whether pretraining on a corpus more directly tied to the task can further improve performance. We study how domainadaptive pretraining compares to task-adaptive pretraining, or TAPT, on a smaller but directly taskrelevant corpus: the unlabeled task dataset (§4), drawn from the task distribution. Task-adaptive pretraining has been shown effective (Howard and Ruder, 2018), but is not typically used with the most recent models. We find that TAPT provides a large performance boost for ROBERTA, with or without domain-adaptive pretraining. Finally, we show that the benefits from taskadaptive pretraining increase when we have additional unlabeled data from the task distribution that has been manually curated by task designers or annotators. Inspired by this success, we propose ways to automatically select additional task-relevant unlabeled text, and show how this improves performance in certain low-resource cases (§5). On all tasks, our results using adaptive pretraining techniques are competitive with the state of the art. In summary, our contributions include: • a thorough analysis of domain- and taskadaptive pretraining across four domains and eight tasks, spanning low- and high-resource settings; • an investigation into the transferability of adapted LMs across domains and tasks; and • a study highlighting the importance of pretraining on human-curated datasets, and a simple data selection strategy to automatically approach this performance. Our code as well as pretrained models for multiple domains and tasks are publicly available.1 2 Background: Pretraining Learning for most NLP research systems since 2018 consists of training in two stages. First, a neural language model (LM), often with millions of parameters, is trained on large unlabeled cor1https://github.com/allenai/ dont-stop-pretraining pora. The word (or wordpiece; Wu et al. 2016) representations learned in the pretrained model are then reused in supervised training for a downstream task, with optional updates (fine-tuning) of the representations and network from the first stage. One such pretrained LM is ROBERTA (Liu et al., 2019), which uses the same transformerbased architecture (Vaswani et al., 2017) as its predecessor, BERT (Devlin et al., 2019). It is trained with a masked language modeling objective (i.e., cross-entropy loss on predicting randomly masked tokens). The unlabeled pretraining corpus for ROBERTA contains over 160 GB of uncompressed raw text from different English-language corpora (see Appendix §A.1). ROBERTA attains better performance on an assortment of tasks than its predecessors, making it our baseline of choice. Although ROBERTA’s pretraining corpus is derived from multiple sources, it has not yet been established if these sources are diverse enough to generalize to most of the variation in the English language. 
In other words, we would like to understand what is out of ROBERTA’s domain. Towards this end, we explore further adaptation by continued pretraining of this large LM into two categories of unlabeled data: (i) large corpora of domain-specific text (§3), and (ii) available unlabeled data associated with a given task (§4). 3 Domain-Adaptive Pretraining Our approach to domain-adaptive pretraining (DAPT) is straightforward—we continue pretraining ROBERTA on a large corpus of unlabeled domain-specific text. The four domains we focus on are biomedical (BIOMED) papers, computer science (CS) papers, newstext from REALNEWS, and AMAZON reviews. We choose these domains because they have been popular in previous work, and datasets for text classification are available in each. Table 1 lists the specifics of the unlabeled datasets in all four domains, as well as ROBERTA’s training corpus.1 3.1 Analyzing Domain Similarity Before performing DAPT, we attempt to quantify the similarity of the target domain to ROBERTA’s pretraining domain. We consider domain vocabularies containing the top 10K most frequent unigrams (excluding stopwords) in comparably sized 1For BIOMED and CS, we used an internal version of S2ORC that contains papers that cannot be released due to copyright restrictions. 8344 Domain Pretraining Corpus # Tokens Size LROB. LDAPT BIOMED 2.68M full-text papers from S2ORC (Lo et al., 2020) 7.55B 47GB 1.32 0.99 CS 2.22M full-text papers from S2ORC (Lo et al., 2020) 8.10B 48GB 1.63 1.34 NEWS 11.90M articles from REALNEWS (Zellers et al., 2019) 6.66B 39GB 1.08 1.16 REVIEWS 24.75M AMAZON reviews (He and McAuley, 2016) 2.11B 11GB 2.10 1.93 ROBERTA (baseline) see Appendix §A.1 N/A 160GB ‡1.19 Table 1: List of the domain-specific unlabeled datasets. In columns 5 and 6, we report ROBERTA’s masked LM loss on 50K randomly sampled held-out documents from each domain before (LROB.) and after (LDAPT) DAPT (lower implies a better fit on the sample). ‡ indicates that the masked LM loss is estimated on data sampled from sources similar to ROBERTA’s pretraining corpus. PT News Reviews BioMed CS PT News Reviews BioMed CS 100.0 54.1 34.5 27.3 19.2 54.1 100.0 40.0 24.9 17.3 34.5 40.0 100.0 18.3 12.7 27.3 24.9 18.3 100.0 21.4 19.2 17.3 12.7 21.4 100.0 Figure 2: Vocabulary overlap (%) between domains. PT denotes a sample from sources similar to ROBERTA’s pretraining corpus. Vocabularies for each domain are created by considering the top 10K most frequent words (excluding stopwords) in documents sampled from each domain. random samples of held-out documents in each domain’s corpus. We use 50K held-out documents for each domain other than REVIEWS, and 150K held-out documents in REVIEWS, since they are much shorter. We also sample 50K documents from sources similar to ROBERTA’s pretraining corpus (i.e., BOOKCORPUS, STORIES, WIKIPEDIA, and REALNEWS) to construct the pretraining domain vocabulary, since the original pretraining corpus is not released. Figure 2 shows the vocabulary overlap across these samples. We observe that ROBERTA’s pretraining domain has strong vocabulary overlap with NEWS and REVIEWS, while CS and BIOMED are far more dissimilar to the other domains. This simple analysis suggests the degree of benefit to be expected by adaptation of ROBERTA to different domains—the more dissimilar the domain, the higher the potential for DAPT. 3.2 Experiments Our LM adaptation follows the settings prescribed for training ROBERTA. 
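The vocabulary-overlap analysis above (Section 3.1) can be sketched as follows before describing the adaptation experiments. This is our own illustrative Python code; the tokenization, stopword list, and the exact normalization of the overlap percentage are assumptions, since the paper does not specify them.

from collections import Counter

def domain_vocab(documents, stopwords, top_k=10_000):
    """Top-k most frequent unigrams (excluding stopwords) in a document sample."""
    counts = Counter()
    for doc in documents:
        counts.update(tok for tok in doc.lower().split()  # whitespace tokenization, for illustration
                      if tok.isalpha() and tok not in stopwords)
    return {tok for tok, _ in counts.most_common(top_k)}

def vocab_overlap(vocab_a, vocab_b):
    """Overlap percentage between two equally-sized vocabularies (cf. Figure 2);
    one plausible normalization, since the paper does not state it explicitly."""
    return 100.0 * len(vocab_a & vocab_b) / max(len(vocab_a), 1)

# Hypothetical usage: news_docs and reviews_docs are held-out document samples.
# overlap = vocab_overlap(domain_vocab(news_docs, STOPWORDS), domain_vocab(reviews_docs, STOPWORDS))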
We train ROBERTA on each domain for 12.5K steps, which amounts to single pass on each domain dataset, on a v3-8 TPU; see other details in Appendix B. This second phase of pretraining results in four domain-adapted LMs, one for each domain. We present the masked LM loss of ROBERTA on each domain before and after DAPT in Table 1. We observe that masked LM loss decreases in all domains except NEWS after DAPT, where we observe a marginal increase. We discuss cross-domain masked LM loss in Appendix §E. Under each domain, we consider two text classification tasks, as shown in Table 2. Our tasks represent both high- and low-resource (≤5K labeled training examples, and no additional unlabeled data) settings. For HYPERPARTISAN, we use the data splits from Beltagy et al. (2020). For RCT, we represent all sentences in one long sequence for simultaneous prediction. Baseline As our baseline, we use an off-the-shelf ROBERTA-base model and perform supervised fine-tuning of its parameters for each classification task. On average, ROBERTA is not drastically behind the state of the art (details in Appendix §A.2), and serves as a good baseline since it provides a single LM to adapt to different domains. Classification Architecture Following standard practice (Devlin et al., 2019) we pass the final layer [CLS] token representation to a task-specific feedforward layer for prediction (see Table 14 in Appendix for more hyperparameter details). Results Test results are shown under the DAPT column of Table 3 (see Appendix §C for validation results). We observe that DAPT improves over ROBERTA in all domains. For BIOMED, CS, and REVIEWS, we see consistent improve8345 Domain Task Label Type Train (Lab.) Train (Unl.) Dev. Test Classes BIOMED CHEMPROT relation classification 4169 2427 3469 13 †RCT abstract sent. roles 18040 30212 30135 5 CS ACL-ARC citation intent 1688 114 139 6 SCIERC relation classification 3219 455 974 7 NEWS HYPERPARTISAN partisanship 515 5000 65 65 2 †AGNEWS topic 115000 5000 7600 4 REVIEWS †HELPFULNESS review helpfulness 115251 5000 25000 2 †IMDB review sentiment 20000 50000 5000 25000 2 Table 2: Specifications of the various target task datasets. † indicates high-resource settings. Sources: CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), SCIERC (Luan et al., 2018), HYPERPARTISAN (Kiesel et al., 2019), AGNEWS (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), IMDB (Maas et al., 2011). Dom. Task ROBA. DAPT ¬DAPT BM CHEMPROT 81.91.0 84.20.2 79.41.3 †RCT 87.20.1 87.60.1 86.90.1 CS ACL-ARC 63.05.8 75.42.5 66.44.1 SCIERC 77.31.9 80.81.5 79.20.9 NEWS HYP. 86.60.9 88.25.9 76.44.9 †AGNEWS 93.90.2 93.90.2 93.50.2 REV. †HELPFUL. 65.13.4 66.51.4 65.12.8 †IMDB 95.00.2 95.40.2 94.10.4 Table 3: Comparison of ROBERTA (ROBA.) and DAPT to adaptation to an irrelevant domain (¬ DAPT). Reported results are test macro-F1, except for CHEMPROT and RCT, for which we report micro-F1, following Beltagy et al. (2019). We report averages across five random seeds, with standard deviations as subscripts. † indicates high-resource settings. Best task performance is boldfaced. See §3.3 for our choice of irrelevant domains. ments over ROBERTA, demonstrating the benefit of DAPT when the target domain is more distant from ROBERTA’s source domain. The pattern is consistent across high- and low- resource settings. 
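As a concrete reference for the classification architecture described above (the final-layer [CLS] representation passed to a task-specific feedforward layer), the following is a minimal sketch of ours using the HuggingFace transformers library; the model name and hyperparameters are placeholders rather than the values reported in the appendix.

from torch import nn
from transformers import AutoModel

class RobertaClassifier(nn.Module):
    """Final-layer representation of the first token (<s>, RoBERTa's [CLS]
    equivalent) passed through a task-specific feedforward layer."""
    def __init__(self, model_name="roberta-base", num_labels=2, dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Sequential(nn.Dropout(dropout),
                                        nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask=None):
        hidden_states = self.encoder(input_ids=input_ids,
                                     attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden_states[:, 0])  # logits for the task labels

In line with the supervised fine-tuning setup described above, both the encoder and the task-specific head would be updated during task fine-tuning.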
Although DAPT does not increase performance on AGNEWS, the benefit we observe in HYPERPARTISAN suggests that DAPT may be useful even for tasks that align more closely with ROBERTA’s source domain. 3.3 Domain Relevance for DAPT Additionally, we compare DAPT against a setting where for each task, we adapt the LM to a domain outside the domain of interest. This controls for the case in which the improvements over ROBERTA might be attributed simply to exposure to more data, regardless of the domain. In this setting, for NEWS, we use a CS LM; for REVIEWS, a BIOMED LM; for CS, a NEWS LM; for BIOMED, a REVIEWS LM. We use the vocabulary overlap statistics in Figure 2 to guide these choices. Our results are shown in Table 3, where the last column (¬DAPT) corresponds to this setting. For each task, DAPT significantly outperforms adapting to an irrelevant domain, suggesting the importance of pretraining on domain-relevant data. Furthermore, we generally observe that ¬DAPT results in worse performance than even ROBERTA on end-tasks. Taken together, these results indicate that in most settings, exposure to more data without considering domain relevance is detrimental to end-task performance. However, there are two tasks (SCIERC and ACL-ARC) in which ¬DAPT marginally improves performance over ROBERTA. This may suggest that in some cases, continued pretraining on any additional data is useful, as noted in Baevski et al. (2019). 3.4 Domain Overlap Our analysis of DAPT is based on prior intuitions about how task data is assigned to specific domains. For instance, to perform DAPT for HELPFULNESS, we only adapt to AMAZON reviews, but not to any REALNEWS articles. However, the gradations in Figure 2 suggest that the boundaries between domains are in some sense fuzzy; for example, 40% of unigrams are shared between REVIEWS and NEWS. As further indication of this overlap, we also qualitatively identify documents that overlap cross-domain: in Table 4, we showcase reviews and REALNEWS articles that are similar to these reviews (other examples can be found in Appendix §D). In fact, we find that adapting ROBERTA to 8346 IMDB review REALNEWS article “The Shop Around the Corner“ is one of the great films from director Ernst Lubitsch . In addition to the talents of James Stewart and Margaret Sullavan , it’s filled with a terrific cast of top character actors such as Frank Morgan and Felix Bressart. [...] The makers of “You’ve Got Mail“ claim their film to be a remake , but that’s just nothing but a lot of inflated self praise. Anyway, if you have an affection for romantic comedies of the 1940 ’s, you’ll find “The Shop Around the Corner“ to be nothing short of wonderful. Just as good with repeat viewings. [...] Three great festive films... The Shop Around the Corner (1940) Delightful Comedy by Ernst Lubitsch stars James Stewart and Margaret Sullavan falling in love at Christmas. Remade as You’ve Got Mail. [...] HELPFULNESS review REALNEWS article Simply the Best! I’ve owned countless Droids and iPhones, but this one destroys them all. Samsung really nailed it with this one, extremely fast , very pocketable, gorgeous display , exceptional battery life , good audio quality, perfect GPS & WiFi performance, transparent status bar, battery percentage, ability to turn off soft key lights, superb camera for a smartphone and more! [...] We’re living in a world with a new Samsung. [...] more on battery life later [...] Exposure is usually spot on and focusing is very fast. [...] 
The design, display, camera and performance are all best in class, and the phone feels smaller than it looks. [...] Table 4: Examples that illustrate how some domains might have overlaps with others, leading to unexpected positive transfer. We highlight expressions in the reviews that are also found in the REALNEWS articles. NEWS not as harmful to its performance on REVIEWS tasks (DAPT on NEWS achieves 65.52.3 on HELPFULNESS and 95.00.1 on IMDB). Although this analysis is by no means comprehensive, it indicates that the factors that give rise to observable domain differences are likely not mutually exclusive. It is possible that pretraining beyond conventional domain boundaries could result in more effective DAPT; we leave this investigation to future work. In general, the provenance of data, including the processes by which corpora are curated, must be kept in mind when designing pretraining procedures and creating new benchmarks that test out-of-domain generalization abilities. 4 Task-Adaptive Pretraining Datasets curated to capture specific tasks of interest tend to cover only a subset of the text available within the broader domain. For example, the CHEMPROT dataset for extracting relations between chemicals and proteins focuses on abstracts of recently-published, high-impact articles from hand-selected PubMed categories (Krallinger et al., 2017, 2015). We hypothesize that such cases where the task data is a narrowly-defined subset of the broader domain, pretraining on the task dataset itself or data relevant to the task may be helpful. Task-adaptive pretraining (TAPT) refers to pretraining on the unlabeled training set for a given task; prior work has shown its effectiveness (e.g. Howard and Ruder, 2018). Compared to domainadaptive pretraining (DAPT; §3), the task-adaptive approach strikes a different trade-off: it uses a far smaller pretraining corpus, but one that is much more task-relevant (under the assumption that the training set represents aspects of the task well). This makes TAPT much less expensive to run than DAPT, and as we show in our experiments, the performance of TAPT is often competitive with that of DAPT. 4.1 Experiments Similar to DAPT, task-adaptive pretraining consists of a second phase of pretraining ROBERTA, but only on the available task-specific training data. In contrast to DAPT, which we train for 12.5K steps, we perform TAPT for 100 epochs. We artificially augment each dataset by randomly masking different words (using the masking probability of 0.15) across epochs. As in our DAPT experiments, we pass the final layer [CLS] token representation to a task-specific feedforward layer for classification (see Table 14 in Appendix for more hyperparameter details). Our results are shown in the TAPT column of Table 5. TAPT consistently improves the ROBERTA baseline for all tasks across domains. Even on the news domain, which was part of ROBERTA pretraining corpus, TAPT improves over ROBERTA, showcasing the advantage of task adaptation. Particularly remarkable are the relative differences between TAPT and DAPT. DAPT is more resource intensive (see Table 9 in §5.3), but TAPT manages to match its performance in some of the tasks, such as SCIERC. In RCT, HYPERPARTISAN, AGNEWS, HELPFULNESS, and IMDB, the results even exceed those of DAPT, highlighting the efficacy of this cheaper adaptation technique. 
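The dynamic-masking augmentation used for TAPT can be sketched as follows. This is a sketch of ours using the HuggingFace transformers library; it shows a single masked batch, whereas the actual TAPT phase runs for 100 epochs with the hyperparameters given in the appendix, and task_texts is a placeholder for the unlabeled task data.

from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Placeholder for the unlabeled text of the task's training set.
task_texts = ["An example unlabeled task sentence.", "Another unlabeled task sentence."]

# Dynamic masking: a fresh random 15% of tokens is masked each time a batch is
# collated, so every epoch sees a differently-masked copy of the task data.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)

encodings = tokenizer(task_texts, truncation=True, max_length=512)
examples = [{"input_ids": ids} for ids in encodings["input_ids"]]
batch = collator(examples)   # masked input_ids plus MLM labels
loss = model(**batch).loss   # cross-entropy on the masked positions
loss.backward()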
8347 Additional Pretraining Phases Domain Task ROBERTA DAPT TAPT DAPT + TAPT BIOMED CHEMPROT 81.91.0 84.20.2 82.60.4 84.40.4 †RCT 87.20.1 87.60.1 87.70.1 87.80.1 CS ACL-ARC 63.05.8 75.42.5 67.41.8 75.63.8 SCIERC 77.31.9 80.81.5 79.31.5 81.31.8 NEWS HYPERPARTISAN 86.60.9 88.25.9 90.45.2 90.06.6 †AGNEWS 93.90.2 93.90.2 94.50.1 94.60.1 REVIEWS †HELPFULNESS 65.13.4 66.51.4 68.51.9 68.71.8 †IMDB 95.00.2 95.40.1 95.50.1 95.60.1 Table 5: Results on different phases of adaptive pretraining compared to the baseline ROBERTA (col. 1). Our approaches are DAPT (col. 2, §3), TAPT (col. 3, §4), and a combination of both (col. 4). Reported results follow the same format as Table 3. State-of-the-art results we can compare to: CHEMPROT (84.6), RCT (92.9), ACL-ARC (71.0), SCIERC (81.8), HYPERPARTISAN (94.8), AGNEWS (95.5), IMDB (96.2); references in §A.2. BIOMED RCT CHEMPROT TAPT 87.70.1 82.60.5 Transfer-TAPT 87.10.4 (↓0.6) 80.40.6 (↓2.2) NEWS HYPERPARTISAN AGNEWS TAPT 89.99.5 94.50.1 Transfer-TAPT 82.27.7 (↓7.7) 93.90.2 (↓0.6) CS ACL-ARC SCIERC TAPT 67.41.8 79.31.5 Transfer-TAPT 64.12.7 (↓3.3) 79.12.5 (↓0.2) REVIEWS HELPFULNESS IMDB TAPT 68.51.9 95.70.1 Transfer-TAPT 65.02.6 (↓3.5) 95.00.1 (↓0.7) Table 6: Though TAPT is effective (Table 5), it is harmful when applied across tasks. These findings illustrate differences in task distributions within a domain. Combined DAPT and TAPT We investigate the effect of using both adaptation techniques together. We begin with ROBERTA and apply DAPT then TAPT under this setting. The three phases of pretraining add up to make this the most computationally expensive of all our settings (see Table 9). As expected, combined domain- and task-adaptive pretraining achieves the best performance on all tasks (Table 5).2 Overall, our results show that DAPT followed by TAPT achieves the best of both worlds of domain and task awareness, yielding the best performance. While we speculate that TAPT followed by DAPT would be susceptible to catastrophic forgetting of the task-relevant corpus (Yogatama et al., 2019), alternate methods of combining the procedures may result in better downstream performance. Future work may explore pretraining with a more sophisticated curriculum of domain and task distributions. 2Results on HYPERPARTISAN match those of TAPT, within a standard deviation arising from the five seeds. Cross-Task Transfer We complete the comparison between DAPT and TAPT by exploring whether adapting to one task transfers to other tasks in the same domain. For instance, we further pretrain the LM using the RCT unlabeled data, fine-tune it with the CHEMPROT labeled data, and observe the effect. We refer to this setting as Transfer-TAPT. Our results for tasks in all four domains are shown in Table 6. We see that TAPT optimizes for single task performance, to the detriment of cross-task transfer. These results demonstrate that data distributions of tasks within a given domain might differ. Further, this could also explain why adapting only to a broad domain is not sufficient, and why TAPT after DAPT is effective. 5 Augmenting Training Data for Task-Adaptive Pretraining In §4, we continued pretraining the LM for task adaptation using only the training data for a supervised task. Inspired by the success of TAPT, we next investigate another setting where a larger pool of unlabeled data from the task distribution exists, 8348 Pretraining BIOMED NEWS REVIEWS RCT-500 HYP. 
IMDB † TAPT 79.81.4 90.45.2 95.50.1 DAPT + TAPT 83.00.3 90.06.6 95.60.1 Curated-TAPT 83.40.3 89.99.5 95.70.1 DAPT + Curated-TAPT 83.80.5 92.13.6 95.80.1 Table 7: Mean test set macro-F1 (for HYP. and IMDB) and micro-F1 (for RCT-500), with CuratedTAPT across five random seeds, with standard deviations as subscripts. † indicates high-resource settings. typically curated by humans. We explore two scenarios. First, for three tasks (RCT, HYPERPARTISAN, and IMDB) we use this larger pool of unlabeled data from an available human-curated corpus (§5.1). Next, we explore retrieving related unlabeled data for TAPT, from a large unlabeled in-domain corpus, for tasks where extra human-curated data is unavailable (§5.2). 5.1 Human Curated-TAPT Dataset creation often involves collection of a large unlabeled corpus from known sources. This corpus is then downsampled to collect annotations, based on the annotation budget. The larger unlabeled corpus is thus expected to have a similar distribution to the task’s training data. Moreover, it is usually available. We explore the role of such corpora in task-adaptive pretraining. Data We simulate a low-resource setting RCT500, by downsampling the training data of the RCT dataset to 500 examples (out of 180K available), and treat the rest of the training data as unlabeled. The HYPERPARTISAN shared task (Kiesel et al., 2019) has two tracks: low- and high-resource. We use 5K documents from the high-resource setting as Curated-TAPT unlabeled data and the original lowresource training documents for task fine-tuning. For IMDB, we use the extra unlabeled data manually curated by task annotators, drawn from the same distribution as the labeled data (Maas et al., 2011). Results We compare Curated-TAPT to TAPT and DAPT + TAPT in Table 7. Curated-TAPT further improves our prior results from §4 across all three datasets. Applying Curated-TAPT after adapting to the domain results in the largest boost in performance on all tasks; in HYPERPARTISAN, DAPT + Curated-TAPT is within standard deviation of Curated-TAPT. Moreover, curated-TAPT achieves Figure 3: An illustration of automated data selection (§5.2). We map unlabeled CHEMPROT and 1M BIOMED sentences to a shared vector space using the VAMPIRE model trained on these sentences. Then, for each CHEMPROT sentence, we identify k nearest neighbors, from the BIOMED domain. Pretraining BIOMED CS CHEMPROT RCT-500 ACL-ARC ROBERTA 81.91.0 79.30.6 63.05.8 TAPT 82.60.4 79.81.4 67.41.8 RAND-TAPT 81.90.6 80.60.4 69.73.4 50NN-TAPT 83.30.7 80.80.6 70.72.8 150NN-TAPT 83.20.6 81.20.8 73.32.7 500NN-TAPT 83.30.7 81.70.4 75.51.9 DAPT 84.20.2 82.50.5 75.42.5 Table 8: Mean test set micro-F1 (for CHEMPROT and RCT) and macro-F1 (for ACL-ARC), across five random seeds, with standard deviations as subscripts, comparing RAND-TAPT (with 50 candidates) and kNNTAPT selection. Neighbors of the task data are selected from the domain data. 95% of the performance of DAPT + TAPT with the fully labeled RCT corpus (Table 5) with only 0.3% of the labeled data. These results suggest that curating large amounts of data from the task distribution is extremely beneficial to end-task performance. We recommend that task designers release a large pool of unlabeled task data for their tasks to aid model adaptation through pretraining. 
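The RCT-500 simulation described above can be expressed in a few lines. This is a minimal sketch of ours, assuming each training example is a dict with a 'text' field (an assumed schema, not the paper's code):

import random

def simulate_low_resource(train_examples, num_labeled=500, seed=0):
    """Keep `num_labeled` labeled examples; treat the remaining training data as
    the unlabeled pool for curated task-adaptive pretraining (labels discarded)."""
    rng = random.Random(seed)
    shuffled = list(train_examples)
    rng.shuffle(shuffled)
    labeled = shuffled[:num_labeled]
    unlabeled_texts = [ex["text"] for ex in shuffled[num_labeled:]]  # 'text' is an assumed field name
    return labeled, unlabeled_texts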
5.2 Automated Data Selection for TAPT Consider a low-resource scenario without access to large amounts of unlabeled data to adequately benefit from TAPT, as well as absence of computational resources necessary for DAPT (see Table 9 for details of computational requirements for different pretraining phases). We propose simple unsuper8349 vised methods to retrieve unlabeled text that aligns with the task distribution, from a large in-domain corpus. Our approach finds task-relevant data from the domain by embedding text from both the task and domain in a shared space, then selects candidates from the domain based on queries using the task data. Importantly, the embedding method must be lightweight enough to embed possibly millions of sentences in a reasonable time. Given these constraints, we employ VAMPIRE (Gururangan et al., 2019; Figure 3), a lightweight bag-of-words language model. We pretrain VAMPIRE on a large deduplicated3 sample of the domain (1M sentences) to obtain embeddings of the text from both the task and domain sample. We then select k candidates of each task sentence from the domain sample, in embeddings space. Candidates are selected (i) via nearest neighbors selection (kNN-TAPT)4, or (ii) randomly (RAND-TAPT). We continue pretraining ROBERTA on this augmented corpus with both the task data (as in TAPT) as well as the selected candidate pool. Results Results in Table 8 show that kNN-TAPT outperforms TAPT for all cases. RAND-TAPT is generally worse than kNN-TAPT, but within a standard deviation arising from 5 seeds for RCT and ACLARC. As we increase k, kNN-TAPT performance steadily increases, and approaches that of DAPT. Appendix F shows examples of nearest neighbors of task data. Future work might consider a closer study of kNN-TAPT, more sophisticated data selection methods, and the tradeoff between the diversity and task relevance of selected examples. 5.3 Computational Requirements The computational requirements for all our adaptation techniques on RCT-500 in the BIOMED domain in Table 9. TAPT is nearly 60 times faster to train than DAPT on a single v3-8 TPU and storage requirements for DAPT on this task are 5.8M times that of TAPT. Our best setting of DAPT + TAPT amounts to three phases of pretraining, and at first glance appears to be very expensive. However, once the LM has been adapted to a broad domain, it can be reused for multiple tasks within that domain, with only a single additional TAPT phase per task. While Curated-TAPT tends to achieve the best cost3We deduplicated this set to limit computation, since different sentences can share neighbors. 4We use a flat search index with cosine similarity between embeddings with the FAISS (Johnson et al., 2019) library. Pretraining Steps Docs. Storage F1 ROBERTA 79.30.6 TAPT 0.2K 500 80KB 79.81.4 50NN-TAPT 1.1K 24K 3MB 80.80.6 150NN-TAPT 3.2K 66K 8MB 81.20.8 500NN-TAPT 9.0K 185K 24MB 81.70.4 Curated-TAPT 8.8K 180K 27MB 83.40.3 DAPT 12.5K 25M 47GB 82.50.5 DAPT + TAPT 12.6K 25M 47GB 83.00.3 Table 9: Computational requirements for adapting to the RCT-500 task, comparing DAPT (§3) and the various TAPT modifications described in §4 and §5. benefit ratio in this comparison, one must also take into account the cost of curating large in-domain data. Automatic methods such as kNN-TAPT are much cheaper than DAPT. 
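The candidate selection step itself is straightforward once the embeddings are available. Below is our own sketch using FAISS, assuming the VAMPIRE embeddings for the task sentences and the (deduplicated) domain sample have already been computed as float32 numpy arrays; it is an illustration, not the authors' implementation.

import numpy as np
import faiss  # Johnson et al. (2019)

def select_candidates(task_emb, domain_emb, k=50):
    """Return (deduplicated) indices of the k nearest domain sentences for each
    task sentence, using cosine similarity on a flat (exact) index."""
    domain = np.ascontiguousarray(domain_emb, dtype="float32")
    task = np.ascontiguousarray(task_emb, dtype="float32")
    faiss.normalize_L2(domain)  # cosine similarity == inner product on unit vectors
    faiss.normalize_L2(task)
    index = faiss.IndexFlatIP(domain.shape[1])
    index.add(domain)
    _, neighbors = index.search(task, k)
    return sorted(set(neighbors.ravel().tolist()))

# Hypothetical usage: task_emb and domain_emb are precomputed VAMPIRE embeddings.
# candidate_ids = select_candidates(task_emb, domain_emb, k=50)

The selected domain sentences are then added to the task data for continued masked-LM pretraining, as in TAPT.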
6 Related Work Transfer learning for domain adaptation Prior work has shown the benefit of continued pretraining in domain (Alsentzer et al., 2019; Chakrabarty et al., 2019; Lee et al., 2019).5 We have contributed further investigation of the effects of a shift between a large, diverse pretraining corpus and target domain on task performance. Other studies (e.g., Huang et al., 2019) have trained language models (LMs) in their domain of interest, from scratch. In contrast, our work explores multiple domains, and is arguably more cost effective, since we continue pretraining an already powerful LM. Task-adaptive pretraining Continued pretraining of a LM on the unlabeled data of a given task (TAPT) has been show to be beneficial for endtask performance (e.g. in Howard and Ruder, 2018; Phang et al., 2018; Sun et al., 2019). In the presence of domain shift between train and test data distributions of the same task, domain-adaptive pretraining (DAPT) is sometimes used to describe what we term TAPT (Logeswaran et al., 2019; Han and Eisenstein, 2019). Related approaches include language modeling as an auxiliary objective to task classifier fine-tuning (Chronopoulou et al., 2019; Radford et al., 2018) or consider simple syntactic structure of the input while adapting to task-specific 5In contrast, Peters et al. (2019) find that the JensenShannon divergence on term distributions between BERT’s pretraining corpora and each MULTINLI domain (Williams et al., 2018) does not predict its performance, though this might be an isolated finding specific to the MultiNLI dataset. 8350 Training Data Domain (Unlabeled) Task (Unlabeled) Task (Labeled) ROBERTA ✓ DAPT ✓ ✓ TAPT ✓ ✓ DAPT + TAPT ✓ ✓ ✓ kNN-TAPT (Subset) ✓ ✓ Curated-TAPT (Extra) ✓ Table 10: Summary of strategies for multi-phase pretraining explored in this paper. data (Swayamdipta et al., 2019). We compare DAPT and TAPT as well as their interplay with respect to dataset size for continued pretraining (hence, expense of more rounds of pretraining), relevance to a data sample of a given task, and transferability to other tasks and datasets. See Table 11 in Appendix §A for a summary of multi-phase pretraining strategies from related work. Data selection for transfer learning Selecting data for transfer learning has been explored in NLP (Moore and Lewis, 2010; Ruder and Plank, 2017; Zhang et al., 2019, among others). Dai et al. (2019) focus on identifying the most suitable corpus to pretrain a LM from scratch, for a single task: NER, whereas we select relevant examples for various tasks in §5.2. Concurrent to our work, Aharoni and Goldberg (2020) propose data selection methods for NMT based on cosine similarity in embedding space, using DISTILBERT (Sanh et al., 2019) for efficiency. In contrast, we use VAMPIRE, and focus on augmenting TAPT data for text classification tasks. Khandelwal et al. (2020) introduced kNN-LMs that allows easy domain adaptation of pretrained LMs by simply adding a datastore per domain and no further training; an alternative to integrate domain information in an LM. Our study of human-curated data §5.1 is related to focused crawling (Chakrabarti et al., 1999) for collection of suitable data, especially with LM reliance (Remus and Biemann, 2016). What is a domain? Despite the popularity of domain adaptation techniques, most research and practice seems to use an intuitive understanding of domains. 
A small body of work has attempted to address this question (Lee, 2001; Eisenstein et al., 2014; van der Wees et al., 2015; Plank, 2016; Ruder et al., 2016, among others). For instance, Aharoni and Goldberg (2020) define domains by implicit clusters of sentence representations in pretrained LMs. Our results show that DAPT and TAPT complement each other, which suggests a spectra of domains defined around tasks at various levels of granularity (e.g., Amazon reviews for a specific product, all Amazon reviews, all reviews on the web, the web). 7 Conclusion We investigate several variations for adapting pretrained LMs to domains and tasks within those domains, summarized in Table 10. Our experiments reveal that even a model of hundreds of millions of parameters struggles to encode the complexity of a single textual domain, let alone all of language. We show that pretraining the model towards a specific task or small corpus can provide significant benefits. Our findings suggest it may be valuable to complement work on ever-larger LMs with parallel efforts to identify and use domain- and taskrelevant corpora to specialize models. While our results demonstrate how these approaches can improve ROBERTA, a powerful LM, the approaches we studied are general enough to be applied to any pretrained LM. Our work points to numerous future directions, such as better data selection for TAPT, efficient adaptation large pretrained language models to distant domains, and building reusable language models after adaptation. Acknowledgments The authors thank Dallas Card, Mark Neumann, Nelson Liu, Eric Wallace, members of the AllenNLP team, and anonymous reviewers for helpful feedback, and Arman Cohan for providing data. This research was supported in part by the Office of Naval Research under the MURI grant N0001418-1-2670. References Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In ACL. To appear. Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven pretraining of self-attention networks. In EMNLP. 8351 Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In EMNLP. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. Soumen Chakrabarti, Martin van den Berg, and Byron Dom. 1999. Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery. Comput. Networks, 31:1623–1640. Tuhin Chakrabarty, Christopher Hidey, and Kathy McKeown. 2019. IMHO fine-tuning improves claim detection. In NAACL. Ciprian Chelba, Tomas Mikolov, Michael Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling. In INTERSPEECH. Alexandra Chronopoulou, Christos Baziotis, and Alexandros Potamianos. 2019. An embarrassingly simple approach for transfer learning from pretrained language models. In NAACL. Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classification. In EMNLP. Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2019. Using similarity measures to select pretraining data for NER. In NAACL. 
Franck Dernoncourt and Ji Young Lee. 2017. Pubmed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In IJCNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A Smith. 2019. Show your work: Improved reporting of experimental results. In EMNLP. Jacob Eisenstein, Brendan O’connor, Noah A. Smith, and Eric P. Xing. 2014. Diffusion of lexical change in social media. PloS ONE. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In NLP-OSS. Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText Corpus. Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. In ACL. Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In EMNLP. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL. Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. ClinicalBERT: Modeling clinical notes and predicting hospital readmission. arXiv:1904.05342. Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data. David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. TACL. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In ICLR. To appear. Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval2019 Task 4: Hyperpartisan news detection. In SemEval. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Martin Krallinger, Obdulia Rabal, Saber Ahmad Akhondi, Mart´ın P´erez P´erez, J´es´us L´opez Santamar´ıa, Gael P´erez Rodr´ıguez, Georgios Tsatsaronis, Ander Intxaurrondo, Jos´e Antonio Baso L´opez, Umesh Nandal, E. M. van Buel, A. Poorna Chandrasekhar, Marleen Rodenburg, Astrid Lægreid, Marius A. Doornenbal, Julen Oyarz´abal, An´alia Lourenc¸o, and Alfonso Valencia. 2017. Overview of the biocreative vi chemical-protein interaction track. In Proceedings of the BioCreative VI Workshop. Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M Lowe, et al. 2015. The chemdner corpus of chemicals and drugs and its annotation principles. Journal of cheminformatics, 7(1):S2. Jens Kringelum, Sonny Kim Kjærulff, Søren Brunak, Ole Lund, Tudor I. Oprea, and Olivier Taboureau. 2016. ChemProt-3.0: a global chemical biology diseases mapping. In Database. 8352 David YW Lee. 2001. Genres, registers, text types, domains and styles: Clarifying the concepts and navigating a path through the BNC jungle. 
Language Learning & Technology. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S. Weld. 2020. S2ORC: The Semantic Scholar Open Research Corpus. In ACL. To appear. Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In ACL. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In EMNLP. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In ACM SIGIR. Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Ranjan Mishra, and Chitta Baral. 2020. Exploring ways to incorporate additional knowledge to improve natural language commonsense question answering. arXiv:1909.08855v3. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In ACL. Sebastian Nagel. 2016. CC-NEWS. Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing. Proceedings of the 18th BioNLP Workshop and Shared Task. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In RepL4NLP. Jason Phang, Thibault F´evry, and Samuel R. Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv:1811.01088. Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. In KONVENS. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Colin Raffel, Noam Shazeer, Adam Kaleo Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683. Steffen Remus and Chris Biemann. 2016. DomainSpecific Corpus Expansion with Focused Webcrawling. In LREC. Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. Towards a continuous modeling of natural language domains. In Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian optimization. In EMNLP. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In EMC2 @ NeurIPS. Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In CCL. Swabha Swayamdipta, Matthew Peters, Brendan Roof, Chris Dyer, and Noah A Smith. 2019. Shallow syntax in deep water. arXiv:1908.11047. Tan Thongtan and Tanasanee Phienthrakul. 2019. 
Sentiment classification using document embeddings trained with cosine similarity. In ACL SRW. Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In NeurIPS. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In BlackboxNLP @ EMNLP. Marlies van der Wees, Arianna Bisazza, Wouter Weerkamp, and Christof Monz. 2015. What’s in a domain? Analyzing genre and topic differences in statistical machine translation. In ACL. 8353 Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art natural language processing. arXiv:1910.03771. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019a. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In NAACL. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019b. Review conversational reading comprehension. arXiv:1902.00821v2. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tom´as Kocisk´y, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NeurIPS. Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, and Kevin Duh. 2019. Curriculum learning for domain adaptation in neural machine translation. In NAACL. Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV. 8354 Appendix Overview In this supplementary material, we provide: (i) additional information for producing the results in the paper, and (ii) results that we could not fit into the main body of the paper. Appendix A. A tabular overview of related work described in Section §6, a description of the corpus used to train ROBERTA in Liu et al. (2019), and references to the state of the art on our tasks. Appendix B. Details about the data preprocessing, training, and implementation of domain- and taskadaptive pretraining. 
Appendix C. Development set results. Appendix D. Examples of domain overlap. Appendix E. The cross-domain masked LM loss and reproducibility challenges. Appendix F. Illustration of our data selection method and examples of nearest neighbours. A Related Work Table 11 shows which of the strategies for continued pretraining have already been explored in the prior work from the Related Work (§6). As evident from the table, our work compares various strategies as well as their interplay using a pretrained language model trained on a much more heterogeneous pretraining corpus. A.1 ROBERTA’s Pretraining Corpus ROBERTA was trained on data from BOOKCORPUS (Zhu et al., 2015),6 WIKIPEDIA,7 a portion of the CCNEWS dataset (Nagel, 2016),8 OPENWEBTEXT corpus of Web content extracted from URLs shared on Reddit (Gokaslan and Cohen, 2019),9 and a subset of CommonCrawl that it is said to resemble the “story-like” style of WINOGRAD schemas (STORIES; Trinh and Le, 2018).10 A.2 State of the Art In this section, we specify the models achieving state of the art on our tasks. See the caption of 6https://github.com/soskek/bookcorpus 7https://github.com/google-research/ bert 8https://github.com/fhamborg/ news-please 9https://github.com/jcpeterson/ openwebtext 10https://github.com/tensorflow/models/ tree/master/research/lm_commonsense Table 5 for the reported performance of these models. For ACL-ARC, that is SCIBERT (Beltagy et al., 2019), a BERT-base model for trained from scratch on scientific text. For CHEMPROT and SCIERC, that is S2ORC-BERT (Lo et al., 2020), a similar model to SCIBERT. For AGNEWS and IMDB, XLNet-large, a much larger model. For RCT, Cohan et al. (2019). For HYPERPARTISAN, LONGFORMER, a modified Transformer language model for long documents (Beltagy et al., 2020). Thongtan and Phienthrakul (2019) report a higher number (97.42) on IMDB, but they train their word vectors on the test set. Our baseline establishes the first benchmark for the HELPFULNESS dataset. B Experimental Setup Preprocessing for DAPT The unlabeled corpus in each domain was pre-processed prior to language model training. Abstracts and body paragraphs from biomedical and computer science articles were used after sentence splitting using scispaCy (Neumann et al., 2019). We used summaries and full text of each news article, and the entire body of review from Amazon reviews. For both news and reviews, we perform sentence splitting using spaCy (Honnibal and Montani, 2017). Training details for DAPT We train ROBERTA on each domain for 12.5K steps. We focused on matching all the domain dataset sizes (see Table 1) such that each domain is exposed to the same amount of data as for 12.5K steps it is trained for. AMAZON reviews contain more documents, but each is shorter. We used an effective batch size of 2048 through gradient accumulation, as recommended in Liu et al. (2019). See Table 13 for more hyperparameter details. Training details for TAPT We use the same pretraining hyperparameters as DAPT, but we artificially augmented each dataset for TAPT by randomly masking different tokens across epochs, using the masking probability of 0.15. Each dataset was trained for 100 epochs. For tasks with less than 5K examples, we used a batch size of 256 through gradient accumulation. See Table 13 for more hyperparameter details. Optimization We used the Adam optimizer (Kingma and Ba, 2015), a linear learning rate scheduler with 6% warm-up, a maximum learning rate of 0.0005. 
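As a concrete illustration of this optimization setup (using the hyperparameters reported in Table 13), the following is a minimal PyTorch/transformers sketch; the specific classes used here (AdamW, get_linear_schedule_with_warmup) and the roberta-base checkpoint name are our choices for illustration, not necessarily the exact calls in the original codebase.

import torch
from transformers import RobertaForMaskedLM, get_linear_schedule_with_warmup

model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Adam settings from Table 13: betas (0.9, 0.98), epsilon 1e-6, weight decay 0.01,
# maximum learning rate 0.0005.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4,
                              betas=(0.9, 0.98), eps=1e-6, weight_decay=0.01)

num_training_steps = 12_500                           # DAPT; TAPT instead runs for 100 epochs
num_warmup_steps = int(0.06 * num_training_steps)     # 6% linear warm-up
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)

# Inside the training loop: optimizer.step(); scheduler.step(); optimizer.zero_grad()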
When we used a batch size of 256, we 8355 DAPT Domains (if applicable) Tasks Model DAPT TAPT DAPT + TAPT kNNTAPT CuratedTAPT This Paper biomedical & computer science papers, news, reviews 8 classification tasks ROBERTA ✓ ✓ ✓ ✓ ✓ Aharoni and Goldberg (2020) NMT DISTILBERT + Transformer NMT similar Alsentzer et al. (2019) clinical text NER, NLI, de-identification (BIO)BERT ✓ Chakrabarty et al. (2019) opinionated claims from Reddit claim detection ULMFIT ✓ ✓ Chronopoulou et al. (2019) 5 classification tasks ULMFIT† similar Han and Eisenstein (2019) NER in historical texts ELMO, BERT ✓ Howard and Ruder (2018) 6 classification tasks ULMFIT ✓ Khandelwal et al. (2020) language modeling Transformer LM similar Lee et al. (2019) biomedical papers NER, QA, relation extraction BERT ✓ Logeswaran et al. (2019) zero-shot entity linking in Wikia BERT ✓ Mitra et al. (2020) commonsense QA BERT ✓ Phang et al. (2018) GLUE tasks ELMO, BERT, GPT ✓ Radford et al. (2018) NLI, QA, similarity, classification GPT similar Sun et al. (2019) sentiment, question, topic 7 classification tasks BERT ✓ ✓ Swayamdipta et al. (2019) NER, parsing, classification ELMO similar Xu et al. (2019a) reviews RC, aspect extract., sentiment classification BERT ✓ ✓ ✓ Xu et al. (2019b) restaurant reviews, laptop reviews conversational RC BERT ✓ ✓ Table 11: Overview of prior work across strategies for continued pre-training summarized in Table 10. ULMFIT is pretrained on English Wikipedia; ULMFIT† on English tweets; ELMO on the 1BWORDBENCHMARK (newswire; Chelba et al., 2014); GPT on BOOKCORPUS; BERT on English Wikipedia and BOOKCORPUS. In comparison to these pretraining corpora, ROBERTA’s pretraining corpus is substantially more diverse (see Appendix §A.1). used a maximum learning rate of 0.0001, as recommended in Liu et al. (2019). We observe a high variance in performance between random seeds when fine-tuning ROBERTA to HYPERPARTISAN, because the dataset is extremely small. To produce final results on this task, we discard and resample degenerate seeds. We display the full hyperparameter settings in Table 13. Implementation Our LM implementation uses the HuggingFace transformers library (Wolf et al., 2019)11 and PyTorch XLA for TPU compatibility.12 Each adaptive pretraining exper11https://github.com/huggingface/ transformers 12https://github.com/pytorch/xla iment was performed on a single v3-8 TPU from Google Cloud.13 For the text classification tasks, we used AllenNLP (Gardner et al., 2018). Following standard practice (Devlin et al., 2019) we pass the final layer [CLS] token representation to a task-specific feedforward layer for prediction. C Development Set Results Adhering to the standards suggested by Dodge et al. (2019) for replication, we report our development set results in Tables 15, 17, and 18. 13http://github.com/allenai/ tpu-pretrain 8356 D Analysis of Domain Overlap In Table 20 we display additional examples that highlight the overlap between IMDB reviews and REALNEWS articles, relevant for analysis in §3.1. E Analysis of Cross-Domain Masked LM Loss In Section §3.2, we provide ROBERTA’s masked LM loss before and after DAPT. We display crossdomain masked-LM loss in Table 12, where we evaluate masked LM loss on text samples in other domains after performing DAPT. We observe that the cross-domain masked-LM loss mostly follows our intuition and insights from the paper, i.e. ROBERTA’s pretraining corpus and NEWS are closer, and BIOMED to CS (relative to other domains). 
However, our analysis in §3.1 illustrates that REVIEWS and NEWS also have some similarities. This is supported with the loss of ROBERTA that is adapted to NEWS, calculated on a sample of REVIEWS. However, ROBERTA that is adapted to REVIEWS results in the highest loss for a NEWS sample. This is the case for all domains. One of the properties that distinguishes REVIEWS from all other domains is that its documents are significantly shorter. In general, we find that cross-DAPT masked-LM loss can in some cases be a noisy predictor of domain similarity. F k-Nearest Neighbors Data Selection In Table 21, we display nearest neighbor documents in the BIOMED domain identified by our selection method, on the RCT dataset. 8357 Data Sample Unseen During DAPT PT BIOMED CS NEWS REVIEWS ROBERTA 1.19 1.32 1.63 1.08 2.10 DAPT      BIOMED 1.63 0.99 1.63 1.69 2.59 CS 1.82 1.43 1.34 1.92 2.78 NEWS 1.33 1.50 1.82 1.16 2.16 REVIEWS 2.07 2.23 2.44 2.27 1.93 Table 12: ROBERTA’s (row 1) and domain-adapted ROBERTA’s (rows 2–5) masked LM loss on randomly sampled held-out documents from each domain (lower implies a better fit). PT denotes a sample from sources similar to ROBERTA’s pretraining corpus. The lowest masked LM for each domain sample is boldfaced. Computing Infrastructure Google Cloud v3-8 TPU Model implementations https://github.com/allenai/tpu_pretrain Hyperparameter Assignment number of steps 100 epochs (TAPT) or 12.5K steps (DAPT) batch size 256 or 2058 maximum learning rate 0.0001 or 0.0005 learning rate optimizer Adam Adam epsilon 1e-6 Adam beta weights 0.9, 0.98 learning rate scheduler None or warmup linear Weight decay 0.01 Warmup proportion 0.06 learning rate decay linear Table 13: Hyperparameters for domain- and task- adaptive pretraining. Computing Infrastructure Quadro RTX 8000 GPU Model implementation https://github.com/allenai/dont-stop-pretraining Hyperparameter Assignment number of epochs 3 or 10 patience 3 batch size 16 learning rate 2e-5 dropout 0.1 feedforward layer 1 feedforward nonlinearity tanh classification layer 1 Table 14: Hyperparameters for ROBERTA text classifier. 8358 Additional Pretraining Phases Domain Task ROBERTA DAPT TAPT DAPT + TAPT BIOMED CHEMPROT 83.21.4 84.10.5 83.00.6 84.10.5 †RCT 88.10.05 88.50.1 88.30.1 88.50.1 CS ACL-ARC 71.32.8 73.21.5 73.23.6 78.62.9 SCIERC 83.81.1 88.41.7 85.90.8 88.01.3 NEWS HYPERPARTISAN 84.01.5 79.13.5 82.73.3 80.82.3 †AGNEWS 94.30.1 94.30.1 94.70.1 94.90.1 REVIEWS †HELPFULNESS 65.53.4 66.51.4 69.22.4 69.42.1 †IMDB 94.80.1 95.30.1 95.40.1 95.70.2 Table 15: Results on different phases of adaptive pretraining compared to the baseline ROBERTA (col. 1). Our approaches are DAPT (col. 2, §3), TAPT (col. 3, §4), and a combination of both (col. 4). Reported results are development macro-F1, except for CHEMPROT and RCT, for which we report micro-F1, following Beltagy et al. (2019). We report averages across five random seeds, with standard deviations as subscripts. † indicates high-resource settings. Best task performance is boldfaced. State-of-the-art results we can compare to: CHEMPROT (84.6), RCT (92.9), ACL-ARC (71.0), SCIERC (81.8), HYPERPARTISAN (94.8), AGNEWS (95.5), IMDB (96.2); references in §A.2. Dom. Task ROB. DAPT ¬DAPT BM CHEMPROT 83.21.4 84.10.5 80.90.5 †RCT 88.10.0 88.50.1 87.90.1 CS ACL-ARC 71.32.8 73.21.5 68.15.4 SCIERC 83.81.1 88.41.7 83.90.9 NEWS HYP. 84.01.5 79.13.5 71.64.6 †AGNEWS 94.30.1 94.30.1 94.00.1 REV. †HELPFUL. 65.53.4 66.51.4 65.53.0 †IMDB 94.80.1 95.30.1 93.80.2 Table 16: Development comparison of ROBERTA (ROBA.) 
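To make the classifier configuration in Table 14 concrete, the sketch below shows the task-specific head described in Appendix B: the final-layer <s> ([CLS]) representation is passed through dropout, one tanh feedforward layer, and one classification layer. This is an illustrative PyTorch re-implementation, not the AllenNLP module used to produce the reported results.

import torch
import torch.nn as nn
from transformers import RobertaModel

class RobertaTextClassifier(nn.Module):
    # Final-layer [CLS] representation -> dropout -> tanh feedforward -> classifier.
    def __init__(self, num_labels, dropout=0.1):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(dropout)
        self.feedforward = nn.Linear(hidden, hidden)      # 1 feedforward layer (tanh)
        self.classifier = nn.Linear(hidden, num_labels)   # 1 classification layer

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(input_ids, attention_mask=attention_mask)[0]
        cls = hidden_states[:, 0]                         # representation of the first token
        return self.classifier(torch.tanh(self.feedforward(self.dropout(cls))))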
and DAPT to adaptation to an irrelevant domain (¬ DAPT). See §3.3 for our choice of irrelevant domains. Reported results follow the same format as Table 5. BIOMED RCT CHEMPROT TAPT 88.30.1 83.00.6 Transfer-TAPT 88.00.1 (↓0.3) 81.10.5 (↓1.9) NEWS HYPERPARTISAN AGNEWS TAPT 82.73.3 94.70.1 Transfer-TAPT 77.63.6 (↓5.1) 94.40.1 (↓0.4) CS ACL-ARC SCIERC TAPT 73.23.6 85.90.8 Transfer-TAPT 74.04.5 (↑1.2) 85.51.1 (↓0.4) AMAZON reviews HELPFULNESS IMDB TAPT 69.22.4 95.40.1 Transfer-TAPT 65.42.7 (↓3.8) 94.90.1 (↓0.5) Table 17: Development results for TAPT transferability. Pretraining BIOMED NEWS REVIEWS RCT-500 HYPERPARTISAN †IMDB TAPT 80.51.3 82.73.3 95.40.1 DAPT + TAPT 83.90.3 80.82.3 95.70.2 Curated-TAPT 84.40.3 84.91.9 95.80.1 DAPT + Curated-TAPT 84.50.3 83.13.7 96.00.1 Table 18: Mean development set macro-F1 (for HYPERPARTISAN and IMDB) and micro-F1 (for RCT-500), with Curated-TAPT across five random seeds, with standard deviations as subscripts. † indicates high-resource settings. 8359 Pretraining BIOMED CS CHEMPROT RCT-500 ACL-ARC ROBERTA 83.21.4 80.30.5 71.32.8 TAPT 83.00.6 80.51.3 73.23.6 RAND-TAPT 83.30.5 81.60.6 78.74.0 50NN-TAPT 83.30.8 81.70.5 70.13.5 150NN-TAPT 83.30.9 81.90.8 78.52.2 500NN-TAPT 84.50.4 82.60.4 77.42.3 DAPT 84.10.5 83.50.8 73.21.5 Table 19: Mean development set macro-F1 (for HYP. and IMDB) and micro-F1 (for RCT), across five random seeds, with standard deviations as subscripts, comparing RAND-TAPT (with 50 candidates) and kNN-TAPT selection. Neighbors of the task data are selected from the domain data. IMDB review REALNEWS article Spooks is enjoyable trash, featuring some well directed sequences, ridiculous plots and dialogue, and some third rate acting. Many have described this is a UK version of “24“, and one can see the similarities. The American version shares the weak silly plots, but the execution is so much slicker, sexier and I suspect, expensive. Some people describe weak comedy as “gentle comedy“. This is gentle spy story hour, the exact opposite of anything created by John Le Carre. Give me Smiley any day. [...] Remember poor Helen Flynn from Spooks? In 2002, the headlong BBC spy caper was in such a hurry to establish the high-wire stakes of its morally compromised world that Lisa Faulkner’s keen-as-mustard MI5 rookie turned out to be a lot more expendable than her prominent billing suggested. [...] Functioning as both a shocking twist and rather callous statement that No-One Is Safe, it gave the slick drama an instant patina of edginess while generating a record-breaking number of complaints. [...] The Sopranos is perhaps the most mind-opening series you could possibly ever want to watch. It’s smart, it’s quirky, it’s funny - and it carries the mafia genre so well that most people can’t resist watching. The best aspect of this show is the overwhelming realism of the characters, set in the subterranean world of the New York crime families. For most of the time, you really don’t know whether the wise guys will stab someone in the back, or buy them lunch. Further adding to the realistic approach of the characters in this show is the depth of their personalities - These are dangerous men, most of them murderers, but by God if you don’t love them too. I’ve laughed at their wisecracks, been torn when they’ve made err in judgement, and felt scared at the sheer ruthlessness of a serious criminal. [...] 
The drumbeat regarding the “Breaking Bad” finale has led to the inevitable speculation on whether the final chapter in this serialized gem will live up to the hype or disappoint (thank you, “Dexter,” for setting that bar pretty low), with debate, second-guessing and graduate-thesis-length analysis sure to follow. The Most Memorable TV Series Finales of AllTime [...] No ending in recent years has been more divisive than “The Sopranos” – for some, a brilliant flash (literally, in a way) of genius; for others (including yours truly), a too-cute copout, cryptically leaving its characters in perpetual limbo. The precedent to that would be “St. Elsewhere,” which irked many with its provocative, surreal notion that the whole series was, in fact, conjured in the mind of an autistic child. [...] The Wicker Man, starring Nicolas Cage, is by no means a good movie, but I can’t really say it’s one I regret watching. I could go on and on about the negative aspects of the movie, like the terrible acting and the lengthy scenes where Cage is looking for the girl, has a hallucination, followed by another hallucination, followed by a dream sequence- with a hallucination, etc., but it’s just not worth dwelling on when it comes to a movie like this. Instead, here’s five reasons why you SHOULD watch The Wicker Man, even though it’s bad: 5. It’s hard to deny that it has some genuinely creepy ideas to it, the only problem is in its cheesy, unintentionally funny execution. If nothing else, this is a movie that may inspire you to see the original 1973 film, or even read the short story on which it is based. 4. For a cheesy horror/thriller, it is really aesthetically pleasing. [...] NOTE: The Unrated version of the movie is the best to watch, and it’s better to watch the Theatrical version just for its little added on epilogue, which features a cameo from James Franco. [...] What did you ultimately feel about ”The Wicker Man” movie when all was said and done? [...] I’m a fan of the original and I’m glad that I made the movie because they don’t make movies like that anymore and probably the result of what ”Wicker Man” did is the reason why they don’t make movies like that anymore. Again, it’s kind of that ’70’s sensibility, but I’m trying to do things that are outside the box. Sometimes that means it’ll work and other times it won’t. Again though I’m going to try and learn from anything that I do. I think that it was a great cast, and Neil La Bute is one of the easiest directors that I’ve ever worked with. He really loves actors and he really gives you a relaxed feeling on the set, that you can achieve whatever it is that you’re trying to put together, but at the end of the day the frustration that I had with ‘The Wicker Man,’ which I think has been remedied on the DVD because I believe the DVD has the directors original cut, is that they cut the horror out of the horror film to try and get a PG-13 rating. I mean, I don’t know how to stop something like that. So I’m not happy with the way that the picture ended, but I’m happy with the spirit with which it was made. [...] Dr. Seuss would sure be mad right now if he was alive. Cat in the Hat proves to show how movie productions can take a classic story and turn it into a mindless pile of goop. We have Mike Myers as the infamous Cat in the Hat, big mistake! Myers proves he can’t act in this film. He acts like a prissy show girl with a thousand tricks up his sleeve. The kids in this movie are all right, somewhere in between the lines of dull and annoying. 
The story is just like the original with a couple of tweaks and like most movies based on other stories, never tweak with the original story! Bringing in the evil neighbor Quin was a bad idea. He is a stupid villain that would never get anywhere in life. [...] The Cat in the Hat, [...] Based on the book by Dr. Seuss [...] From the moment his tall, red-and-white-striped hat appears at their door, Sally and her brother know that the Cat in the Hat is the most mischievous cat they will ever meet. Suddenly the rainy afternoon is transformed by the Cat and his antics. Will their house ever be the same? Can the kids clean up before mom comes home? With some tricks (and a fish) and Thing Two and Thing One, with the Cat in The Hat, the fun’s never done!Dr. Seuss is known worldwide as the imaginative master of children’s literature. His books include a wonderful blend of invented and actual words, and his rhymes have helped many children and adults learn and better their understanding of the English language. [...] Table 20: Additional examples that highlight the overlap between IMDB reviews and REALNEWS articles. 8360 Source During median follow-up of 905 days ( IQR 773-1050 ) , 49 people died and 987 unplanned admissions were recorded ( totalling 5530 days in hospital ) . Neighbor 0 Of this group, 26% died after discharge from hospital, and the median time to death was 11 days (interquartile range, 4.0-15.0 days) after discharge. Neighbor 1 The median hospital stay was 17 days (range 8-26 days), and all the patients were discharged within 1 month. Neighbor 2 The median hospital stay was 17 days (range 8-26 days). Neighbor 3 The median time between discharge and death was 25 days (mean, 59.1 days) and no patient was alive after 193 days. Neighbor 4 The length of hospital stay after colostomy formation ranged from 3 days to 14 days with a median duration of 6 days (+IQR of 4 to 8 days). Source Randomized , controlled , parallel clinical trial . Neighbor 0 Design: Unblinded, randomised clinical controlled trial. Neighbor 1 These studies and others led to the phase III randomized trial RTOG 0617/NCCTG 0628/ CALGB 30609. Neighbor 2 -Definitive randomized controlled clinical trial (RCT): Neighbor 3 RCT 1 4 randomized controlled trial. Neighbor 4 randomized controlled trial [ Fig. 3(A)]. Source Forty primary molar teeth in 40 healthy children aged 5-9 years were treated by direct pulp capping . Neighbor 0 In our study, we specifically determined the usefulness of the Er:YAG laser in caries removal and cavity preparation of primary and young permanent teeth in children ages 4 to 1 8 years. Neighbor 1 Males watched more TV than females, although it was only in primary school-aged children and on weekdays. Neighbor 2 Assent was obtained from children and adolescents aged 7-17 years. Neighbor 3 Cardiopulmonary resuscitation was not applied to children aged ¡5 years (Table 2). Neighbor 4 It measures HRQoL in children and adolescents aged 2 to 25 years. Table 21: 5 nearest neighbors of sentences from the RCT dataset (Source) in the BIOMED domain (Neighbors 0–4).
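To complement the qualitative examples in Table 21, the snippet below sketches the retrieval step behind kNN-TAPT: each task document queries an index of candidate domain documents and its k nearest neighbours are added to the pretraining corpus. The use of FAISS (Johnson et al., 2019) with exact L2 search over precomputed document embeddings is an assumption made for illustration; the document embedding model itself is left unspecified here.

import numpy as np
import faiss  # similarity search library of Johnson et al. (2019)

def knn_select(task_emb, domain_emb, k):
    # task_emb:   (n_task, d) embeddings of the task documents (e.g. RCT).
    # domain_emb: (n_domain, d) embeddings of unlabeled domain documents (e.g. BIOMED).
    # Returns deduplicated indices of the k nearest domain documents for every
    # task document; the selected documents form the kNN-TAPT pretraining corpus.
    index = faiss.IndexFlatL2(domain_emb.shape[1])        # exact L2 search
    index.add(np.ascontiguousarray(domain_emb, dtype=np.float32))
    _, neighbours = index.search(np.ascontiguousarray(task_emb, dtype=np.float32), k)
    return np.unique(neighbours)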
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8361–8371 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8361 Estimating Mutual Information Between Dense Word Embeddings Vitalii Zhelezniak, Aleksandar Savkov & Nils Hammerla Babylon Health {firstname.lastname}@babylonhealth.com Abstract Word embedding-based similarity measures are currently among the top-performing methods on unsupervised semantic textual similarity (STS) tasks. Recent work has increasingly adopted a statistical view on these embeddings, with some of the top approaches being essentially various correlations (which include the famous cosine similarity). Another excellent candidate for a similarity measure is mutual information (MI), which can capture arbitrary dependencies between the variables and has a simple and intuitive expression. Unfortunately, its use in the context of dense word embeddings has so far been avoided due to difficulties with estimating MI for continuous data. In this work we go through a vast literature on estimating MI in such cases and single out the most promising methods, yielding a simple and elegant similarity measure for word embeddings. We show that mutual information is a viable alternative to correlations, gives an excellent signal that correlates well with human judgements of similarity and rivals existing state-of-the-art unsupervised methods. 1 Introduction Neural text embeddings learned from unlabeled data are a key component of modern approaches to semantic textual similarity (STS). Despite the impressive performance of large pretrained models (Kiros et al., 2015; Conneau et al., 2017; Subramanian et al., 2018; Cer et al., 2018; Peters et al., 2018; Radford, 2018; Devlin et al., 2018; Dai et al., 2019; Yang et al., 2019a) on a a plethora of hard NLP tasks, deep models do not currently offer a clear advantage over much simpler static word embeddings (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017; Joulin et al., 2017) on standard unsupervised STS benchmarks (Hill et al., 2016; Arora et al., 2017; Wieting et al., 2016; Wieting and Gimpel, 2018; Zhelezniak et al., 2019b,a,c). Instead, the main sources of improvement here have come from training on supervised paraphrastic corpora (Wieting et al., 2015, 2016; Wieting and Gimpel, 2018), designing better composition functions (Mitchell and Lapata, 2008; De Boom et al., 2016; Arora et al., 2017; Zhao and Mao, 2017; R¨uckl´e et al., 2018; Zhelezniak et al., 2019b,c; Yang et al., 2019b) and exploring novel similarity measures between word embeddings, in particular those inspired by optimal transport (Kusner et al., 2015; Huang et al., 2016), soft and fuzzy sets (Jimenez et al., 2010, 2015; Zhelezniak et al., 2019b), and statistics (Lev et al., 2015; Nikolentzos et al., 2017; Torki, 2018; Zhelezniak et al., 2019a,c). Recently, Zhelezniak et al. (2019a,c) advocated for a new statistical perspective on word embeddings where each word embedding itself is viewed as a sample of (e.g. 300) observations from some scalar random variable. They conducted a statistical analysis of several popular pretrained word embeddings and their compositions and established that the ubiquitous cosine similarity is practically equivalent to Pearson correlation. 
They also demonstrated significant gains in performance when one instead uses non-parametric rank correlation coefficients (Spearman’s ρ, Kendall’s τ) and cross-covariance operators between reproducing kernel Hilbert spaces (Hilbert-Schmidt independence criterion (HSIC) (Gretton et al., 2005), Centered Kernel Alignment (CKA)) (Cortes et al., 2012; Kornblith et al., 2019). One prominent alternative to those correlationbased approaches is mutual information (MI), which is of great importance in information theory and statistics. In some sense, mutual information is an excellent candidate for a similarity measure between word embeddings as it can capture arbitrary dependencies between the variables and has a simple and intuitive expression. Unfortunately, 8362 its use in the context of continuous dense word representations has so far been avoided due to the difficulties in estimating MI for continuous random variables (joint and marginal densities are not known in practice). In this work we make the first steps towards the adoption of MI as a measure of semantic similarity between dense word embeddings. We begin our discussion with how to apply MI for this purpose in principle. Next we carefully summarise the vast literature on estimation of MI for continuous random variables and identify approaches most suitable for our use case. Our chief goal here is to identify the estimators that yield elegant, almost closed-form expressions for the resulting similarity measure as opposed to complicated estimation procedures. Finally, we show that such estimators of mutual information give an excellent signal that correlates very well with human judgements and comfortably rivals existing state-of-the-art unsupervised STS approaches. 2 Background: Statistical Approaches to Word Embeddings Suppose we are given a word embedding matrix W ∈RN×D, where N is the vocabulary size and D is the embedding dimension (commonly D = 300). Ultimately, the matrix W is simply a table of some numbers and just like any dataset, it is subject to a statistical analysis. There are essentially two ways we can proceed: we can either choose to view W as N observations from D random variables or we can instead consider WT and view it as D observations from N random variables. The first approach allows us to study ‘global’ properties of the word embedding space (e.g. via PCA, clustering, etc.) and defines ‘global’ similarity structures, such as Mahalanobis distance, Fisher kernel (Lev et al., 2015), etc. In the second approach we study the distribution P(W1, W2, . . . , WN), where a word embedding wi is a sample of D (= 300) observations from some scalar random variable Wi corresponding to the word wi (Zhelezniak et al., 2019a,c). The ‘local’ similarity between two words wi and wj is then encoded in the dependencies between the corresponding random variables Wi, Wj. Since the distribution P(Wi, Wj) is unknown, we estimate these dependencies based on the sample wi, wj. Certain dependencies can be captured by Pearson, Spearman and Kendall correlation coefficients between word embeddings bρ(wi, wj), where the choice of the coefficient depends on the statistics of each word embedding model (Zhelezniak et al., 2019a). Conveniently, correlations can also be used to measure semantic similarity between two sets of words (e.g. phrases and sentences) if one considers the correlations between random vectors X = (X1, X2, . . . , Xlx) and Y = (Y1, Y2, . . . 
, Yly), where scalar random variables Xi correspond to the words in the first sentence and Yj to the words in the second sentence. This, for example, can be done by first pooling (e.g. mean- or max-pooling) random vectors into scalar variables Xpool and Ypool and then estimating univariate correlations corr(Xpool, Ypool) as before. Alternatively, we can measure correlations between random vectors directly using norms of cross-covariance matrices/operators (e.g. the Hilbert-Schmidt independence criterion (Gretton et al., 2005)). Both approaches are known to give excellent results on standard STS benchmarks (Zhelezniak et al., 2019c). A viable alternative to correlations is mutual information (MI), which can detect any kind of dependence between random variables, but which has so far not been explored for this problem.

3 Mutual Information between Dense Word Embeddings

We operate within the previous setting where we consider two sentences x = x1 x2 . . . xlx and y = y1 y2 . . . yly. Our goal now is to estimate the mutual information I(X; Y) between the corresponding random vectors X = (X1, X2, . . . , Xlx) and Y = (Y1, Y2, . . . , Yly),

I(X; Y) = ∫∫ pXY(x, y) log [ pXY(x, y) / (pX(x) pY(y)) ] dx dy,    (1)

where pXY(x, y) is the joint density of X and Y, and pX(x) = ∫_Y pXY(x, y) dy and pY(y) = ∫_X pXY(x, y) dx are the marginal densities. Unfortunately, these theoretical quantities are not available to us and we must somehow estimate Î(X; Y) directly from the word embeddings X̂ = (x(1), x(2), . . . , x(lx)) and Ŷ = (y(1), y(2), . . . , y(ly)).

Luckily, there is a vast literature on how to estimate mutual information between continuous random variables based on the sample. The first class of methods partitions the supports X, Y into a finite number of bins of equal or unequal (adaptive) size and estimates Î(X; Y) based on discrete counts in each bin (Moddemeijer, 1989; Fraser and Swinney, 1986; Darbellay and Vajda, 1999; Reshef et al., 2011; Ince et al., 2016). While such methods are easy to understand conceptually, they might suffer from the curse of dimensionality (especially when sentences are long) and in some sense violate our desire for an elegant closed-form similarity measure. The next class of methods constructs kernel density estimates (KDE) and then numerically integrates such approximate densities to obtain MI (Moon et al., 1995; Steuer et al., 2002). These methods might require a careful choice of kernels and the bandwidth parameters and also violate our simplicity requirement. The third class of methods that has recently gained popularity in the deep learning community is based on neural-network-based estimation of various bounds on mutual information (e.g. by training a critic to estimate the density ratio in (1)) (Suzuki et al., 2008; Alemi et al., 2017; Belghazi et al., 2018; Hjelm et al., 2019; Poole et al., 2019). Such estimators are usually differentiable and scale well to high dimensions and large sample sizes (Belghazi et al., 2018). However, in our case the sample size (e.g. 300) and dimensionality are not too large (at least for short phrases and sentences), and thus training a separate neural network for a simple similarity computation is hardly justified. This leaves us with the last class of methods that estimates mutual information from the k-nearest neighbour statistics (Kraskov et al., 2004; Ver Steeg and Galstyan, 2013; Ver Steeg, 2014; Ross, 2014; Gao et al., 2015; Gao et al., 2018).
These approaches are not without problems (Gao et al., 2015) and inherit the weaknesses of kNN in large dimensions, but are very simple to implement. In particular, we focus on the Kraskov–Stögbauer–Grassberger (KSG) estimator (Kraskov et al., 2004), which admits a particularly elegant expression for the resulting similarity measure.

3.1 The KSG Similarity Measure

It can be verified that the mutual information is given by I(X; Y) = H(X) + H(Y) − H(X, Y), i.e. the difference between the sum of marginal entropies and the joint entropy. Thus, in order to estimate MI, it is sufficient to be able to estimate the various entropies in the above equation. In their seminal work, Kozachenko and Leonenko (1987) show how to estimate such differential entropies based on nearest-neighbour statistics. Concretely, these methods approximate the log-density at a point by a uniform density in a norm ball (e.g. Euclidean or Chebyshev) containing its k nearest neighbours. Kraskov et al. (2004) modify this idea to construct their famous KSG estimator of mutual information, given by

KSG(X; Y) = ψ(D) + ψ(k) − Σ_{d=1}^{D} ( ψ(nx[d] + 1) + ψ(ny[d] + 1) ),    (2)

where D is the embedding dimension, k is the number of nearest neighbours, ψ(x) = Γ′(x)/Γ(x) is the digamma function and nx[d], ny[d] are certain nearest-neighbour statistics. These statistics are obtained by counting the number of neighbours that fall within less than ϵ[d] from xd and yd in the marginal spaces X and Y respectively, where ϵ[d] is the distance from zd = (xd, yd) to its k-nearest neighbour in the joint space (X, Y). We illustrate how the estimator can be applied to measure similarity between sets of word embeddings in Algorithm 1 and refer the reader to Kraskov et al. (2004) for its full derivation and justification as well as an alternative version.

Algorithm 1: Kraskov–Stögbauer–Grassberger (KSG) Similarity Measure
  Require: word embeddings for the first sentence X ∈ R^{lx × D}
  Require: word embeddings for the second sentence Y ∈ R^{ly × D}
  Require: the number of nearest neighbours k < D (default k = 3)
  Ensure: similarity measure KSG
    Z ← STACK_ROWS(X, Y)
    ||zi − zj||_Z ← max(||xi − xj||_X, ||yi − yj||_Y),  i, j = 1, . . . , D
    # ← set cardinality
    for zd, d = 1, . . . , D do
        ϵ[d] ← ||zd − zdk||, where zdk is the k-NN of zd
        nx[d] ← #{xd′ : ||xd − xd′||_X < ϵ[d]},  d′ ∈ {1, . . . , D} \ {d}
        ny[d] ← #{yd′ : ||yd − yd′||_Y < ϵ[d]},  d′ ∈ {1, . . . , D} \ {d}
    end for
    ψ(x) ← digamma function
    S ← Σ_{d=1}^{D} ( ψ(nx[d] + 1) + ψ(ny[d] + 1) )
    KSG ← ψ(D) + ψ(k) − S

Similarity                      STS12  STS13  STS14  STS15  STS16
Popular approaches
  USE (Transf.)                  63.8   63.1   66.0   77.1   76.4
  BERT Small                     50.8   50.4   54.0   62.9   63.8
  BERT Large                     51.0   47.2   51.8   58.0   62.7
  WMD                            54.8   47.0   57.7   65.8   63.2
  SoftCard                       54.8   50.6   58.1   66.5   65.9
  DynaMax                        61.3   61.7   66.9   76.5   74.7
  MeanPool+COS                   58.8   58.8   63.4   69.1   68.3
  SIF+PCA                        58.1   67.2   66.5   73.8   73.0
Correlation-based Approaches
  MaxPool+SPR                    61.4   63.8   68.0   75.8   75.9
  CKA Gaussian                   60.8   64.6   68.0   76.4   73.8
  CKA dCorr                      60.9   63.4   67.8   76.2   73.4
Mutual Information (KSG)
  KSG k = 3                      59.9   61.6   67.8   76.7   74.7
  KSG k = 10                     60.4   61.5   68.3   77.0   75.1
  MaxPool+KSG 10                 59.5   60.2   67.5   75.0   74.1

Table 1: Average Spearman correlation between system and human scores on STS 12–16 tasks. FastText is used for all methods that rely on word embeddings. Similarity measures based on Mutual Information (KSG) perform on par with correlation-based measures and other popular methods from the literature.
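For readers who want to reproduce Algorithm 1, the following is a minimal NumPy/SciPy sketch (not the authors' released implementation). It assumes Euclidean norms in the marginal spaces, and it averages the digamma terms over the D coordinates as in Kraskov et al. (2004); since D and k are fixed across sentence pairs, this yields the same ranking of pairs as the summed form in Algorithm 1.

import numpy as np
from scipy.special import digamma

def ksg_similarity(X, Y, k=3):
    # X: (lx, D) word embeddings of the first sentence (one row per word).
    # Y: (ly, D) word embeddings of the second sentence.
    # Each of the D embedding coordinates is treated as one joint observation
    # of the random vectors (X_1, ..., X_lx) and (Y_1, ..., Y_ly).
    D = X.shape[1]
    assert Y.shape[1] == D and 0 < k < D
    Xs, Ys = X.T, Y.T                     # D samples from each marginal space

    # Pairwise Euclidean distances between the D samples in each marginal space.
    dx = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=-1)
    dy = np.linalg.norm(Ys[:, None, :] - Ys[None, :, :], axis=-1)
    dz = np.maximum(dx, dy)               # joint-space distance (max of marginals)

    np.fill_diagonal(dz, np.inf)          # exclude self-distances
    eps = np.sort(dz, axis=1)[:, k - 1]   # eps[d]: distance to the k-th nearest neighbour

    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)
    nx = (dx < eps[:, None]).sum(axis=1)  # neighbours strictly within eps[d] in X-space
    ny = (dy < eps[:, None]).sum(axis=1)  # neighbours strictly within eps[d] in Y-space

    # Digamma terms averaged over the D samples, as in Kraskov et al. (2004).
    return digamma(D) + digamma(k) - np.mean(digamma(nx + 1) + digamma(ny + 1))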
4 Experiments

We now explore the empirical performance of the KSG similarity measure on a standard suite of Semantic Textual Similarity (STS) benchmarks (Agirre et al., 2012, 2013, 2014, 2015, 2016) and report Spearman correlation between the system and human scores. Our focus here is on fastText vectors (Bojanowski et al., 2017) trained on Common Crawl (600B tokens), as previous literature suggests that among unsupervised vectors fastText yields the best performance for all tasks and similarity measures (Conneau et al., 2017; Perone et al., 2018; Zhelezniak et al., 2019a,b,c). We defer evaluations and significance analysis on all 24 STS subtasks for other word vectors (word2vec and GloVe) to the Appendix. Our evaluations are run in the SentEval toolkit (Conneau and Kiela, 2018) and our code is available on GitHub (https://github.com/babylonhealth/corrsim). Note that we do not report results on the STS13 SMT subtask as it is no longer publicly available.

Similarity        Time complexity
WMD               O(m²D + m³ log m)
WMD (relaxed)     O(m²D)
SoftCard          O(m²D)
DynaMax           O(m²D)
MaxPool+SPR       O(mD + D log D)
MaxPool+KSG       O(mD + D^{3/2})
CKA               O(mD²)
KSG               O(mD²)

Table 2: Computational complexity of some word embedding-based methods, where m is the length of the longer sentence and D is the word embedding dimension.

The number of nearest neighbours for KSG that is known to work well in practice on a variety of datasets is k = 3 (Kraskov et al., 2004; Khan et al., 2007). This value seems to strike a good balance between the bias and variance of the estimator. We also run experiments for k = 10 to show that KSG is not very sensitive to this hyperparameter, at least in our setting. As an interesting addition, we also run KSG (k = 10) for max-pooled scalar random variables (MaxPool+KSG 10).

We compare KSG to the following approaches from the literature: Universal Sentence Encoder (Transformer) (Cer et al., 2018), BERT (penultimate layer, mean-pooling) (Devlin et al., 2018), Word Mover's Distance (WMD) (Kusner et al., 2015), soft cardinality (Jimenez et al., 2010, 2015) with cosine similarity and the softness parameter p = 1, DynaMax-Jaccard (Zhelezniak et al., 2019b), mean-pooling with cosine similarity (MeanPool+COS) and Smooth Inverse Frequency (SIF) + PCA (Arora et al., 2017). Next we compare KSG with the following top-performing correlations: max-pooling with Spearman correlation (MaxPool+SPR), Centered Kernel Alignment (Gaussian kernel with median estimation for σ²) and distance correlation (Zhelezniak et al., 2019c).

The evaluation results are given in Table 1. In summary, we can see that similarity measures based on mutual information (KSG) perform on par with top correlation-based measures and other leading methods from the literature. Moreover, KSG between pooled variables (MaxPool) is faster and performs only slightly worse than multivariate KSG.
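As a usage note, the MaxPool+KSG variant in Tables 1 and 2 can be obtained from the same routine by first max-pooling each sentence into a single scalar random variable; the snippet below assumes the ksg_similarity sketch given earlier and uses random vectors only as stand-ins for real word embeddings.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 300))               # stand-in for 7 word embeddings of dimension 300
Y = rng.normal(size=(5, 300))

x_pool = X.max(axis=0, keepdims=True)       # (1, 300): one max-pooled scalar variable
y_pool = Y.max(axis=0, keepdims=True)

print(ksg_similarity(X, Y, k=3))            # multivariate KSG (k = 3)
print(ksg_similarity(x_pool, y_pool, k=10)) # MaxPool+KSG 10 from Table 1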
References Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252–263. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385– 393. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43. Association for Computational Linguistics. Alex Alemi, Ian Fischer, Josh Dillon, and Kevin Murphy. 2017. Deep variational information bottleneck. In ICLR. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. International Conference on Learning Representations. Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. 2018. Mutual information neural estimation. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 531–540, Stockholmsmssan, Stockholm Sweden. PMLR. Yoshua Bengio, Rjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for english. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174. Association for Computational Linguistics. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. 
In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Association. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Association for Computational Linguistics. Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. 2012. Algorithms for learning kernels based on centered alignment. Journal of Machine Learning Research, 13(Mar):795–828. 8366 Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics. G.A. Darbellay and I. Vajda. 1999. Estimation of the information by an adaptive partitioning of the observation space. IEEE Transactions on Information Theory, 45(4):1315–1321. Cedric De Boom, Steven Van Canneyt, Thomas Demeester, and Bart Dhoedt. 2016. Representation learning for very short texts using weighted word embedding aggregation. Pattern Recogn. Lett., 80(C):150–156. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bradley Efron. 1987. Better bootstrap confidence intervals. Journal of the American Statistical Association, 82(397):171–185. Andrew M. Fraser and Harry L. Swinney. 1986. Independent coordinates for strange attractors from mutual information. Phys. Rev. A, 33:1134–1140. Shuyang Gao, Greg Ver Steeg, and Aram Galstyan. 2015. Efficient Estimation of Mutual Information for Strongly Dependent Variables. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pages 277– 286, San Diego, California, USA. PMLR. W. Gao, S. Oh, and P. Viswanath. 2018. Demystifying fixed k -nearest neighbor information estimators. IEEE Transactions on Information Theory, 64(8):5629–5661. Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Sch¨olkopf. 2005. Measuring statistical dependence with hilbert-schmidt norms. In International conference on algorithmic learning theory, pages 63–77. Springer. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367–1377. Association for Computational Linguistics. R Devon Hjelm, Alex Fedorov, Samuel LavoieMarchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations. Gao Huang, Chuan Quo, Matt J. Kusner, Yu Sun, Kilian Q. Weinberger, and Fei Sha. 2016. Supervised word mover’s distance. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pages 4869– 4877, USA. Curran Associates Inc. Robin A.A. Ince, Bruno L. Giordano, Christoph Kayser, Guillaume A. Rousselet, Joachim Gross, and Philippe G. Schyns. 2016. 
A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Human Brain Mapping, 38(3):1541–1573. Sergio Jimenez, Fabio Gonzalez, and Alexander Gelbukh. 2010. Text comparison using soft cardinality. In String Processing and Information Retrieval, pages 297–302, Berlin, Heidelberg. Springer Berlin Heidelberg. Sergio Jimenez, Fabio A. Gonzalez, and Alexander Gelbukh. 2015. Soft cardinality in semantic text processing: Experience of the SemEval international competitions. Polibits, 51:63–72. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics. Shiraj Khan, Sharba Bandyopadhyay, Auroop R. Ganguly, Sunil Saigal, David J. Erickson, Vladimir Protopopescu, and George Ostrouchov. 2007. Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data. Physical Review E, 76(2). Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-Thought Vectors. In Advances in Neural Information Processing Systems, pages 3294–3302. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In ICML. LF Kozachenko and Nikolai N Leonenko. 1987. Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9–16. Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger. 2004. Estimating mutual information. Phys. Rev. E, 69:066138. Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning, volume 37 of ICML’15, pages 957–966. JMLR.org. 8367 Guy Lev, Benjamin Klein, and Lior Wolf. 2015. In defense of word embedding for generic text representation. In Natural Language Processing and Information Systems, pages 35–50, Cham. Springer International Publishing. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236–244. Association for Computational Linguistics. R. Moddemeijer. 1989. On estimation of entropy and mutual information of continuous distributions. Signal Processing, 16(3):233–248. Young-Il Moon, Balaji Rajagopalan, and Upmanu Lall. 1995. Estimation of mutual information using kernel density estimators. Physical Review E, 52(3):2318–2321. Giannis Nikolentzos, Polykarpos Meladianos, Francois Rousseau, Yannis Stavrakas, and Michalis Vazirgiannis. 2017. Multivariate Gaussian document representation from word embeddings for text categorization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 450–455, Valencia, Spain. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. 
Association for Computational Linguistics. Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. 2019. On variational bounds of mutual information. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5171–5180, Long Beach, California, USA. PMLR. Alec Radford. 2018. Improving language understanding by generative pre-training. D. N. Reshef, Y. A. Reshef, H. K. Finucane, S. R. Grossman, G. McVean, P. J. Turnbaugh, E. S. Lander, M. Mitzenmacher, and P. C. Sabeti. 2011. Detecting novel associations in large data sets. Science, 334(6062):1518–1524. Brian C. Ross. 2014. Mutual information between discrete and continuous data sets. PLoS ONE, 9(2):e87357. Andreas R¨uckl´e, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. 2018. Concatenated p-mean word embeddings as universal cross-lingual sentence representations. CoRR, abs/1803.01400. R. Steuer, J. Kurths, C. O. Daub, J. Weise, and J. Selbig. 2002. The mutual information: Detecting and evaluating dependencies between variables. Bioinformatics, 18(Suppl 2):S231–S240. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. In International Conference on Learning Representations. Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori. 2008. Approximating mutual information by maximum likelihood density ratio estimation. In Proceedings of the Workshop on New Challenges for Feature Selection in Data Mining and Knowledge Discovery at ECML/PKDD 2008, volume 4 of Proceedings of Machine Learning Research, pages 5–20, Antwerp, Belgium. PMLR. Marwan Torki. 2018. A document descriptor using covariance of word vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 527–532, Melbourne, Australia. Association for Computational Linguistics. Greg Ver Steeg. 2014. Non-parametric entropy estimation toolbox (NPEET). Greg Ver Steeg and Aram Galstyan. 2013. Informationtheoretic measures of influence based on content dynamics. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, WSDM ’13, pages 3–12, New York, NY, USA. ACM. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the Association for Computational Linguistics, 3:345–358. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards Universal Paraphrastic Sentence Embeddings. In International Conference on Learning Representations. 8368 John Wieting and Kevin Gimpel. 2018. Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019a. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. Ziyi Yang, Chenguang Zhu, and Weizhu Chen. 2019b. Parameter-free sentence embedding via orthogonal basis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 638–648, Hong Kong, China. Association for Computational Linguistics. Rui Zhao and Kezhi Mao. 2017. Fuzzy bag-of-words model for document representation. IEEE Transactions on Fuzzy Systems, pages 1–1. Vitalii Zhelezniak, Aleksandar Savkov, April Shen, and Nils Hammerla. 2019a. Correlation coefficients and semantic textual similarity. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 951–962, Minneapolis, Minnesota. Association for Computational Linguistics. Vitalii Zhelezniak, Aleksandar Savkov, April Shen, Francesco Moramarco, Jack Flann, and Nils Y. Hammerla. 2019b. Don’t settle for average, go for the max: Fuzzy sets and max-pooled word vectors. In International Conference on Learning Representations. Vitalii Zhelezniak, April Shen, Daniel Busbridge, Aleksandar Savkov, and Nils Hammerla. 2019c. Correlations between word vector sets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 77–87, Hong Kong, China. Association for Computational Linguistics. 
Appendix 8369 GloVe fastText word2vec COS KSG ∆95% CI COS KSG ∆95% CI COS KSG ∆95% CI STS12 MSRpar 45.17 47.79 [-7.49, 2.09] 44.49 49.37 [-9.84, -0.14] 40.86 39.34 [-2.18, 5.11] MSRvid 67.50 70.95 [-5.99, -1.04] 73.79 76.65 [-4.92, -0.87] 76.46 71.87 [2.84, 6.68] SMTeuroparl 58.46 55.68 [-1.67, 7.41] 62.34 58.76 [-0.10, 7.30] 49.58 47.00 [-0.44, 5.72] surprise.OnWN 61.06 68.72 [-10.58, -4.86] 67.50 70.09 [-4.73, -0.53] 67.94 68.48 [-2.13, 1.07] surprise.SMTnews 33.92 45.59 [-18.19, -5.90] 45.92 47.06 [-5.27, 2.87] 42.18 44.72 [-6.51, 1.19] STS13 FNWN 36.57 45.14 [-20.76, 2.89] 39.99 48.43 [-20.07, 2.65] 40.76 46.23 [-15.47, 4.28] headlines 63.12 70.76 [-10.50, -5.00] 70.25 73.15 [-5.14, -0.93] 64.09 64.88 [-2.73, 1.04] OnWN 52.57 52.58 [-4.11, 4.01] 66.26 62.90 [0.34, 6.71] 69.84 62.09 [5.14, 10.80] STS14 deft-forum 34.72 48.14 [-20.66, -6.81] 41.07 53.65 [-18.35, -7.35] 44.57 48.82 [-9.05, 0.34] deft-news 64.56 65.85 [-7.21, 4.80] 67.46 67.37 [-4.28, 4.52] 64.00 61.22 [-0.73, 6.95] headlines 55.10 63.42 [-11.10, -5.83] 61.78 65.21 [-5.71, -1.39] 58.27 60.12 [-3.84, 0.01] images 61.26 74.69 [-16.90, -10.32] 69.62 76.88 [-10.00, -4.69] 74.60 76.01 [-3.11, 0.30] OnWN 64.35 66.84 [-5.12, 0.07] 74.48 74.15 [-1.39, 2.07] 77.85 73.46 [2.82, 6.14] tweet-news 53.83 72.51 [-23.66, -14.00] 66.15 72.54 [-9.74, -3.28] 65.00 70.54 [-8.74, -2.88] STS15 answers-forums 37.02 66.53 [-37.58, -22.43] 56.73 72.83 [-22.20, -10.58] 51.26 63.91 [-19.15, -6.76] answers-students 68.36 75.16 [-9.85, -4.32] 74.15 75.34 [-3.36, 0.71] 74.55 75.03 [-2.51, 1.28] belief 52.76 72.92 [-27.61, -13.50] 64.97 77.97 [-18.26, -8.35] 65.30 76.32 [-16.52, -6.51] headlines 66.21 73.31 [-9.50, -4.96] 71.85 75.18 [-4.98, -1.79] 67.57 68.71 [-2.74, 0.42] images 71.87 80.35 [-11.23, -5.96] 77.72 83.67 [-8.06, -4.00] 81.21 82.36 [-2.61, 0.26] STS16 answer-answer 42.52 63.01 [-30.07, -11.93] 49.44 66.27 [-25.49, -9.76] 44.41 60.87 [-25.21, -9.08] headlines 65.88 73.06 [-11.77, -3.34] 71.29 74.83 [-7.04, -0.66] 67.90 67.38 [-2.32, 3.16] plagiarism 56.10 80.85 [-34.89, -16.47] 76.16 82.10 [-11.81, -0.80] 75.90 80.58 [-10.08, 0.14] postediting 71.76 83.57 [-18.32, -6.74] 78.29 84.06 [-10.91, -1.59] 78.94 83.08 [-7.66, -1.16] question-question 53.31 60.90 [-16.60, 1.33] 66.40 68.39 [-8.75, 4.79] 64.95 59.31 [-1.20, 13.43] Table 3: MeanPool+Cosine vs. KGS (k = 10): Spearman correlation between human and system sentence similarity scores. Values in bold indicate the best result on a subtask for a given set of word vectors. The winner is determined based on a 95% BCa confidence interval (Efron, 1987) on the difference in performance between the two systems. When there is no significant difference, both values are in bold. 
8370 GloVe fastText word2vec SPR KSG ∆95% CI SPR KSG ∆95% CI SPR KSG ∆95% CI STS12 MSRpar 41.28 47.79 [-9.47, -3.75] 44.79 49.37 [-7.36, -2.03] 36.81 39.34 [-5.10, 0.11] MSRvid 77.32 70.95 [4.65, 8.48] 81.76 76.65 [3.72, 6.74] 74.14 71.87 [0.78, 4.00] SMTeuroparl 53.63 55.68 [-4.22, 0.10] 58.54 58.76 [-2.24, 1.81] 47.28 47.00 [-1.82, 2.42] surprise.OnWN 68.71 68.72 [-1.29, 1.28] 71.96 70.09 [0.60, 3.16] 67.99 68.48 [-1.80, 0.83] surprise.SMTnews 45.71 45.59 [-3.40, 3.72] 49.82 47.06 [-0.24, 5.85] 41.96 44.72 [-5.84, 0.17] STS13 FNWN 47.53 45.14 [-5.09, 10.30] 47.68 48.43 [-10.10, 8.35] 50.77 46.23 [-5.57, 15.97] headlines 69.45 70.76 [-2.95, 0.34] 72.23 73.15 [-2.40, 0.52] 64.29 64.88 [-2.12, 0.89] OnWN 61.98 52.58 [6.94, 12.29] 71.43 62.90 [6.31, 11.24] 69.68 62.09 [5.58, 9.93] STS14 deft-forum 44.17 48.14 [-8.00, 0.06] 51.27 53.65 [-5.62, 0.95] 44.23 48.82 [-8.32, -1.13] deft-news 66.90 65.85 [-1.48, 3.94] 65.72 67.37 [-4.69, 1.12] 59.60 61.22 [-4.45, 1.29] headlines 61.58 63.42 [-3.50, -0.27] 64.03 65.21 [-2.81, 0.33] 58.98 60.12 [-2.76, 0.45] images 75.37 74.69 [-0.72, 2.22] 77.72 76.88 [-0.47, 2.19] 75.78 76.01 [-1.57, 1.11] OnWN 72.29 66.84 [3.94, 7.23] 77.63 74.15 [2.23, 4.85] 77.37 73.46 [2.68, 5.37] tweet-news 70.12 72.51 [-4.04, -0.92] 71.42 72.54 [-2.55, 0.31] 68.09 70.54 [-3.86, -1.09] STS15 answers-forums 66.02 66.53 [-4.54, 3.53] 69.46 72.83 [-7.04, -0.29] 59.98 63.91 [-8.30, 0.45] answers-students 71.34 75.16 [-5.49, -2.34] 73.32 75.34 [-3.58, -0.54] 74.48 75.03 [-1.67, 0.54] belief 73.50 72.92 [-2.25, 3.71] 77.69 77.97 [-2.83, 2.48] 73.53 76.32 [-5.69, 0.28] headlines 71.77 73.31 [-2.85, -0.32] 74.17 75.18 [-2.20, 0.10] 67.87 68.71 [-2.13, 0.43] images 81.94 80.35 [0.33, 2.88] 84.49 83.67 [-0.12, 1.78] 82.60 82.36 [-0.68, 1.21] STS16 answer-answer 61.30 63.01 [-5.73, 2.46] 65.98 66.27 [-4.08, 3.97] 59.09 60.87 [-5.43, 1.76] headlines 70.03 73.06 [-5.17, -1.24] 72.96 74.83 [-3.96, -0.00] 67.87 67.38 [-1.12, 2.31] plagiarism 77.72 80.85 [-5.93, -0.98] 83.75 82.10 [-0.09, 4.08] 80.28 80.58 [-2.44, 1.55] postediting 81.45 83.57 [-3.69, -0.77] 82.85 84.06 [-2.94, 0.35] 80.06 83.08 [-4.96, -1.37] question-question 66.80 60.90 [1.23, 11.53] 74.03 68.39 [2.14, 10.06] 65.87 59.31 [1.37, 13.10] Table 4: MaxPool+Spearman vs. KGS (k = 10): Spearman correlation between human and system sentence similarity scores. Values in bold indicate the best result on a subtask for a given set of word vectors. The winner is determined based on a 95% BCa confidence interval (Efron, 1987) on the difference in performance between the two systems. When there is no significant difference, both values are in bold. 
8371 GloVe fastText word2vec CKA KSG ∆95% CI CKA KSG ∆95% CI CKA KSG ∆95% CI STS12 MSRpar 42.65 47.79 [-7.97, -2.60] 45.12 49.37 [-6.64, -2.18] 36.00 39.34 [-5.92, -1.01] MSRvid 76.93 70.95 [4.47, 7.77] 83.78 76.65 [5.66, 8.89] 79.64 71.87 [6.23, 9.62] SMTeuroparl 57.62 55.68 [0.33, 3.81] 58.74 58.76 [-2.18, 2.02] 46.80 47.00 [-1.84, 1.46] surprise.OnWN 66.21 68.72 [-3.82, -1.16] 68.10 70.09 [-3.31, -0.70] 66.36 68.48 [-3.42, -0.91] surprise.SMTnews 46.94 45.59 [-1.19, 3.88] 48.45 47.06 [-1.62, 4.46] 44.12 44.72 [-3.70, 2.09] STS13 FNWN 37.98 45.14 [-15.29, 1.05] 48.85 48.43 [-8.13, 9.24] 42.02 46.23 [-12.02, 3.58] headlines 70.34 70.76 [-1.94, 1.00] 71.65 73.15 [-2.96, -0.17] 63.02 64.88 [-3.28, -0.47] OnWN 61.35 52.58 [6.67, 11.17] 73.46 62.90 [8.19, 13.37] 71.23 62.09 [7.11, 11.57] STS14 deft-forum 50.80 48.14 [-0.62, 6.11] 53.67 53.65 [-3.66, 3.57] 51.43 48.82 [-0.94, 6.17] deft-news 67.78 65.85 [-1.13, 5.22] 67.18 67.37 [-3.07, 2.76] 61.48 61.22 [-2.45, 3.31] headlines 61.51 63.42 [-3.41, -0.51] 63.47 65.21 [-3.22, -0.35] 58.31 60.12 [-3.36, -0.37] images 74.08 74.69 [-2.02, 0.79] 77.50 76.88 [-0.49, 1.84] 76.44 76.01 [-0.65, 1.54] OnWN 72.14 66.84 [4.00, 6.77] 79.28 74.15 [3.76, 6.63] 78.45 73.46 [3.79, 6.46] tweet-news 67.22 72.51 [-7.62, -3.32] 66.81 72.54 [-8.07, -3.85] 65.75 70.54 [-6.73, -3.15] STS15 answers-forums 64.46 66.53 [-4.87, 0.52] 73.62 72.83 [-1.30, 2.99] 62.50 63.91 [-4.09, 1.24] answers-students 73.23 75.16 [-3.86, -0.21] 72.11 75.34 [-5.03, -1.69] 73.90 75.03 [-2.58, 0.10] belief 71.67 72.92 [-4.59, 2.19] 76.50 77.97 [-4.26, 1.14] 74.04 76.32 [-5.15, 0.17] headlines 73.10 73.31 [-1.36, 0.90] 74.60 75.18 [-1.76, 0.53] 67.90 68.71 [-2.01, 0.41] images 81.48 80.35 [-0.16, 2.44] 85.04 83.67 [0.46, 2.34] 83.75 82.36 [0.49, 2.38] STS16 answer-answer 55.29 63.01 [-13.91, -2.04] 61.19 66.27 [-10.28, -0.36] 52.34 60.87 [-14.13, -3.81] headlines 70.79 73.06 [-4.30, -0.38] 72.35 74.83 [-4.44, -0.59] 65.16 67.38 [-4.27, -0.34] plagiarism 79.90 80.85 [-4.24, 1.71] 80.19 82.10 [-4.65, 0.16] 80.53 80.58 [-1.80, 1.86] postediting 81.37 83.57 [-4.92, -0.14] 81.96 84.06 [-4.12, -0.49] 80.85 83.08 [-4.24, -0.42] question-question 72.46 60.90 [6.29, 18.67] 73.32 68.39 [1.05, 10.39] 70.08 59.31 [5.74, 17.83] Table 5: CKA (Gaussian) vs. KGS (k = 10): Spearman correlation between human and system sentence similarity scores. Values in bold indicate the best result on a subtask for a given set of word vectors. The winner is determined based on a 95% BCa confidence interval (Efron, 1987) on the difference in performance between the two systems. When there is no significant difference, both values are in bold.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8372–8388, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Exploring Unexplored Generalization Challenges for Cross-Database Semantic Parsing

Alane Suhr‡∗, Ming-Wei Chang†, Peter Shaw†, and Kenton Lee†
‡ Cornell University Department of Computer Science and Cornell Tech, New York, NY 10044 [email protected]
†Google Research {mingweichang, petershaw, kentonl}@google.com

Abstract

We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training. Recently, several datasets, including Spider, were proposed to support development of XSP systems. We propose a challenging evaluation setup for cross-database semantic parsing, focusing on variation across database schemas and in-domain language use. We re-purpose eight semantic parsing datasets that have been well-studied in the setting where in-domain training data is available, and instead use them as additional evaluation data for XSP systems. We build a system that performs well on Spider, and find that it struggles to generalize to our re-purposed set. Our setup uncovers several generalization challenges for cross-database semantic parsing, demonstrating the need to use and develop diverse training and evaluation datasets.

1 Introduction

Semantic parsing is the task of mapping natural language utterances to formal meaning representations, and has been studied in tasks including instruction following, evaluating sentence meaning, and building interfaces to knowledge bases. In this paper, we focus on the task of mapping from natural language utterances to SQL queries executable in a database.

Most prior work on mapping from natural language to SQL queries trains and tests the system on a single database. We refer to this setup as single-database semantic parsing (SSP). Well-studied datasets used in the SSP setting include GeoQuery (Zelle and Mooney, 1996) and ATIS (Hemphill et al., 1990; Dahl et al., 1994). However, semantic parsing systems should be able to generalize to new domains and databases, as it is often cost-prohibitive to collect a sufficient number of training examples for all possible databases.

∗Work done during an internship at Google.

Advising (Finegan-Dollak et al., 2018)
NL: For EECS 478, how many credits is it?
SQL: select distinct credits from course where department = ‘EECS’ and number = 478;

GeoQuery (Zelle and Mooney, 1996)
NL: How many people live in mississippi?
SQL: select population from state where state_name = ‘mississippi’;

ATIS (Hemphill et al., 1990; Dahl et al., 1994)
NL: Flights from Phoenix to Milwaukee
SQL: select distinct T1.flight_id from airport_service as T2, airport_service as T3, city as T4, city as T5, flight as T1 where T4.city_code = T2.city_code and T4.city_name = ‘Phoenix’ and T5.city_code = T3.city_code and T5.city_name = ‘Milwaukee’ and T1.from_airport = T2.airport_code and T1.to_airport = T3.airport_code;

Spider (Yu et al., 2018)
NL: List the emails of the professionals who live in the state of Hawaii or the state of Wisconsin.
SQL: select email_address from professionals where state = ‘Hawaii’ or state = ‘Wisconsin’;

Figure 1: Examples of generalization challenges revealed in the cross-database semantic parsing (XSP) setting. The top three examples are from datasets originally studied in the single-database (SSP) setting.
Without in-domain training data, generalization is more difficult, requiring identifying entities, mapping unfamiliar phrases and entities to the database, and using new and complex database schemas. In contrast, existing XSP evaluation data such as Spider simplifies some of these challenges, for example by including utterances that closely match their paired SQL query.

Several datasets, including Spider (Yu et al., 2018), were proposed to evaluate this dimension of generalization. These datasets include examples grounded in multiple databases, distinguishing between training databases and evaluation databases. We refer to this setup as cross-database semantic parsing (XSP). While these datasets have been valuable in understanding and addressing some of the additional generalization challenges introduced by XSP, current evaluation of XSP systems has been limited to datasets designed for XSP. This limits the types of generalization challenges studied to those introduced by these datasets. Existing XSP evaluation data such as Spider simplifies some of these challenges, for example by including utterances that closely match their paired SQL query, as shown in the last row of Figure 1.

This setup misses an important opportunity for studying cross-database semantic parsing: evaluating on challenging datasets designed for single-database semantic parsing, like GeoQuery and ATIS. While the in-domain challenges of these datasets are relatively well-understood, generalization challenges introduced by studying these datasets in an XSP context have not been addressed.

In this paper, we propose a more holistic analysis and evaluation setup for XSP. We propose to evaluate a semantic parsing system not only on evaluation data designed for XSP, but also on datasets that have only been studied in the SSP setting. Our re-purposed evaluation set includes eight well-studied datasets like ATIS, but in a completely new setting. Instead of training on the original training data for these datasets, we train a single model on training data designed for the XSP setting, and evaluate the trained model on each evaluation dataset. These datasets were collected at different times, by different researchers, and with different motivations. This results in a wide variety of language usage, database structures, and SQL styles across datasets, further stressing a system’s ability to adapt to unseen datasets. These variations pose many new generalization challenges for cross-database semantic parsing models, where in-domain examples are not available at training time. Our proposed XSP evaluation setup addresses several evaluation challenges posed by these dataset variations.

With our proposed setup, we are able to analyze the potential limitations of current cross-database semantic parsing models. We uncover and attempt to address several new forms of generalization in cross-dataset semantic parsing. We develop a neural semantic parsing model that is competitive with all public systems on the Spider development set, and evaluate its ability to generalize to the evaluation datasets. First, we observe that the datasets originally designed for SSP become much more difficult under the XSP setting, with a notable drop in performance relative to the Spider development results. Second, we experiment with several techniques that improve generalization to the eight evaluation datasets.
Finally, we provide in-depth qualitative analysis on our results. Our results and analysis demonstrate a need for diverse training and evaluation datasets for XSP. Our code and experimental setup are available at https://github.com/google-research/language/tree/master/language/xsp.

2 Background and Related Work

We focus on the task of semantic parsing for databases. A natural language utterance $u$ is a sequence $u_1, \ldots, u_{|u|}$, where each $u_i$ is a natural language token. The task is to map $u$ to a formal query $y = y_1, \ldots, y_{|y|}$ executable in a database $D$, where each $y_i$ is a SQL query token.

Single-database Semantic Parsing (SSP) In SSP, all data is grounded in the same knowledge database. The training data consists of $N$ pairs of utterances and SQL queries $\{x^{(l)}, y^{(l)}\}_{l=1}^{N}$ grounded in database $D$. The evaluation data contains $M$ unseen pairs of utterances and SQL queries $\{x^{(l)}, y^{(l)}\}_{l=1}^{M}$, also grounded in $D$. SSP has been studied using a number of datasets including ATIS (Hemphill et al., 1990; Dahl et al., 1994) and GeoQuery (Zelle and Mooney, 1996). Many prior approaches in SSP assume access to database contents at inference time. At test time, this allows the system to resolve the columns containing novel entities by performing a database look-up; for example, by labeling entity mentions in the input utterance with the columns in which they appear (Dong and Lapata, 2016; Iyer et al., 2017; Suhr et al., 2018).

Cross-database Semantic Parsing (XSP) In the XSP setting, examples from the evaluation databases are not seen at training time (Yu et al., 2018, 2019b,a). Previously, the cross-domain semantic parsing task focused mostly on databases consisting of a single table (Pasupat and Liang, 2015; Iyyer et al., 2017; Zhong et al., 2017). However, the cross-database setting requires generalizing to unseen domains and novel database schemas. In XSP, the $N$ training examples are $\{x^{(l)}, y^{(l)}, D_i^{(l)}\}_{l=1}^{N}$ and the $M$ evaluation examples are $\{x^{(l)}, y^{(l)}, D_j^{(l)}\}_{l=1}^{M}$, where each $D$ is a database. Importantly, the sets of training and evaluation databases do not overlap. In addition to the generalization challenges posed by SSP, this setting adds several challenges, including generalizing to new schema structures, domain-specific phrases, and database conventions. Unlike SSP, prior work in XSP does not assume that the system has access to database contents at model inference time (Yu et al., 2018). Preprocessing steps that perform database look-ups are unavailable at inference time. Instead, the model only has access to the database schema for each evaluation example. This setting requires additional generalization, where the model must be able to map unfamiliar entities to columns in domains unseen during training.
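To make this setup concrete, the following is a minimal sketch of the data layout in Python. It is our own illustration rather than the released code, and the class and database names are hypothetical: each example pairs an utterance and a gold SQL query with a database identifier, and an XSP split requires the training and evaluation databases to be disjoint.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    utterance: str  # natural language utterance
    sql: str        # gold executable SQL query
    database: str   # identifier of the database the query runs against

def is_xsp_split(train: List[Example], evaluation: List[Example]) -> bool:
    """In XSP, no evaluation database may appear in the training data."""
    train_dbs = {ex.database for ex in train}
    eval_dbs = {ex.database for ex in evaluation}
    return train_dbs.isdisjoint(eval_dbs)

# Toy illustration: training on a Spider-style database, evaluating on GeoQuery.
train = [Example("List the emails of the professionals who live in the state of Hawaii",
                 "select email_address from professionals where state = 'Hawaii'",
                 "dog_kennels")]
evaluation = [Example("how many people live in mississippi",
                      "select population from state where state_name = 'mississippi'",
                      "geoquery")]
assert is_xsp_split(train, evaluation)
```

Under an SSP split, by contrast, the same check would return False, since training and evaluation examples are grounded in one shared database.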
Other Related Work Semantic parsing has been widely studied for tasks including sentence understanding (Zettlemoyer and Collins, 2005, 2007; Banarescu et al., 2013), instruction following (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Long et al., 2016; Givoli and Reichart, 2019), and knowledge base querying (Popescu et al., 2004; Poon, 2013; Iyer et al., 2017). Related to the task of semantic parsing is code generation (Oda et al., 2015; Ling et al., 2016; Yin et al., 2018; Lin et al., 2018; Iyer et al., 2018). While our experiments are performed on English-language data, a limited amount of existing work has explored semantic parsing in languages besides English (Wong and Mooney, 2006; Min et al., 2019).

Annotating SQL queries for new domains can be expensive. Several prior works present approaches to reduce this cost, for example by having crowdworkers paraphrase generated examples (Wang et al., 2015; Zhong et al., 2017), give feedback (Iyer et al., 2017), interact with a system (Artzi and Zettlemoyer, 2011; Thomason et al., 2015; Labutov et al., 2018), or a combination (Herzig and Berant, 2019). Research in SSP and code generation has led to innovations including constrained decoding and grammar-based decoding (Xiao et al., 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Lin et al., 2019). SSP has also been studied alongside additional generalization challenges, including to new compositional structures (Finegan-Dollak et al., 2018) and with additional context (Miller et al., 1996; Zettlemoyer and Collins, 2009; Suhr et al., 2018). Recent works evaluating in the XSP setting have explored methods of jointly embedding an utterance and the database schema (Shaw et al., 2019; Bogin et al., 2019a), interactive learning (Yao et al., 2019), and using intermediate output representations and new inference methods (Herzig and Berant, 2018; Guo et al., 2019; Zhang et al., 2019; Bogin et al., 2019b; Lin et al., 2019). We incorporate several such methods into our proposed system.

3 Evaluating on Re-purposed Data

We propose to study the task of XSP by training on datasets designed for XSP, and evaluating on datasets originally designed for SSP. In our full model, we use both the Spider1 (Yu et al., 2018) and WikiSQL (Zhong et al., 2017) datasets for training. For evaluation, in addition to the Spider development set,2 we use eight English-language SSP datasets curated by Finegan-Dollak et al. (2018) covering a variety of domains, for example flights, geography, and movies.3 For each dataset, we evaluate on as much data as possible, excluding test sets. Table 1 describes our evaluation datasets.

Original Task | Dataset | Splits | # Examples (All/Filtered) | % Col. Mentioned
SSP | ATIS (Hemphill et al., 1990; Dahl et al., 1994) | dev | 486/289 | 0.2
SSP | GeoQuery (Zelle and Mooney, 1996) | train/dev | 598/532 | 32.4
SSP | Restaurants (Tang and Mooney, 2000) | splits 0–9 | 378/27 | 0.0
SSP | Academic (Li and Jagadish, 2014) | splits 0–9 | 196/180 | 8.2
SSP | IMDB (Yaghmazadeh et al., 2017) | splits 0–9 | 131/107 | 4.6
SSP | Yelp (Yaghmazadeh et al., 2017) | splits 0–9 | 128/54 | 7.0
SSP | Scholar (Iyer et al., 2017) | train/dev | 599/394 | 1.0
SSP | Advising (Finegan-Dollak et al., 2018) | train/dev | 2858/309 | 0.5
XSP | Spider (Yu et al., 2018) | dev | 1034/– | 72.4

Table 1: Basic statistics for our evaluation datasets. We use all ten cross-validation sets for Restaurants, Academic, IMDB, and Yelp. Filtered refers to the focused subset of evaluation data where relative performance of systems is more meaningful, as we removed examples that yield empty tables and those that are likely impossible to solve due to dataset conventions. % Col. Mentioned shows the estimated proportion of examples in the evaluation set where all columns compared against entities in the gold query are explicitly mentioned in the utterance.

1In addition to introducing Spider, Yu et al. (2018) propose to use a number of SSP datasets, including GeoQuery, as additional training examples for systems evaluated on Spider. However, these SSP datasets were not previously used as evaluation data in the XSP setting. During training, we use only the original Spider data, and discard this additional training data used by some Spider systems.
2WikiSQL contains much more simplified language, SQL queries, and databases than Spider. Therefore, we focus on Spider as part of our proposed XSP evaluation setup.
3Finegan-Dollak et al. (2018) re-split these datasets to evaluate generalization to novel query structures. However, this work still operates in the SSP setting, where in-domain training examples are available. Our setup uses the original splits of the data, rather than the structure-based splits (Table 1).

Developing evaluation metrics for these re-purposed evaluation sets is challenging because of the diversity of SQL styles across different databases. Yu et al. (2018)’s proposed evaluation metric compares components of the predicted and correct query, allowing for variation in the exact form of the query, for example using different table aliases. However, it does not capture all possible SQL syntax, and fails to cover some of the gold queries in our evaluation datasets. For example, it does not handle assigning an alias to the results of an intermediate SELECT statement. Moreover, it does not measure equivalence of values, meaning predictions correct according to this metric need not execute correctly.

We propose to use a variation of execution accuracy as our main metric. Execution accuracy over an evaluation set is the proportion of predicted queries which, when executed against the database, result in a table equivalent to the correct query’s result. If the correct query requires ordering on the final table, we require the tables be exactly the same; if it does not, we consider result tables equivalent if they contain the same set of rows.

We supplement the results with additional baselines and data filtering to address the problem of over-crediting spurious predictions. We report the empty-table prior for each dataset, demonstrating how well a model could perform if predicting incorrect queries that result in empty tables. We create a filtered subset where relative performance of systems is more meaningful, including attempting to remove examples that are impossible to solve without in-domain training data. Our heuristics include removing examples with correct queries that result in empty tables, and where the correct query contains a value token that is not copiable from the input.4 For example, in Restaurants, the phrase good restaurant always requires constraining the SQL query to restaurants with a rating greater than 2.5, even when the rating is not explicitly mentioned.

4Details are included in Appendix A.
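As a concrete reading of this metric, the sketch below is our own illustration (not the released evaluation script): two execution results count as equivalent when the gold query orders its output and the tables match row for row, or when it does not and the two tables contain the same set of rows.

```python
from typing import List, Tuple

Row = Tuple  # one result row, e.g. ("Hawaii", 3)

def results_match(predicted: List[Row], gold: List[Row], gold_is_ordered: bool) -> bool:
    """Check one prediction under the execution-accuracy metric described above.

    If the gold query imposes an ordering (e.g. it has an ORDER BY clause), the
    two result tables must be identical row for row; otherwise they only need
    to contain the same set of rows.
    """
    if gold_is_ordered:
        return predicted == gold
    return set(predicted) == set(gold)

def execution_accuracy(examples: List[Tuple[List[Row], List[Row], bool]]) -> float:
    """Proportion of examples whose predicted query executes to the gold result."""
    correct = sum(results_match(pred, gold, ordered) for pred, gold, ordered in examples)
    return correct / len(examples)

# Toy usage: an unordered gold result matches a permuted prediction.
pred = [("mississippi",), ("alabama",)]
gold = [("alabama",), ("mississippi",)]
print(execution_accuracy([(pred, gold, False)]))  # 1.0
```

The unordered case matters because two queries that differ only in, say, alias choices or join order can still produce the same rows, which a purely syntactic comparison would miss.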
4 Generalization Challenges

Single-database semantic parsing requires recognizing unseen, in-domain entities, understanding new compositional structures, and generating executable representations. Cross-database semantic parsing introduces additional challenges, which we analyze and discuss below. We find that with existing XSP datasets, these challenges have been relatively under-explored. In our proposed setup, where we evaluate on datasets designed for SSP, these challenges become more prevalent.

4.1 Language Variation Across Domains

Generalizing to a new domain requires understanding domain-particular language, including entity mentions and their types, and how to map domain-specific phrases to SQL queries.

Identifying Entities In the XSP setting, identifying spans of tokens comprising relevant entities in the utterance is difficult, especially without access to the database contents. For example, in some databases, first and last names are stored in separate columns, so the corresponding tokens should appear in different parts of the SQL query. In other databases, a single column is used to store names. Even if a model is trained on databases where names are always stored in a single column, it still must generalize to databases where first and last names are stored in separate columns. This becomes more challenging with domain-specific entities. For example, in the Advising example in Figure 1, the span EECS 478 refers to two distinct database entities, rather than a single entity. This requires taking into account the database schema, for example by considering that the course table has distinct columns for department and number.

Mapping Entities to Columns Mapping a natural language utterance to an executable SQL query requires correctly identifying which columns and tables each entity should be compared with. Consider the following example (from GeoQuery):
NL: what states are next to the mississippi
SQL: select traverse from river where river_name = ‘mississippi’;
To correctly identify that the entity mississippi refers to a river_name in the river table, the system must have some domain knowledge. mississippi appears twice in the database: as a state and as a river. Even in the SSP setup, if the system has access to database contents, this entity mention’s type is ambiguous without reasoning about its context in the utterance. In the XSP setup, this problem becomes even more difficult. Database contents are not available at model inference time, so an exhaustive search over the database for matching columns is not possible. Without in-domain training data, the model must still be able to choose the most likely column match for each mentioned entity. However, sometimes the column name is explicitly mentioned in the utterance, making the matching problem much easier, as demonstrated in the Spider example of Figure 1.

We measure how prevalent the challenge of mapping from entities to column names is in our XSP setup. In each evaluation set, we estimate the proportion of examples whose entity mentions can be resolved using exact string matching between the utterance and the schema.5 Yavuz et al. (2018) perform a similar analysis manually on the WikiSQL dataset, estimating that roughly 54.1% of examples can be solved using exact match. The rightmost column in Table 1 compares all eight evaluation datasets and the Spider development set. In all eight evaluation datasets originally developed for SSP, fewer than half of examples explicitly mention column names for all entities in the utterance. In contrast, all column names are explicitly mentioned in at least 72.4% of examples in the Spider development set. These results demonstrate that addressing this challenge is critical to XSP on completely unseen domains.

5More details on this analysis are available in Appendix A.
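The sketch below shows one way such an estimate could be computed. It is our own approximation of the analysis (the actual procedure is described in Appendix A); the helper names and the regular expression are illustrative only.

```python
import re
from typing import List, Set, Tuple

def columns_compared_to_values(gold_sql: str) -> Set[str]:
    """Very rough extraction of the column names that the gold query compares
    against literal values, e.g. "state_name = 'mississippi'" -> {"state_name"}.
    A real analysis would parse the SQL rather than pattern-match it."""
    pattern = r"(\w+)\s*(?:=|<|>|like)\s*(?:'[^']*'|\d+)"
    return {m.group(1).lower() for m in re.finditer(pattern, gold_sql, re.IGNORECASE)}

def all_columns_mentioned(utterance: str, gold_sql: str) -> bool:
    """True if every compared column name appears verbatim in the utterance
    (underscores treated as spaces, case ignored)."""
    text = utterance.lower()
    return all(col.replace("_", " ") in text
               for col in columns_compared_to_values(gold_sql))

def percent_columns_mentioned(examples: List[Tuple[str, str]]) -> float:
    """examples: (utterance, gold SQL) pairs; returns a % Col. Mentioned estimate."""
    hits = sum(all_columns_mentioned(u, q) for u, q in examples)
    return 100.0 * hits / len(examples)

# The Spider example from Figure 1 counts as fully mentioned ("state" appears
# verbatim), while the GeoQuery example does not ("state_name" never appears).
print(percent_columns_mentioned([
    ("list the emails of the professionals who live in the state of Hawaii",
     "select email_address from professionals where state = 'Hawaii'"),
    ("how many people live in mississippi",
     "select population from state where state_name = 'mississippi'"),
]))  # 50.0
```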
Domain-Specific Phrases Generalizing to new domains requires mapping domain-specific phrases to implementations in SQL. Consider the following examples (from GeoQuery):
NL: what is the smallest city in arkansas
SQL: select city_name from city where population = (select min(population) from city where state_name = ‘arkansas’) and state_name = ‘arkansas’
NL: what is the smallest state that borders texas
SQL: select state_name from state where area = (select min(area) from state where state_name in (select border from border_info where state_name = ‘texas’)) and state_name in (select border from border_info where state_name = ‘texas’)
When smallest describes a city, it requires sorting by the city.population column, but when used to describe a state, it requires sorting by the state.area column, even though the state table also has a population column. Another phrase whose implementation may change in a new database is how many. This phrase is often mapped to the count operator, but is sometimes mapped to specific database columns. For example, in Figure 1, how many credits maps to the credits column in Advising, and how many people maps to the population column in GeoQuery. To scope the problem, Yu et al. (2018) avoid including examples in Spider that require commonsense reasoning, including examples of domain-specific phrases. However, understanding domain-specific phrases is an important capability for a domain-general semantic parsing system.

4.2 Novel Database and Query Structures

Cross-database semantic parsing requires generalizing to new database schemas, including larger tables and compositions of SQL components. Four of our evaluation datasets have at least ten tables in the database, with the largest database being ATIS with 32 tables.6 Figure 1 demonstrates that generating queries for large databases such as ATIS often requires reasoning about the relationships between many tables. In contrast, our training databases are relatively small, with one table per example in WikiSQL and an average of 5.1 per database in Spider. The queries themselves also vary in complexity. Finegan-Dollak et al. (2018) show that our eight target evaluation datasets range from using 1.4 to 6.4 tables per SQL query. We estimate that in Spider, an average query uses around 1.7 tables, which is more than that of only one target dataset (GeoQuery). Generalizing to new databases and datasets in our setting requires generating queries that use more tables than the training data.

6Finegan-Dollak et al. (2018) and Yu et al. (2018) provide comprehensive statistics on the databases and gold queries in our evaluation domains.

4.3 Dataset Conventions

In some evaluation datasets, the system must not only reason about the input utterance and schema, but about dataset-specific conventions that are not specified in the inputs. Consider the following example (from Scholar):
NL: papers on semantic parsing
SQL: select distinct T1.paperid from keyphrase as T2, paper as T1, paperkeyphrase as T3 where T2.keyphrasename = ‘semantic parsing’ and T3.keyphraseid = T2.keyphraseid and T1.paperid = T3.paperid;
The annotated SQL query for this utterance returns the paperid column from the paper table. However, the paper table also includes a column named title. The utterance does not specify whether the final column should be paperid or title. While both columns may seem like reasonable options, the dataset’s convention is that a list of papers should be presented using the paperid column, and a query selecting the title column will have an incorrect execution result. Such conventions are difficult, if not impossible, to learn without any in-domain training data.
Unfortunately, these cases occur in nearly all target datasets. We do not focus on addressing this type of generalization, and instead report how pervasive this problem is during error analysis. A possible direction for future work is to assume access to a small number of in-domain training examples and perform few-shot learning.

5 Model and Learning

Our model takes as input an utterance $u$ and a database schema $S$. Similar to Guo et al. (2019), we serialize $S$ into a sequence of wordpieces $s = t_0 + t_1 + \cdots + t_{|S|}$. Each $t_i$ is a serialization of table $T_i$, where $t_i = \langle\text{TAB}\rangle + \bar{T}_i + c_{i,0} + c_{i,1} + \ldots + c_{i,|T_i|}$. $\langle\text{TAB}\rangle$ is a token noting the beginning of a table schema serialization. $\bar{T}_i$ is the tokenization of table $T_i$’s name. Each $c_{i,j}$ is a serialization of a column $C_{i,j}$, where $c_{i,j} = CT_{i,j} + \bar{C}_{i,j}$. $CT_{i,j}$ is a token denoting the type of the column’s contents as provided by the database schema, for example numerical or text. $\bar{C}_{i,j}$ is the tokenization of the column’s name. The ordering of table schemas in $s$ and table columns in each $t_i$ is arbitrary.7

The input to the encoder is the concatenation of the query wordpieces and the serialized schema, represented as the sequence of tokens $x = \langle\text{CLS}\rangle + u + \langle\text{SEP}\rangle + s$. The inputs to the encoder are embedded and passed to a pretrained Transformer encoder such as BERT (Devlin et al., 2019). The decoder is an autoregressive Transformer decoder (Vaswani et al., 2017) that attends over the outputs of the encoder and the generated prefix.

We use a training set $\{x^{(l)}, y^{(l)}, S^{(l)}\}_{l=1}^{N}$ consisting of pairs of natural language utterances, gold SQL queries, and database schemas. We train the encoder and decoder end-to-end, minimizing the token-level cross-entropy loss of the gold query $y^{(l)}$. We update the parameters of the pre-trained encoder during training. For training data we use training sets developed for XSP. Importantly, to ensure we are evaluating the cross-database setting, our training data does not include examples from the evaluation databases. During inference, we use beam search and execute the highest-probability, syntactically correct prediction. We impose a maximum execution time of 45 seconds for predictions. More details on the model, learning, and evaluation setup are available in Appendix B.

7To discourage over-fitting to an arbitrary ordering of schema elements, we duplicate each Spider training example seven times with randomly permuted orderings. Duplicating seven times results in the number of Spider training examples roughly matching the number of WikiSQL training examples (Section 5.1).
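The serialization just described can be pictured with a short sketch. This is our paraphrase rather than the released implementation, and the special-token strings, whitespace tokenizer, and example schema are placeholders.

```python
from typing import List, Tuple

# A schema is a list of tables; each table is (table_name, [(column_type, column_name), ...]).
Schema = List[Tuple[str, List[Tuple[str, str]]]]

def serialize_schema(schema: Schema) -> List[str]:
    """Flatten a database schema into the token sequence
    <TAB> table_name type col_name type col_name ... for every table."""
    tokens: List[str] = []
    for table_name, columns in schema:
        tokens.append("<TAB>")
        tokens.extend(table_name.split("_"))
        for col_type, col_name in columns:
            tokens.append(col_type)          # column-type token, e.g. "text" or "number"
            tokens.extend(col_name.split("_"))
    return tokens

def encoder_input(utterance: str, schema: Schema) -> List[str]:
    """<CLS> + utterance tokens + <SEP> + serialized schema."""
    return ["<CLS>"] + utterance.lower().split() + ["<SEP>"] + serialize_schema(schema)

# Toy schema in the spirit of the Advising example from Figure 1.
schema: Schema = [
    ("course", [("text", "department"), ("number", "number"), ("number", "credits")]),
]
print(encoder_input("for eecs 478 , how many credits is it ?", schema))
```

In the actual model these strings would be further split into wordpieces by the pre-trained encoder’s tokenizer, and the copy mask described in the next subsection is defined over exactly these input positions.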
5.1 Generalization Strategies

While using pre-trained language models can help encode natural language text, we need other strategies to reason jointly about the language and the database schema in completely unseen domains. We focus on generalizing to domain-specific language and novel database structures.

Value Copying Similar to previous work (Jia and Liang, 2016; Gu et al., 2016; Gulcehre et al., 2016; See et al., 2017), we use a copy mechanism in the decoder. At each output step, the decoder generates a distribution over possible actions, including selecting a symbol from the output vocabulary, and copying a token from the input $x$. We only allow copying of certain token types, and mask out invalid copying actions, including independent wordpieces from $u$, $\langle\text{TAB}\rangle$ tokens, and column-type tokens. For table and column tokens, the name of the corresponding table or column is recovered by post-processing the predicted sequence $y$.

Previous approaches on Spider do not evaluate execution accuracy over the databases. Because the main metric does not require values in the predicted and gold queries to be the same, many approaches simplify the problem by using a placeholder token for all values during training. However, correctly generating values is critical for correctly executing predicted queries. To the best of our knowledge, our approach is the first to evaluate on Spider with execution accuracy and to generate SQL queries without placeholder values.

Multiple Data Sources We train with training data from Spider (Yu et al., 2018) and WikiSQL (Zhong et al., 2017). Spider includes examples of complex SQL queries grounded in multi-table databases, while queries in WikiSQL are compositionally simple and grounded in single web tables. We use WikiSQL to improve generalization to domain-specific data, as it covers a large variety of domains. WikiSQL contains many more tables than Spider, and prior work estimates that roughly half of WikiSQL examples require using domain knowledge to map from entity mentions to column names (Yavuz et al., 2018).

Different Output Space Guo et al. (2019) demonstrated improvements on Spider by deterministically mapping SQL to an intermediate representation, SemQL, and learning to predict outputs in this space. SemQL does not require predicting all of the tables in the FROM clause of the SQL query, or explicitly predicting the columns on which tables are joined. Instead of reasoning about foreign keys, the model predicts queries in the SemQL space, which are deterministically transformed to a final SQL query. In most cases, SemQL queries can be mapped back to SQL using database foreign key relations. We implement this aspect of SemQL as a mapping from SQL to a representation with an under-specified FROM clause, which we call SQLUF. Conversion from SQL to SQLUF removes from the FROM clause(s) of the SQL query any table that is implicitly referenced via a column elsewhere in the query, and removes JOIN clauses. Conversion from SQLUF to SQL restores these tables, and joins between tables are inferred by greedily identifying a path that connects all tables in the FROM clause, given foreign key relations.8 Examples of SQLUF are shown in Appendix C.

8Like SemQL, this conversion is not possible if foreign key relations between predicted tables are not provided or if a given table is referenced more than once in a FROM clause. This can also result in a lossy or ambiguous conversion if there are multiple foreign key relations between a pair of tables.
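The FROM-clause restoration step can be pictured as a search over the foreign-key graph. The sketch below is a simplification under our own assumptions, not the paper’s implementation: it connects the explicitly required tables by breadth-first search and emits the corresponding joins, assuming the tables are connected and each appears at most once (the cases the footnote above notes the conversion cannot handle).

```python
from collections import deque
from typing import Dict, List, Set, Tuple

# Foreign keys as an undirected adjacency map:
# table -> {neighboring table: (column in table, column in neighbor)}.
ForeignKeys = Dict[str, Dict[str, Tuple[str, str]]]

def restore_from_clause(required: Set[str], fks: ForeignKeys) -> str:
    """Rebuild a FROM clause that connects all required tables via foreign keys."""
    start = sorted(required)[0]
    # Breadth-first search gives a parent pointer (a shortest path) for every table.
    parent: Dict[str, str] = {start: start}
    queue = deque([start])
    while queue:
        table = queue.popleft()
        for neighbor in fks.get(table, {}):
            if neighbor not in parent:
                parent[neighbor] = table
                queue.append(neighbor)

    in_from: List[str] = [start]
    joins: List[str] = []
    for table in sorted(required - {start}):
        # Walk up the parent chain until reaching a table already in the FROM
        # clause, then emit joins outward from the already-connected side.
        chain = [table]
        while chain[-1] not in in_from:
            chain.append(parent[chain[-1]])
        chain.reverse()
        for anchor, new in zip(chain, chain[1:]):
            if new not in in_from:
                col_new, col_anchor = fks[new][anchor]
                joins.append(f"join {new} on {new}.{col_new} = {anchor}.{col_anchor}")
                in_from.append(new)
    return "from " + " ".join([start] + joins)

# Toy foreign keys in the spirit of the Scholar example (column names illustrative).
fks: ForeignKeys = {
    "paper": {"paperkeyphrase": ("paperid", "paperid")},
    "paperkeyphrase": {"paper": ("paperid", "paperid"),
                       "keyphrase": ("keyphraseid", "keyphraseid")},
    "keyphrase": {"paperkeyphrase": ("keyphraseid", "keyphraseid")},
}
print(restore_from_clause({"paper", "keyphrase"}, fks))
```

In this toy case the intermediate paperkeyphrase table is restored automatically even though only paper and keyphrase are named in the under-specified query.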
6 Experiments

Comparison to Existing XSP Systems Our best model performs well on the Spider development set. Table 3 compares our system with top systems on the Spider leaderboard.9 On the development set, our model performs competitively with contemporaneous systems. Table 3 shows that Spider performance is correlated with the choice of pre-trained model: the public BERTLARGE is better than BERTBASE. To further improve performance, we experiment with an enhanced pre-trained model BERTLARGE+ following the recipe proposed by Liu et al. (2019). The BERTLARGE+ model is trained with an 8K batch size and 100k training steps, and, in contrast to RoBERTa, is only trained on the Wiki+Books Corpus used in Devlin et al. (2019). Training our model to predict value placeholders (– Value Copying) instead of copying values from the input results in a performance drop, showing a benefit of modeling values even when ignored by the metric.10

9https://yale-lily.github.io/spider. In Table 3, we include non-anonymized leaderboard submissions, and for anonymous systems, the most recent submission for duplicate systems.
10About 55% of examples in Spider do not require copying values from the input utterance to the gold query.

System | Exact Set Match (Dev.)
Top Leaderboard Systems (As of May 1, 2020)
RYANSQL v2 + BERT (Choi et al., 2020) | 70.6
RYANSQL + BERT (Choi et al., 2020) | 66.6
RATSQL v2 + BERT (Anonymous) | 65.8
IRNet++ + XLNET (Anonymous) | 65.5
IRNet + BERT (Guo et al., 2019) | 61.9
RASQL + BERT (Anonymous) | 60.8
GIRN + BERT (Anonymous) | 60.2
CNSQL (Anonymous) | 58.0
EditSQL + LSL + BERT (Anonymous) | 57.9
GNN + Bertrand-DR (Kelkar et al., 2020) | 57.9
Ours with BERTLARGE+ | 65.8
Ours with BERTLARGE | 63.2
Ours with BERTBASE | 60.4
Ours with BERTLARGE+ – Value Copying | 55.1

Table 3: Performance on the Spider development set using Spider’s official evaluation metric (Exact Set Match), ordered by the development set performance. For our systems, we report the Exact Set Match of the best of three trials. While the focus of our paper is not on Spider performance, our system still performs well.

XSP on Unseen Datasets Table 2 shows results on all evaluation data, including datasets originally studied in the SSP setting. We report results on the filtered set (Section 3) as well as the full set of these datasets. A large portion of the examples in datasets such as Restaurants and Advising yield empty execution results. This shows the need to also evaluate on the filtered set, where incorrect spurious predictions are much less likely to result in the same table as a gold query with an empty table result. Second, while execution accuracy on Spider is relatively high, performance on the other evaluation datasets is much lower.

Dataset | Metric | # Examples | Our best | –WikiSQL | –SQLUF | –Value copying | Empty Prior
ATIS | Execution | 289 (486) | 0.8 (11.9) | 0.5 (11.9) | 0.8 (11.9) | 0.1 (10.8) | 0.0 (11.9)
GeoQuery | Execution | 532 (598) | 41.6 (40.0) | 35.6 (35.0) | 34.7 (33.4) | 2.2 (5.6) | 0.0 (4.0)
Restaurants | Execution | 27 (378) | 3.7 (45.2) | 3.7 (46.3) | 0.0 (46.6) | 0.0 (51.1) | 0.0 (51.6)
Academic | Execution | 180 (196) | 8.2 (12.1) | 6.1 (9.4) | 5.7 (9.0) | 2.8 (7.7) | 0.0 (4.1)
IMDB | Execution | 107 (131) | 24.6 (33.3) | 24.3 (32.3) | 23.1 (32.3) | 0.0 (14.3) | 0.0 (13.0)
Yelp | Execution | 54 (128) | 19.8 (49.2) | 16.7 (47.9) | 14.8 (47.9) | 4.9 (53.1) | 0.0 (41.4)
Scholar | Execution | 394 (599) | 0.5 (6.8) | 0.4 (7.4) | 0.5 (8.6) | 0.2 (7.8) | 0.0 (9.3)
Advising | Execution | 309 (2858) | 2.3 (35.2) | 1.2 (35.7) | 1.4 (37.3) | 0.0 (38.0) | 0.0 (38.3)
Spider | Execution | 1034 | 69.0 | 68.4 | 65.1 | 33.9 | 4.7
Spider | Exact Set Match | 1034 | 65.0 | 65.1 | 60.5 | 54.1 | –

Table 2: Execution accuracy on the XSP task for the eight evaluation datasets and Spider, comparing our best system with baselines and independent ablations. For Spider, we also report performance using Exact Set Match, the official Spider metric. Results are averaged over three trials. The full set results are reported in parentheses. The empty prior represents the baseline accuracy of returning empty set for all queries. The accuracies on the re-purposed datasets are much lower than the Spider performance.

We find that all three techniques for addressing generalization challenges are effective. First, including WikiSQL in the training data results in better performance than only using Spider training data. We hypothesize that this is due to the additional domains in WikiSQL, as well as the larger proportion of examples that require mapping from entities in the utterance to column names (Yavuz et al., 2018).
Using SQLUF also improves performance, as it produces queries coherent with respect to the schema, for example only selecting columns from tables where the column exists. Finally, using value placeholders significantly reduces execution accuracy in all datasets. While masking values decreases Exact Set Match on Spider by 10.9%, its effect on execution accuracy can be devastating both for Spider and the eight evaluation datasets. This demonstrates the need to consider execution results when evaluating semantic parsing systems.

Error Analysis For each evaluation dataset, we analyze twenty random predictions from the filtered subset. Examples of the most common error types are shown in Figure 2, along with the proportion of analyzed predictions in the eight target datasets that contain the error type. Appendix D discusses the complete results of error analysis. 40% of errors are caused by comparing an entity to the wrong column, for example searching for ‘James Bond’ in the director.name column when it actually refers to a movie.title. This usually requires using domain knowledge to identify columns that are likely to contain the mentioned entity (Section 4.1). 31.1% of errors are caused by missing constraints specified in the utterance, for example by failing to use a relevant entity in the predicted query. 28.8% of errors are also caused by incorrectly identifying entity spans, for example by treating FIN 340 as a single entity rather than two separate entities in the database (Section 4.1). Another common error is predicting the wrong final column. While choosing what to return is difficult for the model due to understanding domain-specific phrases such as how many (20.0% of errors; Section 4.1), sometimes the errors are due to dataset conventions (26.9% of errors; Section 4.3). For example, the paperid column should be selected instead of the title column in Scholar. Such dataset conventions could be learned through few-shot learning, where a small number of in-domain training examples are available. Our system is required to generalize to larger databases than it was trained on, including more complex compositions of tables (Section 4.2).

40.0% → Entity-column matching (IMDBXSP)
NL: List “James Bond” directors
Pred.: select director.name from directed_by join director on directed_by.did = director.did where director.name = ‘James Bond’;
Gold: select T1.name from directed_by as T2, director as T1, movie as T3 where T1.did = T2.did and T3.mid = T2.mid and T3.title = ‘James Bond’;

31.3% → Missing constraint (AcademicXSP)
NL: return me the year of “Making database systems usable”
Pred.: select publication.year from publication;
Gold: select T1.year from publication as T1 where T1.title = ‘Making database systems usable’;

28.8% → Entity identification and copying (AdvisingXSP)
NL: What’s the number of times FIN 340 has been offered?
Pred.: select count(*) from course join course_offering on course.course_id = course_offering.course_id where course.name = ‘FIN 340’;
Gold: select count(distinct T1.offering_id) from course as T2, course_offering as T1 where T2.course_id = T1.course_id and T2.department = ‘FIN’ and T2.number = 340;

26.9% → Ambiguous final column (ScholarXSP)
NL: papers from 2014
Pred.: select distinct paper.title from paper where paper.year = 2014;
Gold: select distinct T1.paperid from paper as T1 where T1.year = 2014;

20.0% → Wrong final column (GeoQueryXSP)
NL: how many people live in austin
Pred.: select count(*) from city where city.state_name = ‘austin’;
Gold: select T1.population from city as T1 where T1.city_name = ‘austin’;

Figure 2: The most common error types made by our best system, including an example. The subscript indicates the results are in the XSP setting. Each prediction may be annotated with more than one error type.

For example, while SQLUF can be used to represent most gold queries in most evaluation datasets (shown in Appendix C), in ATIS, only 17.3% of gold queries are covered by SQLUF. Most of the uncovered examples require mapping two columns, to_airport and from_airport, in the same table flight to the same foreign key airport_service.airport_code. This compositional structure is not covered by SQLUF, but is critical to perform well on ATIS.

7 Discussion

We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen at training time. While this task has been studied through datasets developed specifically for XSP, we propose a more holistic evaluation for XSP, where we also evaluate on datasets originally studied in a setting where in-domain training data is available. We identify several new generalization challenges that arise when evaluating in our proposed setup, including identifying entities, mapping entities and domain-specific phrases to a database schema, and generalizing to more complex database schemas. Using a model that performs well on evaluation data designed for XSP, we are able to move towards addressing some of the generalization challenges on these additional evaluation sets without any in-domain training data. Our results and analysis demonstrate the need for developing more holistic evaluation of cross-database semantic parsing using a more diverse set of language and databases.

Several significant generalization challenges remain, including improving commonsense and in-domain reasoning and table schema understanding capabilities. Some examples in our filtered evaluation set still require reasoning about dataset conventions that are difficult to acquire without in-domain training examples. Future work could also make the stronger assumption that a small number of in-domain training examples are available, and train and evaluate in a few-shot setting.

Acknowledgments

The first author is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1650441. We thank the Google Research Language and Cornell NLP groups for their comments and feedback during the project’s development. We also thank Jing Li for pre-training the BERTLARGE+ model, and Philip Massey, Zuyao Li, Angelica Chen, Karl Pichotta, and Francesco Piccinno for their contributions to the codebase. Finally, we thank the anonymous reviewers for their comments and suggestions.

References

Yoav Artzi and Luke Zettlemoyer. 2011.
Bootstrapping semantic parsers from conversations. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 421–432, Edinburgh, Scotland, UK. Association for Computational Linguistics. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49–62. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Ben Bogin, Jonathan Berant, and Matt Gardner. 2019a. Representing schema structure with graph neural networks for text-to-SQL parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4560–4565, Florence, Italy. Association for Computational Linguistics. Ben Bogin, Matt Gardner, and Jonathan Berant. 2019b. Global reasoning over database structures for textto-SQL parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3650–3655, Hong Kong, China. Association for Computational Linguistics. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. pages 859–865. DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2020. RYANSQL: Recursively applying sketch-based slot fillings for complex text-to-sql in cross-domain databases. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In HUMAN LANGUAGE TECHNOLOGY: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Association for Computational Linguistics. Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351–360, Melbourne, Australia. Association for Computational Linguistics. Ofer Givoli and Roi Reichart. 2019. Zero-shot semantic parsing for instructions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4454–4464, Florence, Italy. Association for Computational Linguistics. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. 
Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140–149, Berlin, Germany. Association for Computational Linguistics. Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in crossdomain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535, Florence, Italy. Association for Computational Linguistics. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990. Jonathan Herzig and Jonathan Berant. 2018. Decoupling structure and lexicon for zero-shot semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1619–1629, Brussels, Belgium. Association for Computational Linguistics. Jonathan Herzig and Jonathan Berant. 2019. Don’t paraphrase, detect! rapid and effective data collection for semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International 8382 Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3801–3811, Hong Kong, China. Association for Computational Linguistics. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963–973, Vancouver, Canada. Association for Computational Linguistics. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643–1652, Brussels, Belgium. Association for Computational Linguistics. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821–1831, Vancouver, Canada. Association for Computational Linguistics. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Association for Computational Linguistics. Amol Kelkar, Rohan Relan, Vaishali Bhardwaj, Saurabh Vaichal, and Peter Relan. 2020. BertrandDR: Improving text-to-SQL using a discriminative re-ranker. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526, Copenhagen, Denmark. Association for Computational Linguistics. Igor Labutov, Bishan Yang, and Tom Mitchell. 2018. Learning to learn semantic parsers from natural language supervision. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1676–1690, Brussels, Belgium. Association for Computational Linguistics. Fei Li and H. V. Jagadish. 2014. Constructing an interactive natural language interface for relational databases. Proceedings of the VLDB Endowment, 8(1):73–84. Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, and Matt Gardner. 2019. Grammar-based neural text-to-SQL generation. arXiv preprint arXiv:1905.13326. Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D. Ernst. 2018. NL2Bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 599–609, Berlin, Germany. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler context-dependent logical forms via model projections. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1456– 1465, Berlin, Germany. Association for Computational Linguistics. Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully statistical approach to natural language interfaces. In 34th Annual Meeting of the Association for Computational Linguistics, pages 55–61, Santa Cruz, California, USA. Association for Computational Linguistics. Qingkai Min, Yuefeng Shi, and Yue Zhang. 2019. A pilot study for Chinese SQL semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3643– 3649, Hong Kong, China. Association for Computational Linguistics. Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation. In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 574–584. IEEE. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 8383 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–1480, Beijing, China. Association for Computational Linguistics. Hoifung Poon. 2013. Grounded unsupervised semantic parsing. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 933–943, Sofia, Bulgaria. Association for Computational Linguistics. Ana-Maria Popescu, Alex Armanasu, Oren Etzioni, David Ko, and Alexander Yates. 2004. Modern natural language interfaces to databases: Composing statistical parsing with semantic tractability. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 141–147, Geneva, Switzerland. COLING. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. 2019. Generating logical forms from graph representations of text and entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 95–106, Florence, Italy. Association for Computational Linguistics. Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to executable formal queries. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2238–2249, New Orleans, Louisiana. Association for Computational Linguistics. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 133–141, Hong Kong, China. Association for Computational Linguistics. Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In International Joint Conference on Artificial Intelligence (IJCAI). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332–1342, Beijing, China. Association for Computational Linguistics. Yuk Wah Wong and Raymond Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 439–446, New York City, USA. Association for Computational Linguistics. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341– 1350, Berlin, Germany. Association for Computational Linguistics. Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, , and Thomas Dillig. 2017. SQLizer: Query synthesis from natural language. 
In International Conference on Object-Oriented Programming, Systems, Languages, and Applications, ACM, pages 63:1–63:26. Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019. Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5450– 5461, Hong Kong, China. Association for Computational Linguistics. Semih Yavuz, Izzeddin Gur, Yu Su, and Xifeng Yan. 2018. What it takes to achieve 100% condition accuracy on WikiSQL. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1702–1711, Brussels, Belgium. Association for Computational Linguistics. Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR), pages 476–486. IEEE. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. Association for Computational Linguistics. 8384 Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962– 1979, Hong Kong, China. Association for Computational Linguistics. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019b. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511–4523, Florence, Italy. Association for Computational Linguistics. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2, pages 1050–1055. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 678–687, Prague, Czech Republic. 
Association for Computational Linguistics. Luke Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 976–984, Suntec, Singapore. Association for Computational Linguistics. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI’05, pages 658–666, Arlington, Virginia, United States. AUAI Press. Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019. Editing-based SQL query generation for cross-domain context-dependent questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5341–5352, Hong Kong, China. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103. 8385 A Data Details Measuring Exact Column Match in Utterances For each example, we identify columns used for direct comparison with values in the correct SQL query (not considering columns used to link two tables, order or group results, or in the top-level SELECT statement). We then heuristically identify whether any of the used column names appear in the utterance by canonicalizing the column name (e.g., replacing underscores with spaces) and performing a basic substring match. Because slight variants of column names may appear in the utterance, our reported results show an lower bound. Heuristically Filtering Datasets We use several heuristics to filter evaluation data. Although we cannot automatically filter out all examples where database conventions are required to select the correct final column, we found that a good heuristic is filtering out examples that require selecting more than one final column. For example, in Advising, when an utterance asks for a list of classes, the labeled query always selects four columns from the course table: department, name, number, and semester. We remove all examples where a numerical or text value is not copiable from the input utterance, except for the numbers 0 and 1, which are often not copied from the input (for example, limiting table results in LIMIT 1). We also remove all examples that result in an empty table, and examples where the gold query returns a count, and the resulting table is [0]. B Experimental Details To choose model and learning hyperparameters, we began with the hyperparameters of Shaw et al. (2019), and performed a small number of experiments to improve performance on Spider. Model For our encoder, we use a pre-trained BERT model (Devlin et al., 2019). All input tokens use the same segment ID. We use absolute positional embeddings. The word embedding size for all tokens is 128. We use wordpiece tokenization for the input utterance. To tokenize column and table names, we replace underscores with spaces, and then apply wordpiece tokenization. The outputs of the encoder are transformed using a linear layer before being used by the decoder. 
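The column-match measurement in Appendix A and the schema-name tokenization described above both start from the same canonicalization step (replacing underscores with spaces). The snippet below is a minimal sketch of that exact-match heuristic with illustrative names; it is not the authors' code.

```python
import re

def canonicalize(name: str) -> str:
    """Replace underscores with spaces and lowercase, e.g. 'CITY_NAME' -> 'city name'."""
    return re.sub(r"_+", " ", name).strip().lower()

def column_mentioned(column_name: str, utterance: str) -> bool:
    """Basic substring match between the canonicalized column name and the
    utterance; paraphrased mentions are missed, so counts derived from this
    check are a lower bound."""
    return canonicalize(column_name) in utterance.lower()

# Illustrative usage:
print(column_mentioned("city_name", "what is the city name of each restaurant?"))  # True
```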
We use a two-layer Transformer decoder (Vaswani et al., 2017) with eight attention heads. We use a gated copying mechanism, supervising gating decisions during training.

Learning During training, we use a maximum input size of 512 tokens. During training and inference, we use a maximum decoding length of 100. For training, we create seven examples per original Spider training example, with randomized permutations of table ordering, and column ordering within the table spans. We use a batch size of 32 and train for 30,000 update steps. We apply teacher-forcing during training. During training, we apply dropout at a rate of 0.3 on the embeddings of decoder tokens and within the decoder Transformer. We use the Adam optimizer (Kingma and Ba, 2014). We increase the learning rate linearly from 0 to 0.00008 until 5,625 steps, then decrease it linearly to 0.0 by the end of training. The encoder begins as a pre-trained BERT model, whose parameters we freeze for the first 2,100 updates.

Inference For each evaluation example, we perform beam search against our trained model with a beam size of 100. Among the 10 most probable predictions from the beam search, we choose the highest-probability prediction that is a syntactically valid SQL query. To check syntactic validity, we test each prediction's execution against an emptied copy of the database (emptied to ensure we are not using database contents) and pass over queries which are inexecutable. We test the top 10 items in the beam to limit evaluation time, and find that in nearly all cases, if a syntactically correct prediction exists in the beam, it appears as one of the top 10 items. To correctly resolve predicted SQLUF queries, we use gold-standard foreign keys for all target databases. Some queries are highly inefficient, and can take minutes to execute due to the sizes of the databases. To reduce the execution time of gold queries, we add database indices where possible. Even with these indices, some predicted queries still take a long time to execute. To make evaluation tractable, we use a timeout of 45 seconds per predicted query. If the query has not executed after 45 seconds, we return the empty table. Figure 3 shows the influence of the timeout threshold on the model performance for our best model on each evaluation dataset. By 45 seconds, the vast majority of predictions can execute, and execution accuracy has stabilized.

[Figure 3: four panels plotting (a) Exec. Acc. (Full), (b) Prop. Exec. (Full), (c) Exec. Acc. (Filtered), and (d) Prop. Exec. (Filtered) against the timeout threshold, with one line per dataset (ATIS, GeoQuery, Restaurants, Academic, IMDB, Yelp, Scholar, Advising, Spider).]
Figure 3: Influence of the timeout threshold on model performance for each evaluation dataset. The x-axis shows the maximum execution time (in seconds) we allow before terminating query execution. In our experiments, we use a timeout of 45 seconds. We show how the execution accuracy is influenced by the cutoff time for the full (a) and filtered (c) evaluation sets. We also show the proportion of queries which finished execution for the full (b) and filtered (d) evaluation sets. Results are generated using the model which achieved the highest execution accuracy on the Spider development set. By 45 seconds, the majority of queries return an execution result, and execution accuracy has stabilized.
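To make the timeout behavior concrete, the check can be implemented by interrupting a query once the 45-second budget is exhausted and falling back to the empty table. The snippet below is a minimal sketch rather than the authors' evaluation code; it assumes an SQLite backend, assumes the beam is sorted by model score, and the helper names, file paths, and fallback behavior are illustrative.

```python
import sqlite3
import threading

def execute_with_timeout(db_path, query, timeout_s=45.0):
    """Run `query` against the database at `db_path`; return the rows,
    or [] (the empty table) if the query errors out or exceeds the budget."""
    conn = sqlite3.connect(db_path)
    timer = threading.Timer(timeout_s, conn.interrupt)  # abort long-running queries
    timer.start()
    try:
        rows = conn.execute(query).fetchall()
    except sqlite3.Error:  # inexecutable or interrupted
        rows = []
    finally:
        timer.cancel()
        conn.close()
    return rows

def first_valid_prediction(beam, empty_db_path):
    """Pick the most probable of the top-10 beam items that executes
    against an emptied copy of the database (a syntactic-validity proxy)."""
    for sql in beam[:10]:
        try:
            sqlite3.connect(empty_db_path).execute(sql)
            return sql
        except sqlite3.Error:
            continue
    return beam[0]  # fallback when no candidate is valid (behavior assumed)
```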
Dataset: % of Gold Queries Covered by SQLUF
ATIS: 17.3
GeoQuery: 97.5
Restaurants: 100.0
Academic: 85.7
IMDB: 94.7
Yelp: 81.3
Scholar: 92.2
Advising: 87.4
Spider: 97.4
Table 4: Estimates of coverage of SQLUF across each evaluation dataset.

C SQLUF Examples
Examples of our intermediate representation SQLUF are shown in Figure 4. Table 4 shows an estimate of the proportion of all gold queries that can be generated by our system.

SQL: SELECT people.name FROM people JOIN films ON people.id = film.person_id WHERE films.id = 5
SQLUF: SELECT people.name UF WHERE films.id = 5
SQL: SELECT people.name FROM people JOIN films ON people.id = film.person_id
SQLUF: SELECT people.name UF films
SQL: SELECT cities.state, count(*) FROM cities GROUP BY cities.state
SQLUF: SELECT cities.state, count(*) UF GROUP BY cities.state
SQL: SELECT count(*) FROM cities
SQLUF: SELECT count(*) UF cities
SQL: SELECT student_id FROM student_course_registrations UNION SELECT student_id FROM student_course_attendance
SQLUF: SELECT student_course_registrations.student_id UF UNION SELECT student_course_attendance.student_id UF
SQL: SELECT table_1.id, table_3.id FROM table_1 JOIN table_2 ON table_1.table_2_id = table_2.id JOIN table_3 ON table_2.table_3_id
SQLUF: SELECT table_1.id, table_3.id UF table_2
Figure 4: Examples of SQLUF, which uses under-specified FROM clauses. Tables are omitted from the FROM clause unless a column belonging to the given table is not mentioned elsewhere in the query. Under certain assumptions, reconstruction of the original SQL is possible given schema information.

D Supplementary Error Analysis
Table 5 shows the rate of occurrence of eight types of errors in each evaluation dataset, out of twenty random incorrect predictions. Below, we give more detailed descriptions of these error types, as well as examples from our model predictions.

Entity Understanding We consider two types of errors: entity identification errors, and column matching errors. Entity identification errors include copying the wrong span of tokens that comprise an entity into the output. Column matching errors include comparing an entity to the wrong database column type. Examples for these errors are included in Figure 2.

Final Column We consider two types of final column errors: incorrect predictions, and ambiguous predictions. In incorrect predictions, the final column type is obviously incorrect with respect to the utterance. In ambiguous predictions, the final column is a reasonable prediction with respect to the utterance, but due to the dataset conventions, is incorrect. Examples for these errors are included in Figure 2.

Database Understanding We consider two types of database understanding errors: syntactically incorrect predictions, and predictions which do not compose the tables correctly. Although queries in the second category execute successfully, their resulting tables are incorrect due to how the database is structured. For example, in Restaurants, although the restaurant table has a city_name column, the correct way to construct a query corresponding to an utterance like how many places for french food are there in palo alto? is to traverse the location table instead.

Query Implementation We consider two types of errors related to incorrectly implementing the utterance's intent in SQL: missing and incorrect constraints. Missing constraints involve ignoring a constraint mentioned in the utterance, such as the paper title constraint in the Academic example in Figure 2.
Incorrect constraints are incorrect for reasons besides those described above, such as entity-column matching. For example (from GeoQuery): NL: what is the smallest state in the usa Pred.: select state.state name from state where state.country name = ‘usa’ order by state.population limit 1; Gold: select T1.state name from state as T1 where T1.area = (select min(T2.area) from state as T2); Our model’s prediction incorrectly orders by population rather than the state’s area. E Additional Results on Spider Table 6 shows the performance on the Spider (Yu et al., 2018) development set for our single best model split by hardness level. Table 7 shows the F1 over query components of our best model on the Spider development set. 8388 Error Type Spider Restaurants IMDB ATIS Academic Scholar Yelp GeoQuery Advising Entity understanding Identification 4 9 3 2 5 9 5 5 8 Column match 6 9 6 20 4 5 8 4 8 Final column Incorrect 5 0 3 3 8 4 1 5 8 Ambiguous 0 0 0 16 3 9 11 0 4 Database understanding Syntax error 0 2 15 0 0 1 1 0 1 Table composition 6 6 0 1 2 0 1 2 1 Query implementation Missing constraint 2 6 2 9 10 4 4 3 12 Incorrect constraint 2 0 0 0 0 0 0 6 0 Table 5: For each evaluation dataset, we analyzed twenty random incorrect predictions of our best model. We categorized each into one or more error categories, including errors of understanding entities mentioned in the utterance, generating the correct top-level column selection, understanding the database structure, and correctly implementing the constraints of the utterance in SQL. We report the number of predictions, out of the twenty analyzed per dataset, which had each error type. System Easy Medium Hard Extra Hard Ours with BERTLARGE+ 83.2 71.1 57.5 34.7 Table 6: Spider’s official evaluation metric results for our best model split by hardness level on the Spider development set. Component F1 SELECT 89.2 SELECT (no AGG) 90.4 WHERE 71.7 WHERE (no OP) 76.3 GROUP (no HAVING) 82.0 GROUP 79.4 ORDER 82.9 AND/OR 98.1 IEUN 46.5 KEYWORDS 89.5 Table 7: Per-component F1 of our best model on the Spider development set.
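The caption of Figure 4 (Appendix C) notes that, under certain assumptions, the original FROM clause of a SQLUF query can be reconstructed given schema information. One way to do this, sketched below with illustrative names and not reflecting the authors' actual procedure, is to collect the tables whose columns are referenced (plus any tables listed explicitly after UF) and connect them through the foreign-key graph.

```python
from collections import deque

def tables_for_from_clause(mentioned_tables, fk_edges):
    """Return a set of tables that connects `mentioned_tables` using
    foreign-key edges (pairs of table names). The result is a connecting
    set found via BFS paths from an arbitrary seed table, not necessarily
    the minimal join set; ON conditions can then be read off `fk_edges`."""
    if not mentioned_tables:
        return set()
    graph = {}
    for a, b in fk_edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    needed = set(mentioned_tables)
    seed = next(iter(needed))
    parent = {seed: None}               # BFS tree rooted at the seed table
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    covered = set()
    for table in needed:
        while table is not None:        # walk the BFS path back to the seed
            covered.add(table)
            table = parent.get(table)
    return covered

# Hypothetical schema in the spirit of Figure 4: films.person_id -> people.id
print(tables_for_from_clause({"people", "films"}, [("people", "films")]))
```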
2020
742
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8389–8401 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8389 Predicting the Focus of Negation: Model and Error Analysis Md Mosharaf Hossain* Kathleen Hamilton** Alexis Palmer** Eduardo Blanco* *Department of Computer Science and Engineering, University of North Texas [email protected], [email protected] **Department of Linguistics, University of North Texas [email protected], [email protected] Abstract The focus of a negation is the set of tokens intended to be negated, and a key component for revealing affirmative alternatives to negated utterances. In this paper, we experiment with neural networks to predict the focus of negation. Our main novelty is leveraging a scope detector to introduce the scope of negation as an additional input to the network. Experimental results show that doing so obtains the best results to date. Additionally, we perform a detailed error analysis providing insights into the main error categories, and analyze errors depending on whether the model takes into account scope and context information. 1 Introduction Negation is a complex phenomenon present in all human languages. Horn (2010) put it beautifully when he wrote “negation is what makes us human, imbuing us with the capacity to deny, to contradict, to misrepresent, to lie, and to convey irony.” Broadly speaking, negation “relates an expression e to another expression with a meaning that is in some way opposed to the meaning of e” (Horn and Wansing, 2017). The key challenge to understanding negation is thus to figure out the meaning that is in some way opposed to e—a semantic and highly ambiguous undertaking that comes naturally to humans in everyday communication. Negation is generally understood to carry positive meaning, or in other words, to suggest an affirmative alternative. For example, John didn’t leave the house implicates that John stayed inside the house. Hasson and Glucksberg (2006) show that comprehending negation involves considering the representation of affirmative alternatives. While not fully understood, there is evidence that negation involves reduced access to the affirmative mental representation (Djokic et al., 2019). Orenes et al. (2014) provide evidence that humans switch to the affirmative alternative in binary scenarios (e.g., from not red to green when processing The figure could be red or green. The figure is not red). In such multary scenarios, however, humans keep the negated representation unless the affirmative interpretation is obvious from context (e.g., humans keep not red when processing The figure is red, green, yellow or blue. The figure is not red.). From a linguistic perspective, negation is understood in terms of scope and focus (Section 2). The scope is the part of the meaning that is negated, and the focus is the part of the scope that is most prominently or explicitly negated (Huddleston and Pullum, 2002). Identifying the focus is a semantic task, and it is critical for revealing implicit affirmative alternatives. Indeed, the focus of negation usually contains only a few tokens, and it is rarely grammatically modified by a negation cue such as never or not. 
Only the focus of a negation is actually intended to be negated, and the resulting affirmative alternatives range from implicatures to entailments as exemplified below (focus is underlined, and affirmative alternatives are in italics): • He didn’t report the incident to his superiors until confronted with the evidence. He reported the incident to his superiors, but not until confronted with the evidence. • The board didn’t learn the details about the millions of dollars wasted in duplicate work. The board learnt about the millions of dollars wasted in duplicate work, but not the details. In this paper, we experiment with neural networks for predicting the focus of negation. We work with the largest corpus annotating the focus of negation (PB-FOC, 3,544 negations), and obtain the best results to date. The main contributions of this paper are: (a) neural network architecture taking into account the scope of negation and context, (b) experimental results showing that scope information as predicted by an automated scope detector is more 8390 beneficial than context, (c) quantitative analysis profiling which foci are easier and harder to predict, and (d) detailed qualitative analysis providing insights into the errors made by the models. Crucially, the scope detector we leverage to predict focus is trained with CD-SCO, a corpus created independently of PB-FOC (Section 2). Our results suggest that negation scopes may transfer across (a) genres (short stories vs. news) and (b) negation types (all negations vs. only verbal negations, i.e., when the negation cue modifies a verb). 2 Background It is generally understood that negation has scope and focus. Scope is “the part of the meaning that is negated” and includes all elements whose individual falsity would make the negated statement strictly true (Huddleston and Pullum, 2002). Consider the following statement (1) John doesn’t know exactly how they met. This statement is true if one or more of the following propositions are false: (1a) Somebody knows something, (1b) John is the one who knows, (1c) exactly is the manner of knowing, and (1d) how they met is what is known. Thus, the scope of the negation in statement (1) is (1a–d). The focus of a negation is “the part of the scope that is most prominently or explicitly negated”, or in other words, the element of the scope that is intended to be interpreted as false to make the overall negative true (Huddleston and Pullum, 2002). Determining the focus consists in pinpointing which parts of the scope are intended to be interpreted as true and false given the original statement. Without further context, one can conclude that the intended meaning of statement (1) is John knows how they met, but not exactly, or alternatively, that (1a–b, 1d) are intended to be interpreted as true, and (1c) as false. This interpretation results from selecting as focus (1c), i.e., the manner of knowing. We summarize below corpora annotating scope and focus of negation, emphasizing the ones we work with. The survey by Jim´enez-Zafra et al. (2020) provides a more comprehensive analysis including corpora in languages other than English. Corpus Annotating Scope. In the experiments described here, we work with a scope detector trained with CD-SCO (Morante and Daelemans, 2012), which annotates negation cues and negation scopes in two stories by Conan Doyle: The Hound of the Baskervilles and The Adventure of Wisteria Lodge. 
The corpus contains 5,520 sentences, 1,227 of which contain a negation. CD-SCO annotates all negations, including verbs (e.g., I fail to see how you could have done more), adverbs (e.g., It was never proved that [...]), determiners (e.g., There is no friend like [...]), pronouns (e.g., [...] has yielded nothing to a careful search), affixes (e.g., The inexplicable tangle seemed [...]), and others. Other corpora annotating scope in English include efforts with biomedical texts (Vincze et al., 2008) and working with reviews (Councill et al., 2010; Konstantinova et al., 2012).

Corpora Annotating Focus. Although focus of negation is defined as a subset of the scope, there is no corpus annotating both of them in the same texts. We work with PB-FOC, the largest publicly available corpus annotating focus of negation (Blanco and Moldovan, 2011). PB-FOC annotates the focus of the negations marked with the M-NEG role in PropBank (Palmer et al., 2005), which in turn annotates semantic roles on top of the Penn TreeBank (Taylor et al., 2003). As a result, PB-FOC annotates the focus of 3,544 verbal negations (i.e., when a negation cue such as never or not syntactically modifies a verb). As per the authors, the annotation process consisted of selecting the semantic role most likely to be the focus. Therefore, focus annotations in PB-FOC are always all the tokens corresponding to a semantic role of the (negated) verb. Finally, the M-NEG role is chosen when the focus is the verb. The annotations in PB-FOC were carried out taking into account the previous and next sentences. We provide examples below, and Section 5 provides additional examples. We indicate the semantic roles in PropBank with square brackets, and the role selected as focus is underlined.
• Even if [that deal]ARG1 is[n't]M-NEG [revived]verb, NBC hopes to find another.
• [A decision]ARG1 is[n't]M-NEG [expected]verb [until some time next year]M-TMP.
• But [quite a few money managers]ARG0 are[n't]M-NEG [buying]verb [it]ARG1.

Role: % foci | % verbs with role | % role is focus
ARG0: 4.09 | 67.44 | 6.06
ARG1: 43.76 | 90.47 | 48.36
ARG2: 5.53 | 14.24 | 38.81
ARG3: 0.39 | 1.49 | 26.42
ARG4: 0.51 | 0.79 | 64.29
M-NEG: 26.08 | 99.89 | 26.11
M-TMP: 7.16 | 16.80 | 42.62
M-MNR: 5.50 | 7.36 | 74.71
M-ADV: 3.30 | 13.53 | 24.38
M-LOC: 1.01 | 3.72 | 27.27
M-EXT: 0.45 | 0.56 | 80.00
M-DIR: 0.25 | 1.07 | 23.68
M-PNC: 1.49 | 2.42 | 61.63
M-DIS: 0.28 | 7.81 | 3.61
M-CAU: 0.11 | 2.88 | 3.92
Table 1: Analysis of PB-FOC: overall percentages of foci per role, percentages of negated verbs having each role, and percentage of each role being the focus.

Table 1 presents basic statistics for PB-FOC. ARG1 is the most frequent role to be the focus (43.76%) followed by M-NEG (26.08%) and a relatively long list of infrequent roles (ARG0, ARG2, M-TMP, M-MNR: 4.09–7.16%). More interestingly, the last two columns in Table 1 indicate (a) how often a negated verb has each semantic role, and (b) how often a role of a negated verb is the focus—if a negated verb-argument structure does not have a particular role, that role obviously cannot be the focus. These percentages reveal that role presence does not uniquely identify foci, but some semantic roles, although infrequent overall, are likely to be the focus if present (M-EXT: 80.00%, M-MNR: 74.71%, ARG4: 64.29%, M-PNC: 61.63%). Other corpora annotating the focus in English redefine the annotation guidelines (Anand and Martell, 2012), use dependency trees instead of roles (Sarabi and Blanco, 2016), target non-verbal negations (Sarabi and Blanco, 2017), and work with tutorial dialogues (Banjade and Rus, 2016).
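Because each PB-FOC focus is exactly one semantic role of the negated verb, a role-level annotation maps deterministically to the token-level F / N_F labels used by the sequence-labeling formulation later in the paper (Section 4). The sketch below illustrates that mapping; the instance format is illustrative, not the corpus distribution format.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class NegationInstance:
    tokens: List[str]                      # sentence containing the negation
    roles: Dict[str, Tuple[int, int]]      # role label -> (start, end) token span
    focus_role: str                        # the role annotated as focus

def token_labels(inst: NegationInstance) -> List[str]:
    """Map the role-level focus annotation to per-token labels F / N_F."""
    labels = ["N_F"] * len(inst.tokens)
    start, end = inst.roles[inst.focus_role]
    for i in range(start, end):
        labels[i] = "F"
    return labels

# Example (roles simplified): "A decision is n't expected until some time next year"
inst = NegationInstance(
    tokens="A decision is n't expected until some time next year".split(),
    roles={"ARG1": (0, 2), "M-NEG": (3, 4), "V": (4, 5), "M-TMP": (5, 10)},
    focus_role="M-TMP",
)
print(token_labels(inst))
```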
3 Previous Work
In addition to identifying negation cues and resolving the scope and focus of negation, there is work showing that processing negation is important for natural language understanding in general. In particular, sentiment analysis benefits from processing negation (Wiegand et al., 2010). For example, like generally carries positive sentiment, but not when modified by a negation cue (e.g., don't like). Wilson et al. (2005) introduce the idea of contextual polarity, and note that negation may intensify rather than change polarity (e.g., not good vs. not only good but amazing). Jia et al. (2009) present a set of heuristic rules to determine sentiment when negation is present, and Councill et al. (2010) show that information about the scope of negation is beneficial to predict sentiment. Outside sentiment analysis, Bentivogli et al. (2016) point out that neural machine translation struggles translating negation, and point to focus detection as a possible solution.

Neural networks are hard to interpret, but there is evidence that they learn to process negation—to a certain degree—when trained for sentiment analysis. Li et al. (2016) visually show that neural networks are capable of meaning composition in the presence of, among others, negation and intensification. Wang et al. (2015) show that an LSTM architecture is capable of determining sentiment of sequences containing negation such as not good and not bad. These previous works train a model for a particular task (i.e., sentiment analysis) and then investigate whether the model learnt anything related to negation that is useful for that task. Unlike them, we target focus of negation detection—and the resulting affirmative alternatives—and work with task-independent negations.

Scope Identification. Compared to focus identification, scope identification has received substantially more attention. The first proposals (Morante and Daelemans, 2009) were trained in the biomedical domain with BioScope (Szarvas et al., 2008). The *SEM-2012 Shared Task (Morante and Blanco, 2012) included scope identification with CD-SCO (Section 2), and the winner proposed an SVM-based ranking of syntactic constituents to identify the scope (Read et al., 2012). More recently, Fancellu et al. (2016) present neural networks for this task, and Packard et al. (2014) present a complementary approach that operates over semantic representations obtained with an off-the-shelf parser. Finally, Fancellu et al. (2017) present an error analysis showing that scope is much easier to identify when delimited by punctuation. In this paper, we use a scope detector trained with CD-SCO to predict the focus of negation. While we only incorporate small modifications to previously proposed architectures, our scope detector outperforms previous work (Section 4).

Focus Identification. Although focus is part of the scope, state-of-the-art approaches to identify the focus of negation ignore information about scope. Possible reasons are that (a) existing corpora annotating scope and focus contain substantially different texts (Section 2), and (b) incorporating scope information is not straightforward with traditional machine learning and manually defined features. The initial proposals obtain modest results and only consider the sentence containing the negation (Blanco and Moldovan, 2011), including scope information in a rule-based system (Rosenberg and Bergler, 2012). Zou et al. (2014, 2015) propose
graph-based models that incorporate discourse information and obtain improvements over previous works. In addition, Shen et al. (2019) present a neural model that leverages word-level and topic-level attention mechanisms to utilize contextual information. We compare our results and theirs in Section 4.2. In this paper, we show that (a) neural networks considering the scope of negation obtain the best results to date and (b) context is not beneficial if scope is available (Section 4).

[Figure 1 diagram: each token of the current sentence is represented by its word embedding (ELMo), SRL embedding, negated-verb embedding, and scope embedding, and fed to a 3-layer BiLSTM with a CRF layer that outputs F / N_F labels; 2-layer BiLSTMs with attention weights ap and an encode the previous and next sentences.]
Figure 1: Neural network to predict the focus of negation. The core of the architecture (NN, all components except those inside dotted shapes) takes as input the sentence containing the negation, and each word is represented with its word embedding and specialized embeddings for the negated verb and semantic roles. The additional components inside dotted shapes incorporate information about (a) the scope and (b) context (previous and next sentences).

4 Predicting the Focus of Negation
We approach the task of predicting focus of negation as a sequence labeling task with a neural network. We first describe the network architecture, and then present quantitative results. Section 5 presents a detailed error and qualitative analysis.

4.1 Neural Network Architecture
The network architecture (Fig. 1) consists of a base NN (all components except those inside dotted shapes) plus additional components to include information about the scope and context of negation.

Base NN. The base network is inspired by Huang et al. (2015) and Reimers and Gurevych (2017). It is a 3-layer Bidirectional Long Short-Term Memory (BiLSTM) network with a Conditional Random Field (CRF) layer. The network takes as input the sentence containing the negation whose focus is to be predicted, where each word is represented with the concatenation of (a) its pre-trained ELMo embedding (Peters et al., 2018), (b) a specialized embedding indicating whether a token is the negated verb (not the negation cue), and (c) a specialized embedding indicating semantic roles (one per role label). The specialized embeddings are trained from scratch as part of the tuning of the network.

Scope Information. We add an extra input at the token level indicating whether a token belongs to the scope of the negation whose focus is to be predicted. This new input is then mapped to a third specialized embedding (two values: inside or outside the scope), and concatenated to the word representation prior to feeding it to the 3-layer BiLSTM. Scope information is taken from a scope detector inspired by Fancellu et al. (2016). Our modifications are as follows. First, we add a CRF layer on top of the 2-layer BiLSTM. Second, we use GloVe embeddings instead of word2vec embeddings. We train the scope detector with CD-SCO (Section 3), and our simple modifications yield the best results to date predicting the scope of negation: 79.41 F1 (vs. 77.77 F1). We do not elaborate on the scope detector as we only leverage it to predict focus.
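For concreteness, below is a minimal PyTorch sketch of the token representation and BiLSTM encoder just described. The CRF layer and the context component are omitted; feat_dim and the toy inputs are illustrative assumptions, while the hidden size and dropout follow the hyperparameters reported below.

```python
import torch
import torch.nn as nn

class FocusEncoder(nn.Module):
    """Sketch of the base encoder: ELMo vector + negated-verb, semantic-role,
    and scope embeddings, concatenated and fed to a 3-layer BiLSTM. The CRF
    layer and the context-attention component of the full model are omitted."""

    def __init__(self, n_roles, elmo_dim=1024, feat_dim=50, hidden=350):
        super().__init__()
        self.verb_emb = nn.Embedding(2, feat_dim)        # token is / is not the negated verb
        self.role_emb = nn.Embedding(n_roles, feat_dim)  # one entry per role label
        self.scope_emb = nn.Embedding(2, feat_dim)       # inside / outside the predicted scope
        self.bilstm = nn.LSTM(elmo_dim + 3 * feat_dim, hidden, num_layers=3,
                              bidirectional=True, batch_first=True, dropout=0.6)
        self.proj = nn.Linear(2 * hidden, 2)             # emission scores for F / N_F

    def forward(self, elmo_vecs, verb_flags, role_ids, scope_flags):
        x = torch.cat([elmo_vecs,
                       self.verb_emb(verb_flags),
                       self.role_emb(role_ids),
                       self.scope_emb(scope_flags)], dim=-1)
        h, _ = self.bilstm(x)
        return self.proj(h)   # fed to a CRF layer in the full model

# Toy forward pass: one sentence of five tokens with pre-computed ELMo vectors.
enc = FocusEncoder(n_roles=20)
scores = enc(torch.randn(1, 5, 1024),
             torch.tensor([[0, 0, 1, 0, 0]]),
             torch.tensor([[1, 1, 2, 3, 3]]),
             torch.tensor([[1, 1, 1, 0, 0]]))
print(scores.shape)  # torch.Size([1, 5, 2])
```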
Context. We also experiment with an additional component to add contextual information (previous and next sentences), as previous work has shown empirically that doing so is beneficial (Zou et al., 2014). While we tried many strategies (e.g., concatenating sentence embeddings to the representations from the 3-layer BiLSTM), we present only the one yielding the best results. Specifically, we use 2-layer Bi-LSTMs with an attention mechanism (Bahdanau et al., 2014; Yang et al., 2016). The attention weights (ap and an for the previous and next sentences respectively) are concatenated to the representations from the 3-layer BiLSTM.

Hyperparameters and Training Details. The cell states of all BiLSTMs have size 350 and we use dropout with a ratio of 0.6. We use the stochastic gradient descent algorithm with Adam optimizer (Kingma and Ba, 2014) and a learning rate of 0.001 for tuning weights. We set batch size to 24 and stop the training process after the F1 on the development split does not increase for 50 epochs. The final model is the one which yields the highest F1 on the development split. We combined the original train and development splits from PB-FOC and used 95% of the result as training split and the remaining 5% as development split. The implementation uses PyTorch (Paszke et al., 2019).1 We refer the readers to the supplemental material for additional details on the neural architecture.

1Code available at https://github.com/mosharafhossain/focusof-negation

4.2 Quantitative Analysis
Table 2 presents the results obtained with the *SEM Shared Task test split and evaluation script. Our best network architecture (NN + Scope) outperforms all previous works (Accuracy: +5.2, 7.4%). Not all components of the architecture we experiment with are beneficial. Our main finding is that scope information, as predicted by a scope detector trained on CD-SCO, is very useful. Indeed, the core of the network (3-layer BiLSTM and CRF layer) obtains 75.81 F1 (vs. 71.88) when the input includes scope information. Disabling other specialized embeddings—indicating the negated verb and semantic roles—results in substantial drops in performance (not shown in Table 2).

System: P | R | F1 | Acc
Zou et al. (2014): 71.67 | 67.43 | 69.49 | 67.1
Zou et al. (2015): n/a | n/a | n/a | 69.4
Shen et al. (2019): n/a | n/a | n/a | 70.5
NN (baseline): 72.14 | 71.63 | 71.88 | 71.6
NN + S: 75.92 | 75.7 | 75.81 | 75.7
NN + Cntxt: 73.69 | 73.17 | 73.43 | 73.2
NN + S + Cntxt: 74.15 | 73.74 | 73.94 | 73.7
Table 2: Focus prediction results of the best performing previous works and our neural network (baseline network and adding components). S and Cntxt refer to Scope and Context, respectively. Note that Zou et al. (2014) do not report the accuracy of their model, but they do in their follow-up work (Zou et al., 2015).

Role: % insts. | P | R | F1
ARG0: 4.07 | 92.9 | 44.8 | 60.5
ARG1: 43.82 | 77.9 | 90.4 | 83.7
ARG2: 4.92 | 62.0 | 88.6 | 72.9
ARG3: 0.42 | 16.7 | 33.3 | 22.2
ARG4: 0.56 | 60.0 | 75.0 | 66.7
M-NEG: 25.98 | 83.8 | 50.3 | 62.8
M-TMP: 7.16 | 71.2 | 92.2 | 80.3
M-MNR: 5.76 | 88.6 | 95.1 | 91.8
M-ADV: 3.09 | 85.0 | 77.3 | 81.0
M-LOC: 1.12 | 60.0 | 75.0 | 66.7
M-EXT: 0.84 | 100.0 | 100.0 | 100.0
M-DIR: 0.28 | 50.0 | 100.0 | 66.7
M-PNC: 1.69 | 84.6 | 91.7 | 88.0
M-DIS: 0.14 | 100.0 | 100.0 | 100.0
M-CAU: 0.14 | 0.0 | 0.0 | 0.0
Table 3: Results per role with our best system (NN + Scope, Figure 1). % insts. indicates the percentage of foci per role in the test set.

According to the creators of PB-FOC and more recent work (Zou et al., 2014, 2015), context is important to determine the focus of negation. Our results confirm this observation: adding the previous and next sentences via attention mechanisms improves the results: 73.43 vs. 71.88 F1. Our results also show, however, that the scope of negation—not previously considered—is more beneficial than context.
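The context component whose contribution is measured in Table 2 is described in Section 4.1 as 2-layer BiLSTMs with additive attention over the previous and next sentences. The sketch below shows one plausible instantiation: it summarizes each context sentence into an attended vector (e.g., ap); the exact dimensions and the way the vector is combined with the token representations are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ContextAttention(nn.Module):
    """Encode one context sentence (previous or next) with a 2-layer BiLSTM
    and summarize it with additive attention into a single vector (e.g., ap)."""

    def __init__(self, in_dim=1024, hidden=350):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.score = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                   nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, context_vecs):             # (batch, ctx_len, in_dim)
        h, _ = self.bilstm(context_vecs)         # (batch, ctx_len, 2*hidden)
        alpha = torch.softmax(self.score(h), dim=1)
        return (alpha * h).sum(dim=1)            # (batch, 2*hidden)

# The resulting vectors (ap for the previous sentence, an for the next) would
# be combined with the 3-layer BiLSTM outputs over the current sentence.
attn = ContextAttention()
ap = attn(torch.randn(1, 12, 1024))
print(ap.shape)  # torch.Size([1, 700])
```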
As a matter of fact, adding context is detrimental if scope is taken into account. Table 3 presents the results of the best system (NN + Scope) per role. We observe that all roles obtain relatively high F1 scores (>60.5) with two exceptions: ARG3 (22.2) and M-CAU (0.0). Many roles are rarely the focus (≤5%: ARG0, ARG2, ARG3, ARG4, etc.), yet the F1 scores with those roles are similar or even higher than more frequent roles (e.g., ARG1). In other words, the neural model is able to predict the focus with similar F1 scores, regardless of what role is the focus.

In Table 4, we provide a quantitative analysis of the results obtained with the best system (NN + Scope). We split the test set into four categories and subcategories, and then evaluate the test instances that fall into each subcategory. Specifically, we consider the focus length measured in tokens, the sentence length measured in tokens, the number of roles in the verb-argument structure of the negated verb (intuitively, the more roles to choose from, the harder to predict the right one), and the verb class of the negated verb. We obtained verb classes from the lexical files in WordNet (Miller, 1995).

Category, subcategory: % insts. | P | R | F1
focus length, 1: 39.47 | 85.2 | 61.2 | 66.0
focus length, 2–5: 33.85 | 92.2 | 85.5 | 87.7
focus length, 6–15: 21.91 | 95.3 | 93.6 | 93.7
focus length, >15: 4.78 | 89.7 | 82.4 | 84.1
sent. length, 5–10: 7.44 | 82.6 | 73.6 | 74.1
sent. length, 11–15: 10.39 | 88.0 | 85.1 | 85.5
sent. length, 16–30: 46.63 | 79.8 | 77.7 | 76.7
sent. length, >30: 35.53 | 77.9 | 75.9 | 75.1
#roles, 2, 3 roles: 10.81 | 90.3 | 89.6 | 89.7
#roles, 4 roles: 35.25 | 80.4 | 79.3 | 77.2
#roles, 5 roles: 37.50 | 77.8 | 77.5 | 76.1
#roles, >5 roles: 16.43 | 72.9 | 65.8 | 64.7
verb class, possession: 17.70 | 75.1 | 73.0 | 71.0
verb class, commun.: 14.04 | 80.0 | 80.0 | 79.7
verb class, cognition: 12.36 | 88.9 | 85.2 | 81.8
verb class, social: 10.81 | 77.2 | 75.3 | 74.3
Table 4: Quantitative analysis of the results in the test set. We measure focus and sentence lengths in tokens. We provide weighted averages per label, thus the F1 scores may not fall between P and R.

Regarding focus length, we observe that single-word foci are the hardest followed by long foci (over 15 tokens). This leads to the conclusion that the network struggles to represent single words and long sequences of words. We note that many foci are single words (39.47%) despite this subcategory obtaining the worst results (F1: 66.0). Regarding sentence length, we observe comparable F1 scores (74.1–76.7) except with sentences between 11 and 15 tokens (85.5). These results lead to the conclusion that since the focus prediction task is defined at the semantic role level, role length is more important than sentence length. Unsurprisingly, the model obtains worse results depending on the number of roles in the verb-argument structure of the negated verb—effectively, the model suffers when it has more roles to choose from. Negated verbs with up to three roles obtain the highest F1 scores (89.7), and results drop significantly (64.7) when there are more than 5 roles (only 16.43% of instances). Finally, we provide detailed results for the verbs belonging to the most frequent verb classes: possession (buy, take, get, etc.), communication (say, allege, etc.), cognition (think, believe, imagine, etc.), and social (meet, party, etc.). Communication and cognition verbs obtain the best results; this is due in part to the fact that verbs belonging to those verb classes tend to have fewer semantic roles.

5 Error and Qualitative Analysis
To better understand the strengths and weaknesses of our models, we perform a detailed qualitative analysis of the errors made in predicting focus.
Negation is a complex semantic phenomenon which interacts with other aspects of the meaning and structure of sentences, and this complexity is reflected in the diversity of errors. We perform the analysis over all 712 negations in the test set, investigating how linguistic properties of the negated sentences influence performance across the four models (baseline, scope, context, and combined); we consider nearly 3,000 predictions in total. The counts in this section reflect instance-model pairings; it could happen, for example, that three of the four models predict the wrong focus for a sentence with a particular linguistic property. For some sentences, multiple error types are relevant.

We identify three broad categories of errors: syntactic (5.1), semantic (5.2), and other (5.3). There are multiple error types within each category, and each error type is associated with a particular linguistic property of the negated sentence. Here we focus on the most frequently-occurring error types per category, as these offer the greatest insight into specific strengths and weaknesses of the models. The distribution of error categories across the four models is shown in Table 8 and discussed in more detail below (5.4). Representative examples from PB-FOC for each error type appear in Tables 5, 6, and 7. For each example, we show the full sentence, with predicted scope (as output by the scope detector trained with CD-SCO) between double angle brackets and semantic roles in square brackets. For each negated sentence, the table shows the gold focus (GF)2 and the predicted focus (PF), along with the model(s) responsible for the incorrect prediction.

2Gold focus annotations come from the PB-FOC corpus and may include some errors. Some properties of the PB-FOC annotations are discussed in Section 2.

5.1 Syntactic Error Types
Our analysis reveals three prominent error types related to the structure of negated sentences.

1. Complex verb errors occur when the target verb is part of a complex verb constellation, due to passivization, complex tense constructions, or modal constructions. These constructions result in multi-word verb constellations, such as can't be cured in example 1.1 (Table 5). These are challenging for all models, but especially for the baseline, with 56 error cases (vs. 36, 43, and 41 for the scope, context, and combined models).

Syntactic Error Type: Examples from PB-FOC
1.1. Complex verb: There is [nothing]ARG2 wrong with the market ≪[that]ARG2 [ca]M-MOD n't be [cured]verb [by a little coherence and common sense in Washington.]ARG3 ≫ GF: [nothing ... that]ARG2 (all models) → PF: [a little coherence and common sense in Washington]ARG3
1.2. Complex sentence: Since production costs were guaranteed, it didn't matter that ≪[a program]ARG1 [could]M-MOD n't be [sold]verb [abroad]M-LOC or put into syndication, ≫[as most American programs are.]M-ADV GF: [abroad]M-LOC (NN + context) → PF: [as most American programs are]M-ADV
1.3. Role adjacency: It was an overreaction to [an event (the failure of a management and union group to get bank financing for a takeover of UAL) that]ARG0 ≪doesn't [mean]verb [that much]ARG1 [to lots of stocks.]M-MNR ≫ GF: [that much]ARG1 (NN + scope + context) → PF: [much]ARG1 [to lots of stocks]M-MNR
Table 5: Syntactic error types. KEY: [semantic role], ≪predicted scope≫, GF: gold focus, PF: predicted focus.

2. Complex sentence structure errors are even more common, with 116/73/87/63 occurrences for the four models.
Instances triggering this error type are sentences with relative clauses or complement clauses, as well as sentences with non-canonical linking between argument structure and grammatical function, such as passives and questions. According to Horn (2010), relative and complement clauses can alter the behavior of negation, compared to simple declarative sentences. Example 1.2 in Table 5 shows scope helping with complex sentence structure—both models which incorporate scope predict the correct focus, which occurs within the predicted scope. The other two models choose an argument outside of the predicted scope. Our third type of syntactic error occurs due to 3. Role adjacency in the sentence, leading to errors in span prediction. The property associated with this error type is linear adjacency of semantic roles, with no textual material in between. Example 1.3 in Table 5 shows that the model predicts part of the correct role but then extends the span to incorporate a second role. In summary, models with access to predicted scope make fewer syntactic errors than models without scope. 5.2 Semantic Error Types Three different types of errors related to meaning occur with high frequency. 1. Errors due to distractors are the most frequent individual error type. The term distractor is most familiar from pedagogical discussion of multiple-choice questions, where a distractor is an incorrect option that test-takers are likely to mistake for a correct answer. We use the term here to refer to textual material which leads the neural network away from the gold focus. Specifically, distractors are found in two aspects of the input representation for a given instance: the predicted scope, and the adjacent sentences (previous and next) provided as part of the models which incorporate context. This error type is, by definition, not applicable for the baseline model. We identify 124 occurrences of distractor errors for the scope model, 87 for the context model, and 130 for the combined model, making this the largest error category. Example 2.1 in Table 6 marks distractors in bold-face type. In this case, all models predict after the last crash as the focus.3 The predicted focus occurs in the predicted scope, and the head noun crash appears in the surrounding context. In addition to the direct repetition the 1987 crash in the sentence following, we see the synonym market plunge in the previous sentence. 2. Lack of referential specificity in the gold focus is a less-frequent and more speculative error type. The idea is that focus is difficult to predict correctly when the focused semantic role is pronominal or otherwise requires additional information for reference resolution. Across the models, we count 22 occurrences. In most of these cases, the gold focus is a pronoun (it, ex. 2.2). All models seem to 3An argument could be made for M-NEG as the negated role; however, we show the gold focus according to PB-FOC. 8396 Semantic Error Type Examples from PB-FOC 2.1. Distractors A further slide also would resurrect debate over a host of [other, more sweeping changes]ARG1 proposed – but [not]M-NEG ≪[implemented]verb [after the last crash.]M-TMP ≫ GF: [other, more sweeping changes]ARG1 (all models) −−−−−−→ PF: [after the last crash]M-TMP PrevSent: A deeper market plunge today could give them ... NextSent: Most notably, several of the regulatory steps recommended by the Brady Task Force, which analyzed the 1987 crash, would be revived ... 2.2. 
Lack of specificity The main advantage of a convertible mortgage is that ≪[it]ARG0 is ≫not a sale and [therefore]M-DIS ≪does not [trigger]verb [costly transfer taxes and reappraisal.]ARG1 ≫ GF: [it]ARG0 (all models) −−−−−−→ PF: [costly transfer taxes and reappraisal]ARG1 2.3. Neg. Polarity Items (NPIs) [And]M-DIS [unlike IBM’s water-cooled mainframes]M-ADV, ≪[it]ARG0 doesn’t [need]verb [any plumbing.]ARG1 ≫ GF: [it]ARG0 (all models) −−−−−−→ PF: [any plumbing]ARG1 Table 6: Semantic error types. KEY: [semantic role], ≪predicted scope≫, GF: gold focus, PF: predicted focus. Other Error Types Examples from PB-FOC 3.1. Quotations [No,]M-DIS to my mind, ≪[the Journal]ARG0 did not [“defend]verb [sleaze,]ARG1 ≫ [fraud, waste, embezzlement, influence-peddling and abuse of the public trust.”]ARG1 GF: [not]M-NEG (all models)−−−−−−→ PF: [sleaze, fraud, waste, ... public trust]ARG1 3.2. Particle verbs, PPs and inf. complements [But]M-DIS ≪don’t [pay]verb [30 times earnings]ARG1 [for a company that’s expected to grow at 15% a year.]ARG3 ≫ GF: [for a company that’s expected to grow at 15% a year]ARG3 (all models)−−−−−−→ PF: [30 times earnings]ARG1 Table 7: Other error types. KEY: [semantic role], ≪predicted scope≫, GF: gold focus, PF: predicted focus. disprefer predicting bare pronouns as focus. Occurrence of 3. negative polarity items (NPIs) also influences the accuracy of the model. Negative polarity items (such as any or yet, see Horn (2010)) are licensed in the scope of negation but ungrammatical elsewhere. For example, it’s ungrammatical to say *I have eaten any fish. Given the strong association between negation and NPIs, it is not surprising that our models tend to predict as focus any role which contains an NPI (example 2.3). This error type occurs roughly twice as often in models with scope than in models without scope. 5.3 Other Error Types. Two other error types occur often enough to deserve mention. 1. Quotation errors generally involve quoted direct speech, which seems to be especially problematic when only part of a clause is quoted speech. In example 3.1, the quoted speech is the verb plus its direct object, and all models select the role of the direct object as predicted focus. The final error type is a sort of catch-all: 2. Particle verbs, prepositional phrases, and infinitival complements. As with complex sentence structures, these error types reflect complex verbal argument structure. 5.4 Discussion Table 8 shows the distribution of error types across the four systems. Errors due to particular syntactic structures are the most common, with the subtype of complex sentences making up the bulk of these (339).4 The baseline network deals very poorly with both complex verb constellations and complex sentence structures, and incorporating predicted scope consistently reduces the number of errors 4An error count is incremented whenever the relevant linguistic property is identified in a sentence for which the relevant system has made an incorrect prediction. Note that one sentence may present more than one linguistic property. 8397 Synt. Sem. Other n 593 407 164 NN (baseline) 32.2 3.0 25.6 NN + Scope 21.3 35.4 24.4 NN + Context 25.6 24.3 25.6 NN + Scope + Context 20.9 37.3 24.4 Total 100.0 100.0 100.0 Table 8: Number of errors made by all systems per category (n), and percentage made by each system. Synt. Sem. Other NN (baseline) 78.6 4.5 16.9 NN + Scope 40.5 46.6 12.9 NN + Context 51.7 33.9 14.4 NN + Scope + Ctx 39.4 47.9 12.7 Table 9: Percentages per error category, for each system. 
of this type. This suggests that considering scope helps the system to deal with complex sentences. For errors related to semantics, the picture is reversed. The systems which consider scope are especially prone to distractor errors, the most common error type over all (341). When we have both scope and context, the system has even more potential distractor candidates and makes more errors. The two error types in the Other category are distributed roughly evenly across the models, suggesting that none of the current models is any better than the others at dealing with these error types. In Table 9 we see a second view on the error distributions, now considering each category as a proportion of the errors made by the system. Again we see that predicted scope shifts the balance of error types from syntactic to semantic. By reinforcing a subsection of the text in the input representation, the search space for complex sentences narrows and the system has a better chance of selecting the correct focus. This same behavior is a disadvantage when the gold focus is not part of the predicted scope, as the scope distracts attention away from other plausible candidate roles. Similarly, including context through adjacent sentences sometimes reinforces the correct focus through introduction of other semantically-related terms, and sometimes clutters the field through the very same mechanism. 6 Conclusions Negation is generally understood to carry positive meaning, or in other words, to suggest affirmative alternatives. Predicting the focus of negation (i.e., pinpointing the usually few tokens that are actually negated) is key to revealing affirmative alternatives. In this paper, we have presented a neural architecture to predict the focus of negation. We work with PB-FOC, a corpus of verbal negations (i.e., when a negation cue grammatically modifies a verb) in which one semantic role is annotated as focus. Experimental results show that incorporating scope of negation information yields better results, despite the fact that we train the scope detector with data in a different domain (short stories vs. news). These results suggest that scope of negation transfers across domains. Our best model (NN + Scope) obtains the best focus prediction results to date. A quantitative analysis shows that this model is robust across most role labels (Table 3), sentence lengths, and verb classes (Table 4). The model obtains worse results, however, when the role that is the focus is only one token, or the negated verb has more than 5 roles (Table 4). In addition to state-of-the-art results, we have presented a detailed qualitative analysis. We discover three main error categories (syntactic, semantic, and other) and 8 error types after manual analysis of the predictions made by the four models with all test instances. We draw two main insights from the qualitative analysis. First, including scope information solves many syntactic errors but introduces semantic errors (recall that scope information is beneficial from a quantitative point of view). Second, the lower results after including context, at least with the current architecture, are largely due to additional semantic errors via distractors in the previous and next sentences. Acknowledgements Thanks to the anonymous reviewers for their insightful comments. This material is based upon work supported by the NSF under Grant No. 1845757. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. The Titan Xp used for this research was donated by the NVIDIA Corporation. Computational resources were also provided by the UNT office of High-Performance Computing. References Pranav Anand and Craig Martell. 2012. Annotating the focus of negation in terms of questions under 8398 discussion. In Proceedings of the Workshop on ExtraPropositional Aspects of Meaning in Computational Linguistics, pages 65–69, Jeju, Republic of Korea. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Rajendra Banjade and Vasile Rus. 2016. Dt-neg: Tutorial dialogues annotated for negation scope and focus in context. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrasebased machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 257– 267, Austin, Texas. Association for Computational Linguistics. Eduardo Blanco and Dan Moldovan. 2011. Semantic representation of negation using focus detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 581–589, Portland, Oregon, USA. Association for Computational Linguistics. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL2005), pages 152–164, Ann Arbor, Michigan. Association for Computational Linguistics. Isaac Councill, Ryan McDonald, and Leonid Velikovich. 2010. What’s great and what’s not: learning to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 51–59, Uppsala, Sweden. University of Antwerp. Vesna Djokic, Jean Maillard, Luana Bulat, and Ekaterina Shutova. 2019. Modeling affirmative and negated action processing in the brain with lexical and compositional semantic models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5155–5165, Florence, Italy. Association for Computational Linguistics. Federico Fancellu, Adam Lopez, and Bonnie Webber. 2016. Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 495–504, Berlin, Germany. Association for Computational Linguistics. Federico Fancellu, Adam Lopez, Bonnie Webber, and Hangfeng He. 2017. Detecting negation scope is easy, except when it isn’t. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 58–63, Valencia, Spain. Association for Computational Linguistics. Uri Hasson and Sam Glucksberg. 2006. Does understanding negation entail affirmation?: An examination of negated metaphors. Journal of Pragmatics, 38(7):1015–1032. Laurence R Horn, editor. 2010. The expression of negation. Mouton de Gruyter. Laurence R. Horn and Heinrich Wansing. 
2017. Negation. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, spring 2017 edition. Metaphysics Research Lab, Stanford University. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Rodney D. Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press. Lifeng Jia, Clement Yu, and Weiyi Meng. 2009. The effect of negation on sentiment analysis and retrieval effectiveness. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 1827–1830. ACM. Salud Mar´ıa Jim´enez-Zafra, Roser Morante, Mar´ıa Teresa Mart´ın-Valdivia, and L. Alfonso Ure˜na-L´opez. 2020. Corpora annotated with negation: An overview. Computational Linguistics, 46(1):1–52. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Natalia Konstantinova, Sheila C.M. de Sousa, Noa P. Cruz, Manuel J. Ma˜na, Maite Taboada, and Ruslan Mitkov. 2012. A review corpus annotated for negation, speculation and their scope. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 3190–3195, Istanbul, Turkey. European Language Resources Association (ELRA). Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681–691, San Diego, California. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In Proceedings of the First Joint Conference on 8399 Lexical and Computational Semantics (*SEM 2012), pages 265–274, Montr´eal, Canada. Roser Morante and Walter Daelemans. 2009. A metalearning approach to processing the scope of negation. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL2009), pages 21–29, Boulder, Colorado. Association for Computational Linguistics. Roser Morante and Walter Daelemans. 2012. ConanDoyle-neg: Annotation of negation cues and their scope in Conan Doyle stories. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA). Isabel Orenes, David Beltr´an, and Carlos Santamar´ıa. 2014. How negation is understood: Evidence from the visual world paradigm. Journal of Memory and Language, 74:36–45. Woodley Packard, Emily M. Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 69–78, Baltimore, Maryland. Association for Computational Linguistics. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71– 106. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. 
Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Jonathon Read, Erik Velldal, Lilja Øvrelid, and Stephan Oepen. 2012. UiO1: Constituent-based discriminative ranking for negation resolution. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 310– 318, Montr´eal, Canada. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Optimal hyperparameters for deep lstm-networks for sequence labeling tasks. arXiv preprint arXiv:1707.06799. Sabine Rosenberg and Sabine Bergler. 2012. UConcordia: CLaC negation focus detection at *Sem 2012. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 294–300, Montr´eal, Canada. Association for Computational Linguistics. Zahra Sarabi and Eduardo Blanco. 2016. Understanding negation in positive terms using syntactic dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1108–1118, Austin, Texas. Association for Computational Linguistics. Zahra Sarabi and Eduardo Blanco. 2017. If no media were allowed inside the venue, was anybody allowed? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 860–869, Valencia, Spain. Association for Computational Linguistics. Longxiang Shen, Bowei Zou, Yu Hong, Guodong Zhou, Qiaoming Zhu, and AiTi Aw. 2019. Negative focus detection via contextual attention mechanism. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2251– 2261, Hong Kong, China. Association for Computational Linguistics. Gy¨orgy Szarvas, Veronika Vincze, Rich´ard Farkas, and J´anos Csirik. 2008. The bioscope corpus: Annotation for negation, uncertainty and their scope in biomedical texts. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, BioNLP ’08, pages 38–45, Stroudsburg, PA, USA. Association for Computational Linguistics. Ann Taylor, Mitchell Marcus, and Beatrice Santorini. 2003. The Penn Treebank: An Overview, pages 5–22. Springer Netherlands, Dordrecht. Veronika Vincze, Gy¨orgy Szarvas, Rich´ard Farkas, Gy¨orgy M´ora, and J´anos Csirik. 2008. The bioscope corpus: biomedical texts annotated for uncertainty, negation and their scopes. BMC Bioinformatics, 9:S9 – S9. Xin Wang, Yuanchao Liu, Chengjie Sun, Baoxun Wang, and Xiaolong Wang. 2015. Predicting polarities of tweets by composing word embeddings with long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1343–1353, Beijing, China. Association for Computational Linguistics. 
8400 Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr´es Montoyo. 2010. A survey on the role of negation in sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 60–68, Uppsala, Sweden. University of Antwerp. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347–354, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics. Bowei Zou, Guodong Zhou, and Qiaoming Zhu. 2014. Negation focus identification with contextual discourse information. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 522–530, Baltimore, Maryland. Association for Computational Linguistics. Bowei Zou, Guodong Zhou, and Qiaoming Zhu. 2015. Unsupervised negation focus identification with word-topic graph model. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1632–1636, Lisbon, Portugal. Association for Computational Linguistics. A Appendix In this section, we provide additional details on the neural models discussed in this paper. A.1 Details on the Neural Architecture The neural model shown in Figure 1 is our full model. It consists of a base network and additional components indicated with dotted shapes. The additional components incorporate information about the scope of the negation and context (previous and next sentence). In this section, we provide additional information about the input representation and the additional components. Input Representation. As discussed in Section 4, we map each word token to its 1,024-dimensional pre-trained ELMo embedding (Peters et al., 2018). We do not update the ELMo embeddings during the training of the network. Our baseline model leverages two additional embeddings for encoding positional information of the negated verb as well as the semantic role labels of the input tokens. We extract semantic roles from the training, development and test sets in the the CoNLL-2005 Shared Task (Carreras and M`arquez, 2005). The embeddings indicating the negated verb and semantic role labels are trained from scratch along with all the other weights in the full network. We employ an additional embedding to incorporate scope information into the network (Section 4). Like the two additional embeddings described above, the embeddings to indicate scope information are trained from scratch. Figure 2 shows the construction of the input representation. The input sentence is “The carrier has not yet turned a profit.” Etoken (top) denotes the 1,024-dimensional ELMo embeddings of token. The other embedding vectors shown in Figure 2 are to indicate the position of the negated verb, semantic roles and the scope of the negation. 
We have two tags to indicate the negated verb (“Y” when the token is the negated verb and “N” otherwise), two tags to indicate the scope (“I S” when the token is inside the scope and “O S” otherwise), and 15 tags to indicate semantic roles (one per role label). All the embedding weights for each token are concatenated before feeding them into the first BiLSTM layer. The final input dimension per token is 1,474:1,024 from the word token embedding, 50 from the negated verb embedding, 200 from the semantic role embedding, and another 200 from the scope embedding. Note that in the sample sentence shown in Figure 2, all tokens are inside the scope of the negation except the negation cue (negation cues are annotated as outside of the scope in CD-SCO (Morante and Daelemans, 2012)). The scope of a negation, however, can span over all the tokens or a small part of a sentence, or even be discontinuous (Morante and Daelemans, 2012). In the example sentence below, for example, the scope of the negation only spans over the last clause: Mr./O S Paul/O S says/O S he/O S had/O S not/O S one/O S but/O S four/O S advisers/O S and/O S that/O S he/I S never/O S bid/I S impulsively/I S ./O S BiLSTM-Attention Network for Context. To capture contextual information, we add two attention-based recurrent networks, one for the previous sentence and another one for the next sentence. These additional components are shown in 8401 EThe Ecarrier Ehas Enot Eyet Eturned Ea Eprofit EN EN EN EN EN EY EN EN EA0 EA0 EAM-MOD EAM-NEG EAM-TMP EV EA1 EA1 Token Emb. (1024) Negated Verb Emb. (50) SRL Emb. (200) EI_S EI_S EO_S EI_S EI_S EI_S EI_S Scope Emb. (200) EI_S Figure 2: Input representation of our neural model. The symbol ⊕denotes concatenation, not addition. the left and right dotted rectangles in Figure 1. The previous and next sentences in the example shown in Figure 1 are “StatesWest operates four twinengine turboprop aircraft, connecting 10 cities in California, Arizona and Nevada” and “The former president of FirstSouth F.A., a defunct Arkansas thrift, pleaded guilty to conspiring to inflate the institution’s earnings by concealing worthless loan guarantees” respectively. Like in the baseline model, we map each word of the adjacent sentences to its 1,024-dimensional ELMo embedding vector before feeding them into the recurrent network. Each network component consists of a 2-layer Bidirectional LSTM with 50 hidden units. A dropout rate of 30% is applied to the recurrent layers. We add an attention layer on top of the final BiLSTM layer. More specifically, we adopt the word-attention technique proposed by Yang et al. (2016). The attention weights from both networks are concatenated with the final hidden representation of the base 3-layer BiLSTM network (Figure 1). Subsequently, the additional network components are trained with the original BiLSTM network.
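Putting the pieces of this appendix together, the following is a minimal sketch, in PyTorch (which the paper cites), of how the concatenated input representation could be assembled: frozen 1,024-dimensional ELMo vectors plus three embeddings trained from scratch (negated verb: 50, semantic roles: 200, scope: 200), giving 1,474 dimensions per token before the base BiLSTM. The module boundaries, hidden size, and names below are our own illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FocusInputEncoder(nn.Module):
    """Illustrative input layer: frozen ELMo + verb/role/scope embeddings -> base BiLSTM."""

    def __init__(self, num_roles=15, lstm_hidden=200, lstm_layers=3):
        super().__init__()
        self.verb_emb = nn.Embedding(2, 50)       # tags: N=0, Y=1 (negated verb)
        self.role_emb = nn.Embedding(num_roles, 200)
        self.scope_emb = nn.Embedding(2, 200)     # tags: O_S=0, I_S=1
        self.bilstm = nn.LSTM(
            input_size=1024 + 50 + 200 + 200,     # = 1,474 dims per token
            hidden_size=lstm_hidden,              # assumed; not stated for the base network
            num_layers=lstm_layers,               # 3-layer base BiLSTM
            bidirectional=True,
            batch_first=True,
        )

    def forward(self, elmo_vecs, verb_tags, role_tags, scope_tags):
        # elmo_vecs: (batch, seq_len, 1024), precomputed and never updated during training.
        x = torch.cat(
            [elmo_vecs,
             self.verb_emb(verb_tags),
             self.role_emb(role_tags),
             self.scope_emb(scope_tags)],
            dim=-1,
        )
        hidden, _ = self.bilstm(x)                # (batch, seq_len, 2 * lstm_hidden)
        return hidden
```

The two context components (2-layer BiLSTMs with 50 hidden units plus word attention over the previous and next sentences) would consume ELMo vectors of the adjacent sentences in the same way, and their attention outputs would be concatenated with this encoder's final hidden representation as described above.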
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8402–8412 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8402 Structured Tuning for Semantic Role Labeling Tao Li University of Utah [email protected] Parth Anand Jawale University of Colorado [email protected] Martha Palmer University of Colorado [email protected] Vivek Srikumar University of Utah [email protected] Abstract Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores. These improvements are due to expressive input representations, which, at least at the surface, are orthogonal to knowledge-rich constrained decoding mechanisms that helped linear SRL models. Introducing the benefits of structure to inform neural models presents a methodological challenge. In this paper, we present a structured tuning framework to improve models using softened constraints only at training time. Our framework leverages the expressiveness of neural networks and provides supervision with structured loss components. We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints. Additionally, our experiments with smaller training sizes show that we can achieve consistent improvements under low-resource scenarios. 1 Introduction Semantic Role Labeling (SRL, Palmer et al., 2010) is the task of labeling semantic arguments of predicates in sentences to identify who does what to whom. Such representations can come in handy in tasks involving text understanding, such as coreference resolution (Ponzetto and Strube, 2006) and reading comprehension (e.g., Berant et al., 2014; Zhang et al., 2020). This paper focuses on the question of how knowledge can influence modern semantic role labeling models. Linguistic knowledge can help SRL models in several ways. For example, syntax can drive feature design (e.g., Punyakanok et al., 2005; Toutanova et al., 2005; Kshirsagar et al., 2015; Johansson and Nugues, 2008, and others), and can also be embedded into neural network architectures (Strubell et al., 2018). In addition to such influences on input representations, knowledge about the nature of semantic roles can inform structured decoding algorithms used to construct the outputs. The SRL literature is witness to a rich array of techniques for structured inference, including integer linear programs (e.g., Punyakanok et al., 2005, 2008), bespoke inference algorithms (e.g., T¨ackstr¨om et al., 2015), A* decoding (e.g., He et al., 2017), greedy heuristics (e.g., Ouchi et al., 2018), or simple Viterbi decoding to ensure that token tags are BIO-consistent. By virtue of being constrained by the definition of the task, global inference promises semantically meaningful outputs, and could provide valuable signal when models are being trained. However, beyond Viterbi decoding, it may impose prohibitive computational costs, thus ruling out using inference during training. Indeed, optimal inference may be intractable, and inference-driven training may require ignoring certain constraints that render inference difficult. While global inference was a mainstay of SRL models until recently, today’s end-to-end trained neural architectures have shown remarkable successes without needing decoding. These successes can be attributed to the expressive input and internal representations learned by neural networks. 
The only structured component used with such models, if at all, involves sequential dependencies between labels that admit efficient decoding. In this paper, we ask: Can we train neural network models for semantic roles in the presence of general output constraints, without paying the high computational cost of inference? We propose a structured tuning approach that exposes a neural SRL model to differentiable constraints during the finetuning step. To do so, we first write the output space constraints as logic rules. Next, we relax such statements into differentiable forms that serve as regularizers to inform the model at training time. 8403 Finally, during inference, our structure-tuned models are free to make their own judgments about labels without any inference algorithms beyond a simple linear sequence decoder. We evaluate our structured tuning on the CoNLL05 (Carreras and M`arquez, 2005) and CoNLL-12 English SRL (Pradhan et al., 2013) shared task datasets, and show that by learning to comply with declarative constraints, trained models can make more consistent and more accurate predictions. We instantiate our framework on top of a strong baseline system based on the RoBERTa (Liu et al., 2019) encoder, which by itself performs on par with previous best SRL models that are not ensembled. We evaluate the impact of three different types of constraints. Our experiments on the CoNLL-05 data show that our constrained models outperform the baseline system by 0.2 F1 on the WSJ section and 1.2 F1 on the Brown test set. Even with the larger and cleaner CoNLL-12 data, our constrained models show improvements without introducing any additional trainable parameters. Finally, we also evaluate the effectiveness of our approach on low training data scenarios, and show that constraints can be more impactful when we do not have large training sets. In summary, our contributions are: 1. We present a structured tuning framework for SRL which uses soft constraints to improve models without introducing additional trainable parameters.1 2. Our framework outperforms strong baseline systems, and shows especially large improvements in low data regimes. 2 Model & Constraints In this section, we will introduce our structured tuning framework for semantic role labeling. In §2.1, we will briefly cover the baseline system. To that, we will add three constraints, all treated as combinatorial constraints requiring inference algorithms in past work: Unique Core Roles in §2.3, Exclusively Overlapping Roles in §2.4, and Frame Core Roles in §2.5. For each constraint, we will discuss how to use its softened version during training. We should point out that the specific constraints chosen serve as a proof-of-concept for the general methodology of tuning with declarative knowledge. 1Our code to replay our experiments is archived at https: //github.com/utahnlp/structured tuning srl. For simplicity, for all our experiments, we use the ground truth predicates and their senses. 2.1 Baseline We use RoBERTa (Liu et al., 2019) base version to develop our baseline SRL system. The large number of parameters not only allows it to make fast and accurate predictions, but also offers the capacity to learn from the rich output structure, including the constraints from the subsequent sections. Our base system is a standard BIO tagger, briefly outlined below. Given a sentence s, the goal is to assign a label of the form B-X, I-X or O for each word i being an argument with label X for a predicate at word u. 
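For concreteness, a hypothetical proposition under this labeling scheme might look as follows; the sentence and its labels are our own illustration (the roles follow the usual PropBank frame for "sell"), not an example drawn from the datasets used here.

```python
# Illustrative only: BIO argument labels for the given (gold) predicate "sold".
tokens = ["Mary", "sold", "the",  "book", "to",   "John"]
labels = ["B-A0", "O",    "B-A1", "I-A1", "B-A2", "I-A2"]
```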
These unary decisions are scored as follows: e = map(RoBERTa(s)) (1) vu, ai = fv(eu), fa(ei) (2) φu,i = fva([vu, ai]) (3) yu,i = g(φu,i) (4) Here, map converts the wordpiece embeddings e to whole word embeddings by summation, fv and fa are linear transformations of the predicate and argument embeddings respectively, fva is a twolayer ReLU with concatenated inputs, and finally g is a linear layer followed by softmax activation that predicts a probability distribution over labels for each word i when u is a predicate. In addition, we also have a standard first-order sequence model over label sequences for each predicate in the form of a CRF layer that is Viterbi decoded. We use the standard cross-entropy loss to train the model. 2.2 Designing Constraints Before looking at the specifics of individual constraints, let us first look at a broad overview of our methodology. We will see concrete examples in the subsequent sections. Output space constraints serve as prior domain knowledge for the SRL task. We will design our constraints as invariants at the training stage. To do so, we will first define constraints as statements in logic. Then we will systematically relax these Boolean statements into differentiable forms using concepts borrowed from the study of triangular norms (t-norms, Klement et al., 2013). Finally, we will treat these relaxations as regularizers in addition to the standard cross-entropy loss. 8404 All the constraints we consider are conditional statements of the form: ∀x, L(x) →R(x) (5) where the left- and the right-hand sides— L(x), R(x) respectively—can be either disjunctive or conjunctive expressions. The literals that constitute these expressions are associated with classification neurons, i.e., the predicted output probabilities are soft versions of these literals. What we want is that model predictions satisfy our constraints. To teach a model to do so, we transform conditional statements into regularizers, such that during training, the model receives a penalty if the rule is not satisfied for an example.2 To soften logic, we use the conversions shown in Table 1 that combine the product and G¨odel tnorms. We use this combination because it offers cleaner derivatives make learning easier. A similar combination of t-norms was also used in prior work (Minervini and Riedel, 2018). Finally, we will transform the derived losses into log space to be consistent with cross-entropy loss. Li et al. (2019) outlines this relationship between the crossentropy loss and constraint-derived regularizers in more detail. Logic V i ai W i ai ¬a a →b G¨odel min (ai) max (ai) 1 −a – Product Πai – 1 −a min 1, b a  Table 1: Converting logical operations to differentiable forms. For literals inside of L(s) and R(s), we use the G¨odel t-norm. For the top-level conditional statement, we use the product t-norm. Operations not used this paper are marked as ‘–’. 2.3 Unique Core Roles (U) Our first constraint captures the idea that, in a frame, there can be at most one core participant of a given type. Operationally, this means that for every predicate in an input sentence s, there can be no more than one occurrence of each core argument (i.e, Acore = {A0, A1, A2, A3, A4, A5}). In 2Constraint-derived regularizers are dependent on examples, but not necessarily labeled ones. For simplicity, in this paper, we work with sentences from the labeled corpus. However, the methodology described here can be extended to use unlabeled examples as well. 
first-order logic, we have: ∀u, i ∈s, X ∈Acore, BX(u, i) → ^ j∈s,j̸=i ¬BX(u, j) (6) which says, for a predicate u, if a model tags the i-th word as the beginning of the core argument span, then it should not predict that any other token is the beginning of the same label. In the above rule, the literal BX is associated with the predicted probability for the label B-X3. This association is the cornerstone for deriving constraint-driven regularizers. Using the conversion in Table 1 and taking the natural log of the resulting expression, we can convert the implication in (6) as l(u, i, X): max  log BX (u, i) − min j∈s,j̸=i log (1 −BX (u, j))  . Adding up the terms for all tokens and labels, we get the final regularizer LU(s): LU(s) = X (u,i)∈s,X∈Acore l(u, i, X). (7) Our constraint is universally applied to all words and predicates (i.e., i, u respectively) in the given sentence s. Whenever there is a pair of predicted labels for tokens i, j that violate the rule (6), our loss will yield a positive penalty. Error Measurement ρu To measure the violation rate of this constraint, we will report the percentages of propositions that have duplicate core arguments. We will refer to this error rate as ρu. 2.4 Exclusively Overlapping Roles (O) We adopt this constraint from Punyakanok et al. (2008) and related work. In any sentence, an argument for one predicate can either be contained in or entirely outside another argument for any other predicate. We illustrate the intuition of this constraint in Table 2, assuming core argument spans are unique and tags are BIO-consistent. Based on Table 2, we design a constraint that says: if an argument has boundary [i, j], then no other argument span can cross the boundary at j. 3 We will use BX(u, i) to represent both the literal that the token i is labeled with B-X for predicate u and also the probability for this event. We follow a similar convention for the I-X labels. 8405 Token index i · · · j j + 1 [i-j] has label X BX · · · IX ¬IX Not allowed – – BY IY Not allowed ¬BY ∧¬IY – IY IY Table 2: Formalizing the exclusively overlapping role constraint in terms of the B and I literals. For every possible span [i-j] in a sentence, whenever it has a label X for some predicate (first row), token labels as in the subsequent rows are not allowed for any other predicate for any other argument Y. Note that this constraint does not affect the cells marked with a –. This constraint applies to all argument labels in the task, denoted by the set A. ∀u, i, j ∈s such that j > i, and ∀X ∈A, P(u, i, j, X) → ^ v∈s,Y∈A (u,X)̸=(v,Y) Q(v, i, j, Y) (8) where P(u, i, j, X) = BX(u, i) ∧IX(u, j) ∧¬IX(u, j + 1) Q(v, i, j, Y) = Q1(v, i, j, Y) ∧Q2(v, i, j, η) Q1(v, i, j, Y) = ¬BY(v, j) ∨¬IY(v, j + 1) Q2(v, i, j, Y) = BY(v, i) ∨IY(v, i) ∨¬IY(v, j) ∨¬IY(v, j + 1) Here, the term P(u, i, j, X) denotes the indicator for the argument span [i, j] having the label X for a predicate u and corresponds to the first row of Table 2. The terms Q1(v, i, j, Y) and Q2(v, i, j, Y) each correspond to prohibitions of the type described in the second and third rows respectively. As before, the literals BX, etc are relaxed as model probabilities to define the loss. By combining the G¨odel and product t-norms, we translate Rule (8) into: LO(s) = X (u,i,j)∈s j>i,X∈A l(u, i, j, X). 
(9) where, l(u, i, j, X) = max 0, log P(u, i, j, X) − min v∈s,Y∈A (u,X)̸=(v,Y) log Q(v, i, j, Y)  P(u, i, j, X) = min (BX (u, i) , IX (u, j) , 1 −IX (u, j + 1)) Q(v, i, j, Y) = min (Q1(v, i, j, Y), Q2(v, i, j, Y)) Q1(v, i, j, Y) = 1 −min (BY(v, j), IY(v, j + 1)) Q2(v, i, j, Y) = max (BY(v, i), IY(v, i), 1 −IY(v, j), 1 −IY(v, j + 1)) Again, our constraint applies to all predicted probabilities. However, doing so requires scanning over 6 axes defined by (u, v, i, j, X, Y), which is computationally expensive. To get around this, we observe that, since we have a conditional statement, the higher the probability of P(u, i, j, X), the more likely it yields non-zero penalty. These cases are precisely the ones we hope the constraint helps. Thus, for faster training and ease of implementation, we modify Equation 8 by squeezing the (i, j) dimensions using top-k to redefine LO above as: T (u, X) = arg top-k(i,j)∈sP (u, i, j, X) (10) LO(s) = X u∈s,X∈A X (i,j)∈T (v,X) l(u, i, j, X). (11) where T denotes the set of the top-k span boundaries for predicate u and argument label X. This change results in a constraint defined by u, v, X, Y and the k elements of T . Error Measurement ρo We will refer to the error of the overlap constraint as ρo, which describes the total number of non-exclusively overlapped pairs of arguments. In practice, we found that models rarely make such observed mistakes. In §3, we will see that using this constraint during training helps models generalize better with other constraints. In §4, we will analyze the impact of the parameter k in the optimization described above. 2.5 Frame Core Roles (F) The task of semantic role labeling is defined using the PropBank frame definitions. That is, for any predicate lemma of a given sense, PropBank defines which core arguments it can take and what they mean. The definitions allow for natural constraints that can teach models to avoid predicting core arguments outside of the predefined set. ∀u ∈s, k ∈S(u), Sense(u, k) → ^ i∈s X̸∈R(u,k) ¬ (BX(u, i) ∧IX(u, i)) where S(u) denotes the set of senses for a predicate u, and R(u, k) denotes the set of acceptable core arguments when the predicate u has sense k. As noted in §2.2, literals in the above statement can to be associated with classification neurons. Thus the Sense(u, k) corresponds to either model prediction or ground truth. Since our focus is to 8406 validate the approach of using relaxed constraints for SRL, we will use the latter. This constraint can be also converted into regularizer following previous examples, giving us a loss term LF (s). Error Measurement ρf We will use ρf to denote the violation rate. It represents the percentage of propositions that have predicted core arguments outside the role sets of PropBank frames. Loss Our final loss is defined as: LE(s) + λULU(s) + λOLO(s) + λF LF (s) (12) Here, LE(s) is the standard cross entropy loss over the BIO labels, and the λ’s are hyperparameters. 3 Experiments & Results In this section, we study the question: In what scenarios can we inform an end-to-end trained neural model with declarative knowledge? To this end, we experiment with the CoNLL-05 and CoNLL-12 datasets, using standard splits and the official evaluation script for measuring performance. To empirically verify our framework in various data regimes, we consider scenarios ranging from where only limited training data is available, to ones where large amounts of clean data are available. 3.1 Experiment Setup Our baseline (described in §2.1) is based on RoBERTa. 
We used the pre-trained base version released by Wolf et al. (2019). Before the final linear layer, we added a dropout layer (Srivastava et al., 2014) with probability 0.5. To capture the sequential dependencies between labels, we added a standard CRF layer. At testing time, Viterbi decoding with hard transition constraints was employed across all settings. In all experiments, we used the gold predicate and gold frame senses. Model training proceeded in two stages: 1. We use the finetuned the pre-trained RoBERTa model on SRL with only crossentropy loss for 30 epochs with learning rate 3 × 10−5. 2. Then we continued finetuning with the combined loss in Equation 12 for another 5 epochs with a lowered learning rate of 1 × 10−5. During both stages, learning rates were warmed up linearly for the first 10% updates. For fair comparison, we finetuned our baseline twice (as with the constrained models); we found that it consistently outperformed the singly finetuned baseline in terms of both error rates and role F1. We grid-searched the λ’s by incrementally adding regularizers. The combination of λ’s with good balance between F1 and error ρ’s on the dev set were selected for testing. We refer readers to the appendix for the values of λ’s. For models trained on the CoNLL-05 data, we report performance on the dev set, and the WSJ and Brown test sets. For CoNLL-12 models, we report performance on the dev and the test splits. 3.2 Scenario 1: Low Training Data Creating SRL datasets requires expert annotation, which is expensive. While there are some efforts on semi-automatic annotation targeting low-resource languages (e.g., Akbik et al., 2016), achieving high neural network performance with small or unlabeled datasets remains a challenge (e.g., F¨urstenau and Lapata, 2009, 2012; Titov and Klementiev, 2012; Gormley et al., 2014; Abend et al., 2009). In this paper, we study the scenario where we have small amounts of fully labeled training data. We sample 3% of the training data and an equivalent amount of development examples. The same training/dev subsets are used across all models. Table 3 reports the performances of using 3% training data from CoNLL-05 and CoNLL-12 (top and bottom respectively). We compare our strong baseline model with structure-tuned models using all three constraints. Note that for all these evaluations, while we use subsamples of the dev set for model selection, the evaluations are reported using the full dev and test sets. We see that training with constraints greatly improves precision with low training data, while recall reduces. This trade-off is accompanied by a reduction in the violation rates ρu and ρf. As noted in §2.4, models rarely predict label sequences that violate the exclusively overlapping roles constraint. As a result, the error rate ρo (the number of violations) only slightly fluctuates. 3.3 Scenario 2: Large Training Data Table 4 reports the performance of models trained with our framework using the full training set of the CoNLL-05 dataset which consists of 35k sentences with 91k propositions. Again, we compare RoBERTa (twice finetuned) with our structuretuned models. 
We see that the constrained models 8407 CoNLL-05 (3%, 1.1k) Dev P R F1 δF1 ρu ρo ρf RoBERTa2 67.79 72.69 70.15 14.56 23 6.19 +U,F,O 70.40 71.91 71.15 1.0 8.56 20 5.82 WSJ P R F1 δF1 ρu ρo ρf RoBERTa2 70.48 74.96 72.65 13.35 37 NA +U,F,O 72.60 74.13 73.36 0.7 7.46 49 NA Brown P R F1 δF1 ρu ρo ρf RoBERTa2 62.16 66.93 64.45 12.94 6 NA +U,F,O 64.31 65.64 64.97 0.5 5.47 6 NA CoNLL-12 (3%, 2.7k) Dev P R F1 δF1 ρu ρo ρf RoBERTa2 74.39 76.88 75.62 7.43 294 3.23 +U,F,O 75.99 76.80 76.39 0.8 4.37 245 3.01 Test P R F1 δF1 ρu ρo ρf RoBERTa2 74.79 77.17 75.96 6.92 156 2.67 +U,F,O 76.31 76.88 76.59 0.6 4.12 171 2.41 Table 3: Results on low training data (3% of CoNLL05 and CoNLL-12). RoBERTa2: Baseline finetuned twice. U: Unique core roles. F: Frame core roles. O: Exclusively overlapping roles. δF1: improvement over baseline. ρf is marked NA for the CoNLL-05 test results because ground truth sense is unavailable on the CoNLL-05 shared task page. CoNLL-05 (100%, 36k) Dev P R F1 δF1 ρu ρf RoBERTa2 86.74 87.24 86.99 1.97 3.23 +U,F,O 87.24 87.26 87.25 0.3 1.35 2.99 Oracle 0.40 2.34 WSJ P R F1 δF1 ρu ρf RoBERTa2 87.75 87.94 87.85 1.71 NA +U,F,O 88.05 88.00 88.03 0.2 0.85 NA Oracle 0.30 NA Brown P R F1 δF1 ρu ρf RoBERTa2 79.38 78.92 78.64 3.36 NA +U,F,O 80.04 79.56 79.80 1.2 1.24 NA Oracle 0.30 NA Table 4: Results on the full CoNLL-05 data. Oracle: Errors of oracle. ρo is in [0,6] across all settings. consistently outperform baselines on the dev, WSJ, and Brown sets. With all three constraints, the constrained model reaches 88 F1 on the WSJ. It also generalizes well on new domain by outperforming the baseline by 1.2 points on the Brown test set. As in the low training data experiments, we observe improved precision due to the constraints. This suggests that even with large training data, direct label supervision might not be enough for neural models to pick up the rich output space structure. Our framework helps neural networks, even as strong as RoBERTa, to make more correct predictions from differentiable constraints. Surprisingly, the development ground truth has a 2.34% error rate on the frame role constraint, and 0.40% on the unique role constraint. Similar percentages of unique role errors also appear in WSJ and Brown test sets. For ρo, the oracle has no violations on the CoNLL-05 dataset. The exclusively overlapping constraint (i.e. ρo) is omitted as we found models rarely make such prediction errors. After adding constraints, the error rate of our model approached the lower bound. Note that our framework focuses on the learning stage without any specialized decoding algorithms in the prediction phase except the Viterbi algorithm to guarantee that there will be no BIO violations. What about even larger and cleaner data? The ideal scenario, of course, is when we have the luxury of massive and clean data to power neural network training. In Table 5, we present results on CoNLL-12 which is about 3 times as large as CoNLL-05. It consists of 90k sentences and 253k propositions. The dataset is also less noisy with respect to the constraints. For instance, the oracle development set has no violations for both the unique core and the exclusively overlapping constraints. We see that, while adding constraints reduced error rates of ρu and ρf, the improvements on label consistency do not affect F1 much. As a result, our best constrained model performes on a par with the baseline on the dev set, and is slightly better than the baseline (by 0.1) on the test set. 
Thus we believe when we have the luxury of data, learning with constraints would become optional. This observation is in line with recent results in Li and Srikumar (2019) and Li et al. (2019). But is it due to the large data or the strong baseline? To investigate whether the seemingly saturated performance is from data or from the model, we also evaluate our framework on the original BERT (Devlin et al., 2019) which is relatively less powerful. We follow the same model setup for experiments and report the performances in Table 5 and Table 9. We see that compared to RoBERTa, BERT obtains similar F1 gains on the test set, sug8408 gesting performance ceiling is due to the train size. CoNLL-12 (100%, 90k) Dev P R F1 δF1 ρu ρf RoBERTa2 86.62 86.91 86.76 0.86 1.18 +U,F,O 86.60 86.89 86.74 0 0.59 1.04 Oracle 0 0.38 Test P R F1 δF1 ρu ρf RoBERTa2 86.28 86.67 86.47 0.91 0.97 +U,F,O 86.40 86.83 86.61 0.1 0.50 0.93 Oracle 0 0.42 Dev P R F1 δF1 ρu ρf BERT2 85.62 86.22 85.92 1.41 1.12 +U,F,O 85.97 86.38 86.18 0.3 0.78 1.07 Test P R F1 δF1 ρu ρf BERT2 85.52 86.24 85.88 1.32 0.94 +U,F,O 85.82 86.36 86.09 0.2 0.79 0.90 Table 5: Results on CoNLL-12. BERT2: The original BERT finetuned twice. ρo is around 50 across all settings. With the luxury of large and clean data, constrained learning becomes less effective. 4 Ablations & Analysis In §3, we saw that constraints not just improve model performance, but also make outputs more structurally consistent. In this section, we will show the results of an ablation study that adds one constraint at a time. Then, we will examine the sources of improved F-score by looking at individual labels, and also the effect of the top-k relaxation for the constraint O. Furthermore, we will examine the robustness of our method against randomness involved during training. We will end this section with a discussion about the ability of constrained neural models to handle structured outputs. Constraint Ablations We present the ablation analysis on our constraints in Table 6. We see that as models become more constrained, precision improves. Furthermore, one class of constraints do not necessarily reduce the violation rate for the others. Combining all three constraints offers a balance between precision, recall, and constraint violation. One interesting observation that adding the O constraints improve F-scores even though the ρo values were already close to zero. As noted in §2.4, our constraints apply to the predicted scores of all labels for a given argument, while the actual decoded label sequence is just the highest scoring sequence using the Viterbi algorithm. Seen this way, our regularizers increase the decision margins on affected labels. As a result, the model predicts scores that help Viterbi decoding, and, also generalizes better to new domains i.e., the Brown set. CoNLL-05 (100%, 36k) Dev P R F1 ρu ρf RoBERTa2 86.74 87.24 86.99 1.97 3.23 +U 87.21 87.32 87.27 1.29 3.23 +U,F 87.19 87.54 87.37 1.20 3.11 +U,F,O 87.24 87.26 87.25 1.35 2.99 WSJ P R F1 ρu ρf RoBERTa2 87.75 87.94 87.85 1.71 NA +U 87.88 88.01 87.95 1.18 NA +U,F 88.05 88.09 88.07 0.89 NA +U,F,O 88.05 88.00 88.03 0.85 NA Brown P R F1 ρu ρf RoBERTa2 79.38 78.92 78.64 3.36 NA +U 79.36 79.15 79.25 1.74 NA +U,F 79.60 79.24 79.42 1.00 NA +U,F,O 80.04 79.56 79.80 1.24 NA Table 6: Ablation tests on CoNLL-05. Sources of Improvement Table 7 shows labelwise F1 scores for each argument. Under low training data conditions, our constrained models gained improvements primarily from the frequent labels, e.g., A0-A2. 
On CoNLL-05 dataset, we found the location modifier (AM-LOC) posed challenges to our constrained models which significantly performed worse than the baseline. Another challenge is the negation modifier (AM-NEG), where our models underperformed on both datasets, particularly with small training data. When using the CoNLL12 training set, our models performed on par with the baseline even on frequent labels, confirming that the performance of soft-structured learning is nearly saturated on the larger, cleaner dataset. Impact of Top-k Beam Size As noted in §2.4, we used the top-k strategy to implement the constraint O. As a result, there is a certain chance for predicted label sequences to have non-exclusive overlap without our regularizer penalizing them. What we want instead is a good balance between coverage and runtime cost. To this end, we analyze the CoNLL-12 development set using the baseline trained on 3% of CoNLL-12 data. Specifically, we count the examples which have such overlap but the regularization loss is ≤0.001. In Table 8, we 8409 CoNLL-05 3% CoNLL-05 100% CoNLL-12 3% CoNLL-12 100% RoBERTa2 +U,F,O RoBERTa2 +U,F,O RoBERTa2 +U,F,O RoBERTa2 +U,F,O A0 81.28 82.11 93.43 93.52 84.99 85.73 92.78 92.81 A1 72.12 73.59 89.23 89.80 78.36 79.67 89.88 89.75 A2 46.50 47.52 79.53 79.73 68.24 69.20 84.93 84.90 A3 39.58 42.11 81.45 81.86 33.26 34.47 72.96 73.24 A4 51.61 51.56 74.60 75.59 56.29 58.38 80.80 80.33 AM-ADV 44.07 47.56 66.67 66.91 55.26 54.93 66.37 66.92 AM-DIR 16.39 18.92 55.26 55.56 36.51 35.81 64.92 64.95 AM-DIS 71.07 70.84 80.20 80.50 76.35 76.40 82.86 82.71 AM-LOC 53.08 51.60 69.02 66.50 59.74 59.94 72.74 73.21 AM-MNR 44.30 44.18 68.63 69.87 56.14 55.67 70.89 71.13 AM-MOD 91.88 91.60 98.27 98.60 95.50 95.76 97.88 98.04 AM-NEG 91.18 88.35 94.06 93.60 93.29 93.05 95.93 95.83 AM-TMP 74.05 74.13 88.24 88.08 79.00 78.78 87.58 87.56 Overall 70.48 71.55 87.33 87.61 76.66 77.45 87.60 87.58 Table 7: Label-wise F1 scores for the CoNLL-05 and CoNLL-12 development sets. see that k = 4 yields good coverage. k 1 2 4 6 # Ex. 10 8 3 2 Table 8: Impact of k for the top-k strategy, showing the number of missed examples for different k. We set k = 4 across all experiments. Robustness to random initialization We observed that model performance with structured tuning is generally robust to random initialization. As an illustration, we show the performance of models trained on the full CoNLL-12 dataset with different random initializations in Table 9. CoNLL-12 (100%, 90k) Test F1 Seed1 Seed2 Seed3 avg δF1 BERT2 85.88 85.91 86.13 +U,F,O 86.09 86.07 86.19 0.1 Test F1 Seed1 Seed2 Seed3 avg δF1 RoBERTa2 86.47 86.33 86.45 +U,F,O 86.61 86.48 86.57 0.1 Table 9: F1 scores models trained on the CoNLL-12 data with different random seeds. The randomness affects the initialization of the classification layers and the batch ordering during training. Can Constrained Networks Handle Structured Prediction? Larger, cleaner data may presumably be better for training constrained neural models. But it is not that simple. We will approach the above question by looking at how good the transformer models are at dealing with two classes of constraints, namely: 1) structural constraints that rely only on available decisions (constraint U), 2) constraints involving external knowledge (constraint F). For the former, we expected neural models to perform very well since the constraint U represents a simple local pattern. From Tables 4 and 5, we see that the constrained models indeed reduced violations ρu substantially. 
However, when the training data is limited, i.e., comparing CoNLL-05 3% and 100%, the constrained models, while reducing the number of errors, still make many invalid predictions. We conjecture this is because networks learn with constraints mostly by memorization. Thus the ability to generalize learned patterns on unseen examples relies on training size. The constraint F requires external knowledge from the PropBank frames. We see that even with large training data, constrained models were only able to reduce error rate ρf by a small margin. In our development experiments, having larger λF tends to strongly sacrifice argument F1, yet still does not to improve development error rate substantially. Without additional training signal in the form of such background knowledge, constrained inference becomes a necessity, even with strong neural network models. 5 Discussion & Conclusion Semantic Role Labeling & Constraints The SRL task is inherently knowledge rich; the outputs are defined in terms of an external ontology of frames. The work presented here can be gener8410 alized to several different flavors of the task, and indeed, constraints could be used to model the interplay between them. For example, we could revisit the analysis of Yi et al. (2007), who showed that the PropBank A2 label takes on multiple meanings, but by mapping them to VerbNet, they can be disambiguated. Such mappings naturally define constraints that link semantic ontologies. Constraints have long been a cornerstone in the SRL models. Several early linear models for SRL (e.g. Punyakanok et al., 2004, 2008; Surdeanu et al., 2007) modeled inference for PropBank SRL using integer linear programming. Riedel and MezaRuiz (2008) used Markov Logic Networks to learn and predict semantic roles with declarative constraints. The work of (T¨ackstr¨om et al., 2015) showed that certain SRL constraints admit efficient decoding, leading to a neural model that used this framework (FitzGerald et al., 2015). Learning with constraints has also been widely adopted in semisupervised SRL (e.g., F¨urstenau and Lapata, 2012). With the increasing influence of neural networks in NLP, however, the role of declarative constraints seem to have decreased in favor of fully end-toend training (e.g., He et al., 2017; Strubell et al., 2018, and others). In this paper, we show that even in the world of neural networks with contextual embeddings, there is still room for systematically introducing knowledge in the form of constraints, without sacrificing the benefits of end-to-end learning. Structured Losses Chang et al. (2012) and Ganchev et al. (2010) developed models for structured learning with declarative constraints. Our work is in the same spirit of training models that attempts to maintain output consistency. There are some recent works on the design of models and loss functions by relaxing Boolean formulas. Kimmig et al. (2012) used the Łukasiewicz t-norm for probabilistic soft logic. Li and Srikumar (2019) augment the neural network architecture itself using such soft logic. Xu et al. (2018) present a general framework for loss design that does not rely on soft logic. Introducing extra regularization terms to a downstream task have been shown to be beneficial in terms of both output structure consistency and prediction accuracy (e.g., Minervini and Riedel, 2018; Hsu et al., 2018; Mehta et al., 2018; Du et al., 2019; Li et al., 2019). 
Final words In this work, we have presented a framework that seeks to predict structurally consistent outputs without extensive model redesign, or any expensive decoding at prediction time. Our experiments on the semantic role labeling task show that such an approach can be especially helpful in scenarios where we do not have the luxury of massive annotated datasets. Acknowledgements We thank members of the NLP group at the University of Utah for their valuable insights and suggestions; and reviewers for pointers to related works, corrections, and helpful comments. We also acknowledge the support of NSF Cyberlearning1822877, SaTC-1801446, U.S. DARPA KAIROS Program No. FA8750-19-2-1004, DARPA Communicating with Computers DARPA 15-18-CwC-FP032, HDTRA1-16-1-0002, and gifts from Google and NVIDIA. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. References Omri Abend, Roi Reichart, and Ari Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Alan Akbik, Vishwajeet Kumar, and Yunyao Li. 2016. Towards semi-automatic generation of proposition Banks for low-resource languages. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005). 8411 Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012. Structured Learning with Constrained Conditional Models. Machine learning, 88(3):399–431. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Xinya Du, Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen tau Yih, Peter Clark, and Claire Cardie. 2019. Be Consistent! Improving Procedural Text Comprehension using Label Consistency. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 960–970. Hagen F¨urstenau and Mirella Lapata. 2009. Graph alignment for semi-supervised semantic role labeling. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Hagen F¨urstenau and Mirella Lapata. 2012. Semisupervised semantic role labeling via structural alignment. Computational Linguistics, 38(1):135– 171. 
Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. 2010. Posterior Regularization for Structured Latent Variable Models. Journal of Machine Learning Research. Matthew R. Gormley, Margaret Mitchell, Benjamin Van Durme, and Mark Dredze. 2014. Low-resource semantic role labeling. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Richard Johansson and Pierre Nugues. 2008. Dependency-based semantic role labeling of PropBank. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short Introduction to Probabilistic Soft Logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications. Erich Peter Klement, Radko Mesiar, and Endre Pap. 2013. Triangular Norms. Springer Science & Business Media. Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Tao Li, Vivek Gupta, Maitrey Mehta, and Vivek Srikumar. 2019. A logic-driven framework for consistency of neural models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Tao Li and Vivek Srikumar. 2019. Augmenting Neural Networks with First-order Logic. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Sanket Vaibhav Mehta, Jay Yoon Lee, and Jaime Carbonell. 2018. Towards Semi-Supervised Learning for Deep Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Pasquale Minervini and Sebastian Riedel. 2018. Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010. Semantic role labeling. Synthesis Lectures on Human Language Technologies, 3(1):1–103. Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, 8412 Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. 
A Appendices

A.1 Hyperparameters

We show the hyperparameters of λ's in Table 10. We conducted grid search on the combinations of λ's for each setting, and the best one on the development set is selected for reporting.

Model                       Constraints   λU    λO    λF
RoBERTa CoNLL-05 (3%)       +U,F,O        2     0.5   0.5
RoBERTa CoNLL-2012 (3%)     +U,F,O        1     2     1
RoBERTa CoNLL-05 (100%)     +U            1     –     –
RoBERTa CoNLL-05 (100%)     +U,F          1     –     0.5
RoBERTa CoNLL-05 (100%)     +U,F,O        1     0.5   0.1
RoBERTa CoNLL-2012 (100%)   +U,F,O        1     1     0.1
BERT CoNLL-2012 (100%)      +U,F,O        0.5   1     0.1

Table 10: Values of the hyperparameters λ.
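The grid search described above is straightforward to script. The sketch below is only an illustration, assuming a hypothetical train_and_eval helper that trains the constrained model with the given weights and returns development-set F1; the candidate value grids are illustrative, not the ones used in the paper.

import itertools

def train_and_eval(lambda_u, lambda_o, lambda_f):
    # Stand-in for training with the combined loss
    # (task loss + lambda_u * L_U + lambda_o * L_O + lambda_f * L_F)
    # and returning F1 on the development set; replace with the real pipeline.
    return 0.0

# Candidate values for each constraint weight (illustrative only).
grid_u = [0.5, 1, 2]
grid_o = [0.1, 0.5, 1, 2]
grid_f = [0.1, 0.5, 1]

best_f1, best_lambdas = float("-inf"), None
for lam_u, lam_o, lam_f in itertools.product(grid_u, grid_o, grid_f):
    dev_f1 = train_and_eval(lambda_u=lam_u, lambda_o=lam_o, lambda_f=lam_f)
    if dev_f1 > best_f1:
        best_f1, best_lambdas = dev_f1, (lam_u, lam_o, lam_f)

print("Best dev F1:", best_f1, "with (lambda_U, lambda_O, lambda_F) =", best_lambdas)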
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413–8426 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8413 TABERT: Pretraining for Joint Understanding of Textual and Tabular Data Pengcheng Yin∗ Graham Neubig Carnegie Mellon University {pcyin,gneubig}@cs.cmu.edu Wen-tau Yih Sebastian Riedel Facebook AI Research {scottyih,sriedel}@fb.com Abstract Recent years have witnessed the burgeoning of pretrained language models (LMs) for textbased natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TABERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TABERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TABERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WIKITABLEQUESTIONS, while performing competitively on the text-toSQL dataset SPIDER.1 1 Introduction Recent years have witnessed a rapid advance in the ability to understand and answer questions about free-form natural language (NL) text (Rajpurkar et al., 2016), largely due to large-scale, pretrained language models (LMs) like BERT (Devlin et al., 2019). These models allow us to capture the syntax and semantics of text via representations learned in an unsupervised manner, before fine-tuning the model to downstream tasks (Melamud et al., 2016; McCann et al., 2017; Peters et al., 2018; Liu et al., 2019b; Yang et al., 2019; Goldberg, 2019). It is also relatively easy to apply such pretrained LMs to comprehension tasks that are modeled as text span selection problems, where the boundary of an answer span can be predicted using a simple classifier on top of the LM (Joshi et al., 2019). ∗Work done while at Facebook AI Research. 1Available at github.com/facebookresearch/TaBERT However, it is less clear how one could pretrain and fine-tune such models for other QA tasks that involve joint reasoning over both free-form NL text and structured data. One example task is semantic parsing for access to databases (DBs) (Zelle and Mooney, 1996; Berant et al., 2013; Yih et al., 2015), the task of transducing an NL utterance (e.g., “Which country has the largest GDP?”) into a structured query over DB tables (e.g., SQL querying a database of economics). A key challenge in this scenario is understanding the structured schema of DB tables (e.g., the name, data type, and stored values of columns), and more importantly, the alignment between the input text and the schema (e.g., the token “GDP” refers to the Gross Domestic Product column), which is essential for inferring the correct DB query (Berant and Liang, 2014). Neural semantic parsers tailored to this task therefore attempt to learn joint representations of NL utterances and the (semi-)structured schema of DB tables (e.g., representations of its columns or cell values, as in Krishnamurthy et al. (2017); Bogin et al. (2019b); Wang et al. (2019a), inter alia). However, this unique setting poses several challenges in applying pretrained LMs. 
First, information stored in DB tables exhibit strong underlying structure, while existing LMs (e.g., BERT) are solely trained for encoding free-form text. Second, a DB table could potentially have a large number of rows, and naively encoding all of them using a resource-heavy LM is computationally intractable. Finally, unlike most text-based QA tasks (e.g., SQuAD, Rajpurkar et al. (2016)) which could be formulated as a generic answer span selection problem and solved by a pretrained model with additional classification layers, semantic parsing is highly domain-specific, and the architecture of a neural parser is strongly coupled with the structure of its underlying DB (e.g., systems for SQL-based and other types of DBs use different encoder mod8414 els). In fact, existing systems have attempted to leverage BERT, but each with their own domainspecific, in-house strategies to encode the structured information in the DB (Guo et al., 2019; Zhang et al., 2019a; Hwang et al., 2019), and importantly, without pretraining representations on structured data. These challenges call for development of general-purpose pretraining approaches tailored to learning representations for both NL utterances and structured DB tables. In this paper we present TABERT, a pretraining approach for joint understanding of NL text and (semi-)structured tabular data (§ 3). TABERT is built on top of BERT, and jointly learns contextual representations for utterances and the structured schema of DB tables (e.g., a vector for each utterance token and table column). Specifically, TABERT linearizes the structure of tables to be compatible with a Transformer-based BERT model. To cope with large tables, we propose content snapshots, a method to encode a subset of table content most relevant to the input utterance. This strategy is further combined with a vertical attention mechanism to share information among cell representations in different rows (§ 3.1). To capture the association between tabular data and related NL text, TABERT is pretrained on a parallel corpus of 26 million tables and English paragraphs (§ 3.2). TABERT can be plugged into a neural semantic parser as a general-purpose encoder to compute representations for utterances and tables. Our key insight is that although semantic parsers are highly domain-specific, most systems rely on representations of input utterances and the table schemas to facilitate subsequent generation of DB queries, and these representations can be provided by TABERT, regardless of the domain of the parsing task. We apply TABERT to two different semantic parsing paradigms: (1) a classical supervised learning setting on the SPIDER text-to-SQL dataset (Yu et al., 2018c), where TABERT is fine-tuned together with a task-specific parser using parallel NL utterances and labeled DB queries (§ 4.1); and (2) a challenging weakly-supervised learning benchmark WIKITABLEQUESTIONS (Pasupat and Liang, 2015), where a system has to infer latent DB queries from its execution results (§ 4.2). We demonstrate TABERT is effective in both scenarios, showing that it is a drop-in replacement of a parser’s original encoder for computing contextual representations of NL utterances and DB tables. Specifically, systems augmented with TABERT outperforms their counterparts using BERT, registering state-of-the-art performance on WIKITABLEQUESTIONS, while performing competitively on SPIDER (§ 5). 
2 Background

Semantic Parsing over Tables Semantic parsing tackles the task of translating an NL utterance u into a formal meaning representation (MR) z. Specifically, we focus on parsing utterances to access database tables, where z is a structured query (e.g., an SQL query) executable on a set of relational DB tables T = {T_t}. A relational table T is a listing of N rows {R_i}_{i=1}^N of data, with each row R_i consisting of M cells {s_⟨i,j⟩}_{j=1}^M, one for each column c_j. Each cell s_⟨i,j⟩ contains a list of tokens. Depending on the underlying data representation schema used by the DB, a table could either be fully structured with strongly-typed and normalized contents (e.g., a table column named distance has a unit of kilometers, with all of its cell values, like 200, bearing the same unit), as is commonly the case for SQL-based DBs (§ 4.1). Alternatively, it could be semi-structured with unnormalized, textual cell values (e.g., 200 km, § 4.2). The query language could also take a variety of forms, from general-purpose DB access languages like SQL to domain-specific ones tailored to a particular task. Given an utterance and its associated tables, a neural semantic parser generates a DB query from the vector representations of the utterance tokens and the structured schema of tables. In this paper we refer schema as the set of columns in a table, and its representation as the list of vectors that represent its columns.2 We will introduce how TABERT computes these representations in § 3.1.

2 Column representations for more complex schemas, e.g., those capturing inter-table dependency via primary and foreign keys, could be derived from these table-wise representations.

Masked Language Models Given a sequence of NL tokens x = x_1, x_2, . . . , x_n, a masked language model (e.g., BERT) is an LM trained using the masked language modeling objective, which aims to recover the original tokens in x from a "corrupted" context created by randomly masking out certain tokens in x. Specifically, let x_m = {x_i1, . . . , x_im} be the subset of tokens in x selected to be masked out, and x̃ denote the masked sequence with tokens in x_m replaced by a [MASK] symbol. A masked LM defines a distribution p_θ(x_m | x̃) over the target tokens x_m given the masked context x̃. BERT parameterizes p_θ(x_m | x̃) using a Transformer model. During the pretraining phase, BERT maximizes p_θ(x_m | x̃) on large-scale textual corpora. In the fine-tuning phase, the pretrained model is used as an encoder to compute representations of input NL tokens, and its parameters are jointly tuned with other task-specific neural components.

[Figure 1]
Figure 1: Overview of TABERT for learning representations of utterances and table schemas with an example from WIKITABLEQUESTIONS.3 (A) A content snapshot of the table is created based on the input NL utterance. (B) Each row in the snapshot is encoded by a Transformer (only R_2 is shown), producing row-wise encodings for utterance tokens and cells. (C) All row-wise encodings are aligned and processed by V vertical self-attention layers, generating utterance and column representations.

3 Example adapted from stanford.io/38iZ8Pf

3 TABERT: Learning Joint Representations over Textual and Tabular Data

We first present how TABERT computes representations for NL utterances and table schemas (§ 3.1), and then describe the pretraining procedure (§ 3.2).

3.1 Computing Representations for NL Utterances and Table Schemas

Fig. 1 presents a schematic overview of TABERT. Given an utterance u and a table T, TABERT first creates a content snapshot of T. This snapshot consists of sampled rows that summarize the information in T most relevant to the input utterance. The model then linearizes each row in the snapshot, concatenates each linearized row with the utterance, and uses the concatenated string as input to a Transformer (e.g., BERT) model, which outputs row-wise encoding vectors of utterance tokens and cells. The encodings for all the rows in the snapshot are fed into a series of vertical self-attention layers, where a cell representation (or an utterance token representation) is computed by attending to vertically-aligned vectors of the same column (or the same NL token). Finally, representations for each utterance token and column are generated from a pooling layer.

Content Snapshot One major feature of TABERT is its use of the table contents, as opposed to just using the column names, in encoding the table schema. This is motivated by the fact that contents provide more detail about the semantics of a column than just the column's name, which might be ambiguous. For instance, the Venue column in Fig. 1 which is used to answer the example question actually refers to host cities, and encoding the sampled cell values while creating its representation may help match the term "city" in the input utterance to this column. However, a DB table could potentially have a large number of rows, with only few of them actually relevant to answering the input utterance. Encoding all of the contents using a resource-heavy Transformer is both computationally intractable and likely not necessary. Thus, we instead use a content snapshot consisting of only a few rows that are most relevant to the input utterance, providing an efficient approach to calculate content-sensitive column representations from cell values.

We use a simple strategy to create content snapshots of K rows based on the relevance between the utterance and a row. For K > 1, we select the top-K rows in the input table that have the highest n-gram overlap ratio with the utterance.4 For K = 1, to include in the snapshot as much information relevant to the utterance as possible, we create a synthetic row by selecting the cell values from each column that have the highest n-gram overlap with the utterance.
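To make the selection strategy concrete, here is a minimal sketch of content-snapshot creation. It assumes a table given as a list of rows (each a dict mapping column name to cell string) and uses simple whitespace tokenization; the paper's implementation works on its preprocessed tables, but the same n ≤ 3 n-gram overlap heuristic applies.

def ngrams(tokens, max_n=3):
    # All n-grams of the token list up to length max_n, as a set of tuples.
    return {tuple(tokens[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)}

def overlap_ratio(cell_tokens, utterance_grams, max_n=3):
    # Fraction of the cell's n-grams that also appear in the utterance.
    cell_grams = ngrams(cell_tokens, max_n)
    if not cell_grams:
        return 0.0
    return len(cell_grams & utterance_grams) / len(cell_grams)

def content_snapshot(table_rows, utterance, k=3, max_n=3):
    utt_grams = ngrams(utterance.lower().split(), max_n)
    if k > 1:
        # K > 1: keep the top-K rows with the highest overlap with the utterance.
        def row_score(row):
            return sum(overlap_ratio(str(v).lower().split(), utt_grams, max_n)
                       for v in row.values())
        return sorted(table_rows, key=row_score, reverse=True)[:k]
    # K = 1: build one synthetic row, picking for each column the cell value
    # that overlaps most with the utterance.
    synthetic = {
        col: max((row[col] for row in table_rows),
                 key=lambda v: overlap_ratio(str(v).lower().split(), utt_grams, max_n))
        for col in table_rows[0]
    }
    return [synthetic]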
Using synthetic rows in this restricted setting is motivated by the fact that cell values most relevant to answer the utterance could come from multiple rows. As an example, consider the utterance “How many more participants were there in 2008 than in the London Olympics?”, and an associating table with columns Year, Host City and Number of Participants, the most relevant cells to the utterance, 2008 (from Year) and London (from Host City), are from different rows, which could be included in a single synthetic row. In the initial experiments we found synthetic rows also help stabilize learning. Row Linearization TABERT creates a linearized sequence for each row in the content snapshot as input to the Transformer model. Fig. 1(B) depicts the linearization for R2, which consists of a concatenation of the utterance, columns, and their cell values. Specifically, each cell is represented by the name and data type5 of the column, together with its actual value, separated by a vertical bar. As an example, the cell s⟨2,1⟩valued 2005 in R2 in Fig. 1 is encoded as Year | {z} Column Name | real | {z} Column Type | 2005 | {z } Cell Value (1) The linearization of a row is then formed by concatenating the above string encodings of all the cells, separated by the [SEP] symbol. We then prefix the row linearization with utterance tokens as input sequence to the Transformer. Existing works have applied different linearization strategies to encode tables with Transformers (Hwang et al., 2019; Chen et al., 2019), while our row approach is specifically designed for encoding content snapshots. We present in § 5 results with different linearization choices. 4We use n ≤3 in our experiments. Empirically this simple matching heuristic is able to correctly identify the best-matched rows for 40 out of 50 sampled examples on WIKITABLEQUESTIONS. 5We use two data types, text, and real for numbers, predicted by majority voting over the NER labels of cell tokens. Vertical Self-Attention Mechanism The base Transformer model in TABERT outputs vector encodings of utterance and cell tokens for each row. These row-level vectors are computed separately and therefore independent of each other. To allow for information flow across cell representations of different rows, we propose vertical self-attention, a self-attention mechanism that operates over vertically aligned vectors from different rows. As in Fig. 1(C), TABERT has V stacked verticallevel self-attention layers. To generate aligned inputs for vertical attention, we first compute a fixedlength initial vector for each cell at position ⟨i, j⟩, which is given by mean-pooling over the sequence of the Transformer’s output vectors that correspond to its variable-length linearization as in Eq. (1). Next, the sequence of word vectors for the NL utterance (from the base Transformer model) are concatenated with the cell vectors as initial inputs to the vertical attention layer. Each vertical attention layer has the same parameterization as the Transformer layer in (Vaswani et al., 2017), but operates on vertically aligned elements, i.e., utterance and cell vectors that correspond to the same question token and column, respectively. This vertical self-attention mechanism enables the model to aggregate information from different rows in the content snapshot, allowing TABERT to capture cross-row dependencies on cell values. 
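The linearization template of Eq. (1) is easy to reproduce. The sketch below builds the per-row input string of Fig. 1(B), assuming cells are given as (column name, column type, value) triples; in the real model the string is further sub-tokenized by the BERT Wordpiece tokenizer.

def linearize_cell(column_name, column_type, value):
    # "Column Name | Type | Cell Value", as in Eq. (1).
    return f"{column_name} | {column_type} | {value}"

def linearize_row(utterance, cells):
    # Prefix the utterance, then append one linearized cell per column,
    # separated by [SEP], as in Fig. 1(B).
    pieces = ["[CLS]", utterance, "[SEP]"]
    for name, ctype, value in cells:
        pieces.append(linearize_cell(name, ctype, value))
        pieces.append("[SEP]")
    return " ".join(pieces)

# Example: row R2 from Fig. 1.
row = [("Year", "real", "2005"), ("Venue", "text", "Erfurt"), ("Position", "text", "1st")]
print(linearize_row("In which city did Piotr's last 1st place finish occur?", row))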
Utterance and Column Representations A representation cj is computed for each column cj by mean-pooling over its vertically aligned cell vectors, {s⟨i,j⟩: Ri in content snapshot}, from the last vertical layer. A representation for each utterance token, xj, is computed similarly over the vertically aligned token vectors. These representations will be used by downstream neural semantic parsers. TABERT also outputs an optional fixedlength table representation T using the representation of the prefixed [CLS] symbol, which is useful for parsers that operate on multiple DB tables. 3.2 Pretraining Procedure Training Data Since there is no large-scale, high-quality parallel corpus of NL text and structured tables, we instead use semi-structured tables that commonly exist on the Web as a surrogate data source. As a first step in this line, we focus on collecting parallel data in English, while extending to multilingual scenarios would be an 8417 interesting avenue for future work. Specifically, we collect tables and their surrounding NL text from English Wikipedia and the WDC WebTable Corpus (Lehmberg et al., 2016), a large-scale table collection from CommonCrawl. The raw data is extremely noisy, and we apply aggressive cleaning heuristics to filter out invalid examples (e.g., examples with HTML snippets or in foreign languages, and non-relational tables without headers). See Appendix § A.1 for details of data pre-processing. The pre-processed corpus contains 26.6 million parallel examples of tables and NL sentences. We perform sub-tokenization using the Wordpiece tokenizer shipped with BERT. Unsupervised Learning Objectives We apply different objectives for learning representations of the NL context and structured tables. For NL contexts, we use the standard Masked Language Modeling (MLM) objective (Devlin et al., 2019), with a masking rate of 15% sub-tokens in an NL context. For learning column representations, we design two objectives motivated by the intuition that a column representation should contain both the general information of the column (e.g., its name and data type), and representative cell values relevant to the NL context. First, a Masked Column Prediction (MCP) objective encourages the model to recover the names and data types of masked columns. Specifically, we randomly select 20% of the columns in an input table, masking their names and data types in each row linearization (e.g., if the column Year in Fig. 1 is selected, the tokens Year and real in Eq. (1) will be masked). Given the column representation cj, TABERT is trained to predict the bag of masked (name and type) tokens from cj using a multi-label classification objective. Intuitively, MCP encourages the model to recover column information from its contexts. Next, we use an auxiliary Cell Value Recovery (CVR) objective to ensure information of representative cell values in content snapshots is retained after additional layers of vertical self-attention. Specifically, for each masked column cj in the above MCP objective, CVR predicts the original tokens of each cell s⟨i,j⟩(of cj) in the content snapshot conditioned on its cell vector s⟨i,j⟩.6 For instance, for the example cell s⟨2,1⟩in Eq. (1), we predict its value 2005 from s⟨2,1⟩. Since a cell 6The cell value tokens are not masked in the input sequence, since predicting masked cell values is challenging even with the presence of its surrounding context. could have multiple value tokens, we apply the span-based prediction objective (Joshi et al., 2019). 
Specifically, to predict a cell token s⟨i,j⟩k ∈s⟨i,j⟩, its positional embedding ek and the cell representations s⟨i,j⟩are fed into a two-layer network f(·) with GeLU activations (Hendrycks and Gimpel, 2016). The output of f(·) is then used to predict the original value token s⟨i,j⟩k from a softmax layer. 4 Example Application: Semantic Parsing over Tables We apply TABERT for representation learning on two semantic parsing paradigms, a classical supervised text-to-SQL task over structured DBs (§ 4.1), and a weakly supervised parsing problem on semistructured Web tables (§ 4.2). 4.1 Supervised Semantic Parsing Benchmark Dataset Supervised learning is the typical scenario of learning a parser using parallel data of utterances and queries. We use SPIDER (Yu et al., 2018c), a text-to-SQL dataset with 10,181 examples across 200 DBs. Each example consists of an utterance (e.g., “What is the total number of languages used in Aruba?”), a DB with one or more tables, and an annotated SQL query, which typically involves joining multiple tables to get the answer (e.g., SELECT COUNT(*) FROM Country JOIN Lang ON Country.Code = Lang.CountryCode WHERE Name = ‘Aruba’). Base Semantic Parser We aim to show TABERT could help improve upon an already strong parser. Unfortunately, at the time of writing, none of the top systems on SPIDER were publicly available. To establish a reasonable testbed, we developed our in-house system based on TranX (Yin and Neubig, 2018), an open-source general-purpose semantic parser. TranX translates an NL utterance into an intermediate meaning representation guided by a user-defined grammar. The generated intermediate MR could then be deterministically converted to domain-specific query languages (e.g., SQL). We use TABERT as encoder of utterances and table schemas. Specifically, for a given utterance u and a DB with a set of tables T = {Tt}, we first pair u with each table Tt in T as inputs to TABERT, which generates |T | sets of table-specific representations of utterances and columns. At each time step, an LSTM decoder performs hierarchical attention (Libovick´y and Helcl, 2017) over the list of table-specific representations, constructing an 8418 MR based on the predefined grammar. Following the IRNet model (Guo et al., 2019) which achieved the best performance on SPIDER as the time of writing, we use SemQL, a simplified version of the SQL, as the underlying grammar. We refer interested readers to Appendix § B.1 for details of our system. 4.2 Weakly Supervised Semantic Parsing Benchmark Dataset Weakly supervised semantic parsing considers the reinforcement learning task of inferring the correct query from its execution results (i.e., whether the answer is correct). Compared to supervised learning, weakly supervised parsing is significantly more challenging, as the parser does not have access to the labeled query, and has to explore the exponentially large search space of possible queries guided by the noisy binary reward signal of execution results. WIKITABLEQUESTIONS (Pasupat and Liang, 2015) is a popular dataset for weakly supervised semantic parsing, which has 22,033 utterances and 2,108 semi-structured Web tables from Wikipedia.7 Compared to SPIDER, examples in this dataset do not involve joining multiple tables, but typically require compositional, multi-hop reasoning over a series of entries in the given table (e.g., to answer the example in Fig. 1 the parser needs to reason over the row set {R2, R3, R5}, locating the Venue field with the largest value of Year). 
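The kind of compositional program the parser must discover for the Fig. 1 question can be mimicked with ordinary table operations. The sketch below is not MAPO's query language (an example of that appears in Appendix B.2); it is a pandas rendering of the same three-step reasoning, using illustrative cell values loosely based on the Fig. 1 example, plus the binary execution reward that serves as the weak supervision signal.

import pandas as pd

table = pd.DataFrame({
    "Year":     [2003, 2005, 2005, 2006, 2007],
    "Venue":    ["Tampere", "Erfurt", "Izmir", "Moscow", "Bangkok"],
    "Position": ["3rd", "1st", "1st", "2nd", "1st"],
})

# Rows whose Position is "1st", then the one with the largest Year,
# then project its Venue field.
first_places = table[table["Position"] == "1st"]
predicted_answer = first_places.loc[first_places["Year"].idxmax(), "Venue"]

# Weak supervision: the parser only observes whether the executed program
# yields the annotated answer (a binary reward), never the program itself.
gold_answer = "Bangkok"
reward = 1.0 if predicted_answer == gold_answer else 0.0
print(predicted_answer, reward)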
Base Semantic Parser MAPO (Liang et al., 2018) is a strong system for weakly supervised semantic parsing. It improves the sample efficiency of the REINFORCE algorithm by biasing the exploration of queries towards the high-rewarding ones already discovered by the model. MAPO uses a domain-specific query language tailored to answering compositional questions on single tables, and its utterances and column representations are derived from an LSTM encoder, which we replaced with our TABERT model. See Appendix § B.2 for details of MAPO and our adaptation. 5 Experiments In this section we evaluate TABERT on downstream tasks of semantic parsing to DB tables. 7While some of the 421 testing Wikipedia tables might be included in our pretraining corpora, they only account for a very tiny fraction. In our pilot study, we also found pretraining only on Wikipedia tables resulted in worse performance. Pretraining Configuration We train two variants of the model, TABERTBase and TABERTLarge, with the underlying Transformer model initialized with the uncased versions of BERTBase and BERTLarge, respectively.8 During pretraining, for each table and its associated NL context in the corpus, we create a series of training instances of paired NL sentences (as synthetically generated utterances) and tables (as content snapshots) by (1) sliding a (non-overlapping) context window of sentences with a maximum length of 128 tokens, and (2) using the NL tokens in the window as the utterance, and pairing it with randomly sampled rows from the table as content snapshots. TABERT is implemented in PyTorch using distributed training. Refer to Appendix § A.2 for details of pretraining. Comparing Models We mainly present results for two variants of TABERT by varying the size of content snapshots K. TABERT(K = 3) uses three rows from input tables as content snapshots and three vertical self-attention layers. TABERT(K = 1) uses one synthetically generated row as the content snapshot as described in § 3.1. Since this model does not have multi-row input, we do not use additional vertical attention layers (and the cell value recovery learning objective). Its column representation cj is defined by mean-pooling over the Transformer’s output encodings that correspond to the column name (e.g., the representation for the Year column in Fig. 1 is derived from the vector of the Year token in Eq. (1)). We find this strategy gives better results compared with using the cell representation sj as cj. We also compare with BERT using the same row linearization and content snapshot approach as TABERT(K = 1), which reduces to a TABERT(K = 1) model without pretraining on tabular corpora. Evaluation Metrics As standard, we report execution accuracy on WIKITABLEQUESTIONS and exact-match accuracy of DB queries on SPIDER. 5.1 Main Results Tab. 1 and Tab. 2 summarize the end-to-end evaluation results on WIKITABLEQUESTIONS and SPIDER, respectively. First, comparing with existing strong semantic parsing systems, we found our 8We also attempted to train TABERT on our collected corpus from scratch without initialization from BERT, but with inferior results, potentially due to the average lower quality of web-scraped tables compared to purely textual corpora. We leave improving the quality of training data as future work. 8419 Previous Systems on WikiTableQuestions Model DEV TEST Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al. (2016) 34.1 34.2 Ensemble 15 Models 37.5 37.7 Zhang et al. (2017) 40.6 43.7 Dasigi et al. (2019) 43.1 44.3 Agarwal et al. 
(2019) 43.2 44.1 Ensemble 10 Models – 46.9 Wang et al. (2019b) 43.7 44.5 Our System based on MAPO (Liang et al., 2018) DEV Best TEST Best Base Parser† 42.3 ±0.3 42.7 43.1 ±0.5 43.8 w/ BERTBase (K = 1) 49.6 ±0.5 50.4 49.4 ±0.5 49.2 −content snapshot 49.1 ±0.6 50.0 48.8 ±0.9 50.2 w/ TABERTBase (K = 1) 51.2 ±0.5 51.6 50.4 ±0.5 51.2 −content snapshot 49.9 ±0.4 50.3 49.4 ±0.4 50.0 w/ TABERTBase (K = 3) 51.6 ±0.5 52.4 51.4 ±0.3 51.3 w/ BERTLarge (K = 1) 50.3 ±0.4 50.8 49.6 ±0.5 50.1 w/ TABERTLarge (K = 1) 51.6 ±1.1 52.7 51.2 ±0.9 51.5 w/ TABERTLarge (K = 3) 52.2 ±0.7 53.0 51.8 ±0.6 52.3 Table 1: Execution accuracies on WIKITABLEQUESTIONS. †Results from Liang et al. (2018). (TA)BERT models are evaluated with 10 random runs. We report mean, standard deviation and the best results. TEST7→BEST refers to the result from the run with the best performance on DEV. set. parsers with TABERT as the utterance and table encoder perform competitively. On the test set of WIKITABLEQUESTIONS, MAPO augmented with a TABERTLarge model with three-row content snapshots, TABERTLarge(K = 3), registers a singlemodel exact-match accuracy of 52.3%, surpassing the previously best ensemble system (46.9%) from Agarwal et al. (2019) by 5.4% absolute. On SPIDER, our semantic parser based on TranX and SemQL (§ 4.1) is conceptually similar to the base version of IRNet as both systems use the SemQL grammar, while our system has a simpler decoder. Interestingly, we observe that its performance with BERTBase (61.8%) matches the full BERT-augmented IRNet model with a stronger decoder using augmented memory and coarse-to-fine decoding (61.9%). This confirms that our base parser is an effective baseline. Augmented with representations produced by TABERTLarge(K = 3), our parser achieves up to 65.2% exact-match accuracy, a 2.8% increase over the base model using BERTBase. Note that while other competitive systems on the leaderboard use BERT with more sophisticated semantic parsing models, our best DEV. result is already close to the score registered by the best submission (RyanSQL+BERT). This suggests that if they instead used TABERT as the representation layer, they would see further gains. Comparing semantic parsers augmented with Top-ranked Systems on Spider Leaderboard Model DEV. ACC. Global–GNN (Bogin et al., 2019a) 52.7 EditSQL + BERT (Zhang et al., 2019a) 57.6 RatSQL (Wang et al., 2019a) 60.9 IRNet + BERT (Guo et al., 2019) 60.3 + Memory + Coarse-to-Fine 61.9 IRNet V2 + BERT 63.9 RyanSQL + BERT (Choi et al., 2020) 66.6 Our System based on TranX (Yin and Neubig, 2018) Mean Best w/ BERTBase (K = 1) 61.8 ±0.8 62.4 −content snapshot 59.6 ±0.7 60.3 w/ TABERTBase (K = 1) 63.3 ±0.6 64.2 −content snapshot 60.4 ±1.3 61.8 w/ TABERTBase (K = 3) 63.3 ±0.7 64.1 w/ BERTLarge (K = 1) 61.3 ±1.2 62.9 w/ TABERTLarge (K = 1) 64.0 ±0.4 64.4 w/ TABERTLarge (K = 3) 64.5 ±0.6 65.2 Table 2: Exact match accuracies on the public development set of SPIDER. Models are evaluated with 5 random runs. TABERT and BERT, we found TABERT is more effective across the board. We hypothesize that the performance improvements would be attributed by two factors. First, pre-training on large parallel textual and tabular corpora helps TABERT learn to encode structure-rich tabular inputs in their linearized form (Eq. (1)), whose format is different from the ordinary natural language data that BERT is trained on. 
Second, pre-training on parallel data could also helps the model produce representations that better capture the alignment between an utterance and the relevant information presented in the structured schema, which is important for semantic parsing. Overall, the results on the two benchmarks demonstrate that pretraining on aligned textual and tabular data is necessary for joint understanding of NL utterances and tables, and TABERT works well with both structured (SPIDER) and semi-structured (WIKITABLEQUESTIONS) DBs, and agnostic of the task-specific structures of semantic parsers. Effect of Content Snapshots In this paper we propose using content snapshots to capture the information in input DB tables that is most relevant to answering the NL utterance. We therefore study the effectiveness of including content snapshots when generating schema representations. We include in Tab. 1 and Tab. 2 results of models without using content in row linearization (“−content snapshot”). Under this setting a column is rep8420 u: How many years before was the film Bacchae out before the Watermelon? Input to TABERTLarge (K = 3) ▷Content Snapshot with Three Rows Film Year Function Notes The Bacchae 2002 Producer Screen adaptation of... The Trojan Women 2004 Producer/Actress Documutary film... The Watermelon 2008 Producer Oddball romantic comedy... Input to TABERTLarge (K = 1) ▷Content Snapshot with One Synthetic Row Film Year Function Notes The Watermelon 2013 Producer Screen adaptation of... Table 3: Content snapshots generated by two models for a WIKITABLEQUESTIONS DEV. example. Matched tokens between the question and content snapshots are underlined. resented as “Column Name | Type” without cell values (c.f., Eq. (1)). We find that content snapshots are helpful for both BERT and TABERT, especially for TABERT. As discussed in § 3.1, encoding sampled values from columns in learning their representations helps the model infer alignments between entity and relational phrases in the utterance and the corresponding column. This is particularly helpful for identifying relevant columns from a DB table that is mentioned in the input utterance. As an example, empirically we observe that on SPIDER our semantic parser with TABERTBase using just one row of content snapshots (K = 1) registers a higher accuracy of selecting the correct columns when generating SQL queries (e.g., columns in SELECT and WHERE clauses), compared to the TABERTBase model without encoding content information (87.4% v.s. 86.4%). Additionally, comparing TABERT using one synthetic row (K = 1) and three rows from input tables (K = 3) as content snapshots, the latter generally performs better. Intuitively, encoding more table contents relevant to the input utterance could potentially help answer questions that involve reasoning over information across multiple rows in the table. Tab. 3 shows such an example, and to answer this question a parser need to subtract the values of Year in the rows for “The Watermelon” and “The Bacchae”. TABERTLarge (K = 3) is able to capture the two target rows in its content snapshot and generates the correct DB query, while the TABERTLarge(K = 1) model with only one row as content snapshot fails to answer this example. Effect of Row Linearization TABERT uses row linearization to represent a table row as sequential input to Transformer. Tab. 4 (upper half) presents results using various linearization methods. 
We find adding type information and content snapshots improves performance, as they provide more hints about the meaning of a column. Cell Linearization Template WIKIQ. SPIDER Pretrained TABERTBase Models (K = 1) Column Name 49.6 ±0.4 60.0 ±1.1 Column Name | Type† (−content snap.) 49.9 ±0.4 60.4 ±1.3 Column Name | Type | Cell Value† 51.2 ±0.5 63.3 ±0.6 BERTBase Models Column Name (Hwang et al., 2019) 49.0 ±0.4 58.6 ±0.3 Column Name is Cell Value (Chen19) 50.2 ±0.4 63.1 ±0.7 Table 4: Performance of pretrained TABERTBase models and BERTBase on the DEV. sets with different linearization methods. Slot names are underlined. †Results copied from Tab. 1 and Tab. 2. Learning Objective WIKIQ. SPIDER MCP only 51.6 ±0.7 62.6 ±0.7 MCP + CVR 51.6 ±0.5 63.3 ±0.7 Table 5: Performance of pretrained TABERTBase(K = 3) on DEV. sets with different pretraining objectives. We also compare with existing linearization methods in literature using a TABERTBase model, with results shown in Tab. 4 (lower half). Hwang et al. (2019) uses BERT to encode concatenated column names to learn column representations. In line with our previous discussion on the effectiveness content snapshots, this simple strategy without encoding cell contents underperforms (although with TABERTBase pretrained on our tabular corpus the results become slightly better). Additionally, we remark that linearizing table contents has also be applied to other BERT-based tabular reasoning tasks. For instance, Chen et al. (2019) propose a “natural” linearization approach for checking if an NL statement entails the factual information listed in a table using a binary classifier with representations from BERT, where a table is linearized by concatenating the semicolon-separated cell linearization for all rows. Each cell is represented by a phrase “column name is cell value”. For completeness, we also tested this cell linearization approach, and find BERTBase achieved improved results. We leave pretraining TABERT with this linearization strategy as promising future work. Impact of Pretraining Objectives TABERT uses two objectives (§ 3.2), a masked column prediction (MCP) and a cell value recovery (CVR) objective, to learn column representations that could capture both the general information of the column (via MCP) and its representative cell values related to the utterance (via CVR). Tab. 5 shows ablation results of pretraining TABERT with different objectives. We find TABERT trained with both MCP and the auxiliary CVR objectives gets a slight advantage, suggesting CVR could potentially lead to 8421 more representative column representations with additional cell information. 6 Related Works Semantic Parsing over Tables Tables are important media of world knowledge. Semantic parsers have been adapted to operate over structured DB tables (Wang et al., 2015; Xu et al., 2017; Dong and Lapata, 2018; Yu et al., 2018b; Shi et al., 2018; Wang et al., 2018), and open-domain, semistructured Web tables (Pasupat and Liang, 2015; Sun et al., 2016; Neelakantan et al., 2016). To improve representations of utterances and tables for neural semantic parsing, existing systems have applied pretrained word embeddings (e.g.., GloVe, as in Zhong et al. (2017); Yu et al. (2018a); Sun et al. (2018); Liang et al. (2018)), and BERT-family models for learning joint contextual representations of utterances and tables, but with domain-specific approaches to encode the structured information in tables (Hwang et al., 2019; He et al., 2019; Guo et al., 2019; Zhang et al., 2019a). 
TABERT advances this line of research by presenting a generalpurpose, pretrained encoder over parallel corpora of Web tables and NL context. Another relevant direction is to augment representations of columns from an individual table with global information of its linked tables defined by the DB schema (Bogin et al., 2019a; Wang et al., 2019a). TABERT could also potentially improve performance of these systems with improved table-level representations. Knowledge-enhanced Pretraining Recent pretraining models have incorporated structured information from knowledge bases (KBs) or other structured semantic annotations into training contextual word representations, either by fusing vector representations of entities and relations on KBs into word representations of LMs (Peters et al., 2019; Zhang et al., 2019b,c), or by encouraging the LM to recover KB entities and relations from text (Sun et al., 2019; Liu et al., 2019a). TABERT is broadly relevant to this line in that it also exposes an LM with structured data (i.e., tables), while aiming to learn joint representations for both textual and structured tabular data. 7 Conclusion and Future Work We present TABERT, a pretrained encoder for joint understanding of textual and tabular data. We show that semantic parsers using TABERT as a general-purpose feature representation layer achieved strong results on two benchmarks. This work also opens up several avenues for future work. First, we plan to evaluate TABERT on other related tasks involving joint reasoning over textual and tabular data (e.g., table retrieval and table-totext generation). Second, following the discussions in § 5, we will explore other table linearization strategies with Transformers, improving the quality of pretraining corpora, as well as novel unsupervised objectives. Finally, to extend TABERT to cross-lingual settings with utterances in foreign languages and structured schemas defined in English, we plan to apply more advanced semantic similarity metrics for creating content snapshots. References Rishabh Agarwal, Chen Liang, Dale Schuurmans, and Mohammad Norouzi. 2019. Learning to generalize from sparse and underspecified rewards. In ICML. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of EMNLP. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of ACL. Ben Bogin, Matt Gardner, and Jonathan Berant. 2019a. Global reasoning over database structures for text-tosql parsing. ArXiv, abs/1908.11214. Ben Bogin, Matthew Gardner, and Jonathan Berant. 2019b. Representing schema structure with graph neural networks for text-to-sql parsing. In Proceedings of ACL. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. TabFact: A largescale dataset for table-based fact verification. ArXiv, abs/1909.02164. Donghyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2020. Ryansql: Recursively applying sketch-based slot fillings for complex text-to-sql in cross-domain databases. ArXiv, abs/2004.03125. Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke S. Zettlemoyer, and Eduard H. Hovy. 2019. Iterative search for weakly supervised semantic parsing. In Proceedings of NAACL-HLT. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT. Li Dong and Mirella Lapata. 2018. 
coarse-to-fine decoding for neural semantic parsing. In Proceedings of ACL. 8422 Yoav Goldberg. 2019. Assessing bert’s syntactic abilities. ArXiv, abs/1901.05287. Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-sql in cross-domain database with intermediate representation. In Proceedings of ACL. Pengcheng He, Yi Mao, Kaushik Chakrabarti, and Weizhu Chen. 2019. X-sql: reinforce schema representation with context. ArXiv, abs/1908.08113. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). ArXiv, abs/1606.08415. Wonseok Hwang, Jinyeung Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on wikisql with table-aware word contextualization. ArXiv, abs/1902.01069. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke S. Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. In Proceedings of EMNLP. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of EMNLP. Oliver Lehmberg, Dominique Ritze, Robert Meusel, and Christian Bizer. 2016. A large public corpus of web tables containing time and context metadata. In Proceedings of WWW. Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V Le, and Ni Lao. 2018. Memory augmented policy optimization for program synthesis and semantic parsing. In Proceedings of NIPS. Jindrich Libovick´y and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of ACL. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph. ArXiv, abs/1909.07606. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Proceedings of NIPS. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of CoNLL. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of ICLR. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of ACL. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL. Matthew E. Peters, Mark Neumann, IV RobertLLogan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of EMNLP. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of EMNLP. Tianze Shi, Kedar Tatwawadi, Kaushik Chakrabarti, Yi Mao, Oleksandr Polozov, and Weizhu Chen. 2018. Incsql: Training incremental text-to-sql parsers with non-deterministic oracles. ArXiv, abs/1809.05054. Huan Sun, Hao Ma, Xiaodong He, Wen tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. In Proceedings of WWW. 
Yibo Sun, Duyu Tang, Nan Duan, Jianshu Ji, Guihong Cao, Xiaocheng Feng, Bing Qin, Ting Liu, and Ming Zhou. 2018. Semantic parsing with syntaxand table-aware SQL generation. In Proceedings of EMNLP. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. ArXiv, abs/1904.09223. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Margot Richardson. 2019a. Rat-sql: Relation-aware schema encoding and linking for text-to-sql parsers. ArXiv, abs/1911.04942. Bailin Wang, Ivan Titov, and Mirella Lapata. 2019b. Learning semantic parsers from denotations with latent structured alignments and abstract programs. In EMNLP/IJCNLP. Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Xin Mao, Oleksandr Polozov, and Rishabh Singh. 2018. Robust text-to-sql generation with execution-guided decoding. ArXiv, abs/1807.03100. 8423 Daniel C. Wang, Andrew W. Appel, Jeffrey L. Korn, and Christopher S. Serra. 1997. The Zephyr abstract syntax description language. In Proceedings of DSL. Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of ACL. Xiaojun Xu, Chang Liu, and Dawn Song. 2017. SQLNet: Generating structured queries from natural language without reinforcement learning. arXiv, abs/1711.04436. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Proceedings of NIPS. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL. Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation. In Proceedings of EMNLP Demonstration Track. Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir R. Radev. 2018a. TypeSQL: Knowledgebased type-aware neural text-to-sql generation. In Proceedings of NAACL-HLT. Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir R. Radev. 2018b. Syntaxsqlnet: Syntax tree networks for complex and cross-domain text-to-sql task. In Proceedings of EMNLP. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018c. Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-sql task. In Proceedings of EMNLP. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of AAAI. Rui Zhang, Tao Yu, He Yang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019a. Editing-based sql query generation for cross-domain context-dependent questions. ArXiv, abs/1909.00786. Yuchen Zhang, Panupong Pasupat, and Percy Liang. 2017. Macro grammars and holistic triggering for efficient semantic parsing. In Proceedings of EMNLP. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019b. Ernie: Enhanced language representation with informative entities. In Proceedings of ACL. 
Zhuosheng Zhang, Yu-Wei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiaodong Zhou. 2019c. Semantics-aware bert for language understanding. ArXiv, abs/1909.02209. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv, abs/1709.00103. 8424 Supplementary Materials A Pretraining Details A.1 Training Data We collect parallel examples of tables and their surrounding NL sentences from two sources: Wikipedia Tables We extract all the tables on English Wikipedia9. For each table, we use the preceding three paragraphs as the NL context, as we observe that most Wiki tables are located after where they are described in the body text. WDC WebTable Corpus (Lehmberg et al., 2016) is a large collection of Web tables extracted from the Common Crawl Web scrape10. We use its 2015 English-language relational subset, which consists of 50.8 million relational tables and their surrounding NL contexts. Preprocessing Our dataset is collected from arbitrary Web tables, which are extremely noisy. We develop a set of heuristics to clean the data by: (1) removing columns whose names have more than 10 tokens; (2) filtering cells with more than two non-ASCII characters or 20 tokens; (3) removing empty or repetitive rows and columns; (4) filtering tables with less than three rows and four columns, and (5) running spaCy to identify the data type of columns (text or real value) by majority voting over the NER labels of column tokens, (6) rotating vertically oriented tables. We sub-tokenize the corpus using the Wordpiece tokenizer in Devlin et al. (2019). The pre-processing results in 1.3 million tables from Wikipedia and 25.3 million tables from the WDC corpus. A.2 Pretraining Setup As discussed in § 5, we create training instances of NL sentences (as synthetic utterances) and content snapshots from tables by sampling from the parallel corpus of NL contexts and tables. Each epoch contains 37.6M training instances. We train TABERT for 10 epochs. Tab. 6 lists the hyper-parameters used in training. Learning rates are validated on the development set of WIKITABLEQUESTIONS. We use a batch size of 512 for large models to reduce training time. The training objective is sum of the three pretraining objectives in § 3.2: the masked 9We do not use infoboxes (tables on the top-right of a Wiki page that describe properties of the main topic), as they are not relational tables. 10http://webdatacommons.org/webtables language modeling (MLM) objective for utterance tokens, the masked column prediction (MCP) objective for columns, and the column value recovery (CVR) objective for their cell values. An exception is pretraining the TABERT(K = 1) models. Since there are no additional vertical attention layers, we do not use the CVR objective, and the MCP objective reduces to the vanilla MLM objective over encodings from the base Transformer model. Our largest model TABERTLarge(K = 3) takes six days to train for 10 epochs on 128 Tesla V100 GPUs using mixed precision training. B Semantic Parsers B.1 Supervised Parsing on SPIDER Model We develop our text-to-SQL parser based on TranX (Yin and Neubig, 2018), which translates an NL utterance into a tree-structured abstract meaning representation following user-specified grammar, before deterministically convert the generated abstract MR into an SQL query. 
TranX models the construction process of an abstract MR (treestructured representation of an SQL query) using a transition-based system, which decomposes its generation story into a sequence of actions following the user defined grammar. Formally, given an input NL utterance u and a database with a set of tables T = {Ti}, the probability of generating of an SQL query (i.e., its semantically equivalent MR) z is decomposed as the production of action probabilities: p(z|u, T ) = Y p(at|a<t, u, T ) (2) where at is the action applied to the hypothesis at time stamp t. a<t denote the previous action history. We refer readers to Yin and Neubig (2018) for details of the transition system and how individual action probabilities are computed. In our adaptation of TranX to text-to-SQL parsing on SPIDER, we follow Guo et al. (2019) and use SemQL as the underlying grammar, which is a simplification of the SQL language. Fig. 2 lists the SemSQL grammar specified using the abstract syntax description language (Wang et al., 1997). Intuitively, the generation starts from a treestructured derivation with the root production rule select stmt7→SelectStatement, which lays out overall the structure of an SQL query. At each time step, the decoder algorithm locates the current opening node on the derivation tree, following a depth-first, left-to-right order. If the opening node 8425 Parameter TABERTBase(K = 1) TABERTLarge(K = 1) TABERTBase(K = 3) TABERTLarge(K = 3) Batch Size 256 512 512 512 Learning Rate 2 × 10−5 2 × 10−5 4 × 10−5 4 × 10−5 Max Epoch 10 Weight Decay 0.01 Gradient Norm Clipping 1.0 Table 6: Hyper-parameters using in pretraining is not a leaf node, the decoder invokes an action at which expands the opening node using a production rule with appropriate type. If the current opening node is a leaf node (e.g., a node denoting string literal), the decoder fills in the leaf node using actions that emit terminal values. To use such a transition system to generate SQL queries, we extend its action space with two new types of actions, SELECTTABLE(Ti) for node of type table ref in Fig. 2, which selects a table Ti (e.g., for predicting target tables for a FROM clause), and SELECTCOLUMN(Ti, cj) for node of type column ref, which selects the column cj from table Ti (e.g., for predicting a result column used in the SELECT clause). As described in § 4.1, TABERT produces a list of entries, with one entry ⟨Ti, Xi, Ci⟩for each table Ti: M = n ⟨Ti, Xi = {x1, x2, . . .}, Ci = {c1, c2, . . . , }⟩i o|T | i=1 (3) where each entry ⟨Ti, Xi, Ci⟩in M consists of Ti, the representation of table Ti given by the output vector of the prefixed [CLS] symbol, the tablespecific representations of utterance tokens Xi = {x1, x2, . . .}, and representations of columns in Ti, Ci = {c1, c2, . . .}. At each time step t, the decoder in TranX performs hierarchical attention over representations in M to compute a context vector. First, a table-wise attention score is computed using the LSTM’s previous state, statet−1 with the set of table representations. score(Ti) = Softmax  DotProduct(statet−1, key(Ti))  , (4) where the linear projection key(·) ∈R256 projects the table representations to key space. 
Next, for each table T_i ∈ T, a table-wise context vector ctx(T_i) is generated by attending over the union of the utterance token representations X_i and the column representations C_i:

ctx(T_i) = DotProductAttention( state_{t-1}, key(X_i ∪ C_i), value(X_i ∪ C_i) ),    (5)

with the LSTM state as the query, key(·) as the key, and another linear transformation value(·) ∈ R^256 to project the representations to value vectors. The final context vector is then given by the sum of these table-wise context vectors ctx(T_i) (i ∈ {1, ..., |T|}), weighted by the attention scores score(T_i). The generated context vector is used to update the state of the decoder LSTM to state_t, and the updated decoder state is then used to compute the probability of the action carried out at time step t, a_t. For a SELECTTABLE(T_i) action, the probability is defined analogously to Eq. (4). For a SELECTCOLUMN(T_i, c_j) action, it is factorized as the probability of selecting the table T_i (given by Eq. (4)) times the probability of selecting the column c_j. The latter is defined as

score(c_j) = Softmax( DotProduct(state_t, c_j) ).    (6)

We also add simple entity linking features to the representations in M, defined by the following heuristics: (1) If an utterance token x ∈ u matches the name of a table T, we concatenate a trainable embedding vector (table_match ∈ R^16) to the representations of x and T. (2) Similarly, we concatenate an embedding vector (column_match ∈ R^16) to the representations of an utterance token and a column if their names match. (3) Finally, we concatenate a zero-vector (0 ∈ R^16) to the representations of all unmatched elements.

select_stmt = SelectStatement(
    distinct distinct,                   # DISTINCT keyword
    expr* result_columns,                # Columns in SELECT clause
    expr? where_clause,                  # WHERE clause
    order_by_clause? order_by_clause,    # ORDER BY clause
    int? limit_value,                    # LIMIT clause
    table_ref* join_with_tables,         # Tables in the JOIN clause
    compound_stmt? compound_statement    # Compound statements (e.g., UNION, EXCEPT)
)
distinct = None | Distinct
order_by_clause = OrderByClause(expr* expr_list, order order)
order = ASC | DESC
expr = AndExpr(expr* expr_list) | OrExpr(expr* expr_list) | NotExpr(expr expr)
    | CompareExpr(compare_op op, expr left_value, expr right_value)
    | AggregateExpr(aggregate_op op, expr value, distinct distinct)
    | BinaryExpr(binary_op op, expr left_value, expr right_value)
    | BetweenExpr(expr field, expr left_value, expr right_value)
    | InExpr(column_ref left_value, expr right_value)
    | LikeExpr(column_ref left_value, expr right_value)
    | AllRows(table_ref table_name)
    | select_stmt
    | Literal(string value)
    | ColumnReference(column_ref column_name)
aggregate_op = Sum | Max | Min | Count | Avg
compare_op = LessThan | LessThanEqual | GreaterThan | GreaterThanEqual | Equal | NotEqual
binary_op = Add | Sub | Divide | Multiply
compound_stmt = CompoundStatement(compound_op op, select_stmt query)
compound_op = Union | Intersect | Except
Figure 2: ASDL grammar of SemQL used in TranX

Configuration We use the default configuration of TranX. For the TABERT parameters, we use an Adam optimizer with a learning rate of 3e-5 and a linearly decayed learning rate schedule, and another Adam optimizer with a constant learning rate of 1e-3 for all remaining parameters. During training, we update model parameters for 25,000 iterations, and freeze the TABERT parameters for the first 1,000 update steps. We use a batch size of 30 and a beam size of 3. We use gradient accumulation for large models to fit a batch into GPU memory.
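The optimizer configuration described above (a small, linearly decayed learning rate for the TABERT parameters, a constant learning rate for the remaining parameters, and TABERT frozen for the first 1,000 updates) can be made concrete with a short sketch. This is a minimal PyTorch illustration under our own assumptions: the split into tabert_params and other_params and the helper names are hypothetical, not the actual training code.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

def build_optimizers(tabert_params, other_params, total_steps=25000):
    """Two optimizers: linearly decayed 3e-5 for TABERT, constant 1e-3 for the rest."""
    tabert_opt = Adam(tabert_params, lr=3e-5)
    other_opt = Adam(other_params, lr=1e-3)
    # Linear decay of the TABERT learning rate over the full training run.
    tabert_sched = LambdaLR(tabert_opt, lambda step: max(0.0, 1.0 - step / total_steps))
    return tabert_opt, other_opt, tabert_sched

def training_step(step, loss, tabert_opt, other_opt, tabert_sched, freeze_steps=1000):
    """One update: the TABERT parameters are only updated after `freeze_steps` steps."""
    loss.backward()
    if step >= freeze_steps:
        tabert_opt.step()
        tabert_sched.step()
    other_opt.step()
    tabert_opt.zero_grad()
    other_opt.zero_grad()
```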
B.2 Weakly-supervised Parsing on WIKITABLEQUESTIONS Model We use MAPO (Liang et al., 2018), a strong weakly-supervised semantic parser. The original MAPO models comes with an LSTM encoder, which generates utterance and column representations used by the decoder to predict table queries. We directly substitute the encoder with TABERT, and project the utterance and table representations from TABERT to the original embedding space using a linear transformation. MAPO uses a domain-specific query language tailored to answer compositional questions on a single table. For instance, the example question in Fig. 1 could be answered using the following query: Table.contains(column=Position, value=1st) # Get rows whose ‘Position’ field contains ‘1st’ .argmax(order by=Year) # Get the row which has the largest ‘Year’ field .hop(column=Venue) # Select the value of ‘Venue’ in the result row MAPO is written in Tensorflow. In our experiments we use an optimized re-implementation in PyTorch, which yields 4× training speedup. Configuration We use the same optimizer and learning rate schedule as in § B.1. We use a batch size of 10, and train the model for 20000 steps, with the TABERT parameters frozen at the first 5000 steps. Other hyper-parameters are kept the same as the original MAPO system.
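To illustrate how the encoder substitution described above might look in code, here is a minimal PyTorch sketch. The wrapper class, the dimensions, and the assumed TABERT call signature are all illustrative assumptions, not MAPO's or TABERT's actual interfaces.

```python
import torch.nn as nn

class TaBertEncoderWrapper(nn.Module):
    """Drop-in replacement for MAPO's LSTM encoder: encode the utterance and
    table with TABERT, then linearly project the utterance-token and column
    representations into the embedding space the MAPO decoder expects."""

    def __init__(self, tabert_model, tabert_dim=768, mapo_dim=200):
        super().__init__()
        self.tabert = tabert_model
        self.proj_utterance = nn.Linear(tabert_dim, mapo_dim)
        self.proj_column = nn.Linear(tabert_dim, mapo_dim)

    def forward(self, utterance_tokens, table):
        # Hypothetical TABERT call returning token-level and column-level encodings.
        utt_repr, col_repr = self.tabert(utterance_tokens, table)
        return self.proj_utterance(utt_repr), self.proj_column(col_repr)
```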
2020
745
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8427–8439 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8427 Universal Decompositional Semantic Parsing Elias Stengel-Eskin Johns Hopkins University Aaron Steven White University of Rochester Sheng Zhang Johns Hopkins University Benjamin Van Durme Johns Hopkins University Abstract We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores. We also introduce a strong pipeline model for parsing into the UDS graph structure, and show that our transductive parser performs comparably while additionally performing attribute prediction. By analyzing the attribute prediction errors, we find the model captures natural relationships between attribute groups. 1 Introduction A structured account of compositional meaning has been longstanding goal for both natural language understanding and computational semantics. To this end, a number of efforts have focused on encoding semantic relationships and attributes in a semantic graph—e.g. Abstract Meaning Representation (AMR; Banarescu et al., 2013), Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013), and Semantic Dependency Parsing (SDP; Oepen et al., 2014, 2015, 2016). In these formalisms, semantic information is typically encoded discretely, using nominal category labels for nodes and edges. This categorical encoding can make such formalisms brittle when presented with non-prototypical instances, and leads to challenges in coping with changing label ontologies and new datasets (White et al., 2019). Furthermore, they are difficult to annotate, often requiring trained linguists and large annotation manuals. The Decompositional Semantics framework presents an alternative to categorical formalisms that encodes semantic information in a featurebased scheme—using continuous scales rather than categorical labels. Starting with a feature-based semantic role representation rooted in Dowty 1991’s (1991) proto-role theory (Reisinger et al., 2015; White et al., 2016), this framework has expanded to cover a wide variety of phenomena: event factuality (Rudinger et al., 2018b), genericity (Govindarajan et al., 2019), entity types (White et al., 2016), and temporal relations (Vashishtha et al., 2019). While this rich array of annotation types has been separately modeled, no system yet exists for its joint prediction, which has only recently been made feasible by the introduction of Universal Decompositional Semantics v1.0 (UDS1.0). Presented by White et al. (2019), UDS1.0 normalizes all of these annotations, and incorporates them as node- and edge-level attributes in a single semantic graph whose structure is deterministically extracted from Universal Dependencies (UD; Nivre et al., 2015) syntactic parses via the PredPatt tool (White et al., 2016; Zhang et al., 2017).1 An example graph can be seen in Fig. 1. We present the first joint UDS parser, which learns to extract both graph structures and attributes from natural language input. This parser is a sequence-to-graph transductive model which takes as input a sentence and outputs a UDS graph complete with node- and edge-level annotations. 
In contrast to the traditional semantic parsing paradigm, which shares its roots with syntactic parsing and rests on the assumption that the nodes in the graph correspond to tokens in the input—i.e. the graph is lexicalized—the parsing-as-transduction paradigm treats parsing as a sequence-to-graph problem. Rather than generating one sequence conditional on another sequence (sequence-to-sequence), we generate the nodes in a graph conditional on an input sequence, dynamically adding their edges during generation. As in sequence-to-sequence modeling, the supports of the input and output distributions—i.e. the input and output vocabularies—are not constrained to be identical. This has two benefits: first, post-hoc methods of obtaining alignments between input sequences and graphs—common especially in AMR parsing—are no longer required; and second, we are able to produce semantic graphs from arbitrary input vocabularies—allowing for future extensions to cross-lingual parsing (Zhang et al., 2018). The parsing-as-transduction paradigm thus lends itself perfectly to UDS parsing, since the UDS protocol allows non-lexicalized (as well as cross-lingual) graphs, and these graphs may have nodes with multiple parents—i.e. re-entrant nodes—which pose problems for traditional tree-based methods but are handled natively by the transductive paradigm.

1 Available at http://decomp.io.

Figure 1: The UDS graph structure. The semantic subgraph is outlined in black while the syntactic subgraph is annotated in pink. Node and edge attribute annotations are shown via annotations on argument and attribute edges.

We compare our end-to-end transductive parser against a strong pipeline system, finding that the parser slightly outperforms the pipeline while additionally learning to produce decompositional attribute scores. Our results are reflected in the UDS1.0 leaderboard at http://decomp.io/leaderboards/.

2 Related Work

Datasets Reisinger et al. (2015) introduce the Decompositional Semantics framework in the context of a corpus-based verification of Dowty's seminal proto-role theory of semantic roles. This work was substantially expanded by White et al. (2016), who annotate for semantic proto-roles (SPR), word sense, and temporal properties on top of semantic graphs extracted from English Web Treebank (EWT; Bies et al., 2012) UD parses using PredPatt (White et al., 2016; Zhang et al., 2017). White et al.'s EWT annotations are modeled by Teichert et al. (2017), who present a CRF-based multi-label classifier for proto-role labelling, and Rudinger et al. (2018a), who make use of an event-driven neural model. More recently, the annotation coverage for the same EWT data was expanded by Vashishtha et al.
(2019) who annotate and model fine-grained temporal distinctions, and Govindarajan et al. (2019), who add annotations and models for genericity—i.e. the degree of generality of events and entities in linguistic expressions. All of these efforts coalesce in White et al. (2019), which presents the first unified Decompositional Semantics-aligned dataset—Universal Decompositional Semantics v1.0 (UDS1.0)— containing all properties annotated on top of EWT parses with standardized train, validation, and testing splits and a native reader and query interface. Parsing In most work on decompositional semantics, models are tasked with learning to predict attribute values, but not the structure of the graph. Zhang et al. (2018) develop the first model for performing both graph parsing and UDS attribute prediction in a cross-lingual setting, where Chinese input sentences were transduced into UDS graphs derived from UD parses of the input’s English translation. This represents the first application of the parsing-as-transduction paradigm to a subset of UDS data as well as the introduction of a novel graph evaluation metric, S which we describe in further detail in Section 5. In contrast to the end-to-end approach presented here, Zhang et al. take a pipeline approach to parsing. Andreas et al. (2013) recast semantic parsing in a tree formalism as a sequence-to-sequence problem. Parsing-as-transduction, which extends this approach to directed acyclic graphs, has proven to be applicable in a variety of settings: Zhang et al. (2019a) use it to achieve state-of-the-art re8429 sults in AMR parsing. These results are improved upon and shown to generalize to two other semantic formalisms (UCCA and SDP) by Zhang et al. (2019b), which set new state-of-the-art benchmarks for AMR and UCCA. The former result was subsequently surpassed by Cai and Lam (2020), which applies a similar transductive approach, while the latter was surpassed by Jiang et al. (2019). Having both been subjects of SemEval tasks (May, 2016; May and Priyadarshi, 2017; Oepen et al., 2019; Hershcovich et al., 2019), there are a number of contrasting methods for both AMR and UCCA parsing. These include transition-based parsing system for AMR (Wang et al., 2018; Goodman et al., 2016; Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017) and for UCCA (Hershcovich et al., 2017). In a similar vein to Zhang et al. (2019b), Hershcovich et al. (2018a) convert multiple formalisms into a unified formalism and use multitask learning for improved UCCA parsing; however, the latter does so at a loss to performance on the other formalisms, while Zhang et al. achieve state-of-the-art results in AMR and UCCA simultaneously. UCCA has also been shown to transfer to syntactic parsing: by converting UD parse trees into a format resembling UCCA, Hershcovich et al. (2018b) are able to apply a UCCA parser to both standard UD parses as well as enhanced UD parses, which contain re-entrant nodes. 3 Data The UDS1.0 dataset is built on top of the UD-EWT data with three layers of annotations: UD parses, PredPatt graph structure, and decompositional semantic annotations on the edge and node level. In addition to specifying the syntactic head and head relation of every token in the input, UD parses include lexical features, such as word form, word lemma, and part-of-speech (POS) tag. This forms the syntactic graph, which is lexicalized (each token is tied to a node in the graph). From these pieces of information, PredPatt outputs a set of predicates and their arguments. 
Each predicate and argument is tied via an instance edge to a particular node in the syntactic graph. Because both predicates and arguments can consist of multi-word spans, there can be multiple instance edges leaving a semantic node. The semantic graph contains edges between predicates and arguments; in the case of clausal embedding, there can also be argument-argument edges. UDS1.0 asked Hiller Bush(1) name leaders the of Che. Tai. Ind. and Pak. SOMETHING to Bush(1) Figure 2: Arborescence for graphs with object control. includes “performative” speaker/author and addressee nodes, which model discourse properties of the sentence. These nodes are structural placeholders for future discourse-level annotations; as these properties have not yet been annotated, we have opted to remove them from the graphs.2 The crowdsourced decompositional annotations tied to the semantic subgraph can be divided into node-level annotations and edge-level annotations. On the node level, annotations were collected for factuality, genericity, time, and entity type. Edgelevel annotations are in the space of semantic protoroles, which are designed to provide a nuanced higher-dimensional substrate for notions of agency and patienthood. These are summarized in Table 1, where purple indicates a high attribute score, while orange indicates a low score. For further details on attribute types and data annotation, see White et al. (2019) and the references therein. Arborescence Recall that the lowest level of the UDS graph (Fig. 1) is a syntactic dependency parse. Modeling this level is out of scope for this work, as we are interesting in modeling the semantic structure and attributes. In order to train a parsing-astransduction model, an arborescence—a hierarchical tree structure which has only edge and node annotations—is required. From the full UDS graph, we construct the arborescence by: (a) Assigning each semantic node a lexical label; this label is taken from the syntactic head that the semantic node dominates. The only exception to this is in the case of embedded clauses, where an argument node dominates an embedded predicate. Here, we follow PredPatt, assigning the label “SOMETHING” to the embedded argument (c.f. Fig. 2). 2Since these placeholder nodes are currently added deterministically, recovering them is also a deterministic operation. 8430 Annotation Description Examples Factuality Factuality inferences represent how likely (or unlikely) a listener thinks a scenario is to have occurred. Jo left (3), Jo didn’t leave (-3), Jo thought that Cole had left (-1) Genericity Genericity refers to inferences about the generality of events or event participants. Ex. property: genericity-pred-particular: Amy ate oats for breakfast today (3), Amy ate oats for breakfast every day (-3) Time Temporal inferences pertain to the duration of events. Ex. property: time-dur-minutes: Tom left (-3), Tom was singing (3) Word Sense UDS decomposes word sense, allowing multiple senses to apply to a given node. Ex. property: supersense.person: Sandy led Rufus by a leash (-3), Sandy led Rufus by a leash (3) Semantic Proto-Roles SPR properties are edge-level annotations that capture fine-grained semantic relations between predicates and arguments. Ex. property: volition:, Derek broke his arm (-3), Derek broke the wishbone (3) Table 1: Type descriptions and illustrative sentences for UDS properties predicted in this work. Example ratings in parentheses, bolding indicates the salient predicate/argument/edge. See White et al. 
(2019) for further details. (b) Retaining all edges between semantic nodes as “argument” edges, duplicating nodes in cases of re-entrancy (e.g. “Bush(1)” in Fig. 2). (c) Converting the deep syntactic structure into a shallow representation, where we introduce “non-head” edges from the syntactic head (attached to a semantic node) to each node it dominates, and remove all other syntaxsemantics edges. This effectively linearizes the yield of each semantic node (see Fig. 2). 4 Model Our model is based on the transductive broadcoverage parsing model presented in Zhang et al. (2019b), which can be consulted for further details on the encoder, decoder, and pointer-generator modules. The original parser is composed of six major modules: the encoder, the decoder embedding module, the target node module, the target label module, the head module, and the relation module. In this work we introduce two new modules: the node attribute module and the edge attribute module, as well a loss function for attributes. Encoder The encoder module takes a concatenation of multiple input features: GloVe token embeddings (Pennington et al., 2014), POS tag embeddings, character CNN embeddings, and BERT (Devlin et al., 2019) contextual embeddings (meanpooled over subwords). These representations are passed through a stacked bidirectional LSTM encoder, which has the following definition: sl t = −→s l t ←−s l t  = "−−−−→ LSTM(sl−1 t , st t−1) ←−−−− LSTM(sl−1 t , st t+1) # where arrows denote the LSTM direction, t denotes the timestep, and l denotes the layer of the stack. Decoder embedding module In order to generate new semantic nodes and relationships, a method of embedding categorical semantic information is required. More formally, a semantic relation is given by a tuple ⟨ui, du i , ri, vi, dv i ⟩, where ui denotes the “head” token of index i and vi denotes the token at index i. Note that these tokens are the labels of nodes in the arborescence (see Fig 2.) du i and dv i are the indices of ui and vi, while ri is the relationship type between vi and ui. The decoder embedding module embeds these categorical variables into real space, producing a tuple of vectors ⟨ui, du i , ri, vi, dv i ⟩. For node labels ui and vi, we take the concatenation of GloVe and CharCNN features. ri, dv i and du i are randomly initialized. Target Node Module From the continuous embedding of a semantic relation ⟨ui, du i , ri, vi, dv i ⟩ we want to obtain a latent node representation zi. We initialize the hidden states of the 0th layer and the hidden states of the 0th state in each layer to h0 i = [vi; dv i ] hl 0 = [←−s l 1; −→s l n] respectively. Further, let ci be a context vector over encoder states sl 1:n, defined as a(enc) i = softmax MLP(enc)([hl i; sl 1:n])  ci = aT i sl 1:n 8431 Let hl i and zi be defined as follows: zi = MLP(relation)([hl i; ci; ri; ui; du i ]) hl i = LSTM(hl−1 i , hl i−1) where zi can be thought as a representation of node i in the graph, conditioned on previous nodes (via hl i as well as the input text via ci, the graph token (via ui and du i ) and the relation type (via ri). Using this representation zi, Zhang et al. 
(2019b) introduce an extended pointer-generator network (See et al., 2017) which computes the distribution over the next node label v_{i+1}:

[p_gen, p_enc, p_dec] = softmax( MLP^(switch)(z_i) )
a_i^(dec) = softmax( MLP^(dec)([z_{1:i}]) )
p_i^(vocab) = softmax( MLP^(vocab)(z_i) )
P(v_{i+1}) = p_gen p_i^(vocab) ⊕ p_enc a_i^(enc) ⊕ p_dec a_i^(dec)

From this last equation, we have that the generation of a new node is decomposed into three options: (1) generate a new node from a vocabulary of node labels, (2) copy a node label directly from the input sequence (lexicalization), or (3) copy a node label from a previously generated node (re-entrancy).

Parsing modules To obtain a parse from the node states h_{1:n}, a head node and relation type must be assigned to each node 1:n. In order to assign a head node, we instantiate two multilayer perceptrons (MLPs), MLP^(start) and MLP^(end), where (start) denotes the starting node of the edge and (end) denotes its target. Using these MLPs, for node i+1 we obtain

h_{i+1}^(start) = MLP^(start)(h_{i+1}^l)
h_{1:i}^(end) = MLP^(end)(h_{1:i}^l)
P(u_{i+1}) = softmax( BIAFFINE(h_{i+1}^(start), h_{1:i}^(end)) )

The next relationship r_{i+1} is computed in a similar fashion, also using two MLPs:

h_{i+1}^(rel-src) = MLP^(rel-src)(h_j^l)
h_{i+1}^(rel-tgt) = MLP^(rel-tgt)(h_{i+1}^l)
P(r_{i+1}) = softmax( BILINEAR(h_{i+1}^(rel-src), h_{i+1}^(rel-tgt)) )

where j is the index of the head assigned to the node indexed by i+1.3

3 BIAFFINE is defined in Dozat and Manning (2016). BILINEAR(x_1, x_2) = x_1 A x_2 + b, where A and b are learned parameters.

Node attribute module As noted in previous UDS projects, an important step in decompositional attribute annotation is determining whether a property applies in a given context. For example, factuality typically applies only to predicate nodes. Since all nodes (predicate and argument) are treated identically w.r.t. their semantic relations z_i, this work introduces a two-fold node attribute model, which predicts whether a property j applies to a node i via a binary mask α_i^j, as well as its value ν_i^j. This module defines α_i^j and ν_i^j as follows:

P(α_i^j) = sigmoid( MLP^(node-mask)(z_i) )
ν_i^j = MLP^(node-attr)(z_i)

Edge attribute module As in the case of node attributes, edge attributes do not apply in all cases. Therefore, a similar bifurcation strategy is pursued for edge attribute prediction: we predict a binary attribute mask β_{s,e}^j for attribute j on edge s → e, as well as an attribute value λ_{s,e}^j. These are given by:

m_{s,e}^(mask) = BILINEAR^(mask)(h_s^l, h_e^l)
m_{s,e}^(attr) = BILINEAR^(attr)(h_s^l, h_e^l)
P(β_{s,e}^j) = sigmoid( MLP^(edge-mask)(m_{s,e}^(mask)) )
λ_{s,e}^j = MLP^(edge-attr)(m_{s,e}^(attr))

Training The nodes in the graph are linearized in a pre-order traversal over the arborescence, which ensures that at prediction time we have already seen the potential antecedent of a node for target-side copying (e.g. Bush(1) in Fig. 2); this determines the order of semantic nodes in the graph. The syntactic children of these nodes are presented in the order they appear in the text. The loss functions for the node, head, and relation prediction modules are cross-entropy losses, while for the masks α and β binary cross-entropy loss is used, since each position in the mask is a separate classification decision.
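To make the node and edge attribute modules above concrete, the following is a minimal PyTorch sketch. Hidden sizes, MLP depths, and module names are our own assumptions; the actual parser follows the modules described above.

```python
import torch
import torch.nn as nn

class NodeAttributeModule(nn.Module):
    """For each of K node attributes, predict a mask (does the attribute apply?)
    and a continuous value from the node representation z_i."""

    def __init__(self, hidden_dim, num_attrs):
        super().__init__()
        self.mask_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, num_attrs))
        self.attr_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, num_attrs))

    def forward(self, z):                        # z: (num_nodes, hidden_dim)
        alpha = torch.sigmoid(self.mask_mlp(z))  # P(attribute j applies to node i)
        nu = self.attr_mlp(z)                    # predicted attribute values
        return alpha, nu


class EdgeAttributeModule(nn.Module):
    """Same bifurcation for edges: bilinear combinations of the two node states
    feed separate mask and value predictors."""

    def __init__(self, hidden_dim, num_attrs):
        super().__init__()
        self.bilinear_mask = nn.Bilinear(hidden_dim, hidden_dim, hidden_dim)
        self.bilinear_attr = nn.Bilinear(hidden_dim, hidden_dim, hidden_dim)
        self.mask_mlp = nn.Linear(hidden_dim, num_attrs)
        self.attr_mlp = nn.Linear(hidden_dim, num_attrs)

    def forward(self, h_src, h_tgt):             # (num_edges, hidden_dim) each
        beta = torch.sigmoid(self.mask_mlp(self.bilinear_mask(h_src, h_tgt)))
        lam = self.attr_mlp(self.bilinear_attr(h_src, h_tgt))
        return beta, lam
```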
The loss function used for K attributes ν^{1:K} on N nodes/edges is given by:

τ(x) = 0 if x ≤ 0; 1 otherwise

L_MSE(ν, ν*) = (1 / NK) Σ_{i=1}^{N} Σ_{j=1}^{K} c_i^j (ν_i^j − ν_i^{j*})^2

L_BCE(ν, ν*) = (1 / NK) Σ_{i=1}^{N} Σ_{j=1}^{K} [ τ(ν_i^{j*}) log τ(ν_i^j) + (1 − τ(ν_i^{j*})) log(1 − τ(ν_i^j)) ]

L(ν, ν*) = γ · (2 · L_MSE(ν, ν*) · L_BCE(ν, ν*)) / (L_MSE(ν, ν*) + L_BCE(ν, ν*))

where γ is a scaling factor, c_i^j is the annotator confidence for annotation j on token i, ν is the set of predicted attributes, and ν* is the set of true attributes. Note that the inclusion of the confidence mask c_i^j means the model only incurs loss on attributes annotated for a given node, since c_i^j = 0 when an annotation is missing (i.e. no MSE loss is incurred for attributes which do not apply to a node or edge); in the "binary" experimental setting, we replace c_i^j with τ(c_i^j), removing the weighting but still masking out loss on un-annotated nodes. Also note that in the case of edges, the form of the loss is identical, but ν is replaced by λ, and α by β. This loss encourages the predicted attribute value ν_i^j to be close to the true value ν_i^{j*} via the mean-squared error criterion, while concomitantly encouraging the predicted and reference values to share a sign via the thresholded cross-entropy criterion. Both node and edge attribute models are trained to predict attribute values independently, and parameters are shared across attributes. This is central to our analysis in §7. Following Zhang et al. (2019b), we train the structural parsing modules with coverage loss (See et al., 2017). All models were trained to convergence using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001.

5 Experiments

Pipeline Model Recall from Section 3 that the semantic graph structure in UDS graphs is deterministically generated from PredPatt, which takes as input a UD parse and outputs a semantic graph structure. This leads to a strong pipeline model for the graph structure alone: running a high-performing UD parser—the Stanford UD parser (Chen and Manning, 2014)—and passing its output through PredPatt to create a structure.4 For this baseline, the only source of error is the UD parsing model, which performs very well for English.

4 This structure is missing the core decompositional attributes but has both predicate and argument nodes. Additionally, the pipeline model fails to capture nominal heads of copular predicates (e.g. Jo is a doctor), which are not returned by PredPatt but are added to the dataset as a preprocessing step in the genericity annotation task.

Method            P      R      F1
Pipeline          84.83  75.22  79.74
Parser            83.52  77.92  80.62
Parser (binary)   84.97  78.52  81.62
Table 2: Test set S score precision, recall, and F1.

S Metric For evaluating the quality of output graph structures, Smatch (Cai and Knight, 2013), a hill-climbing approach to approximating the optimal matching between variables in two graphs, is commonly used. While Smatch can match categorical variables such as those found in meaning representations like AMR, it lacks a matching function for continuous variables such as decompositional attributes. To remedy this, Zhang et al. (2018) introduce the S metric, an extension to Smatch that allows for attribute matching.
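The combined attribute loss defined above (confidence-weighted MSE plus a thresholded cross-entropy term, merged through a harmonic-style combination scaled by γ) can be sketched as follows. This is a minimal PyTorch rendering under our own assumptions: in particular, the BCE term here is computed on the raw predicted value against the thresholded gold value, a differentiable reading of the sign-agreement criterion described above, and the epsilon is added only for numerical safety.

```python
import torch
import torch.nn.functional as F

def attribute_loss(pred, gold, confidence, gamma=1.0, eps=1e-8):
    """pred, gold, confidence: (N, K) tensors; confidence is 0 where an
    attribute is not annotated, which masks it out of both terms."""
    mask = (confidence > 0).float()
    denom = mask.sum().clamp(min=1.0)

    # Confidence-weighted mean-squared error term.
    l_mse = (confidence * (pred - gold) ** 2).sum() / denom

    # Thresholded cross-entropy term: encourage pred and gold to share a sign.
    gold_sign = (gold > 0).float()
    bce = F.binary_cross_entropy_with_logits(pred, gold_sign, reduction="none")
    l_bce = (mask * bce).sum() / denom

    # Harmonic-style combination, scaled by gamma.
    return gamma * (2 * l_mse * l_bce) / (l_mse + l_bce + eps)
```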
Using hill-climbing, we are able to match instance and attribute nodes and edges; instance nodes are matched via string match, while attribute similarity is given by 1 −  νi−νj ω 2 where ω = 6 is the maximum possible difference between attributes, which are bounded on [−3, 3].5 6 Results Table 5 shows the Pearson’s correlation coefficient (ρ) and the F1 score computed on binarized responses for each node and edge attribute under the “oracle” decoding setting, where a gold graph structure is provided to the model. An asterisk denotes that p < 0.05, where p is determined by a Student’s t-test. F1 scores are obtained by binarizing continuous attribute predictions into positive and negative, following from the original UDS motivation found in Dowty (1991), where binary proto-role features were introduced. The binarization threshold was tuned per attribute on the validation set. The baseline column in Table 5 shows the binarized F1 score for the baseline attribute model, given by predicting the median attribute value for each attribute type at each position. Pearson’s ρ is undefined for this approach, as the variance of the predicted distribution is 0. The thresholds were similarly tuned on validation data for this baseline. Table 2 shows S metric (c.f. §5) precision, recall, and F1 score as computed on full arborescences 5This function was found to produce more matches on UDS1.0 than the e−MAE function used by Zhang et al. (2018). 8433 with both semantics and syntax nodes. Our parser slightly outperforms the pipeline, with higher performance in the “binary” setting, where we exclude annotator confidence from the loss. Table 3 shows precision, recall, and F1 score on semantics nodes alone. The first parser setting (syntax) reflects a parsing model trained on full graphs, and evaluated only on the semantic subgraphs of the produced graphs. The second parser (semantics) is directly trained on semantic subgraphs, with no syntactic nodes in the training graphs. The full parser performs comparably to the pipeline, while the parser trained specifically on semantics-only graphs outperforms the pipeline. However, the mean attribute ρ of the syntactic parser (0.3433) exceeded that of the semantics-only parser (0.3151). Method P R F1 Pipeline 84.72 88.51 86.57 Parser (syntax) 89.02 83.67 86.26 Parser (syntax, binary) 89.74 86.00 87.83 Parser (semantics) 91.28 87.23 89.21 Parser (sem., binary) 91.10 84.59 87.73 Table 3: Test set S score precision, recall, and F1 on semantics nodes only, where (syntax) denotes a parser trained to predict full graphs (semantics nodes with non-head edges to syntax nodes) while (semantics) denotes model trained on semantics-only subgraphs. Table 4 gives the S metric results on full graphs predicted by the model, including attribute matching. The pipeline model is unable to perform this task because it predicts structure alone, without attributes. We see that training the parser with shared MLP and BILINEAR modules (i.e. MLP(mask) = MLP(attr) and BILINEAR(mask) = BILINEAR(attr)) for both the attribute mask and attribute value heavily degrades the performance, while removing annotator confidence increases it slightly. 7 Analysis Table 2 suggests that the structural quality of the parses obtained by the parsing model presented here is slightly superior to that of pipeline model’s parses, with Table 3 indicating that the semantic component of the graph can be parsed significantly more accurately by our model. 
Taken together with Table 5, we can conclude that the model is able to learn to jointly predict the graph structure and attributes. This is further reinforced by Table 4. Note that the numbers reported in Tables 2 and 4 are not directly comparable, as the scores in Table 4 Method P R F1 Shared 79.52 32.48 46.12 Separate 83.46 82.27 82.86 Separate (binary) 84.19 84.19 84.19 Table 4: Test set precision, recall, and F1 computed via S score with attributes (syntactic nodes included) Property Pearson’s ρ F1 F1 (model) (baseline) (model) node-level                                                                                                                                                                              factuality-factual 0.6479* 75.15 84.46 genericity                arg-abstract 0.3392* 40.04 48.05 arg-kind 0.2145* 67.61 67.54 arg-particular 0.3347* 83.10 84.62 pred-dynamic 0.2469* 72.49 71.19 pred-hypothetical 0.3442* 44.16 50.21 pred-particular 0.1887* 77.47 78.16 time                                      dur-centuries 0.1336* 10.14 12.30 dur-days 0.1802* 68.72 68.21 dur-decades 0.2383* 29.89 34.19 dur-forever 0.2524* 37.93 38.58 dur-hours 0.2227* 73.66 73.61 dur-instant 0.1761* 55.98 51.90 dur-minutes 0.3409* 86.28 87.05 dur-months 0.3204* 63.25 64.42 dur-seconds 0.2751* 65.33 64.75 dur-weeks 0.2475* 54.02 55.41 dur-years 0.4239* 65.03 66.19 wordsense                                                                                                    supersense-noun.Tops 0.4660* 7.34 40.00 supersense-noun.act 0.6007* 27.37 56.39 supersense-noun.animal 0.3773* 5.60 25.64 supersense-noun.artifact 0.5617* 23.12 52.79 supersense-noun.attribute 0.4505* 10.81 29.27 supersense-noun.body 0.4543* 1.53 42.86 supersense-noun.cognition 0.5692* 21.17 50.56 supersense-noun.communication 0.6182* 30.60 62.12 supersense-noun.event 0.4233* 5.80 33.61 supersense-noun.feeling 0.2404* 2.74 5.45 supersense-noun.food 0.6773* 7.15 67.72 supersense-noun.group 0.5650* 15.57 55.22 supersense-noun.location 0.5118* 7.81 55.64 supersense-noun.motive 0.3447* 0.62 50.00 supersense-noun.object 0.2276* 2.04 19.05 supersense-noun.person 0.6091* 15.74 61.25 supersense-noun.phenomenon 0.2955* 2.04 8.85 supersense-noun.plant 0.0358 0.21 13.33 supersense-noun.possession 0.5247* 6.67 47.62 supersense-noun.process 0.1292* 1.13 3.96 supersense-noun.quantity 0.4403* 4.92 36.11 supersense-noun.relation 0.2089* 2.34 11.94 supersense-noun.shape 0.0659* 0.31 1.55 supersense-noun.state 0.4877* 11.36 36.17 supersense-noun.substance 0.2411* 1.43 3.64 supersense-noun.time 0.5175* 10.99 51.43 edge-level                                                  protoroles                                                  awareness 0.6715* 68.20 81.99 change-of-location 0.1061* 38.98 36.90 change-of-possession 0.0452 14.93 20.00 change-of-state 0.0448 42.59 37.21 change-of-state-continuous 0.0793 31.47 27.69 existed-after 0.3910* 93.33 95.58 existed-before 0.4802* 91.60 92.31 existed-during 0.3247* 98.31 98.61 instigation 0.3820* 74.48 76.77 partitive 0.0213 31.91 34.64 sentient 0.6494* 64.67 82.81 volition 0.5501* 63.79 79.86 
was-for-benefit 0.2389* 59.87 62.11 was-used 0.1608* 86.64 89.00 macro-average 0.3433 37.20 50.66 Table 5: Pearson’s ρ, baseline F1, and model F1 for each UDS attribute given gold test-set graph structures. incorporate the matching scores between attributes. Table 3 shows that a parser trained on semantic subgraphs better recovers the subgraphs than a parser trained on full graphs whose outputs are 8434 postprocessed to remove syntactic nodes. However, the fact that the parser trained on full graphs achieves a higher Pearson’s ρ score indicates that the inclusion of syntactic nodes may provide additional information for predicting UDS attributes. In examining instances with an S score below 50, we observe two trends: the input sentences are often ungrammatical, and for 63.82% (on the validation set) the model predicts no output nodes. While the pipeline system does well on modeling semantic graph structure, it is by its definition unable to perform attribute parsing. In contrast, the results presented in Tables 4 and 5 show that the parser can jointly learn to produce semantic graphs and annotate them with attributes. Finally, we find that while weighting the loss with the confidence scores has a small benefit in the semantics-only setting, it hurts overall attribute and structure prediction performance. This may be due to the relatively small size of the UDS dataset, which makes a strategy that is effectively weakening the loss signal at training time less effective.6 Figs. 3a-3c show the correlational strength coefficient between the true and predicted attributes under a forced decode of the graph structure. It is defined over property types indices j, k with predicted attribute values νj i and true values νj∗ i as: ψ(j, k) = tanh 1 −|corr(νj −νj∗, νk −νk∗)| |corr(νj∗, νk∗)|  where corr(νj∗, νk∗) is Pearson’s correlation coefficient. Further details are given in Appendix A. ψ(i, j) reflects how well the model captures the strength of the correlations (either positive or negative) between two attribute types in the dataset: a positive value indicates that the model captures the correlation to some extent, with values closer to 1 implying better performance; a value of 0 indicates that the model does not capture the correlation at all, or that no significant interaction was present; a negative value indicates that the model makes systematic mistakes while predicting the two variables, e.g. when the model under-predicts the value of property i, it also under-predicts property j’s value. A Bonferroni-corrected non-parametric bootstrap test (1000 replicants) was used for significance testing, with failing pairs being said to not be reliably different from 0 correlation. Fig. 3a shows the model typically systematically under- or over-predicts the values for pairs 6All confidences are on [0, 1] gen-kind gen-particular gen-abstract ws-plant ws-cognition ws-artifact ws-comm. ws-person ws-animal ws-state ws-event ws-object gen-kind gen-particular gen-abstract ws-plant ws-cognition ws-artifact ws-comm. ws-person ws-animal ws-state ws-event ws-object 0.8 0.4 0.0 0.4 0.8 (a) ψ between argument-node attribute pairs. Subset of wordsenses used for readability. gen-hypo. gen-dynamic gen-particular dur-days dur-weeks factuality dur-months dur-instant dur-years gen-hypo. gen-dynamic gen-particular dur-days dur-weeks factuality dur-months dur-instant dur-years (b) ψ between predicate-node attribute pairs. Subset of time attributes used for readability. 
existed_after volition awareness sentient was_used chng_state_cont. chng_of_state partitive chng_loc. chng_poss. instigation was_for_benefit existed_before existed_during existed_after volition awareness sentient was_used chng_state_cont. chng_of_state partitive chng_loc. chng_poss. instigation was_for_benefit existed_before existed_during (c) ψ between all edge attribute pairs. Figure 3: ψ heatmaps for UDS1.0 attribute pairs of argument-node attributes, with most ψ values close to -1. However, we do see positive correlations between some of the genericity annotations, 8435 Sentences Property (A) Ours (A) (B) Ours (B) (C) Ours (C) (A) She was untrained and, awareness 3 3.04 1 3.69 5 3.68 in one botched job, killed a client. volition 2 2.92 1 3.45 5 3.44 (B) The antibody then kills the cell. instigation 5 3.08 5 3.39 5 3.37 (C) An assassin in Colombia killed sentience 5 2.99 1 3.71 5 3.70 a federal judge on a Medellin street. existed-after 5 3.57 2 3.79 5 3.78 Table 6: Comparison of gold properties from Reisinger et al. (2015) (on an ordinal scale from 1 to 5, with 3 as “neutral”) and predicted properties (mapped to [1, 5]) for sentences involving the predicate “kills”. as well as between genericity-arg-abstract, which rates how conceptually abstract an argument is, and the cognition wordsense, which applies to abstract terms such as “doubts” and “thoughts”. In Fig. 3b, we again observe several negative ψ values; however, some positive correlations can be seen between certain time properties, such as duration-days, duration-weeks, and durationmonths, as well as more strongly positive ψ’s between certain genericity annotations. The positive ψ between factuality and genericity-hypothetical indicates the model has captured the commonalities between predicates with these annotations. In contrast to the node attributes, Fig. 3c shows stronger results for edge attribute prediction, with all significant ψ’s being positive, and related attributes falling into clusters (e.g. volition, awareness, sentience, or the existed attributes) Qualitative examples Table 6 lists three sentences from Reisinger et al. (2015) along with a relevant subset of their original SPR properties and values; the scale in Reisinger et al. was ordinal from 1-5, with 1 corresponding to “very unlikely,” 5 to “very likely,” and 3 to “neutral.” Our model’s predictions for the same sentences and properties are given as well, mapped onto [1, 5]. We first note that the structural component of the model is sufficiently strong that the correct predicate-argument edges were extracted during parsing, allowing for a direct comparison between the annotations by Reisinger et al. and the parser’s predictions. We see that while for sentence (C), the model captures at least the correct direction of the protorole annotations, it overgeneralizes these results to (B), where a more nuanced analysis is required. For (A), we see that on most attributes the model captures the desired binary direction of the inferences, but that it fails on sentience. Overall, the model’s predictions are weaker than the desired output, even when the prediction is on the correct side of the midpoint, 3. This might help explain the disparity between Pearson and F1 scores in Table 5, and represents a direction for future work. Note that to obtain attributes for (A) and (B), the threshold for the masks β was dropped; ideally, this would not be required. 
8 Conclusion The scalar valued, multi-attribute nature of UDS provides for a distinct structured prediction problem as compared to other existing representations. We have demonstrated how a transductive parsing paradigm that has achieved state-of-the-art results on other representations can be adapted to UDS1.0 structures and attributes, and have provided procedures for analysis, with the fine-grained nature of UDS allowing for investigating novel correlations and aspects of meaning. While UDS structures and various attribute types have been modeled separately (Vashishtha et al., 2019; Govindarajan et al., 2019; White et al., 2016; Rudinger et al., 2018a,b; Zhang et al., 2018), this work represents the first time all of these attributes and structures have been modeled jointly, and establishes a baseline for future efforts on UDS1.0. We envision future efforts exploring the interactions between improving the underlying graphstructure prediction and ever-better correlations to human judgements on individual properties. 9 Acknowledgements This work was supported by NSF Awards #1749025 and #1763705, DARPA LORELEI and AIDA, and IARPA BETTER. We thank the anonymous reviewers for their constructive feedback. The views and conclusions expressed herein are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, IARPA, or the U.S. Government. References Omri Abend and Ari Rappoport. 2013. Universal Conceptual Cognitive Annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association 8436 for Computational Linguistics (Volume 1: Long Papers), pages 228–238, Sofia, Bulgaria. Association for Computational Linguistics. Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47–52. Miguel Ballesteros and Yaser Al-Onaizan. 2017. AMR parsing using stack-LSTMs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1269–1275, Copenhagen, Denmark. Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English web treebank. Linguistic Data Consortium, Philadelphia, PA. Deng Cai and Wai Lam. 2020. Amr parsing via graph-sequence iterative inference. arXiv preprint arXiv:2004.05572. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740–750. Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536–546. Association for Computational Linguistics. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. David Dowty. 1991. Thematic proto-roles and argument selection. Language, 67(3):547–619. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734. James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alphabound. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1167–1172. Association for Computational Linguistics. Venkata Govindarajan, Benjamin Van Durme, and Aaron Steven White. 2019. Decomposing Generalization: Models of Generic, Habitual, and Episodic Statements. Transactions of the Association for Computational Linguistics, 7:501–517. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127– 1138, Vancouver, Canada. Association for Computational Linguistics. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018a. Multitask parsing across semantic representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 373–385, Melbourne, Australia. Association for Computational Linguistics. Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018b. Universal dependency parsing with a general transition-based dag parser. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 103–112. Daniel Hershcovich, Zohar Aizenbud, Leshem Choshen, Elior Sulem, Ari Rappoport, and Omri Abend. 2019. SemEval-2019 task 1: Cross-lingual semantic parsing with UCCA. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 1–10, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Wei Jiang, Zhenghua Li, Yu Zhang, and Min Zhang. 2019. Hlt@ suda at semeval 2019 task 1: Ucca graph parsing as constituent tree parsing. arXiv preprint arXiv:1903.04153. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jonathan May. 2016. SemEval-2016 task 8: Meaning representation parsing. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1063–1073, San Diego, California. Association for Computational Linguistics. 8437 Jonathan May and Jay Priyadarshi. 2017. SemEval2017 task 9: Abstract meaning representation parsing and generation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 536–545, Vancouver, Canada. Association for Computational Linguistics. Joakim Nivre, Zeljko Agic, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Cristina Bosco, Sam Bowman, Giuseppe G. A. 
Celano, Miriam Connor, Marie-Catherine de Marneffe, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Tomaˇz Erjavec, Rich´ard Farkas, Jennifer Foster, Daniel Galbraith, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Berta Gonzales, Bruno Guillaume, Jan Hajiˇc, Dag Haug, Radu Ion, Elena Irimia, Anders Johannsen, Hiroshi Kanayama, Jenna Kanerva, Simon Krek, Veronika Laippala, Alessandro Lenci, Nikola Ljubeˇsi´c, Teresa Lynn, Christopher Manning, C˘at˘alina M˘ar˘anduc, David Mareˇcek, H´ector Mart´ınez Alonso, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Anna Missil¨a, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Shunsuke Mori, Hanna Nurmi, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Slav Petrov, Jussi Piitulainen, Barbara Plank, Martin Popel, Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ramasamy, Rudolf Rosa, Shadi Saleh, Sebastian Schuster, Wolfgang Seeker, Mojgan Seraji, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, Kiril Simov, Aaron Smith, Jan ˇStˇep´anek, Alane Suhr, Zsolt Sz´ant´o, Takaaki Tanaka, Reut Tsarfaty, Sumire Uematsu, Larraitz Uria, Viktor Varga, Veronika Vincze, Zdenˇek ˇZabokrtsk´y, Daniel Zeman, and Hanzhi Zhu. 2015. Universal Dependencies 1.2. http://universaldependencies.github.io/docs/. Stephan Oepen, Omri Abend, Jan Hajic, Daniel Hershcovich, Marco Kuhlmann, Tim O’Gorman, Nianwen Xue, Jayeol Chun, Milan Straka, and Zdenka Ureˇsov´a. 2019. Mrp 2019: Cross-framework meaning representation parsing. In Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning, pages 1–27. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Zdenka Uresova. 2016. Towards comparability of linguistic graph banks for semantic parsing. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 3991–3995. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, and Zdeˇnka Ureˇsov´a. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915–926, Denver, Colorado. Association for Computational Linguistics. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63–72, Dublin, Ireland. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic Proto-Roles. Transactions of the Association for Computational Linguistics, 3:475–488. Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018a. Neuraldavidsonian semantic proto-role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 944– 955, Brussels, Belgium. Association for Computational Linguistics. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018b. 
Neural Models of Factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 731–744, New Orleans, Louisiana. Association for Computational Linguistics. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Adam Teichert, Adam Poliak, Benjamin Van Durme, and Matthew R Gormley. 2017. Semantic proto-role labeling. In Thirty-First AAAI Conference on Artificial Intelligence. Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-Grained Temporal Relation Extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2906–2919, Florence, Italy. Association for Computational Linguistics. Yuxuan Wang, Wanxiang Che, Jiang Guo, and Ting Liu. 2018. A neural transition-based approach for semantic dependency graph parsing. In AAAI Conference on Artificial Intelligence. Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, 8438 Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713–1723, Austin, TX. Association for Computational Linguistics. Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, et al. 2019. The universal decompositional semantics dataset and decomp toolkit. arXiv preprint arXiv:1909.13851. Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-tograph transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 80–94, Florence, Italy. Association for Computational Linguistics. Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. arXiv preprint arXiv:1909.02607. Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2018. Cross-lingual decompositional semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1664–1675, Brussels, Belgium. Association for Computational Linguistics. Sheng Zhang, Rachel Rudinger, and Benjamin Van Durme. 2017. An Evaluation of PredPatt and Open IE via Stage 1 Semantic Role Labeling. In IWCS 2017 — 12th International Conference on Computational Semantics — Short papers. 8439 A Derivation of ψ The metric used in visualizations Fig. 3a-3c is given by: ψ(j, k) = tanh 1 −|corr(νj −νj∗, νk −νk∗)| |corr(νj∗, νk∗)|  where corr(νj −νj∗, νk −νk∗) and corr(νj∗, νk∗) are defined as follows: νj = 1 N N X i=1 νj i Oj = 1 N N X i=1 (νj i −νj∗ i )2 corr(νj −νj∗, νk −νk∗) = 1 N PN i=1 (νj∗ i −νj i )(νk∗ i −νk i )  p Oj √Ok Ej = 1 N N X i=1 (νj∗ i −νj∗ i )2 corr(νj∗, νk∗) = 1 N PN i=1 (νj∗ i −νj∗ i )(νk∗ i −νk∗ i )  p Ej √Ek Note that by this definition, ψ is effectively a ratio of Pearson correlations, where the denominator is exactly the Pearson correlation between νj∗and νk∗.
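The ψ statistic derived above is straightforward to compute from arrays of predicted and gold attribute values. The sketch below is a minimal NumPy rendering under our own assumptions: it uses the standard mean-centered Pearson correlation, reads the duplicated terms in the extracted formulas as means, and adds a small epsilon only to guard against division by zero.

```python
import numpy as np

def pearson(a, b, eps=1e-12):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))

def psi(pred_j, gold_j, pred_k, gold_k, eps=1e-12):
    """psi(j, k) = tanh(1 - |corr(err_j, err_k)| / |corr(gold_j, gold_k)|).
    Values near 1: the model captures the strength of the j-k correlation;
    values near 0: the correlation is not captured (or not present);
    negative values: systematic joint under- or over-prediction."""
    err_corr = pearson(pred_j - gold_j, pred_k - gold_k)
    gold_corr = pearson(gold_j, gold_k)
    return np.tanh(1.0 - abs(err_corr) / (abs(gold_corr) + eps))

# Example with hypothetical attribute vectors over the same nodes.
rng = np.random.default_rng(0)
gold_j = rng.normal(size=100)
gold_k = 0.8 * gold_j + 0.3 * rng.normal(size=100)   # correlated gold attributes
pred_j = gold_j + 0.5 * rng.normal(size=100)
pred_k = gold_k + 0.5 * rng.normal(size=100)
print(psi(pred_j, gold_j, pred_k, gold_k))           # close to tanh(1) when errors are uncorrelated
```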
2020
746
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8440 Unsupervised Cross-lingual Representation Learning at Scale Alexis Conneau∗Kartikay Khandelwal∗ Naman Goyal Vishrav Chaudhary Guillaume Wenzek Francisco Guzm´an Edouard Grave Myle Ott Luke Zettlemoyer Veselin Stoyanov Facebook AI Abstract This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of crosslingual transfer tasks. We train a Transformerbased masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing perlanguage performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.1 1 Introduction The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully studying the effects of training unsupervised crosslingual representations at a very large scale. We present XLM-R a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering. ∗Equal contribution. Correspondence to {aconneau,kartikayk}@fb.com 1https://github.com/facebookresearch/(fairseq-py,pytext,xlm) Multilingual masked language models (MLM) like mBERT (Devlin et al., 2018) and XLM (Lample and Conneau, 2019) have pushed the stateof-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models (Vaswani et al., 2017) on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference (Bowman et al., 2015; Williams et al., 2017; Conneau et al., 2018), question answering (Rajpurkar et al., 2016; Lewis et al., 2019), and named entity recognition (Pires et al., 2019; Wu and Dredze, 2019). However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts (Liu et al., 2019). We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. 
We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 23% accuracy on low-resource languages. It outperforms the previous state of the art by 5.1% average accuracy on XNLI, 2.42% average F1-score 8441 on Named Entity Recognition, and 9.1% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa (Liu et al., 2019). These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding. 2 Related Work From pretrained word embeddings (Mikolov et al., 2013b; Pennington et al., 2014) to pretrained contextualized representations (Peters et al., 2018; Schuster et al., 2019) and transformer based language models (Radford et al., 2018; Devlin et al., 2018), unsupervised representation learning has significantly improved the state of the art in natural language understanding. Parallel work on cross-lingual understanding (Mikolov et al., 2013a; Schuster et al., 2019; Lample and Conneau, 2019) extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages. Most recently, Devlin et al. (2018) and Lample and Conneau (2019) introduced mBERT and XLM - masked language models trained on multiple languages, without any cross-lingual supervision. Lample and Conneau (2019) propose translation language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark (Conneau et al., 2018). They further show strong improvements on unsupervised machine translation and pretraining for sequence generation. Wu et al. (2019) shows that monolingual BERT representations are similar across languages, explaining in part the natural emergence of multilinguality in bottleneck architectures. Separately, Pires et al. (2019) demonstrated the effectiveness of multilingual models like mBERT on sequence labeling tasks. Huang et al. (2019) showed gains over XLM using cross-lingual multi-task learning, and Singh et al. (2019) demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach. The benefits of scaling language model pretraining by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, Jozefowicz et al. (2016) show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT (Radford et al., 2018) also highlights the importance of scaling the amount of data and RoBERTa (Liu et al., 2019) shows that training BERT longer on more data leads to significant boost in performance. 
Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM lead to much better performance. We train on cleaned CommonCrawl data (Wenzek et al., 2019), which increases the amount of data for low-resource languages by two orders of magnitude on average. Similar data has also been shown to be effective for learning high-quality word embeddings in multiple languages (Grave et al., 2018).
Several efforts have trained massively multilingual machine translation models from large parallel corpora. They uncover the high- and low-resource trade-off and the problem of capacity dilution (Johnson et al., 2017; Tan et al., 2019). The work most similar to ours is Arivazhagan et al. (2019), which trains a single model in 103 languages on over 25 billion parallel sentences. Siddhant et al. (2019) further analyze the representations obtained by the encoder of a massively multilingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsupervised learning of cross-lingual representations and their transfer to discriminative tasks.

3 Model and Data
In this section, we present the training objective, languages, and data we use. We follow the XLM approach (Lample and Conneau, 2019) as closely as possible, only introducing changes that improve performance at scale.
Masked Language Models. We use a Transformer model (Vaswani et al., 2017) trained with the multilingual MLM objective (Devlin et al., 2018; Lample and Conneau, 2019) using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tokenization directly on raw text data using SentencePiece (Kudo and Richardson, 2018) with a unigram language model (Kudo, 2018). We sample batches from different languages using the same sampling distribution as Lample and Conneau (2019), but with α = 0.3. Unlike Lample and Conneau (2019), we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full softmax and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERT-Base architecture with a vocabulary of 150K tokens. Appendix B goes into more details about the architecture of the different models referenced in this paper.
[Figure 1: Amount of data in GiB (log scale) for the 88 languages that appear in both the Wiki-100 corpus used for mBERT and XLM-100, and the CC-100 corpus used for XLM-R. CC-100 increases the amount of data by several orders of magnitude, in particular for low-resource languages.]
Scaling to a hundred languages. XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix A. Figure 1 specifies the ISO codes of 88 languages that are shared across XLM-R and XLM-100, the model from Lample and Conneau (2019) trained on Wikipedia text in 100 languages.
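The tokenization step described above can be illustrated with a short sketch using the open-source sentencepiece package. This is not the authors' released code; the corpus path, character coverage, and example sentence are assumptions for illustration, while the unigram model type and the 250k vocabulary follow the description in the text.

```python
# Minimal sketch (assumed setup, not the paper's code): train a unigram-LM
# SentencePiece model on raw multilingual text and apply it directly, as done
# for XLM-R's shared subword vocabulary.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="multilingual_corpus.txt",   # raw, untokenized text from all languages (placeholder path)
    model_prefix="xlmr_spm",
    vocab_size=250000,                 # shared vocabulary across the 100 languages
    model_type="unigram",              # unigram language model (Kudo, 2018)
    character_coverage=0.9995,         # assumed value; keeps rare characters from non-Latin scripts
)

sp = spm.SentencePieceProcessor(model_file="xlmr_spm.model")
print(sp.encode("Das ist ein Beispiel.", out_type=str))   # subword pieces
print(sp.encode("Das ist ein Beispiel.", out_type=int))   # vocabulary ids
```

Because the same model is applied to raw text in every language, no language-specific pre-tokenizers are needed, which is the simplification the paper argues for later in Section 5.1.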
Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 languages for which we have classification and sequence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. We chose this set as it covers a suitable range of language families and includes low-resource languages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low-resource languages, we refer to the average of English and French results, and the average of Swahili and Urdu results, respectively.
Scaling the Amount of Training Data. Following Wenzek et al. (2019) (https://github.com/facebookresearch/cc_net), we build a clean CommonCrawl corpus in 100 languages. We use an internal language identification model in combination with the one from fastText (Joulin et al., 2017). We train language models in each language and use them to filter documents as described in Wenzek et al. (2019). We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, especially for low-resource languages like Burmese and Swahili.
Figure 1 shows the difference in size between the Wikipedia corpus used by mBERT and XLM-100, and the CommonCrawl corpus we use. As we show in Section 5.3, monolingual Wikipedia corpora are too small to enable unsupervised representation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model.

4 Evaluation
We consider four evaluation benchmarks. For cross-lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models.
Cross-lingual Natural Language Inference (XNLI). The XNLI dataset comes with ground-truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other languages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used; (ii) translate-train (per-language): the English training set is machine-translated to each language and we fine-tune a multilingual model on each training set; and (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project.
Named Entity Recognition. For NER, we consider the CoNLL-2002 (Sang, 2002) and CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) datasets in English, Dutch, Spanish and German. We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from Lample et al. (2016) and Akbik et al. (2018).
Cross-lingual Question Answering. We use the MLQA benchmark from Lewis et al. (2019), which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English.
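As a concrete illustration of the cross-lingual transfer protocol used in these evaluations (fine-tune on English, apply unchanged to other languages), the sketch below uses the Hugging Face transformers library, which hosts public XLM-R checkpoints. This is not the paper's original codebase; the checkpoint name, label set, and Swahili example are assumptions, and in a real experiment the classifier would first be fine-tuned on the English XNLI/MultiNLI training set.

```python
# Hedged sketch of zero-shot cross-lingual evaluation: a classification head
# fine-tuned on English NLI is applied directly to a non-English example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3   # entailment / neutral / contradiction
)
model.eval()  # assumes the head has already been fine-tuned on English NLI data

# A Swahili premise/hypothesis pair is scored by the English-fine-tuned model.
premise = "Mvua inanyesha sana leo."
hypothesis = "Hali ya hewa ni kavu."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))   # predicted label distribution
```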
GLUE Benchmark. Finally, we evaluate the English performance of our model on the GLUE benchmark (Wang et al., 2018), which gathers multiple classification tasks, such as MNLI (Williams et al., 2017), SST-2 (Socher et al., 2013), or QNLI (Rajpurkar et al., 2018). We use BERT-Large and RoBERTa as baselines.

5 Analysis and Results
In this section, we perform a comprehensive analysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages.

5.1 Improving and Understanding Multilingual Masked Language Models
Much of the work done on understanding the cross-lingual effectiveness of mBERT or XLM (Pires et al., 2019; Wu and Dredze, 2019; Lewis et al., 2019) has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large-scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages.
Transfer-dilution Trade-off and Curse of Multilinguality. Model capacity (i.e., the number of parameters in the model) is constrained due to practical considerations such as memory and speed during training and inference. For a fixed-size model, the per-language capacity decreases as we increase the number of languages. While low-resource language performance can be improved by adding similar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution (Arivazhagan et al., 2019). Positive transfer and capacity dilution have to be traded off against each other.
We illustrate this trade-off in Figure 2, which shows XNLI performance versus the number of languages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer, which improves performance, especially on low-resource languages. Beyond this point, the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl corpus.
The issue is even more prominent when the capacity of the model is small. To show this, we pretrain models on Wikipedia data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hidden size from 768 to 960 to 1152. In Figure 4, we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section 3 that we used a fixed vocabulary size of 150K for all models).
High-resource vs Low-resource Trade-off. The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword vocabulary, and the rate at which we sample training examples from each language.
[Figure 2: The transfer-interference trade-off: low-resource languages benefit from scaling to more languages, until dilution (interference) kicks in and degrades overall performance.]
[Figure 3: Wikipedia versus CommonCrawl: an XLM-7 obtains significantly better performance when trained on CC, in particular on low-resource languages.]
[Figure 4: Adding more capacity to the model alleviates the curse of multilinguality, but it remains an issue for models of moderate size.]
[Figure 5: On the high-resource versus low-resource trade-off: impact of batch language sampling for XLM-100.]
[Figure 6: On the impact of vocabulary size at fixed capacity and with increasing capacity for XLM-100.]
[Figure 7: On the impact of large-scale training, and preprocessing simplification from BPE with tokenization to SPM on raw text data.]
We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocabulary). Specifically, we investigate the impact of varying the α parameter which controls the exponential smoothing of the language sampling rate. Similar to Lample and Conneau (2019), we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of α see batches of high-resource languages more often. Figure 5 shows that the higher the value of α, the better the performance on high-resource languages, and vice versa. When considering overall performance, we found 0.3 to be an optimal value for α, and use this for XLM-R.
Importance of Capacity and Vocabulary. In previous sections and in Figure 4, we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure 6 shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer even though this reduces the size of the Transformer. For simplicity and given the softmax computational constraints, we use a vocabulary of 250k for XLM-R. We further illustrate the importance of this parameter by training three models with the same transformer architecture (BERT-Base) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocab size from 128k to 512k.
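The exponential smoothing of the language sampling rate described above can be written in a few lines. The sketch below is illustrative only: it implements the standard smoothed multinomial (probability proportional to corpus size raised to the power α, with α = 0.3 as used for XLM-R), and the corpus sizes are made-up placeholders rather than real CC-100 statistics.

```python
# Sketch of the exponentially smoothed language-sampling distribution:
# low-resource languages are up-sampled relative to their raw share of the data.
import numpy as np

def language_sampling_probs(num_sentences, alpha=0.3):
    """Return smoothed sampling probabilities from per-language corpus sizes."""
    p = np.asarray(num_sentences, dtype=np.float64)
    p = p / p.sum()          # raw proportion of each language
    q = p ** alpha           # exponential smoothing with parameter alpha
    return q / q.sum()       # renormalized sampling distribution

sizes = {"en": 5_000_000, "fr": 1_200_000, "sw": 40_000, "ur": 25_000}  # placeholder counts
probs = language_sampling_probs(list(sizes.values()), alpha=0.3)
total = sum(sizes.values())
for lang, raw, prob in zip(sizes, sizes.values(), probs):
    print(f"{lang}: raw share {raw / total:.4f} -> sampled {prob:.4f}")
```

With α close to 1 the distribution approaches the raw corpus proportions (favoring high-resource languages), while smaller α flattens it toward uniform, which is exactly the trade-off shown in Figure 5.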
Larger-scale Datasets and Training. As shown in Figure 1, the CommonCrawl corpus that we collected has significantly more monolingual data than the previously used Wikipedia corpora. Figure 3 shows that for the same BERT-Base architecture, all models trained on CommonCrawl obtain significantly better performance.
Apart from scaling the training data, Liu et al. (2019) also showed the benefits of training MLMs longer. In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure 7) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in Lample and Conneau (2019) to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observation with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of Lample and Conneau (2019) from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided not to use the supervised TLM objective for training our models.
Simplifying Multilingual Tokenization with SentencePiece. The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a SentencePiece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in performance for models trained with SPM when compared to models trained with language-specific preprocessing and byte-pair encoding (see Figure 7), and hence use SPM for XLM-R.

5.2 Cross-lingual Understanding Results
Based on these results, we adapt the setting of Lample and Conneau (2019) and use a large Transformer model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multilingual MLM loss and train our XLM-R model for 1.5 million updates on five hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from CommonCrawl in 100 languages and sample languages with α = 0.3. In this section, we show that it outperforms all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark.
XNLI. Table 1 shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters significantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language (N models), or on the joint dev set of all the languages (single model). We observe a 0.6% decrease in overall accuracy when we go from N models to a single model (from 71.3 to 70.7). We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language.
XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.9% accuracy, outperforming the XLM-100 and mBERT open-source models by 10.2% and 14.6% average accuracy.
On the Swahili and Urdu lowresource languages, XLM-R outperforms XLM-100 by 15.7% and 11.4%, and mBERT by 23.5% and 15.8%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder (Huang et al., 2019) and XLM (MLM+TLM), which handle only 15 languages, by 5.5% and 5.8% average accuracy respectively. Using the multilingual training of translate-train-all, XLM-R further improves performance and reaches 83.6% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 5.1%. Multilingual training is similar to practical applications where training sets are available in various languages for the same task. In the case of XNLI, datasets have been translated, and translate-trainall can be seen as some form of cross-lingual data augmentation (Singh et al., 2019), similar to backtranslation (Xie et al., 2019). Named Entity Recognition. In Table 2, we report results of XLM-R and mBERT on CoNLL2002 and CoNLL-2003. We consider the LSTM + CRF approach from Lample et al. (2016) and the Flair model from Akbik et al. (2018) as baselines. We evaluate the performance of the model 8446 Model D #M #lg en fr es de el bg ru tr ar vi th zh hi sw ur Avg Fine-tune multilingual model on English training set (Cross-lingual Transfer) Lample and Conneau (2019) Wiki+MT N 15 85.0 78.7 78.9 77.8 76.6 77.4 75.3 72.5 73.1 76.1 73.2 76.5 69.6 68.4 67.3 75.1 Huang et al. (2019) Wiki+MT N 15 85.1 79.0 79.4 77.8 77.2 77.2 76.3 72.8 73.5 76.4 73.6 76.2 69.4 69.7 66.7 75.4 Devlin et al. (2018) Wiki N 102 82.1 73.8 74.3 71.1 66.4 68.9 69.0 61.6 64.9 69.5 55.8 69.3 60.0 50.4 58.0 66.3 Lample and Conneau (2019) Wiki N 100 83.7 76.2 76.6 73.7 72.4 73.0 72.1 68.1 68.4 72.0 68.2 71.5 64.5 58.0 62.4 71.3 Lample and Conneau (2019) Wiki 1 100 83.2 76.7 77.7 74.0 72.7 74.1 72.7 68.7 68.6 72.9 68.9 72.5 65.6 58.2 62.4 70.7 XLM-RBase CC 1 100 85.8 79.7 80.7 78.7 77.5 79.6 78.1 74.2 73.8 76.5 74.6 76.7 72.4 66.5 68.3 76.2 XLM-R CC 1 100 89.1 84.1 85.1 83.9 82.9 84.0 81.2 79.6 79.8 80.8 78.1 80.2 76.9 73.9 73.8 80.9 Translate everything to English and use English-only model (TRANSLATE-TEST) BERT-en Wiki 1 1 88.8 81.4 82.3 80.1 80.3 80.9 76.2 76.0 75.4 72.0 71.9 75.6 70.0 65.8 65.8 76.2 RoBERTa Wiki+CC 1 1 91.3 82.9 84.3 81.2 81.7 83.1 78.3 76.8 76.6 74.2 74.1 77.5 70.9 66.7 66.8 77.8 Fine-tune multilingual model on each training set (TRANSLATE-TRAIN) Lample and Conneau (2019) Wiki N 100 82.9 77.6 77.9 77.9 77.1 75.7 75.5 72.6 71.2 75.8 73.1 76.2 70.4 66.5 62.4 74.2 Fine-tune multilingual model on all training sets (TRANSLATE-TRAIN-ALL) Lample and Conneau (2019)† Wiki+MT 1 15 85.0 80.8 81.3 80.3 79.1 80.9 78.3 75.6 77.6 78.5 76.0 79.5 72.9 72.8 68.5 77.8 Huang et al. (2019) Wiki+MT 1 15 85.6 81.1 82.3 80.9 79.5 81.4 79.7 76.8 78.2 77.9 77.1 80.5 73.4 73.8 69.6 78.5 Lample and Conneau (2019) Wiki 1 100 84.5 80.1 81.3 79.3 78.6 79.4 77.5 75.2 75.6 78.3 75.7 78.3 72.1 69.2 67.7 76.9 XLM-RBase CC 1 100 85.4 81.4 82.2 80.3 80.4 81.3 79.7 78.6 77.3 79.7 77.9 80.2 76.1 73.1 73.0 79.1 XLM-R CC 1 100 89.1 85.1 86.6 85.7 85.3 85.9 83.5 83.2 83.1 83.7 81.5 83.7 81.6 78.0 78.1 83.6 Table 1: Results on cross-lingual classification. We report the accuracy on each of the 15 XNLI languages and the average accuracy. We specify the dataset D used for pretraining, the number of models #M the approach requires and the number of languages #lg the model handles. Our XLM-R results are averaged over five different seeds. 
We show that using the translate-train-all approach which leverages training sets from multiple languages, XLM-R obtains a new state of the art on XNLI of 83.6% average accuracy. Results with † are from Huang et al. (2019). Model train #M en nl es de Avg Lample et al. (2016) each N 90.74 81.74 85.75 78.76 84.25 Akbik et al. (2018) each N 93.18 90.44 88.27 mBERT† each N 91.97 90.94 87.38 82.82 88.28 en 1 91.97 77.57 74.96 69.56 78.52 XLM-RBase each N 92.25 90.39 87.99 84.60 88.81 en 1 92.25 78.08 76.53 69.60 79.11 all 1 91.08 89.09 87.28 83.17 87.66 XLM-R each N 92.92 92.53 89.72 85.81 90.24 en 1 92.92 80.80 78.64 71.40 80.94 all 1 92.00 91.60 89.52 84.60 89.43 Table 2: Results on named entity recognition on CoNLL-2002 and CoNLL-2003 (F1 score). Results with † are from Wu and Dredze (2019). Note that mBERT and XLM-R do not use a linear-chain CRF, as opposed to Akbik et al. (2018) and Lample et al. (2016). on each of the target languages in three different settings: (i) train on English data only (en) (ii) train on data in target language (each) (iii) train on data in all languages (all). Results of mBERT are reported from Wu and Dredze (2019). Note that we do not use a linear-chain CRF on top of XLM-R and mBERT representations, which gives an advantage to Akbik et al. (2018). Without the CRF, our XLM-R model still performs on par with the state of the art, outperforming Akbik et al. (2018) on Dutch by 2.09 points. On this task, XLM-R also outperforms mBERT by 2.42 F1 on average for cross-lingual transfer, and 1.86 F1 when trained on each language. Training on all languages leads to an average F1 score of 89.43%, outperforming cross-lingual transfer approach by 8.49%. Question Answering. We also obtain new state of the art results on the MLQA cross-lingual question answering benchmark, introduced by Lewis et al. (2019). We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table 3. XLM-R obtains F1 and accuracy scores of 70.7% and 52.7% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 13.0% F1-score and 11.1% accuracy. It even outperforms BERT-Large on English, confirming its strong monolingual performance. 5.3 Multilingual versus Monolingual In this section, we present results of multilingual XLM models against monolingual BERT models. GLUE: XLM-R versus RoBERTa. Our goal is to obtain a multilingual model with strong performance on both, cross-lingual understanding tasks as well as natural language understanding tasks for each language. To that end, we evaluate XLMR on the GLUE benchmark. We show in Table 4, that XLM-R obtains better average dev performance than BERTLarge by 1.6% and reaches performance on par with XLNetLarge. The RoBERTa model outperforms XLM-R by only 1.0% on average. 
We believe future work can reduce this gap even further by alleviating the curse of multilinguality and 8447 Model train #lgs en es de ar hi vi zh Avg BERT-Large† en 1 80.2 / 67.4 mBERT† en 102 77.7 / 65.2 64.3 / 46.6 57.9 / 44.3 45.7 / 29.8 43.8 / 29.7 57.1 / 38.6 57.5 / 37.3 57.7 / 41.6 XLM-15† en 15 74.9 / 62.4 68.0 / 49.8 62.2 / 47.6 54.8 / 36.3 48.8 / 27.3 61.4 / 41.8 61.1 / 39.6 61.6 / 43.5 XLM-RBase en 100 77.1 / 64.6 67.4 / 49.6 60.9 / 46.7 54.9 / 36.6 59.4 / 42.9 64.5 / 44.7 61.8 / 39.3 63.7 / 46.3 XLM-R en 100 80.6 / 67.8 74.1 / 56.0 68.5 / 53.6 63.1 / 43.5 69.2 / 51.6 71.3 / 50.9 68.0 / 45.4 70.7 / 52.7 Table 3: Results on MLQA question answering We report the F1 and EM (exact match) scores for zero-shot classification where models are fine-tuned on the English Squad dataset and evaluated on the 7 languages of MLQA. Results with † are taken from the original MLQA paper Lewis et al. (2019). vocabulary dilution. These results demonstrate the possibility of learning one model for many languages while maintaining strong performance on per-language downstream tasks. Model #lgs MNLI-m/mm QNLI QQP SST MRPC STS-B Avg BERTLarge† 1 86.6/92.3 91.3 93.2 88.0 90.0 90.2 XLNetLarge† 1 89.8/93.9 91.8 95.6 89.2 91.8 92.0 RoBERTa† 1 90.2/90.2 94.7 92.2 96.4 90.9 92.4 92.8 XLM-R 100 88.9/89.0 93.8 92.3 95.0 89.5 91.2 91.8 Table 4: GLUE dev results. Results with † are from Liu et al. (2019). We compare the performance of XLMR to BERTLarge, XLNet and RoBERTa on the English GLUE benchmark. XNLI: XLM versus BERT. A recurrent criticism against multilingual models is that they obtain worse performance than their monolingual counterparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehensive study to assess this claim on the XNLI benchmark. We extend our comparison between multilingual XLM models and monolingual BERT models on 7 languages and compare performance in Table 5. We train 14 monolingual BERT models on Wikipedia and CommonCrawl (capped at 60 GiB), and two XLM-7 models. We increase the vocabulary size of the multilingual model for a better comparison. We found that multilingual models can outperform their monolingual BERT counterparts. Specifically, in Table 5, we show that for cross-lingual transfer, monolingual baselines outperform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-trainall) and leveraging training sets coming from multiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance. Model D #vocab en fr de ru zh sw ur Avg Monolingual baselines BERT Wiki 40k 84.5 78.6 80.0 75.5 77.7 60.1 57.3 73.4 CC 40k 86.7 81.2 81.2 78.2 79.5 70.8 65.1 77.5 Multilingual models (cross-lingual transfer) XLM-7 Wiki 150k 82.3 76.8 74.7 72.5 73.1 60.8 62.3 71.8 CC 150k 85.7 78.6 79.5 76.4 74.8 71.2 66.9 76.2 Multilingual models (translate-train-all) XLM-7 Wiki 150k 84.6 80.1 80.2 75.7 78 68.7 66.7 76.3 CC 150k 87.2 82.5 82.9 79.7 80.4 75.7 71.5 80.0 Table 5: Multilingual versus monolingual models (BERT-BASE). 
We compare the performance of monolingual models (BERT) versus multilingual models (XLM) on seven languages, using a BERT-BASE architecture. We choose a vocabulary size of 40k and 150k for monolingual and multilingual models. 5.4 Representation Learning for Low-resource Languages We observed in Table 5 that pretraining on Wikipedia for Swahili and Urdu performed similarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heavily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifically, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improvement on Swahili and Urdu respectively. 6 Conclusion In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages. We show that it provides strong gains over previous multilingual 8448 models like mBERT and XLM on classification, sequence labeling and question answering. We exposed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilingual models over monolingual models, and show strong improvements on low-resource languages. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING, pages 1638–1649. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In EMNLP. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In LREC. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. ACL. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. TACL, 5:339–351. Armand Joulin, Edouard Grave, and Piotr Bojanowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. EACL 2017, page 427. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. Taku Kudo. 2018. 
Subword regularization: Improving neural network translation models with multiple subword candidates. In ACL, pages 66–75. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. EMNLP. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL, pages 260–270, San Diego, California. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. NeurIPS. Patrick Lewis, Barlas O˘guz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. NAACL. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In ACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openaiassets/research-covers/languageunsupervised/language understanding paper.pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. 8449 Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Erik F Sang. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. CoNLL. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. NAACL. Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, and Karthik Raman. 2019. Evaluating the crosslingual effectiveness of massively multilingual neural machine translation. AAAI. Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2019. Xlda: Cross-lingual data augmentation for natural language inference and question answering. arXiv preprint arXiv:1905.11471. 
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and TieYan Liu. 2019. Multilingual neural machine translation with knowledge distillation. ICLR. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: languageindependent named entity recognition. In CoNLL, pages 142–147. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzman, Armand Joulin, and Edouard Grave. 2019. Ccnet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. Proceedings of the 2nd Workshop on Evaluating VectorSpace Representations for NLP. Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Emerging cross-lingual structure in pretrained language models. ACL. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. EMNLP. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. 2019. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. 8450 Appendix A Languages and statistics for CC-100 used by XLM-R In this section we present the list of languages in the CC-100 corpus we created for training XLM-R. We also report statistics such as the number of tokens and the size of each monolingual corpus. 
ISO code Language Tokens (M) Size (GiB) ISO code Language Tokens (M) Size (GiB) af Afrikaans 242 1.3 lo Lao 17 0.6 am Amharic 68 0.8 lt Lithuanian 1835 13.7 ar Arabic 2869 28.0 lv Latvian 1198 8.8 as Assamese 5 0.1 mg Malagasy 25 0.2 az Azerbaijani 783 6.5 mk Macedonian 449 4.8 be Belarusian 362 4.3 ml Malayalam 313 7.6 bg Bulgarian 5487 57.5 mn Mongolian 248 3.0 bn Bengali 525 8.4 mr Marathi 175 2.8 Bengali Romanized 77 0.5 ms Malay 1318 8.5 br Breton 16 0.1 my Burmese 15 0.4 bs Bosnian 14 0.1 my Burmese 56 1.6 ca Catalan 1752 10.1 ne Nepali 237 3.8 cs Czech 2498 16.3 nl Dutch 5025 29.3 cy Welsh 141 0.8 no Norwegian 8494 49.0 da Danish 7823 45.6 om Oromo 8 0.1 de German 10297 66.6 or Oriya 36 0.6 el Greek 4285 46.9 pa Punjabi 68 0.8 en English 55608 300.8 pl Polish 6490 44.6 eo Esperanto 157 0.9 ps Pashto 96 0.7 es Spanish 9374 53.3 pt Portuguese 8405 49.1 et Estonian 843 6.1 ro Romanian 10354 61.4 eu Basque 270 2.0 ru Russian 23408 278.0 fa Persian 13259 111.6 sa Sanskrit 17 0.3 fi Finnish 6730 54.3 sd Sindhi 50 0.4 fr French 9780 56.8 si Sinhala 243 3.6 fy Western Frisian 29 0.2 sk Slovak 3525 23.2 ga Irish 86 0.5 sl Slovenian 1669 10.3 gd Scottish Gaelic 21 0.1 so Somali 62 0.4 gl Galician 495 2.9 sq Albanian 918 5.4 gu Gujarati 140 1.9 sr Serbian 843 9.1 ha Hausa 56 0.3 su Sundanese 10 0.1 he Hebrew 3399 31.6 sv Swedish 77.8 12.1 hi Hindi 1715 20.2 sw Swahili 275 1.6 Hindi Romanized 88 0.5 ta Tamil 595 12.2 hr Croatian 3297 20.5 Tamil Romanized 36 0.3 hu Hungarian 7807 58.4 te Telugu 249 4.7 hy Armenian 421 5.5 Telugu Romanized 39 0.3 id Indonesian 22704 148.3 th Thai 1834 71.7 is Icelandic 505 3.2 tl Filipino 556 3.1 it Italian 4983 30.2 tr Turkish 2736 20.9 ja Japanese 530 69.3 ug Uyghur 27 0.4 jv Javanese 24 0.2 uk Ukrainian 6.5 84.6 ka Georgian 469 9.1 ur Urdu 730 5.7 kk Kazakh 476 6.4 Urdu Romanized 85 0.5 km Khmer 36 1.5 uz Uzbek 91 0.7 kn Kannada 169 3.3 vi Vietnamese 24757 137.3 ko Korean 5644 54.2 xh Xhosa 13 0.1 ku Kurdish (Kurmanji) 66 0.4 yi Yiddish 34 0.3 ky Kyrgyz 94 1.2 zh Chinese (Simplified) 259 46.9 la Latin 390 2.5 zh Chinese (Traditional) 176 16.6 Table 6: Languages and statistics of the CC-100 corpus. We report the list of 100 languages and include the number of tokens (Millions) and the size of the data (in GiB) for each language. Note that we also include romanized variants of some non latin languages such as Bengali, Hindi, Tamil, Telugu and Urdu. 8451 B Model Architectures and Sizes As we showed in section 5, capacity is an important parameter for learning strong cross-lingual representations. In the table below, we list multiple monolingual and multilingual models used by the research community and summarize their architectures and total number of parameters. Model #lgs tokenization L Hm Hff A V #params BERTBase 1 WordPiece 12 768 3072 12 30k 110M BERTLarge 1 WordPiece 24 1024 4096 16 30k 335M mBERT 104 WordPiece 12 768 3072 12 110k 172M RoBERTaBase 1 bBPE 12 768 3072 8 50k 125M RoBERTa 1 bBPE 24 1024 4096 16 50k 355M XLM-15 15 BPE 12 1024 4096 8 95k 250M XLM-17 17 BPE 16 1280 5120 16 200k 570M XLM-100 100 BPE 16 1280 5120 16 200k 570M Unicoder 15 BPE 12 1024 4096 8 95k 250M XLM-R Base 100 SPM 12 768 3072 12 250k 270M XLM-R 100 SPM 24 1024 4096 16 250k 550M GPT2 1 bBPE 48 1600 6400 32 50k 1.5B wide-mmNMT 103 SPM 12 2048 16384 32 64k 3B deep-mmNMT 103 SPM 24 1024 16384 32 64k 3B T5-3B 1 WordPiece 24 1024 16384 32 32k 3B T5-11B 1 WordPiece 24 1024 65536 32 32k 11B Table 7: Details on model sizes. 
We show the tokenization used by each Transformer model, the number of layers L, the number of hidden states of the model Hm, the dimension of the feed-forward layer Hff, the number of attention heads A, the size of the vocabulary V, and the total number of parameters #params. For Transformer encoders, the number of parameters can be approximated by $4LH_m^2 + 2LH_mH_{ff} + VH_m$. GPT2 numbers are from Radford et al. (2019), mm-NMT models are from the work of Arivazhagan et al. (2019) on massively multilingual neural machine translation (mm-NMT), and T5 numbers are from Raffel et al. (2019). While XLM-R is among the largest models, partly due to its large embedding layer, it has a similar number of parameters as XLM-100, and remains significantly smaller than recently introduced Transformer models for multilingual MT and transfer learning. While this table gives more insight into the capacity differences between models, note that it does not highlight other critical differences between the models.
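The approximation above can be checked with a few lines of arithmetic. The sketch below simply plugs the Table 7 values for XLM-R Base and XLM-R into the stated formula; it is illustrative only and recovers roughly the reported 270M and 550M parameter counts.

```python
# Approximate Transformer-encoder parameter count from the Table 7 caption:
# 4*L*Hm^2 (attention projections) + 2*L*Hm*Hff (feed-forward) + V*Hm (embeddings).
def approx_params(L, Hm, Hff, V):
    return 4 * L * Hm**2 + 2 * L * Hm * Hff + V * Hm

# Values taken from Table 7.
xlmr_base = approx_params(L=12, Hm=768, Hff=3072, V=250_000)
xlmr_large = approx_params(L=24, Hm=1024, Hff=4096, V=250_000)
print(f"XLM-R Base ~ {xlmr_base / 1e6:.0f}M parameters")   # ~277M, reported as 270M
print(f"XLM-R      ~ {xlmr_large / 1e6:.0f}M parameters")  # ~558M, reported as 550M
```

Note how the embedding term V*Hm dominates for XLM-R Base (about 192M of the ~277M parameters), which is why vocabulary size is treated as a capacity parameter in Section 5.1.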
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8452–8464, July 5-10, 2020. © 2020 Association for Computational Linguistics

A Generate-and-Rank Framework with Semantic Type Regularization for Biomedical Concept Normalization
Dongfang Xu, Zeyu Zhang and Steven Bethard
School of Information, University of Arizona, Tucson, AZ
{dongfangxu9,zeyuzhang,bethard}@email.arizona.edu

Abstract
Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is challenging because ontologies are large. In most cases, annotated datasets cover only a small sample of the concepts, yet concept normalizers are expected to predict all concepts in the ontology. In this paper, we propose an architecture consisting of a candidate generator and a list-wise ranker based on BERT. The ranker considers pairings of concept mentions and candidate concepts, allowing it to make predictions for any concept, not just those seen during training. We further enhance this list-wise approach with a semantic type regularizer that allows the model to incorporate semantic type information from the ontology during training. Our proposed concept normalization framework achieves state-of-the-art performance on multiple datasets.

1 Introduction
Mining and analyzing the constantly growing unstructured text in the biomedical domain offers great opportunities to advance scientific discovery (Gonzalez et al., 2015; Fleuren and Alkema, 2015) and improve clinical care (Rumshisky et al., 2016; Liu et al., 2019). However, lexical and grammatical variations are pervasive in such text, posing key challenges for data interoperability and the development of natural language processing (NLP) techniques. For instance, heart attack, MI, myocardial infarction, and cardiovascular stroke all refer to the same concept. It is critical to disambiguate these terms by linking them with their corresponding concepts in an ontology or knowledge base. Such linking allows downstream tasks (relation extraction, information retrieval, text classification, etc.) to access the ontology's rich knowledge about biomedical entities, their synonyms, semantic types and mutual relationships.
Concept normalization is a task that maps concept mentions, the in-text natural-language mentions of ontological concepts, to concept entries in a standardized ontology or knowledge base. Techniques for concept normalization have been advancing, thanks in part to recent shared tasks including clinical disorder normalization in 2013 ShARe/CLEF (Suominen et al., 2013) and 2014 SemEval Task 7 Analysis of Clinical Text (Pradhan et al., 2014), and adverse drug event normalization in Social Media Mining for Health (SMM4H) (Sarker et al., 2018; Weissenbacher et al., 2019). Most existing systems use a string-matching or dictionary look-up approach (Leal et al., 2015; D'Souza and Ng, 2015; Lee et al., 2016), which is limited to matching morphologically similar terms, or supervised multi-class classifiers (Belousov et al., 2017; Tutubalina et al., 2018; Niu et al., 2019; Luo et al., 2019a), which may not generalize well when there are many concepts in the ontology and the concept types that must be predicted do not all appear in the training data.
We propose an architecture (shown in Figure 1) that is able to consider both morphological and semantic information.
We first apply a candidate generator to generate a list of candidate concepts, and then use a BERT-based list-wise classifier to rank the candidate concepts. This two-step architecture allows unlikely concept candidates to be filtered out prior to the final classification, a necessary step when dealing with ontologies with millions of concepts. In contrast to previous list-wise classifiers (Murty et al., 2018), which only take the concept mention as input, our BERT-based list-wise classifier takes both the concept mention and the candidate concept name as input, and is thus able to handle concepts that never appear in the training data. We further enhance this list-wise approach with a semantic type regularizer that allows our ranker to leverage semantic type information from the ontology during training.
[Figure 1: Proposed architecture for concept normalization: candidate generation and ranking. For the example mention "head spinning a little", the candidate generator (a multi-class BERT classifier or Lucene) proposes candidate concepts (e.g., C0220870, C0012833, C0018681, C0393760, ...), and the list-wise BERT ranker scores inputs of the form "[CLS] head spinning a little [SEP] Lightheadedness [SEP] Light-headed feeling ...", producing scores such as 0.4, 0.5, and 0.1.]
Our work makes the following contributions:
• Our proposed concept normalization framework achieves state-of-the-art performance on multiple datasets.
• We propose a concept normalization framework consisting of a candidate generator and a list-wise classifier. Our framework is easier to train and the list-wise classifier is able to predict concepts never seen during training.
• We introduce a semantic type regularizer which encourages the model to consider the semantic type information of the candidate concepts. This semantic type regularizer improves performance over the BERT-based list-wise classifier on multiple datasets.
The code for our proposed generate-and-rank framework is available at https://github.com/dongfang91/Generate-and-Rank-ConNorm.

2 Related work
Traditional approaches for concept normalization involve string match and dictionary look-up. These approaches differ in how they construct dictionaries, such as collecting concept mentions from the labeled data as extra synonyms (Leal et al., 2015; Lee et al., 2016), and in different string matching techniques, such as string overlap and edit distance (Kate, 2016). Two of the most commonly used knowledge-intensive concept normalization tools, MetaMap (Aronson, 2001) and cTAKES (Savova et al., 2010), both employ rules to first generate lexical variants for each noun phrase and then conduct dictionary look-up for each variant. Several systems (D'Souza and Ng, 2015; Jonnagaddala et al., 2016) have demonstrated that rule-based concept normalization systems achieve performance competitive with other approaches in a sieve-based approach that carefully selects combinations and orders of dictionaries, exact and partial matching, and heuristic rules. However, such rule-based approaches struggle when there are great variations between concept mention and concept, which is common, for example, when comparing social media text to medical ontologies. Due to the availability of shared tasks and annotated data, the field has shifted toward machine learning techniques.
We divide the machine learning approaches into two categories, classification (Savova et al., 2008; Stevenson et al., 2009; Limsopatham and Collier, 2016; Yepes, 2017; Festag and Spreckelsen, 2017; Lee et al., 2017; Tutubalina et al., 2018; Niu et al., 2019) and learning to rank (Leaman et al., 2013; Liu and Xu, 2017; Li et al., 2017; Nguyen et al., 2018; Murty et al., 2018). Most classification-based approaches using deep neural networks have shown strong performance. They differ in using different architectures, such as Gated Recurrent Units (GRU) with attention mechanisms (Tutubalina et al., 2018), multi-task learning with auxiliary tasks to generate attention weights (Niu et al., 2019), or pre-trained transformer networks (Li et al., 2019; Miftahutdinov and Tutubalina, 2019); different sources for training word embeddings, such as Google News (Limsopatham and Collier, 2016) or concept definitions from the Unified Medical Language System (UMLS) Metathesaurus (Festag and Spreckelsen, 2017); and different input representations, such as using character embeddings (Niu et al., 2019). All classification approaches share the disadvantage that the output space must be the same size as the number of concepts to be predicted, and thus the output space tends to be small such as 2,200 concepts in (Limsopatham and Collier, 2016) and around 22,500 concepts in (Weissenbacher et al., 2019). Classification approaches also struggle with concepts that have only a few example mentions in the training data. Researchers have applied point-wise learning to rank (Liu and Xu, 2017; Li et al., 2017), pairwise learning to rank (Leaman et al., 2013; Nguyen 8454 et al., 2018), and list-wise learning to rank (Murty et al., 2018; Ji et al., 2019) on concept normalization. Generally, the learning-to-rank approach has the advantage of reducing the output space by first obtaining a smaller list of possible candidate concepts via a candidate generator and then ranking them. DNorm (Leaman et al., 2013), based on a pair-wise learning-to-rank model where both mentions and concept names were represented as TFIDF vectors, was the first to use learning-to-rank for concept normalization and achieved the best performance in the ShARe/CLEF eHealth 2013 shared task. List-wise learning-to-rank approaches are both computationally more efficient than pairwise learning-to-rank (Cao et al., 2007) and empirically outperform both point-wise and pair-wise approaches (Xia et al., 2008). There are two implementations of list-wise classifiers using neural networks for concept normalization: Murty et al. (2018) treat the selection of the best candidate concept as a flat classification problem, losing the ability to handle concepts not seen during training; Ji et al. (2019) take a generate-and-rank approach similar to ours, but they do not leverage resources such as synonyms or semantic type information from UMLS in their BERT-based ranker. 3 Proposed methods 3.1 Concept normalization framework We define a concept mention m as an abbreviation such as “MI”, a noun phrase such as “heart attack”, or even a short text such as “an obstruction of the blood supply to the heart”. The goal is then to assign m with a concept c. Formally, given a list of pre-identified concept mentions M = {m1, m2, ..., mn} in the text and an ontology or knowledge base with a set of concepts C = {c1, c2, ..., ct}, the goal of concept normalization is to find a mapping function cj = f(mi) that maps each textual mention to its correct concept. 
We approach concept normalization in two steps: we first use a candidate generator $G(m, C) \rightarrow C_m$ to generate a list of candidate concepts $C_m$ for each mention $m$, where $C_m \subseteq C$ and $|C_m| \ll |C|$. We then use a candidate ranker $R(m, C_m) \rightarrow \hat{C}_m$, where $\hat{C}_m$ is a re-ranked list of candidate concepts sorted by their relevance, preference, or importance. But unlike information retrieval tasks where the order of candidate concepts in the sorted list $\hat{C}_m$ is crucial, in concept normalization we care only that the one true concept is at the top of the list. The main idea of the two-step approach is that we first use a simple and fast system with high recall to generate candidates, and then a more precise system with more discriminative input to rank the candidates.

3.2 Candidate generator
We implement two kinds of candidate generators: a BERT-based multi-class classifier when the number of concepts in the ontology is small, and a Lucene-based (https://lucene.apache.org/) dictionary look-up when there are hundreds of thousands of concepts in the ontology.
3.2.1 BERT-based multi-class classifier
BERT (Devlin et al., 2019) is a contextualized word representation model that has shown great performance in many NLP tasks. Here, we use BERT in a multi-class text-classification configuration as our candidate concept generator. We use the final hidden vector $V_m \in \mathbb{R}^H$ corresponding to the first input token ([CLS]) generated from BERT(m) and a classification layer with weights $W \in \mathbb{R}^{|C| \times H}$, and train the model using a standard classification loss:

$L_G = y \cdot \log(\mathrm{softmax}(V_m W^\top)) \quad (1)$

where $y$ is a one-hot vector and $|y| = |C|$. The score for all concepts is calculated as:

$p(C) = \mathrm{softmax}(V_m W^\top) \quad (2)$

We select the top k most probable concepts in $p(C)$ and feed that list $C_m$ to the ranker.
3.2.2 Lucene-based dictionary look-up system
Multi-pass sieve rule-based systems (D'Souza and Ng, 2015; Jonnagaddala et al., 2016; Luo et al., 2019b) achieve competitive performance when used with the right combinations and orders of different dictionaries, exact and partial matching, and heuristic rules. Such systems relying on basic lexical matching algorithms are simple and fast to implement, but they are only able to generate candidate concepts which are morphologically similar to a given mention. Inspired by the work of Luo et al. (2019b), we implement a Lucene-based sieve normalization system which consists of the following components (see Appendix A.1 for details):
a. Lucene index over the training data finds all mentions that exactly match m.
b. Lucene index over the ontology finds concepts whose preferred name exactly matches m.
c. Lucene index over the ontology finds concepts where at least one synonym of the concept exactly matches m.
d. Lucene index over the ontology finds concepts where at least one synonym of the concept has high character overlap with m.
The ranked list $C_m$ generated by this system is fed as input to the candidate ranker.
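The multi-class candidate generation step (Eqs. 1-2 followed by a top-k cut-off) can be sketched as below. This is a hedged illustration using the Hugging Face transformers library rather than the authors' released code; the checkpoint name, concept count, k, and example mention are placeholders, and the classifier is assumed to have already been fine-tuned on (mention, concept) pairs.

```python
# Hedged sketch of the BERT-based candidate generator: a multi-class classifier
# over all |C| concepts scores a mention, and the top-k concepts form C_m.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_CONCEPTS = 1036   # e.g., the AskAPatient ontology subset (illustrative)
TOP_K = 10            # size of the candidate list passed to the ranker

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_CONCEPTS   # classification layer W of Eq. 1
)
model.eval()  # assumed to be fine-tuned already; loading it fresh gives a random head

mention = "head spinning a little"
inputs = tokenizer(mention, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)   # p(C), Eq. 2

top_scores, top_concept_ids = probs.topk(TOP_K, dim=-1)  # candidate list C_m
print(top_concept_ids.tolist(), top_scores.tolist())
```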
Here, we use BERT similar to a question answering configuration, where given a concept mention m, the task is to choose the most likely candidate concept cm from all candidate concepts Cm. As shown in Figure 1, our classifier input includes the text of the mention m and all synonyms of the candidate concept cm, and takes the form [CLS] m [SEP] syn1(cm) [SEP] ... [SEP] syns(cm) [SEP], where syni(cm) is the ith synonym of concept cm2. We calculate the final hidden vector V(m,cm) ∈RH corresponding to the first input token ([CLS]) generated from BERT for each such input, and then concatenate the hidden vectors of all candidate concepts to form a matrix V(m,Cm) ∈R|Cm|×H. We use this matrix and classification layer weights W ∈RH, and compute a standard classification loss: LR = y ∗log(softmax(V(m,Cm)W T )). (3) where y is a one-hot vector, and |y| = |Cm|. 3.4 Semantic type regularizer To encourage the list-wise classifier towards a more informative ranking than just getting the correct 2In preliminary experiments, we tried only the concept’s preferred term and several other ways of separating synonyms, but none of these resulted in better performance. concept at the top of the list, we propose a semantic type regularizer that is optimized when candidate concepts with the correct semantic type are ranked above candidate concepts with incorrect types. The semantic type of the candidate concept is assumed correct only if it exactly matches the semantic type of the gold truth concept. If the concept has multiple semantic types, all must match. Our semantic type regularizer consists of two components: Rp( ˆyt, ˆyp) = X p∈P(y) (m1 + ˆyp −ˆyt) (4) Rn( ˆyp, ˆyn) = X p∈P(y) max n∈N(y) (m2 + ˆyn −ˆyp) (5) where ˆy = V(m,cm)W T , N(y) is the set of indexes of candidate concepts with incorrect semantic types (negative candidates), P(y) (positive candidates) is the complement of N(y), ˆyt is the score of the gold truth candidate concept, and thus t ∈P(y). The margins m1 and m2 are hyper-parameters for controlling the minimal distances between ˆyt and ˆyp and between ˆyp and ˆyn, respectively. Intuitively, Rp tries to push the score of the gold truth concept above all positive candidates at least by m1, and Rn tries to push the best scored negative candidate below all positive candidates by m2. The final loss function we optimize for the BERTbased list-wise classifier is: L = LR + λRp( ˆyt, ˆyp) + µRn( ˆyp, ˆyn) (6) where λ and µ are hyper-parameters to control the tradeoff between standard classification loss and the semantic type regularizer. 4 Experiments 4.1 Datasets Our experiments are conducted on three social media datasets, AskAPatient (Limsopatham and Collier, 2016), TwADR-L (Limsopatham and Collier, 2016), and SMM4H-17 (Sarker et al., 2018), and one clinical notes dataset, MCN (Luo et al., 2019b). We summarize dataset characteristics in Table 1. AskAPatient The AskAPatient dataset3 contains 17,324 adverse drug reaction (ADR) annotations collected from blog posts. The mentions are mapped to 1,036 medical concepts with 3http://dx.doi.org/10.5281/zenodo. 
55013 8456 Dataset AskAPatient TwADR-L SMM4H-17 MCN Ontology SNOMED-CT & AMT MedDRA MedDRA (PT) SNOMED-CT & RxNorm Subset Y Y N N |Contology| 1,036 2,220 22,500 434,056 |STontology| 22 18 61 125 |Cdataset| 1,036 2,220 513 3,792 |M| 17,324 5,074 9,149 13,609 |Mtrain| 15665.2 4805.7 5,319 5,334 |Mtest| 866.2 142.7 2,500 6,925 |M|/|Cdataset| 16.72 2.29 17.83 3.59 |Ctest −Ctrain| 0 0 43 2,256 |Mtest −Mtrain|/Mtest 39.7% 39.5% 34.7% 53.9% |Mambiguous|/|M| 1.2% 12.8% 0.8% 4.5% Table 1: Dataset statistics, where C is a set of concepts, ST is a set of semantic types, and M is a set of mentions. 22 semantic types from the subset of Systematized Nomenclature Of Medicine-Clinical Term (SNOMED-CT) and the Australian Medicines Terminology (AMT). We follow the 10-fold cross validation (CV) configuration in Limsopatham and Collier (2016) which provides 10 sets of train/dev/test splits. TwADR-L The TwADR-L dataset3 contains 5,074 ADR expressions from social media. The mentions are mapped to 2,220 Medical Dictionary for Regulatory Activities (MedDRA) concepts with 18 semantic types. We again follow the 10-fold cross validation configuration defined by Limsopatham and Collier (2016). SMM4H-17 The SMM4H-17 dataset 4 consists of 9,149 manually curated ADR expressions from tweets. The mentions are mapped to 22,500 concepts with 61 semantic types from MedDRA Preferred Terms (PTs). We use the 5,319 mentions from the released set as our training data, and keep the 2,500 mentions from the original test set as evaluation. MCN The MCN dataset consists of 13,609 concept mentions drawn from 100 discharge summaries from the fourth i2b2/VA shared task (Uzuner et al., 2011). The mentions are mapped to 3792 unique concepts out of 434,056 possible concepts with 125 semantic types in SNOMEDCT and RxNorm. We take 40 clinical notes from the released data as training, consisting of 5,334 mentions, and the standard evaluation data with 6,925 mentions as our test set. Around 2.7% of mentions in MCN could not be mapped to any 4http://dx.doi.org/10.17632/rxwfb3tysd. 1 concepts in the terminology, and are assigned the CUI-less label. A major difference between the datasets is the space of concepts that systems must consider. For AskAPatient and TwADR-L, all concepts in the test data are also in the training data, and in both cases only a couple thousand concepts have to be considered. Both SMM4H-17 and MCN define a much larger concept space: SMM4H-17 considers 22,500 concepts (though only 513 appear in the data) and MCN considers 434,056 (though only 3,792 appear in the data). AskAPatient and TwADR-L have no unseen concepts in their test data, SMM4H-17 has a few (43), while MCN has a huge number (2,256). Even a classifier that perfectly learned all concepts in the training data could achieve only 70.15% accuracy on MCN. MCN also has more unseen mentions: 53.9%, where the other datasets have less than 40%. The MCN dataset is thus harder to memorize, as systems must consider many mentions and concepts never seen in training. Unlike the clinical MCN dataset, in the three social media datasets – AskAPatient, TwADR-L, and SMM4H-17 – it is common for the ADR expressions to share no words with their target medical concepts. For instance, the ADR expression “makes me like a zombie” is assigned the concept “C1443060” with preferred term “feeling abnormal”. The social media datasets do not include context, only the mentions themselves, while the MCN dataset provides the entire note surrounding each mention. 
Since only 4.5% of mentions in the MCN dataset are ambiguous, for the current experiments we ignore this additional context information. 4.2 Unified Medical Language System The UMLS Metathesaurus (Bodenreider, 2004) links similar names for the same concept 8457 from nearly 200 different vocabularies such as SNOMED-CT, MedDRA, RxNorm, etc. There are over 3.5 million concepts in UMLS, and for each concept, UMLS also provides the definition, preferred term, synonyms, semantic type, relationships with other concepts, etc. In our experiments, we make use of synonyms and semantic type information from UMLS. We restrict our concepts to the three vocabularies, MedDRA, SNOMED-CT, and RxNorm in the UMLS version 2017AB. For each concept in the ontologies of the four datasets, we first find its concept unique identifier (CUI) in UMLS. We then extract synonyms and semantic type information according to the CUI. Synonyms (English only) are collected from level 0 terminologies containing vocabulary sources for which no additional license agreements are necessary. 4.3 Evaluation metrics For all four datasets, the standard evaluation of concept normalization systems is accuracy. For the AskAPatient and TwADR-L datasets, which use 10-fold cross validation, the accuracy metrics are averaged over 10 folds. 4.4 Implementation details We use the BERT-based multi-class classifier as the candidate generator on the three social media datasets AskAPatient, TwADR-L, and SMM4H17, and the Lucene-based candidate generator for the MCN dataset. In the social media datasets, the number of concepts in the data is small, few test concepts are unseen in the training data, and there is a greater need to match expressions that are morphologically dissimilar from medical concepts. In the clinical MCN dataset, the opposites are true. For all experiments, we use BioBERT-base (Lee et al., 2019), which further pre-trains BERT on PubMed abstracts (PubMed) and PubMed Central full-text articles (PMC). We use huggingface’s pytorch implementation of BERT5. We select the best hyper-parameters based on the performance on dev set. See Appendix A.2 for hyperparameter settings. 4.5 Comparisons with related methods We compare our proposed architecture with the following state-of-the-art systems. 5https://github.com/huggingface/ transformers WordCNN Limsopatham and Collier (2016) use convolutional neural networks over pre-trained word embeddings to generate a vector representation for each mention, and then feed these into a softmax layer for multi-class classification. WordGRU+Attend+TF-IDF Tutubalina et al. (2018) use a bidirectional GRU with attention over pre-trained word embeddings to generate a vector representation for each mention, concatenate such vector representations with the cosine similarities of the TF-IDF vectors between the mention and all other concept names, and then feed the concatenated vector to a softmax layer for multi-class classification. BERT+TF-IDF Miftahutdinov and Tutubalina (2019) take similar approach as Tutubalina et al. (2018), but use BERT to generate a vector representation for each mention. They concatenate the vector representations with the cosine similarities of the TF-IDF vectors between the mention and all other concept names, and then feed the concatenated vector to a softmax layer for multi-class classification. CharCNN+Attend+MT Niu et al. (2019) use a multi-task attentional character-level convolution neural network. They first convert the mention into a character embedding matrix. 
The auxiliary task network takes the embedding matrix as input for a CNN to learn to generate characterlevel domain-related importance weights. Such learned importance weights are concatenated with the character embedding matrix and fed as input to another CNN model with a softmax layer for multi-class classification. CharLSTM+WordLSTM Han et al. (2017) first use a forward LSTM over each character of the mention and its corresponding character class such as lowercase or uppercase to generate a character-level vector representation, then use another bi-directional LSTM over each word of the mention to generate a word-level representation. They concatenate character-level and wordlevel representations and feed them as input to a softmax layer for multi-class classification. LR+MeanEmbedding Belousov et al. (2017) calculate the mean of three different weighted word embeddings pre-trained on GoogleNews, Twitter and DrugTwitter as vector representations for 8458 TwADR-L AskAPatient SMM4H-17 Approach Dev Test Dev Test Dev Test WordCNN (Limsopatham and Collier, 2016) 44.78 81.41 WordGRU+Attend+TF-IDF (Tutubalina et al., 2018) 85.71 BERT+TF-IDF (Miftahutdinov and Tutubalina, 2019) 89.64 CharCNN+Attend+MT (Niu et al., 2019) 46.46 84.65 CharLSTM+WordLSTM (Han et al., 2017) 87.20 LR+MeanEmbedding (Belousov et al., 2017) 87.70 BERT 47.08 44.05 88.63 87.52 84.74 87.36 BERT + BERT-rank 48.07 46.32 88.14 87.10 84.44 87.66 BERT + BERT-rank + ST-reg 47.98 47.02 88.26 87.46 84.66 88.24 BERT + gold + BERT-rank 52.70 49.69 89.06 87.92 88.57 90.16 BERT + gold + BERT-rank + ST-reg 52.84 50.81 89.68 88.51 88.87 91.08 Table 2: Comparisons of our proposed concept normalization architecture against the current state-of-the-art performances on TwADR-L, AskAPatient, and SMM4H-17 datasets. the mention, where word weights are calculated as inverse document frequency. Such vector representations are fed as input to a multinomial logistic regression (LR) model for multi-class classification. Sieve-based Luo et al. (2019b) build a sievebased normalization model which contains exactmatch and MetaMap (Aronson, 2001) modules. Given a mention as input, the exact-match module first looks for mentions in the training data that exactly match the input, and then looks for concepts from the ontology whose synonyms exactly match the input. If no concepts are found, the mention is fed into MetaMap. They run this sieve-based normalization model twice. In the first round, the model lower-cases the mentions and includes acronym/abbreviation tokens during dictionary lookup. In the second round, the model lower-cases the mentions spans and also removes special tokens such as “&apos;s”, “&quot;”, etc. Since our focus is individual systems, not ensembles, we compare only to other non-ensembles6. 4.6 Models We separate out the different contributions from the following components of our architecture. BERT The BERT-based multi-class classifier. When used alone, we select the most probable concept as the prediction. 6An ensemble of three systems (including CharLSTM+WordLSTM and LR+MeanEmbedding) achieved 88.7% accuracy on the SMM4H-17 dataset (Sarker et al., 2018). MCN Approach Dev Test Sieve-based (Luo et al., 2019b) 76.35 Lucene 79.25 Lucene+BERT-rank 83.56 82.75 Lucene+BERT-rank+ST-reg 84.44 83.56 Lucene+gold+BERT-rank 86.89 84.77 Lucene+gold+BERT-rank+ST-reg 88.59 86.56 Table 3: Accuracy of our proposed concept normalization architecture on MCN dataset. Lucene The Lucene-based dictionary look-up. 
When used alone, we take the top-ranked candidate concept as the prediction. +BERT-rank The BERT-based list-wise classifier, always used in combination with either BERT or Lucene as a canddiate generator +ST-reg The semantic type regularizer, always used in combination with BERT-ranker. We also consider the case (+gold) where we artificially inject the correct concept into the candidate generator’s list if it was not already there. 5 Results Table 2 shows that our complete model, BERT + BERT-rank + ST-reg, achieves a new state-of-theart on two of the social media test sets, and Table 3 shows that Lucene + BERT-rank + ST-reg achieves a new state-of-the-art on the clinical MCN test set. The TwADR-L dataset is the most difficult, with our complete model achieving 47.02% accuracy. In the other datasets, performance of our complete 8459 model is much higher: 87.46% for AskAPatient, 88.24% for SMM4H-177. On the TwADR-L, SMM4H-17, and MCN test sets, adding the BERT-based ranker improves performance over the candidate generator alone, and adding the semantic type regularization further improves performance. For example, Lucene alone achieves 79.25% accuracy on the MCN data, adding the BERT ranker increases this to 82.75%, and adding the semantic type regularizer increases this to 83.56%. On AskAPatient, performance of the full model is similar to just the BERT multiclass classifier, perhaps because in this case BERT alone already successfully improves the state-ofthe-art from 85.71% to 87.52%. The +gold setting allows us to answer how well our ranker would perform if our candidate generator made no mistakes. First, we can see that if the correct concept is always in the candidate list, our list-based ranker (+BERT-rank) outperforms the multi-class classifier (BERT) on all test sets. We also see in this setting that the benefits of the semantic type regularizer are amplified, with test sets of TwADR-L and MCN showing more than 1.00% gain in accuracy from using the regularizer. These findings suggest that improving the quality of the candidate generator should be a fruitful future direction. Overall, we see the biggest performance gains from our proposed generate-and-rank architecture in the MCN dataset. This is the most realistic setting, where the number of candidate concepts is large and many test concepts were never seen during training. In such cases, we cannot use a multiclass classifier as a candidate generator since it would never generate unseen concepts. Thus, our ranker shines in its ability to sort through the long list of possible concepts. 6 Qualitative analysis Table 4 shows an example that is impossible for the multi-class classifier approach to concept normalization. The concept mention “an abdominal wall hernia” in the clinical MCN dataset needs to be mapped to the concept with the preferred name “Hernia of abdominal wall”, but that concept never appeared in the training data. The Lucene-based candidate generator finds this concept, but only 7Miftahutdinov and Tutubalina (2019) use the same architecture as our BERT-based multi-class classifier (row 7), but they achieve 89.28% of accuracy on SMM4H-17. We were unable to replicate this result as their code and parameter settings were unavailable. 
Candidates                                      L   BR
Repair of abdominal wall hernia                 1   3
Repair of anterior abdominal wall hernia        2   4
Obstructed hernia of anterior abdominal wall    3   5
Hernia of abdominal wall                        4   1
Abdominal wall hernia procedure                 5   2

Table 4: Predicted candidate concepts for the mention "An abdominal wall hernia" and their rankings among the outputs of Lucene (L) and BERT-Ranker (BR). Gold concept is "Hernia of abdominal wall".

Candidates                 BR   STR   ST
Influenza-like illness     1    2     DS
Influenza                  2    4     DS
Influenza-like symptoms    3    1     SS
Feeling tired              4    5     F
Muscle cramps in feet      5    3     SS

Table 5: Predicted candidate concepts for the mention "felt like I was coming down with flu" and their rankings among the outputs of BERT-Ranker (BR) and BERT-Ranker + semantic type regularizer (STR). Gold concept is "flu-like symptoms". Semantic types (ST) of the candidates include: disease or syndrome (DS), sign or symptom (SS), finding (F).

through character overlap (step d.) and several other concepts have high overlap as well. Thus Lucene ranks the correct concept 4th in its list. The BERT ranker is able to compare "an abdominal wall hernia" to "Hernia of abdominal wall" and recognize that as a better match than the other options, re-assigning it to rank 1. Table 5 shows an example that illustrates why the semantic type regularizer helps. The mention "felt like I was coming down with flu" in the social media AskAPatient dataset needs to be mapped to the concept with the preferred name "influenza-like symptoms", which has the semantic type of a sign or symptom. The BERT ranker ranks two disease or syndromes higher, placing the correct concept at rank 3. After the semantic type regularizer is added, the system recognizes that the mention should be mapped to a sign or symptom, and correctly ranks it above the disease or syndromes. Note that this happens even though the ranker does not get to see the semantic type of the input mention at prediction time.

7 Limitations and future research

The available concept normalization datasets are somewhat limited. Lee et al. (2017) notes that AskAPatient and TwADR-L have issues including duplicate instances, which can lead to bias in the system; many phrases have multiple valid mappings to concepts but the context necessary to disambiguate is not part of the dataset; and the 10-fold cross-validation makes training complex models unnecessarily expensive. These datasets are also unrealistic in that all concepts in the test data are seen during training. Future research should focus on more realistic datasets that follow the approach of MCN in annotating mentions of concepts from a large ontology and including the full context. Our ability to explore the size of the candidate list was limited by our available computational resources. As the size of the candidate list increases, the true concept is more likely to be included, but the number of training instances also increases, making the computational cost larger, especially for the datasets using 10-fold cross-validation. We chose candidate list sizes as large as we could afford, but there are likely further gains possible with larger candidate lists. Our semantic type regularizer is limited to exact matching: it checks only whether the semantic type of a candidate exactly matches the semantic type of the true concept. The UMLS ontology includes many other relations, such as is-a and part-of relations, and extending our regularizer to encode such rich semantic knowledge may yield further improvements in the BERT-based ranker.
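To ground the discussion of the BERT-based ranker, the sketch below illustrates how a mention can be scored against its candidate concepts using the [CLS]-based input format of Section 3.3. It is a minimal illustration rather than the authors' released implementation: the model and tokenizer objects, the per-candidate synonym lists, and the weight vector w (standing in for W in Eq. 3) are assumed to be supplied by the caller, and training with the semantic type regularizer is omitted.

```python
import torch
from transformers import BertModel, BertTokenizer

def score_candidates(mention, candidate_synonyms, bert, tokenizer, w):
    """Score each candidate concept for one mention (cf. Section 3.3).

    candidate_synonyms: one list of synonym strings per candidate concept
    w: a (hidden_size,) weight vector playing the role of W in Eq. 3
    """
    texts = []
    for syns in candidate_synonyms:
        # One simple way to build "[CLS] m [SEP] syn_1 [SEP] ... [SEP] syn_s [SEP]";
        # the tokenizer adds the leading [CLS] and trailing [SEP] itself.
        texts.append(f"{mention} [SEP] " + " [SEP] ".join(syns))
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    cls = out.last_hidden_state[:, 0]   # (|C_m|, H): one [CLS] vector per candidate
    scores = cls @ w                    # (|C_m|,): one logit per candidate concept
    return torch.softmax(scores, dim=-1)
```

At prediction time the candidate with the highest score is returned; during training these scores would feed the list-wise classification loss of Eq. 3.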
8 Conclusion We propose a concept normalization framework consisting of a candidate generator and a list-wise classifier based on BERT. Because the candidate ranker makes predictions over pairs of concept mentions and candidate concepts, it is able to predict concepts never seen during training. Our proposed semantic type regularizer allows the ranker to incorporate semantic type information into its predictions without requiring semantic types at prediction time. This generate-and-rank framework achieves state-of-theart performance on multiple concept normalization datasets. Acknowledgments We thank the anonymous reviewers for their insightful comments on an earlier draft of this paper. This work was supported in part by National Institutes of Health grant R01LM012918 from the National Library of Medicine (NLM) and grant R01GM114355 from the National Institute of General Medical Sciences (NIGMS). The computations were done in systems supported by the National Science Foundation under Grant No. 1228509. This research was supported in part by an appointment to the Oak Ridge National Laboratory Advanced Short-Term Research Opportunity (ASTRO) Program, sponsored by the U.S. Department of Energy and administered by the Oak Ridge Institute for Science and Education. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, National Science Foundation, or Department of Energy. References Alan R. Aronson. 2001. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. In Proceedings of the AMIA Symposium, pages 17–21. American Medical Informatics Association. Maksim Belousov, William Dixon, and Goran Nenadic. 2017. Using an Ensemble of Generalised Linear and Deep Learning Models in the SMM4H 2017 Medical Concept Normalisation Task. In CEUR Workshop Proceedings, volume 1996, pages 54–58. Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl 1):D267–D270. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to Rank: From Pairwise Approach to Listwise Approach. In Proceedings of the 24th International Conference on Machine Learning, pages 129–136. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jennifer D’Souza and Vincent Ng. 2015. Sieve-based entity linking for the biomedical domain. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 297– 302, Beijing, China. Association for Computational Linguistics. 8461 Sven Festag and Cord Spreckelsen. 2017. Word Sense Disambiguation of Medical Terms via Recurrent Convolutional Neural Networks. In Health Informatics Meets EHealth: Digital InsightInformationDriven Health & Care. Proceedings of the 11th EHealth2017 Conference, volume 236, pages 8–15. Wilco W.M. Fleuren and Wynand Alkema. 2015. Application of text mining in the biomedical domain. Methods, 74:97–106. Graciela H. 
Gonzalez, Tasnia Tahsin, Britton C. Goodale, Anna C. Greene, and Casey S. Greene. 2015. Recent Advances and Emerging Applications in Text and Data Mining for Biomedical Discovery. Briefings in Bioinformatics, 17(1):33–42. Sifei Han, Tung Tran, Anthony Rios, and Ramakanth Kavuluru. 2017. Team UKNLP: Detecting ADRs, Classifying Medication Intake Messages, and Normalizing ADR Mentions on Twitter. In CEUR Workshop Proceedings, volume 1996, pages 49–53. Zongcheng Ji, Qiang Wei, and Hua Xu. 2019. Bertbased ranking for biomedical entity normalization. arXiv preprint arXiv:1908.03548. Jitendra Jonnagaddala, Toni Rose Jue, Nai-Wen Chang, and Hong-Jie Dai. 2016. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion. Database, 2016:baw112. Rohit J. Kate. 2016. Normalizing clinical terms using learned edit distance patterns. Journal of the American Medical Informatics Association, 23(2):380– 386. Andr´e Leal, Bruno Martins, and Francisco Couto. 2015. ULisboa: Recognition and normalization of medical concepts. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 406–411, Denver, Colorado. Association for Computational Linguistics. Robert Leaman, Rezarta Islamaj Do˘gan, and Zhiyong Lu. 2013. DNorm: disease name normalization with pairwise learning to rank. Bioinformatics, 29(22):2909–2917. Hsin-Chun Lee, Yi-Yu Hsu, and Hung-Yu Kao. 2016. AuDis: an automatic CRF-enhanced disease normalization in biomedical text. Database, 2016. Baw091. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. Btz682. Kathy Lee, Sadid A. Hasan, Oladimeji Farri, Alok Choudhary, and Ankit Agrawal. 2017. Medical Concept Normalization for Online User-Generated Texts. In 2017 IEEE International Conference on Healthcare Informatics (ICHI), pages 462–469. IEEE. Fei Li, Yonghao Jin, Weisong Liu, Bhanu Pratap Singh Rawat, Pengshan Cai, and Hong Yu. 2019. FineTuning Bidirectional Encoder Representations From Transformers (BERT)–Based Models on LargeScale Electronic Health Record Notes: An Empirical Study. JMIR Med Inform, 7(3):e14830. Haodi Li, Qingcai Chen, Buzhou Tang, Xiaolong Wang, Hua Xu, Baohua Wang, and Dong Huang. 2017. CNN-based ranking for biomedical entity normalization. BMC Bioinformatics, 18(11):385. Nut Limsopatham and Nigel Collier. 2016. Normalising medical concepts in social media texts by learning semantic representation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1014–1023, Berlin, Germany. Association for Computational Linguistics. Feifan Liu, Abhyuday Jagannatha, and Hong Yu. 2019. Towards drug safety surveillance and pharmacovigilance: Current progress in detecting medication and adverse drug events from electronic health records. Drug Saf, 42:95–97. Hongwei Liu and Yun Xu. 2017. A Deep Learning Way for Disease Name Representation and Normalization. In Natural Language Processing and Chinese Computing, pages 151–157. Springer International Publishing. Yen-Fu Luo, Weiyi Sun, and Anna Rumshisky. 2019a. A Hybrid Normalization Method for Medical Concepts in Clinical Narrative using Semantic Matching. In AMIA Joint Summits on Translational Science proceedings, volume 2019, pages 732–740. American Medical Informatics Association. Yen-Fu Luo, Weiyi Sun, and Anna Rumshisky. 2019b. 
MCN: A comprehensive corpus for medical concept normalization. Journal of Biomedical Informatics, pages 103–132. Zulfat Miftahutdinov and Elena Tutubalina. 2019. Deep neural models for medical concept normalization in user-generated texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 393–399, Florence, Italy. Association for Computational Linguistics. Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hierarchical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 97–109, Melbourne, Australia. Association for Computational Linguistics. Thanh Ngan Nguyen, Minh Trang Nguyen, and Thanh Hai Dang. 2018. Disease Named Entity Normalization Using Pairwise Learning To Rank and Deep Learning. Technical report, VNU University of Engineering and Technology. 8462 Jinghao Niu, Yehui Yang, Siheng Zhang, Zhengya Sun, and Wensheng Zhang. 2019. Multi-task CharacterLevel Attentional Networks for Medical Concept Normalization. Neural Process Lett, 49(3):1239– 1256. Sameer Pradhan, No´emie Elhadad, Wendy Chapman, Suresh Manandhar, and Guergana Savova. 2014. SemEval-2014 task 7: Analysis of clinical text. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 54–62, Dublin, Ireland. Association for Computational Linguistics. Anna Rumshisky, Marzyeh Ghassemi, Tristan Naumann, Peter Szolovits, V.M. Castro, T.H. McCoy, and R.H. Perlis. 2016. Predicting early psychiatric readmission with natural language processing of narrative discharge summaries. Transl Psychiatry, 6(10):e921–e921. Abeed Sarker, Maksim Belousov, Jasper Friedrichs, Kai Hakala, Svetlana Kiritchenko, Farrokh Mehryary, Sifei Han, Tung Tran, Anthony Rios, Ramakanth Kavuluru, Berry de Bruijn, Filip Ginter, Debanjan Mahata, Saif M. Mohammad, Goran Nenadic, and Graciela Gonzalez-Hernandez. 2018. Data and systems for medication-related text classification and concept normalization from Twitter: insights from the Social Media Mining for Health (SMM4H)-2017 shared task. Journal of the American Medical Informatics Association, 25(10):1274–1283. Guergana K. Savova, Anni R. Coden, Igor L. Sominsky, Rie Johnson, Philip V. Ogren, Piet C. De Groen, and Christopher G. Chute. 2008. Word sense disambiguation across two domains: Biomedical literature and clinical notes. Journal of Biomedical Informatics, 41(6):1088–1100. Guergana K. Savova, James J. Masanz, Philip V. Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C. KipperSchuler, and Christopher G. Chute. 2010. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association, 17(5):507–513. Mark Stevenson, Yikun Guo, Abdulaziz Alamri, and Robert Gaizauskas. 2009. Disambiguation of biomedical abbreviations. In Proceedings of the BioNLP 2009 Workshop, pages 71–79, Boulder, Colorado. Association for Computational Linguistics. Hanna Suominen, Sanna Salanter¨a, Sumithra Velupillai, Wendy W. Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R. South, Danielle L. Mowery, Gareth J.F. Jones, Johannes Leveling, Liadh Kelly, Lorraine Goeuriot, David Martinez, and Guido Zuccon. 2013. Overview of the ShARe/CLEF eHealth Evaluation Lab 2013. In Information Access Evaluation. 
Multilinguality, Multimodality, and Visualization, pages 212–231. Springer Berlin Heidelberg. Elena Tutubalina, Zulfat Miftahutdinov, Sergey Nikolenko, and Valentin Malykh. 2018. Medical concept normalization in social media posts with recurrent neural networks. Journal of Biomedical Informatics, 84:93–102. ¨Ozlem Uzuner, Brett R. South, Shuying Shen, and Scott L. DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552–556. Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen O’Connor, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2019. Overview of the fourth social media mining for health (SMM4H) shared tasks at ACL 2019. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 21–30, Florence, Italy. Association for Computational Linguistics. Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In Proceedings of the 25th International Conference on Machine Learning, pages 1192–1199. Association for Computing Machinery. Antonio Jimeno Yepes. 2017. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation. Journal of Biomedical Informatics, 73:137–147. 8463 A Appendices A.1 Lucene-based dictionary look-up system The lucene-based dictionary look-up system consists of the following components: (a) Lucene index over the training data finds all CUI-less mentions that exactly match mention m. (b) Lucene index over the training data finds CUIs of all training mentions that exactly match mention m. (c) Lucene index over UMLS finds CUIs whose preferred name exactly matches mention m. (d) Lucene index over UMLS finds CUIs where at least one synonym of the CUI exactly matches mention m. (e) Lucene index over UMLS finds CUIs where at least one synonym of the CUI has high character overlap with mention m. To check the character overlap, we run the following three rules sequentially: token-level matching, fuzzy string matching with a maximum edit distance of 2, and character 3-gram matching.. See Figure A1 for the flow of execution across the components. Whenever there are multiple CUIs generated from a component (a) to (e), they are fed, along with the concept mention, to the BERT-based reranker (f). During training, we used component (e) alone instead of the combination of components (b)-(e) to generate training instances for the BERT-based reranker (f) as it generated many more training examples and resulted in better performance on the dev set. During evaluation, we used the whole pipeline. Concept mention (a) Training data CUI-less mention exact-match (Lucene) (b) Training data CUI mention exact-match (Lucene) (c) UMLS preferred name exact-match (Lucene) (d) UMLS synonyms exact-match (Lucene) (e) UMLS synonyms partial-match (Lucene) (f) BERT-based reranker 0 0 0 0 CUI-less 1+ CUI-less 0 CUI CUI CUI CUI 1 2+ CUI CUI CUI CUI 1 2+ CUI CUI CUI CUI 1 2+ CUI CUI CUI CUI 1 2+ CUI 1 Figure A1: Architecture of the lucene-based dictionary look-up system. The edges out of a search process indicate the number of matches necessary to follow the edge. Outlined nodes are terminal states that represent the predictions of the system. 
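For clarity, the following sketch outlines the control flow of the sieve just described, as we read Figure A1. Each component is abstracted as a lookup function that returns a list of candidate concepts; the actual Lucene queries, fuzzy matching, and index construction are not shown, and all function and variable names are illustrative.

```python
def lucene_sieve(mention, cuiless_lookup, lookups, bert_reranker):
    """Cascade the lookups (a)-(e) of Appendix A.1, falling back to the reranker (f).

    cuiless_lookup: component (a), exact match against CUI-less training mentions
    lookups:        components (b)-(e) in order, each returning candidate CUIs
    bert_reranker:  component (f), ranks a multi-candidate list for the mention
    """
    if cuiless_lookup(mention):          # (a) matched a CUI-less training mention
        return "CUI-less"
    for lookup in lookups:               # try (b), (c), (d), (e) in order
        candidates = lookup(mention)
        if len(candidates) == 1:         # a unique match is taken directly
            return candidates[0]
        if len(candidates) > 1:          # ties are broken by the BERT-based reranker
            return bert_reranker(mention, candidates)[0]
    return "CUI-less"                    # nothing matched in any index
```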
                            Multi-class                  List-wise
                            AAP    TwADR-L  SMM4H-17     AAP    TwADR-L  SMM4H-17  MCN
learning_rate               1e-4   5e-5     5e-5         5e-5   5e-5     3e-5      3e-5
num_train_epochs            30     30       40           10     10       20        30
per_gpu_train_batch_size    32     16       32           16     16       16        8
save_steps                  487    301      166          976    301      333       250
warmup_steps                1463   903      664          976    301      666       750
list size (k)               -      -        -            10     20       10        30
m1                          -      -        -            0.0    0.0      0.0       0.1
m2                          -      -        -            0.2    0.2      0.2       0.2
λ                           -      -        -            0.6    0.4      0.4       0.4
µ                           -      -        -            0.6    0.4      0.4       0.8

Table A1: Hyper-parameters for BERT-based multi-class and list-wise classifiers. AAP = AskAPatient. Terms with underscores are hyper-parameters in huggingface's pytorch implementation of BERT.

A.2 Hyper-parameters

Table A1 shows the hyper-parameters for our models. We use huggingface's pytorch implementation of BERT. We tune the hyper-parameters via grid search, and select the best BERT hyper-parameters based on the performance on the dev set. To keep the size of the candidate list equal to k for every mention, we apply the following rules: if the list does not contain the gold concept and is already of length k, we inject the correct one and remove an incorrect candidate; if the list has fewer than k candidates, we inject the gold concept and the most frequent concepts in the training set to reach k.
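The candidate-list padding rule described above is small enough to state directly in code. The sketch below is an illustrative rendering of that rule; which incorrect candidate is dropped is not specified in the text, so dropping the lowest-ranked one is an assumption made here.

```python
def pad_candidate_list(candidates, gold, frequent_concepts, k):
    """Force a training candidate list to length k (cf. Appendix A.2).

    candidates:        ranked candidates from the generator
    gold:              the gold concept for this mention
    frequent_concepts: concepts ordered by training-set frequency, used as filler
    """
    cands = list(candidates[:k])
    if gold not in cands:
        if len(cands) == k:
            cands[-1] = gold          # assumption: drop the lowest-ranked incorrect candidate
        else:
            cands.append(gold)
    filler = (c for c in frequent_concepts if c not in cands)
    while len(cands) < k:
        cands.append(next(filler))    # pad with the most frequent training concepts
    return cands
```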
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8465–8475 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 8465

Hierarchical Entity Typing via Multi-level Learning to Rank
Tongfei Chen Yunmo Chen Benjamin Van Durme
Johns Hopkins University
{tongfei, ychen, vandurme}@cs.jhu.edu

Abstract

We propose a novel method for hierarchical entity classification that embraces ontological structure at both training and during prediction. At training, our novel multi-level learning-to-rank loss compares positive types against negative siblings according to the type tree. During prediction, we define a coarse-to-fine decoder that restricts viable candidates at each level of the ontology based on already predicted parent type(s). We achieve state-of-the-art across multiple datasets, particularly with respect to strict accuracy.1

1 Code can be found at https://github.com/ctongfei/hierarchical-typing.

1 Introduction

Entity typing is the assignment of a semantic label to a span of text, where that span is usually a mention of some entity in the real world. Named entity recognition (NER) is a canonical information extraction task, commonly considered a form of entity typing that assigns spans to one of a handful of types, such as PER, ORG, GPE, and so on. Fine-grained entity typing (FET) seeks to classify spans into types according to more diverse, semantically richer ontologies (Ling and Weld, 2012; Yosef et al., 2012; Gillick et al., 2014; Del Corro et al., 2015; Choi et al., 2018), and has begun to be used in downstream models for entity linking (Gupta et al., 2017; Raiman and Raiman, 2018). Consider the example in Figure 1 from the FET dataset, FIGER (Ling and Weld, 2012). The mention of interest, Hollywood Hills, will be typed with the single label LOC in traditional NER, but may be typed with a set of types {/location, /geography, /geography/mountain} under a fine-grained typing scheme. In these finer-grained typing schemes, types usually form a hierarchy: there are a set of coarse types that lies on

[Figure 1: An example mention classified using the FIGER ontology. Positive types are highlighted. The figure shows the sentence "He is interred at Forest Lawn Memorial Park in Hollywood Hills, Los Angeles, CA." with the mention representation attached to a type tree containing nodes such as /location, /geography (city, county, mountain, island), /person (artist, doctor), and the root entity.]
Intuitively, coarser types are easier whereas finer types are harder to classify: we capture this intuition by allowing distinct margins at each level of the ranking model. Coupled with a novel coarse-to-fine decoder that searches on the type hierarchy, our approach guarantees that predictions do not violate the hierarchical property, and achieves state-of-the-art results according to multiple measures across various commonly used datasets.

[Figure 2: Various type ontologies. Different levels of the types are shown in different shades, from L0 to L3. The ENTITY and OTHER special nodes are discussed in section 3. The panels show the AIDA ontology (e.g. vehicle, weapon, aircraft, bomb, ammunition), the OntoNotes ontology (e.g. person, artist, athlete, actor, author, event, food), and the BBN ontology (e.g. product, organization, substance, corporation, chemical, drug).]

2 Related Work

FET is usually studied as allowing for sentence-level context in making predictions, notably starting with Ling and Weld (2012) and Gillick et al. (2014), where they created the commonly used FIGER and OntoNotes datasets for FET. While researchers have considered the benefits of document-level (Zhang et al., 2018) and corpus-level (Yaghoobzadeh and Schütze, 2015) context, here we focus on the sentence-level variant for best contrast to prior work. Progress in FET has focused primarily on:

• Better mention representations: Starting from sparse hand-crafted binary features (Ling and Weld, 2012; Gillick et al., 2014), the community has moved to distributed representations (Yogatama et al., 2015), to pre-trained word embeddings with LSTMs (Ren et al., 2016a,b; Shimaoka et al., 2016; Abhishek et al., 2017; Shimaoka et al., 2017) or CNNs (Murty et al., 2018), with mention-to-context attention (Zhang et al., 2018), then to employing pre-trained language models like ELMo (Peters et al., 2018) to generate ever better representations (Lin and Ji, 2019). Our approach builds upon these developments and uses state-of-the-art mention encoders.

• Incorporating the hierarchy: Most prior works approach the hierarchical typing problem as multi-label classification, without using information in the hierarchical structure, but there are a few exceptions. Ren et al. (2016a) proposed an adaptive margin for learning-to-rank so that similar types have a smaller margin; Xu and Barbosa (2018) proposed hierarchical loss normalization that penalizes output that violates the hierarchical property; and Murty et al. (2018) proposed to learn a subtyping relation to constrain the type embeddings in the type space. In contrast to these approaches, our coarse-to-fine decoding approach strictly guarantees that the output does not violate the hierarchical property, leading to better performance.

HYENA (Yosef et al., 2012) applied ranking to sibling types in a type hierarchy, but the number of predicted positive types is trained separately with a meta-model, hence it does not support neural end-to-end training. Researchers have proposed alternative FET formulations whose types are not formed in a type hierarchy, in particular Ultra-fine entity typing (Choi et al., 2018; Xiong et al., 2019; Onoe and Durrett, 2019), with a very large set of types derived from phrases mined from a corpus. FET in KB (Jin et al., 2019) labels mentions to types in a knowledge base with multiple relations, forming a type graph. Dai et al.
(2019) augments the task with entity linking to KBs. 3 Problem Formulation We denote a mention as a tuple x = (w, l, r), where w = (w1, · · · , wn) is the sentential context and the span [l : r] marks a mention of interest in sentence w. That is, the mention of interest is (wl, · · · , wr). Given x, a hierarchical entity typing model outputs 8467 a set of types Y in the type ontology Y, i.e. Y ⊆Y. Type hierarchies take the form of a forest, where each tree is rooted by a top-level supertype (e.g. /person, /location, etc.). We add a dummy parent node ENTITY = “/”, the supertype of all entity types, to all the top-level types, effectively transforming a type forest to a type tree. In Figure 2, we show 3 type ontologies associated with 3 different datasets (see subsection 5.1), with the dummy ENTITY node augmented. We now introduce some notation for referring to aspects of a type tree. The binary relation “type z is a subtype of y” is denoted as z <: y.2 The unique parent of a type y in the type tree is denoted ¯y ∈Y, where ¯y is undefined for y = ENTITY. The immediate subtypes of y (children nodes) are denoted Ch(y) ⊆Y. Siblings of y, those sharing the same immediate parent, are denoted Sb(y) ⊆ Y, where y < Sb(y). In the AIDA FET ontology (see Figure 2), the maximum depth of the tree is L = 3, and each mention can only be typed with at most 1 type from each level. We term this scenario singlepath typing, since there can be only 1 path starting from the root (ENTITY) of the type tree. This is in contrast multi-path typing, such as in the BBN dataset, where mentions may be labeled with multiple types on the same level of the tree. Additionally, in AIDA, there are mentions labeled such as as /per/police/<unspecified>. In FIGER, we find instances with labeled type /person but not any further subtype. What does it mean when a mention x is labeled with a partial type path, i.e., a type y but none of the subtypes z <: y? We consider two interpretations: • Exclusive: x is of type y, but x is not of any type z <: y. • Undefined: x is of type y, but whether it is an instance of some z <: y is unknown. We devise different strategies to deal with these two conditions. Under the exclusive case, we add a dummy OTHER node to every intermediate branch node in the type tree. For any mention x labeled with type y but none of the subtypes z <: y, we add this additional label “y/OTHER” to the labels of x (see Figure 2: AIDA). For example, if we interpret a partial type path /person 2 Per programming language literature, e.g. the type system F<: that supports subtyping. in FIGER as exclusive, we add another type /person/OTHER to that instance. Under the undefined case, we do not modify the labels in the dataset. We will see this can make a significant difference depending on the way a specific dataset is annotated. 4 Model 4.1 Mention Representation Hidden representations for entity mentions in sentence w are generated by leveraging recent advances in language model pre-training, e.g. ELMo (Peters et al., 2018).3 The ELMo representation for each token wi is denoted as wi ∈Rdw. Dropout is applied with probability pD to the ELMo vectors. Our mention encoder largely follows Lin and Ji (2019). First a mention representation is derived using the representations of the words in the mention. We apply a max pooling layer atop the mention after a linear transformation:4 m = MaxPool(Twl, · · · , Twr) ∈Rdw . (1) Then we employ mention-to-context attention first described in Zhang et al. 
(2018) and later employed by Lin and Ji (2019): a context vector c is generated by attending the sentence with a query vector derived from the mention vector m. We use the multiplicative attention of Luong et al. (2015): ai ∝exp(mTQwi) (2) c = N X i=1 aiwi ∈Rdw (3) The final representation for an entity mention is generated via concatenation of the mention and context vector: [m ; c] ∈R2dw. 4.2 Type Scorer We learn a type embedding y ∈Rdt for each type y ∈Y. To score an instance with representation [m ; c], we pass it through a 2-layer feed-forward network that maps into the same space as the type space Rdt, with tanh as the nonlinearity. The final 3 Lin and Ji (2019) found that ELMo performs better than BERT (Devlin et al., 2019) for FET. Our internal experiments also confirm this finding. We hypothesize that this is due to the richer character-level information contained in lowerlevel ELMo representations that are useful for FET. 4 Lin and Ji (2019) proposed an attentive pooler with a learned global query vector. We found out that a simple max pooling layer achieves similar performance. 8468 score is an inner product between the transformed feature vector and the type embedding: F(x, y) = FFNN([m ; c]) · y . (4) 4.3 Hierarchical Learning-to-Rank We introduce our novel hierarchical learning-torank loss that (1) allows for natural multi-label classification and (2) takes the hierarchical ontology into account. We start with a multi-class hinge loss that ranks positive types above negative types (Weston and Watkins, 1999): Jflat(x,Y) = X y∈Y X y′<Y [ξ −F(x, y) + F(x, y′)]+ (5) where [x]+ = max{0, x}. This is actually learningto-rank with a ranking SVM (Joachims, 2002): the model learns to rank the positive types y ∈Y higher than those negative types y′ < Y, by imposing a margin ξ between y and y′: type y should rank higher than y′ by ξ. Note that in Equation 5, since it is a linear SVM, the margin hyperparameter ξ could be just set as 1 (the type embeddings are linearly scalable), and we rely on L2 regularization to constrain the type embeddings. Multi-level Margins However, this method considers all candidate types to be flat instead of hierarchical — all types are given the same treatment without any prior on their relative position in the type hierarchy. Intuitively, coarser types (higher in the hierarchy) should be easier to determine (e.g. /person vs /location should be fairly easy for the model), but fine-grained types (e.g. /person/artist/singer) are harder. We encode this intuition by (i) learning to rank types only on the same level in the type tree; (ii) setting different margin parameters for the ranking model with respect to different levels: X y∈Y X y′∈Sb(y)\Y [ξlev(y) −F(x, y) + F(x, y′)]+ (6) Here lev(y) is the level of the type y: for example, lev(/location) = 1, and lev(/person/artist/singer) = 3. In Equation 6, each positive type y is only compared against its negative siblings Sb(y)\Y, and the margin hyperparameter is set to be ξlev(y), i.e., a margin dependent on which level y is in the tree. 
Intuitively, we should set ξ1 > ξ2 > ξ3 since our (1 −↵)⇠2 <latexit sha1_base64="Yckm0lgEuwguyWYDTSkfyWZpZs=">ACw3 icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+8 3u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8Xko PUryQwvFWriEpwbxVFB4wosCS6xbielwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5h jATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QZhDK amiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sg0ZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg=</latexit> <latexit sha1_base64="Yckm0lgEuwguyWYDTSkfyWZpZs=">ACw3 icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+8 3u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8Xko PUryQwvFWriEpwbxVFB4wosCS6xbielwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5h jATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QZhDK amiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sg0ZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg=</latexit> <latexit sha1_base64="Yckm0lgEuwguyWYDTSkfyWZpZs=">ACw3 icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+8 3u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8Xko PUryQwvFWriEpwbxVFB4wosCS6xbielwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5h jATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QZhDK amiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sg0ZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg=</latexit> <latexit sha1_base64="Yckm0lgEuwguyWYDTSkfyWZpZs=">ACw3 icdVFNa9tAEF2rTZq6Sey0x15ETcAtjZFCIDkGSqHFOokYBkzWo3ixfshdkeNjdAvSY/Jj+q/6doxUragYXHm3k7b2bSQgpHUfS7Fbx4ubX9aud1+8 3u3n6ne/D20pnSchxyI429TsGhFBqHJEjidWERVCrxKp19WeavfqJ1wugftChwrOBGi1xwIE9Nup1+fJSALKbwMZmLyfGk24sG0SrC5yBegx5bx8Xko PUryQwvFWriEpwbxVFB4wosCS6xbielwL4DG5w5KEGhW5crZzX4aFnsjA31j9N4YrdVFSgnFuo1FcqoKl7mluS/8qNSsrPxpXQRUmo+WOjvJQhmXC5h jATFjnJhQfArfBeQz4FC5z8stqHm20cB4kWZd2kpUjRz6ixyY/+8p+9O+mVpvDzaryl+cptuzEDCb+O5pL+I9tUpcbMCFIv9L9Z9FXcKAU6+5QZhDK amiOVHt7xk/vd5zcHk8iKNB/P2kd95fX3aHvWcfWJ/F7JSds2/sg0ZyW7Y/fsIfgazAIb0GNp0Fpr3rFGBPUfKwDgvg=</latexit> ↵⇠2 <latexit sha1_base64="e6geauY3jiuodw/0nXuPUGwUKo=">ACvX icdVHbatAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3 /x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZW/I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTj UFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCWFhCq40QNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSC wYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7p vev9xBcHA/TZJh+PRmcHm0uyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> <latexit sha1_base64="e6geauY3jiuodw/0nXuPUGwUKo=">ACvX icdVHbatAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3 /x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZW/I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTj 
UFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCWFhCq40QNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSC wYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7p vev9xBcHA/TZJh+PRmcHm0uyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> <latexit sha1_base64="e6geauY3jiuodw/0nXuPUGwUKo=">ACvX icdVHbatAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3 /x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZW/I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTj UFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCWFhCq40QNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSC wYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7p vev9xBcHA/TZJh+PRmcHm0uyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> <latexit sha1_base64="e6geauY3jiuodw/0nXuPUGwUKo=">ACvX icdVHbatAEF0rvaTuLUkf+yJqAqEUI4VA+9ZAX/KYQp0EJGNGq1G8eC9id9TYCH9G30rzXf2bjh1TrKQdWDicmbNzZqaotQqUJL970c6jx0+e7j7rP3 /x8tXrvf2Di+AaL3EknXb+qoCAWlkckSKNV7VHMIXGy2L2ZW/I4+KGe/0aLGsYFrqyolgZjKctD1FPK5mhxP9gbJMFlH/BCkGzAQmzif7Pd+5qWTj UFLUkMIWZrUNG7Bk5Ial/28CViDnME1ZgwtGAzjdu15GR8yU8aV8/wsxWt2W9GCWFhCq40QNwP7ci/5XLGqo+jVtl64bQyrtGVaNjcvFqAXGpPErSC wYgvWKvsZyCB0m8pv7hdpsgQaNHvezSWhXIM1rs8tlf/gO706x0Nc9r8Ybma7f9zgykeB3dJf1Htq0qnJsRFCzk3zxylXTGgC3f50QlVtBoamlOtOR7p vev9xBcHA/TZJh+PRmcHm0uyveinfiSKTiozgVZ+JcjIQUTvwQv8Rt9DnCSEf2rjTqbTRvRCeimz9leN+2</latexit> ↵⇠1 <latexit sha1_base64="mg5btP8pvA4PX4Lgs 3gJW+5gB0=">ACvXicdVHbatAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PG Rvgz+lba7+rfdOyYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQE BtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVx MDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBV caoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZpr QrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnW vI90/vXewgu3w/TZJh+TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU2heiU5E t38AYynftQ=</latexit> <latexit sha1_base64="mg5btP8pvA4PX4Lgs 3gJW+5gB0=">ACvXicdVHbatAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PG Rvgz+lba7+rfdOyYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQE BtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVx MDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBV caoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZpr QrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnW vI90/vXewgu3w/TZJh+TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU2heiU5E t38AYynftQ=</latexit> <latexit sha1_base64="mg5btP8pvA4PX4Lgs 3gJW+5gB0=">ACvXicdVHbatAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PG Rvgz+lba7+rfdOyYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQE BtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVx MDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBV caoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZpr QrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnW vI90/vXewgu3w/TZJh+TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU2heiU5E t38AYynftQ=</latexit> <latexit sha1_base64="mg5btP8pvA4PX4Lgs 3gJW+5gB0=">ACvXicdVHbatAEF0rbZO6l1z62BdREwilGKkU0rcE+tLHFOokIBkzWo3ixXsRu6PG 
Rvgz+lba7+rfdOyYiXtwMLhzJydMzNFrVWgJPndi3YePX6yu/e0/+z5i5f7B4dHl8E1XuJIOu38dQE BtbI4IkUar2uPYAqNV8Xs0yp/9Q19UM5+pUWNYwM3VlVKAjGV5aDrKeRzNUknB4NkmKwjfgjSDRiITVx MDns/8tLJxqAlqSGELE1qGrfgSUmNy37eBKxBzuAGM4YWDIZxu/a8jI+ZKePKeX6W4jW7rWjBhLAwBV caoGm4n1uR/8plDVUfx62ydUNo5V2jqtExuXi1gLhUHiXpBQOQXrHXWE7BgyReU/94u02QoNGjXnZpr QrkGS12+ewv/47daVa6mue1eEvztdt+ZwZSvI7ukv4j21YVzs0IChbybx65SjpjwJZvc6ISK2g0tTQnW vI90/vXewgu3w/TZJh+TA4P9lcdk+8Fm/EiUjFqTgXn8WFGAkpnPgufopf0VmEkY7sXWnU2heiU5E t38AYynftQ=</latexit> (1 −↵)⇠1 <latexit sha1_base64="NQaqpUnq6dCEYoNBL kI0fjNzoB0=">ACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQ XaMJw3yJPupH+TdWskEyu1rQcDhVp+tUV5pFShJfveiW7fv3L13cL/4OGjx4eDoydnwdVe4kQ67 fxFDgG1sjghRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMB kn24hvgnQHhmIXp/Oj3o+scLI2aElqCGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHS en6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GsF+B Ei+rf7zfJkjQ6FG3XVqrHlGi1+pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvH rlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafB c/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> <latexit sha1_base64="NQaqpUnq6dCEYoNBL kI0fjNzoB0=">ACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQ XaMJw3yJPupH+TdWskEyu1rQcDhVp+tUV5pFShJfveiW7fv3L13cL/4OGjx4eDoydnwdVe4kQ67 fxFDgG1sjghRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMB kn24hvgnQHhmIXp/Oj3o+scLI2aElqCGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHS en6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GsF+B Ei+rf7zfJkjQ6FG3XVqrHlGi1+pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvH rlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafB c/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> <latexit sha1_base64="NQaqpUnq6dCEYoNBL kI0fjNzoB0=">ACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQ XaMJw3yJPupH+TdWskEyu1rQcDhVp+tUV5pFShJfveiW7fv3L13cL/4OGjx4eDoydnwdVe4kQ67 fxFDgG1sjghRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMB kn24hvgnQHhmIXp/Oj3o+scLI2aElqCGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHS en6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GsF+B Ei+rf7zfJkjQ6FG3XVqrHlGi1+pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvH rlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafB c/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> <latexit sha1_base64="NQaqpUnq6dCEYoNBL kI0fjNzoB0=">ACw3icdVHbihNBEO2MtzVeNquPvgyGhSgaZkTQxwURfFzB7C5kQqjpqdk06cvQ XaMJw3yJPupH+TdWskEyu1rQcDhVp+tUV5pFShJfveiW7fv3L13cL/4OGjx4eDoydnwdVe4kQ67 fxFDgG1sjghRovKo9gco3n+fLDJn/+FX1Qzn6hdYUzA5dWlUoCMTUfHI7S1xnoagEvspWap/PBMB kn24hvgnQHhmIXp/Oj3o+scLI2aElqCGaJhXNGvCkpMa2n9UBK5BLuMQpQwsGw6zZOm/jY2aKuHS en6V4y+4rGjAhrE3OlQZoEa7nNuS/ctOayvezRtmqJrTyqlFZ65hcvFlDXCiPkvSaAUiv2GsF+B Ei+rf7zfJkjQ6FG3XVqrHlGi1+pd/xe40K13F81r8Rqut235nBlK8ju6S/iPbV+XOLQlyFvJvH rlKOmPAFi8zogJLqDU1tCJq+Z7p9evdBGdvxmkyTj+/HZ6Mdpc9EM/EczESqXgnTsQncSomQopafB c/xa/oY7SMfERXpVFvp3kqOhG1fwAoseC9</latexit> Figure 3: Hierarchical learning-to-rank. Positive type paths are colored black, negative type paths are colored gray. Each blue line corresponds to a threshold derived from a parent node. Positive types (on the left) are ranked above negative types (on the right). model should be able to learn a larger margin between easier pairs: we show that this is superior than using a single margin in our experiments. Analogous to the reasoning that in Equation 5 the margin ξ can just be 1, only the relative ratios between ξ’s are important. 
For simplicity, if the ontology has L levels, we assign

$$\xi_l = L - l + 1 \tag{7}$$

For example, given an ontology with 3 levels, the margins per level are (ξ1, ξ2, ξ3) = (3, 2, 1). (We did a hyperparameter search over these margin hyperparameters and found that Equation 7 generalized well.)

Flexible Threshold Equation 6 only ranks positive types higher than negative types, so that all children types given a parent type are ranked based on their relevance to the entity mention. What should the threshold between positive and negative types be? We could set the threshold to 0 (approaching the multi-label classification problem as a set of binary classification problems, see Lin and Ji (2019)), or tune an adaptive, type-specific threshold for each parent type (Zhang et al., 2018). Here, we propose a simpler method: directly use the parent node as the threshold. If a positive type is y, we learn the following ranking relation:

$$y \succ \bar{y} \succ y', \quad \forall y' \in \mathrm{Sb}(y) \tag{8}$$

where $\succ$ means "ranks higher than". For example, suppose a mention has gold type /person/artist/singer. Since the parent type /person/artist can be considered as a kind of prior for all types of artists, the model should learn that the positive type "singer" should have a higher confidence than "artist", and in turn, higher than other types of artists like "author" or "actor". Hence the ranker should learn that "a positive subtype should rank higher than its parent, and its parent should rank higher than its negative children." Under this formulation, at decoding time, given parent type y, a child subtype z <: y that scores higher than y should be output as a positive label.

We translate the ranking relation in Equation 8 into a ranking loss that extends Equation 6. In Equation 6, there is an expected margin ξ between positive types and negative types. Since we inserted the parent in the middle, we divide the margin ξ into αξ and (1 − α)ξ: αξ is the margin between the positive types and the parent, and (1 − α)ξ is the margin between the parent and the negative types. For a visualization, see Figure 3. The hyperparameter α ∈ [0, 1] can be used to tune the precision-recall tradeoff when outputting types: the smaller α, the smaller the expected margin between positive types and the parent. This intuitively increases precision but decreases recall (only very confident types can be output). Vice versa, increasing α decreases precision but increases recall. Therefore, we learn three sets of ranking relations from Equation 8: (i) positive types should be scored above the parent by αξ; (ii) the parent should be scored above any negative sibling types by (1 − α)ξ; (iii) positive types should be scored above negative sibling types by ξ. Our final hierarchical ranking loss is formulated as follows:

$$
\begin{aligned}
J_{y \succ \bar{y}} &= \big[\, \alpha\,\xi_{\mathrm{lev}(y)} - F(x, y) + F(x, \bar{y}) \,\big]_+ \\
J_{\bar{y} \succ y'} &= \sum_{y' \in \mathrm{Sb}(y) \setminus Y} \big[\, (1 - \alpha)\,\xi_{\mathrm{lev}(y)} - F(x, \bar{y}) + F(x, y') \,\big]_+ \\
J_{y \succ y'} &= \sum_{y' \in \mathrm{Sb}(y) \setminus Y} \big[\, \xi_{\mathrm{lev}(y)} - F(x, y) + F(x, y') \,\big]_+ \\
J_{\mathrm{hier}}(x, Y) &= \sum_{y \in Y} \big( J_{y \succ \bar{y}} + J_{\bar{y} \succ y'} + J_{y \succ y'} \big)
\end{aligned}
\tag{9}
$$

4.4 Decoding

Predicting the types for each entity mention can be performed via iterative searching on the type tree, from the root ENTITY node to coarser types, then to finer-grained types. This ensures that our output does not violate the hierarchical property, i.e., if a subtype is output, its parent must be output.
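Before presenting the decoding algorithm, we pause to make the training loss concrete. The following is a minimal PyTorch sketch of how Equation 9 could be computed for a single mention; it is our own illustration, not the authors' released implementation, and the dictionary-based score, parent, sibling, and level lookups (and all names) are assumptions made for the example.

```python
import torch

def hinge(x):
    # [x]_+ = max(0, x)
    return torch.clamp(x, min=0.0)

def hierarchical_ranking_loss(scores, gold_types, parent, siblings, level,
                              alpha=0.2, num_levels=3):
    """Equation 9 for a single mention (illustrative sketch only).

    scores:     dict type -> scalar tensor holding F(x, y)
    gold_types: set of positive types Y for this mention
    parent:     dict type -> its parent type (y_bar)
    siblings:   dict type -> list of sibling types Sb(y)
    level:      dict type -> level lev(y) of the type (1-based)
    num_levels: L in Equation 7 (depth of the ontology)
    """
    terms = []
    for y in gold_types:
        xi = float(num_levels - level[y] + 1)          # Equation 7: xi_l = L - l + 1
        y_bar = parent[y]
        # (i) positive type above its parent by a margin of alpha * xi
        terms.append(hinge(alpha * xi - scores[y] + scores[y_bar]))
        for y_neg in [z for z in siblings[y] if z not in gold_types]:
            # (ii) parent above negative siblings by (1 - alpha) * xi
            terms.append(hinge((1 - alpha) * xi - scores[y_bar] + scores[y_neg]))
            # (iii) positive type above negative siblings by xi
            terms.append(hinge(xi - scores[y] + scores[y_neg]))
    return torch.stack(terms).sum() if terms else torch.zeros(())
```

In practice the scores F(x, y) would come from the mention encoder and type embeddings; here they are treated as given so that the three ranking relations and their level-dependent margins stay visible.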
Algorithm 1 Decoding for Hierarchical Typing
1: function HIERTYPEDEC(F(x, ·))
2:   Q ← {ENTITY}                          ▷ queue for searching
3:   Ŷ ← ∅                                 ▷ set of output types
4:   repeat
5:     y ← DEQUEUE(Q)
6:     θ ← F(x, y) + δ_{lev(y)}            ▷ threshold value
7:     Z ← {z ∈ Ch(y) | F(x, z) > θ}       ▷ all decoded children types
8:     Z′ ← TOPK(Z, k_{lev(y)+1}, F(x, ·)) ▷ pruned by the max branching factors
9:     Ŷ ← Ŷ ∪ Z′
10:    for z ∈ Z′ do
11:      ENQUEUE(Q, z)
12:    end for
13:  until Q = ∅                           ▷ queue is empty
14:  return Ŷ                              ▷ return all decoded types
15: end function

Given an instance x, we compute the score F(x, y) for each type y ∈ Y. The search process starts with the root node ENTITY of the type tree in the queue. For each type y in the queue, a child node z <: y (subtype) is added to the predicted type set if F(x, z) > F(x, y), corresponding to the ranking relation in Equation 8 that the model has learned. (For the OntoNotes dataset, we introduce another set of per-level hyperparameters δ_{lev(y)}, and the threshold value F(x, y) is modified to F(x, y) + δ_{lev(y)}, akin to the adaptive threshold in Zhang et al. (2018). This is due to a large type distribution mismatch between the training and dev/test sets in OntoNotes: in dev/test there are many instances with the single type /other but not in the training set. For the other datasets these thresholds are unused, i.e., just 0.) Here we only take the top-k elements to add to the queue to prevent over-generating types. This can also be used to enforce the single-path property (by setting k = 1) if the dataset is single-path. For each level i in the type hierarchy, we limit the branching factor (allowed children) to k_i. The algorithm is listed in Algorithm 1, where the function TOPK(S, k, f) selects the top-k elements from S with respect to the function f.

4.5 Subtyping Relation Constraint

Each type y ∈ Y in the ontology is assigned a type embedding y ∈ R^{d_t}. We note the binary subtyping relation "<:" ⊆ Y × Y on the types. Trouillon et al. (2016) proposed the relation embedding method ComplEx, which works well with anti-symmetric and transitive relations such as subtyping. It has been employed in FET before — in Murty et al. (2018), ComplEx is added to the loss to regularize the type embeddings. ComplEx operates in the complex space — we use the natural isomorphism between real and complex spaces to map the type embedding into complex space (first half of the embedding vector as the real part, and the second half as the imaginary part):

$$\phi : \mathbb{R}^{d_t} \to \mathbb{C}^{d_t/2} \tag{10}$$
$$\mathbf{t} = [\,\mathrm{Re}\,\phi(\mathbf{t})\,;\ \mathrm{Im}\,\phi(\mathbf{t})\,] \tag{11}$$

We learn a single relation embedding r ∈ C^{d_t/2} for the subtyping relation. Given types y and z, the subtyping statement y <: z is modeled using the following scoring function:

$$r(\mathbf{y}, \mathbf{z}) = \mathrm{Re}\big(\, \mathbf{r} \cdot \big( \phi(\mathbf{y}) \odot \overline{\phi(\mathbf{z})} \big) \big) \tag{12}$$

where ⊙ is the element-wise product and $\overline{\mathbf{x}}$ is the complex conjugate of $\mathbf{x}$. If y <: z then r(y, z) > 0; and vice versa, r(y, z) < 0 if y ≮: z.

Loss Given an instance (x, Y), for each positive type y ∈ Y, we learn the following relations:

$$y <: \bar{y}; \qquad y \not<: y', \ \forall y' \in \mathrm{Sb}(y); \qquad y \not<: y', \ \forall y' \in \mathrm{Sb}(\bar{y}) \tag{13}$$

Translating these relation constraints into a binary classification problem ("is or is not a subtype") under a primal SVM, we get a hinge loss:

$$J_{\mathrm{rel}}(x, Y) = \sum_{y \in Y} \Big( \big[\, 1 - r(\mathbf{y}, \bar{\mathbf{y}}) \,\big]_+ + \sum_{y' \in \mathrm{Sb}(y) \cup \mathrm{Sb}(\bar{y})} \big[\, 1 + r(\mathbf{y}, \mathbf{y}') \,\big]_+ \Big). \tag{14}$$

This is different from Murty et al. (2018), where a binary cross-entropy loss on randomly sampled (y, y′) pairs is used.
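As a concrete reading of Equations 10 to 14, the sketch below scores a candidate subtyping statement with the ComplEx-style function and accumulates the hinge loss. It is an illustration under our own naming and tensor layout (the relation embedding is stored directly as a complex tensor of dimension d_t/2), not the paper's released code.

```python
import torch

def to_complex(t):
    # phi: R^{d_t} -> C^{d_t/2}; first half = real part, second half = imaginary part (Eqs. 10-11)
    d = t.shape[-1] // 2
    return torch.complex(t[..., :d], t[..., d:])

def subtype_score(r, y, z):
    # Equation 12: r(y, z) = Re( r . (phi(y) ⊙ conj(phi(z))) )
    return torch.sum(r * (to_complex(y) * torch.conj(to_complex(z)))).real

def relation_loss(r, emb, gold_types, parent, siblings):
    """Equation 14 for one instance (illustrative sketch only).

    r:          complex relation embedding of shape (d_t / 2,)
    emb:        dict type -> real-valued type embedding in R^{d_t}
    gold_types: set of positive types Y
    parent:     dict type -> parent type
    siblings:   dict type -> list of sibling types Sb(.)
    """
    loss = torch.zeros(())
    for y in gold_types:
        y_bar = parent[y]
        # y <: y_bar should score positively: hinge term [1 - r(y, y_bar)]_+
        loss = loss + torch.clamp(1.0 - subtype_score(r, emb[y], emb[y_bar]), min=0.0)
        for y_neg in set(siblings[y]) | set(siblings[y_bar]):
            # y is not a subtype of its siblings or its parent's siblings: [1 + r(y, y_neg)]_+
            loss = loss + torch.clamp(1.0 + subtype_score(r, emb[y], emb[y_neg]), min=0.0)
    return loss
```

Complex-valued tensors and their gradients are supported natively in recent PyTorch releases, which keeps the sketch close to the form of Equation 12.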
Our experiments showed that the loss in Equation 14 performs better than the cross-entropy version, due to the structure of the training pairs: we use siblings and siblings of parents as negative samples (these are types closer to the positive parent type), hence we train with more competitive negative samples.

4.6 Training and Validation

Our final loss is a combination of the hierarchical ranking loss and the subtyping relation constraint loss, with L2 regularization:

$$J_{\mathrm{hier}}(x, Y) + \beta\, J_{\mathrm{rel}}(x, Y) + \frac{\lambda}{2} \|\Theta\|_2^2 . \tag{15}$$

The AdamW optimizer (Loshchilov and Hutter, 2019) is used to train the model, as it has been shown to be superior to the original Adam under L2 regularization. The hyperparameters α (ratio of the margin above/below the threshold), β (weight of the subtyping relation constraint), and λ (L2 regularization coefficient) are tuned. At validation time, we tune the maximum branching factors for each level, k_1, · · · , k_L (for the OntoNotes dataset, this also includes the per-level thresholds δ_lev(·)). These parameters tune the trade-off between precision and recall for each layer and prevent over-generation (as we observed in some cases). All hyperparameters are tuned so that models achieve maximum micro F1 scores (see subsection 5.4).

5 Experiments

5.1 Datasets

AIDA The AIDA Phase 1 practice dataset for hierarchical entity typing comprises 297 documents from LDC2019E04 / LDC2019E07, and the evaluation dataset is from LDC2019E42 / LDC2019E77. We take only the English part of the data, and use the practice dataset as train/dev and the evaluation dataset as test. The practice dataset comprises 3 domains, labeled as R103, R105, and R107. Since the evaluation dataset is out-of-domain, we use the smallest domain R105 as dev, and the remaining R103 and R107 as train. The AIDA entity dataset has a 3-level ontology, termed type, subtype, and subsubtype. A mention can only have one label for each level, hence the dataset is single-path, and thus the branching factors (k1, k2, k3) for the three layers are set to (1, 1, 1).

BBN Weischedel and Brunstein (2005) labeled a portion of the one-million-word Penn Treebank corpus of Wall Street Journal texts (LDC95T7) using a two-level hierarchy, resulting in the BBN Pronoun Coreference and Entity Type Corpus. We follow the train/test split by Ren et al. (2016b), and follow the train/dev split by Zhang et al. (2018).

OntoNotes Gillick et al. (2014) sampled sentences from the OntoNotes corpus and annotated the entities using 89 types. We follow the train/dev/test data split by Shimaoka et al. (2017).

Dataset | Train | Dev | Test | # Levels | # Types | Multi-path? | α | β | λ | pD | k1,···,L
AIDA | 2,492 | 558 | 1,383 | 3 | 187 | single-path | 0.1 | 0.3 | 0.1 | 0.5 | (1,1,1)
BBN | 84,078 | 2,000 | 13,766 | 2 | 56 | multi-path | 0.2 | 0.1 | 0.003 | 0.5 | (2,1)
OntoNotes | 251,039 | 2,202 | 8,963 | 3 | 89 | multi-path | 0.15 | 0.1 | 0.001 | 0.5 | (2,1,1)
FIGER | 2,000,000 | 10,000 | 563 | 2 | 113 | multi-path | 0.2 | 0.1 | 0.0001 | 0.5 | (2,1)
Table 1: Statistics of various datasets and their corresponding hyperparameter settings.

FIGER Ling and Weld (2012) sampled a dataset from Wikipedia articles and news reports. Entity mentions in these texts are mapped to a 113-type ontology derived from Freebase (Bollacker et al., 2008). Again, we follow the data split by Shimaoka et al. (2017). The statistics of these datasets and their accompanying ontologies are listed in Table 1, together with their respective hyperparameters.8

5.2 Setup

To best compare to recent prior work, we follow Lin and Ji (2019) where the ELMo encodings of words are fixed and not updated.
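Before listing the remaining implementation details, we step back to the optimization of subsection 4.6. The sketch below shows how the combined objective in Equation 15 might be minimized with AdamW, reusing the illustrative loss functions from the previous sketches. The explicit (λ/2)‖Θ‖² term mirrors Equation 15 (AdamW's built-in weight_decay implements decoupled decay, which is related but not identical); the model interface and all hyperparameter values here are our assumptions, not the released code.

```python
import torch

def train(model, train_batches, alpha=0.2, beta=0.1, lam=1e-3, lr=1e-3):
    """One-epoch training sketch for Equation 15 (illustrative only).

    `model` is assumed to expose scores(x), parent, siblings, level,
    rel_emb and type_emb; these attribute names are hypothetical.
    Each batch is assumed to be a list of (x, Y) instance/type-set pairs.
    """
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for batch in train_batches:
        optimizer.zero_grad()
        j_hier = sum(hierarchical_ranking_loss(model.scores(x), Y, model.parent,
                                               model.siblings, model.level, alpha=alpha)
                     for x, Y in batch)
        j_rel = sum(relation_loss(model.rel_emb, model.type_emb, Y,
                                  model.parent, model.siblings)
                    for _, Y in batch)
        l2 = sum((p ** 2).sum() for p in model.parameters())
        # Equation 15: J_hier + beta * J_rel + (lambda / 2) * ||Theta||_2^2
        loss = j_hier + beta * j_rel + 0.5 * lam * l2
        loss.backward()
        optimizer.step()
```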
We use all 3 layers of ELMo output, so the initial embedding has dimension dw = 3072. We set the type embedding dimensionality to be dt = 1024. The initial learning rate is 10−5 and the batch size is 256. Hyperparameter choices are tuned on dev sets, and are listed in Table 1. We employ early stopping: choosing the model that yields the best micro F1 score on dev sets. Our models are implemented using AllenNLP (Gardner et al., 2018), with implementation for subtyping relation constraints from OpenKE (Han et al., 2018). 5.3 Baselines We compare our approach to major prior work in FET that are capable of multi-path entity typing.9 For AIDA, since there are no prior work on this dataset to our knowledge, we also implemented multi-label classification as set of binary classifier models (similar to Lin and Ji (2019)) as a baseline, with our mention feature extractor. The results are shown in Table 2 as “Multi-label”. 8 The OntoNotes dataset has an additional set of hyperparameters, i.e. the per-level threshold δ1,2,3 = (2.5, 3.0, 0.0). 9 Zhang et al. (2018) included document-level information in their best results—for fair comparison, we used their results without document context, as are reported in their ablation tests. 5.4 Metrics We follow prior work and use strict accuracy (Acc), macro F1 (MaF), and micro F1 (MiF) scores. Given instance xi, we denote the gold type set as Yi and the predicted type set ˆYi. The strict accuracy is the ratio of instances where Yi = ˆYi. Macro F1 is the average of all F1 scores between Yi and ˆYi for all instances, whereas micro F1 counts total true positives, false negatives and false positives globally. We also investigate per-level accuracies on AIDA. The accuracy on level l is the ratio of instances whose predicted type set and gold type set are identical at level l. If there is no type output at level l, we append with OTHER to create a dummy type at level l: e.g. /person/OTHER/OTHER. Hence accuracy of the last level (in AIDA, level 3) is equal to the strict accuracy. 5.5 Results and Discussions All our results are run under the two conditions regarding partial type paths: exclusive or undefined. The result of the AIDA dataset is shown in Table 2. Our model under the exclusive case outperforms a multi-label classification baseline over all metrics. Of the 187 types specified in the AIDA ontology, the train/dev set only covers 93 types. The test set covers 85 types, of which 63 are seen types. We could perform zero-shot entity typing by initializing a type’s embedding using the type name (e.g. /fac/structure/plaza) together with its description (e.g. “An open urban public space, such as a city square”) as is designated in the data annotation manual. We leave this as future work. Approach L1 L2 L3 MaF MiF Ours (exclusive) 81.6 43.1 32.0 60.6 60.0 Ours (undefined) 80.0 43.3 30.2 59.3 58.0 −Subtyping constraints 80.3 40.9 29.9 59.1 58.3 −Multi-level margins 76.9 40.2 29.8 57.4 56.9 Multi-label 80.5 42.1 30.7 59.7 57.9 Table 2: Results on the AIDA dataset. 8472 Approach BBN OntoNotes FIGER Acc MaF MiF Acc MaF MiF Acc MaF MiF Ling and Weld (2012) 46.7 67.2 61.2 −† 52.3 69.9 69.3 Ren et al. (2016b) 49.4 68.8 64.5 51.6 67.4 62.4 49.4 68.8 64.5 Ren et al. (2016a) 67.0 72.7 73.5 55.1 71.1 64.7 53.3 69.3 66.4 Abhishek et al. (2017) 60.4 74.1 75.7 52.2 68.5 63.3 59.0 78.0 74.9 Shimaoka et al. (2017) −† 51.7 71.0 64.9 59.7 79.0 75.4 Murty et al. (2018) −† −† 59.7 78.3 75.4 Zhang et al. 
(2018) 58.1 75.7 75.1 53.2 72.1 66.5 60.2‡ 78.7‡ 75.5‡ Lin and Ji (2019) 55.9 79.3 78.1 63.8* 82.9* 77.3* 62.9 83.0 79.8 Ours (exclusive) 48.2 63.2 61.0 58.3 72.4 67.2 69.1 82.6 80.8 Ours (undefined) 75.2 79.7 80.5 58.7 73.0 68.1 65.5 80.5 78.1 −Subtyping constraint 73.2 77.8 78.4 58.3 72.2 67.1 65.4 81.4 79.2 −Multi-level margins 68.9 73.2 74.2 58.5 71.7 66.0 68.1 80.4 78.0 †: Not run on the specific dataset; *: Not strictly comparable due to non-standard, much larger training set; ‡: Result has document-level context information, hence not comparable. Table 3: Results of common FET datasets: BBN, OntoNotes, and FIGER. Numbers in italic are results obtained with various augmentation techniques, either larger data or larger context, hence not directly comparable. Results for the BBN, OntoNotes, and FIGER can be found in Table 3. Across 3 datasets, our method produces the state-of-the-art performance on strict accuracy and micro F1 scores, and stateof-the-art or comparable (±0.5%) performance on macro F1 score, as compared to prior models, e.g. (Lin and Ji, 2019). Especially, our method improves upon the strict accuracy substantially (4%– 8%) across these datasets, showing our decoder are better at outputting exact correct type sets. Partial type paths: exclusive or undefined? Interestingly, we found that for AIDA and FIGER, partial type paths should be better considered as exclusive, whereas for BBN and OntoNotes, considering them as undefined leads to better performance. We hypothesize that this comes from how the data is annotatated—the annotation manual may contain directives as whether to interpret partial type paths as exclusive or undefined, or the data may be non-exhaustively annotated, leading to undefined partial types. We advocate for careful investigation into partial type paths for future experiments and data curation. Ablation Studies We compare our best model with various components of our model removed, to study the gain from each component. From the best of these two settings (exclusive and undefined), we report the performance of (i) removing the subtyping constraint as is described in subsection 4.5; (ii) substituting the multi-level margins in Equation 7 with a “flat” margin, i.e., margins on all levels are set to be 1. These results are shown in Table 2 and Table 3 under our best results, and they show that both multi-level margins and subtyping relation constraints offer orthogonal improvements to our models. Error Analysis We identify common patterns of errors, coupled with typical examples: • Confusing types: In BBN, our model outputs /gpe/city when the gold type is /location/region for “... in shipments from the Valley of either hardware or software goods.” These types are semantically similar, and our model failed to discriminate between these types. • Incomplete types: In FIGER, given instance “... multi-agency investigation headed by the U.S. Immigration and Customs Enforcement ’s homeland security investigations unit”, the gold types are /government agency and /organization, but our model failed to output /organization. • Focusing on only parts of the mention: In AIDA, given instance “... suggested they were the work of Russian special forces assassins 8473 out to blacken the image of Kievs proWestern authorities”, our model outputs /org/government whereas the gold type is /per/militarypersonnel. Our model focused on the “Russian special forces” part, but ignored the “assassins” part. 
Better mention representation is required to correct this, possibly by introducing type-aware mention representation—we leave this as future work. 6 Conclusions We proposed (i) a novel multi-level learning to rank loss function that operates on a type tree, and (ii) an accompanying coarse-to-fine decoder to fully embrace the ontological structure of the types for hierarchical entity typing. Our approach achieved state-of-the-art performance across various datasets, and made substantial improvement (4–8%) upon strict accuracy. Additionally, we advocate for careful investigation into partial type paths: their interpretation relies on how the data is annotated, and in turn, influences typing performance. Acknowledgements We thank our colleague Guanghui Qin and the anonymous reviewers for their insightful suggestions and comments. This research benefited from support by the JHU Human Language Technology Center of Excellence (HLTCOE), and DARPA AIDA. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government. References Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 797–807. Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, pages 1247–1250. Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 87–96. Hongliang Dai, Donghong Du, Xin Li, and Yangqiu Song. 2019. Improving fine-grained entity typing with entity linking. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 6209–6214, Hong Kong, China. Association for Computational Linguistics. Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. FINET: context-aware fine-grained named entity typing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 868–878. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. 
In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1– 6, Melbourne, Australia. Association for Computational Linguistics. Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Contextdependent fine-grained entity type tagging. CoRR, abs/1412.1820. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2681–2690. Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. OpenKE: An open toolkit for knowledge embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 139–144. 8474 Hailong Jin, Lei Hou, Juanzi Li, and Tiansi Dong. 2019. Fine-grained entity typing via hierarchical multi graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4968–4977, Hong Kong, China. Association for Computational Linguistics. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 23-26, 2002, Edmonton, Alberta, Canada, pages 133–142. Ying Lin and Heng Ji. 2019. An attentive fine-grained entity typing model with latent type representation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6198– 6203, Hong Kong, China. Association for Computational Linguistics. Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 2226, 2012, Toronto, Ontario, Canada., pages 94–100. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–1421. Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hierarchical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 97–109. Yasumasa Onoe and Greg Durrett. 2019. Learning to denoise distantly-labeled data for entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 27, 2019, Volume 1 (Long and Short Papers), pages 2407–2417. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Jonathan Raiman and Olivier Raiman. 2018. Deeptype: Multilingual entity linking by neural type system evolution. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5406–5413. Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016a. AFET: automatic finegrained entity typing by hierarchical partial-label embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1369–1378. Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016b. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1825–1834. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural architecture for fine-grained entity type classification. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction, AKBC@NAACLHLT 2016, San Diego, CA, USA, June 17, 2016, pages 69–74. Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 1271–1280. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2071–2080. Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Philadelphia: Linguistic Data Consortium. Jason Weston and Chris Watkins. 1999. Support vector machines for multi-class pattern recognition. In ESANN 1999, 7th European Symposium on Artificial Neural Networks, Bruges, Belgium, April 21-23, 1999, Proceedings, pages 219–224. 8475 Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2019. Imposing label-relational inductive bias for extremely fine-grained entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 773–784. Peng Xu and Denilson Barbosa. 2018. Neural finegrained entity type classification with hierarchyaware loss. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 16–25. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2015. Corpus-level fine-grained entity typing using contextual information. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 715–725. Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers, pages 291–296. Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. HYENA: hierarchical type classification for entity names. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Posters, 8-15 December 2012, Mumbai, India, pages 1361–1370. Sheng Zhang, Kevin Duh, and Benjamin Van Durme. 2018. Fine-grained entity typing through increased discourse context and adaptive classification thresholds. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2018, New Orleans, Louisiana, USA, June 5-6, 2018, pages 173–179.
2020
749
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 813–822 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 813 “The Boating Store Had Its Best Sail Ever”: Pronunciation-attentive Contextualized Pun Recognition Yichao Zhou, Jyun-yu Jiang, Jieyu Zhao, Kai-Wei Chang and Wei Wang Computer Science Department University of California, Los Angeles {yz, jyunyu, jyzhao, kwchang, weiwang}@cs.ucla.edu Abstract Humor plays an important role in human languages and it is essential to model humor when building intelligence systems. Among different forms of humor, puns perform wordplay for humorous effects by employing words with double entendre and high phonetic similarity. However, identifying and modeling puns are challenging as puns usually involved implicit semantic or phonological tricks. In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect if a sentence contains puns and locate them in the sentence. PCPR derives contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols. Extensive experiments are conducted on two benchmark datasets. Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in pun detection and location tasks. In-depth analyses verify the effectiveness and robustness of PCPR. 1 Introduction During the last decades, social media has promoted the creation of a vast amount of humorous web contents (Nijholt et al., 2017). Automatic recognition of humor has become an important task in the area of figurative language processing, which can benefit various downstream NLP applications such as dialogue systems, sentiment analysis, and machine translation (Melby and Warner, 1995; Augello et al., 2008; Ghosh et al., 2015; Bertero and Fung, 2016; Blinov et al., 2019). However, humor is one of the most complicated behaviors in natural language semantics and sometimes it is even difficult for humans to interpret. In most cases, understanding humor requires adequate background knowledge and a rich context. Homographic Puns 1. Did you hear about the guy whose whole left side was cut off? He’s all right now. 2. I’d tell you a chemistry joke but I know I wouldn’t get a reaction. Heterographic Puns 1. The boating store had its best sail (sale) ever. 2. I lift weights only on Saturday and Sunday because Monday to Friday are weak (week) days. Table 1: Examples of homographic and heterographic puns. Puns are a form of humorous approaches using the different meanings of identical words or words with similar pronunciations to explain texts or utterances. There are two main types of puns. Homographic puns rely on multiple interpretations of the same word. As shown in Table 1, the phrase all right means good condition or opposite to left; the word reaction means chemical change or action. The two meanings of the same expression are consistent with its context, which creates a humorous pun in both sentences when there is a clear contrast between two meanings. On the other hand, heterographic puns take advantage of phonologically same or similar words. For example, the word pairs sale and sail, weak and week in Table 1 have the same or similar pronunciations. The sentences are funny because both words fit the same context. Understanding puns is a big fish to fry for deep comprehension of complex semantics. 
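As a rough illustration of the phonological signal behind heterographic puns such as sale/sail and weak/week, the snippet below compares phoneme sequences from the CMU Pronouncing Dictionary, accessed here through NLTK; NLTK is our choice for the example rather than something prescribed by this paper.

```python
# Requires: pip install nltk, then nltk.download('cmudict') once.
from nltk.corpus import cmudict

pron = cmudict.dict()  # maps a lowercase word to its possible phoneme sequences

def phonemes(word):
    # Take the first listed pronunciation and strip stress digits (e.g. 'EY1' -> 'EY').
    return [p.rstrip('012') for p in pron[word.lower()][0]]

for a, b in [("sale", "sail"), ("weak", "week"), ("son", "sun")]:
    print(a, phonemes(a), "|", b, phonemes(b), "| identical:", phonemes(a) == phonemes(b))
# e.g. sale ['S', 'EY', 'L'] | sail ['S', 'EY', 'L'] | identical: True
```

Word pairs exploited by heterographic puns typically share identical or near-identical phoneme sequences despite different spellings, which is the kind of signal the approach described below relies on.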
These two forms of puns have been studied in literature from different angles. To recognize puns in a sentence, word sense disambiguation techniques (WSD) (Navigli, 2009) have been employed to identify the equitable intention of words in utterances (Pedersen, 2017). External knowledge bases such as WordNet (Miller, 1998b) have been applied in determining word senses of pun words (Oele and Evang, 2017). However, these methods cannot tackle heterographic puns with distinct word 814 spellings and knowledge bases that only contain a limited vocabulary. To resolve the issues of sparseness and heterographics, the word embedding techniques (Mikolov et al., 2013; Pennington et al., 2014) provide flexible representations to model puns (Hurtado et al., 2017; Indurthi and Oota, 2017; Cai et al., 2018). However, a word may have different meanings regarding its contexts. Especially, an infrequent meaning of the word might be utilized for creating a pun. Therefore, static word embeddings are insufficient to represent words. In addition, some puns are created by replacing a word with another word with the same or similar pronunciation as examples shown in Table 1. Therefore, to recognize puns, it is essential to model the association between words in the sentence and the pronunciation of words. Despite existing approaches attempt to leverage phonological structures to understand puns (Doogan et al., 2017; Jaech et al., 2016), there is a lack of a general framework to model these two types of signals in a whole. In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to jointly model the contextualized word embeddings and phonological word representations for pun recognition. To capture the phonological structures of words, we break each word into a sequence of phonemes as its pronunciation so that homophones can have similar phoneme sets. For instance, the phonemes of the word pun are {P, AH, N}. In PCPR, we construct a pronunciation attentive module to identify important phonemes of each word, which can be applied in other tasks related to phonology. We jointly encode the contextual and phonological features into a self-attentive embedding to tackle both pun detection and location tasks. We summarize our contributions as following. • To the best of our knowledge, PCPR is the first work to jointly model contextualized word embeddings and pronunciation embeddings to recognize puns. Both contexts and phonological properties are beneficial to pun recognition. • Extensive experiments are conducted on two benchmark datasets. PCPR significantly outperforms existing methods in both pun detection and pun location. In-depth analyses also verify the effectiveness and robustness of PCPR. • We release our implementations and pre-trained phoneme embeddings at https://github.com/ joey1993/pun-recognition to facilitate future research. 2 Related Work Pun Recognition and Generation To recognize puns, Miller et al. (2017) summarize several systems for the SemEval 2017 tasks. To detect the pun, Pedersen (2017) supposes that if there is one pun in the sentence, when adopting different Word Sense Disambiguation (WSD) methods, the sense assigned to the sentence will be different. To locate the pun, based on the WSD results for pun detection, they choose the last word which changes the senses between different WSD runs. Even though this method can tackle both homographic and heterographic pun detection, it does not use any pretrained embedding model. Xiu et al. 
(2017) detect the pun in the sentence using similarity features which are calculated on sense vectors or cluster center vectors. To locate the pun, they use an unsupervised system by scoring each word in the sentence and choosing the word with the smallest score. However, this model exclusively relies on semantics to detect the heterographic puns but ignores the rich information embedded in the pronunciations. Doogan et al. (2017) leverage word embeddings as well as the phonetic information by concatenating pronunciation strings, but the concatenation has limited expression ability. They also mention that their systems suffer for short sentences as word embeddings do not have much context information. Besides, Zou and Lu (2019) jointly detect and locate the pun from a sequence labeling perspective by employing a new tagging schema. Diao et al. (2018) expand word embeddings using WordNet to settle the polysemy of homographic puns, following by a neural attention mechanism to extract the collocation to detect the homographic pun. However, all these methods only make use of limited context information. Other than the pun recognition, Yu et al. (2018) generate homographic puns without requiring any pun data for training. He et al. (2019) improve the homographic pun generation based on the “local-global surprisal principle” which posits that the pun word and the alternative word have a strong association with the distant and immediate context respectively. Pronunciation Embeddings Word embeddings assign each word with a vector so that words with similar semantic meanings are close in the embedding space. Most word embedding models only make use of text information and omitting the rich information contained in the pronunciation. How815 ever, the pronunciation is also an important part of the language (Zhu et al., 2018). Prior studies have demonstrated that the phonetic information can be used in speech recognition (Bengio and Heigold, 2014), spell correction (Toutanova and Moore, 2002) and speech synthesis (Miller, 1998a). By projecting to the embedding space, words sound alike are nearby to each other (Bengio and Heigold, 2014). Furthermore, Kamper et al. (2016) make use of word pairs information to improve the acoustic word embedding. Zhu et al. (2018) show that combining the pronunciation with the writing texts can help to improve the performance of word embeddings. However, these pronunciation embeddings are word-level features, while in our approach, we make use of syllabic pronunciations which is phoneme-level and could help with the out-of-vocabulary (OOV) situation. Luo et al. (2019) also propose an adversarial generative network for pun generation, which does not require any pun corpus. Contextualized Word Embeddings Traditional word embeddings assign a fixed vector to one word even if the word has multiple meanings under different contexts (e.g., “the river bank” v.s. “the commercial bank”). McCann et al. (2017) combine the pivot word embeddings as well as the contextual embeddings generated by an encoder from a supervised neural machine translation task. Peters et al. (2017) enrich the word embeddings by the contextual information extracted from a bidirectional language model. (Devlin et al., 2018) learn the language embedding by stacking multiple transformer layers with masked language model objective which advances the state-of-the-art for many NLP tasks. Yang et al. 
(2019) enable learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and solve the problem of pretrain-finetune discrepancy. 3 Pronunciation-attentive Contextualized Pun Recognition In this section, we first formally define the problem and then introduce the proposed method, PCPR. 3.1 Problem Statement Suppose the input text consists of a sequence of N words {w1, w2, · · · , wN}. For each word wi with Mi phonemes in its pronunciation, the phonemes are denoted as R(wi) = {ri,1, ri,2, · · · , ri,Mi}, where ri,j is the j-th phoneme in the pronunciation of wi. These phonemes are given by a dictionary. In this paper, we aim to recognize potential puns in the text with two tasks, including pun detection and pun location, as described in the following. Task 1: Pun Detection. The pun detection task identifies whether a sentence contains a pun. Formally, the task is modeled as a classification problem with binary label yD. Task 2: Pun Location. Given a sentence containing at least a pun, the pun location task aims to unearth the pun word. More precisely, for each word wi, we would like to predict a binary label yL i that indicates if wi is a pun word. In addition to independently solving the above two tasks, the ultimate goal of pun recognition is to build a pipeline from scratch to detect and then locate the puns in texts. Hence, we also evaluate the end-to-end performance by aggregating the solutions for two tasks. 3.2 Framework Overview Figure 1 shows the overall framework of the proposed Pronunciation-attentive Contextualized Pun Recognition (PCPR). For each word in the input text, we first derive two continuous vectors, including contextualized word embedding and pronunciation embedding, as representations in different aspects. Contextualized word embeddings derive appropriate word representations with consideration of context words and capture the accurate semantics in the text. To learn the phonological characteristics, each word is divided into phonemes while each phoneme is projected to a phoneme embedding space, thereby obtaining pronunciation embeddings with the attention mechanism (Bahdanau et al., 2015). Finally, a self-attentive encoder blends contextualized word embeddings and pronunciation embeddings to capture the overall semantics for both pun detection and location. 3.3 Contextualized Word Embeddings The context is essential for interpreting a word in the text. Hence, we propose to apply contextualized word embeddings to derive word representations. In the framework of PCPR, any contextualized word embedding method, such as BERT (Devlin et al., 2018), ELMo (Peters et al., 2018), and XLNet (Yang et al., 2019), can be utilized. Here, we choose BERT to derive contextualized word embeddings without loss of generality. 816 Input Embeddings · · · · · · E1 E2 EN · · · Input Words w1 w2 wN · · · r1,M1 · · · · · · r1,1 r2,1 r2,M2 rN,1 rN,MN u1,1 u1,M1 u2,1 u2,M2 uN,MN uN,1 Word Phonemes · · · T C 1 T C 2 T C N T P N T P 1 T P 2 Phoneme Embeddings Phonological Attention T C [CLS] E[CLS] · · · Joint Embeddings · · · · · · · · · T J 1 T J 2 T J N T J [CLS] Pun Detection Prediction ˆyD ˆyL 1 ˆyL 2 ˆyL N Pun Location Predictions Pronunciation Embeddings Contextualized Word Embeddings Contextualized Word Encoder Self-attentive Encoder Figure 1: The overall framework of PCPR. We leverage the self-attention mechanism to jointly model contextualized embeddings and phonological representations. 
PCPR can tackle both pun detection and pun location tasks. BERT deploys a multi-layer bidirectional encoder based on transformers with multi-head selfattention (Vaswani et al., 2017) to model words in the text after integrating both word and position embeddings (Sukhbaatar et al., 2015). As a result, for each word, a representative contextualized embedding is derived by considering both the specific word and all contexts in the document. Here we denote T C i as the dC-dimensional contextualized word embedding for the word wi. In addition, BERT contains a special token [CLS] with an embedding vector in BERT to represent the semantics of the whole input text. 3.4 Pronunciation Embeddings To learn the phonological characteristics of words, PCPR models the word phonemes. For each phoneme ri,j of the word wi, we project ri,j to a dP -dimensional embedding space as a trainable vector ui,j to represent its phonological properties. Based on the phoneme embeddings of a word, we apply the attention mechanism (Bahdanau et al., 2015) to simultaneously identify important phonemes and derive the pronunciation embedding T P i . Specifically, the phoneme embeddings are transformed by a fully-connected hidden layer to measure the importance scores αP i as follows: vi,j = tanh(FP (ui,j)), αP i,j = v⊺ i,jvs P k v⊺ i,kvs , where FP (·) is a fully-connected layer with dA outputs and dA is the attention size; vs is a dAdimensional context vector that estimates the importance score of each pronunciation embedding. Finally, the pronunciation embeddings T P i can be represented as the weighted combination of phoneme embeddings as follows: T P i = X j αi,jui,j. Moreover, we can further derive the joint embedding T J i to indicate both word semantics and phonological knowledge for the word wi by concatenating two different embeddings as follows: T J i =  T C i ; T P i  . Note that the joint embeddings are dJ-dimensional vectors, where dJ = dC + dP . 3.5 Pronunciation-attentive Contextualized Embedding with Self-attention For the task of pun detection, understanding the meaning of input text is essential. Due to its advantages of interpretability over convolutional neural network (LeCun et al., 1995) and recurrent neural network (Schuster and Paliwal, 1997), we deploy the self-attention mechanism (Vaswani et al., 2017) to capture the overall semantics represented in the joint embeddings. For each word wi, the self-attention mechanism estimates an importance vector αS i : FS(T) = Softmax(TT ⊺ √ d )T, αS i = exp(FS(T J i )) P j exp(FS(T J j )), where FS(·) is the function to estimate the attention for queries, and d is a scaling factor to avoid extremely small gradients. Hence, the self-attentive embedding vector is computed by aggregating joint embeddings: T J [ATT] = X i αS i · T J i . 817 Note that the knowledge of pronunciations is considered by the self-attentive encoder but not the contextualized word encoder. Finally, the pronunciation-attentive contextualized representation for the whole input text can be derived by concatenating the overall contextualized embedding and the self-attentive embedding: T J [CLS] =  T C [CLS]; T J [ATT]  . Moreover, each word wi is benefited from the selfattentive encoder and is represented by a joint embedding: T J i,[ATT] = αS i · T J i . 3.6 Inference and Optimization Based on the joint embedding for each word and the pronunciation-attentive contextualized embedding for the whole input text, both tasks can be tackled with simple fully-connected layers. Pun Detection. 
Pun detection is modeled as a binary classification task. Given the overall embedding for the input text T J [CLS], the prediction ˆyD is generated by a fully-connected layer and the softmax function: ˆyD = argmax k∈{0,1} FD(T J [CLS])k, where FD(·) derives the logits of two classes in binary classification. Pun Location. For each word wi, the corresponding self-attentive joint embedding T J i,[ATT] is applied as features for pun location. Similar to pun detection, the prediction ˆyL i is generated by: ˆyL i = argmax k∈{0,1} FL(T J i,[ATT])k, where FL(·) derives two logits for classifying if a word is a pun word. Since both tasks focus on binary classification, we optimize the model with cross-entropy loss. 4 Experiments In this section, we describe our experimental settings and explain the results and interpretations. We will verify some basic assumptions of this paper: (1) the contextualized word embeddings and pronunciation embeddings are both beneficial to the pun detection and location tasks; (2) the attention mechanism can improve the performance. Dataset SemEval PTD Homo Hetero Examples w/ Puns 1,607 1,271 2,423 Examples w/o Puns 643 509 2,403 Total Examples 2,250 1,780 4,826 Table 2: Data statistics. “Homo” and “Hetero” denote homographic and heterographic puns. Pun detection employs all of the examples in the two datasets while pun location only exploits the examples with puns in SemEval due to the limitation of annotations. 4.1 Experiment settings Experimental Datasets. We conducted experiments on the SemEval 2017 shared task 7 dataset1 (SemEval) (Miller et al., 2017) and the Pun of The Day dataset (PTD) (Yang et al., 2015). For pun detection, the SemEval dataset consists of 4, 030 and 2, 878 examples for pun detection and location while each example with a pun can be a homographic or heterographic pun. In contrast, the PTD dataset contains 4, 826 examples without labels of pun types. Table 2 further shows the data statistics. The two experimental datasets are the largest publicly available benchmarks that are used in the existing studies. SemEval-2017 dataset contains punning and non-punning jokes, aphorisms, and other short texts composed by professional humorists and online collections. Hence, we assume the genres of positive and negative examples should be identical or extremely similar. Evaluation Metrics. We adopt precision (P), recall (R), and F1-score (Sch¨utze et al., 2007; Powers, 2011) to compare the performance of PCPR with previous studies in both pun detection and location. More specifically, we apply 10-fold crossvalidation to conduct evaluation. For each fold, we randomly select 10% of the instances from the training set for development. To conduct fair comparisons, we strictly follow the experimental settings in previous studies (Zou and Lu, 2019; Cai et al., 2018) and include their reported numbers in the comparisons. Implementation Details. For data pre-processing, all of the numbers and punctuation marks are removed. The phonemes of each word are derived by the CMU Pronouncing Dictionary2. We initialize the phoneme embeddings by using the fastText 1http://alt.qcri.org/semeval2017/ task7/ 2http://svn.code.sf.net/p/cmusphinx/ code/trunk/cmudict/ 818 4 8 16 32 64 128 Phoneme embedding size (dP) 0.88 0.90 0.92 F1 score homographic heterographic (a) Phoneme emb. 
size dP 16 32 64 128 256 512 Attention size (dA) 0.86 0.88 0.90 0.92 F1 score homographic heterographic (b) Attention size dA Figure 2: Pun location performance over different phoneme embedding sizes dP and attention sizes dA on the SemEval dataset. word embedding (Mikolov et al., 2018) trained on Wikipedia articles3 crawled in December, 2017. The PCPR is implemented in PyTorch while the fused Adam optimizer (Kingma and Ba, 2014) optimizes the parameters with an initial learning rate of 5 × 10−5. The dropout and batch size are set as 10−1 and 32. We follow BERT (BASE) (Devlin et al., 2018) to use 12 Transformer layers and self-attention heads. To clarify, in PCPR, tokens and phonemes are independently processed, so the tokens processed with WordPiece tokenizer (Wu et al., 2016) in BERT are not required to line up with phonemes for computations. To deal with the out-of-vocabulary words, we use the output embeddings of the first WordPiece tokens as the representatives, which is consistent with many state-of-theart named entity recognition approaches (Devlin et al., 2018; Lee et al., 2019). We also create a variant of PCPR called CPR by exploiting only the contextualized word encoder without considering phonemes to demonstrate the effectiveness of pronunciation embeddings. To tune the hyperparameters, we search the phoneme embedding size dP and the attention size dA from {8, 16, 32, 64, 128, 256, 512} as shown in Figure 2. For the SemEval dataset, the best setting is (dP = 64, dA = 256) for the homographic puns while heterographic puns favor (dP = 64, dA = 32). For the PTD dataset, (dP = 64, dA = 32) can reach the best performance. Baseline Methods. We compare PCPR with several baseline methods. For the SemEval dataset, nine baseline methods are compared in the experiments, including Duluth (Pedersen, 2017), JU CES NLP (Pramanick and Das, 2017), PunFields (Mikhalkova and Karyakin, 2017), UWAV (Vadehra, 2017), Fermi (Indurthi and Oota, 2017), and 3https://dumps.wikimedia.org/enwiki/ latest/ UWaterloo (Vechtomova, 2017). While most of them extract complicated linguistic features to train rule based and machine learning based classifiers. In addition to task participants, Sense (Cai et al., 2018) incorporates word sense representations into RNNs to tackle the homographic pun location task. The CRF (Zou and Lu, 2019) captures linguistic features such as POS tags, n-grams, and word suffix to model puns. Moreover, the Joint (Zou and Lu, 2019) jointly models two tasks with RNNs and a CRF tagger. For the PTD dataset, four baseline methods with reported performance are selected for comparisons. MCL (Mihalcea and Strapparava, 2005) exploits word representations with multiple stylistic features while HAE (Yang et al., 2015) applies a random forest model with Word2Vec and humancentric features. PAL (Chen and Lee, 2017) trains a convolutional neural network (CNN) to learn essential feature automatically. Based on existing CNN models, HUR (Chen and Soo, 2018) improves the performance by adjusting the filter size and adding a highway layer. 4.2 Experimental Results Pun Detection. Table 3 presents the pun detection performance of methods for both homographic and heterographic puns on the SemEval dataset while Table 4 shows the detection performance on the PTD dataset. For the SemEval dataset, compared to the nine baseline models, PCPR achieves the highest performance with 3.0% and 6.1% improvements of F1 against the best among the baselines (i.e. 
Joint) for the homographic and heterographic datasets, respectively. For the PTD dataset, PCPR improves against HUR by 9.6%. Moreover, the variant CPR beats all of the baseline methods and shows the effectiveness of contextualized word embeddings. In addition, PCPR further improves the performances by 2.3% and 1.1% with the attentive pronunciation feature for detecting homographic and heterographic puns, respectively. An interesting observation is that pronunciation embeddings also facilitate homographic pun detection, implying the potential of pronunciation for enhancing general language modeling. Pun Location. Table 3 shows that the proposed PCPR model achieves highest F1-scores on both homographic and heterographic pun location tasks with 10.9% and 15.9% incredible increment against the best baseline method. The improvement is 819 Model Homographic Puns Heterographic Puns Pun Detection Pun Location Pun Detection Pun Location P R F1 P R F1 P R F1 P R F1 Duluth 78.32 87.24 82.54 44.00 44.00 44.00 73.99 86.62 68.71 JU CSE NLP 72.51 90.79 68.84 33.48 33.48 33.48 73.67 94.02 71.74 37.92 37.92 37.92 PunFields 79.93 73.37 67.82 32.79 32.79 32.79 75.80 59.40 57.47 35.01 35.01 35.01 UWAV 68.38 47.23 46.71 34.10 34.10 34.10 65.23 41.78 42.53 42.80 42.80 42.80 Fermi 90.24 89.70 85.33 52.15 52.15 52.15 UWaterloo 65.26 65.21 65.23 79.73 79.54 79.64 Sense 81.50 74.70 78.00 CRF 87.21 64.09 73.89 86.31 55.32 67.43 89.56 70.94 79.17 88.46 62.76 73.42 Joint 91.25 93.28 92.19 83.55 77.10 80.19 86.67 93.08 89.76 81.41 77.50 79.40 CPR 91.42 94.21 92.79 88.80 85.65 87.20 93.35 95.04 94.19 92.31 88.24 90.23 PCPR 94.18 95.70 94.94 90.43 87.50 88.94 94.84 95.59 95.22 94.23 90.41 92.28 Table 3: Performance of detecting and locating puns on the SemEval dataset. All improvements of PCPR and CPR over baseline methods are statistically significant at a 95% confidence level in paired t-tests. Comparing to PCPR, CPR does not model word pronunciations. Results show that both PCPR and CPR outperform baselines. With modeling pronunciations, PCPR performs the best. Model P R F1 MCL 83.80 65.50 73.50 HAE 83.40 88.80 85.90 PAL 86.40 85.40 85.70 HUR 86.60 94.00 90.10 CPR 98.12 99.34 98.73 PCPR 98.44 99.13 98.79 Table 4: Performance of pun detection on the PTD dataset. Model Homographic Puns Heterographic Puns P R F1 P R F1 Joint 67.70 67.70 67.70 68.84 68.84 68.84 PCPR 87.21 81.72 84.38 85.16 80.15 82.58 Table 5: Performance of pipeline recognition in the SemEval dastaset. much larger than that on pun detection task. We posit the reason is that predicting pun locations relies much more on the comparative relations among different tokens in one sentence. As a result, contextualized word embeddings acquire an enormous advantage. By applying the pronunciation-attentive representations, different words with similar pronunciations are linked, leading to a much better pinpoint of pun word for the heterographic dataset. We notice that some of the baseline models such as UWaterloo, UWAV and PunFields have poor performances. These methods consider the word position in a sentence or calculate the inverse document frequency of words. We suppose such rulebased recognition techniques can hardly capture the deep semantic and syntactic properties of words. Pipeline Recognition. The ultimate goal of pun Model P R F1 PCPR 90.43 87.50 88.94 w/o Pre-trained Phoneme Emb. 
89.37 85.65 87.47 w/o Self-attention Encoder 89.17 86.42 87.70 w/o Phonological Attention 89.56 87.35 88.44 Table 6: Ablation study on different features of PCPR for homographic pun detection on the SemEval dataset. recognition is to establish a pipeline to detect and then locate puns. Table 5 shows the pipeline performances of PCPR and Joint, which is the only baseline with reported pipeline performance for recognizing the homographic and heterographic puns in the SemEval dataset. Joint achieves suboptimal performance, and the authors of Joint attribute the performance drop to error propagation. In contrast, PCPR improves the F1-scores against Joint by 24.6% and 20.0% on the two pun types.

4.3 Ablation Study and Analysis

Ablation Study. To better understand the effectiveness of each component in PCPR, we conduct an ablation study on the homographic puns of the SemEval dataset. Table 6 shows the results of removing different features of PCPR, including pre-trained phoneme embeddings, the self-attentive encoder, and phonological attention. Note that we use average pooling as an alternative when we remove the phonological attention module. We observe a performance drop after removing each of the three features, which shows that all these components are essential for PCPR to recognize puns.

Attentive Weights Interpretation. Figure 3 illustrates the self-attention weights α^S_i of three examples
<latexit sha1_base64="o+8bhmVCom+04auGjuI6lKzh2vY="> ACEHicbZC7TkJBEIb34A3xhlrabCRGK3IOFloSbTDRC4JELJnGWDnt2T3TkaJDyCja9iY6ExtpZ2vo3LpVBwk2+/P9MZucPYy ks+v63l1paXldS69nNja3tneyu3sVqxPDocy1KYWMgtSKCijQAm12ACLQgnVsH859qt3YKzQ6hYHMTQj1lWiIzhDJ7Wyx9c07mkFb Yo9oA9a0zDBCUvn03tmnWAH+VY25+f9SdFCGaQI7MqtbJfjbmSQKuWTW1gM/xuaQGRcwijTSCzEjPdZF+oOFYvANoeTg0b0yCl t2tHGPYV0ov6eGLI2kEUus6IYc/Oe2PxP6+eYOe8ORQqThAUny7qJKipuN0aFsY4CgHDhg3wv2V8h4zjKPLMONCOZPXoRKIR+c5 gs3hVzxYhZHmhyQ3JCAnJGiuSKlEiZcPJInskrefOevBfv3fuYtqa82cw+VPe5w8vy5wN</latexit> The boating store had its best sail ever. <latexit sha1_base64="xfuWqSeWPFQyKRKtnygUbE7UgA4=">ACEnicbVC7TgMxEPSFVwivA0oaiwgJmuguFBG0FAGKS8piSK fs0ms+OyTvYcURfkGn6FhgKEaKno+BucRwEJI1kaz+yuvRMlUlgMgm8vs7a+sbmV3c7t7O7tH/iHRzWrU8OhyrXUphExC1IoqKJACY3EAIsjCfVoeDv16w9grNCqgqME2jHrK9ETnKGTOv5FZQA0u6m+tSiNkAHrEsFWhqBRWqZkBTchELHzweFYAa6SsIFyZMFyh3/q9XV PI1BIZfM2mYJNgeM4OCS5jkWqmFhPEh60PTUcVisO3xbKUJPXNKl/a0cUchnam/O8YstnYUR64yZjiwy95U/M9rpti7bo+FSlIExecP9VJUdNpPrQrDHCUI0cYN8L9lfIBM4yjSzHnQgiXV14ltWIhvCwU74v50s0ijiw5IafknITkipTIHSmTKuHkTyTV/LmPXkv3rv3M S/NeIueY/IH3ucPBS2dDQ=</latexit> The boating store had its best sail ever. <latexit sha1_base64="xfuWqSeWPFQyKRKtnygUbE7UgA4=">ACEnicbVC7TgMxEPSFVwivA0oaiwgJmuguFBG0FAGKS8piSK fs0ms+OyTvYcURfkGn6FhgKEaKno+BucRwEJI1kaz+yuvRMlUlgMgm8vs7a+sbmV3c7t7O7tH/iHRzWrU8OhyrXUphExC1IoqKJACY3EAIsjCfVoeDv16w9grNCqgqME2jHrK9ETnKGTOv5FZQA0u6m+tSiNkAHrEsFWhqBRWqZkBTchELHzweFYAa6SsIFyZMFyh3/q9XV PI1BIZfM2mYJNgeM4OCS5jkWqmFhPEh60PTUcVisO3xbKUJPXNKl/a0cUchnam/O8YstnYUR64yZjiwy95U/M9rpti7bo+FSlIExecP9VJUdNpPrQrDHCUI0cYN8L9lfIBM4yjSzHnQgiXV14ltWIhvCwU74v50s0ijiw5IafknITkipTIHSmTKuHkTyTV/LmPXkv3rv3M S/NeIueY/IH3ucPBS2dDQ=</latexit> Figure 3: Visualization of attention weights of each pun word (marked in pink) in the sentences. A deeper color indicates a higher attention weight. Sentence Pun CPR PCPR In the dark? Follow the son. son son He stole an invention and then told patent lies. patent patent lies A thief who stole a calendar got twelve months. got Table 7: A case study of the model predictions for the pun location task of SemEval 2017. 0 10 20 30 40 Number of words 0.25 0.50 0.75 1.00 F1 score homographic heterographic (a) Pun Detection 0 10 20 30 40 Number of words 0.6 0.8 1.0 F1 score homographic heterographic (b) Pun Location Figure 4: Pun recognition performance over different text lengths for homographic and heterographic puns on the SemEval dataset. amples from heterographic puns in the SemEval dataset. The word highlighted in the upper sentence (marked in pink) is a pun while we also color each word of the lower sentence in blue according to the magnitude of its attention weights. The deeper colors indicate higher attention weights. In the first example, busy has the largest weight because it has the most similar semantic meaning as harried. The barber also has relatively high weights. We suppose it is related to hairy which should be the other word of this double entendre. Similar, the zoo is corresponded to lion while phone and busy indicate line for the pun. Moreover, boating confirms sail while store supports sale. Interpreting the weights out of our self-attentive encoder explains the significance of each token when the model detects the pun in the context. The phonemes are essential in these cases because they strengthen the relationship among words with distant semantic meanings but similar phonological expressions. Sensitivity to Text Lengths. 
Figure 4 shows the performance of pun detection and location over different text lengths for homographic and heterographic puns in the SemEval dataset. For both tasks, the performance gets higher when the text lengths are longer because the context information is richer. Especially in the pun detection task, we observe that our model requires longer contexts (more than 20 words) to detect the homographic puns. However, shorter contexts (less than 10 words) are adequate for heterographic pun detection, which indicates the contribution from phonological features. In short, the results verify the importance of contextualized embeddings and pronunciation representations for pun recognition. Case Study and Error Analysis. Table 7 shows the results of a case study with the outputs of CPR and PCPR. In the first case, the heterographic pun comes from the words son and sun. CPR fails to recognize the pun word with limited context information while the phonological attention in PCPR helps to locate it. However, the pronunciation features in some cases can mislead the model to make wrong predictions. For example, patent in the second sentence is a homographic pun word and has several meanings, which can be found with the contextual features. Besides, the phonemes in lies are ubiquitous in many other words like laws, thereby confusing the model. In the last case, got is a widely used causative with dozens of meanings so that the word is hard to be recognized as a pun word with its contextual and phonological features. 5 Conclusions In this paper, we propose a novel approach, PCPR, for pun detection and location by leveraging a contextualized word encoder and modeling phonemes as word pronunciations. Moreover, we would love to apply the proposed model to other problems, such as general humor recognition, irony discovery, and sarcasm detection, as the future work. 821 Acknowledgment We would like to thank the anonymous reviewers for their helpful comments. References Agnese Augello, Gaetano Saccone, Salvatore Gaglio, and Giovanni Pilato. 2008. Humorist bot: Bringing computational humour in a chat-bot system. In CISIS 2008, pages 703–708. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015. Samy Bengio and Georg Heigold. 2014. Word embeddings for speech recognition. In ISCA 2014. Dario Bertero and Pascale Fung. 2016. Predicting humor response in dialogues from tv sitcoms. In ICASSP 2016, pages 5780–5784. Vladislav Blinov, Valeria Bolotova-Baranova, and Pavel Braslavski. 2019. Large dataset and language model fun-tuning for humor recognition. In ACL 2019, pages 4027–4032. Yitao Cai, Yin Li, and Xiaojun Wan. 2018. Senseaware neural models for pun location in texts. In ACL 2018, pages 546–551. Lei Chen and Chong MIn Lee. 2017. Predicting audience’s laughter using convolutional neural network. arXiv preprint arXiv:1702.02584. Peng-Yu Chen and Von-Wun Soo. 2018. Humor recognition using deep learning. In NAACL 2018, pages 113–117. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Yufeng Diao, Hongfei Lin, Di Wu, Liang Yang, Kan Xu, Zhihao Yang, Jian Wang, Shaowu Zhang, Bo Xu, and Dongyu Zhang. 2018. Weca: A wordnet-encoded collocation-attention network for homographic pun recognition. In EMNLP 2018, pages 2507–2516. Samuel Doogan, Aniruddha Ghosh, Hanyang Chen, and Tony Veale. 2017. 
Idiom savant at semeval2017 task 7: Detection and interpretation of english puns. In SemEval-2017, pages 103–108. Aniruddha Ghosh, Guofu Li, Tony Veale, Paolo Rosso, Ekaterina Shutova, John Barnden, and Antonio Reyes. 2015. Semeval-2015 task 11: Sentiment analysis of figurative language in twitter. In SemEval 2015, pages 470–478. He He, Nanyun Peng, and Percy Liang. 2019. Pun generation with surprise. In NAACL 2019. Llu´ıs-F Hurtado, Encarna Segarra, Ferran Pla, Pascual Carrasco, and Jos´e-Angel Gonz´alez. 2017. Elirf-upv at semeval-2017 task 7: Pun detection and interpretation. In SemEval-2017, pages 440–443. Vijayasaradhi Indurthi and Subba Reddy Oota. 2017. Fermi at semeval-2017 task 7: Detection and interpretation of homographic puns in english language. In SemEval-2017, pages 457–460. Aaron Jaech, Rik Koncel-Kedziorski, and Mari Ostendorf. 2016. Phonological pun-derstanding. In ACL 2016, pages 654–663. Herman Kamper, Weiran Wang, and Karen Livescu. 2016. Deep convolutional acoustic word embeddings using word-pair side information. In ICASSP 2016, pages 4950–4954. IEEE. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Yann LeCun, Yoshua Bengio, et al. 1995. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomedical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746. Fuli Luo, Shunyao Li, Pengcheng Yang, Lei Li, Baobao Chang, Zhifang Sui, and Xu SUN. 2019. Pun-GAN: Generative adversarial network for pun generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NeurIPS 2017, pages 6294–6305. Alan K Melby and Terry Warner. 1995. The possibility of language: A discussion of the nature of language, with implications for human and machine translation, volume 14. John Benjamins Publishing. Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recognition. In EMNLP 2005, pages 531–538. Elena Mikhalkova and Yuri Karyakin. 2017. Punfields at semeval-2017 task 7: Employing roget’s thesaurus in automatic pun recognition and interpretation. arXiv preprint arXiv:1707.05479. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In LREC 2018. 822 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Corey Andrew Miller. 1998a. Pronunciation modeling in speech synthesis. George Miller. 1998b. WordNet: An electronic lexical database. MIT press. Tristan Miller, Christian Hempelmann, and Iryna Gurevych. 2017. Semeval-2017 task 7: Detection and interpretation of english puns. In SemEval-2017, pages 58–68. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM computing surveys (CSUR), 41(2):10. Anton Nijholt, Andreea I Niculescu, Valitutti Alessandro, and Rafael E Banchs. 2017. Humor in humancomputer interaction: a short survey. 
Dieke Oele and Kilian Evang. 2017. Buzzsaw at semeval-2017 task 7: Global vs. local context for interpreting and locating homographic english puns with sense embeddings. In SemEval-2017, pages 444–448. Ted Pedersen. 2017. Duluth at semeval-2017 task 7: Puns upon a midnight dreary, lexical semantics for the weak and weary. arXiv preprint arXiv:1704.08388. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP 2014, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL 2018, pages 2227–2237. Matthew E Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. arXiv preprint arXiv:1705.00108. David Martin Powers. 2011. Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation. Aniket Pramanick and Dipankar Das. 2017. Ju cse nlp @ semeval 2017 task 7: Employing rules to detect and interpret english puns. In SemEval-2017, pages 432–435. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Hinrich Sch¨utze, Christopher D Manning, and Prabhakar Raghavan. 2007. An introduction to information retrieval. Cambridge University Press,. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Kristina Toutanova and Robert C Moore. 2002. Pronunciation modeling for improved spelling correction. Ankit Vadehra. 2017. Uwav at semeval-2017 task 7: Automated feature-based system for locating puns. In SemEval-2017, pages 449–452. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Olga Vechtomova. 2017. Uwaterloo at semeval-2017 task 7: Locating the pun using syntactic characteristics and corpus-based metrics. In SemEval-2017, pages 421–425. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yuhuan Xiu, Man Lan, and Yuanbin Wu. 2017. Ecnu at semeval-2017 task 7: Using supervised and unsupervised methods to detect and locate english puns. In SemEval-2017, pages 453–456. Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor recognition and humor anchor extraction. In EMNLP 2015, pages 2367–2376. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Zhiwei Yu, Jiwei Tan, and Xiaojun Wan. 2018. A neural approach to pun generation. In ACL 2018. Wenhao Zhu, Xin Jin, Jianyue Ni, Baogang Wei, and Zhiguo Lu. 2018. Improve word embedding using both writing and pronunciation. PloS one, 13(12):e0208785. Yanyan Zou and Wei Lu. 2019. Joint detection and location of english puns. In NAACL 2019, pages 2117– 2123.
2020
75
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8476–8488 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8476 Multi-Domain Named Entity Recognition with Genre-Aware and Agnostic Inference Jing Wang∗, Mayank Kulkarni∗, Daniel Preot¸iuc-Pietro Bloomberg New York, New York, USA {jwang1621,mkulkarni24,dpreotiucpie}@bloomberg.net Abstract Named entity recognition is a key component of many text processing pipelines and it is thus essential for this component to be robust to different types of input. However, domain transfer of NER models with data from multiple genres has not been widely studied. To this end, we conduct NER experiments in three predictive setups on data from: a) multiple domains; b) multiple domains where the genre label is unknown at inference time; c) domains not encountered in training. We introduce a new architecture tailored to this task by using shared and private domain parameters and multi-task learning. This consistently outperforms all other baseline and competitive methods on all three experimental setups, with differences ranging between +1.95 to +3.11 average F1 across multiple genres when compared to standard approaches. These results illustrate the challenges that need to be taken into account when building real-world NLP applications that are robust to various types of text and the methods that can help, at least partially, alleviate these issues. 1 Introduction Accurately identifying named entities and their type in texts is a key processing step for many NLP applications. Named entity recognition (NER) is an important component in several tasks including named entity linking (Cucerzan, 2007), co-reference resolution (Ng and Cardie, 2002), question answering (Krishnamurthy and Mitchell, 2015), relation extraction (Culotta and Sorensen, 2004) and usually sits upstream of analytics such as sentiment (Pang and Lee, 2004) or stance (Mohammad et al., 2016). Building robust NER models to accurately tag and adapt to heterogeneous types of text is thus paramount. Recent research focused on ∗*Equal Contribution improving the overall performance of NER models on specific data sets. Yet NER models show relatively high variance even when trained on the same data (Reimers and Gurevych, 2017) and poorly generalize when tested on data from different genres1, especially if these contain entity mentions unseen in the test data (Augenstein et al., 2017; Agarwal et al., 2020). Despite this, research on NER models robust to different types of input is usually limited to the standard domain adaptation scenario: a single source domain rich in training data and a single target domain with limited or no training data (Lin and Lu, 2018). We argue that this is an over-simplified experimental setup that is not typical for how NER models are used in real-world applications. Ideally, NER models use all available data, regardless of genre, and perform inference on data from any genre, even if this was not encountered in training. In this scenario, simply pooling all the available data is likely sub-optimal as genre-specific differences in named entity mentions are useful to model. Conversely, models limited to only data from the same genre as the test set are likely to underperform, as using more data is usually beneficial. 
This work introduces three experimental setups for the NER task where models are trained on data from multiple genres and evaluated as follows: a) Multi-Domain – evaluation is performed across multiple genres, all seen in training. b) Multi-Domain with Unknown Domain Labels – evaluation is carried out across multiple genres, all seen in training, but the genre label for each document is unknown at inference time. c) Zero-shot Domain – evaluation is performed on documents from genres unseen in training. 1Throughout this paper, we refer by genre to a collection of documents with variations in style or structure that might impact modelling (Santini et al., 2006); we use domain when referring to modeling concepts. 8477 We propose a neural architecture for NER tailored to these three experimental setups, based on the popular BiLSTM-CRF architecture (Lample et al., 2016). We augment the base architecture to learn both domain-specific and independent features through shared and private domain components including projections and CRFs. Further, we add a multi-task learning objective for domain prediction to guide this separation. This model can perform inference on a text without knowledge of its corresponding domain label by using the shared components. We compare this model with several competitive methods that use a similar base architecture while holding the embeddings constant (i.e. GloVe embeddings). These include models trained on data from each domain independently, models that pool all data and models that use domain identities as features through to source-target domain adaptation methods. Extensive results on all three experimental setups on a collection of data from a total of twelve genres demonstrate that our proposed architecture outperforms all others by a respectable margin. Finally, through an error analysis of our results, we aim to understand the contributions of each proposed component and the margins for future improvements. 2 Related Work Setups for Domain Adaptation Domain adaptation, formulated as learning a single model for the same task across multiple domains, is a wellstudied research area in NLP (Chelba and Acero, 2004; Florian et al., 2004; Blitzer et al., 2006; Daum´e III, 2007). The standard setup for domain adaptation is to maximize performance on data from a single low-resource (target) domain, by using data from a single high-resource (source) domain (Blitzer et al., 2007; Peng and Dredze, 2017). Extensions consider a single source and multiple different target domains (Yang and Eisenstein, 2015) or multiple sources and a single target domain (Mansour et al., 2009). The multi-domain text classification task studied in (Li and Zong, 2008; Wu and Huang, 2015; Chen and Cardie, 2018) is the analogous setup for the text classification task to the first experimental setup we propose for NER. Under this setup, training and evaluation is done across data from multiple domains. Multi-Domain Adaptation Methods for multidomain text classification use data fusion either at the feature or classifier level (Li and Zong, 2008), decomposing the classifier into a shared one and multiple domain-specific ones (Wu and Huang, 2015), further guided by a domain discriminator (Chen and Cardie, 2018) which is also used in multi-lingual NER (Chen et al., 2019). Further, McClosky et al. 
(2010) explored sequence tagging tasks on data from unknown domains and Chen and Cardie (2018) experiment with sentiment classification on data from unknown domains, similar to our third experimental setup for NER. To the best of our knowledge, our second setup where the domain label is not available at inference time was never explicitly studied. We note that most of these approaches make use of additional unlabeled data from each domain to learn domain-specific representations. We do not use these resources in our methods, as we assume the end-user of the model is agnostic to the data used in training and wants to run inference without having to provide entire comparable corpora. Domain Adaptation for NER Models for domain adaptation in NER using neural architectures were studied recently, albeit mostly for covering the single-source and single-target setup. The INIT method trains a model using the source domain data, and its parameters are used to initialize a target model which is fine-tuned on the target data (Mou et al., 2016). The MULT method trains jointly one model for each domain with shared parameters (Lee et al., 2018). For sequence tagging, one CRF for each of the two domains is used to obtain the predictions (Yang et al., 2017). Adaptation can also be made at the embeddings stage (Lin and Lu, 2018) or by using additional unlabeled data from the source domain and out-of-domain annotated data (He and Sun, 2017). However, as mentioned above, this assumes that unlabeled training data can be provided for each domain, which may not be realistic. The model adds layers between embeddings and the BiLSTM layers, between the BiLSTM and the CRF for the target domain and separate CRF layers, the latter two of which we adapt to our proposed architecture for multi-domain adaptation. A hierarchical Bayesian prior approach is used in (Finkel and Manning, 2009) to tie feature weights across domains when information is sparse and also allow the model to take advantage if substantial data is available in one domain. Their experiments on NER focused only on three data sets: CoNLL, MUC-6 and MUC-7 and only the first of our three setups. A multi-task domain adaptation 8478 method for NER and word segmentation is used in (Peng and Dredze, 2017). The proposed architecture learns a shared representation across domains and experiments with linear domain projections for each domain to guide learning of shared representations. The output of these linear layers is fed to a CRF. We adopt the linear domain projection method, but extend this to also include a shared projection, followed by domain-specific CRFs and multi-task learning. Finally, another type of domain adaptation is temporal adaptation of models tested on data that is more recent than the training data, when each temporal slice can be considered as a different domain (Rijwhani and Preot¸iuc-Pietro, 2020). 3 Methods This section describes the proposed NER architecture tailored the architecture to our multi-domain experimental setups, which is independent of input embedding representation. 3.1 Base Architecture The basic component of our NER models is an architecture which has reached state-of-the-art performance several times over the last few years (Lample et al., 2016; Peters et al., 2018; Akbik et al., 2018). 
Named entity recognition is a structured prediction task, and earlier statistical approaches are based on models like Conditional Random Fields (Lafferty et al., 2001), which rely on features often designed based on domain-specific knowledge (Luo et al., 2015). The current dominant approach to the NER task consists of neural architectures based on recurrent neural networks with different choices of input representations (Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Peters et al., 2018; Akbik et al., 2018, 2019). The input consists of a concatenation of pre-trained word embeddings and character embeddings. Character embeddings are trained using an LSTM from randomly initialized vectors as in (Lample et al., 2016). Word embeddings are derived from a combination of GloVe (Pennington et al., 2014) and FastText (Bojanowski et al., 2017) pre-trained word embeddings, as used in (Ma and Hovy, 2016). The choice of embeddings is orthogonal to the architecture and thus, we hold these constant in all experiments. This representation is passed through two LSTM layers that process the input sequence in different directions (Huang et al., 2015). The outputs of these layers are concatenated and, in order to map the word representation obtained from the LSTM module into the label distribution, passed to a one-layer feed-forward network. A Conditional Random Field is applied to the class predictions to jointly assign the sequence tags using a transition matrix. This CRF layer improves the performance of the model (Lample et al., 2016) as it ensures the output sequence takes into account dependencies between the tags and also models the constraints the output sequence adheres to (e.g. I-PER cannot follow B-LOC). 3.2 Proposed Architecture (MultDomain–SP–Aux) We propose a new architecture based on the BiLSTM–CRF model tailored to the three proposed experimental setups. Our proposed architecture enhances the base architecture with three components: a) domain-specific and -independent feed-forward layers that process the BiLSTM outputs; b) domain-specific and -independent CRFs; c) a multi-task learning objective that learns domain labels as an auxiliary task. The proposed changes are motivated by the aim of capturing commonalities in how named entities are referred to in any given genre, while still allowing the model to tease apart and exploit domain-specific aspects. The architecture is also designed to capture these commonalities across label relationships, which can vary across domains. In addition, the multi-task objective further assists the model in leveraging domain-dependent and -independent components. The choice of input representation is orthogonal to the proposed architecture and our extensions to the architecture can be combined with any input representation. The model architecture is presented in Figure 1 and described below.
Figure 1: MultDomain–SP–Aux architecture for 2 domains (A & B) and shared layers denoted by Sh.
Private and Shared Layers We rely on the shared-private paradigm where the model learns both a shared representation across all domains, which is useful when the domain of the input is unknown or unseen in training, and a private domain representation that mostly helps tagging in that domain. We model the shared and private features both at the feature mapping stage connecting the BiLSTM outputs to the CRF(s) and at the CRF level.
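To make the shared-private design just described concrete, the sketch below shows one way the heads on top of the BiLSTM could be wired. It is a minimal illustration under our own assumptions, not the authors' implementation: module and variable names are invented, dimensions are arbitrary, and the per-domain and shared CRFs are replaced by simple per-token linear tag heads for brevity.

```python
import torch
import torch.nn as nn

class SharedPrivateHeads(nn.Module):
    """Hypothetical sketch: shared/private projections over BiLSTM outputs,
    plus an auxiliary domain classifier (CRFs simplified to linear tag heads)."""

    def __init__(self, hidden_dim: int, num_tags: int, num_domains: int):
        super().__init__()
        # One private projection and tag head per domain, plus shared ones.
        self.private_proj = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_domains)])
        self.private_tags = nn.ModuleList(
            [nn.Linear(hidden_dim, num_tags) for _ in range(num_domains)])
        self.shared_proj = nn.Linear(hidden_dim, hidden_dim)
        self.shared_tags = nn.Linear(hidden_dim, num_tags)
        # Auxiliary domain classifier over mean-pooled BiLSTM states.
        self.domain_clf = nn.Linear(hidden_dim, num_domains)

    def forward(self, bilstm_out: torch.Tensor, domain_id=None):
        # bilstm_out: (batch, seq_len, hidden_dim)
        shared_logits = self.shared_tags(torch.relu(self.shared_proj(bilstm_out)))
        private_logits = None
        if domain_id is not None:  # domain known: also use its private head
            h = torch.relu(self.private_proj[domain_id](bilstm_out))
            private_logits = self.private_tags[domain_id](h)
        domain_logits = self.domain_clf(bilstm_out.mean(dim=1))
        return shared_logits, private_logits, domain_logits
```

In training, the shared, private, and domain outputs would each contribute a loss term that is summed, mirroring the combined objective given below; at inference on an unknown or unseen domain, only the shared head would be used.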
We expect the features extracted by the BiLSTM layers to model the structure of the input across all domains. The feed-forward layers capture the domain-specific and -independent information by using private output layers for each domain and one shared output layer. In training, the BiLSTM outputs are projected to both the shared layer and the private layer based on the domain label provided in training. The CRF layer is used to make a global decision for the entire tag sequence by modelling label dependencies. We expect that this decision is, at least partially, dependent on domain-specific relationships in the label space. Hence, each feed-forward layer feeds into either private CRFs (one for each domain) or a shared CRF. The separation of the shared and private layers could happen before the CRF stage (late separation) or before the feed-forward layer stage (early separation). We investigate the influence of each individual addition on the multi-domain performance in our analysis section through ablation studies. Given an input, both the shared and the private parameters are used in learning to predict the output. The private parameters for each domain are only updated by data from the same domain, while the shared parameters are updated in a pooled way using all available data points in the training stage, regardless of the domain characteristics. For a given data point, inference can be run either by: a) passing it through the private components if the domain label is known; or b) through the shared components if the domain label is unknown or the domain of the data is unseen in training. To this end, the objective function for the private and shared layers is: $\mathcal{L}^{NER}_{SP}(x, y) = \mathcal{L}^{NER}_{S}(x, y) + \mathcal{L}^{NER}_{P}(x, y)$ (1) where $\mathcal{L}^{NER}_{S}$ and $\mathcal{L}^{NER}_{P}$ stand for the shared layer loss and the private layer loss, respectively. Multi-Task Learning of Domain Labels Further, to better guide the learning process, we augment our architecture with a multi-task learning objective. Through this, the model learns to predict the domain label of each sample in training as an auxiliary task. The architecture uses average pooling on the BiLSTM outputs followed by a fully connected layer. Finally, a softmax is applied over the learned domain feature to obtain a probability distribution over all domain labels. The domain classification objective is to minimize the cross-entropy loss $\mathcal{L}_{domain}(x, y_d)$ for an input $x$ with domain label $y_d$. The global objective function is the combination of the NER loss and the domain loss: $\mathcal{L}(x; y, y_d) = \mathcal{L}^{NER}_{SP}(x, y) + \mathcal{L}_{domain}(x, y_d)$ (2) 4 Experimental setup 4.1 Data We use a collection of data sets spanning eight genres to evaluate our methods. In addition, in order to test the feasibility of NER tagging in a zero-shot domain setup, we present additional data covering four other genres. Each genre of documents is considered a domain in modelling. 4.1.1 Data Sets The data set collection used in learning the multi-domain models (denoted as ‘Open Data’ in the rest of the paper) includes the following three data sets: CoNLL 2003 We use the data set released as part of the CoNLL 2003 shared task for English (Tjong Kim Sang and De Meulder, 2003), which is arguably the most popular data set for NER and is regularly used as a benchmark for this task. This data is a collection of news articles from the Reuters Corpus. Twitter The Twitter data set consists of 22,000 tweets representative of multiple English-speaking locales and a variety of topics that span 11 years of Twitter posts (2009–2019).
This data was annotated with Organizations (ORG), Persons (PER) and Locations (LOC), using the annotation guidelines used in annotating past data sets (Tjong Kim Sang and De Meulder, 2003), supplemented with examples that are specific to Twitter data. OntoNotes (six genres) The OntoNotes data set (Hovy et al., 2006) consists of six different genres annotated, amongst others, with named entities and their types. In this data, each genre refers to a different source, which includes newswire (NW), broadcast news (BN), broadcast conversation (BC), magazine (MZ), telephone conversation (TC) and web data (WB) (Pradhan et al., 2013). Note that we replace the ‘LOC’, ‘FAC’ and ‘GPE’ tags in the OntoNotes data with the ‘LOC’ type in order to be consistent with the definition of ‘LOC’ in CoNLL 2003, as also done in (Augenstein et al., 2017).
Data Set | # Tokens | Density | ORG | PER | LOC
CoNLL 2003 | 302811 | 14.52% | 33.2% | 38.8% | 28.0%
Twitter | 227019 | 8.02% | 36.9% | 46.5% | 16.5%
OntoNotes-NW | 490738 | 8.89% | 55.1% | 21.1% | 23.8%
OntoNotes-BN | 258625 | 9.06% | 27.5% | 37.2% | 35.3%
OntoNotes-MZ | 197520 | 7.84% | 28.1% | 41.9% | 30.0%
OntoNotes-BC | 239236 | 5.49% | 27.5% | 39.8% | 32.8%
OntoNotes-TC | 114463 | 1.59% | 12.3% | 45.6% | 42.1%
OntoNotes-WB | 490738 | 2.17% | 25.5% | 44.4% | 30.1%
Zero-Shot-A | 103992 | 3.10% | 53.3% | 24.4% | 22.2%
Zero-Shot-B | 794199 | 8.48% | 55.5% | 28.4% | 16.1%
Zero-Shot-C | 156032 | 10.06% | 64.4% | 14.4% | 21.1%
Zero-Shot-D | 27522 | 5.84% | 38.8% | 31.9% | 29.4%
Table 1: Size of data sets, NE density (percentage of tokens that are named entities) and distribution across entity types (ORG/PER/LOC) for both open and zero-shot data sets.
Zero Shot Genres Finally, for zero-shot genre NER, we use a collection of internal data sets from four different genres spanning news, closed captions and other documents. All four genres were annotated with the same entity types and using similar guidelines. 4.1.2 Data Set Statistics Data set statistics are presented in Table 1. This shows that all domains are represented with a substantial number of sentences, although the prevalence of named entities and their distribution across types varies, as expected from data sets collected from different sources and genres. We also see that the zero-shot domains are significantly different in entity type distribution and density from the training data, making them well-suited for this setting. 4.1.3 Data Processing In order to present comparable results across all different data sets, we limit our experiments to three types of entities that are present in all the above data sets and annotated using similar guidelines: organizations (including geo-political entities and facilities), persons and locations. In case other types of entities exist in the data (e.g. MISC for CoNLL, dates for OntoNotes), these are considered to be not an entity, similar to (Augenstein et al., 2017). We used the BIO tagging scheme in all our experiments, as this is arguably the most popular, and differences in results between this tagging scheme and others, such as the BILOU scheme, are very small in practice (Ratinov and Roth, 2009). 4.1.4 Data Splits We train our models using the open data sets from CoNLL, Twitter and OntoNotes. The training, development and test splits of CoNLL and OntoNotes follow the standard splits. Similarly, we randomly split the Twitter data set into 70% for training, 10% for development and 20% for testing. The final train, dev and test sets are obtained by joining all the respective splits across the individual data sets.
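As a concrete illustration of the label harmonization described in Section 4.1.3, the snippet below shows one plausible way to map BIO tags from the source corpora onto the three shared types; the tag strings and the helper name are our own assumptions, not the authors' preprocessing code.

```python
# Hypothetical sketch of the tag harmonization in Section 4.1.3:
# GPE/FAC are folded into LOC, kept types stay, everything else becomes O.
KEEP = {"PERSON": "PER", "PER": "PER", "ORG": "ORG",
        "LOC": "LOC", "GPE": "LOC", "FAC": "LOC"}

def harmonize(bio_tag: str) -> str:
    """Map a BIO tag such as 'B-GPE' to the shared tag set (e.g. 'B-LOC')."""
    if bio_tag == "O":
        return "O"
    prefix, _, etype = bio_tag.partition("-")
    mapped = KEEP.get(etype)
    return f"{prefix}-{mapped}" if mapped else "O"

# Example: dates, MISC, etc. are treated as non-entities.
print([harmonize(t) for t in ["B-GPE", "I-FAC", "B-DATE", "B-MISC", "O", "B-PERSON"]])
# -> ['B-LOC', 'I-LOC', 'O', 'O', 'O', 'B-PER']
```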
4.2 Other Methods We evaluate several baseline methods and other competitive methods introduced in past research and compare to our proposed architecture (MultDomain–SP–Aux) described in Section 3.2. These methods focus on different variations of the neural model architecture, while holding the input embeddings constant. InDomain trains an individual NER model using the base architecture for each of the known domains. In inference, the corresponding in-domain model is used. This allows us to establish the baseline individual domain performance when no information is shared between the domains in training. InDomain-DomainClassifier uses the same NER models as the InDomain model. The InDomain approach is however unable to directly perform inference on sentences where the domain label is unknown at inference time. We thus build a separate domain classifier using a Bi-LSTM recurrent neural network that feeds the final hidden state into a feed-forward network to recognize the domain of a given input sentence and route it to the appropriate InDomain NER model. PoolDomain naively pools all available data, disregarding the domain information and trains a model using the base architecture. This model thus ignores the domain information when training, albeit uses all available training data. Data pooling is the standard baseline in most domain adaptation experiments. PoolDomain-Init uses all available data and uses the domain information to train models on data from one domain at once. After training on data from each domain, the model uses the weights as 8481 initialization for training on next domain. This is similar to the INIT strategy for domain adaptation used in (Mou et al., 2016; Lee et al., 2018). We perform this weight initialization and fine-tuning process over all the domains consecutively, where the order is defined by the density of entities, starting with the highest one. PoolDomain-GradRev trains the base architecture using a gradient reversal layer (Ganin and Lempitsky, 2014). The gradient reversal technique aims to confuse the domain discriminator while learning NER with the combination of the training data from all domains. PoolDomain+DomainFeat trains a base architecture model over all available data and, in addition to the text-based features, the domain information is explicitly represented by passing it through a domain embedding. This is appended to the word-level features that are used as input to the BiLSTM layers. The domain embeddings are randomly initialized. MultDomain-SP extends the MULT method (Yang et al., 2017) to the multi-domain setup. This method uses a domain-specific CRF for each domain and a shared CRF for all domains. Both the BiLSTM and the feed-forward layers are shared across all domains. Inference can be done either through the private layer corresponding to the domain of the input – denoted as MultDomain-MultCRF (P) – or through the shared layer – denoted as MultDomain-MultCRF (S) – in which case this can be used when the domain label is unknown in inference. 4.3 Implementation Details For our experiments, we largely follow the training and evaluation procedure used in (Akbik et al., 2018). As hyperparameters, we follow most suggestions outlined in the in-depth study on model robustness (Reimers and Gurevych, 2017). Our training uses 256 hidden states for BiLSTM with mini-batch size of 32. The model parameters are updated using back-propagation and Adam optimizer (Kingma and Ba, 2014). The learning rate is 1e−3 with weight decay value 1e−5. 
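For reference, the optimization settings stated so far could be expressed as the following hedged sketch; the builder function is hypothetical, and only the hyperparameter values come from the text.

```python
import torch
from torch import nn

HIDDEN_SIZE = 256   # BiLSTM hidden states (Section 4.3)
BATCH_SIZE = 32     # mini-batch size

def build_optimizer(model: nn.Module) -> torch.optim.Optimizer:
    # Adam with the stated learning rate and weight decay.
    return torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
```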
The model is regularized with a locked dropout rate of 0.5. We use 300-dimensional pre-trained word embeddings as described in Section 3.1, whereas the character LSTM is randomly initialized and has a hidden dimension of 64. The embeddings are updated on the training data. When training the domain features together with the NER (PoolDomain+DomainFeat), we set the domain embedding size to 128. We train all models for 20 epochs and report the results for the model performing best on the joint development set of the open data set collection. 5 Results In this section, we present and compare the results of all the methods introduced previously. Experiments are conducted first on the open data collection introduced in Section 4.1 in the Multi-Domain and Multi-Domain with Unknown Label setups. Following, we evaluate the performance of our model on the data used for zero-shot genre NER. The goal of these experiments is to examine the NER performance across the three proposed experimental setups which focus on model generalizability across multiple domains. We note that the results below can not be directly compared to the state-of-the-art results on each data set, as we restrict the entity types to PER, ORG, LOC, such that these types are constant across all data sets. 5.1 Multi-Domain with Known Domain Labels First, we compare models when assuming the domain label of each test document is known at inference time. The results are listed in Table 2. Our proposed method – MultDomain-SP-Aux (P) – obtains the best results across the entire test collection in both micro-average (+0.43) and macro-average (+1.94) compared to all other approaches and performs best on 7 out of the 8 domains. The second best method is the PoolDomain+DomainFeat which uses the domain feature as input. Our method consistently surpasses the in-domain classifiers (InDomain) on microaverage (+1.48) and macro-average (+3.11), showing the limitations of naive modeling approaches. Although increases exist across all domains, these are most prominent in domains like TC (+5.36) that have a low density of named entities and where indomain models have access to limited amounts of data. However, the in-domain performance is better than the pooled method of training, which shows consistent drops in performance on some domains (-8.69 on WB, -6.77 on BC, - 1.98 on CoNLL), where information from other domains did not benefit the model. 8482 Model Works on Unknown Domain Labels CoNLL Twitter NW BN MZ BC TC WB µ–Avg M–Avg InDomain  89.91 67.36 91.09 91.09 86.90 84.41 77.06 64.74 85.29 81.57 InDomain+DomainClassifier  88.92 66.98 90.48 90.21 85.63 84.64 76.28 59.62 83.93 80.35 PoolDomain  87.93 66.21 90.86 92.76 87.73 89.06 70.29 56.05 83.94 80.11 PoolDomain–Init  31.31 15.74 63.34 67.63 47.30 63.30 33.93 57.55 47.00 47.55 PoolDomain–GradRev  83.49 54.55 83.95 86.87 77.46 83.93 77.78 50.88 77.29 74.86 PoolDomain+DomainFeat  90.74 67.80 90.32 92.27 89.12 89.86 78.40 63.37 86.34 82.74 MultDomain–SP (P)  87.70 59.16 88.96 93.51 88.52 89.95 77.97 55.51 82.12 80.16 MultDomain–SP (S)  87.41 57.98 88.64 93.47 88.39 89.00 55.51 54.39 81.73 80.08 MultDomain–SP–Aux (P)  90.21 69.15 91.09 93.64 91.38 90.67 82.42 67.44 86.77 84.68 MultDomain–SP–Aux (S)  88.43 67.13 91.26 93.59 87.67 89.54 78.77 59.63 84.68 82.30 Table 2: Experimental results on the eight data sets, as well as micro (µ-) and macro (M-) averaged across data sets. Performance is measured using micro F1 score. 
The rows with  indicate methods that can be applied when the domain label is not known at inference time. (S) and (P) denote if inference is done through the shared (S) or private (P) layers of the architecture. Results in bold are the best across all models, those underlined are best across methods that work with unknown domain labels. 5.2 Multi-Domain with Unknown Domain Labels We now focus on the experimental setup where domain labels are unknown for each data point at inference time. This is akin to a setup where the user is agnostic to the data the model was trained on. As only a subset of the models can perform inference in this scenario, the results are a subset of those in Table 2. Our model – MultDomain-SP-Aux (S) – gains the best overall performance in this setup, with 1.95 macro-average F1 increase over the next best method (InDomain+DomainClassifier). The other standard baseline for domain adaptation (PoolDomain) obtains a similar performance (−2.19 compared to our method) to the in-domain approach, which shows the benefits of multidomain adaptation. PoolDomain-Init is performing overall poorly, which shows that the INIT transfer learning strategy that is somewhat effective for source-target domain adaptation does not work well in the multidomain setup. Our intuition is that this technique is unable to learn robust features sequentially across N domains, as it performs poorly on the initial trained domains. PoolDomain-GradRev gains relatively weak performance overall, lower than the in-domain baseline. 5.3 Zero-Shot Domain Finally, we show the results on the experimental setup where the test data is the four ‘Zero-Shot Genres’, which were not used in during training. Table 3 shows the experimental results of all methods that can run inference with unknown domain Models Zero-Shot Genres M–Avg A B C D InDomain+DomainClassifier 47.16 60.04 62.00 59.50 57.17 PoolDomain 52.61 62.53 63.53 61.55 60.05 PoolDomain-Init 24.38 36.92 47.13 19.47 31.98 PoolDomain-GradRev 49.48 68.97 67.95 57.41 60.95 MultDomain-SP (S) 50.9 72.27 68.19 61.86 63.30 MultDomain-SP-Aux (S) 54.50 67.77 70.30 64.02 64.15 Table 3: Evaluation results on data from genres unseen in training. labels, as we assume that in this setup, the end-user does not have knowledge about the domains used in training and which of these are most similar to the test point. Results show that our proposed method obtains again the best results, with a consistent margin of 2.24 macro-average F1 improvement over the next method. Pooling all data (PoolDomain) obtains better performance than building in-domain classifiers with domain classification (InDomain+DomainClassifier) unlike in the other setups. This also shows that the zero-shot domains we used are indeed different to any of the ones in training and pooling all data manages to build a slightly more robust model than individual ones trained on less data. The in-domain models perform 5.21 F1 points lower than our approach, the largest gap in all experimental setups, highlighting the robustness of the multi-domain modeling approach. The MultDomain-SP (S) model is second best, and as this is the base for our method, we discuss its performance in the ablation study from the next section. 8483 6 Analysis 6.1 Ablation Experiments We first focus on understanding the impact of each component added to our proposed method over the base architecture through an ablation study. 
Table 4 shows results using the private layer (MultDomain-SP-Aux (P)) when each of the three components are alternatively turned off: SharedPrivate Linear layer, Shared-Private CRF and the domain prediction auxiliary task. Shared vs. Shared-Private CRF With the rest of the architecture fixed, the results show that the shared-private CRF performs close to the shared CRF when the shared linear layer is used (80.08 vs. 80.16; 82.04 vs. 82.74; all comparisons in this section are on macro-average). However, once we use a separate linear layer between the BiLSTM and each CRF, the difference between having the shared and the shared-private CRFs increases drastically (81.36 vs. 83.11; 82.30 vs. 84.68). With only this late separation, the inputs to CRF decoders are still domain-independent features, which makes it hard for the linear CRF to adapt. When the inputs are already domain-dependent, the linear CRF can better use this information in performing the joint inference of the sequence. We note that only using shared-private CRF with the base architecture is equivalent to the MultDomain-SP method (Yang et al., 2017). Shared vs. Shared-Private Linear Projections The results show that regardless of the other parameters, adding shared and private linear layers between the BiLSTM layers and the CRF(s) is always beneficial (80.08 vs. 81.36; 80.16 vs. 83.11; 82.04 vs. 82.30; 82.74 vs. 84.68). The improvements are relatively larger when combined with shared and private CRF, as previously seen. Multi-Task Learning of Domain Labels Finally, we compare the impact of adding the multi-task learning objective. We find that, similar to the linear layers, adding the domain prediction task is always beneficial for the model with the increase being larger if is only a shared linear layer. We expect that the two tasks at different levels of granularity rely on shared structure in the original semantic space. The document-level domain labels can help regularize the training, providing generic information about which low-level features are valuable to entity-level recognition. 6.2 InDomain with Oracle Choice In order to understand the limitations of the multidomain setup, we study whether the models we can build from the available data could theoretically achieve better overall performance. We use an oracle-based selection technique on the in-domain models to select, after the prediction and using the gold labels the model which performed best for each test instance, as selected using F1 score or, if there are no entities, the model with most O predictions. If multiple models are tied, we choose one at random. The oracle thus provides the counterfactually “Optimal” strategy of model selection for each test instance and represents an upper bound on strategies relying on InDomain models. Table 5 compares the oracle strategy predictions with the InDomain+DomainClassifier and the MultDomain-SP-Aux model. The results show that even though our model improves substantially over the in-domain models, an oracle selection method would push performance much higher (+6.73 F1 on the open data). This highlights both the variability of NER models trained on different data sets and that there is potentially more room for improvements in the multi-domain setup. 6.3 InDomain Models The Supplementary Material shows a breakdown of the domain prediction labels for three methods: domain classification, domain prediction in the proposed MultDomain-SP-Aux model and the oracle in-domain choice on gold data. 
The oracle strategy selects the predictions from all in-domain models. Based on this, we analyzed the performance of each individual in-domain model when tested on all domains in Table 6. We find that although the Oracle strategy uses a mix of models, any model alone is unable to generalize to other domains (67.19 vs. 84.68 best InDomain model compared to the best overall model). In the zero-shot genres, the Twitter model performs close to the MultDomain-SP-Aux model (-0.56 F1), albeit it is 24 F1 lower on the multi-domain setup. This reinforces that learning shared domain features as opposed to learning individual models helps boost performance and is more robust to different types of inputs. 7 Runtime Comparison Finally, we compare the runtime difference across various methods listed in the experiment section to test the practical implications of using our pro8484 Auxiliary Task Linear CRF CoNLL Twitter NW BN MZ BC TC WB µ–Avg M–Avg  Shared Shared 87.41 57.98 88.64 93.47 88.39 89.00 55.51 54.39 81.73 80.08 Sh-Private 87.70 59.16 88.96 93.51 88.52 89.95 77.97 55.51 82.12 80.16  Sh-Private Shared 87.65 64.45 90.88 92.82 87.92 88.75 80.60 57.81 83.77 81.36 Sh-Private 89.57 67.78 90.98 92.45 90.10 88.75 80.86 64.38 85.95 83.11  Shared Shared 89.00 67.27 91.10 93.00 89.15 89.00 78.36 59.48 85.69 82.04 Sh-Private 89.48 67.19 91.31 93.48 89.99 89.48 79.18 61.84 86.55 82.74  Sh-Private Shared 88.43 67.13 91.26 93.59 87.67 89.54 78.77 59.63 84.68 82.30 Sh-Private 90.21 69.15 91.09 93.64 91.38 90.67 82.42 67.44 86.77 84.68 Table 4: Ablation study comparing the performance (F1 score) of models trained with and without: shared-private linear projections of BiLSTM outputs, shared-private CRF heads and multi-task domain classification. Model Open Data Zero-Shot InDomain + DomainClassifier 80.35 57.17 MultDomain-SP-Aux 84.68 64.15 Oracle with InDomain 91.41 80.27 Table 5: Performance in macro-average F1 of the InDomain models with an oracle model selection strategy using gold test data compared to selected methods. Model Open Data Zero-Shot CoNLL 64.26 61.40 Twitter 60.59 63.59 NW 67.19 59.00 BN 66.08 54.82 MZ 57.52 48.62 BC 59.19 46.30 TC 47.25 37.41 WB 44.09 25.41 Table 6: Results of InDomain models trained on each domain independently on the open data set collection and the zero-shot genres reported in macro average of F1 for each domain. posed multi-domain modelling approach. In test phase, we set the batch size as 128. Table 7 shows the average time of inference time used for each model. Our proposed model architecture takes 0.15 ms (33% increase) longer for inference than InDomain or PoolDomain models, which is a result of more model parameters. However, our proposed architecture is still 0.19 ms faster than using the InDomain+DomainClassifier approach. In addition to inference runtime, we also find that the training time is not significantly more than the combined training time of N in-domain models. The main additions are that of the shared layers and the auxiliary task to the components of the N in-domain models and is thus a constant addition in the number of parameters to the total of N indomain models. Hence, the model would scale by a constant with respect to the number of input domains (N+1 number of components, where N is the number of domains). This should allow our proposed model to scale to a large number of domains. This highlights that the proposed MultDomain– SP–Aux model is a viable option for real-world applications. 
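The per-sentence latencies in Table 7 could be measured with a simple harness along the following lines; this is a sketch under our own assumptions (a `predict` method on the model and pre-built batches of size 128), not the measurement code used for the paper.

```python
import time

def avg_ms_per_sentence(model, batches, num_sentences: int) -> float:
    """Average wall-clock inference time in milliseconds per sentence."""
    start = time.perf_counter()
    for batch in batches:          # e.g. batches of 128 sentences, as in Section 7
        model.predict(batch)       # forward pass + (private or shared) CRF decoding
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / num_sentences
```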
Model Runtime (ms) InDomain 0.45 InDomain+DomainClassifier 0.79 PoolDomain 0.45 PoolDomain–Init 0.43 PoolDomain–GradRev 0.47 PoolDomain+DomainFeat 0.45 MultDomain–SP 0.56 MultDomain–SP–Aux 0.60 Table 7: Averaged inference time (in ms) per sentence query on Open Dataset. 8 Conclusions Robustness of NLP models is essential to their wider adoption and usability. Existing NER approaches are widely faced with limited scalability when applied to data that spans multiple domains. This paper introduced three experimental setups that provide a framework for evaluating the robustness of NER models. These include learning from data in multiple domains and testing on all domains, when the domain label of the test point is unknown and when this does not belong to a domain seen in training. Building on past research, we proposed a new neural architecture that achieves substantial improvements of up to 5 F1 points when compared to standard methods. Future work will focus on domain adaptation at the embedding layer. References Oshin Agarwal, Yinfei Yang, Byron C. Wallace, and Ani Nenkova. 2020. Interpretability analysis for named entity recognition to understand sys8485 tem predictions and how they can improve. ArXiv, abs/2004.04564. Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724–728, Minneapolis, Minnesota. Association for Computational Linguistics. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638– 1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. Computer Speech & Language, 44:61–83. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447, Prague, Czech Republic. Association for Computational Linguistics. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128, Sydney, Australia. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Ciprian Chelba and Alex Acero. 2004. Adaptation of maximum entropy capitalizer: Little data can help a lo. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 285–292, Barcelona, Spain. Association for Computational Linguistics. Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098– 3112, Florence, Italy. Association for Computational Linguistics. Xilun Chen and Claire Cardie. 2018. 
Multinomial adversarial networks for multi-domain text classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1226–1240, New Orleans, Louisiana. Association for Computational Linguistics. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 708–716, Prague, Czech Republic. Association for Computational Linguistics. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 423–429, Barcelona, Spain. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256–263, Prague, Czech Republic. Association for Computational Linguistics. Jenny Rose Finkel and Christopher D Manning. 2009. Hierarchical bayesian domain adaptation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 602–610. R Florian, H Hassan, A Ittycheriah, H Jing, N Kambhatla, X Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 1–8, Boston, Massachusetts, USA. Association for Computational Linguistics. Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In Thirty-First AAAI Conference on Artificial Intelligence, AAAI. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57–60, New York City, USA. Association for Computational Linguistics. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. ArXiv, abs/1508.01991. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 8486 Jayant Krishnamurthy and Tom M. Mitchell. 2015. Learning a compositional semantics for Freebase with an open predicate vocabulary. Transactions of the Association for Computational Linguistics, 3:257–270. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2018. 
Transfer learning for named-entity recognition with neural networks. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). Shoushan Li and Chengqing Zong. 2008. Multidomain sentiment classification. In Proceedings of ACL-08: HLT, Short Papers, pages 257–260, Columbus, Ohio. Association for Computational Linguistics. Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2012–2022, Brussels, Belgium. Association for Computational Linguistics. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 879–888, Lisbon, Portugal. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems, NeurIPS, pages 1041–1048. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 28– 36. Association for Computational Linguistics. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31– 41, San Diego, California. Association for Computational Linguistics. Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How transferable are neural networks in NLP applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 479–489, Austin, Texas. Association for Computational Linguistics. Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104– 111, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271–278, Barcelona, Spain. Nanyun Peng and Mark Dredze. 2017. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 91–100, Vancouver, Canada. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. 
Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152, Sofia, Bulgaria. Association for Computational Linguistics. 8487 Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155, Boulder, Colorado. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338–348, Copenhagen, Denmark. Association for Computational Linguistics. Shruti Rijwhani and Daniel Preot¸iuc-Pietro. 2020. Temporally-informed analysis of named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Marina Santini, Richard Power, and Roger Evans. 2006. Implementing a characterization of genre for automatic genre identification of web pages. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 699–706, Sydney, Australia. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Fangzhao Wu and Yongfeng Huang. 2015. Collaborative multi-domain sentiment classification. In 2015 IEEE International Conference on Data Mining, pages 459–468. IEEE. Yi Yang and Jacob Eisenstein. 2015. Unsupervised multi-domain adaptation with feature embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 672–682, Denver, Colorado. Association for Computational Linguistics. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. ICLR. A Domain Prediction We further study the domains that are selected by the methods above by creating confusion matrices between the domain predictions of three setups: domain classification, domain prediction in the proposed MultDomain-SP-Aux model and the oracle in-domain choice on gold data. Figure 2 shows that the Oracle model relies on the corresponding InDomain model to only a limited extent Figure 2: Domain label confusion matrices on the CoNLL-Twitter-OntoNotes data collection. for each model. In uniformly many cases, predictions from other in-domain models are better than the existing in-domain one, showing the variability of the NER models. The domain classifier predictions align closer to the actual domains. The MultDomain-SP-Aux model also tends to predict the domain correctly, but we see that it better learns the NW, WB and BN domains. 
It is worth noting that the MultDomain-SP-Aux model does not use these domain predictions at inference time; for unknown domains or labels, the model uses the shared components. Figure 3: Domain-label prediction frequency comparison on the zero-shot domain data. Finally, we plot the domain prediction distribution on the zero-shot genre data in Figure 3. We find that, similar to the confusion matrices, the oracle strategy has a more even spread in domain selection. We observe similar patterns to the confusion matrices for the InDomain+DomainClassifier and MultDomain-SP-Aux models.
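For readers who want to reproduce this analysis, the comparison above amounts to building pairwise confusion matrices over per-sentence domain choices. The toy sketch below uses made-up labels and predictions (they are not the actual model outputs); in practice the three arrays would hold, for each test sentence, the oracle in-domain choice, the domain classifier's prediction, and the prediction of MultDomain-SP-Aux's auxiliary head.

```python
from sklearn.metrics import confusion_matrix

# Illustrative only: toy domain labels and predictions.
domains = ["nw", "bn", "wb", "tc", "conll", "twitter"]
oracle  = ["nw", "bn", "wb", "nw", "conll", "twitter", "bn", "wb"]
dom_clf = ["nw", "bn", "nw", "nw", "conll", "conll",   "bn", "wb"]

cm = confusion_matrix(oracle, dom_clf, labels=domains)
print(cm)  # rows: oracle in-domain choice, columns: domain-classifier prediction
```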
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8489–8502 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8489 TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories Giannis Karamanolakis∗ Columbia University New York, NY 10027, USA [email protected] Jun Ma, Xin Luna Dong Amazon.com Seattle, WA 98109, USA {junmaa, lunadong}@amazon.com Abstract Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomyaware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts categoryspecific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories. 1 Introduction Real-world e-Commerce platforms contain billions of products from thousands of different categories, organized in hierarchical taxonomies (see Figure 1). Knowledge about products can be represented in structured form as a catalog of product attributes (e.g., flavor) and their values (e.g., “strawberry”). Understanding precise values of product attributes is crucial for many applications including product search, recommendation, and question answering. However, structured attributes in product catalogs are often sparse, leading to unsatisfactory search results and various kinds of defects. Thus, it is invaluable if such structured information can be extracted from product profiles such as product titles and descriptions. Consider for instance the “Ice Cream” product of Figure 1. The corresponding title can potentially ∗Work performed during internship at Amazon. … Product Groceries Health … Fruit Alcoholic
 Strawberry Cheesecake Ice Cream 16 oz … • Great idea for a delicious dessert • Includes Fairtrade certified sugar • Kosher certified dairy Figure 1: A hierarchical taxonomy with various product categories and the public webpage of a product assigned to “Ice Cream” category. be used to extract values for attributes, such as “Ben & Jerry’s” for brand, “Strawberry Cheesecake” for flavor, and “16 oz” for capacity. State-of-the-art approaches for attribute value extraction (Zheng et al., 2018; Xu et al., 2019; Rezk et al., 2019) have employed deep learning to capture features of product attributes effectively for the extraction purpose. However, they are all designed without considering the product categories and thus cannot effectively capture the diversity of categories across the product taxonomy. Categories can be substantially different in terms of applicable attributes (e.g., a “Camera” product should not have flavor), attribute values (e.g., “Vitamin” products may have “fruit” flavor but “Banana” products should not) and more generally, text patterns used to describe the attribute values (e.g., the phrase “infused with” is commonly followed by a scent value such as “lavender” in “Hair Care” products but not in “Mattresses” products). In this paper, we consider attribute value extraction for real-world hierarchical taxonomies with thousands of product categories, where directly 8490 applying previous approaches presents limitations. On the one extreme, ignoring the hierarchical structure of categories in the taxonomy and assuming a single “flat” category for all products does not capture category-specific characteristics and, as we will show in Section 5, is not effective. On the other extreme, training a separate deep neural network for each category in the product taxonomy is prohibitively expensive, and can suffer from lack of training data on small categories. To address the limitations of previous approaches under this challenging setting, we propose a framework for category-specific attribute value extraction that is both efficient and effective. Our deep neural network, TXtract, is taxonomyaware: it leverages the hierarchical taxonomy of product categories and extracts attribute values for a product conditional to its category, such that it automatically associates categories with specific attributes, valid attribute values, and categoryspecific text patterns. TXtract is trained on all categories in parallel and thus can be applied even on small categories with limited labels. The key question we need to answer is how to condition deep sequence models on product categories. Our experiments suggest that following previous work to append category-specific artificial tokens to the input sequence, or concatenate category embeddings to hidden neural network layers is not adequate. There are two key ideas behind our solution. First, we use the category information as context to generate category-specific token embeddings via conditional self-attention. Second, we conduct multi-task training by meanwhile predicting product category from profile texts; this allows us to get token embeddings that are discriminative of the product categories and further improve attribute extraction. 
Multi-task training also makes our extraction model more robust towards wrong category assignment, which occurs often in real e-Commerce websites.1 To the best of our knowledge, TXtract is the first deep neural network that has been applied to attribute value extraction for hierarchical taxonomies with thousands of product categories. In particular, we make three contributions. 1Examples: (1) an ethernet cable assigned under the “Hair Brushes”: https://www.amazon.com/dp/B012AE5EP4; (2) an eye shadow product assigned under “Travel Cases”: https://www.amazon.com/dp/B07BBM5B33. Screenshots of these product profiles are taken in 12/2019 and available at the Appendix. 1. We develop TXtract, a taxonomy-aware deep neural network for attribute value extraction from product profiles for multiple product categories. In TXtract, we capture the hierarchical relations between categories into category embeddings, which in turn we use as context to generate category-specific token embeddings via conditional self-attention. 2. We improve attribute value extraction through multi-task learning: TXtract jointly extracts attribute values and predicts the product’s categories by sharing representations across tasks. 3. We evaluate TXtract on a taxonomy of 4,000 product categories and show that it substantially outperforms state-of-the-art models by up to 10% in F1 and 15% in coverage across all product categories. Although this work focuses on e-Commerce, our approach to leverage taxonomies can be applied to broader domains such as finance, education, and biomedical/clinical research. We leave experiments on these domains for future work. The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 presents background and formally defines the problem. Section 4 presents our solution and Section 5 describes experimental results. Finally, Section 6 concludes and suggests future work. 2 Related Work Here, we discuss related work on attribute value extraction (Section 2.1), and multi-task learning/meta-learning (Section 2.2). 2.1 Attribute Value Extraction from Product Profiles Attribute value extraction was originally addressed with rule-based techniques (Nadeau and Sekine, 2007; Vandic et al., 2012; Gopalakrishnan et al., 2012) followed by supervised learning techniques (Ghani et al., 2006; Putthividhya and Hu, 2011; Ling and Weld, 2012; Petrovski and Bizer, 2017; Sheth et al., 2017). Most recent techniques consider open attribute value extraction: emerging attribute values can be extracted by sequence tagging, similar to named entity recognition (NER) (Putthividhya and Hu, 2011; Chiu and Nichols, 2016; Lample et al., 2016; Yadav and Bethard, 2018). State-of-the-art methods employ deep learning for sequence tagging (Zheng et al., 8491 2018; Xu et al., 2019; Rezk et al., 2019). However, all previous methods can be adapted to a small number of categories and require many labeled datapoints per category.2 Even the Active Learning method of Zheng et al. (2018) requires humans to annotate at least hundreds of carefully selected examples per category. Our work differs from previous approaches as we consider thousands of product categories organized in a hierarchical taxonomy. 2.2 Multi-Task/Meta- Learning Our framework is related to multi-task learning (Caruana, 1997) as we train a single model simultaneously on all categories (tasks). 
Traditional approaches consider a small number of different tasks, ranging from 2 to 20 and employ hard parameter sharing (Alonso and Plank, 2017; Yang et al., 2017; Ruder, 2019): the first layers of neural networks are shared across all tasks, while the separate layers (or “heads”) are used for each individual task. In our setting with thousands of different categories (tasks), our approach is efficient as we use a single (instead of thousands) head and effective as we distinguish between categories through low-dimensional category embeddings. Our work is also related to meta-learning approaches based on task embeddings (Finn et al., 2017; Achille et al., 2019; Lan et al., 2019): the target tasks are represented in a low-dimensional space that captures task similarities. However, we generate category embeddings that reflect the already available, hierarchical structure of product categories in the taxonomy provided by experts. 3 Background and Problem Definition We now provide background on open attribute value extraction (Section 3.1) and define our problem of focus (Section 3.2). 3.1 Open Attribute Value Extraction Most recent approaches for attribute value extraction rely on the open-world assumption to discover attribute values that have never been seen during training (Zheng et al., 2018). State-of-the-art approaches address extraction with deep sequence tagging models (Zheng et al., 2018; Xu et al., 2Zheng et al. (2018) considered 3 categories: “Dog Dood,” “Cameras,” and “Detergent.” Xu et al. (2019) consider 1 category: “Sports & Entertainment.” Rezk et al. (2019) considered 21 categories and trained a separate model for each category. Input Ben & Jerry’s black cherry cheesecake ice cream Output O O O B I E O O Table 1: Example of input/output tag sequences for the “flavor” attribute of an ice cream product. 2019; Rezk et al., 2019): each token of the input sequence x = (x1, . . . , xT ) is assigned a separate tag from {B, I, O, E}, where “B,” “I,” “O,” and “E” represent the beginning, inside, outside, and end of an attribute, respectively. (Not extracting any values corresponds to a sequence of “O”only tags.) Table 1 shows an input/output example of flavor value extraction from (part of) a product title. Given this output tag sequence, “black cherry cheesecake” is extracted as a flavor for the ice cream product. 3.2 Problem Definition We represent the product taxonomy as a tree C, where the root node is named “Product” and each taxonomy node corresponds to a distinct product category: c ∈C. A directed edge between two nodes represents the category-to-subcategory relationship. A product is assigned to a category node in C. In practice, there are often thousands of nodes in a taxonomy tree and the category assignment of a product may be incorrect. We now formally define our problem as follows. DEFINITION: Consider a product from a category c and the sequence of tokens x = (x1, . . . , xT ) from its profile, where T is the sequence length. Let a be a target attribute for extraction. Attribute extraction identifies subsequences of tokens from x, each sub-sequence representing a value for a. For instance, given (1) a product title x =“Ben & Jerry’s Strawberry Cheesecake Ice Cream 16 oz,” (2) a product category c = “Ice Cream,” and (3) a target attribute α = flavor, we would like to extract “Strawberry Cheesecake” as a flavor for this product. Note that we may not see all valid attribute values during training. 
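To make the tagging formulation concrete, the short sketch below (not the authors' code; the function name and whitespace tokenization are illustrative) projects a known attribute value onto a tokenized title as a B/I/O/E sequence, reproducing the example in Table 1 and working in the spirit of the distant-supervision labeling described later in Section 4.1.5.

```python
# Minimal sketch: project a known attribute value onto a tokenized product
# title as a B/I/O/E tag sequence (case-insensitive exact match only).
def bioe_tags(tokens, value_tokens):
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for start in range(len(tokens) - n + 1):
        window = [t.lower() for t in tokens[start:start + n]]
        if window == [v.lower() for v in value_tokens]:
            if n == 1:
                tags[start] = "B"  # single-token value (convention here is an assumption)
            else:
                tags[start] = "B"
                for i in range(start + 1, start + n - 1):
                    tags[i] = "I"
                tags[start + n - 1] = "E"
    return tags

title = "Ben & Jerry's black cherry cheesecake ice cream".split()
print(bioe_tags(title, "black cherry cheesecake".split()))
# ['O', 'O', 'O', 'B', 'I', 'E', 'O', 'O']
```

In TXtract, such label sequences are produced automatically from existing catalog values, so no token-level human annotation is required.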
4 TXtract Model: Taxonomy-Aware Knowledge Extraction In this paper, we address open attribute value extraction using a taxonomy-aware deep sequence tagging model, TXtract. Figure 2 shows the model architecture, which contains two key components: attribute value extraction and product category 8492 ProductEnc x1 Product Profile xT … y1 yT … ˜h1 ˜hT Product Category CRF c Product Taxonomy ec CondSelfAtt Category Embedding h1 hT … … h c′! Product Embedding Predicted Category Taxonomy-Aware
 Attribute Value Extraction CategoryEnc Att CategoryCLF Figure 2: TXtract architecture: tokens (x1, . . . , xT ) are classified to BIOE attribute tags (y1, . . . , yT ) by conditioning to the product’s category embedding ec. TXtract is jointly trained to extract attribute values and assign a product to taxonomy nodes. prediction, accounting for the two tasks in multitask training. Both components are taxonomy aware, as we describe next in detail. 4.1 Taxonomy-Aware Attribute Value Extraction TXtract leverages the product taxonomy for attribute value extraction. The underlying intuition is that knowing the product category may help infer attribute applicability and associate the product with a certain range of valid attribute values. Our model uses the category embedding in conditional self-attention to guide the extraction of categoryspecific attribute values. 4.1.1 Product Encoder The product encoder (“ProductEnc”) represents the text tokens of the product profile (x1, . . . , xT ) as low-dimensional, real-valued vectors: h1, . . . hT = ProductEnc(x1, . . . , xT) ∈Rd. (1) To effectively capture long-range dependencies between the input tokens, we use word embeddings followed by bidirectional LSTMs (BiLSTMs), similar to previous state-of-the-art approaches (Zheng et al., 2018; Xu et al., 2019). 4.1.2 Category Encoder Our category encoder (“CategoryEnc”) encodes the hierarchical structure of product categories such that TXtract understands expert-defined relations across categories, such as “Lager” is a subcategory of “Beer”. In particular, we embed each product category c (taxonomy node) into a lowdimensional latent space: ec = CategoryEnc(c) ∈Rm. (2) To capture the hierarchical structure of the product taxonomy, we embed product categories into the m-dimensional Poincaré ball (Nickel and Kiela, 2017), because its underlying geometry has been shown to be appropriate for capturing both similarity and hierarchy. 4.1.3 Category Conditional Self-Attention The key component for taxonomy-aware value extraction is category conditional selfattention (“CondSelfAtt”). CondSelfAtt generates category-specific token embeddings (˜hi ∈Rd) by conditioning on the category embedding ec: ˜h1, . . . ˜hT = CondSelfAtt((h1, . . . , hT ), ec). (3) To leverage the mutual interaction between all pairs of token embeddings ht, ht′ and the category embedding ec we use self-attention and compute pairwise sigmoid attention weights: αt,t′ = σ(wT αgt,t′ + bα), t, t′ = 1..T. (4) We compute scores gt,t′ using both the token embeddings ht, ht′ and the category embedding ec: 8493 gt,t′ = tanh(W1ht + W2ht′ + W3ec + bg), (5) where W1 ∈Rp×d, W2 ∈Rp×d, W3 ∈Rp×m, wα ∈Rp are trainable attention matrices and bg ∈Rp, bα ∈R, are trainable biases. The T × T attention matrix A = at,t′ stores the pairwise attention weights. The contextualized token embeddings are computed as: ˜ht = T X t′=1 αt,t′ · ht′. (6) 4.1.4 CRF Layer We feed the contextualized token representations ˜h = (˜h1, . . . , ˜hT ) to CRFs to get the sequence of BIOE tags with the highest probability: y1, . . . , yT = CRF(˜h1, . . . , ˜ht). (7) We then extract attribute values as valid subsequences of the input tokens (x1, . . . , xT ) with B/I/E tags (see Section 3.1). 4.1.5 Training for Attribute Value Extraction Our training objective for attribute value extraction is to minimize the negative conditional loglikelihood of the model parameters on N training products xi with ground truth labels ˆyi1 . . . , ˆyiT : La = − N X i=1 log Pr(ˆyi1, . . . 
, ˆyiT | xi, ci) (8) We train our model on all categories in parallel, thus leveraging for a given category products from related categories. To generate training sequence labels from the corresponding attribute values, we use the distant supervision framework of Mintz et al. (2009), similar to Xu et al. (2019), by generating tagging labels according to existing (sparse) values in the Catalog. 4.2 Taxonomy-Aware Product Category Prediction We now describe how we train TXtract for the auxiliary task of product category prediction through multi-task learning. Our main idea is that by encouraging TXtract to predict the product categories using only the product profile, the model will learn token embeddings that are discriminative of the product categories. Thus, we introduce an inductive bias for more effective categoryspecific attribute value extraction. 4.2.1 Attention Layer Our attention component (“Att”) represents the product profile (x1, . . . , xT ) as a single vector h ∈ Rn computed through the weighted combination of the ProductEnc’s embeddings (h1, . . . , hT ): h = T X t=1 βt · ht. (9) This weighted combination allows tokens that are more informative for a product’s category to get higher “attention weights” β1, . . . , βT ∈[0, 1]. For example, we expect xt = “frozen” to receive a relatively high βt for the classification of a product to the “Ice Cream” category. We compute the attention weights as: βt = softmax(uT c tanh(Wcht + bc)), (10) where Wc ∈Rq×d, bc ∈Rq, uc ∈Rq are trainable attention parameters. 4.2.2 Category Classifier Our category classifier (“CategoryCLF”) classifies the product embedding h to the taxonomy nodes. In particular, we use a sigmoid classification layer to predict the probabilities of the taxonomy nodes: p1, . . . , p|C| = sigmoid(Wdh + bd), (11) where Wd ∈R|C|×d and bd ∈R|C| are trainable parameters. We compute sigmoid (instead of softmax) node probabilities because we treat category prediction as multi-label classification, as we describe next. 4.2.3 Training for Category Prediction Training for “flat” classification of products to thousands of categories is not effective because the model is fully penalized if it does not predict the exact true category ˆc while at the same time ignores parent-children category relations. Here, we conduct “hierarchical” classification by incorporating the hierarchical structure of the product taxonomy into a taxonomy-aware loss function. The insight behind our loss function is that a product assigned under ˆc could also be assigned under any of the ancestors of ˆc. Thus, we consider hierarchical multi-label classification and encourage TXtract to assign a product to all nodes in the path from ˆc to the root, denoted by (ˆcK, ˆcK−1, . . . , ˆc1), where K is the level of the 8494 node ˆc in the taxonomy tree. The model is thus encouraged to learn the hierarchical taxonomy relations and will be penalized less if it predicts high probabilities for ancestor nodes (e.g., "Beer" instead of “Lager” in Figure 1). Our minimization objective is the weighted version of the binary cross-entropy (instead of unweighted categorical cross-entropy) loss:3 Lb = X c∈C wc(yc · log pc + (1 −yc) · log(1 −pc)), (12) For the nodes in the path from ˆc to the root (ˆcK, ˆcK−1, . . . , ˆc1), we define positive labels yc = 1 and weights wc that are exponentially decreasing (w0, w1, . . . , wK−1), where 0 < w ≤1 is a tunable hyper-parameter. The remaining nodes in C receive negative labels yc = 0 and fixed weight wc = wK−1. 
4.3 Multi-task Training We jointly train TXtract for attribute value extraction and product category prediction by combining the loss functions of Eq. (8) and Eq. (12): L = γ · La + (1 −γ) · Lb, (13) where γ ∈[0, 1] is a tunable hyper-parameter. Here, we employ multi-task learning, and share ProductEnc across both tasks. 5 Experiments We empirically evaluated TXtract and compared it with state-of-the-art models and strong baselines for attribute value extraction on 4000 product categories. TXtract leads to substantial improvement across all categories, showing the advantages of leveraging the product taxonomy. 5.1 Experimental Settings Dataset: We trained and evaluated TXtract on products from public web pages of Amazon.com. We randomly selected 2 million products from 4000 categories under 4 general domains (subtrees) in the product taxonomy: Grocery, Baby product, Beauty product, and Health product. Experimental Setup: We split our dataset into training (60%), validation (20%), and test (20%) sets. We experimented with extraction of flavor, scent, and brand values from product titles, and 3For simplicitly in notation, we define Eq 12 for a single product. Defining for all training products is straightforward. with ingredient values from product titles and descriptions. For each attribute, we trained TXtract on the training set and evaluated the performance on the held-out test set. Evaluation Metrics: For a robust evaluation of attribute value extraction, we report several metrics. For a test product, we consider as true positive the case where the extracted values match at least one of the ground truth values (as some of the ground truth values may not exist in the text) and do not contain any wrong values.4 We compute Precision (Prec) as the number of “matched” products divided by the number of products for which the model extracts at least one attribute value; Recall (Rec) as the number of “matched” products divided by the number of products associated with attribute values; and F1 score as the harmony mean of Prec and Rec. To get a global picture of the model’s performance, we consider microaverage scores (Mi*), which first aggregates products across categories and computes Prec/Rec/F1 globally. To evaluate per-category performance we consider macro-average scores (Ma*), which first computes Prec/Rec/F1 for each category and then aggregates per-category scores. To evaluate the capability of our model to discover (potentially new) attribute values, we also report the Value vocabulary (Vocab) as the total number of unique attribute values extracted from the test set (higher number is often better); and Coverage (Cov), as the number of products for which the model extracted at least one attribute value, divided by the total number of products. For product category (multi-label) classification we reported the area under Precision-Recall curve (AUPR), Precision, Recall, and F1 score. Model Configuration: We implemented our model in Tensorflow (Abadi et al., 2016) and Keras.5 For a fair comparison, we consider the same configuration as OpenTag for the ProductEnc (BiLSTM)6 and CRF components. For model configuration details see the appendix. Model Comparison: We compared our model with state-of-the-art models in the literature and 4For example, if the ground-truth is [v1] but the system extracts [v1, v2, v3], the extraction is considered as incorrect. 
5https://keras.io/ 6We expect to see further performance improvement by considering pre-trained language models (Radford et al., 2018; Devlin et al., 2019) for ProductEnc, which we leave for future work. 8495 Attr. Model Vocab Cov Micro F1 Micro Prec Micro Rec Macro F1 Macro Prec Macro Rec Flavor OpenTag 6,756 73.2 57.5 70.3 49.6 54.6 68.0 47.3 TXtract 13,093 83.9 ↑14.6% 63.3 ↑10.1% 70.9 ↑0.9% 57.8 ↑16.5% 59.3 ↑8.6% 68.4 ↑0.6% 53.8 ↑13.7% Scent OpenTag 10,525 75.8 70.6 87.6 60.2 59.3 79.7 50.8 TXtract 13,525 83.2 ↑9.8% 73.7 ↑4.4% 86.1 ↓1.7% 65.7 ↑9.1% 59.9 ↑10.1% 78.3 ↓1.8% 52.1 ↑2.6% Brand OpenTag 48,943 73.1 63.4 81.6 51.9 51.7 75.1 41.5 TXtract 64,704 82.9 ↑13.4% 67.5 ↑6.5% 82.7 ↑1.3% 56.5 ↑8.1% 55.3 ↑7.0% 75.2 ↑0.1% 46.8 ↑12.8% Ingred. OpenTag 9,910 70.0 35.7 46.6 29.1 20.9 34.6 16.7 TXtract 18,980 76.4 ↑9.1% 37.1 ↑3.9% 48.3 ↑3.6% 30.1 ↑3.3% 24.2 ↑15.8% 37.4 ↑8.1% 19.8 ↑18.6% Average relative increase ↑11.7% ↑6.2% ↑1.0% ↑9.3% ↑10.4% ↑6.8% ↑11.9% Table 2: Extraction results for flavor, scent, brand, and ingredients across 4,000 categories. Across all attributes, TXtract improves OpenTag by 11.7% in coverage, 6.2% in micro-average F1, and 10.4% in macro-average F1. introduced additional strong baselines: 1. “OpenTag”: the model of Zheng et al. (2018). It is a special case of our system that consists of the ProductEnc and CRF components without leveraging the taxonomy. 2. “Title+*”: a class of models for conditional attribute value extraction, where the taxonomy is introduced by artificially appending extra tokens x′ 1, . . . , x′ T ′ and a special separator token (<SEP>) to the beginning of a product’s text, similar to Johnson et al. (2017): x′ = (x′ 1, . . . , x′ T ′, <SEP>, x1, . . . , xT ) Tokens x′ 1, . . . , x′ T ′ contain category information such as unique category id (“Title+id”), category name (“Title+name”), or the names of all categories in the path from the root to the category node, separated by an extra token <SEP2> (“Title+path”). 3. “Concat-*”: a class of models for taxonomyaware attribute value extraction that concatenate the category embedding to the word embedding (-wemb) or hidden BiLSTM embedding layer (-LSTM) instead of using conditional self-attention. We evaluate Euclidean embeddings (“Concat-*-Euclidean”) and Poincaré embeddings (“Concat-*-Poincaré”). 4. “Gate”: a model that leverages category embeddings ec in a gating layer (Cho et al., 2014; Ma et al., 2019): ˜ht = ht ⊗σ(W4ht + W5ec), where W4 ∈Rp×d, W5 ∈Rp×m are trainable matrices, and ⊗denotes element-wise multiplication. Our conditional self-attention is different as it leverages pairwise instead of singletoken interactions with category embeddings. 5. “CondSelfAtt”: the model with our conditional self-attention mechanism (Section 4.1.3). CondSelfAtt extracts attribute values but does not predict the product category. 6. “MT-*”: a multi-task learning model that jointly performs (not taxonomy-aware) attribute value extraction and category prediction. “MT-flat” assumes “flat” categories, whereas “MT-hier” considers the hierarchical structure of the taxonomy (Section 4.2.3). 7. “TXtract”: our model that jointly performs taxonomy-aware attribute value extraction (same as CondSelfAtt) and hierarchical category prediction (same as MT-hier). Here, we do not report previous models (e.g., BiLSTM-CRF) for sequence tagging (Huang et al., 2015; Kozareva et al., 2016; Lample et al., 2016), as OpenTag has been shown to outperform these models in Zheng et al. (2018). 
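To make two of the compared conditioning mechanisms concrete, below is a minimal NumPy sketch of the conditional self-attention of Section 4.1.3 (Eqs. 4-6) and of the "Gate" baseline above. It is illustrative rather than the released implementation; in particular, the Gate weight shapes are assumed to be d x d and d x m so that the element-wise product is well defined.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cond_self_att(H, e_c, W1, W2, W3, b_g, w_a, b_a):
    # Category-conditional self-attention (Eqs. 4-6).
    # H: (T, d) token embeddings from ProductEnc; e_c: (m,) category embedding.
    T = H.shape[0]
    A = np.zeros((T, T))                           # pairwise sigmoid attention weights
    for t in range(T):
        for t2 in range(T):
            g = np.tanh(W1 @ H[t] + W2 @ H[t2] + W3 @ e_c + b_g)  # (p,)
            A[t, t2] = sigmoid(w_a @ g + b_a)
    return A @ H                                   # h~_t = sum_t' a_{t,t'} * h_{t'}

def gate_baseline(H, e_c, W4, W5):
    # "Gate" baseline: h~_t = h_t * sigmoid(W4 h_t + W5 e_c),
    # assuming W4: (d, d) and W5: (d, m).
    return H * sigmoid(H @ W4.T + W5 @ e_c)
```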
Moreover, when considering attributes separately, the model of Xu et al. (2019) is the same as OpenTag, but with a different ProductEnc component; since we use the same ProductEnc for all alternatives, we expect/observe the same trend and do not report its performance. 5.2 Experimental Results Table 2 reports the results across all categories. For detailed results see Figure 6 in Appendix. Over all categories, our taxonomy-aware TXtract substantially improves over the state-of-the-art OpenTag by up to 10.1% in Micro F1, 14.6% in coverage, and 93.8% in vocabulary (for flavor). Table 3 shows results for the four domains of our taxonomy under different training granularities: training on all domains versus training only on the target domain. Regardless of the configuration, TXtract substantially outperforms OpenTag, showing the general advantages of our approach. Interestingly, although training a single model on all of the four domains obtains lower F1 for Flavor, it obtains better results for Scent: training fewer models does not necessarily lead to 8496 Domain OpenTag/TXtract Train Test Attr. Micro F1 all Grocery Flavor 60.3 / 64.9 ↑7.6% Grocery Grocery 65.4 / 70.5 ↑7.8% all Baby Flavor 54.4 / 63.0 ↑15.8% Baby Baby 69.2 / 71.8 ↑3.8% all Beauty Scent 76.9 / 79.5 ↑3.4% Beauty Beauty 76.9 / 79.0 ↑2.7% all Health Scent 63.0 / 69.1 ↑9.7% Health Health 60.9 / 63.5 ↑4.3% Table 3: Evaluation results for each domain under training configurations of different granularity. TXtract outperforms OpenTag under all configurations. lower quality and may actually improve extraction by learning from neighboring taxonomy trees. 5.3 Ablation Study Table 4 reports the performance of several alternative approaches for flavor value extraction across all categories. OpenTag does not leverage the product taxonomy, so it is outperformed by most approaches that we consider in this work. Implicit vs. explicit conditioning on categories. “Title+*” baselines fail to leverage the taxonomy, thus leading to lower F1 score than OpenTag: implicitly leveraging categories as artificial tokens appended to the title is not effective in our setting. Representing the taxonomy with category embeddings leads to significant improvement over OpenTag and “Title+*” baselines: even simpler approaches such as “Concat-*-Euclidean” outperform OpenTag across all metrics. However, “Concat-*” and “Gate-*” do not leverage category embeddings as effectively as “CondSelfAtt”: conditioning on the category embedding for the computation of the pair-wise attention weights in the self-attention layer appears to be the most effective approach for leveraging the product taxonomy. Multi-task Learning. In Table 4, both MT-flat and MT-hier, which do not condition on the product taxonomy, outperform OpenTag on attribute value extraction: by learning to predict the product category, our model implicitly learns to condition on the product category for effective attribute value extraction. MT-hier outperforms MT-flat: leveraging the hierarchical structure of the taxonomy is more effective than assuming flat categories. 
Table 5 shows that category prediction is more effective when considering the hierarchiModel TX MT Micro F1 OpenTag 57.5 Title+id ✓ 55.7 ↓3.1% Title+name ✓ 56.9 ↓1.0% Title+path ✓ 54.3 ↓5.6% Concat-wemb-Euclidean ✓ 60.1 ↑4.5% Concat-wemb-Poincaré ✓ 60.6 ↑5.4% Concat-LSTM-Euclidean ✓ 60.1 ↑4.5% Concat-LSTM-Poincaré ✓ 60.8 ↑5.7% Gate-Poincaré ✓ 60.6 ↑5.4% CondSelfAtt-Poincaré ✓ 61.9 ↑7.7 MT-flat ✓ 60.9 ↑5.9% MT-hier ✓ 61.5 ↑7.0% Concat & MT-hier ✓ ✓ 62.3 ↑8.3% Gate & MT-hier ✓ ✓ 61.1 ↑6.3% CondSelfAtt & MT-hier ✓ ✓ 63.3 ↑10.1% Table 4: Ablation study for flavor extraction across 4,000 categories. “TX” column indicates whether the taxonomy is leveraged for attribute value extraction (Section 4.1). “MT” column indicates whether multitask learning is used (Section 4.2). Category Prediction AUPR F1 Prec Rec Flat 0.61 53.9 74.2 48.0 Hierarchical 0.68 62.7 80.4 56.9 Table 5: Performance of product classification to the 4,000 nodes in the taxonomy using flat versus hierarchical multi-task learning. cal structure of the categories into our taxonomyaware loss function than assuming flat categories. 5.4 Visualization of Poincaré Embeddings Poincaré embeddings effectively capture the hierarchical structure of the product taxonomy: Figure 3a plots the embeddings of product categories in the 2-dimensional Poincaré disk.7 Figure 3b plots the embeddings trained in the 50-dimensional Poincaré ball and projected to the 2-dimensional Euclidean space through tSNE (Maaten and Hinton, 2008). 5.5 Examples of Extracted Attribute Values Figure 4 shows examples of product titles and attribute values extracted by OpenTag or TXtract. TXtract is able to detect category-specific values: in Figure 4a, “Purple Lemonade” is a valid flavor for “Vitamin Pills” but not for most of other categories. OpenTag, which ignores product categories, fails to detect this value while TXtract 7We train 2-dimensional Poincaré embeddings only for visualization. In our experiments we use d = 50 dimensions. 8497 (a) Taxonomy embeddings in the 2-dimensional Poincaré disk, where the distance of points grows exponentially to the radius. Leaf nodes are placed close to the boundary of the disk. (b) Taxonomy embeddings projected from the 50-dimensional Poincaré ball to the 2-dimensional Euclidean space using tSNE. Small clusters correspond to taxonomy sub-trees. Figure 3: Poincaré embeddings of taxonomy nodes (product categories). Each point is a product category. Categories are colored based on the first-level taxonomy where they belong (green: Grocery products, blue: Baby products, red: Beauty products, yellow: Health products). Related categories in the taxonomy (e.g., categories belonging to the same sub-tree) have similar embeddings. Category = Vitamins & Dietary Supplements ASIN = B00CX96KTQ Title = Controlled Labs Purple Wraath 90 Servings - Purple Lemonade OpenTag (flavor) = (empty) TXtract (flavor) = “purple lemonade” (a) Category = Sports Nutrition ASIN = B005P0LKTU Title = Click - Espresso Protein Drink Vanilla Latte - 16 oz. OpenTag (flavor) = “espresso” TXtract (flavor) = “vanilla latte” (b) Category = Vitamins & Dietary Supplements ASIN = B015K3Y728 Title = Mason Vitamins Melatonin 500 mcg Fast Meltz Tablets, Fruit, 60 Count OpenTag (flavor) = (empty) TXtract (flavor) = “fruit” (c) Category = Eyeshadow ASIN = B07BBM5B33 Title = HP95(TM) Fashion Glitter Matte Eye Shadow Powder 
 Palette Single Shimmer Eyeshadow (10#) OpenTag (scent) = palette TXtract (scent) = (empty) (d) Figure 4: Examples of extracted attribute values from OpenTag and TXtract. successfully extracts it as a flavor. TXtract also learns attribute applicability: in Figure 4d, OpenTag erroneously extracts “palette” as scent for an “Eyeshadow” product, while this product should not have scent; on the other hand, TXtract, which considers category embeddings, does not extract any scent values for this product. 6 Conclusions and Future Work We present a novel method for large-scale attribute value extraction for products from a taxonomy with thousands of product categories. Our proposed model, TXtract, is both efficient and effective: it leverages the taxonomy into a deep neural network to improve extraction quality and can extract attribute values on all categories in parallel. TXtract significantly outperforms state-of-the-art approaches and strong baselines under a taxonomy with thousands of product categories. Interesting future work includes applying our techniques to different taxonomies (e.g., biomedical) and training a model for different attributes. Acknowledgments The authors would like to sincerely thank Ron Benson, Christos Faloutsos, Andrey Kan, Yan Liang, Yaqing Wang, and Tong Zhao for their insightful comments on the paper, and Gabriel Blanco, Alexandre Manduca, Saurabh Deshpande, Jay Ren, and Johanna Umana for their constructive feedback on data integration for the experiments. 8498 References Mart’ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283. Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, and Pietro Perona. 2019. Task2vec: Task embedding for meta-learning. arXiv preprint arXiv:1902.03545. Hector Martinez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In EACL 2017-15th Conference of the European Chapter of the Association for Computational Linguistics, pages 1–10. Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75. Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org. Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew Fano. 2006. Text mining for product attribute extraction. 
ACM SIGKDD Explorations Newsletter, 8(1):41–48. Vishrawas Gopalakrishnan, Suresh Parthasarathy Iyengar, Amit Madaan, Rajeev Rastogi, and Srinivasan Sengamedu. 2012. Matching product titles using web-based enrichment. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 605–614. ACM. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Zornitsa Kozareva, Qi Li, Ke Zhai, and Weiwei Guo. 2016. Recognizing salient entities in shopping queries. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 107–111. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. Lin Lan, Zhenguo Li, Xiaohong Guan, and Pinghui Wang. 2019. Meta reinforcement learning with task embedding and shared policy. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 2794–2800. International Joint Conferences on Artificial Intelligence Organization. Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Chen Ma, Peng Kang, and Xue Liu. 2019. Hierarchical gating networks for sequential recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. Maximillian Nickel and Douwe Kiela. 2017. Poincaré embeddings for learning hierarchical representations. In Advances in neural information processing systems, pages 6338–6347. 8499 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Petar Petrovski and Christian Bizer. 2017. Extracting attribute-value pairs from product specifications on the web. In Proceedings of the International Conference on Web Intelligence, pages 558–565. ACM. Duangmanee Pew Putthividhya and Junling Hu. 2011. Bootstrapped named entity recognition for product attribute extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1557–1567. Association for Computational Linguistics. 
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. https://blog.openai.com/language-unsupervised. Martin Rezk, Laura Alonso Alemany, Lasguido Nio, and Ted Zhang. 2019. Accurate product attribute extraction on the field. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 1862–1873. IEEE. Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, National University Of Ireland, Galway. Amit P. Sheth, Axel Ngonga, Yin Wang, Elizabeth Chang, Dominik Slezak, Bogdan Franczyk, Rainer Alt, Xiaohui Tao, and Rainer Unland, editors. 2017. Proceedings of the International Conference on Web Intelligence, Leipzig, Germany, August 23-26, 2017. ACM. Damir Vandic, Jan-Willem Van Dam, and Flavius Frasincar. 2012. Faceted product search powered by the semantic web. Decision Support Systems, 53(3):425–437. Huimin Xu, Wenting Wang, Xinnian Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5214– 5223. Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2145–2158. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In Proceedings of the International Conference on Learning Representations. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1049–1058. ACM. 8500 A Appendix For reproducibility, we provide details on TXtract configuration (Section A.1). We also report detailed evaluation results (Section A.2). A.1 TXtract Configuration We implemented our model in Tensorflow (Abadi et al., 2016) and Keras.8 To achieve a fair comparison with OpenTag (Zheng et al., 2018), and to ensure that performance improvements stem from leveraging the product taxonomy, we use exactly the same components and configuration as OpenTag for ProductEnc: We initialize the word embedding layer using 100-dimensional pre-trained Glove embeddings (Pennington et al., 2014). We use masking to support variable-length input. Each of the LSTM layers has a hidden size of 100 dimensions, leading to a BiLSTM layer with d = 200 dimensional embeddings. We set the dropout rate to 0.4. For CategoryEnc, we train m = 50-dimensional Poincaré embeddings.9 For CondSelfAtt, we use p = 50 dimensions. For Att, we use q = 50 dimensions. For multi-task training, we obtain satisfactory performance with default hyper-parameters γ = 0.5, w = 1, while we leave fine-tuning for future work. For parameter optimization, we use Adam (Kingma and Ba, 2014) with a batch size of 32. We train our model for up to 30 epochs and quit training if the validation loss does not decrease for more than 3 epochs. A.2 Extra Results Table 6 reports extraction results (of TXtract trained on all domains) for each domain separately. Table 7 reports category classification results for each domain separately. Table 8 reports several evaluation metrics for our ablation study. 
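As a rough guide to the configuration above, the following tf.keras sketch builds a ProductEnc with these hyper-parameters. It is not the authors' released code; details such as where dropout is applied, whether the GloVe vectors are fine-tuned, and the omitted CRF output layer are assumptions.

```python
import tensorflow as tf

def build_product_encoder(vocab_size, glove_matrix, max_len=None):
    # ProductEnc per A.1: 100-d GloVe-initialized embeddings, masking for
    # variable-length input, BiLSTM with 100 units per direction (d = 200),
    # and dropout 0.4. The CRF layer on top is omitted from this sketch.
    tokens = tf.keras.Input(shape=(max_len,), dtype="int32")
    x = tf.keras.layers.Embedding(
        vocab_size, 100, mask_zero=True,
        embeddings_initializer=tf.keras.initializers.Constant(glove_matrix))(tokens)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(100, return_sequences=True))(x)
    x = tf.keras.layers.Dropout(0.4)(x)            # h_1 .. h_T, each in R^200
    return tf.keras.Model(tokens, x)
```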
8https://keras.io/ 9We use the public code in provided by Nickel and Kiela (2017): https://github.com/facebookresearch/poincareembeddings 8501 Figure 5: Snapshot of https://www.amazon.com/dp/B012AE5EP4. This ethernet cable has been erroneously assigned under “Hair Brushes” category. (The assignment can be seen on the top left part of the screenshot.) Figure 6: Snapshot of https://www.amazon.com/dp/B07BBM5B33. This eye shadow product has been erroneously assigned under “Travel Cases” category. (The assignment can be seen on the top left part of the screenshot.) 8502 Grocery Products Baby Products Beauty Products Health Products Attr. Model Vocab Cov miF1 maF1 Vocab Cov miF1 maF1 Vocab Cov miF1 maF1 Vocab Cov miF1 maF1 Flavor OpenTag 4364 79.6 60.3 59.0 264 53.1 54.4 45.0 832 45.8 41.1 32.0 1296 58.2 53.9 47.0 TXtract 8607 89.1 64.9 62.8 414 72.8 63.0 56.1 1684 61.3 46.5 35.6 2388 71.5 67.3 57.5 Scent OpenTag 446 75.5 56.8 48.4 593 69.7 35.7 20.3 7007 78.5 76.9 67.9 2479 68.1 63.0 47.5 TXtract 565 87.4 61.2 51.4 589 72.1 38.1 22.0 9048 85.6 79.5 68.4 3322 79.9 69.1 48.2 Brand OpenTag 5150 68.8 62.9 52.7 11166 72.2 66.0 54.0 15394 77.2 68.8 54.7 17233 71.2 57.8 45.9 TXtract 6944 78.9 67.4 55.1 14965 81.0 72.9 56.2 19821 85.1 72.7 57.2 22974 82.9 60.5 52.4 Ingred. OpenTag 3402 82.5 40.5 30.1 490 50.7 27.7 22.4 2767 65.1 33.6 26.8 3251 66.7 34.6 29.9 TXtract 6155 87.3 43.1 36.5 835 59.7 30.5 24.3 5539 70.6 32.9 26.6 6451 74.2 36.5 31.2 Table 6: Extraction results for flavor, scent, brand, and ingredients for each of our 4 domains (sub-trees). Grocery Products Baby Products Beauty Products Health Products MT type AUPR F1 Prec Rec AUPR F1 Prec Rec AUPR F1 Prec Rec AUPR F1 Prec Rec flat 45.9 21.4 63.3 13.7 65.9 23.7 68.4 17.4 63.7 62.4 78.8 56.5 49.8 38.8 60.7 32.7 hierarchical 47.3 29.7 68.4 19.9 68.5 29.4 72.6 22.9 72.1 71.5 83.1 66.4 56.3 47.7 74.6 39.8 Table 7: Product category classification results Micro-average Macro-average Model TX MT Vocab Cov (%) F1 Prec Rec F1 Prec Rec OpenTag 6,756 73.2 57.5 70.3 49.6 54.6 68.0 47.3 Title+id ✓ 6,400 69.1 55.7 70.6 46.9 53.3 68.9 45.1 Title+name ✓ 5,328 70.6 56.9 71.2 48.4 54.2 69.1 46.3 Title+path ✓ 4,608 64.6 54.3 72.0 44.6 51.9 69.1 43.2 Concat-wemb-Euclidean ✓ 9,768 76.3 60.1 71.6 52.9 57.4 69.0 50.6 Concat-wemb-Poincaré ✓ 8,684 74.3 60.6 73.4 52.7 57.7 70.2 50.6 Concat-LSTM-Euclidean ✓ 9,255 75.9 60.1 71.9 52.8 57.5 69.4 50.6 Concat-LSTM-Poincaré ✓ 8,893 75.2 60.8 72.9 53.2 57.9 70.3 50.9 Gate-Poincaré ✓ 9,690 77.1 60.6 71.5 53.5 57.7 69.3 51.0 CondSelfAtt-Poincaré ✓ 12,558 83.1 61.9 68.8 57.0 58.3 66.5 53.1 MT-flat ✓ 8,699 72.2 60.9 74.7 52.4 57.8 70.3 50.5 MT-hier ✓ 9,528 73.4 61.5 74.5 53.2 58.3 70.9 51.1 Concat & MT-hier ✓ ✓ 9,316 74.6 62.3 75.0 54.3 59.0 70.8 52.1 Gate & MT-hier ✓ ✓ 10,845 80.0 61.1 70.7 54.8 57.9 67.9 51.8 CondSelfAtt & MT-hier (TXtract) ✓ ✓ 13,093 83.9 63.3 70.9 57.8 59.3 68.4 53.8 Table 8: Results for flavor extraction across all categories. “TX” column indicates whether the taxonomy is leveraged for attribute value extraction (Section 4.1). “MT” column indicates whether multi-task learning is used (Section 4.2).
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8503–8511 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8503 TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition Bill Yuchen Lin†∗, Dong-Ho Lee†∗, Ming Shen†, Ryan Moreno†, Xiao Huang†, Prashant Shiralkar‡, Xiang Ren† †University of Southern California ‡ Amazon {yuchen.lin,dongho.lee,shenming,morenor}@usc.edu, {huan183,xiangren}@usc.edu, [email protected] Abstract Training neural models for named entity recognition (NER) in a new domain often requires additional human annotations that are usually expensive and time-consuming to collect. Thus, a crucial research question is how to obtain supervision in a cost-effective way. In this paper, we introduce “entity triggers,” an effective proxy of human explanations for facilitating label-efficient learning of NER models. An entity trigger is defined as a group of words in a sentence that helps to explain why humans would recognize an entity in the sentence. We crowd-sourced 14k entity triggers for two well-studied NER datasets1. Our proposed model, Trigger Matching Network, jointly learns trigger representations and soft matching module with self-attention such that can generalize to unseen sentences easily for tagging. The framework is significantly more cost-effective than the traditional frameworks. 1 Introduction Named entity recognition (NER) is a fundamental information extraction task that focuses on extracting entities from a given text and classifying them using pre-defined categories (e.g., persons, locations, organizations) (Nadeau and Sekine, 2007). Recent advances in NER have primarily focused on training neural network models with an abundance of human annotations, yielding state-of-theart results (Lample et al., 2016). However, collecting human annotations for NER is expensive and time-consuming, especially in social media messages (Lin et al., 2017a) and technical domains such as biomedical publications, financial documents, legal reports, etc. As we seek to advance NER into more domains with less human effort, ∗The first two authors contributed equally. 1The code, data, and a longer version of the paper are at http://github.com/INK-USC/TriggerNER 𝑡! = 2,5,6 →7 𝑡" = 11,12,13 →7 2 5 6 7 8 We had a fantastic lunch at Rumble Fish yesterday , I-RES B-RES 11 12 13 where the food is my favorite . Figure 1: We show two individual entity triggers: t1 (“had ... lunch at”) and t2 (“where the food”). Both are associated to the same entity mention “Rumble Fish” (starting from 7th token) typed as restaurant (RES). how to learn neural models for NER in a costeffective way becomes a crucial research problem. The standard protocol for obtaining an annotated NER dataset involves an annotator selecting token spans in a sentence as mentions of entities, and labeling them with an entity type. However, such annotation process provides limited supervision per example. Consequently, one would need large amount of annotations in order to train high-performing models for a broad range of entity types, which can clearly be cost-prohibitive. The key question is then how can we learn an effective NER model in presence of limited quantities of labeled data? We, as humans, recognize an entity within a sentence based on certain words or phrases that act as cues. 
For instance, we could infer that ‘Kasdfrcxzv’ is likely to be a location entity in the sentence “Tom traveled a lot last year in Kasdfrcxzv.” We recognize this entity because of the cue phrase “travel ... in,” which suggests there should be a location entity following the word ’in’. We call such phrases “entity triggers.” Similar to the way these triggers guide our recognition process, we hypothesize that they can also help the model to learn to generalize efficiently. Specifically, we define an “entity trigger” (or 8504 trigger for simplicity) as a group of words that can help explain the recognition process of a particular entity in the same sentence. For example, in Figure 1, “had ... lunch at”2 and “where the food” are two distinct triggers associated with the RESTAURANT entity “Rumble Fish.” An entity trigger should be a necessary and sufficient cue for humans to recognize its associated entity even if we mask the entity with a random word. Thus, unnecessary words such as “fantastic” should not be considered part of the entity trigger. In this paper, we argue that a combination of entity triggers and standard entity annotations can enhance the generalization power of NER models. This approach is more powerful because unlabeled sentences, such as “Bill enjoyed a great dinner with Alice at Zcxlbz.”, can be matched with the existing trigger “had ... lunch at” via their semantic relatedness. This makes it easier for a model to recognize “Zcxlbz” as a RESTAURANT entity. In contrast, if we only have the entity annotation itself (i.e., “Rumble Fish”) as supervision, the model will require many similar examples in order to learn this simple pattern. We hypothesize that using triggers as additional supervision is a more cost-effective way to train models. We crowd-sourced annotations of 14,708 triggers on two well-studied NER datasets to study their usefulness for the NER task. Also, we propose a novel framework named Trigger Matching Network that learns trigger representations indicative of entity types during the training phase, and identifies triggers in an unlabeled sentence at inference time to guide a traditional entity tagger for delivering better overall NER performance. Different from conventional training, our learning process has two stages, where the first stage comprises jointly training a trigger classifier and the semantic trigger matcher, followed by a second stage that leverages the trigger representation and the encoding of the given sentence using an attention mechanism to learn a tagger. Experiments show that the proposed model using only 20% of the trigger-annotated sentences results in a comparable performance as using 70% of conventional annotated sentences. 2 Problem Formulation We consider the problem of how to cost-effectively learn a model for NER using entity triggers. In this 2Note that a trigger can be a discontinuous phrase. section, we introduce basic concepts and their notations, present the conventional data annotation process for NER, and provide a formal task definition for learning using entity triggers. In the conventional setup for supervised learning for NER, we let x = [x(1), x(2), · · · , x(n)] denote a sentence in the labeled training corpus DL. Each labeled sentence has a NER-tag sequence y = [y(1), y(2), · · · , y(n)], where y(i) ∈Y and Y can be {O, B-PER, · · · }. Thus, we have DL = {(xi, yi)}, and an unlabeled corpus DU = {xi}. We propose to annotate entity triggers in sentences. 
We use T(x, y) to represent the set of annotated entity triggers, where each trigger ti ∈ T(x, y) is associated with an entity index e and a set of word indices {wi}. Note that we use the index of the first word of an entity as its entity index. That is, t = ({w1, w2, · · · } → e), where e and the wi are integers in the range [1, |x|]. Adding triggers creates a new form of data, DT = {(xi, yi, T(xi, yi))}. Our goal is to learn a model for NER from a trigger-labeled dataset DT that achieves learning performance comparable to a model trained on a much larger DL.
3 Trigger Matching Networks
We propose a straightforward yet effective framework, named Trigger Matching Networks (TMN), consisting of a trigger encoder (TrigEncoder), a semantic-based trigger matching module (TrigMatcher), and a base sequence tagger (SeqTagger). The framework has two learning stages: the first stage (Section 3.1) jointly learns the TrigEncoder and TrigMatcher, and the second stage (Section 3.2) uses the trigger vectors to learn a sequence tagger for NER.
3.1 Trigger Encoding & Semantic Matching
Learning trigger representations and semantically matching them with sentences are inseparable tasks. The desired trigger vectors capture semantics in an embedding space shared with the token hidden states, so we learn an attention-based matching module that aligns entity triggers with sentences in that space. Specifically, for a sentence x with multiple entities {e1, e2, · · · }, we assume without loss of generality that each entity ei has a set of triggers Ti = {t1(i), t2(i), · · · }.
Figure 2: Two-stage training in Trigger Matching Network (Left). We first jointly train TrigEncoder (via trigger classification) and TrigMatcher (via contrastive loss). Then, we reuse the training data trigger vectors as attention queries in SeqTagger. The inference process (Right) uses the TrigMatcher to retrieve the k nearest triggers and averages their trigger vectors as the attention query for the trained SeqTagger. Thus, an unseen cue phrase (e.g., "head of ... team") can be matched with a seen trigger (e.g., "leader of ... group").
To enable more efficient batch-based training, we reformat the trigger-based annotated dataset DT such that each new sequence contains only one entity and one trigger.
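To make the trigger annotations and this reformatting step concrete, here is a small Python sketch using the Figure 1 sentence. The field names are hypothetical illustrations, not the released data format.

```python
# Illustrative sketch of the trigger-annotated data described in Section 2.
# The Figure 1 sentence: entity "Rumble Fish" starts at token index 7 (1-based),
# with triggers t1 = ({2, 5, 6} -> 7) and t2 = ({11, 12, 13} -> 7).
example = {
    "tokens": ["We", "had", "a", "fantastic", "lunch", "at", "Rumble", "Fish",
               "yesterday", ",", "where", "the", "food", "is", "my", "favorite", "."],
    "tags": ["O", "O", "O", "O", "O", "O", "B-RES", "I-RES",
             "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    # each trigger: (set of word indices, index of the entity's first word), 1-based
    "triggers": [({2, 5, 6}, 7), ({11, 12, 13}, 7)],
}

def reformat(example):
    """Split a sentence with several triggers into one-entity-one-trigger
    instances (x, e, t), as used for batch-based training in Section 3.1."""
    for words, entity_idx in example["triggers"]:
        yield {
            "tokens": example["tokens"],
            "tags": example["tags"],
            "entity_index": entity_idx,
            "trigger_words": sorted(words),
        }

instances = list(reformat(example))  # two training instances for this sentence
```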
We then create a training instance by pairing each entity with one of its triggers, denoted (x, e_i, t_j^(i)). For each reformed training instance (x, e, t), we first apply a bidirectional LSTM (BLSTM) to the sequence of word vectors of x, obtaining a sequence of hidden states that are the contextualized word representations h_i for each token x_i in the sentence. We use H to denote the matrix containing the hidden vectors of all of the tokens, and Z to denote the matrix containing the hidden vectors of all trigger tokens inside the trigger t. To learn an attention-based representation of both triggers and sentences, we follow the self-attention method introduced by Lin et al. (2017b):

\vec{a}_{sent} = \mathrm{SoftMax}\left(W_2 \tanh\left(W_1 H^\top\right)\right), \qquad g_s = \vec{a}_{sent} H
\vec{a}_{trig} = \mathrm{SoftMax}\left(W_2 \tanh\left(W_1 Z^\top\right)\right), \qquad g_t = \vec{a}_{trig} Z

W_1 and W_2 are two trainable parameters for computing the self-attention score vectors \vec{a}_{sent} and \vec{a}_{trig}. The final sentence vector g_s is the weighted sum of the token vectors of the entire sentence; similarly, g_t is the final trigger vector, the weighted sum of the token vectors in the trigger. We want to use the type of the associated entity as supervision to guide the trigger representation. Thus, the trigger vector g_t is further fed into a multi-class classifier to predict the type of the associated entity e (such as PER, LOC, etc.), which we denote type(e). The trigger classification loss is

L_{TC} = -\sum \log P\left(\mathrm{type}(e) \mid g_t; \theta_{TC}\right),

where \theta_{TC} is a model parameter to learn. To learn to match triggers and sentences based on their attention-based representations, we use a contrastive loss (Hadsell et al., 2006). The intuition is that similar triggers and sentences should have close representations (i.e., a small distance d between them). We create negative examples (i.e., mismatches) for training by randomly mixing the triggers and sentences, because TrigMatcher needs to be trained with both positive and negative examples of the form (sentence, trigger, label). For the negative examples, we expect a margin m between their embeddings. The contrastive loss of soft matching is as follows, where 1_{matched} is 1 if the trigger was originally in this sentence and 0 if it was not:

d = \lVert g_s - g_t \rVert_2
L_{SM} = \left(1 - \mathbb{1}_{matched}\right) \tfrac{1}{2} d^2 + \mathbb{1}_{matched} \tfrac{1}{2} \left\{\max\left(0, m - d\right)\right\}^2

The joint loss of the first stage is thus L = L_{TC} + \lambda L_{SM}, where \lambda is a hyper-parameter to tune.
3.2 Trigger-Enhanced Sequence Tagging
The learning objective in this stage is to output the tag sequence y. Following the most common neural NER architecture, BLSTM-CRF (Ma and Hovy, 2016), we incorporate the entity triggers as attention queries to train a trigger-enhanced sequence tagger for NER. Note that the BLSTM used in the TrigEncoder and TrigMatcher modules is the same BLSTM we use in the SeqTagger to obtain H, the matrix containing the hidden vectors of all of the tokens. Given a sentence x, we use the previously trained TrigMatcher to compute the mean \hat{g}_t of all the trigger vectors associated with this sentence. Following the conventional attention method (Luong et al., 2015), we incorporate the mean trigger vector as the query, creating a sequence of attention-based token representations H':

\vec{\alpha} = \mathrm{SoftMax}\left(v^\top \tanh\left(U_1 H^\top + U_2 \hat{g}_t^\top\right)\right)^\top, \qquad H' = \vec{\alpha}\, H

U_1, U_2, and v are trainable parameters for computing the trigger-enhanced attention scores for each token.
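A minimal PyTorch-style sketch of this first training stage — self-attentive pooling of sentence and trigger tokens into g_s and g_t, the trigger-type classification loss, and the contrastive soft-matching loss — is given below. It is only an illustration, not the authors' released implementation: the class, dimension, and hyper-parameter names are made up, a single attention hop stands in for W_2, and the contrastive term is written in the standard Hadsell et al. (2006) form (matched pairs pulled together, mismatched pairs pushed at least a margin m apart).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage1Loss(nn.Module):
    """Illustrative sketch of the TrigEncoder/TrigMatcher losses (Section 3.1).
    Names and hyper-parameters are invented for the example."""

    def __init__(self, hidden_dim=200, attn_dim=100, num_types=4, margin=1.0, lam=1.0):
        super().__init__()
        self.W1 = nn.Linear(hidden_dim, attn_dim, bias=False)  # W_1
        self.W2 = nn.Linear(attn_dim, 1, bias=False)           # W_2, single attention hop
        self.type_clf = nn.Linear(hidden_dim, num_types)       # predicts type(e) from g_t
        self.margin, self.lam = margin, lam

    def attend(self, states):
        """Self-attentive pooling: (seq_len, hidden_dim) -> (hidden_dim,)."""
        scores = self.W2(torch.tanh(self.W1(states))).squeeze(-1)
        return F.softmax(scores, dim=-1) @ states

    def forward(self, H, Z, entity_type, matched):
        # H: BLSTM states of all sentence tokens; Z: states of the trigger tokens
        g_s = self.attend(H)   # sentence vector g_s
        g_t = self.attend(Z)   # trigger vector g_t
        # trigger classification loss L_TC
        l_tc = F.cross_entropy(self.type_clf(g_t).unsqueeze(0), entity_type.view(1))
        # contrastive soft-matching loss: pull matched pairs together,
        # push mismatched pairs at least `margin` apart (Hadsell et al., 2006)
        d = torch.norm(g_s - g_t, p=2)
        l_sm = (matched * 0.5 * d ** 2
                + (1.0 - matched) * 0.5 * torch.clamp(self.margin - d, min=0.0) ** 2)
        return l_tc + self.lam * l_sm

# usage sketch: H and Z come from a shared BLSTM over the sentence / trigger tokens
loss_fn = Stage1Loss()
H = torch.randn(17, 200)          # 17 tokens in the Figure 1 sentence
Z = H[[1, 4, 5]]                  # hidden states of "had", "lunch", "at"
loss = loss_fn(H, Z, entity_type=torch.tensor(0), matched=1.0)
```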
Finally, we concatenate the original token representation H with the trigger-enhanced one H′ as the input ([H; H′]) to the final CRF tagger. Note that in this stage, our learning objective is the same as conventional NER, which is to correctly predict the tag for each token. 3.3 Inference on Unlabeled Sentences When inferencing tags on unlabeled sentences, we do not know the sentence’s triggers. Instead, we use the TrigMatcher to compute the similarities between the self-attended sentence representations and the trigger representations, using the most suitable triggers as additional inputs to the SeqTagger. Specifically, we have a trigger dictionary from our training data, T = {t|(·, ·, t) ∈ DT }. Recall that we have learned a trigger vector for each of them, and we can load these trigger vectors as a look-up table in memory. For each unlabeled sentence x, we first compute its self-attended vector gs as we do when training the TrigMatcher. Using L2-norm distances to compute the contrastive loss, we efficiently retrieve the most similar triggers in the shared embedding space of the sentence and trigger vectors. Then, we calculate ˆgt, the mean of the top k nearest semantically matched triggers, as this serves a proxy to triggers mentioned for the entity type in the labeled data. We then use it as the attention query for SeqTagger, similarly in Sec. 3.2. 4 Experiments In this section, we first discuss how to collect entity triggers, and empirically study the dataefficiency of our proposed framework. CONLL 03 PER ORG MISC LOC Total # of Entities 1,608 958 787 1,781 5,134 # of Triggers 3,445 1,970 2,057 3,456 10,938 Avg. # of Trig. / Ent. 2.14 2.05 2.61 1.94 2.13 Avg. Trig. Length 1.41 1.46 1.4 1.44 1.43 BC5CDR DISEASE CHEMICAL Total # of Entities 906 1,085 1,991 # of Triggers 2,130 1,640 3,770 Avg. # of Trig. / Ent. 2.35 1.51 1.89 Avg. Trig. Length 2.00 1.99 2.00 Table 1: Statistics of the crowd-sourced triggers. 4.1 Annotating Entity Triggers We use a general domain dataset CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) and a bio-medical domain dataset BC5CDR (Li et al., 2016). Both datasets are wellstudied and popular in evaluating the performance of neural named entity recognition models such as BLSTM-CRF (Ma and Hovy, 2016). In order to collect the entity triggers from human annotators, we use Amazon SageMaker Ground Truth3 to crowd-source entity triggers. More recently, Lee et al. (2020) developed an annotation framework, named LEAN-LIFE, which supports our proposed trigger annotating. Specifically, we sample 20% of each training set as our inputs, and then reform them (Section 2). Annotators are asked to annotate a group of words that would be helpful in typing and/or detecting the occurrence of a particular entity in the sentence. We masked the entity tokens with their types so that human annotators are more focused on the nonentity words in the sentence when considering the triggers. We consolidate multiple triggers for each entity by taking the intersection of the three annotators’ results. Statistics of the final curated triggers are summarized in Table 1. 4.2 Base model We require a base model to compare with our proposed TMN model in order to validate whether the TMN model effectively uses triggers to improve model performance in a limited label setting. We choose the CNN-BLSTM-CRF (Ma and Hovy, 2016) as our base model for its wide usage in research of neural NER models and applications. 
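As a rough sketch of the inference-time retrieval described in Section 3.3 above — computing the query as the mean of the k nearest trigger vectors in the shared embedding space — one could write the following; the function name, tensor shapes, and choice of k are illustrative assumptions, with the trigger vectors from training assumed to be precomputed into a single lookup tensor.

```python
import torch

def retrieve_trigger_query(g_s, trigger_table, k=5):
    """Sketch of the Section 3.3 inference step: given the self-attended sentence
    vector g_s and a (num_triggers, hidden_dim) table of trigger vectors learned
    in stage 1, return the mean of the k nearest trigger vectors (L2 distance),
    which serves as the attention query (the mean trigger vector) for SeqTagger."""
    dists = torch.cdist(g_s.unsqueeze(0), trigger_table).squeeze(0)  # (num_triggers,)
    nearest = torch.topk(dists, k=k, largest=False).indices
    return trigger_table[nearest].mean(dim=0)

# usage sketch (dimensions are made up)
trigger_table = torch.randn(10_000, 200)   # precomputed trigger vectors from D_T
g_s = torch.randn(200)                     # self-attended vector of an unlabeled sentence
g_t_hat = retrieve_trigger_query(g_s, trigger_table, k=5)
```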
Our TMNs are implemented within the same codebase and use the same external word 3An advanced version of Amazon Mechanical Turk. https://aws.amazon.com/sagemaker/ 8507 CONLL 2003 BLSTM-CRF TMN TMN + S.T. sent. F1 trig. F1 F1 5% 69.04 3% 75.33 77.68 10% 76.83 5% 80.2 81.57 20% 81.3 7% 82.02 82.43 30% 83.23 10% 83.53 83.53 40% 84.18 13% 84.22 84.33 50% 84.27 15% 85.03 85.38 60% 85.24 17% 85.36 85.52 70% 86.08 20% 86.01 86.5 Table 2: Labor-efficiency study on BLSTM-CRF and TMN. “sent.” means the percentage of the sentences (labeled only with entity tags) we use for BLSTM-CRF, while “trig.” denotes the percentage of the sentences (labeled with both entity tags and trigger tags) we use for TMN. ‘S.T.’ stands for self-training. vectors from GloVE (Pennington et al., 2014). The hyper-parameters of the CNNs, BLSTMs, and CRFs are also the same. This ensures a fair comparison between a typical non-trigger NER model and our trigger-enhanced framework. 4.3 Results and analysis Labeled data efficiency. We first seek to study the cost-effectiveness of using triggers as an additional source of supervision. Accordingly, we explore the performance of our model and the baseline for different fractions of the training data. The results on the two datasets are shown in Table 2. The full results are shown in Table 3. We can see that by using only 20% of the triggerannotated data, TMN model delivers comparable performance as the baseline model using 50-70% traditional training data. The drastic improvement in the model performance obtained using triggers thus justifies the slightly additional cost incurred in annotating triggers. Self-training with triggers. We also do a preliminary investigation of adopting selftraining (Rosenberg et al., 2005) with triggers. We make inferences on unlabeled data and take the predictions with high confidences as the weak training examples for continually training the model. The confidence is computed following the MNLP metric (Shen et al., 2017), and we take top 20% every epoch. With the self-training method, we further improve the TMN model’s F-1 scores by about 0.5∼1.0%. Annotation time vs. performance. Although it is hard to accurately study the time cost on the crowd-sourcing platform we use, based on our ofFigure 3: The cost-effectiveness study. fline simulation we argue that annotating both triggers and entities are about 1.5 times (“BLSTMCRF (x1.5)”) longer than only annotating entities. our offline simulation. In Figure 3, The x-axis for BLSTM-CRF means the number of sentences annotated with only entities, while for TMN means the number of sentences tagged with both entities and triggers. In order to reflect human annotators spending 1.5 to 2 times as long annotating triggers and entities as they spend annotating only entities, we stretch the x-axis for BLSTM-CRF. We can clearly see that the proposed TMN outperforms the BLSTM-CRF model by a large margin. Even if we consider the extreme case that tagging triggers requires twice the human effort (“BLSTMCRF (x2)”), the TMN is still significantly more labor-efficient in terms of F1 scores. 5 Conclusion We introduce “entity trigger” as a complementary annotation. We crowdsourced triggers on two mainstream datasets and will release them to the community, and proposed a novel framework TMN which can generalize to unseen sentences easily for tagging named entities. 
Acknowledgements This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, NSF SMA 18-29268, and Snap research gift. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. 8508 References Yixin Cao, Zikun Hu, Tat-seng Chua, Zhiyuan Liu, and Heng Ji. 2019. Low-resource name tagging learned with weakly labeled data. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 261–270, Hong Kong, China. Association for Computational Linguistics. R. Hadsell, S. Chopra, and Y. LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, pages 1735–1742. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884– 1895, Melbourne, Australia. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Ouyu Lan, Xiao Huang, Bill Yuchen Lin, He Jiang, Liyuan Liu, and Xiang Ren. 2020. Learning to contextually aggregate multi-source supervision for sequence labeling. In Proceedings of Association for Computational Linguistics. (to appear). Dong-Ho Lee, Rahul Khanna, Bill Yuchen Lin, Jamin Chen, Seyeon Lee, Qinyuan Ye, Elizabeth Boschee, Leonardo Neves, and Xiang Ren. 2020. Leanlife: A label-efficient annotation framework towards learning from explanation. In Proceedings of Association for Computational Linguistics. (to appear). Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database : the journal of biological databases and curation, 2016. Shen Li, Hengru Xu, and Zhengdong Lu. 2018. Generalize symbolic knowledge with neural rule engine. ArXiv, abs/1808.10326. Bill Y. Lin, Frank Xu, Zhiyi Luo, and Kenny Zhu. 2017a. Multi-channel BiLSTM-CRF model for emerging named entity recognition in social media. In Proceedings of the 3rd Workshop on Noisy Usergenerated Text, pages 160–165, Copenhagen, Denmark. Association for Computational Linguistics. Bill Yuchen Lin, Dong-Ho Lee, Frank F. Xu, Ouyu Lan, and Xiang Ren. 2019. AlpacaTag: An active learning-based crowd annotation framework for sequence tagging. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 58–63, Florence, Italy. Association for Computational Linguistics. Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2012–2022, Brussels, Belgium. Association for Computational Linguistics. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017b. A structured self-attentive sentence embedding. In International Conference on Learning Representations. Tianyu Liu, Jin-Ge Yao, and Chin-Yew Lin. 2019. Towards improving neural named entity recognition with gazetteers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5301–5307, Florence, Italy. Association for Computational Linguistics. David Lowell, Zachary C. Lipton, and Byron C. Wallace. 2019. Practical obstacles to deploying active learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 21–30, Hong Kong, China. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. 8509 Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. 2005. Semi-supervised self-training of object detection models. 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION’05) - Volume 1, 1:29–36. Esteban Safranchik, Shiying Luo, and Stephen H. Bach. 2020. Weakly supervised sequence tagging from noisy rules. In AAAI Conference on Artificial Intelligence (AAAI). Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named entity tagger using domain-specific dictionary. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2054–2064, Brussels, Belgium. Association for Computational Linguistics. Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 252–256, Vancouver, Canada. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Ziqi Wang*, Yujia Qin*, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2020. Learning from explanations with neural execution tree. In International Conference on Learning Representations. 
Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly supervised NER with partial annotation learning and reinforcement learning. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2159–2169, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, and Xiang Ren. 2020. Nero: A neural rule grounding framework for label-efficient relation extraction. In Proceedings of The Web Conference 2020, WWW ’20, page 2166–2176, New York, NY, USA. Association for Computing Machinery. 8510 A Interpretibility Figure 4 shows two examples illustrating that the trigger attention scores help the TMN model recognize entities. The training data has ‘per day’ as a trigger phrase for chemical-type entities, and this trigger matches the phrase ‘once daily’ in an unseen sentence during the inference phase of TrigMatcher. Similarly, in CoNLL03 the training data trigger phrase ‘said it’ matches with the phrase ‘was quoted as saying’ in an unlabeled sentence. These results not only support our argument that trigger-enhanced models such as TMN can effectively learn, but they also demonstrate that trigger-enhanced models can provide reasonable interpretation, something that lacks in other neural NER models. B Related Work Towards low-resource learning for NER, recent works have mainly focused on dictionary-based distantly supervision (Shang et al., 2018; Yang et al., 2018; Liu et al., 2019). These approaches create an external large dictionary of entities, and then regard hard-matched sentences as additional, noisy-labeled data for learning a NER model. Although these approaches largely reduce human efforts in annotating, the quality of matched sentences is highly dependent on the coverage of the dictionary and the quality of the corpus. The learned models tend to have a bias towards entities with similar surface forms as the ones in dictionary. Without further tuning under better supervision, these models have low recall (Cao et al., 2019). Linking rules (Safranchik et al., 2020) focuses on the votes on whether adjacent elements in the sequence belong to the same class. Unlike these works aiming to get rid of training data or human annotations, our work focuses on how to more cost-effectively utilize human efforts. Another line of research which also aims to use human efforts more cost-effectively is active learning (Shen et al., 2017; Lin et al., 2019). This approach focuses on instance sampling and the human annotation UI, asking workers to annotate the most useful instances first. However, a recent study (Lowell et al., 2019) argues that actively annotated data barely helps when training new models. Transfer learning approaches (Lin and Lu, 2018) and aggregating multi-source supervision (Lan et al., 2020) are also studied for using less expensive supervision for NER, while these methods usually lack clear rationales to advise annotation process unlike the trigger annotations. Inspired by recent advances in learning sentence classification tasks (e.g., relation extraction and sentiment classification) with explanations or human-written rules (Li et al., 2018; Hancock et al., 2018; Wang* et al., 2020; Zhou et al., 2020), we propose the concept of an “entity trigger” for the task of named entity recognition. 
These prior works primarily focused on sentence classification, in which the rules (parsed from natural language explanations) are usually continuous token sequences and there is a single label for each input sentence. The unique challenge in NER is that we have to deal with rules which are discontinuous token sequences and there may be multiple rules applied at the same time for an input instance. We address this problem in TMN by jointly learning trigger representations and creating a soft matching module that works in the inference time. We argue that either dictionary-based distant supervision or active learning can be used in the context of trigger-enhanced NER learning via our framework. For example, one could create a dictionary using a high-quality corpus and then apply active learning by asking human annotators to annotate the triggers chosen by an active sampling algorithm designed for TMN. We believe our work sheds light on future research for more costeffectively using human to learn NER models. C Future Directions We believe future directions with TriggerNER includes: 1) developing models for automatically extracting novel triggers, 2) transferring existing entity triggers to low-resource languages, and 3) improving trigger modeling with better structured inductive bias (e.g., OpenIE). 8511 CONLL 2003 BLSTM-CRF TMN TMN + SELF-TRAINING sent. Precision Recall F1 trig. Precision Recall F1 Precision Recall F1 5% 70.85 67.32 69.04 3% 76.36 74.33 75.33 80.36 75.18 77.68 10% 76.57 77.09 76.83 5% 81.28 79.16 80.2 81.96 81.18 81.57 20% 82.17 80.35 81.3 7% 82.93 81.13 82.02 82.92 81.94 82.43 30% 83.71 82.76 83.23 10% 84.47 82.61 83.53 84.47 82.61 83.53 40% 85.31 83.1 84.18 13% 84.76 83.69 84.22 84.64 84.01 84.33 50% 85.07 83.49 84.27 15% 85.61 84.45 85.03 86.53 84.26 85.38 60% 85.58 84.54 85.24 17% 85.25 85.46 85.36 86.42 84.63 85.52 70% 86.87 85.3 86.08 20% 86.04 85.98 86.01 87.09 85.91 86.5 BC5CDR BLSTM-CRF TMN TMN + SELF-TRAINING sent. Precision Recall F1 trig. Precision Recall F1 Precision Recall F1 5% 63.37 43.23 51.39 3% 66.47 57.11 61.44 65.23 59.18 62.06 10% 68.83 60.37 64.32 5% 69.17 73.31 66.11 68.02 66.76 67.38 20% 79.09 62.66 69.92 7% 64.81 69.82 67.22 69.87 66.03 67.9 30% 80.13 65.3 71.87 10% 71.89 69.57 70.71 69.75 72.75 71.22 40% 82.05 65.5 72.71 13% 73.36 70.44 71.87 75.11 69.31 72.1 50% 82.56 66.58 73.71 15% 70.91 72.89 71.89 71.23 73.31 72.26 60% 81.73 70.74 75.84 17% 75.67 70.6 73.05 77.47 70.47 73.97 70% 81.16 75.29 76.12 20% 77.47 70.47 73.97 75.23 73.83 74.52 Table 3: Labor-efficiency study on BLSTM-CRF and TMN. “sent.” means the percentage of the sentences (labeled only with entity tags) we use for BLSTM-CRF, while “trig.” denotes the percentage of the sentences (labeled with both entity tags and trigger tags) we use for TMN. apomorphine induced ( 0.0 mg / kg s.c. once daily ) aggressive behavior of adult male and female Wistar rats Trigger : 'per day' - Entity type : Chemical " I will have to have a good look at giving someone else a go , " Newcombe was quoted as saying in Sydney 's Daily Telegraph . Trigger : 'said it' - Entity type : PER Figure 4: Two case studies of trigger attention during inference. The darker cells have higher attention weights.
2020
752
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8512–8525 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8512 Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation Arya D. McCarthy♠♦and Xian Li♦and Jiatao Gu♦and Ning Dong♦ ♠Johns Hopkins University ♦Facebook [email protected], {xianl,jgu,dnn}@fb.com Abstract This paper proposes a simple and effective approach to address the problem of posterior collapse in conditional variational autoencoders (CVAEs). It thus improves performance of machine translation models that use noisy or monolingual data, as well as in conventional settings. Extending Transformer and conditional VAEs, our proposed latent variable model measurably prevents posterior collapse by (1) using a modified evidence lower bound (ELBO) objective which promotes mutual information between the latent variable and the target, and (2) guiding the latent variable with an auxiliary bag-of-words prediction task. As a result, the proposed model yields improved translation quality compared to existing variational NMT models on WMT Ro↔En and De↔En. With latent variables being effectively utilized, our model demonstrates improved robustness over non-latent Transformer in handling uncertainty: exploiting noisy source-side monolingual data (up to +3.2 BLEU), and training with weakly aligned web-mined parallel data (up to +4.7 BLEU). 1 Introduction The conditional variational autoencoder (CVAE; Sohn et al., 2015) is a conditional generative model for structured prediction tasks like machine translation. This model, learned by variational Bayesian methods (Kingma and Welling, 2014), can capture global signal about the target in its latent variables. Unfortunately, variational inference for text generation often yields models that ignore their latent variables (Bowman et al., 2016), a phenomenon called posterior collapse. In this paper, we introduce a new loss function for CVAEs that counteracts posterior collapse, motivated by our analysis of CVAE’s evidence lower bound objective (ELBO). Our analysis (§2) reveals that optimizing ELBO’s second term not only brings the variational posterior approximation closer to the prior, but also decreases mutual information between latent variables and observed data. Based on this insight, we modify CVAE’s ELBO in two ways (§3): (1) We explicitly add a principled mutual information term back into the training objective, and (2) we use a factorized decoder (Chen et al., 2017), which also predicts the target bagof-words as an auxiliary decoding distribution to regularize our latent variables. Our objective is effective even without Kullback–Leibler term (KL) annealing (Bowman et al., 2016), a strategy for iteratively altering ELBO over the course of training to avoid posterior collapse. In applying our method to neural machine translation (NMT; Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014), we find that we have measurably mitigated posterior collapse. The latent variables are not ignored, even in the presence of a powerful Transformer decoder. By addressing this problem, the resulting NMT model has improved robustness and performance in low-resource scenarios. 
Noisy data like those scraped from the Internet (Smith et al., 2013; Michel and Neubig, 2018) present a challenge for NMT (Khayrallah and Koehn, 2018; Ott et al., 2018a); we are measurably more able to model this extrinsic uncertainty than the (non-latent) Transformer (Vaswani et al., 2017) or existing variational NMT with the CVAE architecture (Zhang et al., 2016). Finally, we extend the model to semi-supervised learning (Cheng et al., 2016) to more effectively learn from monolingual data. In summary, our conditional text generation model overcomes posterior collapse by promoting mutual information. It can easily and successfully integrate noisy and monolingual data, and it does this without the cost of lower BLEU score than non-latent NMT in typical settings. 8513 2 Formalism and Mathematical Analysis Here we review the standard framework for neural MT. Next, we connect this to the conditional variational autoencoder, a model with latent random variables whose distributions are learned by blackbox variational Bayesian inference. Finally, we analyze the CVAE’s objective to explain why these models will ignore their latent variables (“posterior collapse”). 2.1 Neural Machine Translation Problem instances in machine translation are pairs of sequences (x ≜ [x1, . . . , xm], y ≜ [y1, . . . , yn]), where x and y represent the source and target sentences, respectively. Conventionally, a neural machine translation model is a parameterized conditional distribution whose likelihood factors in an autoregressive fashion: pθ(y | x) = n Y t=1 pθ(yt | x, y<t) . (1) The dominant translation paradigm first represents the source sentence as a sequence of contextualized vectors (using the encoder), then decodes this representation into a target hypothesis according to Equation 1. The parameters θ are learned by optimizing the log-likelihood of training pairs with stochastic gradient methods (Bottou and Cun, 2004; Kingma and Ba, 2015). Decoding is deterministic, using an efficient approximate search like beam search (Tillmann and Ney, 2003). The Transformer architecture with multi-head attention has become the state of the art for NMT (Vaswani et al., 2017). 2.2 The Conditional Variational Autoencoder Our NMT approach extends the conditional variational autoencoder (Sohn et al., 2015), which we identify as a generalization of Variational NMT (Zhang et al., 2016). It introduces a latent random variable z into the standard NMT conditional distribution from Equation 1:1,2 pθ(y | x) = Z z pθ(y | z, x) | {z } decoder · pθ(z | x) | {z } encoder dz. (2) For a given source sentence x, first a latent variable z is sampled from the encoder, then the target sen1By contrast, the hidden states of a standard sequence-tosequence model are deterministic latent variables. 2In Equation 2 we assume a continuous latent variable. For the discrete case, replace integration with summation. tence y is generated by the decoder: z ∼pθ(z | x), y ∼pθ(y | z, x).3 It is intractable to marginalize Equation 2 over z. Instead, the CVAE training objective is a variational lower bound (the ELBO) of the conditional log-likelihood. It relies on a parametric approximation of the model posterior: qφ(z | x, y). The variational family we choose for q is a neural network whose parameters φ are shared (i.e., amortized) across the dataset. The ELBO lower-bounds the log-likelihood, as can be proven with Jensen’s inequality. 
Its form is: LCVAE = Eqφ(z|x,y) [log pθ(y | x, z)] −DKL(qφ(z | x, y) ∥pθ(z | x)), (3) where DKL represents the Kullback–Leibler divergence between two distributions. We use amortized variational inference to simultaneously perform learning and approximate posterior inference, updating both θ and φ with stochastic gradient methods. Improving θ raises the lower bound, and improving φ keeps the bound tight with respect to the model conditional log-likelihood. The same argument pertains to the joint maximization interpretation of the expectation–maximization (EM) algorithm (Neal and Hinton, 1998). (Our optimization is a variational generalization of EM.) 2.3 Posterior Collapse Despite their success when applied to computer vision tasks, variational autoencoders in natural language generation suffer from posterior collapse, where the learnt latent code is ignored by a strong autoregressive decoder. This presents a challenge to conditional language generation tasks in NLP like machine translation. The phenomenon can be explained mathematically by an analysis of the ELBO objective, as well as from the perspective of a powerful decoder that can model the true distribution without needing the latent code. We consider both in this subsection. ELBO surgery Recall that the computed objective approximates the objective on the true data distribution pD, using a finite number of samples 3The sense of “encoder” in the context of variational autoencoders differs from the typical sense in neural machine translation, such that the NMT encoder is a component of both the VAE’s encoder and decoder. We can separate these by computing a second, deterministic latent variable h from x to represent the NMT encoder outputs, used by both the VAE encoder and the NMT/VAE decoder. 8514 Figure 1: Model architecture in training (with parallel data) and inference. (see, e.g., Brown et al., 1992): L = EpD(x,y) [LCVAE(φ, θ; x, y)] . (4) We can factor the KL term of Equation 3 (omitting parameter subscripts) as: EpD(x,y) [DKL(q(z | x, y) ∥p(z | x))] = H(x, y) −H(x, y | z) | {z } ≜Iqφ(z;x,y) + Eq(z) log q(z) p(z) | {z } ≜DKL(qφ(z)∥p(z)) , (5) which we prove in Appendix A, following (Hoffman and Johnson, 2016). As both the resulting mutual information and KL terms are non-negative (Cover and Thomas, 2006), the global minimum of Equation 5 is Iqφ(z; x, y) = DKL(qφ(z) ∥p(z)) = 0. Unfortunately, at this point, the consequence of the optimization is that the latent variable z is conditionally independent of the data (x, y). A powerful decoder Revisiting Equation 3, we see that the decoder is conditioned on both the stochastic latent variable z and the source text x. A sufficiently high-capacity autoregressive decoder can model the conditional density directly, ignoring the latent variable and reducing inference to Equation 1. The KL term can then be reduced to its minimum (0) by equating the posterior to the prior. To prevent this, some work weakens the decoder in various ways. This is a challenge, because NMT requires a powerful decoder such as Transformer with direct attention to the encoder. 3 An Information-Infused Objective We modify our training objective to explicitly retain mutual information between the latent variable z and the observation (x, y). Further, we use an auxiliary decoder that only uses the latent variable, not the encoder states. We combine it with the existing decoder as a mixture of softmaxes (Yang et al., 2018a). The model is trained with amortized variational inference. 
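As a quick numerical sanity check of the decomposition used in the ELBO surgery above (Equation 5), the following toy example uses discrete x and z so that every term can be computed exactly. For simplicity it assumes a prior p(z) that does not depend on x, as in the unconditional setting of Hoffman and Johnson (2016); the toy distributions and variable names are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_z = 6, 4                                       # toy discrete inputs x and latent codes z

p_x = rng.dirichlet(np.ones(n_x))                     # data distribution p_D(x)
q_z_given_x = rng.dirichlet(np.ones(n_z), size=n_x)   # approximate posterior q(z|x), one row per x
p_z = rng.dirichlet(np.ones(n_z))                     # prior p(z), assumed independent of x

# Left-hand side: E_{p_D(x)} [ KL(q(z|x) || p(z)) ]
lhs = np.sum(p_x[:, None] * q_z_given_x * np.log(q_z_given_x / p_z[None, :]))

# Right-hand side: I_q(z; x) + KL(q(z) || p(z)), with q(z) the aggregated posterior
q_z = p_x @ q_z_given_x                               # q(z) = sum_x p_D(x) q(z|x)
mutual_info = np.sum(p_x[:, None] * q_z_given_x * np.log(q_z_given_x / q_z[None, :]))
kl_aggregated = np.sum(q_z * np.log(q_z / p_z))

# The two sides agree to numerical precision, so driving the expected KL to zero
# necessarily drives the mutual information term to zero as well.
assert np.isclose(lhs, mutual_info + kl_aggregated)
```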
When source-language monolingual text is available, we augment our modified CVAE objective with a similarly modified (non-conditional) VAE objective. The training and inference strategy is summarized in Figure 1. 3.1 Adding Iqφ(z; x, y) to ELBO To combat the optimization dilemma from Equation 5 (namely, that the objective discourages mutual information between the latent variable and the data), we explicitly add the mutual information term to the CVAE’s ELBO and obtain a new training objective: LMICVAE = LCVAE + Iqφ(z; x, y) = Eqφ(z|x,y) log p(y | x, z) −DKL(qφ(z) ∥p(z)) (6) The new training objective LMICVAE aims to match the aggregated approximate posterior distribution of the latent variable qφ(z) (Hoffman and Johnson, 2016) to the aggregated-posterior prior distribution pθ(z).4 4It can be seen as extending InfoVAE (Zhao et al., 2019) to conditional generative models, where we have overcome 8515 3.2 Guiding z to Encode Global Information Several existing approaches weaken the decoder: limiting its capacity to encourage latent variables to be utilized (Bowman et al., 2016; Gulrajani et al., 2017). Here we propose a different approach: explicitly guiding the information encoded in z without reducing the decoder’s capacity. The decision to weaken the decoder can be understood in the context of Bits-Back Coding theory (Chen et al., 2017), which suggests that at optimality the decoder will model whatever it can locally, and only the residual will be encoded in the latent variable z. A consequence is that explicit information placement can give more powerful latent representations. Inspired by this Bits-Back perspective, we add a global auxiliary loss for z to encode information which cannot be modelled locally by the autoregressive decoder Q t pθ(yt | x, y<t, z). We use bag-of-words (BoW) prediction as the auxiliary loss. It encodes global information while having a non-autoregressive factorization: Q t pψ(yt | z). (We choose not to condition it on the source sentence x.) Further, it requires no additional annotated data. The auxiliary decoder complements the autoregressive decoder (which is locally factorized), interpolating predictions at the softmax layer, i.e. p(yt | x, y<t, z) is a mixture of softmaxes (Yang et al., 2018b): p(yt | ·) = (1 −λ) · pθ(yt | x, y<t, z) + λ · pψ(yt | z), (7) with mixing parameter λ. (We use λ = 0.1 in this paper.) Thus, the bag-of-words objective regularizes the log-likelihood bound. 4 Implementing Latent Variable NMT 4.1 Architecture Our model uses discrete latent variables. These are used to select a latent embedding, which is concatenated to the decoder state. Inference Network We use discrete latent variables with reparameterization via GumbelSoftmax (Jang et al., 2017; Maddison et al., 2017) to allow backpropagation through discrete sampling. Unlike the multivariate Gaussian distribution commonly used in VAE and CVAE, our parameterization can explicitly account for multiple the mismatch between the (joint) data distribution pD(x, y) and the (conditional) likelihood objective pθ(y | x). modes in the data. (See Rezende and Mohamed (2015) for a perspective on the value of multimodal distributions over latent variables.) To make our model more general, we introduce a set of discrete latent variables z = {z1, . . . , zK} which are independently sampled from their own inference networks Φk. 
Specifically, each Φk computes scaled dot product attention with encoder outputs h ∈Rd using latent code embedding ek: Ck = Attention  ekW k, hW h, hW h = Softmax ekW k(hW h)⊤ √ d  hW h. (8) We can now sample zk by the Gumbel-Softmax reparameterization trick (Maddison et al., 2017; Jang et al., 2017): zk ∼GumbelSoftmax(Ck) (9) = Softmax Ck + g τ  , (10) where g = −log(−log(u)), u ∼Uniform is the Gumbel noise and τ is a fixed temperature. (We use τ = 1 in this paper.) At inference time, we use a discrete version by directly sampling from the latent variable distribution. BoW Auxiliary Decoder Given an inferred sample z ∼Φk(h), the BoW decoder predicts all tokens at once without considering their order. We compute the cross-entropy loss for the predicted tokens over the output vocabulary space V : LBoW = |V | X i=1 pi log ˆpψ(yi | z), |V | X i=1 pi = 1. (11) We take the (unnormalized) empirical distribution ˜pi to be a token’s frequency within a sentence normalized by its total frequency within a minibatch, mitigating the effect of frequent (stop) words. This is then normalized over the sentence to sum to 1, giving values pi. The model distribution ˆpψ is computed by conditioning on the latent code only, without direct attention to encoder outputs. We use scaled dot-product attention between the latent embeddings and the target embeddings (each of dimensionality d, represented as a matrix EV ): pψ(yi | z) = Softmax e(z)E⊺ V √ d  i . (12) 8516 Algorithm 1 Training Strategy 1: Φenc, Φk=1,...,K, Θdec, ΘBoW ←init. 2: while Θenc, Θdec, ΘBoW , Φk=1,...,K have not converged do 3: Sample (x, y) from Dbitext 4: Compute LMICVAE with Equation 6 5: Train Φenc, Θdec, Φk=1,...,K with LMICVAE 6: Compute LBoW with Equation 12 7: Train Φenc, ΘBoW , Φk=1,...,K with LBoW 8: if self training then 9: Sample x from Dmono 10: Compute LMono with Equation 13 11: Train Φenc, Φk=1,...,K with LMono 12: end if 13: end while 4.2 Training For training with parallel data, we optimize LMICVAE. We draw samples z from the approximate posterior qφ(z | x, y) parameterized by the inference network, then feed the samples to both the autoregressive and auxiliary (BoW) decoders to get a Monte Carlo estimate of the gradient. Estimating aggregated distributions We estimate pθ(z) and qφ(z) over each minibatch, following Zhao et al. (2018). Semi-supervised learning We apply the same modification to VAE’s ELBO, following Zhao et al. (2019). For jointly training with source-side monolingual data, we add Iqφ(z; x) to the ELBO, and for target-side monolingual data, we add Iqφ(z; y).5 The joint objective sums the modified CVAE and VAE objectives: LMono = log p(x | z) + DKL 1 L L X ℓ=1 qφ  z(ℓ) x(ℓ) 1 L L X ℓ=1 p  z(ℓ)! (13) LJoint = LMICVAE + LMono, (14) where L is the number of monolingual examples. Algorithm 1 describes the overall training strategy. 5Learning to copy the target text has proven useful for low-resource NMT (Currey et al., 2017). 5 Experiments and Results Here we present empirical results on the Transformer architecture. We evaluate our model on four standard datasets and compare against three baselines. We use four measures to quantify posterior collapse, then examine translation quality (BLEU score) in standard fully supervised settings, a semi-supervised setting, and a fully supervised setting with noisy source text. Hyperparameters, regularization choices, and subword vocabulary information can be found in §5.3. 
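Returning to the inference network of Section 4.1, a minimal PyTorch-style sketch of one latent head Φk — scaled dot-product attention between a latent code embedding and the encoder outputs (Equation 8), followed by Gumbel-Softmax sampling (Equations 9–10) — might look as follows. The class name, the use of F.gumbel_softmax, and in particular the reduction of the attention output C_k to per-category logits are assumptions made for the sketch, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteLatentHead(nn.Module):
    """Sketch of one inference network Phi_k: attend over encoder outputs with a
    latent code embedding, then sample a relaxed one-hot z_k via Gumbel-Softmax."""

    def __init__(self, d_model=512, latent_dim=16, tau=1.0):
        super().__init__()
        self.e_k = nn.Parameter(torch.randn(latent_dim, d_model))  # latent code embedding e_k
        self.W_k = nn.Linear(d_model, d_model, bias=False)         # W^k
        self.W_h = nn.Linear(d_model, d_model, bias=False)         # W^h
        self.tau = tau

    def forward(self, h):
        # h: (src_len, d_model) encoder outputs for one source sentence
        q = self.W_k(self.e_k)                                     # (latent_dim, d_model)
        k = v = self.W_h(h)                                        # (src_len, d_model)
        scores = q @ k.transpose(0, 1) / math.sqrt(h.size(-1))     # Eq. 8 attention logits
        C_k = F.softmax(scores, dim=-1) @ v                        # (latent_dim, d_model)
        logits = C_k.sum(dim=-1)           # collapse to per-category logits (an assumption)
        # Eq. 9-10: relaxed sampling during training; at inference one could
        # instead draw a hard (discrete) sample from the categorical distribution.
        z_k = F.gumbel_softmax(logits, tau=self.tau, hard=False)
        return z_k                         # (latent_dim,) relaxed one-hot over categories
```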
The results show that we have effectively addressed posterior collapse: latent variables are no longer ignored despite the presence of a powerful decoder. As a result, we outperform both the standard Transformer and the Transformer-based variational NMT approach, when using noisy data or source-language monolingual data. 5.1 Datasets First, we evaluate our models on a standard highresource and low-resource benchmark dataset from WMT. Second, we focus on situations where noisy or monolingual data is available. We note that lowresource scenarios and noisy data are two representative challenges in MT (Lopez and Post, 2013). WMT14 German–English We use data from the WMT14 news translation shared task, which has 3.9M sentence pairs for training with the same BPE tokenization as in Gu et al. (2018). WMT16 Romanian–English We use data from the WMT16 news translation shared task. We use the same BPE-preprocessed (Sennrich et al., 2016b) train, dev and test splits as in Gu et al. (2018) with 608k sentence pairs for training. FLORES Sinhala–English For this low-resource benchmark, we use the same preprocessed data as in Guzm´an et al. (2019). There are 646k sentence pairs. MT for Noisy Text (MTNT) French–English This dataset pairs web-scraped text from Reddit with professional translations. We use 30k subword units built jointly from source and target sentences and only keep sentences with less than 100 tokens. For training, there are 34,380 sentence pairs for English–French and 17,616 sentence pairs for French–English (Michel and Neubig, 2018). We also used 8517 18,676 monolingual sentences per language from the same data source (Reddit). 5.2 Baselines We compare our model to three baselines: Non-latent This is a standard Transformer model without latent variables. VNMT A CVAE model with Gaussian distribution as proposed in Variational NMT by Zhang et al. (2016), which we reimplement using Transformer. (Zhang et al. (2016) use a GRUbased recurrent model.) DCVAE A CVAE model with the same discrete latent variable parameterization as ours but without the new objective (i.e., the mutual information term and bag-of-words regularizer). 5.3 Implementation details All of our models build on Transformer. For WMT14 De–En and WMT16 Ro–En, we use the base configuration (Vaswani et al., 2017): 6 blocks, with 512-dimensional embedding, 2048dimensional feed-forward network, and 8 attention heads. For FLoRes (low-resource) and MTNT (low-resource and noisy), we use a smaller Transformer: 4 layers, 256-dimensional embedding, 1024-dimensional inner layers, and 4 attention heads. Input and output embeddings are shared between the inference network and decoder. We use T = 4 categorical latent variables of dimension 16 (found by grid search on the dev set). Auxiliary bag-of-words predictions are combined with the decoder prediction with λ = 0.1. We optimize using Adam (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98, ϵ = 1E-8, weight decay of 0.001, and the warmup and learning rate schedule of Ott et al. (2018b). All models are trained on 8 NVIDIA V100 GPUs with 32K tokens per mini-batch. We train WMT14 De–En with 200k updates and others with 100k updates. We do not use early stopping. We employ joint BPE vocabularies. The sizes are 32k for En–De and En–Ro; 30k for Fr–En; and 3k for Si–En. We also use a word dropout rate of 0.4 during training of all models, which is complementary to our approach. 
We found the default initialization in the FAIRSEQ NMT toolkit was effective; we did not need to explore several initializations to avoid degenerate models.

5.4 Preventing Posterior Collapse

We compare our model to a standard DCVAE lacking the new objective. We report four metrics of posterior collapse on the validation set of WMT Ro–En:

1. Kullback–Leibler divergence (KL).
2. Mutual information between the latent variable and the source: Iqφ(z; x).
3. Mutual information between the latent variable and the target: Iqφ(z; y).
4. Negative conditional log-likelihood (NLL) per token.

Model          DKL     Iqφ(z, x)   Iqφ(z, y)   NLL
DCVAE + KLA    0.001   0.001       4.2E-6      3.17
Our model      0.17    0.18        0.31        3.16

Table 1: Our model mitigates posterior collapse. The KL value refers to DKL(qφ(z | x, y) ∥ pθ(z | x)) for DCVAE and DKL(qφ(z | y) ∥ pθ(z | x)) for our model.

Table 1 shows that when using the standard DCVAE ELBO, even with the common practice of KL annealing (KLA), both the KL loss and the mutual information settle to almost 0, which is consistent with the analysis in Equation 5. We also plot the progression of DKL, Iqφ(z; x), and Iqφ(z; y) during training in Figure 2. The posterior collapse of the baseline model is apparent: both the DKL and mutual information terms drop to 0 at the beginning of training as a result of the ELBO's design. In contrast, our model, without using any annealing schedule, effectively increases mutual information and prevents the KL loss from settling to a degenerate solution early on.

[Figure 2: line plots of KL, I(z, x), and I(z, y) against training updates (1k) for rows (A) Our model, (B) modified ELBO only, and (C) BoW only.]
Figure 2: Row (A): comparison of KL and mutual information between the baseline (DCVAE, solid triangle, orange color) and our model (solid circle, teal color). Rows (B) and (C): ablation study on the relative contribution from MICVAE and BoW. All metrics are computed on the WMT16 Ro–En validation set over the course of 140k training updates.

5.5 Translation Quality

We report corpus-level BLEU (Papineni et al., 2002)6 on the test sets, where the translations are generated by sampling each zk with soft assignment (vs. argmax).

6We use detokenized SacreBLEU (Post, 2018).

Supervised Learning on Parallel Data First, we evaluate our model's performance when trained with parallel data on standard WMT datasets. Table 2 shows that our model consistently outperforms both VNMT and DCVAE models—which require ad-hoc KL annealing—while remaining on par with a strong Transformer baseline.

                WMT16             WMT14
Model           Ro–En    En–Ro    De–En    En–De
VNMT            34.20    34.27    30.35    25.84
DCVAE           34.16    34.51    29.76    25.46
Our model       34.76    34.97    31.39    26.42
Non-latent      34.73    34.54    30.89    26.36

Table 2: BLEU score on WMT benchmarks. Best result on each dataset is in bold. Our model provides minor gains (≤ 0.5 points) over the standard Transformer, not degrading like VNMT and DCVAE. Alongside improvements in semi-supervised or noisy settings, this suggests that there is no BLEU compromise in choosing this model.
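As a concrete illustration of how the collapse diagnostics of §5.4 can be computed, the mutual information between a categorical latent variable and the data can be estimated from a minibatch via the aggregated posterior. The following sketch is our reading of that estimator, not necessarily the authors' exact implementation.

```python
import torch

def batch_mutual_information(q_zx, eps=1e-12):
    """Minibatch estimate of I_q(z; x) for one categorical latent variable.

    q_zx: (B, K) tensor of posteriors q(z | x_b) for each example in the batch.
    Uses I_q(z; x) = E_x[ KL(q(z | x) || q(z)) ], with q(z) approximated by the
    batch-aggregated posterior.
    """
    q_z = q_zx.mean(dim=0, keepdim=True)                      # aggregated posterior q(z)
    kl_each = (q_zx * (torch.log(q_zx + eps) - torch.log(q_z + eps))).sum(dim=-1)
    return kl_each.mean()
```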
Semi-supervised with Source-side Monolingual Data Leveraging monolingual data is a common practice to improve low resource NMT. One popular approach uses target-side monolingual data through “backtranslation” as a data augmentation, but how to effectively leverage source-side monolingual data is an open challenge (Sennrich et al., Model Fr–En En–Fr Non-latent 26.7 24.8 DCVAE 26.4 26.1 + source mono 27.3 26.4 Our model 28.6 26.3 + source mono 29.8 26.7 Table 3: Translation performance (BLEU) of utilizing source-side monolingual data. Best result on each data condition (with and without monolingual data) is bold. 2016a; Zhang and Zong, 2016; Wu et al., 2019). We use the joint training objective described in Equation 14. To have a fair comparison, we also extend VNMT and DCVAE with the same joint training algorithm, i.e., the newly added monolingual data is used to train their corresponding sequence encoder and inference network with standard VAE ELBO. That is, the only difference is that our model was trained to promote mutual information Iqφ(z, x) and Iqφ(z, y). As shown in Table 3, by doing so the proposed model brings larger gains during semi-supervised learning with source-side monolingual data. 8519 1M 2M 3M 4M 5M 0 5 10 7.64 7.79 7.07 6.06 5.12 8.79 10.3 10.14 9.42 9.81 BLEU Score Standard Transformer Our Model Figure 3: BLEU when increasing the number of noisy parallel sentences (ranked by Zipporah) in training, Si– En. Robustness to Noisy Data While high-quality parallel data is scarce for low-resource language pairs, weakly aligned sentence pairs can be mined from massive unpaired data such as Paracrawl.7 We evaluate our model’s performance when augmenting the training set with increasingly noisy parallel data filtered by Zipporah (Xu and Koehn, 2017). Because VNMT and DCVAE underperform our proposal in previous experiments, we omit them from this experiment. Figure 3 shows the results in the Sinhala–English direction. Our model always outperforms standard Transformer, which struggles as more (and noisier) data is added. The gap grows from +1.2 to +4.7 BLEU. 6 Analysis Ablation Study How do the different ingredients of our proposed approach contribute to preventing posterior collapse and improving translation quality? We explore two variants of the proposed model: 1) modified ELBO only: only adding mutual information term to the training objective, while without gradients from LBoW, 2) BoW only: which is equivalent to DCVAE combined with BoW decoder. First, we perform the same collapse metrics evaluation as in Table 1. Figure 2(B) suggests that by explicitly adding mutual information term back to the training objective, both Iqφ(z; x) and Iqφ(z; y) are effectively raised, while the remaining aggregated KL term is still optimized to zero. Such behavior is consistent with the analysis revealed 7https://paracrawl.eu/ Model De–En (3.9M) Ro–En (608K) BoW and LMICVAE 31.4 34.8 BoW only 31.1 34.2 Table 4: Ablation study on translation quality (BLEU). The information-infused loss function provides additional performance over the DCVAE with a bag-ofwords decoder. in Equation 5. On the other hand, regularizing z with the BoW decoder only, shown in Figure 2(C), is very effective in preventing KL vanishing as well as increasing mutual information. When two approaches are combined, as was shown in Figure 2(A), the model retains higher mutual information for both Iqφ(z; x) and Iqφ(z; y). Next, we see whether the difference in mutual information yields different translation quality. 
We compare two models: BoW only (Figure 2(C)) and both (Figure 2(A)), on WMT14 De–En and WMT16 Ro–En test sets. Table 4 shows the difference matters more in a low-data regime. Analysis of Outputs Delving into model predictions helps us understand how our model outperforms the others. We examined erroneous 1-best predictions on the Ro–En data. We provide salient examples of phenomena we identified in Table 5. (Naturally, as the Ro–En score differences are not dramatic, the predictions are largely similar.) Several examples support the fact that our model has more fluent and accurate translations than the baseline or VNMT. VNMT often struggles by introducing disfluent words, and both VNMT and Transformer select justifiable but incorrect words. For instance, in our second example, the gender and animacy of the possessor are not specified in Romanian. Our model selects a more plausible pronoun for this context. Analysis of Latent Variables Finally, we probe whether different latent variables encode different information. We random sample 100 sentences from two test sets of distinct domains, MTNT (Reddit comments) and WMT (news) with 50 sentences each. We plot the t-SNE projection of their corresponding samples zk inferred from Φk, k = 1, 2, 3, 4 respectively. Figure 4 suggests that different latent variables learn to organize the data in different manners, but there was no clear signal that any of them exclusively specialize in encoding a domain label. We leave a thorough analysis of 8520 Source: ma intristeaza foarte tare . Reference: that really saddens me . Base: i am very saddened . VNMT: i am saddened very loudly . (Wrong sense of tare) Ours: i am very saddened . Source: cred ca executia sa este gresita . Reference: i believe his execution is wrong . Base: i believe that its execution is wrong . VNMT: i believe that its execution is wrong . Ours: i believe that his execution is wrong . Source: da , chinatown Reference: yes , chinatown Base: yes , chinatown VNMT: yes , thin . Ours: yes , chinatown Source: nu stiu cine va fipropus pentru aceasta functie . Reference: i do not know who will be proposed for this position . Base: i do not know who will be proposed for this function . VNMT: i do not know who will be proposed for this function . Ours: i do not know who will be proposed for this position . Source: recrutarea , o prioritate tot mai mare pentru companii Reference: recruitment , a growing priority for companies Base: recruitment , an increasing priority for companies VNMT: recruitment , [article missing] increasing priority for companies Ours: recruitment , a growing priority for companies Table 5: Translation examples from the baseline Transformer, VNMT, and our model. Disfluent words or absences are in red, and slightly incorrect lexical choice is in blue. Romanian diacritics have been stripped. Figure 4: t-SNE visualization of zk, k = 1, 2, 3, 4 samples from 100 sentences from two datasets with distinct domains, MTNT (orchid) and WMT news (green). their information specialization to future work. 7 Related Work Unlike most prior work in (conditional) text generation, we tackle posterior collapse without requiring an annealing schedule (Bowman et al., 2016; Sønderby et al., 2016; Kim et al., 2018), a weakened decoder (Gulrajani et al., 2017), or a restricted variational family (Razavi et al., 2019). Unlike Ma et al. (2018), who also employ bag-ofwords as an NMT objective, our BoW decoder only sees the latent variable z, not the encoder states. Conversely, unlike Weng et al. 
(2017), our generative decoder has access to both the latent variable and the encoder states; bag-of-words prediction is handled by separate parameters. VNMT (Zhang et al., 2016) applies CVAE with Gaussian priors to conditional text generation. VRNMT (Su et al., 2018) extends VNMT, modeling the translation process in greater granularity. Both needed manually designed annealing schedules to increase KL loss and avoid posterior collapse. Discrete latent variables have been applied to NMT (Kaiser et al., 2017; Gu et al., 2018; Shen et al., 2019), without variational inference or addressing posterior collapse. Approaches to stop posterior collapse include aggressively trained inference networks (He et al., 2019), skip connections (Dieng et al., 2019), and expressive priors (Tomczak and Welling, 2018; Razavi et al., 2019). Unlike our conditional approach, Shah and Barber (2018) jointly model the source and target text in a generative fashion. Their EM-based inference is more computationally expensive than our amortized variational inference. Eikema and Aziz (2019) also present a generative (joint) model relying on autoencoding; they condition the source text x on the latent variable z. Finally, Schulz et al. (2018), like us, value mutual information between the data and the latent variable. While they motivate KL annealing using mutual information, we show that the annealing is unnecessary. 8 Conclusion We have presented a conditional generative model with latent variables whose distribution is learned with variation inference, then evaluated it in machine translation. Our approach does not require an annealing schedule or a hamstrung decoder to avoid posterior collapse. Instead, by providing a new analysis of the conditional VAE objective to improve it in a principled way and incorporating an auxiliary decoding objective, we measurably prevented posterior collapse. As a result, our model has outperformed previous variational NMT models in terms of translation quality, and is comparable to non-latent Transformer on standard WMT Ro↔En and De↔En datasets. Furthermore, the proposed method has improved robustness in dealing with uncertainty in data, including exploiting source-side monolingual data as well as training with noisy parallel data. 9 Acknowledgments We thank Alexandra DeLucia, Chu-Cheng Lin, Hongyuan Mei, Kenton Murray, Guanghui Qin, and Jo˜ao Sedoc (alphabetically) for remarks on the exposition. 8521 References L´eon Bottou and Yann L. Cun. 2004. Large scale online learning. In Advances in Neural Information Processing Systems 16, pages 217–224. Curran Associates, Inc. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31– 40. Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. 
Semisupervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1965–1974, Berlin, Germany. Association for Computational Linguistics. Thomas M. Cover and Joy A. Thomas. 2006. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley, New York, NY, USA. Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156, Copenhagen, Denmark. Association for Computational Linguistics. Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. 2019. Avoiding latent variable collapse with generative skip models. In Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 2397–2405. Bryan Eikema and Wilker Aziz. 2019. Auto-encoding variational neural machine translation. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 124–141, Florence, Italy. Association for Computational Linguistics. Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018. Nonautoregressive neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Ta¨ıga, Francesco Visin, David V´azquez, and Aaron C. Courville. 2017. Pixelvae: A latent variable model for natural images. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Francisco Guzm´an, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali–English and Sinhala– English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098–6111, Hong Kong, China. Association for Computational Linguistics. Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In ICLR. Matthew D. Hoffman and Matthew J. Johnson. 2016. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, volume 1. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel–Softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. CoRR, abs/1706.05137v1. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 119–126, Sofia, Bulgaria. Association for Computational Linguistics. Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. 
In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74–83, Melbourne, Australia. Association for Computational Linguistics. Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. 2018. Semi-amortized variational autoencoders. In Proceedings of the 8522 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2678–2687, Stockholmsm¨assan, Stockholm Sweden. PMLR. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Adam Lopez and Matt Post. 2013. Beyond bitext: Five open problems in machine translation. In Twenty Years of Bitext. Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018. Bag-of-words as target for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 332– 338, Melbourne, Australia. Association for Computational Linguistics. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 543– 553, Brussels, Belgium. Association for Computational Linguistics. Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Michael I. Jordan, editor, Learning in Graphical Models, pages 355–368. Springer Netherlands, Dordrecht. Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018a. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3953–3962. PMLR. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Belgium, Brussels. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Ali Razavi, A¨aron van den Oord, Ben Poole, and Oriol Vinyals. 2019. Preventing posterior collapse with delta-vaes. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Danilo Jimenez Rezende and Shakir Mohamed. 2015. 
Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1530–1538. JMLR.org. Philip Schulz, Wilker Aziz, and Trevor Cohn. 2018. A stochastic decoder for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1243–1252, Melbourne, Australia. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Harshil Shah and David Barber. 2018. Generative neural machine translation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1346–1355. Curran Associates, Inc. Tianxiao Shen, Myle Ott, Michael Auli, and Marc’Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5719–5728. PMLR. 8523 Jason R. Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1374–1383, Sofia, Bulgaria. Association for Computational Linguistics. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3483–3491. Curran Associates, Inc. Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. 2016. Ladder variational autoencoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3738–3746. Curran Associates, Inc. Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5488–5495. AAAI Press. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Christoph Tillmann and Hermann Ney. 2003. 
Word reordering and a dynamic programming beam search algorithm for statistical machine translation. Computational Linguistics, 29(1):97–133. Jakub M. Tomczak and Max Welling. 2018. VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 911 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, volume 84 of Proceedings of Machine Learning Research, pages 1214–1223. PMLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Rongxiang Weng, Shujian Huang, Zaixiang Zheng, Xinyu Dai, and Jiajun Chen. 2017. Neural machine translation with word predictions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 136–145, Copenhagen, Denmark. Association for Computational Linguistics. Lijun Wu, Yiren Wang, Yingce Xia, Tao QIN, Jianhuang Lai, and Tie-Yan Liu. 2019. Exploiting monolingual data at scale for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4205– 4215, Hong Kong, China. Association for Computational Linguistics. Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy webcrawled parallel corpora. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2945–2950, Copenhagen, Denmark. Association for Computational Linguistics. Yilin Yang, Liang Huang, and Mingbo Ma. 2018a. Breaking the beam search curse: A study of (re)scoring methods and stopping criteria for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3054–3059, Brussels, Belgium. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2018b. Breaking the softmax bottleneck: A high-rank RNN language model. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Biao Zhang, Deyi Xiong, Jinsong Su, Qun Liu, Rongrong Ji, Hong Duan, and Min Zhang. 2016. Variational neural discourse relation recognizer. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 382– 391, Austin, Texas. Association for Computational Linguistics. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545, Austin, Texas. Association for Computational Linguistics. Shengjia Zhao, Jiaming Song, and Stefano Ermon. 2019. Infovae: Balancing learning and inference in variational autoencoders. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5885–5892. AAAI Press. 8524 Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. 
Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1098–1107, Melbourne, Australia. Association for Computational Linguistics.

A Derivation of Equation 5

To prove the decomposition of the conditional VAE's regularization term into a mutual information term and a KL divergence term, we introduce a random variable ℓ representing an index into the training data; it uniquely identifies (x^(ℓ), y^(ℓ)). This alteration is "entirely algebraic" (Hoffman and Johnson, 2016) while making our process both more compact and more interpretable.

\begin{align*}
q(\ell, z) &\triangleq q(\ell)\, q(z \mid \ell) & p(\ell, z) &\triangleq p(\ell)\, p(z \mid \ell) \\
q(z \mid \ell) &\triangleq q(z \mid x^{(\ell)}, y^{(\ell)}) & p(z \mid \ell) &\triangleq p(z) \\
q(\ell) &\triangleq \tfrac{1}{L} & p(\ell) &\triangleq \tfrac{1}{L}
\end{align*}

We define the marginals p(z) and q(z) as the aggregated posterior (Tomczak and Welling, 2018) and aggregated approximate posterior (Hoffman and Johnson, 2016). (This allows the independence assumption above.) Moving forward will require just a bit of information theory: the definitions of entropy and mutual information. For these, we direct the reader to the text of Cover and Thomas (2006). Given these definitions, the regularization term of the ELBO objective may be expressed as

\[
\mathbb{E}_\ell\!\left[ D_{\mathrm{KL}}\!\left( q(z \mid x, y) \,\|\, p(z \mid x) \right) \right]
= \sum_{\ell, z} \frac{1}{L}\, q\!\left(z \mid x^{(\ell)}, y^{(\ell)}\right) \log \frac{q\!\left(z \mid x^{(\ell)}, y^{(\ell)}\right)}{p\!\left(z \mid x^{(\ell)}\right)}.
\]

We may now multiply the numerator and denominator by 1/L and use its equivalence to p(ℓ) and q(ℓ):

\[
= \sum_{\ell, z} q(\ell, z) \log \frac{q(\ell, z)}{p(\ell, z)}.
\]

Factoring then gives us two log terms:

\[
= \sum_{\ell, z} q(\ell, z) \left[ \log \frac{q(z)}{p(z)} + \log \frac{q(\ell \mid z)}{p(\ell)} \right].
\]

We then distribute the weighted sum:

\[
= D_{\mathrm{KL}}\!\left(q(z) \,\|\, p(z)\right) + \mathbb{E}_{q(z)}\!\left[ D_{\mathrm{KL}}\!\left(q(\ell \mid z) \,\|\, p(\ell)\right) \right].
\]

Because of how we defined p(ℓ), we expand the second term and factor out the constant H(p(ℓ)) = log L:

\[
= D_{\mathrm{KL}}\!\left(q(z) \,\|\, p(z)\right) + \log L - \mathbb{E}_{q(z)}\!\left[ H\!\left(q(\ell \mid z)\right) \right].
\]

Finally, we arrive at the result from Equation 5 by using log L = H(q(ℓ)):

\[
= D_{\mathrm{KL}}\!\left(q(z) \,\|\, p(z)\right) + I_q(\ell; z).
\]
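As a quick numerical sanity check of this identity (not part of the original appendix), the decomposition can be verified on a toy categorical example:

```python
import numpy as np

# Toy check of: E_l[ KL(q(z|l) || p(z)) ] = KL(q(z) || p(z)) + I_q(l; z),
# with p(z|l) = p(z) and q(l) = p(l) = 1/L, as assumed in Appendix A.
rng = np.random.default_rng(0)
L, K = 5, 3                                  # 5 training examples, 3 latent categories
q_zl = rng.dirichlet(np.ones(K), size=L)     # q(z | l) for each example
p_z = rng.dirichlet(np.ones(K))              # shared prior p(z)

kl = lambda a, b: float(np.sum(a * np.log(a / b)))
lhs = np.mean([kl(q_zl[l], p_z) for l in range(L)])

q_z = q_zl.mean(axis=0)                      # aggregated posterior q(z)
mi = np.mean([kl(q_zl[l], q_z) for l in range(L)])   # I_q(l; z) under uniform q(l)
rhs = kl(q_z, p_z) + mi
assert np.isclose(lhs, rhs)
```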
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8526–8537 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8526 Balancing Training for Multilingual Neural Machine Translation Xinyi Wang Yulia Tsvetkov Graham Neubig Language Technology Institute, Carnegie Mellon University, Pittsburgh, PA 15213 {xinyiw1,ytsvetko,gneubig}@cs.cmu.edu Abstract When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and manyto-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized.1 1 Introduction Multilingual models are trained to process different languages in a single model, and have been applied to a wide variety of NLP tasks such as text classification (Klementiev et al., 2012; Chen et al., 2018a), syntactic analysis (Plank et al., 2016; Ammar et al., 2016), named-entity recognition (Xie et al., 2018; Wu and Dredze, 2019), and machine translation (MT) (Dong et al., 2015; Johnson et al., 2016). These models have two particularly concrete advantages over their monolingual counterparts. First, deploying a single multilingual model is much more resource efficient than deploying one model for each language under consideration (Arivazhagan et al., 2019; Aharoni et al., 2019). Second, multilingual training makes it possible to transfer knowledge from high-resource languages (HRLs) to improve performance on lowresource languages (LRLs) (Zoph et al., 2016; 1The code is available at https://github.com/ cindyxinyiwang/fairseq/tree/multiDDS. Nguyen and Chiang, 2018; Neubig and Hu, 2018; Wang and Neubig, 2019; Aharoni et al., 2019). A common problem with multilingual training is that the data from different languages are both heterogeneous (different languages may exhibit very different properties) and imbalanced (there may be wildly varying amounts of training data for each language). Thus, while LRLs will often benefit from transfer from other languages, for languages where sufficient monolingual data exists, performance will often decrease due to interference from the heterogeneous nature of the data. This is especially the case for modestly-sized models that are conducive to efficient deployment (Arivazhagan et al., 2019; Conneau et al., 2019). To balance the performance on different languages, the standard practice is to heuristically adjust the distribution of data used in training, specifically by over-sampling the training data from LRLs (Johnson et al., 2016; Neubig and Hu, 2018; Arivazhagan et al., 2019; Conneau et al., 2019). For example, Arivazhagan et al. (2019) sample training data from different languages based on the dataset size scaled by a heuristically tuned temperature term. However, such heuristics are far from perfect. First, Arivazhagan et al. 
(2019) find that the exact value of this temperature term significantly affects results, and we further show in experiments that the ideal temperature varies significantly from one experimental setting to another. Second, this heuristic ignores factors other than data size that affect the interaction between different languages, despite the fact that language similarity has been empirically proven important in examinations of cross-lingual transfer learning (Wang and Neubig, 2019; Lin et al., 2019). In this paper, we ask the question: “is it possible to learn an optimal strategy to automatically balance the usage of data in multilingual model training?” To this effect, we propose a method that 8527 learns a language scorer that can be used throughout training to improve the model performance on all languages. Our method is based on the recently proposed approach of Differentiable Data Selection (Wang et al., 2019b, DDS), a general machine learning method for optimizing the weighting of different training examples to improve a pre-determined objective. In this work, we take this objective to be the average loss from different languages, and directly optimize the weights of training data from each language to maximize this objective on a multilingual development set. This formulation has no heuristic temperatures, and enables the language scorer to consider the interaction between languages. Based on this formulation, we propose an algorithm that improves the ability of DDS to optimize multiple model objectives, which we name MultiDDS. This is particularly useful in the case where we want to optimize performance on multiple languages simultaneously. Specifically, MultiDDS (1) has a more flexible scorer parameterization, (2) is memory efficient when training on multiple languages, and (3) stabilizes the reward signal so that it improves all objectives simultaneously instead of being overwhelmed by a single objective. While the proposed methods are model-agnostic and thus potentially applicable to a wide variety of tasks, we specifically test them on the problem of training multilingual NMT systems that can translate many languages in a single model. We perform experiments on two sets of languages (one with more similarity between the languages, one with less) and two translation directions (one-to-many and many-to-one where the “one” is English). Results show that MultiDDS consistently outperforms various baselines in all settings. Moreover, we demonstrate MultiDDS provides a flexible framework that allows the user to define a variety of optimization objectives for multilingual models. 2 Multilingual Training Preliminaries Monolingual Training Objective A standard NMT model is trained to translate from a single source language S to a target language T. The parameters of the model are generally trained by preparing a training dataset Dtrain, and defining the empirical distribution of sentence pairs ⟨x, y⟩ sampled from Dtrain as P. We then minimize the empirical risk J(θ, P), which is the expected value of the loss function ℓ(x, y; θ) over this distribution: θ∗= argmin θ J(θ, Dtrain) where J(θ, Dtrain) = Ex,y∼P(X,Y )[ℓ(x, y; θ)] (1) Multilingual Training Formulation A multilingual NMT model can translate n pairs of languages {S1-T 1, S2-T 2, ..., Sn-T n}, from any source language Si. to its corresponding target T i. To train such a multilingual model, we have access to n sets of training data Dtrain = D1 train, D2 train, . . . 
, Dn train, where Di train is training data for language pair Si-T i. From these datasets, we can define P i, the distribution of sentences from Si-T i, and consequently also define a risk J(θ, P i) for each language following the monolingual objective in Eq. 1. However, the question now becomes: “how do we define an overall training objective given these multiple separate datasets?” Several different methods to do so have been proposed in the past. To discuss all of these different methods in a unified framework, we further define a distribution PD over the n sets of training data, and define our overall multilingual training objective as Jmult(θ, PD, Dtrain) = Ei∼PD(i;ψ)  J(θ, Di train)  . (2) In practice, this overall objective can be approximated by selecting a language according to ˜i ∼ PD(i), then calculating gradients with respect to θ on a batch of data from D˜i train. Evaluation Methods Another important question is how to evaluate the performance of such multilingual models. During training, it is common to use a separate development set for each language Ddev = D1 dev, D2 dev, ..., Dn dev to select the best model. Given that the objective of multilingual training is generally to optimize the performance on all languages simultaneously (Arivazhagan et al., 2019; Conneau et al., 2019), we can formalize this objective as minimizing the average of dev risks2: Jdev(θ, Ddev) = 1 n n X i=1 J(θ, Di dev). (3) 2In reality, it is common to have the loss ℓbe a likelihoodbased objective, but finally measure another metric such as BLEU score at test time, but for simplicity we will assume that these two metrics are correlated. 8528 Relation to Heuristic Strategies This formulation generalizes a variety of existing techniques that define PD(i) using a heuristic strategy, and keep it fixed throughout training. Uniform: The simplest strategy sets PD(i) to a uniform distribution, sampling minibatches from each language with equal frequency (Johnson et al., 2016). Proportional: It is also common to sample data in portions equivalent to the size of the corresponding corpora in each language (Johnson et al., 2016; Neubig and Hu, 2018). Temperature-based: Finally, because both of the strategies above are extreme (proportional underweighting LRLs, and uniform causing overfitting by re-sampling sentences from limited-size LRL datasets), it is common to sample according to data size exponentiated by a temperature term τ (Arivazhagan et al., 2019; Conneau et al., 2019): PD(i) = q1/τ i Pn k=1 q1/τ k where qi = |Di train| Pn k=1 |Dk train|. (4) When τ = 1 or τ = ∞this is equivalent to proportional or uniform sampling respectively, and when a number in the middle is chosen it becomes possible to balance between the two strategies. As noted in the introduction, these heuristic strategies have several drawbacks regarding sensitivity to the τ hyperparameter, and lack of consideration of similarity between the languages. In the following sections we will propose methods to resolve these issues. 3 Differentiable Data Selection Now we turn to the question: is there a better way to optimize PD(i) so that we can achieve our final objective of performing well on a representative development set over all languages, i.e. minimizing Jdev(θ, Ddev). In order to do so, we turn to a recently proposed method of Differentiable Data Selection (Wang et al., 2019b, DDS), a general purpose machine learning method that allows for weighting of training data to improve performance on a separate set of held-out data. 
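Before describing DDS, note that the temperature-based heuristic of Eq. 4, which the learned approach replaces, amounts to only a few lines. The sketch below uses illustrative dataset sizes, not figures from the paper.

```python
import numpy as np

def temperature_sampling_probs(sizes, tau=5.0):
    """Eq. 4: dataset-sampling distribution from per-language training-set sizes.

    tau = 1 recovers proportional sampling; very large tau approaches uniform.
    """
    q = np.asarray(sizes, dtype=float)
    q = q / q.sum()                 # proportional distribution q_i
    p = q ** (1.0 / tau)            # temperature scaling
    return p / p.sum()

# e.g., one HRL and two LRLs (sizes are illustrative)
print(temperature_sampling_probs([5_000_000, 200_000, 10_000], tau=5.0))
```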
Specifically, DDS uses a technique called bilevel optimization (Colson et al., 2007), that learns a second set of parameters ψ that modify the training objective that we use to learn θ, so as to maximize the final objective Jdev(θ, Ddev). Specifically, it proposes to learn a data scorer P(x, y; ψ), parameterized by ψ, such that training using data sampled from the scorer optimizes the model performance on the dev set. To take the example of learning an NMT system to translate a single language pair i using DDS, the general objective in Eq. 1 could be rewritten as ψ∗= argmin ψ J(θ∗(ψ), Di dev) where θ∗(ψ) = argmin θ Ex,y∼P(x,y;ψ) [ℓ(x, y; θ)] . (5) DDS optimizes θ and ψ iteratively throughout the training process. Given a fixed ψ, the update rule for θ is simply θt ←θt−1 −∇θEx,y∼P(x,y;ψ) [ℓ(x, y; θ)] To update the data scorer, DDS uses reinforcement learning with a reward function that approximates the effect of the training data on the model’s dev performance R(x, y; θt) ≈∇J(θt, Di dev)⊤· ∇θℓ(x, y; θt−1) ≈cos ∇J(θt, Di dev), ∇θℓ(x, y; θt−1)  (6) where cos(·) is the cosine similarity of two vectors. This reward can be derived by directly differentiating J(θ(ψ), Di dev) with respect to ψ, but intuitively, it indicates that the data scorer should be updated to up-weigh the data points that have similar gradient with the dev data. According to the REINFORCE algorithm (Williams, 1992), the update rule for the data scorer then becomes ψt+1 ←ψt + R(x, y; θt) · ∇ψlogP(x, y; ψ) (7) 4 DDS for Multilingual Training In this section, we use the previously described DDS method to derive a new framework that, instead of relying on fixed heuristics, adaptively optimizes usage of multilingual data for the best model performance on multiple languages. We illustrate the overall workflow in Fig. 1. First, we note two desiderata for our multilingual training method: 1) generality: the method should be flexible enough so that it can be utilized universally for different multilingual tasks and settings (such as different translation directions for 8529 NMT). 2) scalablity: the method should be stable and efficient if one wishes to scale up the number of languages that a multilingual model supports. Based on these two properties, we introduce MultiDDS, an extension of the DDS method tailored for multilingual training. Method MultiDDS directly parameterizes the standard dataset sampling distribution for multilingual training with ψ: PD(i; ψ) = eψi/Pn k=1eψk (8) and optimizes ψ to minimize the dev loss. Notably, unlike standard DDS we make the design decision to weight training datasets rather than score each training example ⟨x, y⟩directly, as it is more efficient and also likely easier to learn. We can thus rewrite the objective in Eq. 2 to incorporate both ψ and θ as: ψ∗= argmin ψ Jdev(θ∗(ψ), Ddev) where θ∗= argmin θ Ei∼PD(i;ψ)  J(θ, Di train)  (9) In other words, while the general DDS framework evaluates the model performance on a single dev set and optimizes the weighting of each training example, our multilingual training objective evaluates the performance over an aggregation of n dev sets and optimizes the weighting of n training sets. The reward signal for updating ψt is R(i; θt) ≈cos  ∇(Jdev(θt, Ddev)) , ∇θJ(θt−1, Di train)  = cos ∇ 1 n n X k=1 J(θt, Dk dev) ! , ∇θJ(θt−1, Di train) ! , (10) where Jdev(·) defines the combination of n dev sets, and we simply plug in its definition from Eq. 3. Intuitively, Eq. 
10 implies that we should favor the training language i if its gradient aligns with the gradient of the aggregated dev risk of all languages. Implementing the Scorer Update The pseudocode for the training algorithm using MultiDDS can be found in line 25. Notably, we do not update the data scorer ψ on every training step, because it is too computationally expensive for NMT training (Wang et al., 2019b). Instead, after training the multilingual model θ for a certain number of steps, we update the scorer for all languages. This implementation is not only efficient, but also allows us to x Scorer Model ∇θJ(Di train; θt) ∇θJdev(θ′!t+1, Ddev) θt D1 train Dn train … D1 dev Dn dev … ψt PD(i; ψt) Figure 1: An illustration of the MultiDDS algorithm. Solid lines represent updates for θ, and dashed lines represent updates for ψ. The scorer defines the distribution over n training languages, from which training data is sampled to train the model. The scorer is updated to favor the datasets with similar gradients as the gradient of the aggregated dev sets. re-estimate more frequently the effect of languages that have low probability of being sampled. In order to do so, it is necessary to calculate the effect of each training language on the current model, namely R(i; θt). We estimate this value by sampling a batch of data from each Di train to get the training gradient for θt, and use this to calculate the reward for this language. This process is detailed in line 11 of the line 25. Unlike the algorithm in DDS which requires storing n model gradients,3 this approximation does not require extra memory even if n is large, which is important given recent efforts to scale multilingual training to 100+ (Arivazhagan et al., 2019; Aharoni et al., 2019) or even 1000+ languages ( ¨Ostling and Tiedemann, 2017; Malaviya et al., 2017). 5 Stabilized Multi-objective Training In our initial attempts to scale DDS to highly multilingual training, we found that one challenge was that the reward for updating the scorer became unstable. This is because the gradient of a multilingual dev set is less consistent and of higher variance than that of a monolingual dev set, which influences the fidelity of the data scorer reward. 4 3The NMT algorithm in (Wang et al., 2019b) estimates the reward by storing the moving average of n training gradients, which is not memory efficient (See Line. 7 of Alg. 2 in (Wang et al., 2019b)). In the preliminary experiments, our approximation performs as well as the moving average approximation (see App. A.1). Thus, we use our approximation method as the component for MultiDDS for the rest of the experiments. 
4Suppose the dev set gradient of language k has variance of var(gk dev) = σ, and that the dev gradients of each language 8530 Algorithm 1: Training with MultiDDS Input :Dtrain; M: amount of data to train the multilingual model before updating ψ; Output :The converged multilingual model θ∗ ▷Initialize PD(i, ψ) to be proportional to dataset size 1 PD(i, ψ) ← |Di train| Pn j=1 |Dj train| 2 while not converged do ▷Load training data with ψ 3 X, Y ←∅ 4 while |X, Y | < M do 5 ˜i ∼PD(i, ψt) 6 (x, y) ∼D˜i train 7 X, Y ←X, Y ∪x, y 8 end ▷Train the NMT model for multiple steps 9 for x, y in X, Y do 10 θ ← GradientUpdate (θ, ∇θℓ(x, y; θ)) 11 end ▷Estimate the effect of each language R(i; θ) 12 for i from 1 to n do 13 x′, y′ ∼Di train 14 gtrain ←∇θℓ(x′, y′; θ) 15 θ′ ←GradientUpdate(θ, gtrain) 16 gdev ←0 17 for j from 1 to n do 18 xd, yd ∼Dj dev 19 gdev ←gdev + ∇θ′ℓ(xd, yd; θ′) 20 end 21 R(i; θ) ←cos(gdev, gtrain) 22 end ▷Optimize ψ 23 dψ ←Pn i=1 R(i; θ) · ∇ψlog (PD (i; ψ)) 24 ψ ←GradientUpdate(ψ, dψ) 25 end Thus, instead of using the gradient alignment between the training data and the aggregated loss of n dev sets as the reward, we propose a second approach to first calculate the gradient alignment reward between the data and each of the n dev sets, then take the average of these as the final reward. {g1 dev, ..., gn dev} are independent. Then the sum of the gradients from the n languages has a variance of var(Pn k=1 gk dev) = nσ. This can be expressed mathematically as follows: R′(i; θt) ≈ cos ∇θ 1 n n X k=1 J(θt, Dk dev) ! , ∇θJ(θt−1, Di train) ! ≈1 n n X k=1 cos  ∇θJ(θt, Dk dev), ∇θJ(θt−1, Di train)  (11) To implement this, we can simply replace the standard reward calculation at Line 11 of line 25 to use the stable reward. We name this setting MultiDDS-S. In § 6.6 we show that this method has less variance than the reward in Eq. 10. 6 Experimental Evaluation 6.1 Data and Settings We use the 58-languages-to-English parallel data from Qi et al. (2018). A multilingual NMT model is trained for each of the two sets of language pairs with different level of language diversity: Related: 4 LRLs (Azerbaijani: aze, Belarusian: bel, Glacian: glg, Slovak: slk) and a related HRL for each LRL (Turkish: tur, Russian: rus, Portuguese: por, Czech: ces) Diverse: 8 languages with varying amounts of data, picked without consideration for relatedness (Bosnian: bos, Marathi: mar, Hindi: hin, Macedonian: mkd, Greek: ell, Bulgarian: bul, French: fra, Korean: kor) Statistics of the datasets are in § A.3. For each set of languages, we test two varieties of translation: 1) many-to-one (M2O): translating 8 languages to English; 2) one-to-many (O2M): translating English into 8 different languages. A target language tag is added to the source sentences for the O2M setting (Johnson et al., 2016). 6.2 Experiment Setup All translation models use standard transformer models (Vaswani et al., 2017) as implemented in fairseq (Ott et al., 2019) with 6 layers and 4 attention heads. All models are trained for 40 epochs. We preprocess the data using sentencpiece (Kudo and Richardson, 2018) with a vocabulary size of 8K for each language. The complete set of hyperparameters can be found in § A.2. The model performance is evaluated with BLEU score (Papineni et al., 2002), using sacreBLEU (Post, 2018). 
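Before turning to the experimental comparison, the periodic scorer update performed by MultiDDS-S (Algorithm 1 with the stabilized reward of Eq. 11) can be summarized as below. The sketch abstracts away the gradient computation and the one-step look-ahead on θ; the plain-SGD update on ψ and the learning rate are our assumptions.

```python
import torch
import torch.nn.functional as F

def cosine_reward(dev_grad, train_grad):
    """cos(., .) between two flattened gradient vectors."""
    return F.cosine_similarity(dev_grad.view(1, -1), train_grad.view(1, -1)).item()

def multidds_s_scorer_update(psi, train_grads, dev_grads, lr=0.1):
    """One update of the language scorer psi (one logit per training language).

    train_grads[i]: flattened gradient from a batch of language i (before the
    one-step model update); dev_grads[k]: flattened gradient of dev set k on
    the updated model, as in Algorithm 1.
    """
    n = psi.numel()
    probs = F.softmax(psi, dim=0)                         # P_D(i; psi), Eq. 8
    rewards = torch.tensor([                              # stabilized reward, Eq. 11
        sum(cosine_reward(dev_grads[k], train_grads[i]) for k in range(n)) / n
        for i in range(n)
    ])
    # REINFORCE-style surrogate: sum_i R(i) * log P_D(i; psi)
    surrogate = (rewards * torch.log(probs)).sum()
    grad_psi, = torch.autograd.grad(-surrogate, psi)
    return (psi - lr * grad_psi).detach().requires_grad_(True)

# psi = torch.zeros(n_langs, requires_grad=True)  # or log of the dataset proportions
```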
8531 Baselines We compare with the three standard heuristic methods explained in § 2: 1) Uniform (τ = ∞): datasets are sampled uniformly, so that LRLs are over-sampled to match the size of the HRLs; 2) Temperature: scales the proportional distribution by τ = 5 (following Arivazhagan et al. (2019)) to slightly over-sample the LRLs; 3) Proportional (τ = 1): datasets are sampled proportional to their size, so that there is no over-sampling of the LRLs. Ours we run MultiDDS with either the standard reward (MultiDDS), or the stabilized reward proposed in Eq. 11 (MultiDDS-S). The scorer for MultiDDS simply maps the ID of each dataset to its corresponding probability (See Eq. 8. The scorer has N parameters for a dataset with N languages.) 6.3 Main Results We first show the average BLEU score over all languages for each translation setting in Tab. 2. First, comparing the baselines, we can see that there is no consistently strong strategy for setting the sampling ratio, with proportional sampling being best in the M2O setting, but worst in the O2M setting. Next, we can see that MultiDDS outperforms the best baseline in three of the four settings and is comparable to proportional sampling in the last M2O-Diverse setting. With the stabilized reward, MultiDDS-S consistently delivers better overall performance than the best baseline, and outperforms MultiDDS in three settings. From these results, we can conclude that MultiDDS-S provides a stable strategy to train multilingual systems over a variety of settings. Next, we look closer at the BLEU score of each language pair for MultiDDS-S and the best baseline. The results for all translation settings are in Tab. 1. In general, MultiDDS-S outperforms the baseline on more languages. In the best case, for the O2M-Related setting, MultiDDS-S brings significant gains for five of the eight languages, without hurting the remaining three. The gains for the Related group are larger than for the Diverse group, likely because MultiDDS can take better advantage of language similarities than the baseline methods. It is worth noting that MultiDDS does not impose large training overhead. For example, for our M2O system, the standard method needs around 19 hours and MultiDDS needs around 20 hours for convergence. The change in training time is not siginificant because MultiDDS only optimizes a simple distribution over the training datasets. 6.4 Prioritizing what to Optimize Prior works on multilingual models generally focus on improving the average performance of the model on all supported languages (Arivazhagan et al., 2019; Conneau et al., 2019). The formulation of MultiDDS reflects this objective by defining the aggregation of n dev sets using Eq. 3, which is simply the average of dev risks. However, average performance might not be the most desirable objective under all practical usage settings. For example, it may be desirable to create a more egalitarian system that performs well on all languages, or a more specialized system that does particularly well on a subset of languages. In this section, we examine the possibility of using MultiDDS to control the priorities of the multilingual model by defining different dev set aggregation methods that reflect these priorities. To do so, we first train the model for 10 epochs using regular MultiDDS, then switch to a different dev set aggregation method. 
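These priorities amount to different ways of collapsing the n per-language dev risks into the scalar objective that drives the scorer reward. A minimal sketch of such aggregation choices is given below (the top-k formulation for Low and High is our reading of the description that follows); the three priorities themselves are defined next.

```python
import torch

def aggregate_dev_risks(dev_losses, priority="regular", k=4):
    """Collapse a length-n tensor of per-language dev losses into one objective.

    "regular" averages all languages (Eq. 3); "low"/"high" average only the k
    currently worst-/best-performing languages.
    """
    if priority == "regular":
        return dev_losses.mean()
    if priority not in ("low", "high"):
        raise ValueError(f"unknown priority: {priority}")
    k = min(k, dev_losses.numel())
    largest = (priority == "low")            # worst-performing = highest loss
    vals, _ = torch.topk(dev_losses, k, largest=largest)
    return vals.mean()
```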
Specifically, we compare MultiDDS with three different priorities: Regular: this is the standard MultiDDS that optimizes all languages throughout training using the average dev risk aggregation in Eq. 3 Low: a more egalitarian system that optimizes the average of the four languages with the worst dev perplexity, so that MultiDDS can focus on optimizing the low-performing languages High: a more specialized system that optimizes the four languages with the best dev perplexity, for MultiDDS to focus on optimizing the highperforming languages We performed experiments with these aggregation methods on the Diverse group, mainly because there is more performance trade-off among these languages. First, in Tab. 3 we show the average BLEU over all languages, and find that MultiDDS with different optimization priorities still maintains competitive average performance compared to the baseline. More interestingly, in Fig. 2, we plot the BLEU score difference of High and Low compared to Regular for all 8 languages. The languages are ordered on the x-axis from left to right in decreasing perplexity. Low generally performs better on the low-performing languages on the left, while High generally achieves the best performance on the high-performing languages on 8532 Method Avg. aze bel glg slk tur rus por ces M2O Prop. 24.88 11.20 17.17 27.51 28.85 23.09∗ 22.89 41.60 26.80 MultiDDS-S 25.52 12.20∗ 19.11∗ 29.37∗ 29.35∗ 22.81 22.78 41.55 27.03 O2M Temp. 16.61 6.66 11.29 21.81 18.60 11.27 14.92 32.10 16.26 MultiDDS-S 17.32 6.59 12.39∗ 21.65 20.61∗ 11.58 15.26∗ 33.52∗ 16.98∗ bos mar hin mkd ell bul fra kor M2O Prop. 26.68 23.43 10.10 22.01 31.06 35.62∗ 36.41∗ 37.91∗ 16.91 MultiDDS-S 27.00 25.34∗ 10.57 22.93∗ 32.05∗ 35.27 35.77 37.30 16.81 O2M Temp. 17.94 14.73∗ 4.93 15.49 20.59 24.82 26.60 29.74∗ 6.62 MultiDDS-S 18.24 14.02 4.76 15.68∗ 21.44 25.69∗ 27.78∗ 29.60 7.01∗ Table 1: BLEU scores of the best baseline and MultiDDS-S for all translation settings. MultiDDS-S performs better on more languages. For each setting, bold indicates the highest value, and ∗means the gains are statistically significant with p < 0.05. Method M2O O2M Related Diverse Related Diverse Baseline Uni. (τ=∞) 22.63 24.81 15.54 16.86 Temp. (τ=5) 24.00 26.01 16.61 17.94 Prop. (τ=1) 24.88 26.68 15.49 16.79 Ours MultiDDS 25.26 26.65 17.17 18.40 MultiDDS-S 25.52 27.00 17.32 18.24 Table 2: Average BLEU for the baselines and our methods. Bold indicates the highest value. Setting Baseline MultiDDS-S Regular Low High M2O 26.68 27.00 26.97 27.08 O2M 17.94 18.24 17.95 18.55 Table 3: Average BLEU of the best baseline and three MultiDDS-S settings for the Diverse group. MultiDDS-S always outperform the baseline. the right, with results most consistent in the O2M setting. This indicates that MultiDDS is able to prioritize different predefined objectives. It is also worth noting that low-performing languages are not always low-resource languages. For example, Korean (kor) has the largest amount of training data, but its BLEU score is among the lowest. This is because it is typologically very different from English and the other training languages. Fig. 2 shows that Low is still able to focus on improving kor, which aligns with the predefined objective. This fact is not considered in baseline methods that only consider data size when sampling from the training datasets. 6.5 Learned Language Distributions In Fig. 3, we visualize the language distribution learned by MultiDDS throughout the training process. 
Under all settings, MultiDDS gradually increases the usage of LRLs. Although initialized with the same distribution for both one-to-many and many-to-one settings, MultiDDS learns to upsample the LRLs more in the one-to-many setting, likely due to the increased importance of learning language-specific decoders in this setting. For the Diverse group, MultiDDS learns to decrease the usage of Korean (kor) the most, probably because it is very different from the other languages in the group.
Figure 2: The difference between the Low and High optimization objectives compared to Regular for the Diverse language group. MultiDDS successfully optimizes for different priorities. Left: M2O; right: O2M.
Figure 3: Language usage by training step. Left: many-to-one; Right: one-to-many; Top: related language group; Bottom: diverse language group.
6.6 Effect of Stabilized Rewards Next, we study the effect of the stabilized reward proposed in § 2. In Fig. 4, we plot the regular reward (used by MultiDDS) and the stable reward (used by MultiDDS-S) throughout training. For all settings, the rewards in MultiDDS and MultiDDS-S follow a similar trend, while the stable reward used in MultiDDS-S has consistently lower variance. MultiDDS-S also results in smaller variance in the final model performance. We run MultiDDS and MultiDDS-S with 4 different random seeds, and record the mean and variance of the average BLEU score. Tab. 4 shows results for the Diverse group, which indicate that the model performance achieved using MultiDDS-S has lower variance and a higher mean than MultiDDS.
Table 4: Mean and variance of the average BLEU score for the Diverse group (MultiDDS: M2O 26.85 / 0.04, O2M 18.20 / 0.05; MultiDDS-S: M2O 26.94 / 0.02, O2M 18.24 / 0.02). The models trained with MultiDDS-S perform better and have less variance.
Figure 4: Variance of reward (MultiDDS 0.0012–0.0026 vs. MultiDDS-S 0.0003–0.0007 across panels). Left: M2O; Right: O2M; Top: Related language group; Bottom: Diverse language group.
Additionally, we compare the learned language distributions of MultiDDS-S and MultiDDS in Fig. 5. The learned language distribution in both plots fluctuates similarly, but MultiDDS has more drastic changes than MultiDDS-S. This is also likely due to the reward of MultiDDS-S having less variance than that of MultiDDS. 7 Related Work Our work is related to multilingual training methods in general. Multilingual training has a rich history (Schultz and Waibel, 1998; Mimno et al., 2009; Shi et al., 2010; Täckström et al., 2013), but has become particularly prominent in recent years due to the ability of neural networks to easily perform multi-task learning (Dong et al., 2015; Plank et al., 2016; Johnson et al., 2016). As stated previously, recent results have demonstrated the importance of balancing HRLs and LRLs during multilingual training (Arivazhagan et al., 2019; Conneau et al.,
0 100 200 Step 0.00 0.05 0.10 0.15 0.20 0.25 Language Probability 0 100 200 Step 0.00 0.05 0.10 0.15 0.20 0.25 0.30 kor fra bul ell mkd hin mar bos Figure 5: Language usage for the M2O-Diverse setting. Left: MultiDDS-S; Right: MultiDDS. The two figures follow similar trends while MultiDDS changes more drastically. 2019), which is largely done with heuristic sampling using a temperature term; MultiDDS provides a more effective and less heuristic method. Wang and Neubig (2019); Lin et al. (2019) choose languages from multilingual data to improve the performance on a particular language, while our work instead aims to train a single model that handles translation between many languages. (Zaremoodi et al., 2018; Wang et al., 2018, 2019a) propose improvements to the model architecture to improve multilingual performance, while MultiDDS is a model-agnostic and optimizes multilingual data usage. Our work is also related to machine learning methods that balance multitask learning (Chen et al., 2018b; Kendall et al., 2018). For example, Kendall et al. (2018) proposes to weigh the training loss from a multitask model based on the uncertainty of each task. Our method focuses on optimizing the multilingual data usage, and is both somewhat orthogonal to and less heuristic than such loss weighting methods. Finally, our work is related to meta-learning, which is used in hyperpa8534 rameter optimization (Baydin et al., 2018), model initialization for fast adaptation (Finn et al., 2017), and data weighting (Ren et al., 2018). Notably, Gu et al. (2018) apply meta-learning to learn an NMT model initialization for a set of languages, so that it can be quickly fine-tuned for any language. This is different in motivation from our method because it requires an adapted model for each of the language, while our method aims to optimize a single model to support all languages. To our knowledge, our work is the first to apply meta-learning to optimize data usage for multilingual objectives. 8 Conclusion In this paper, we propose MultiDDS, an algorithm that learns a language scorer to optimize multilingual data usage to achieve good performance on many different languages. We extend and improve over previous work on DDS (Wang et al., 2019b), with a more efficient algorithmic instantiation tailored for the multilingual training problem and a stable reward to optimize multiple objectives. MultiDDS not only outperforms prior methods in terms of overall performance on all languages, but also provides a flexible framework to prioritize different multilingual objectives. Notably, MultiDDS is not limited to NMT, and future work may consider applications to other multilingual tasks. In addition, there are other conceivable multilingual optimization objectives than those we explored in § 6.4. Acknowledgement The first author is supported by a research grant from the Tang Family Foundation. This work was supported in part by NSF grant IIS-1812327. The authors would like to thank Amazon for providing GPU credits. References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In NAACL. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. TACL, 4:431–444. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. 
Massively multilingual neural machine translation in the wild: Findings and challenges. In arxiv. Atilim Gunes Baydin, Robert Cornish, David Mart´ınezRubio, Mark Schmidt, and Frank Wood. 2018. Online learning rate adaptation with hypergradient descent. In ICLR. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018a. Adversarial deep averaging networks for cross-lingual sentiment classification. In ACL. Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. 2018b. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In ICML. Benoˆıt Colson, Patrice Marcotte, and Gilles Savard. 2007. An overview of bilevel optimization. Annals OR, 153(1). Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. In EMNLP. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In ACL. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML. Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for lowresource neural machine translation. In EMNLP. Melvin Johnson et al. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. In TACL. Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012, pages 1459–1474. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In EMNLP. Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In ACL. 8535 Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In EMNLP. David M. Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. In EMNLP. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. EMNLP. Toan Q. Nguyen and David Chiang. 2018. Transfer learning across low-resource, related languages for neural machine translation. In NAACL. Toan Q. Nguyen and Juli´an Salazar. 2019. Transformers without tears: Improving the normalization of self-attention. In IWSLT. Robert ¨Ostling and J¨org Tiedemann. 2017. Continuous multilinguality with language vectors. In EACL. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In NAACL: Demonstrations. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In ACL. Matt Post. 2018. A call for clarity in reporting BLEU scores. In WMT. 
Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In NAACL. Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. 2018. Learning to reweight examples for robust deep learning. In ICML. Tanja Schultz and Alex Waibel. 1998. Multilingual and crosslingual speech recognition. In Proc. DARPA Workshop on Broadcast News Transcription and Understanding. Citeseer. Lei Shi, Rada Mihalcea, and Mingjun Tian. 2010. Cross language text classification by model translation and semi-supervised learning. In EMNLP. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. In TACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Xinyi Wang and Graham Neubig. 2019. Target conditioned sampling: Optimizing data selection for multilingual neural machine translation. In ACL. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019a. Multilingual neural machine translation with soft decoupled encoding. In ICLR. Xinyi Wang, Hieu Pham, Paul Mitchel, Antonis Anastasopoulos, Jaime Carbonell, and Graham Neubig. 2019b. Optimizing data usage via differentiable rewards. In arxiv. Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2018. Three strategies to improve one-to-many multilingual translation. In EMNLP. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In EMNLP. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In EMNLP. Poorya Zaremoodi, Wray L. Buntine, and Gholamreza Haffari. 2018. Adaptive knowledge sharing in multitask learning: Improving low-resource neural machine translation. In ACL. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low resource neural machine translation. In EMNLP. 8536 A Appendix A.1 Effect of Step-ahead Reward Setting Baseline MultiDDS Moving Ave. Step-ahead M2O 24.88 25.19 25.26 O2M 16.61 17.17 17.17 Table 5: Ave. BLEU for the Related language group. The step-ahead reward proposed in the paper is better or comparable with the moving average, and both are better than the baseline. A.2 Hyperparameters In this section, we list the details of preprocessing and hyperparameters we use for the experiments. • We use 6 encoder and decoder layers, with 4 attention heads • The embedding size is set to 512, and the feedforward layer has a dimension of 1024 • We use the dropout rate of 0.3 • The batch size is set to 9600 tokens • We use label smoothing with rate of 0.1 • We use the scaled l2 normalization before residual connection, which is shown to be helpful for small data (Nguyen and Salazar, 2019) A.3 Dataset statistics Language Train Dev Test aze 5.94k 671 903 bel 4.51k 248 664 glg 10.0k 682 1007 slk 61.5k 2271 2445 tur 182k 4045 5029 rus 208k 4814 5483 por 185k 4035 4855 ces 103k 3462 3831 Table 6: Statistics of the related language group. 
A.4 Detailed Results for All Settings Language Train Dev Test bos 5.64k 474 463 mar 9.84k 767 1090 hin 18.79k 854 1243 mkd 25.33k 640 438 ell 134k 3344 4433 bul 174k 4082 5060 fra 192k 4320 4866 kor 205k 4441 5637 Table 7: Statistics of the diverse language group. 8537 Method Avg. aze bel glg slk tur rus por ces Uni. (τ = ∞) 22.63 8.81 14.80 25.22 27.32 20.16 20.95 38.69 25.11 Temp. (τ = 5) 24.00 10.42 15.85 27.63 28.38 21.53 21.82 40.18 26.26 Prop. (τ = 1) 24.88 11.20 17.17 27.51 28.85 23.09 22.89 41.60 26.80 MultiDDS 25.26 12.20 18.60 28.83 29.21 22.24 22.50 41.40 27.22 MultiDDS-S 25.52 12.20 19.11 29.37 29.35 22.81 22.78 41.55 27.03 Table 8: BLEU score of the baselines and our method on the Related language group for many-to-one translation Method Avg. bos mar hin mkd ell bul fra kor Uni. (τ = ∞) 24.81 21.52 9.48 19.99 30.46 33.22 33.70 35.15 15.03 Temp. (τ = 5) 26.01 23.47 10.19 21.26 31.13 34.69 34.94 36.44 16.00 Prop. (τ = 1) 26.68 23.43 10.10 22.01 31.06 35.62 36.41 37.91 16.91 MultiDDS 26.65 25.00 10.79 22.40 31.62 34.80 35.22 37.02 16.36 MultiDDS-S 27.00 25.34 10.57 22.93 32.05 35.27 35.77 37.30 16.81 Table 9: BLEU score of the baselines and our method on the Diverse language group for many-to-one translation Method Avg. aze bel glg slk tur rus por ces Uni. (τ = ∞) 15.54 5.76 10.51 21.08 17.83 9.94 13.59 30.33 15.35 Temp. (τ = 5) 16.61 6.66 11.29 21.81 18.60 11.27 14.92 32.10 16.26 Prop. (τ = 1) 15.49 4.42 5.99 14.92 17.37 12.86 16.98 34.90 16.53 MultiDDS 17.17 6.24 11.75 21.46 20.67 11.51 15.42 33.41 16.94 MultiDDS-S 17.32 6.59 12.39 21.65 20.61 11.58 15.26 33.52 16.98 Table 10: BLEU score of the baselines and our method on the Related language group for one-to-many translation Method Avg. bos mar hin mkd ell bul fra kor Uni. (τ = ∞) 16.86 14.12 4.69 14.52 20.10 22.87 25.02 27.64 5.95 Temp. (τ = 5) 17.94 14.73 4.93 15.49 20.59 24.82 26.60 29.74 6.62 Prop. (τ = 1) 16.79 6.93 3.69 10.70 15.77 26.69 29.59 33.51 7.49 MultiDDS 18.40 14.91 4.83 14.96 22.25 24.80 27.99 30.77 6.75 MultiDDS-S 18.24 14.02 4.76 15.68 21.44 25.69 27.78 29.60 7.01 Table 11: BLEU score of the baselines and our method on the Diverse language group for one-to-many translation
2020
754
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8538–8544 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8538 Evaluating Robustness to Input Perturbations for Neural Machine Translation Xing Niu, Prashant Mathur, Georgiana Dinu, Yaser Al-Onaizan Amazon AI {xingniu,pramathu,gddinu,onaizan}@amazon.com Abstract Neural Machine Translation (NMT) models are sensitive to small perturbations in the input. Robustness to such perturbations is typically measured using translation quality metrics such as BLEU on the noisy input. This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input. We focus on a class of models employing subword regularization to address robustness and perform extensive evaluations of these models using the robustness measures proposed. Results show that our proposed metrics reveal a clear trend of improved robustness to perturbations when subword regularization methods are used. 1 Introduction Recent work has pointed out the challenges in building robust neural network models (Goodfellow et al., 2015; Papernot et al., 2016). For Neural Machine Translation (NMT) in particular, it has been shown that NMT models are brittle to small perturbations in the input, both when these perturbations are synthetically created or generated to mimic real data noise (Belinkov and Bisk, 2018). Consider the example in Table 1 where an NMT model generates a worse translation as a consequence of only one character changing in the input. Improving robustness in NMT has received a lot of attention lately with data augmentation (Sperber et al., 2017; Belinkov and Bisk, 2018; Vaibhav et al., 2019; Liu et al., 2019; Karpukhin et al., 2019) and adversarial training methods (Cheng et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2019; Michel et al., 2019) as some of the more popular approaches used to increase robustness in neural network models. In this paper, we focus on one class of methods, subword regularization, which addresses NMT Original input Se kyll¨a tuntuu sangen luultavalta. Translation It certainly seems very likely. Perturbed input Se kyll¨a tumtuu sangen luultavalta. Translation It will probably darken quite probably. Reference It certainly seems probable. Table 1: An example of NMT English translations for a Finnish input and its one-letter misspelled version. robustness without introducing any changes to the architectures or to the training regime, solely through dynamic segmentation of input into subwords (Kudo, 2018; Provilkov et al., 2019). We provide a comprehensive comparison of these methods on several language pairs and under different noise conditions on robustness-focused metrics. Previous work has used translation quality measures such as BLEU on noisy input as an indicator of robustness. Absolute model performance on noisy input is important, and we believe this is an appropriate measure for noisy domain evaluation (Michel and Neubig, 2018; Berard et al., 2019; Li et al., 2019). However, it does not disentangle model quality from the relative degradation under added noise. For this reason, we propose two additional measures for robustness which quantify the changes in translation when perturbations are added to the input. The first one measures relative changes in translation quality while the second one focuses on consistency in translation output irrespective of reference translations. 
Unlike the use of BLEU scores alone, the metrics introduced show clearer trends across all languages tested: NMT models are more robust to perturbations when subword regularization is employed. We also show that, for the models used, changes in output strongly correlate with decreased quality, and the consistency measure alone can be used as a robustness proxy in the absence of reference data. 2 Evaluation Metrics Robustness is usually measured with respect to translation quality. Suppose an NMT model M translates input x to y' and translates its perturbed version x_δ to y'_δ; the translation quality (TQ) on these datasets is measured against reference translations y: TQ(y', y) and TQ(y'_δ, y). TQ can be implemented as any quality measurement metric, such as BLEU (Papineni et al., 2002) or 1 minus TER (Snover et al., 2006). Previous work has used TQ on perturbed or noisy input as an indicator of robustness. However, we argue that assessing models' performance relative to that on the original dataset is important as well, in order to capture models' sensitivity to perturbations. Consider the following hypothetical example: M1: BLEU(y'_1, y) = 40, BLEU(y'_{δ1}, y) = 38; M2: BLEU(y'_2, y) = 37, BLEU(y'_{δ2}, y) = 37. Selecting M1 to translate noisy data alone is preferable, since M1 outperforms M2 (38 > 37). However, M1's quality degradation (40 → 38) reflects that it is in fact more sensitive to perturbation δ compared with M2. To this end, we use the ratio between TQ(y', y) and TQ(y'_δ, y) to quantify an NMT model M's invariance to specific data and perturbation, and define it as robustness: ROBUST(M|x, y, δ) = TQ(y'_δ, y) / TQ(y', y). When evaluating on the dataset (x, y), ROBUST(M|x, y, δ) < 1 means the translation quality of M is degraded under perturbation δ; ROBUST(M|x, y, δ) = 1 indicates that M is robust to perturbation δ. It is worth noting that: (1) ROBUST can be viewed as the normalized ∆TQ = TQ(y', y) − TQ(y'_δ, y), because ∆TQ / TQ(y', y) = 1 − ROBUST. We opt for the ratio definition because it is on a [0, 1] scale and is easier to interpret than ∆TQ, since the latter needs to be interpreted in the context of the TQ score. (2) High robustness can only be expected under low levels of noise, as it is not realistic for a model to recover from extreme perturbations. Evaluation without References Reference translations are not readily available in some cases, such as when evaluating on a new domain. Inspired by unsupervised consistency training (Xie et al., 2019), we test whether translation consistency can be used to estimate robustness against noise perturbations. Specifically, a model is consistent under a perturbation δ if the two translations, y'_δ and y', are similar to each other. Note that consistency is sufficient but not necessary for robustness: a good translation can be expressed in diverse ways, which leads to high robustness but low consistency. We define consistency by CONSIS(M|x, δ) = Sim(y'_δ, y'). Sim can be any symmetric measure of similarity, and in this paper we opt for Sim(y'_δ, y') to be the harmonic mean of TQ(y'_δ, y') and TQ(y', y'_δ), where TQ is BLEU between the two outputs. 3 Experimental Set-Up We run several experiments across different language families of varying difficulty, across different training data conditions (i.e., with different training data sizes), and evaluate how different subword segmentation strategies perform across noisy domains and noise types.
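Before turning to the implementation details, here is a concrete reading of the Section 2 definitions. The sketch instantiates TQ as corpus-level BLEU via sacreBLEU; this is our own illustration of the formulas above, not the authors' released evaluation code, and the helper names are ours.

```python
# Illustrative implementation of the ROBUST and CONSIS metrics defined above,
# with TQ instantiated as corpus-level BLEU via sacreBLEU (our choice; the
# helper names are ours, not the authors' released evaluation code).
import sacrebleu


def tq(hyps, refs):
    """Translation quality: corpus BLEU of hypotheses against one reference set."""
    return sacrebleu.corpus_bleu(hyps, [refs]).score


def robustness(clean_hyps, noisy_hyps, refs):
    """ROBUST(M | x, y, delta) = TQ(y'_delta, y) / TQ(y', y)."""
    return tq(noisy_hyps, refs) / tq(clean_hyps, refs)


def consistency(clean_hyps, noisy_hyps):
    """CONSIS: harmonic mean of BLEU(y'_delta, y') and BLEU(y', y'_delta)."""
    a = tq(noisy_hyps, clean_hyps)
    b = tq(clean_hyps, noisy_hyps)
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0
```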
Implementation Details We build NMT models with the Transformer-base architecture (Vaswani et al., 2017) implemented in the Sockeye toolkit (Hieber et al., 2017). The target embeddings and the output layer’s weight matrix are tied (Press and Wolf, 2017). Training is done on 2 GPUs, with a batch size of 3072 tokens and we checkpoint the model every 4000 updates. The learning rate is initialized to 0.0002 and reduced by 10% after 4 checkpoints without improvement of perplexity on the development set. Training stops after 10 checkpoints without improvement. Tasks and Data We train NMT models on eight translation directions and measure robustness and consistency for them. EN↔DE and EN↔FI models are trained with pre-processed WMT18 news data and tested with the latest news test sets (newstest2019). Recently, two datasets were built from usergenerated content, MTNT (Michel and Neubig, 2018) and 4SQ (Berard et al., 2019). They provide naturally occurring noisy inputs and translations for EN↔FR and EN↔JA, thus enabling automatic evaluations. EN↔JA baseline models are trained and also tested with aggregated data provided by MTNT, i.e., KFTT+TED+JESC (KTJ). EN↔FR 8540 Languages # sentences # EN tokens EN↔DE 29.3 M 591 M BASE EN↔FR 22.2 M 437 M EN↔FI 2.9 M 71 M EN↔JA 3.9 M 43 M EN→FR 36.1 K 1,011 K MTNT FR→EN 19.2 K 779 K EN→JA 5.8 K 338 K JA→EN 6.5 K 156 K 4SQ FR→EN 12.1 K 141 K Table 2: Statistics of various training data sets. baseline models are trained with aggregated data of Europarl-v7 (Koehn, 2005), NewsCommentaryv14 (Bojar et al., 2018), OpenSubtitles-v2018 (Lison and Tiedemann, 2016), and ParaCrawl-v51, which simulates the UGC training corpus used in 4SQ benchmarks, and they are tested with the latest WMT new test sets supporting EN↔FR (newstest2014). Following the convention, we also evaluate models directly on noisy MTNT (mtnt2019) and 4SQ test sets. We fine-tune baseline models with corresponding MTNT/4SQ training data, inheriting all hyper-parameters except the checkpoint interval which is re-set to 100 updates. Table 2 shows itemized training data statistics after pre-processing. Perturbations We investigate two frequently used types of perturbations and apply them to WMT and KTJ test data. The first is synthetic misspelling: each word is misspelled with probability of 0.1, and the strategy is randomly chosen from single-character deletion, insertion, and substitution (Karpukhin et al., 2019). The second perturbation is letter case changing: each sentence is modified with probability of 0.5, and the strategy is randomly chosen from upper-casing all letters, lower-casing all letters, and title-casing all words (Berard et al., 2019).2 Since we change the letter case in the test data, we always report case-insensitive BLEU with ‘13a’ tokenization using sacreBLEU (Post, 2018). Japanese output is pre-segmented with Kytea before running sacreBLEU.3 1https://paracrawl.eu/ 2Character substitution uses neighbor letters on the QWERTY keyboard, so accented characters are not substituted. Japanese is “misspelled” for each character with probability of 0.1, and it only supports deletion and repetition. Letter case changing does not apply to Japanese. 3http://www.phontron.com/kytea/ Model Variations We focus on comparing different (stochastic) subword segmentation strategies: BPE (Sennrich et al., 2016), BPE-Dropout (Provilkov et al., 2019), and SentencePiece (Kudo, 2018). 
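As a concrete illustration of the Perturbations paragraph above, the following simplified sketch reproduces the two synthetic noise functions. It is our own approximation rather than the scripts used in the paper: for brevity it substitutes a random letter instead of a QWERTY-neighbor letter and omits the Japanese-specific variant.

```python
# Simplified re-implementation of the two synthetic perturbations described
# above (our sketch, not the authors' scripts).
import random
import string


def misspell_word(word):
    """Apply one random character-level edit: deletion, insertion, or substitution."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word))
    op = random.choice(["delete", "insert", "substitute"])
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "insert":
        return word[:i] + random.choice(string.ascii_lowercase) + word[i:]
    return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]


def misspelling_perturbation(sentence, p=0.1):
    """Misspell each whitespace-separated word with probability p."""
    return " ".join(misspell_word(w) if random.random() < p else w
                    for w in sentence.split())


def case_changing_perturbation(sentence, p=0.5):
    """With probability p, upper-case, lower-case, or title-case the sentence."""
    if random.random() >= p:
        return sentence
    return random.choice([sentence.upper(), sentence.lower(), sentence.title()])
```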
Subword regularization methods (i.e., BPE-Dropout and SentencePiece) generate various segmentations for the same word, so the resulting NMT model better learns the meaning of less frequent subwords and should be more robust to noise that yields unusual subword combinations, such as misspelling. We use them only in the offline training data pre-processing steps, which requires no modification to the NMT model (we sample one subword segmentation for each source sequence with SentencePiece). 4 Experimental Results As shown in Table 3, there is no clear winner among the three subword segmentation models based on BLEU scores on the original WMT or KTJ test sets. This observation is different from the results reported by Kudo (2018) and Provilkov et al. (2019). One major difference from previous work is the size of the training data, which is much larger in our experiments; subword regularization is presumably more beneficial in low-resource settings. However, both of our proposed metrics (i.e., robustness and consistency) show clear trends in the models' robustness to input perturbations across all languages we tested: BPE-Dropout > SentencePiece > BPE. This suggests that although we did not observe a significant impact of subword regularization on generic translation quality, the robustness of the models is indeed improved drastically. Unfortunately, it is unclear whether subword regularization can help translate real-world noisy input, as shown in Table 4. MTNT and 4SQ contain several natural noise types, such as grammar errors and emojis, with misspelling as the dominant noise type for English and French. The training data we use may already cover common natural misspellings, perhaps contributing to the failure of the regularization methods to improve over BPE in this case. Robustness Versus Consistency Variation in output is not necessarily in itself a marker of reduced translation quality, but empirically, consistency and robustness nearly always produce the same model rankings in Table 3. We conduct a more comprehensive analysis of the correlation between them, and we collect additional data points by varying the noise level of both perturbations.
8541 Model BLEU ROBUST CONSIS BLEU ROBUST CONSIS EN→DE (newstest2019) DE→EN (newstest2019) BPE 39.70±0.71 – – 40.01±0.65 – – original BPE-Dropout 39.65±0.73 – – 40.16±0.66 – – SentencePiece 39.85±0.75 – – 40.25±0.67 – – BPE 29.38±0.60 74.01±0.95 60.59±0.80 33.48±0.61 83.69±0.96 71.51±0.74 + misspelling BPE-Dropout 33.13±0.70 83.55±0.92 70.74±0.77 35.97±0.64 89.58±0.78 78.33±0.64 SentencePiece 31.87±0.66 79.99±0.97 66.40±0.76 35.26±0.66 87.61±0.91 74.09±0.74 BPE 31.61±0.74 79.63±1.31 73.26±1.19 33.72±0.69 84.27±1.15 73.19±1.13 + case-changing BPE-Dropout 35.04±0.73 88.37±0.97 80.04±0.99 36.34±0.69 90.48±0.95 78.96±0.96 SentencePiece 33.49±0.73 84.05±1.09 76.24±1.09 34.48±0.71 85.65±1.10 74.55±1.10 EN→FR (newstest2014) FR→EN (newstest2014) BPE 41.47±0.48 – – 39.24±0.50 – – original BPE-Dropout 40.72±0.48 – – 39.22±0.50 – – SentencePiece 41.05±0.48 – – 39.14±0.50 – – BPE 34.01±0.45 82.01±0.66 71.59±0.53 32.62±0.48 83.13±0.63 73.05±0.49 + misspelling BPE-Dropout 35.98±0.46 88.36±0.59 78.49±0.48 34.71±0.48 88.51±0.60 79.27±0.50 SentencePiece 34.78±0.45 84.72±0.59 75.28±0.51 33.44±0.48 85.43±0.62 75.28±0.50 BPE 34.75±0.54 83.81±0.97 79.34±0.93 32.31±0.54 82.34±0.96 76.56±0.95 + case-changing BPE-Dropout 38.28±0.47 94.00±0.55 86.28±0.58 35.78±0.50 91.24±0.65 84.47±0.65 SentencePiece 36.49±0.50 88.87±0.74 82.73±0.76 33.51±0.54 85.61±0.84 78.18±0.88 EN→FI (newstest2019) FI→EN (newstest2019) BPE 20.43±0.55 – – 24.31±0.59 – – original BPE-Dropout 20.01±0.54 – – 24.51±0.57 – – SentencePiece 20.63±0.57 – – 24.67±0.60 – – BPE 15.20±0.46 74.42±1.39 52.76±0.89 21.27±0.54 87.47±1.14 70.06±0.89 + misspelling BPE-Dropout 17.39±0.50 86.95±1.43 63.63±0.86 22.40±0.55 91.38±1.06 75.18±0.83 SentencePiece 16.73±0.51 81.09±1.52 57.45±0.85 21.89±0.57 88.76±1.19 70.57±0.87 BPE 15.65±0.53 76.63±1.71 68.27±1.44 20.71±0.58 85.20±1.32 74.85±1.16 + case-changing BPE-Dropout 17.19±0.53 85.92±1.39 72.76±1.30 23.10±0.58 94.26±1.09 79.67±1.00 SentencePiece 15.72±0.54 76.19±1.72 67.73±1.40 21.50±0.58 87.16±1.26 76.29±1.12 EN→JA (KTJ) JA→EN (KTJ) BPE 24.28±0.53 – – 22.80±0.51 – – original BPE-Dropout 24.11±0.51 – – 22.21±0.52 – – SentencePiece 22.63±0.45 – – 22.99±0.50 – – BPE 19.82±0.47 81.66±1.09 54.84±0.73 18.20±0.45 79.83±1.20 52.34±0.74 + misspelling BPE-Dropout 22.01±0.49 91.30±0.95 63.21±0.78 18.89±0.47 85.06±1.17 56.43±0.78 SentencePiece 19.85±0.41 87.69±1.05 61.25±0.80 18.97±0.46 82.53±1.15 56.40±0.73 BPE 20.35±0.51 83.83±1.13 68.10±1.25 – – – + case-changing BPE-Dropout 21.44±0.49 88.91±1.00 72.96±1.13 – – – SentencePiece 19.99±0.44 88.32±1.06 73.52±1.10 – – – Table 3: BLEU, robustness (in percentage), and consistency scores of different subword segmentation methods on original and perturbed test sets. We report mean and standard deviation using bootstrap resampling (Koehn, 2004). Subword regularization makes NMT models more robust to input perturbations. MTNT (mtnt2019) 4SQ Model EN→JA JA→EN EN→FR FR→EN FR→EN BPE 10.75±0.49 9.68±0.59 34.15±0.93 45.84±0.89 30.96±0.85 baseline BPE-Dropout 10.76±0.47 9.26±0.64 33.39±0.95 45.84±0.90 31.28±0.84 SentencePiece 10.52±0.51 9.52±0.68 33.75±0.91 45.94±0.92 31.44±0.85 BPE 14.88±0.52 10.47±0.69 35.11±0.95 46.49±0.90 34.83±0.86 fine-tuning BPE-Dropout 15.26±0.53 11.13±0.68 34.80±0.93 46.88±0.88 34.72±0.84 SentencePiece 14.68±0.53 11.19±0.72 34.71±0.93 46.89±0.90 34.59±0.86 Table 4: BLEU scores of using different subword segmentation methods on two datasets with natural noise. 
Subword regularization methods do not achieve consistent improvements over BPE, with or without fine-tuning.
Figure 1: Robustness (in percentage) and consistency are highly correlated within each language pair. Correlation coefficients: EN<->DE r=0.97, EN<->FR r=0.98, EN<->FI r=0.92, EN<->JA r=0.91.
Specifically, we use the following word misspelling probabilities: {0.05, 0.1, 0.15, 0.2} and the following sentence case-changing probabilities: {0.3, 0.5, 0.7, 0.9}. As illustrated in Figure 1, consistency strongly correlates with robustness (sample Pearson's r = 0.91 to 0.98) within each language pair. This suggests that, for this class of models, low consistency signals a drop in translation quality, and the consistency score can be used as a robustness proxy when the reference translation is unavailable. Robustness Versus Noise Level In this paper, robustness is defined with respect to a fixed perturbation function and its noise level. We observe consistent model rankings across language pairs, but is this still true if we vary the noise level? To test this, we plot the robustness data points from the last section against the noise level. Focusing on the misspelling perturbation for EN→DE models, Figure 2 shows that varying the word misspelling probability does not change the ranking of the models, and the gap in the robustness measurement only increases with larger amounts of noise. This observation applies to all perturbations and language pairs we investigated.
Figure 2: Varying the synthetic word misspelling probability for EN→DE models does not change the model ranking w.r.t. robustness (in percentage).
5 Conclusion We proposed two additional measures for NMT robustness which can be applied when both original and noisy inputs are available. These measure robustness as the relative degradation in translation quality, as well as consistency, which quantifies variation in translation output irrespective of reference translations. We also tested two popular subword regularization techniques and their effect on overall performance and robustness. Our robustness metrics reveal a clear trend of subword regularization being much more robust to input perturbations than standard BPE. Furthermore, we identify a strong correlation between robustness and consistency in these models, indicating that consistency can be used to estimate robustness on data sets or domains lacking reference translations. Acknowledgements We thank the anonymous reviewers for their comments and suggestions. References Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Alexandre Berard, Ioan Calapodescu, Marc Dymetman, Claude Roux, Jean-Luc Meunier, and Vassilina Nikoulina. 2019. Machine translation of restaurant reviews: New corpus for domain adaptation and robustness. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 168–176, Hong Kong. Association for Computational Linguistics. Ondřej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18).
In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics. Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4324–4333, Florence, Italy. Association for Computational Linguistics. 8543 Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756– 1766, Melbourne, Australia. Association for Computational Linguistics. Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018. On adversarial examples for character-level neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 653–663, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. CoRR, abs/1712.05690. Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 42–47, Hong Kong, China. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388– 395, Barcelona, Spain. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the Tenth Machine Translation Summit, volume 5, pages 79–86. Citeseer. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics. Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 91–102, Florence, Italy. Association for Computational Linguistics. Pierre Lison and J¨org Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 923–929, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2019. Robust neural machine translation with joint textual and phonetic embedding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044–3049, Florence, Italy. Association for Computational Linguistics. Paul Michel, Xian Li, Graham Neubig, and Juan Pino. 
2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3103–3114, Minneapolis, Minnesota. Association for Computational Linguistics. Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 543– 553, Brussels, Belgium. Association for Computational Linguistics. Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2016. Practical black-box attacks against deep learning systems using adversarial examples. CoRR, abs/1602.02697. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2019. Bpe-dropout: Simple and effective subword regularization. CoRR, abs/1910.13267. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational 8544 Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas, volume 200. Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In Proceedings of the 14th International Workshop on Spoken Language Translation, pages 1715–1725, Tokyo, Japan. Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, and Graham Neubig. 2019. Improving robustness of machine translation with synthetic noise. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1916–1920, Minneapolis, Minnesota. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation for consistency training. CoRR, abs/1904.12848.
2020
755
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8545–8554 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8545 Parallel Corpus Filtering via Pre-trained Language Models Boliang Zhang, Ajay Nagesh, and Kevin Knight DiDi Labs {boliangzhang, ajaynagesh, kevinknight}@didiglobal.com Abstract Web-crawled data provides a good source of parallel corpora for training machine translation models. It is automatically obtained, but extremely noisy, and recent work shows that neural machine translation systems are more sensitive to noise than traditional statistical machine translation methods. In this paper, we propose a novel approach to filter out noisy sentence pairs from web-crawled corpora via pre-trained language models. We measure sentence parallelism by leveraging the multilingual capability of BERT and use the Generative Pre-training (GPT) language model as a domain filter to balance data domains. We evaluate the proposed method on the WMT 2018 Parallel Corpus Filtering shared task, and on our own web-crawled Japanese-Chinese parallel corpus. Our method significantly outperforms baselines and achieves a new stateof-the-art. In an unsupervised setting, our method achieves comparable performance to the top-1 supervised method. We also evaluate on a web-crawled Japanese-Chinese parallel corpus that we make publicly available. 1 Introduction Training modern neural machine translation (NMT) systems requires large parallel-text resources. Publicly-available parallel corpora are mostly paired with English, such as German-English, French-English, Chinese-English, etc., and their domains are limited. For building machine translation systems between non-English language pairs, such as Chinese and Japanese, existing parallel corpora are insufficient and often low quality. To address this problem, system builders have trained NMT systems on web-crawled data and achieved promising results (Xu and Koehn, 2017; JunczysDowmunt, 2018; Schwenk, 2018; Schwenk et al., 2019). However, data automatically crawled from the web is extremely noisy. Khayrallah and Koehn (2018) and Belinkov and Bisk (2018) show that neural translation models are far more sensitive to noisy parallel training data than statistical machine translation. Data selection methods that can filter noisy parallel sentences from large-scale web crawled resources are in demand. In this paper, we study the problem in a realworld scenario where we crawl a large JapaneseChinese parallel corpus from various websites and build open-domain machine translation systems between Japanese and Chinese, by filtering the web crawled parallel corpus. In addition, a small amount of clean parallel data is available, in the software domain. In order to confirm our results on a public data, we also apply our filter to the WMT 2018 German-English Parallel Corpus Filtering shared task. Previous work on parallel corpus filtering performs poorly in our scenario as it either requires large clean parallel corpora or dictionaries (Xu and Koehn, 2017; Artetxe and Schwenk, 2019; Junczys-Dowmunt, 2018; Chaudhary et al., 2019), or relies on multilingual word embeddings and neglects context when measuring translation parallelism (Hangya and Fraser, 2018). In this paper, we propose a simple but effective parallel corpus filtering method. Multilingual BERT (Devlin et al., 2019) projects multilingual sentences into a shared space and has shown a great potential for cross-lingual model transfer (Pires et al., 2019). 
We use pre-trained multilingual BERT as prior knowledge and fine-tune it on a synthetic dataset. This multilingual BERT-based classifier forms an acceptability filter that determines whether or not a sentence pair consists of a bona-fide translation. As the domain of training data largely affects machine translation model performance, we also introduce a domain filter. It uses the pre-trained Generative Pre-training (GPT) as in-domain language 8546 model and is an extension of the existing crossentropy difference based domain filter (Moore and Lewis, 2010; Junczys-Dowmunt, 2018). We evaluate our proposed method on the WMT 2018 German-English Parallel Corpus Filtering shared task and achieve a new state-of-the-art. Our unsupervised method achieves comparable performance to the top system that is trained on millions of clean parallel sentence pairs. Our proposed methods also significantly outperform baselines in our own Japanese-Chinese parallel corpus filtering task. We make the following contributions: • We propose a novel approach to filter noisy parallel corpora by using pre-trained language models. Our approach outperforms strong baselines and achieves a new state-of-the-art. • We devise an unsupervised filtering approach that does not require an identifiable clean subset of parallel segments. Our unsupervised method matches the results of previous supervised methods. • We release a large web-crawled JapaneseChinese parallel corpus which can be a useful resource for machine translation research on non-English language pairs.1 2 Related Work Several recent works address parallel corpus filtering. Denkowski et al. (2012), Dyer et al. (2010) and Heafield (2011) use language models and word alignments to determine how likely sentences are to be a good translation of another. Xu and Koehn (2017) introduce a noise filtering tool, Zipporah, that discriminates parallel and non-parallel sentences based on word-frequency vectors and a dictionary. Junczys-Dowmunt (2018) proposes a dual conditional cross-entropy filtering method, which achieved first place in the WMT 2018 GermanEnglish Parallel Corpus Filtering shared task. They train two translation models in inverse directions on millions of parallel sentences and score sentence pairs based on the word-normalized conditional cross-entropy from the translation models. Artetxe and Schwenk (2019) and Schwenk (2018) propose a margin-based scoring method that compares the 1http://iwslt.org/doku.php?id=open_ domain_translation similarity of the source and target sentence representations. The sentence representations are produced by a sentence encoder trained on clean parallel data via a neural encoder-decoder architecture. Other works based on sentence embeddings include Hangya and Fraser (2018) and Littell et al. (2018), as well as Schwenk et al. (2019), which mines millions of parallel sentences in 1620 language pairs from Wikipedia. These encoder-decoder based methods require large amounts of clean parallel training data and are not applicable in our scenario where available data is noisy. Ondrej Bojar (2020) organize an open domain translation challenge where participants are provided a large, noisy set of Japanese-Chinese segment pairs built from web data, and the task is to clean the noisy data and build an end-to-end machine translation system. Work on data selection is also related. Moore and Lewis (2010); Junczys-Dowmunt (2018) select domain-related data by computing the crossentropy difference between in-domain and outdomain language models. 
Duh et al. (2013) use neural language models for data selection. Axelrod et al. (2011) and Axelrod et al. (2015) expand cross-entropy difference filtering to both sides of the parallel corpus. Since we aim to build a general machine translation system, instead of selecting data that are relevant to a specific domain, we select data whose domains are as general as possible, by using Generative Pre-training (GPT) models trained on large and diverse corpora. 3 Method In this section we introduce a language detection filter, a translation-acceptability filter, and a domain filter. Each filter produces a score for every candidate source/target sentence pair. The partial score produced by each filter ranges from 0 to 1. Values beyond this range are normalized by minmax normalization: ˆy = (y −min)/(max −min). The final score is the product of the partial scores. 3.1 Language Detection Filter Targeting a web-crawler at a given language pair still results in many pages written in the wrong language. For example, while a URL pair may clearly indicate translation (e.g., “.jp” and “.zh”), it may happen that the text content is simply copied rather than translated. We observe this in both our Japanese-Chinese data and the German-English Paracrawl data set. It is necessary to filter out sen8547 tence pairs with undesired languages. We adopt the fastText (Joulin et al., 2017, 2016) language identification toolkit in our language detection filter. For each sentence, the toolkit produces a list of language candidates and their corresponding confidence scores. We select the language that has the highest confidence score from fastText as the language of the sentence. Sentence pairs that have both of the elements detected as the desired language are assigned score 1 and otherwise 0. By discarding sentence pairs with undesired language IDs, we filter out 27% of our ChineseJapanese parallel sentences and nearly 70% of the German-English parallel sentences from Paracrawl data set. 3.2 Acceptability Filter In this section, we introduce our translation acceptability filter, one of the main contributions in the paper. It aims to measure the parallelism of sentence pairs and filter out sentence pairs that are not mutual translations. The pre-trained language model BERT (Devlin et al., 2019) has been shown to be effective in many NLP tasks as it produces better and meaningful contextualized word representations. Multilingual BERT, a transformer Masked Language Model pre-trained on Wikipedia dumps of 104 languages, shows remarkable multilingual capability, given that it is not exposed to any multilingual signals, such as parallel data or dictionaries. A thorough study by Pires et al. (2019) shows the promising zero-shot cross-lingual model transfer ability of multilingual BERT on named entity recognition and part-of-speech tagging tasks. They hypothesize that having language-universal word pieces, such as numbers and URLs, mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close in the shared space. We use pre-trained multilingual BERT to encode a sentence pair (s, t) and create the sentence embeddings vs and vt by using the representations of the [CLS] token of s and t. We find that the cosine similarity between vs and vt does not necessarily reflect the parallelism of sentence s and t. 
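A minimal sketch of this [CLS]-embedding cosine-similarity check, using HuggingFace Transformers, is shown below; it is our own illustration, and the checkpoint name and helper functions are assumptions rather than the authors' code.

```python
# Minimal sketch of the [CLS]-embedding cosine-similarity check discussed above,
# using HuggingFace Transformers (our illustration, not the authors' code).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()


def cls_embedding(sentence):
    """Return the final-layer [CLS] vector for one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)


def cls_cosine(src, tgt):
    """Cosine similarity between the [CLS] embeddings of a sentence pair."""
    v_s, v_t = cls_embedding(src), cls_embedding(tgt)
    return torch.nn.functional.cosine_similarity(v_s, v_t, dim=0).item()
```

As the paper observes, this raw similarity alone is a weak parallelism signal, which motivates the fine-tuned classifier described next.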
We suspect that the word representations from multilingual BERT are loosely aligned across languages as there is no parallel data or dictionary used during the pre-training. A similar observation was made in Lample et al. (2018), where the cross-lingual word embeddings learned in an unsupervised manner are loosely aligned. However, after fine-tuning on a few anchor pairs (word translations), they become more aligned. Similarly, we use an unsupervised synthetic training set as anchors to fine-tune multilingual BERT with a binary classification objective. Xu and Koehn (2017) did similar work to train a filtering classifier on synthetic data, but via bag-ofwords translation features. Synthetic Training Set. In cases where a small number of clean parallel sentence pairs are available, we use them as positive training samples for our classifier. In Japanese-Chinese filtering, we use around 300k sentence pairs, mostly from open-source software documentation,2 as our positive samples. In extreme cases where no identifiable, clean parallel data is available, we sub-select high quality parallel sentences, which are used as positive samples, from the noisy parallel corpus based on the Hunalign (Varga et al., 2007) sentencealignment score. We sample negative instances by simulating the noise produced by web crawling and alignment. Given a positive pair (s, t), we create a negative sample by randomly choosing one of the following options: • Randomly select a target sentence from its adjacent sentences within a window size of k (where k = 2 in our experiments). • Randomly truncate 30%-70% of the source or target sentence. • Swap the order of 30%-70% words of the source or target sentence. To balance the training set, we create the same number of positive instances and sampled negative instances. Binary Classification Objective. We feed the sentence pair (s, t) into multilingual BERT, which accepts two-sentence input due to its next-sentence prediction objective (Devlin et al., 2019). Instead of using the [CLS] token representation, we use a Convolutional Network (CNN) layer that takes the BERT output and generates the final representation of the pair. Our experiments show that using CNN layer pooling achieves marginal gains over [CLS] pooling. The final layer is a feed-forward network 2GNOME, Ubuntu, OpenOffice, and KDE data set, from http://opus.nlpl.eu/ 8548 with a softmax activation function to produce label probabilities. We use the softmax probability as the degree of parallelism. 3.3 Domain Filter Web-crawled data contains noise of various types, due to the complicated structure of web pages. By inspecting the training data generated by the above methods, we notice much of the content is not well-formed, e.g., concatenated lists of months and dates, randomly mixed content from tables, series of emojis and punctuation marks, etc. These are certainly written in the desired language, thus not filtered out by language detection. The translation acceptability filter also accepts them. However, such malformatted data is not helpful to machine translation models, and we prefer a training corpus to contain meaningful content. For our domain filter, we adopt the cross-entropy difference scoring method proposed by Moore and Lewis (2010) and Junczys-Dowmunt (2018). More specifically, we treat a general domain monolingual corpus as our in-domain data set I, and the noisy parallel corpus without any filtering as our nondomain data set N. 
We train two language models, L_I and L_N, and measure how domain-related the target sentence t is to I and how unrelated it is to N via a perplexity ratio, which is a transformation of the cross-entropy difference: \hat{f}_dom(s, t) = PPL_N(t) / PPL_I(t), where PPL_M(x) is the word-normalized perplexity of the sentence x defined by the language model L_M: PPL_M(x) = exp(−(1/|x|) Σ_{i=1}^{|x|} log P_M(x_i | x_{<i})). The intuition is fairly straightforward: the higher the perplexity of the sentence under the non-domain corpus and the lower its perplexity under the in-domain corpus, the more likely the sentence is meaningful. Our contribution is to use GPT (Radford et al., 2019) as our in-domain language model, instead of news-domain text (Junczys-Dowmunt, 2018). This minor yet crucial change yields non-trivial performance gains in our experiments on German-English parallel corpus filtering. As GPT is trained on data from various sources, such as Wikipedia, Reddit, and news websites, it covers a wide range of domains, so our filtered data is more diverse and performs better on multi-domain test sets, as well as in real-world applications. For our in-domain language model, we use a pre-trained Chinese GPT (https://github.com/dbiir/UER-py) for Japanese-Chinese and pre-trained GPT-2 (https://github.com/huggingface/transformers) for German-English. We randomly sample 4 million sentences from the unfiltered noisy parallel corpus and use KenLM (Heafield, 2011) to train the non-domain language model. Perplexity scores from the two different language models are compatible. Following Junczys-Dowmunt (2018), we introduce two operations, clip and cutoff, to post-process the domain filter score \hat{f}_dom(s, t). The clip operation clips the maximum value of the domain score to a threshold τ_clip: f_clip(x, τ_clip) = min(x, τ_clip), and the cutoff operation modifies scores below a threshold τ_cutoff and changes them to 0: f_cutoff(x, τ_cutoff) = x if x > τ_cutoff, and 0 otherwise. τ_clip prevents a high monolingual in-domain score from overwriting scores from other filters. τ_cutoff eliminates out-of-domain sentence pairs and ensures that highly parallel sentence pairs are at least somewhat in-domain. We tune τ_clip and τ_cutoff on the development set. The scoring method of our final domain filter becomes: f_dom(s, t) = f_clip(f_cutoff(\hat{f}_dom(s, t), τ_cutoff), τ_clip). 4 Experiments and Results 4.1 WMT 2018 Parallel Corpus Filtering We use the WMT 2018 Parallel Corpus Filtering shared task (Koehn et al., 2018) as a benchmark to evaluate our methods. Participants in the shared task are provided a very noisy 1-billion-word (English token count) German-English corpus crawled from the web by the Paracrawl project (https://paracrawl.eu). The task is to sub-select clean sentence pairs amounting to (a) 10 million words and (b) 100 million words, counted on the English side. The quality of the resulting subsets is determined by training a neural machine translation system (Marian)6 (Junczys-Dowmunt et al., 2018) on this data. The quality of the machine translation system is measured by BLEU score on six test sets from various domains. As the task is to address the challenge of data quality and not the domain-relatedness of the data for a particular use, sub-sampling the corpus for relevance to the news domain is not encouraged by the shared task organizers. All parameters used for training the Marian machine translation models are the same as described in Koehn et al. (2018). We use τ_clip = 5 and τ_cutoff = 1.5 in the experiments. We use 4 GPUs for training.
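Before turning to the Japanese-Chinese setting, the sketch below recaps how the clip and cutoff post-processing and the final combination of partial scores from Section 3 could be computed; it is our own rendering of the formulas above, with placeholder perplexity functions standing in for the GPT and KenLM language models.

```python
# Our rendering of the domain-filter post-processing and the final score
# combination from Section 3 (not the released implementation); ppl_in_domain
# and ppl_non_domain are placeholders for the GPT and KenLM perplexity models.
def f_clip(x, tau_clip):
    """Clip the domain score at tau_clip."""
    return min(x, tau_clip)


def f_cutoff(x, tau_cutoff):
    """Zero out domain scores at or below tau_cutoff."""
    return x if x > tau_cutoff else 0.0


def domain_score(tgt, ppl_in_domain, ppl_non_domain, tau_clip=5.0, tau_cutoff=1.5):
    """f_dom(s, t) = f_clip(f_cutoff(PPL_N(t) / PPL_I(t)))."""
    raw = ppl_non_domain(tgt) / ppl_in_domain(tgt)
    return f_clip(f_cutoff(raw, tau_cutoff), tau_clip)


def min_max_normalize(scores):
    """Map a list of partial scores onto [0, 1], as described in Section 3."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]


def final_score(language_score, acceptability_score, normalized_domain_score):
    """The final score is the product of the partial scores, each in [0, 1]."""
    return language_score * acceptability_score * normalized_domain_score
```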
4.2 Web-Crawled Japanese-Chinese Parallel Corpus Filtering Due to the lack of publicly available JapaneseChinese parallel corpus, we build a data harvesting pipeline to fetch Japanese-Chinese parallel text from the Internet. The crawled bi-text are extremely noisy, but we rely on the proposed parallel corpus filtering method to clean up the data and eventually train a satisfactory machine translation system. In this paper, we use these crawled data as another test bed to evaluate our proposed method. A single run of the of the data harvesting pipeline is the following. We first identify Japanese-Chinese parallel webpages by programmatically analyzing the URL structure of the 5 billion URLs from CommonCrawl,7 for example, https://www.gotokyo.org/jp/ and https: //www.gotokyo.org/cn/ only differ by jp and cn. Then we download the webpages and conduct a series of cascaded data cleaning methods, including removing HTML markups, sentence segmentation, etc. Finally we perform segment alignment and filtering. Our workflow consists of several runs of the data harvesting pipeline with entry points at different modules (for instance, a more targeted crawling of higher quality material from a previous run). We also integrate existing Japanese-Chinese parallel datasets from other publicly available sources for a final parallel data size of 527m characters in 20.9M parallel segments. We include all details of our data harvesting 6https://github.com/marian-nmt/marian (We do not evaluate our method using Moses, the statistical machine translation system provided by WMT, as neural machine translation better fits our real world scenario.) 7https://commoncrawl.org/ pipeline, as well as the statistics of the obtained dataset, in Appendix A. Test and Development Dataset. We curate two parallel test sets by manually processing web data involving daily expressions (337 parallel segments) and news (437 parallel segments). For our development set, we use 5304 Japanese-Chinese basic expressions. 4.3 Results and Analysis WMT 2018 Parallel Corpus Filtering. Table 1 presents the BLEU scores of neural machine translation systems trained on 10 million and 100 million words of training data, selected by different filtering methods. In the table, we list the top three performers from the shared task, as well as another two work that are similar to ours. JunczysDowmunt (2018) has a dual conditional crossentropy adequacy filter and a domain filter trained on news corpora. Hangya and Fraser (2018) generate sentence embeddings by using unsupervised word embedding alignment and measure parallelism via multilingual sentence embedding similarity. Chaudhary et al. (2019) leverage massive publicly available English-German parallel corpora to train multilingual sentence embeddings via bidirectional Long Short Term Memory (LSTM) encoderdecoder network. We replicate the adequacy and domain-news filters from Junczys-Dowmunt (2018) and obtain similar results. By replacing the domain-news filter with our domain-GPT filter, we achieve new stateof-the-art scores on 10M and 100M word data sets (bold scores in the table). Given the very compact score range in the shared task (Koehn et al., 2018), we consider this gain very successful. It is stated in the shared task that the test sets are from multiple domains. Domain-news filter in Junczys-Dowmunt (2018) tends to select sentence pairs from news domain as the filter is trained on news domain data, and this leads to a biased parallel corpus for training machine translation system. 
Our proposed domainGPT filter is trained from various sources and thus covers a wide range of domains, so our filtered data is more diverse and performs better on multidomain test sets. For our supervised acceptability filter, we train a mulitlingual BERT classifier on clean parallel sentences as positive examples and randomly sampling negative instances, using the method described in Section 3.2. For our unsupervised acceptabil8550 Method Supervised Unsupervised 10M 100M Junczys-Dowmunt (2018) top-1 x 28.62 32.05 Lu et al. (2018) top-2 x 27.60 31.93 Lo et al. (2018) top-3 x 27.41 31.88 Hangya and Fraser (2018) x 22.96 30.54 Chaudhary et al. (2019) x 26.98 30.77 adequacy (our replication of J-D 2018) x 27.12 31.20 + domain-news (our replication of J-D 2018) x 28.66 32.01 + domain-GPT x †29.09 †32.11 supervised acceptability x 27.09 31.56 + domain-GPT x 28.94 32.03 unsupervised acceptability x 27.03 30.65 + domain-GPT x ‡28.68 ‡32.02 - all methods above apply language detection filter beforehand. † our new state-of-the-art combines adequacy (Junczys-Dowmunt, 2018) + our proposed domain-GPT. ‡ our unsupervised acceptability + domain-GPT is comparable to top supervised method. Table 1: BLEU scores of German-English neural MT systems trained on 10 million and 100 million word training data selected by different methods. The scores are averaged BLEU scores across the six test sets from WMT 2018 parallel corpus filtering task. domain-news trains an in-domain language model on news corpus, while domainGPT uses the pre-trained GPT language model. Methods JA-ZH %∗ ZH-JA %∗ unfiltered 22.92 100 22.27 100 Chaudhary et al. (2019) 23.46 75 26.22 70 adequacy (our replication of J-D 2018) 23.91 90 24.51 90 + domain-GPT 24.00 65 acceptability 25.53 75 28.54 50 + domain-GPT 25.49 50 - all methods above apply language detection filter beforehand. * percentage of raw parallel sentences used for MT training. Table 2: BLEU scores of Japanese-Chinese and Chinese-Japanese MT systems trained on data sets generated by various filtering methods. We rank sentence pairs by filtering scores and train an MT system on N percent of the top ranked data. N is selected based on the development set and we report the best BLEU score. domain-GPT is the domain filter whose in-domain language model is the pre-trained GPT language model; note that for ZH-JA, we do not have access to pre-trained Japanese GPT. ity filter, we rank noisy parallel sentences by (a) the alignment score from Hunalign, and (b) the GPT domain filter score. We then select the top 10M words (counted on English side) worth of sentence pairs as positive examples. This makes the method completely unsupervised, not requiring any identifiable clean parallel data. With finetuning multilingual BERT on sentences pairs aligned by Hunalign, the unsupervised acceptability already achieves comparable performance to Chaudhary et al. (2019) which use massive public parallel data. After applying the unsupervised domain-GPT filter, we achieve a surprisingly good result (underlined scores in the table), comparable to the best supervised method. Japanese-Chinese Parallel Corpus Filtering. In Table 2, we evaluate machine translation systems trained on data generated by different filtering methods. Unfiltered refers to data generated by Hunalign without any filtering. Chaudhary et al. (2019) refer to LASER, the top performing filtering system in WMT 2019 Parallel Corpus Filtering shared task. We use the pretrained 93-language LASER model to generate sentence pair scores. 
The model is trained on a large parallel corpus that contains 3.2M EnglishJapanese and 8.2M English-Chinese sentence pairs (English is used as pivot to connect Japanese and 8551 Chinese during their training). Adequacy refers to the dual conditional cross-entropy filtering method that we replicate from Junczys-Dowmunt (2018). It is trained on around 300k high quality softwaredomain parallel sentences from Microsoft Developer Network (MSDN) and Ubuntu. The GPT domain filter uses a pre-trained Chinese GPT8 as the in-domain language model and trains a four-gram KenLM (Heafield, 2011) language model on the Chinese side of our 4 million unfiltered noisy parallel sentences as a non-domain language model. Acceptability is our proposed multilingual BERT based filtering method, which is trained on a synthetic dataset, where we use 300k high-quality software domain parallel sentences as positive examples and sample equal-sized negative sentence pairs, using the sampling methods described in Section 3.2. Chaudhary et al. (2019) train a multilingual sentence encoder on various EnglishForeign Language parallel corpus and prove the zero-shot cross-lingual transfer capability between non-English pairs, such as Japanese and Chinese. However, when English is used as the pivot, the distance between Japanese and Chinese become larger, resulting in not effectively capturing the correlation between them. The conditional cross-entropy metric in adequacy relies on the quality of machine translation system. Due to the difficulty of training high-quality machine translation systems on 300k sentence pairs, the adequacy filter cannot produce accurate conditional cross-entropy. The GPT domain filter assigns higher score to sentences that are more like human natural language and downgrades malformatted sentence pairs. It is effective in the German-English filtering task, where a fixed-size subset is selected and we want to fill the subset with as much domain relevant data as possible. However, to best fit the real world scenario where the goal is to have the best machine translation system, we do not limit the amount of data to select for training machine translation system and let the system decide the amount of the data to select, according to each filtering method. We rank sentence pairs by their filtering scores and train a MT system on N percentage of the top ranked data. N is selected based on the development set and we report the best BLEU score. Under this setting, adding a domain filter makes the model use less data (N = 50% 8pre-trained Mixedlarge corpus + GptEncoder + LmTarget Model in https://github.com/dbiir/UER-py Filtering Probability Threshold Quality of Pairs (P/R) 0.2 0.4 0.6 0.8 1 0.2 0.4 0.6 0.8 Precision Recall Figure 1: Precision and recall curves of the acceptability filter on our internal JA-ZH filtering test set. The threshold is based on the classifier probability produced by the softmax layer. When threshold set to 0.9, we obtain 97.7% precision parallel sentence pairs at 66.9% recall. vs N = 75%), but we do not observe any performance gain, as we suspect that the malformatted but parallel sentence pairs are neither harmful or helpful to the model, and filtering them out makes no difference in performance of the model. High Precision Parallel Corpus Filtering. For analysis purposes, we manually annotate a small set of 320 sentence pairs randomly selected from our original web crawled Japanese-Chinese data set. 
24% of the sentence pairs are labeled “not mutual translations.” As stated in Khayrallah and Koehn (2018), neural machine translation models are more sensitive to noise than statistical machine translation models, so having high precision filtering results as training data is necessary. In Figure 1, we show precision and recall curves for our proposed filtering method on this labeled test set, under different threshold settings. The threshold is selected based on the filtering classifier probability produced by the softmax layer. By setting the threshold to 0.9, we are able to obtain 97.7% precision high-quality parallel sentences, while still having 66.9% recall. 5 Conclusions In this paper, we address the parallel corpus filtering problem in machine translation. We propose a novel filtering method using pre-trained language models. Our method outperforms strong baselines and achieves a new state-of-the-art. We release a large Japanese-Chinese web crawled parallel corpus for the research purposes. Because it is artificial to use synthetic data for training a filter classifier, future work can focus on a better objective that 8552 models parallelism more smoothly. Future work also includes extending the method to low-resource languages not covered by multilingual BERT. Acknowledgments We would like to thank the anonymous reviewers for their constructive feedback. References Mikel Artetxe and Holger Schwenk. 2019. Marginbased parallel corpus mining with multilingual sentence embeddings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Amittai Axelrod, Yogarshi Vyas, Marianna Martindale, and Marine Carpuat. 2015. Class-based n-gram language difference models for data selection. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT). Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In Proceedings of the Sixth International Conference on Learning Representations (ICLR). Vishrav Chaudhary, Yuqing Tang, Francisco Guzm´an, Holger Schwenk, and Philipp Koehn. 2019. Lowresource corpus filtering using multilingual sentence embeddings. In Proceedings of the Fourth Conference on Machine Translation. Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi. 2015. Integrated parallel sentence and fragment extraction from comparable corpora: A case study on Chinese–Japanese Wikipedia. ACM Transactions on Asian and Low-Resource Language Information Processing. Raj Dabre and Sadao Kurohashi. 2017. MMCR4NLP: multilingual multiway corpora repository for natural language processing. CoRR, abs/1710.01025. Michael Denkowski, Greg Hanneman, and Alon Lavie. 2012. The CMU-Avenue French-English translation system. In Proceedings of the Seventh Workshop on Statistical Machine Translation. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Kevin Duh, Graham Neubig, Katsuhito Sudoh, and Hajime Tsukada. 2013. Adaptation data selection using neural language models: Experiments in machine translation. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Ganitkevitch, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), System Demonstrations. Viktor Hangya and Alexander Fraser. 2018. An unsupervised system for parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL). Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), System Demonstrations. Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the Workshop on Neural Machine Translation and Generation. Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L Forcada. 2018. Findings of the WMT 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations (ICLR). 8553 Pierre Lison and J¨org Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC). Patrick Littell, Samuel Larkin, Darlene Stewart, Michel Simard, Cyril Goutte, and Chi-kiu Lo. 2018. Measuring sentence parallelism using Mahalanobis distances: The NRC unsupervised submissions to the WMT18 parallel corpus filtering shared task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Chi-kiu Lo, Michel Simard, Darlene Stewart, Samuel Larkin, Cyril Goutte, and Patrick Littell. 2018. Accurate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The NRC supervised submissions to the parallel corpus filtering task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Jun Lu, Xiaoyu Lv, Yangbin Shi, and Boxing Chen. 2018. Alibaba submission to the WMT18 parallel corpus filtering task. 
In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Christian Federmann Jiatao Gu Fei Huang Ajay Nagesh Jan Niehues Elizabeth Salesky Sebastian St uker Marco Turchi Ondrej Bojar, Marcello Federico. 2020. Findings of the iwslt 2020 evaluation campaign. In Proceedings of the 2020 International Conference on Spoken Language Translation (IWSLT). Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Holger Schwenk. 2018. Filtering and mining parallel data in a joint multilingual space. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm´an. 2019. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from Wikipedia. arXiv preprint arXiv:1907.05791. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC). D´aniel Varga, P´eter Hal´acsy, Andr´as Kornai, Viktor Nagy, L´aszl´o N´emeth, and Viktor Tr´on. 2007. Parallel corpora for medium density languages. Amsterdam Studies in the Theory and History of Linguistic Science. Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy webcrawled parallel corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 8554 A Web-Crawled Parallel Data for Japanese-Chinese Figure 2: Our Japanese-Chinese parallel data harvesting pipeline. It consists of several modules, each of them numbered. The inputs to and outputs from each module are depicted in orange. The example entry points to the data pipeline are shown at the bottom of the diagram. Source # Segment-pairs # Characters (zh side) Reference Web-crawled (pipeline) 18,966,595 493,902,539 Linux documentation 92,250 1,549,964 Tiedemann (2012) Open Subtitiles 914,355 10,932,722 Lison and Tiedemann (2016) TED 376,441 5,345,867 Dabre and Kurohashi (2017) Global Voices 16,848 337,194 Tiedemann (2012) Wikipedia 228,565 5,067,489 Chu et al. (2015) Wiktionary 62,557 222,562 wiktionary.org News Commentary 570 65,038 Tiedemann (2012) Tatoeba 4,243 50,846 tatoeba.org Facebook 267,409 9,950,657 Schwenk et al. (2019) Total 20,929,833 527,424,878 Table 3: Japanese-Chinese parallel data assembled for our experiments. This appendix describes our pipeline to extract parallel Japanese-Chinese parallel sentence fragments from the Internet (Figure 2). We start with 5 billion URLs from CommonCrawl.9 We identify JapaneseChinese parallel webpages by looking at URL structure (step 2). For example, https://www.gotokyo. org/jp/ and https://www.gotokyo.org/cn/ only differ by jp and cn. We download these potentially parallel page pairs (step 3), remove HTML and other markup metadata (step 4),10 and split into sentence segments. We use off-the-shelf Hunalign11 for segment alignment (step 5). We filter segment pairs by rough language ID and length ratio (step 6). 
We obtain 227k URL pairs, 1.4m segment pairs, and 28.7m characters of parallel data (measured on the Chinese side). From the 227k URL pairs above, we trace which site pairs yielded the most parallel data. We then run a deep-crawling module on each of the 6000 most-promising sites,12 and we process the resulting URLs using the rest of the pipeline. Concatenating parallel data from all runs (step 7) and running a simple post-processing filter to remove objectionable content in the text gathered, we obtain around 494m characters of parallel data (measured on the Chinese side). We also integrate existing Japanese-Chinese parallel datasets from other publicly available sources for a final parallel data size 527m characters in 20.9m parallel segments. Table 3 describes the various components of this dataset. 9https://commoncrawl.org/ 10Using Python module BeautifulSoup 11http://mokk.bme.hu/en/resources/hunalign/ 12Using the Python-based scrapy tool
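The URL-structure matching in step 2 can be sketched as follows; the marker list is illustrative, and a real run would need broader patterns and URL normalization.

```python
# Language markers that distinguish the Japanese and Chinese versions of a page.
# The list is illustrative; the real pipeline uses a richer set of URL patterns.
JA_ZH_MARKERS = [("/jp/", "/cn/"), ("/ja/", "/zh/"), ("lang=ja", "lang=zh")]

def find_parallel_url_pairs(urls):
    """Return (japanese_url, chinese_url) pairs that differ only by a language marker."""
    url_set = set(urls)
    pairs = []
    for url in urls:
        for ja_tag, zh_tag in JA_ZH_MARKERS:
            if ja_tag in url:
                candidate = url.replace(ja_tag, zh_tag)
                if candidate in url_set:
                    pairs.append((url, candidate))
    return pairs

# The gotokyo.org example from step 2 would be recovered like this:
print(find_parallel_url_pairs([
    "https://www.gotokyo.org/jp/",
    "https://www.gotokyo.org/cn/",
]))
```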
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8555–8562 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8555 Regularized Context Gates on Transformer for Machine Translation Xintong Li1, Lemao Liu2, Rui Wang3, Guoping Huang2, Max Meng1 1The Chinese University of Hong Kong 2Tencent AI Lab 3National Institute of Information and Communications Technology [email protected] [email protected] [email protected] [email protected] [email protected] Abstract Context gates are effective to control the contributions from the source and target contexts in the recurrent neural network (RNN) based neural machine translation (NMT). However, it is challenging to extend them into the advanced Transformer architecture, which is more complicated than RNN. This paper first provides a method to identify source and target contexts and then introduce a gate mechanism to control the source and target contributions in Transformer. In addition, to further reduce the bias problem in the gate mechanism, this paper proposes a regularization method to guide the learning of the gates with supervision automatically generated using pointwise mutual information. Extensive experiments on 4 translation datasets demonstrate that the proposed model obtains an averaged gain of 1.0 BLEU score over a strong Transformer baseline. 1 Introduction An essence to modeling translation is how to learn an effective context from a sentence pair. Statistical machine translation (SMT) models the source context from the source-side of a translation model and models the target context from a target-side language model (Koehn et al., 2003; Koehn, 2009; Chiang, 2005). These two models are trained independently. On the contrary, neural machine translation (NMT) advocates a unified manner to jointly learn source and target context using an encoderdecoder framework with an attention mechanism, leading to substantial gains over SMT in translation quality (Sutskever et al., 2014; Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017). Prior work on attention mechanism (Luong et al., 2015; Liu et al., 2016; Mi et al., 2016; Chen et al., 2018; Li et al., 2018; Elbayad et al., 2018; Yang et al., 2020) have shown a better context representation is helpful to translation performance. wǒ jīng cháng hé wǒ dè tóng háng mén yì qǐ tī qíu 。 我 经常 和 我的 同行 们 一起踢球。 h Attention si golf play often I with my colleagues . ti + zi + 1 −zi I often play golf with my colleagues . I often play soccer with my colleagues . Transformer: Context Gates: Regularized Context Gates: Figure 1: A running example to raise the context control problem. Both original and context gated Transformer obtain an unfaithful translation by wrongly translate “t¯i q´ıu” into “play golf” because referring too much target context. By regularizing the context gates, the purposed method corrects the translation of “t¯i q´ıu” into “play soccer”. The light font denotes the target words to be translated in the future. For original Transformer, the source and target context are added directly without any rebalancing. However, a standard NMT system is incapable of effectively controlling the contributions from source and target contexts (He et al., 2018) to deliver highly adequate translations as shown in Figure 1. As a result, Tu et al. 
(2017) carefully designed context gates to dynamically control the influence from source and target contexts and observed significant improvements in the recurrent neural network (RNN) based NMT. Although Transformer (Vaswani et al., 2017) delivers significant gains over RNN for translation, there are still one third translation errors related to context control problem as described in Section 3.3. Obviously, it is feasible to extend the context gates in RNN based NMT into Transformer, but an obstacle to accomplishing this goal is the complicated archi8556 tecture in Transformer, where the source and target words are tightly coupled. Thus, it is challenging to put context gates into practice in Transformer. In this paper, under the Transformer architecture, we firstly provide a way to define the source and target contexts and then obtain our model by combining both source and target contexts with context gates, which actually induces a probabilistic model indicating whether the next generated word is contributed from the source or target sentence (Li et al., 2019). In our preliminary experiments, this model only achieves modest gains over Transformer because the context selection error reduction is very limited as described in Section 3.3. To further address this issue, we propose a probabilistic model whose loss function is derived from external supervision as regularization for the context gates. This probabilistic model is jointly trained with the context gates in NMT. As it is too costly to annotate this supervision for a large-scale training corpus manually, we instead propose a simple yet effective method to automatically generate supervision using pointwise mutual information, inspired by word collocation (Bouma, 2009). In this way, the resulting NMT model is capable of controlling the contributions from source and target contexts effectively. We conduct extensive experiments on 4 benchmark datasets, and experimental results demonstrate that the proposed gated model obtains an averaged improvement of 1.0 BLEU point over corresponding strong Transformer baselines. In addition, we design a novel analysis to show that the improvement of translation performance is indeed caused by relieving the problem of wrongly focusing on the source or target context. 2 Methodology Given a source sentence x = ⟨x1, · · · , x|x|⟩and a target sentence y = ⟨y1, · · · , y|y|⟩, our proposed model is defined by the following conditional probability under the Transformer architecture: 1 P (y | x) = |y| Q i=1 P (yi | y<i, x) = |y| Q i=1 P yi | cL i  , (1) where y<i = ⟨y1, . . . , yi−1⟩denotes a prefix of y with length i −1, and cL i denotes the Lth layer 1Throughout this paper, a variable in bold font such as x denotes a sequence while regular font such as x denotes an element which may be a scalar x, vector x or matrix X. context in the decoder with L layers which is obtained from the representation of y<i and hL, i.e., the top layer hidden representation of x, similar to the original Transformer. To finish the overall definition of our model in equation 1, we will expand the definition cL i based on context gates in the following subsections. 2.1 Context Gated Transformer To develop context gates for our model, it is necessary to define the source and target contexts at first. Unlike the case in RNN, the source sentence x and the target prefix y<i are tightly coupled in our model, and thus it is not trivial to define the source and target contexts. 
Suppose the source and target contexts at each layer l are denoted by sl i and tl i. We recursively define them from cl−1 <i as follows. 2 tl i = rn ◦ln ◦att  cl−1 i , cl−1 <i  , sl i = ln ◦att tl i, hL , (2) where ◦is functional composition, att (q, kv) denotes multiple head attention with q as query, k as key, v as value, and rn as a residual network (He et al., 2016), ln is layer normalization (Ba et al., 2016), and all parameters are removed for simplicity. In order to control the contributions from source or target side, we define cl i by introducing a context gate zl i to combine sl i and tl i as following: cl i = rn ◦ln ◦ff (1 −zl i) ⊗tl i + zl i ⊗sl i  (3) with zl i = σ ff tl i∥sl i  , (4) where ff denotes a feedforward neural network, ∥denotes concatenation, σ(·) denotes a sigmoid function, and ⊗denotes an element-wise multiplication. zl i is a vector (Tu et al. (2017) reported that a gating vector is better than a gating scalar). Note that each component in zl i actually induces a probabilistic model indicating whether the next generated word yi is mainly contributed from the source (x) or target sentence (y<i) , as shown in Figure 1. Remark It is worth mentioning that our proposed model is similar to the standard Transformer with boiling down to replacing a residual connection 2For the base case, c0 <i is word embedding of y<i. 8557 with a high way connection (Srivastava et al., 2015; Zhang et al., 2018): if we replace (1 −zl i) ⊗tl i + zl i ⊗sl i in equation 3 by tl i +sl i, the proposed model is reduced to Transformer. 2.2 Regularization of Context Gates In our preliminary experiments, we found learning context gates from scratch cannot effectively reduce the context selection errors as described in Section 3.3. To address this issue, we propose a regularization method to guide the learning of context gates by external supervision z∗ i which is a binary number representing whether yi is contributed from either source (z∗ i = 1) or target sentence (z∗ i = 0). Formally, the training objective is defined as follows: ℓ= −log P(y | x)+λ X l,i  z∗ i max(0.5−zl i, 0) + (1 −z∗ i ) max(zl i −0.5, 0)  , (5) where zl i is a context gate defined in equation 4 and λ is a hyperparameter to be tuned in experiments. Note that we only regularize the gates during the training, but we skip the regularization during inference. Because golden z∗ i are inaccessible for each word yi in the training corpus, we ideally have to annotate it manually. However, it is costly for human to label such a large scale dataset. Instead, we propose an automatic method to generate its value in practice in the next subsection. 2.3 Generating Supervision z∗ i To decide whether yi is contributed from the source (x) or target sentence (y<i) (Li et al., 2019), a metric to measure the correlation between a pair of words (⟨yi, xj⟩or ⟨yi, yk⟩for k < i) is first required. This is closely related to a well-studied problem, i.e., word collocation (Liu et al., 2009), and we simply employ the pointwise mutual information (PMI) to measure the correlation between a word pair ⟨µ, ν⟩following Bouma (2009): pmi (µ, ν) = log P(µ,ν) P(µ)P(ν) = log Z + log C(µ,ν) C(µ)C(ν), (6) where C (µ) and C (ν) are word counts, C (µ, ν) is the co-occurrence count of words µ and ν, and Z is the normalizer, i.e., the total number of all possible (µ, ν) pairs. To obtain the context gates, we define two types of PMI according to different C (µ, ν) including two scenarios as follows. 
PMI in the Bilingual Scenario For each parallel sentence pair ⟨x, y⟩in training set, C (yi, xj) is added by one if both yi ∈y and xj ∈x. PMI in the Monolingual Scenario In the translation scenario, only the words in the preceding context of a target word should be considered. So for any target sentence y in the training set, C (yi, yk) is added by one if both yi ∈y and yk ∈y<i. Given the two kinds of PMI for a bilingual sentence ⟨x, y⟩, each z∗ i for each yi is defined as follows, z∗ i = 1maxj pmi(yi,xj)>maxk<i pmi(yi,yk), (7) where 1b is a binary function valued by 1 if b is true and 0 otherwise. In equation 7, we employ max strategy to measure the correlation between yi and a sentence (x or y<i). Indeed, it is similar to use the average strategy, but we did not find its gains over max in our experiments. 3 Experiments The proposed methods are evaluated on NIST ZH⇒EN 3, WMT14 EN⇒DE 4, IWSLT14 DE⇒EN 5 and IWSLT17 FR⇒EN 6 tasks. To make our NMT models capable of open-vocabulary translation, all datasets are preprocessed with Byte Pair Encoding (Sennrich et al., 2015). All proposed methods are implemented on top of Transformer (Vaswani et al., 2017) which is the state-of-the-art NMT system. Case-insensitive BLEU score (Papineni et al., 2002) is used to evaluate translation quality of ZH⇒EN, DE⇒EN and FR⇒EN. For the fair comparison with the related work, EN⇒DE is evaluated with case-sensitive BLEU score. Setup details are described in Appendix A. 3.1 Tuning Regularization Coefficient In the beginning of our experiments, we tune the regularization coefficient λ on the DE⇒EN task. Table 2 shows the robustness of λ, because the translation performance only fluctuates slightly over various λ. In particular, the best performance 3LDC2000T50, LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07 4WMT14: http://www.statmt.org/wmt14/ 5IWSLT14: http://workshop2014.iwslt.org/ 6IWSLT17: http://workshop2017.iwslt.org/ 8558 Models params ×106 ZH⇒EN EN⇒DE DE⇒EN FR⇒EN MT05 MT06 MT08 RNN based NMT 84 30.6 31.1 23.2 – – – Tu et al. (2017) 88 34.1 34.8 26.2 – – – Vaswani et al. (2017) 65 – – – 27.3 – – Ma et al. (2018) – 36.8 35.9 27.6 – – – Zhao et al. (2018) – 43.9 44.0 33.3 – – – Cheng et al. (2018) – 44.0 44.4 34.9 – – – Transformer 74 46.9 47.4 38.3 27.4 32.2 36.8 This Work Context Gates 92 47.1 47.6 39.1 27.9 32.5 37.7 Regularized Context Gates 92 47.7 48.3 39.7 28.1 33.0 38.3 Table 1: Translation performances (BLEU). The RNN based NMT (Bahdanau et al., 2014) is reported from the baseline model in Tu et al. (2017). “params” shows the number of parameters of models when training ZH⇒EN except Vaswani et al. (2017) is for EN⇒DE tasks. λ 0.1 0.5 1 2 10 BLEU 32.7 32.6 33.0 32.7 32.6 * Results are measured on DE⇒EN task. Table 2: Translation performance over different regularization coefficient λ. is achieved when λ = 1, which is the default setting throughout this paper. 3.2 Translation Performance Table 1 shows the translation quality of our methods in BLEU. Our observations are as follows: 1) The performance of our implementation of the Transformer is slightly higher than Vaswani et al. (2017), which indicates we are in a fair comparison. 2) The proposed Context Gates achieves modest improvement over the baseline. As we mentioned in Section 2.1, the structure of RNN based NMT is quite different from the Transformer. Therefore, naively introducing the gate mechanism to the Transformer without adaptation does not obtain similar gains as it does in RNN based NMT. 
3) The proposed Regularized Context Gates improves nearly 1.0 BLEU score over the baseline and outperforms all existing related work. This indicates that the regularization can make context gates more effective in relieving the context control problem as discussed following. 3.3 Error Analysis To explain the success of Regularized Context Gates, we analyze the error rates of translation and context selection. Given a sentence pair x and y, the forced decoding translation error is defined as P (yi | y<i, x) < P (ˆyi | y<i, x), where ˆyi ≜arg maxv P (v | y<i, x) and v denotes any token in the vocabulary. The context selection error is defined as z∗ i (yi) ̸= z∗ i (ˆyi), where z∗ i is defined in equation 7. Note that a context selection error must be a translation error but the opposite is not true. The example shown in Figure 1 also demonstrates a context selection error indicating the translation error is related with the bad context selection. Models FER CER CE/FE Transformer 40.5 13.8 33.9 Context Gates 40.5 13.7 33.7 Regularized Context Gates 40.0 13.4 33.4 * Results are measured on MT08 of ZH⇒EN task. Table 3: Forced decoding translation error rate (FER), context selection error rate (CER) and the proportion of context selection errors over forced decoding translation errors (CE/FE) of the original and context gated Transformer with or without regularization. As shown in Table 3, the Regularized Context Gates significantly reduce the translation error by avoiding the context selection error. The Context Gates are also able to avoid few context selection error but cannot make a notable improvement in translation performance. It is worth to note that there is approximately one third translation error is related to context selection error. The Regularized Context Gates indeed alleviate this severe problem by effectively rebalancing of source and target context for translation. 3.4 Statistics of Context Gates Table 4 summarizes the mean and variance of each context gate (every dimension of the context gate vectors) over the MT08 test set. It shows that learning context gates freely from scratch tends to pay more attention to target context (0.38 < 0.5), which 8559 Models Mean Variance Context Gates 0.38 0.10 Regularized Context Gates 0.51 0.13 * Results are measured on MT08 of ZH⇒EN task. Table 4: Mean and variance of context gates means the model tends to trust its language model more than the source context, and we call this context imbalance bias of the freely learned context gate. Specifically, this bias will make the translation unfaithful for some source tokens. As shown in Table 4, the Regularized Context Gates demonstrates more balanced behavior (0.51≈0.5) over the source and target context with similar variance. 3.5 Regularization in Different Layers To investigate the sensitivity of choosing different layers for regularization, we only regularize the context gate in every single layer. Table 5 shows that there is no significant performance difference, but all single layer regularized context gate models are slightly inferior to the model, which regularizes all the gates. Moreover, since nearly no computation overhead is introduced and for design simplicity, we adopt regularizing all the layers. Layers N/A 1 2 3 4 ALL BLEU 32.5 32.8 32.7 32.5 32.3 33.0 * Results are measured on DE⇒EN task. Table 5: Regularize context gates on different layers.“N/A” indicates regularization is not added. “ALL” indicates regularization is added to all the layers. 
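For reference, the supervision z∗ used throughout these experiments (Section 2.3, equations 6 and 7) can be generated offline as in the sketch below. It is a simplified illustration: whitespace-tokenized input is assumed, the counts come from the training bitext itself, and the normalizer Z is approximated by the total pair count.

```python
import math
from collections import Counter
from itertools import product

def pmi_tables(bitext):
    """Collect the counts behind equation 6 from a tokenized training bitext."""
    uni, bi_pair, mono_pair = Counter(), Counter(), Counter()
    for src, tgt in bitext:                      # src, tgt: lists of tokens
        uni.update(src)
        uni.update(tgt)
        bi_pair.update(product(tgt, src))        # (y_i, x_j) co-occurrences
        for i, y in enumerate(tgt):              # (y_i, y_k) with k < i only
            mono_pair.update((y, yk) for yk in tgt[:i])
    return uni, bi_pair, mono_pair

def pmi(pair, uni, pairs, z):
    # pmi(u, v) = log Z + log(C(u, v) / (C(u) C(v))); Z is approximated here by
    # the total pair count rather than the number of all possible pairs.
    return math.log(z) + math.log(pairs[pair] / (uni[pair[0]] * uni[pair[1]]))

def supervision(src, tgt, uni, bi_pair, mono_pair):
    """z*_i = 1 iff max_j pmi(y_i, x_j) > max_{k<i} pmi(y_i, y_k)  (equation 7)."""
    # Assumes (src, tgt) was part of the bitext used for counting, so counts are non-zero.
    z_bi, z_mono = sum(bi_pair.values()), sum(mono_pair.values())
    labels = []
    for i, y in enumerate(tgt):
        best_src = max(pmi((y, x), uni, bi_pair, z_bi) for x in src)
        best_tgt = max((pmi((y, yk), uni, mono_pair, z_mono) for yk in tgt[:i]),
                       default=float("-inf"))
        labels.append(1 if best_src > best_tgt else 0)
    return labels
```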
3.6 Effects on Long Sentences In Tu et al. (2017), context gates alleviate the problem of long sentence translation of attentional RNN based system (Bahdanau et al., 2014). We follow Tu et al. (2017) and compare the translation performances according to different lengths of the sentences. As shown in Figure 2, we find Context Gates does not improve the translation of long sentences but translate short sentences better. Fortunately, the Regularized Context Gates indeed significantly improves the translation for both short sentences and long sentences. 4 Conclusions This paper transplants context gates from the RNN based NMT to the Transformer to control the source and target context for translation. We find [0,10) [10,20) [20,30) [30,40) [40,50) [50,60) [60,130) Length of Source Sentence 34 36 38 40 42 BLEU score Transformer Context Gates Regularized Context Gates Figure 2: Translation performance on MT08 test set with respect to different lengths of source sentence. Regularized Context Gates significantly improves the translation of short and long sentences. that context gates only modestly improve the translation quality of the Transformer, because learning context gates freely from scratch is more challenging for the Transformer with the complicated structure than for RNN. Based on this observation, we propose a regularization method to guide the learning of context gates with an effective way to generate supervision from training data. Experimental results show the regularized context gates can significantly improve translation performances over different translation tasks even though the context control problem is only slightly relieved. In the future, we believe more work on alleviating context control problem has the potential to improve translation performance as quantified in Table 3. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, pages 31–40. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-directed attention for neural machine translation. In ThirtySecond AAAI Conference on Artificial Intelligence. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards ro8560 bust neural machine translation. arXiv preprint arXiv:1805.06130. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263–270. Association for Computational Linguistics. Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2018. Pervasive attention: 2d convolutional neural networks for sequence-to-sequence prediction. arXiv preprint arXiv:1808.03867. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. JMLR. org. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. 
Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems, pages 7944–7954. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 48–54. Association for Computational Linguistics. Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1293–1303. Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. 2018. Target foresight based attention for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1380–1390. Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. arXiv preprint arXiv:1609.04186. Zhanyi Liu, Haifeng Wang, Hua Wu, and Sheng Li. 2009. Collocation extraction using monolingual word alignment method. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 487– 495. Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018. Bag-of-words as target for neural machine translation. arXiv preprint arXiv:1805.04871. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. arXiv preprint arXiv:1608.00112. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017. Context gates for neural machine translation. Transactions of the Association for Computational Linguistics, 5:87–99. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Mingming Yang, Min Zhang, Kehai Chen, Rui Wang, and Tiejun Zhao. 2020. Neural machine translation with target-attention model. IEICE TRANSACTIONS on Information and Systems, 103(3):684– 694. Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun, Huanbo Luan, and Yang Liu. 2017. Thumt: an open source toolkit for neural machine translation. arXiv preprint arXiv:1706.06415. 
Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. arXiv preprint arXiv:1810.03581. 8561 Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong, and Hua Wu. 2018. Addressing troublesome words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 391–400. 8562 A Details of Data and Implementation The training data for ZH⇒EN task consists of 1.8M sentence pairs. The development set is chosen as NIST02 and test sets are NIST05, 06, 08. For EN⇒DE task, its training data contains 4.6M sentences pairs. Both FR⇒EN and DE⇒EN tasks contain around 0.2M sentence pairs. For ZH⇒EN and EN⇒DE tasks, the joint vocabulary is built with 32K BPE merge operations, and for DE⇒EN and FR⇒EN tasks it is built with 16K merge operations. Our implementation of context gates and the regularization are based on Transformer, implemented by THUMT (Zhang et al., 2017). For ZH⇒EN and EN⇒DE tasks, only the sentences of length up to 256 tokens are used with no more than 215 tokens in a batch. The dimension of both word embeddings and hidden size are 512. Both encoder and decoder have 6 layers and adopt multi-head attention with 8 heads. For FR⇒EN and DE⇒EN tasks, we use a smaller model with 4 layers and 4 heads, and both the embedding size and the hidden size is 256. The training batch contains no more than 212 tokens. For all tasks, the beam size for decoding is 4, and the loss function is optimized with Adam, where β1 = 0.9, β2 = 0.98 and ϵ = 10−9.
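As a compact summary of the gating in equations 3 and 4, a minimal PyTorch sketch of a single-layer context gate is shown below. It reflects one plausible reading of the sub-layer ordering (gated sum, then feed-forward, residual connection, and layer normalization); the exact composition in the released THUMT-based implementation may differ.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Combine target context t and source context s with a gating vector z (eqs. 3-4)."""

    def __init__(self, d_model=512, d_ff=2048):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)   # ff([t ; s]) feeding the sigmoid
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, t, s):
        # z_i = sigmoid(ff([t_i ; s_i])) -- a gating *vector*, one value per dimension.
        z = torch.sigmoid(self.gate(torch.cat([t, s], dim=-1)))
        gated = (1.0 - z) * t + z * s                 # rebalance target vs. source context
        return self.norm(gated + self.ff(gated))      # feed-forward, residual, layer norm

# Toy usage: batch of 2 sentences, 7 target positions, model dimension 512.
t = torch.randn(2, 7, 512)
s = torch.randn(2, 7, 512)
print(ContextGate()(t, s).shape)   # torch.Size([2, 7, 512])
```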
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8563–8568 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8563 A Multi-Perspective Architecture for Semantic Code Search Rajarshi Haldar†, Lingfei Wu‡, Jinjun Xiong‡, Julia Hockenmaier† †University of Illinois at Urbana-Champaign, Champaign, IL, USA ‡IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA {rhaldar2, juliahmr}@illinois.edu {wuli, jinjun}@us.ibm.com Abstract The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories. In this paper, we propose a novel multiperspective cross-lingual neural framework for code–text matching, inspired in part by a previous model for monolingual text-to-text matching, to capture both global and local similarities. Our experiments on the CoNaLa dataset show that our proposed model yields better performance on this cross-lingual text-to-code matching task than previous approaches that map code and text to a single joint embedding space. 1 Introduction In semantic code search or retrieval, the user provides a natural language query, and the system returns a ranked list of relevant code snippets from a database or repository for that query. This task is usually performed using a matching model that computes the similarity between code snippets and natural language descriptions by mapping code and natural language embeddings into a common space where the distance between a piece of code and its corresponding description is small (Gu et al., 2018; Yao et al., 2019). But current models do not explicitly model any interactions between the code and the description until the final step when their global similarity is calculated. In this paper, we propose a novel multiperspective neural framework for code–text matching that captures both global and local similarities. We show that it yields improved results on semantic code search. We apply our model to the CoNaLa benchmark dataset (Yin et al., 2018), which consists of Python code snippets and their corresponding annotations in English. We believe that our model could be applied to other programming languages as well. We have made our code publicly available for research purpose 1. 2 Background Semantic code search is a cross-modal ranking problem where items in one modality (code) need to be ranked according to how well they match queries in another (natural language). One standard way to compute the similarity of items drawn from two different modalities or languages is to map each modality into a common “semantic” vector space such that matching pairs are mapped to vectors that are close to each other. Gu et al. (2018) propose a code retrieval framework that jointly embeds code snippets and NL descriptions into a high dimensional embedding space such that the vectors representing a code snippet and its corresponding description have high similarity. A variety of different approaches for learning embeddings for code have been proposed. Because source code is less ambiguous than natural language, there are ways to exploit the underlying structure of code to obtain better representations. Wan et al. (2019); LeClair et al. (2020) show that using features extracted from Abstract Syntax Trees (AST’s) and Control Flow Graphs (CFG’s) lead to creating better representations of code. Hu et al. (2018); Haque et al. 
(2020) show that ASTs represented as compact strings can be used to represent code. Following these approaches, we developed a multi-modal framework that generates embeddings for code using both the code tokens and an AST representation. 1https://github.com/rajarshihaldar/ codetextmatch 8564 3 Models We compare four models: a baseline model (CT) that only considers text and source code, a (CAT) model that also includes embedding of Abstract Syntax Trees, a multi-perspective model (MP) that leverages multi-perspective matching operations as defined in a bilateral multi-perspective model (Wang et al., 2017), and our MP-CAT model that combines both MP and CAT architectures. 3.1 CT: A Baseline Code and Text Model Our baseline model (CT) is based on Gu et al. (2018)’s CODEnn model. It maps both code and natural language descriptions to vectors in the same embedding space and then computes the similarity between these vectors using the L2 distance metric. These vectors are computed by two sets of three layers (one set per modality): The Word Embedding Module consists of two independently pre-trained lookup tables that map code tokens or natural language tokens to embeddings. We use FastText (Bojanowski et al., 2017)) for all embeddings in this paper. The Context Representation Module consists of bi-directional LSTM layers (one for code, one for text) that map the word embedding sequences into another pair of sequences of embeddings that contain contextual information. The Maxpool Layer performs max pool (separately per dimension) over the Context Representation embedding sequences to obtain a single vector. The Similarity Module computes the similarity of the two vectors vc and vc produced by the Maxpool Layers as d(v1, v2) = d X i=1 (v1i −v2i)2 sim(vc, vd) = 1 −d( vc ∥vc∥2 , vd ∥vd∥2 ) where d returns the L2 distance between ddimensional vectors vc and vd. 3.2 CAT: An AST-Based Model To capture both syntactic and semantic features, we augment our baseline CT model with embeddings based on the Abstract Syntax Tree (AST) representation of the code. Most programming languages, including Python, come with a deterministic parser that outputs the AST representation of a code snippet. Python has a library module called ast that generates AST representations of code. We convert this AST representation to a string using structure-based traversal (SBT) (Hu et al., 2018). The CAT model is similar to the CT model, except that it extracts features from both the source code tokens and its corresponding AST representation. So the Word Embedding Module now contains three lookup tables: for code, AST, and natural language, respectively. Similarly, the Context Representation Module has 3 bi-directional LSTM layers which is followed by 3 Maxpool Layers. Before the output is passed to the similarity module, the output vectors of the two max pool layers representing code and AST are concatenated to form a single representation of the source code. Because of this, the hidden dimension in the bidirectional LSTM’s of the Context Representation Module for the natural language sequence is double that of code and AST sequences’ LSTM hidden dimensions. This ensures that, after concatenation, the vectors representing the candidate code snippet and the natural language description are of the same dimension. After that, the Similarity Module computes the similarity of these vectors via the same L2-distance-based operation as in CT. 
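As a reference for the similarity computation shared by CT and CAT, the sketch below implements the Section 3.1 formulas directly in PyTorch (the framework named later in the implementation details): the two vectors are L2-normalized and compared with the paper's summed-squared-difference distance d.

```python
import torch
import torch.nn.functional as F

def code_text_similarity(v_code, v_desc):
    """sim(v_c, v_d) = 1 - d(v_c/||v_c||, v_d/||v_d||), with d(v1, v2) = sum_i (v1_i - v2_i)^2."""
    v_code = F.normalize(v_code, p=2, dim=-1)
    v_desc = F.normalize(v_desc, p=2, dim=-1)
    d = ((v_code - v_desc) ** 2).sum(dim=-1)   # the paper's d: summed squared differences
    return 1.0 - d

# Ranking 500 candidate code vectors against one query description vector.
query = torch.randn(1, 256)
candidates = torch.randn(500, 256)
scores = code_text_similarity(candidates, query.expand_as(candidates))
ranking = scores.argsort(descending=True)      # best-matching code snippets first
```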
3.3 MP: A Multi-Perspective Model The CT and CAT models learn to map source code and natural language tokens into a joint embedding space such that semantically similar code-natural language pairs are projected to vectors that are close to each other. However, these two representations interact only in the final step when the global similarity of the sequence embeddings is calculated, but not during the first step when each sequence is encoded into its corresponding embedding. Wang et al. (2017) show that, for tasks such as paraphrase identification and natural language inference that require two pieces of texts from the same language to compare, it is beneficial to include a number of different (i.e., multi-perspective) local matching operations between the two input sequences when computing their vector representations. Given contextual sequence encodings P and Q (computed, e.g., by biLSTMs) for the two sequences to be compared, Wang et al. (2017)’s Bilateral Multi-Perspective Matching (BiMPM) model includes a matching mechanism that compares P and Q by matching each position in P with all positions in Q, and by matching each position in Q with all positions in P, under four different match8565 ing strategies. We will discuss these strategies in more detail under the Bilateral Multi-Perspective Matching (BiMPM) Module. We apply the MP model to our cross-modal codetext matching task as follows: The Word Embedding Layer takes as input the code sequence, AST sequence, and description sequence. The output of this layer is three independent sequences of token embeddings, one for each input sequence. The Context Representation Module consists of three sets of BiLSTM layers that each computes a contextual representation of each token in the corresponding input sequence. We concatenate the hidden states of the sequences representing the code and AST, respectively, to get one set of sequence embeddings representing the source code input. The Bilateral Multi-Perspective Matching (BiMPM) Module compares the two sequences, say P and Q, by matching each position in P with all positions in Q, and by matching each position in Q with all positions in P, under four different matching strategies m that each produce new embedding sequences P ′ m and Q′ m that have the same length as the original P and Q. Each matching strategy is parameterized by a feedforward network (e.g. P ′[i]m = fP→Q m (P[i], Qm; W P→Q m )) that takes in a token embedding P[i] and a strategyspecific single-vector representation of Qm, and returns a new vector P ′[i]m for P[i]. For each token P[i] ∈P (and conversely for any Q[j] ∈Q), Qm (Pm) is defined as follows: Full matching sets Qm (Pm) to be the final hidden state of Q (and vice versa for P). Maxpool matching obtains Qm by performing maximum pooling (per dimension) across the elements of Q. Attentive matching computes Qm as a weighted average of all Q[j] ∈Q, where Q[j]’s weight is the cosine similarity of P[i] and Q[j]. Max-Attentive matching sets Qm to be the Q[j] with the highest cosine similarity to P[i]. We concatenate the four P ′[i]m (Q′[i]m) for each token i to get two new sequences P ′ and Q′. The Local Aggregation Module aggregates these sequence embeddings into two fixed-length multi-perspective hidden representations by passing them through two different bi-LSTM layers (one for each sequence). For each sequence, we concatenate the final hidden states of both the forward and reverse directions to get a vector representation of that sequence. 
The Similarity Module computes the similarity of the two vectors returned by the Aggregation Module as before.

3.4 MP-CAT: A Combined Model

Our final model combines the MP and the CAT models. It contains the following components:

The CAT module reads in the code sequence, the AST sequence, and the natural language sequence and outputs two vectors, one jointly representing the code and the AST and the other representing the natural language description.

The MP module also reads in the code sequence, the AST sequence, and the natural language sequence. It returns two vectors, one for code and AST, and the other for the natural language description. The difference between this module and the previous one is that MP captures local information that is ignored in the global CAT embeddings.

The Global and Local Fusion Module concatenates the CAT and MP vectors representing the code to get the final code representation, and does the same for the CAT and MP vectors representing the natural language description, before computing their L2 distance in the same manner as the other similarity modules.

Figure 1 shows the pipeline of the MP-CAT framework.

Figure 1: The MP-CAT framework that contains both global-level and local-level features for code-text matching.

Framework   Training Time (s)   Evaluation Time (s)
CT          4663.10             6755.62
CAT         6702.69             11050.68
MP          183393.47           17374.14
MP-CAT      240062.38           25306.97
Table 1: Training and evaluation times for all our models. The models were trained for 100 epochs and the evaluation time was computed on 500 test queries.

Framework   MRR     R@1    R@5    R@10
CT          0.172   7.4    24.0   39.6
CAT         0.207   9.0    32.2   45.0
MP          0.154   6.4    21.6   33.6
MP-CAT      0.220   11.0   32.2   47.4
Table 2: Code search results.

4 Experiments

The CoNaLa Dataset. The CoNaLa dataset (Yin et al., 2018) has two parts: a manually curated parallel corpus of 2,379 training and 500 test examples, and a large automatically-mined dataset with 600k examples (which we ignore here). Each example consists of a snippet of Python code and its corresponding English description.

Pre-processing. We pre-process the text representing both the source code and the natural language descriptions using sub-word regularization based on unigram language modeling (Kudo, 2018), which transforms the original tokens into sequences of shorter (and hence more common) substrings. We use the sentencepiece library (Kudo and Richardson, 2018) and follow the same approach as used by Yin et al. (2018) for the CoNaLa dataset.

Training procedure. During training, we use triplets consisting of a code snippet, a correct description, and an incorrect description (obtained by random sampling from the training set). We sample 5 incorrect descriptions for each code-text pair, giving us five triplets for each training example. During the evaluation phase, for every natural language query D, we calculate the rank of its corresponding code snippet C among all 500 candidates in the test set.

4.1 Experimental Setup

We train our models on triplets ⟨C, D+, D−⟩ consisting of a snippet of code C, a natural language description D+ that correctly describes what the code does (a positive example), and a description D− that does not describe what the code does (a negative example). We minimize the ranking loss with margin ϵ, following Gu et al.
(2018):

L(\theta) = \sum_{\langle C, D^+, D^- \rangle} \max\left(0, \epsilon - \cos(C, D^+) + \cos(C, D^-)\right)

In the CAT model, since we first concatenate the vectors for the code and AST before comparing them with the vector for the natural language description, the first two vectors are each half the dimension size of the third one. Our models are implemented in PyTorch (Paszke et al., 2017) and trained using Adam (Kingma and Ba, 2014). Each model is trained for 100 epochs, and during the evaluation step, we use a set of 500 natural language queries from the test set. The training and evaluation times are shown in Table 1.

4.2 Results

Table 2 shows our test set results for code search. We report Recall@K (K = 1, 5, 10) and the mean reciprocal rank (MRR) of the correct answer.

The Impact of Modeling ASTs: In going from the first (CT) row to the second (CAT) row in Table 2, we see that the AST features alone increase MRR from 0.172 to 0.207. There is also an increase in R@K for all values of K. In fact, CAT's R@5 value is competitive with that of our best model.

Multi-Perspective Results: The results for the multi-perspective models are both surprising and interesting. Row 3 of Table 2 shows that the MP model on its own under-performs and actually has the worst results out of all the models we tested. On the other hand, we see that combining the MP and the CAT models into one framework gives the best performance across the board. This shows that even if we use a multi-perspective framework to model local features, we still need encoders to capture the global features of code and text in addition to the local features; otherwise, we end up missing the forest for the trees.

Query: Sort dictionary 'x' by value in ascending order
  MP-CAT: sorted(list(x.items()), key=operator.itemgetter(1))
  CAT: for k in sorted(foo.keys()): pass
Query: Run a command 'echo hello world' in bash instead of shell
  MP-CAT: os.system('/bin/bash -c "echo hello world"')
  CAT: os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')
Query: Select records of dataframe 'df' where the sum of column 'X' for each value in column 'User' is 0
  MP-CAT: df.groupby('User')['X'].filter(lambda x: x.sum() == 0)
  CAT: print(df.loc[df['B'].isin(['one', 'three'])])
Table 3: The top hits returned by the MP-CAT and CAT models for a natural language query.

Query: Concatenate elements of a list 'x' of multiple integers to a single integer
  MP-CAT: sum(d * 10 ** i for i, d in enumerate(x[::-1]))
  MP: [float(i) for i in lst]
Query: Convert pandas DataFrame 'df' to a dictionary using 'id' field as the key
  MP-CAT: df.set_index('id').to_dict()
  MP: data[data['Value'] == True]
Query: Replace repeated instances of a character '*' with a single instance in a string 'text'
  MP-CAT: re.sub('\\*\\*+', '*', text)
  MP: re.sub('^((?:(?!cat).)*cat(?:(?!cat).)*)cat', '\\1Bull', s)
Table 4: The top hits returned by the MP-CAT and MP models for a natural language query.

Comparison of MP-CAT, MP, and CAT Models: In Table 3, we present the retrieval results for select natural language queries from the development set returned by the MP-CAT and CAT models. We do the same for the MP-CAT and MP models in Table 4. Comparing MP-CAT and CAT, we observe that while CAT correctly identifies the data structures and libraries required to solve the user's problem, it ends up returning the wrong command. MP, on the other hand, sometimes fails to identify even the correct libraries required. In the second example in Table 4, it fails to understand that there is also a dictionary involved and ends up returning the wrong command.
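As a concrete illustration of the ranking objective defined in Section 4.1 above, here is a minimal PyTorch sketch of the margin-based loss over ⟨C, D+, D−⟩ triplets; the batching, names, and the specific margin value are our own assumptions, while the use of cosine similarity follows the equation itself.

```python
import torch
import torch.nn.functional as F

def ranking_loss(code_vec, pos_desc_vec, neg_desc_vec, margin=0.05):
    """Hinge-style ranking loss over <C, D+, D-> triplets:
    max(0, margin - cos(C, D+) + cos(C, D-)), averaged over the batch.
    All inputs have shape (batch, dim)."""
    pos_sim = F.cosine_similarity(code_vec, pos_desc_vec, dim=-1)
    neg_sim = F.cosine_similarity(code_vec, neg_desc_vec, dim=-1)
    return torch.clamp(margin - pos_sim + neg_sim, min=0.0).mean()
```

The same quantity can also be obtained with torch.nn.MarginRankingLoss applied to the two similarity scores with a target of +1.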
MP-CAT successfully finds the required code snippet when the user queries are longer and have multiple data structures involved.

5 Conclusions

In this paper, we consider the task of semantic code search or retrieval using a code-text similarity model. We propose MP-CAT, a novel multi-perspective deep neural network framework for this task. In contrast to previous approaches, the multi-perspective nature of our model allows it to capture richer similarities between the two sequences.

Acknowledgement

This work is supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network.

References

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.

Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In Proceedings of the 2018 40th International Conference on Software Engineering (ICSE 2018). ACM.

Sakib Haque, Alexander LeClair, Lingfei Wu, and Collin McMillan. 2020. Improved automatic summarization of subroutines via attention to file context. ArXiv, abs/2004.04881.

Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, ICPC '18, pages 200–210, New York, NY, USA. ACM.

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations.

Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics.

Alexander LeClair, Sakib Haque, Lingfei Wu, and Collin McMillan. 2020. Improved code summarization via a graph neural network. ArXiv, abs/2004.02843.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.

Yao Wan, Jingdong Shu, Yulei Sui, Guandong Xu, Zhou Zhao, Jian Wu, and Philip S. Yu. 2019. Multi-modal attention network learning for semantic source code retrieval. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 13–25.

Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 4144–4150.

Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. CoaCor: Code annotation for code retrieval with reinforcement learning. In The World Wide Web Conference, WWW '19, pages 2203–2214, New York, NY, USA. ACM.

Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from Stack Overflow. In International Conference on Mining Software Repositories, MSR, pages 476–486. ACM.
2020
758
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8569–8584 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 8569

Automated Topical Component Extraction Using Neural Network Attention Scores from Source-based Essay Scoring

Haoran Zhang, Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, [email protected]
Diane Litman, Department of Computer Science & LRDC, University of Pittsburgh, Pittsburgh, PA 15260, [email protected]

Abstract

While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting Topical Components (TCs) representing evidence from a source text using the intermediate output of attention layers. We evaluate performance using a feature-based AES requiring TCs. Results show that performance is comparable whether using automatically or manually constructed TCs for 1) representing essays as rubric-based features, and 2) grading essays.

1 Introduction

Automated essay scoring (AES) systems reliably grade essays at scale, while automated writing evaluation (AWE) systems additionally provide formative feedback to guide revision. Although neural networks currently generate state-of-the-art AES results (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Dong et al., 2017; Farag et al., 2018; Jin et al., 2018; Li et al., 2018; Tay et al., 2018; Zhang and Litman, 2018), non-neural AES create feature representations more easily usable by AWE (Roscoe et al., 2014; Foltz and Rosenstein, 2015; Crossley and McNamara, 2016; Woods et al., 2017; Madnani et al., 2018; Zhang et al., 2019). We believe that neural AES can also provide useful information for creating feature representations, e.g., by exploiting information in the intermediate layers.

Our work focuses on a particular source-based essay writing task called the response-to-text assessment (RTA) (Correnti et al., 2013). Recently, an RTA AWE system (Zhang et al., 2019) was built by extracting rubric-based features related to the use of Topical Components (TCs) in an essay. However, manual expert effort was first required to create the TCs. For each source, the TCs consist of a comprehensive list of topics related to evidence, which includes: 1) important words indicating the set of evidence topics in the source, and 2) phrases representing specific examples for each topic that students need to find and use in their essays.

To eliminate this expert effort, we propose a method for using the interpretable output of the attention layers of a neural AES for source-based essay writing, with the goal of extracting TCs. We evaluate this method by using the extracted TCs to support feature-based AES for two RTA source texts. Our results show that 1) the feature-based AES with TCs manually created by humans is matched by our neural method for generating TCs, and 2) the values of the rubric-based essay features based on automatic TCs are highly correlated with human Evidence scores.

2 Related Work

Three recent AWE systems have used non-neural AES to provide rubric-specific feedback. Woods et al. (2017) developed an influence estimation process that used a logistic regression AES to identify sentences needing feedback. Shibani et al.
(2019) presented a web-based tool that provides formative feedback on rhetorical moves in writing. Zhang et al. (2019) used features created for a random forest AES to select feedback messages, although human effort was first needed to create TCs from a source text. We automatically extract TCs using neural AES, thereby eliminating this expert effort.

Others have also proposed methods for preprocessing source information external to an essay. Content importance models for AES predict the parts of a source text that students should include when writing a summary (Klebanov et al., 2014). Methods for extracting important keywords or keyphrases also exist, both supervised (unlike our approach) (Meng et al., 2017; Mahata et al., 2018; Florescu and Jin, 2018) and unsupervised (Florescu and Caragea, 2017). Rahimi and Litman (2016) developed a TC extraction model based on LDA (Blei et al., 2003). While the LDA model considers all words equally, our model takes essay scores into account by using attention to represent word importance. Both the unsupervised keyword and LDA models will serve as baselines in our experiments.

In the computer vision area, attention-cropped images have been used for further image classification or object detection (Cao et al., 2015; Yuxin et al., 2018; Ebrahimpour et al., 2019). In the NLP area, Lei et al. (2016) proposed to use a generator to find candidate rationales, which are then passed through the encoder for prediction. Our work is similar in spirit to this line of work.

3 RTA Corpus and Prior AES Systems

The essays in our corpus were written by students in grades 4 to 8 in response to two RTA source texts (Correnti et al., 2013): RTAMVP (2970 essays) and RTASpace (2076 essays). Table 1 shows an excerpt from RTAMVP, the associated essay writing prompt, and a student essay. The bolding in the source indicates evidence examples that experts manually labeled as important for students to discuss (i.e., TC phrases).

Source Excerpt: Today, Yala Sub-District Hospital has medicine, free of charge, for all of the most common diseases. Water is connected to the hospital, which also has a generator for electricity. Bed nets are used in every sleeping site in Sauri...
Essay Prompt: The author provided one specific example of how the quality of life can be improved by the Millennium Villages Project in Sauri, Kenya. Based on the article, did the author provide a convincing argument that winning the fight against poverty is achievable in our lifetime? Explain why or why not with 3-4 examples from the text to support your answer.
Essay: In my opinion I think that they will achieve it in lifetime. During the years threw 2004 and 2008 they made progress. People didnt have the money to buy the stuff in 2004. The hospital was packed with patients and they didnt have alot of treatment in 2004. In 2008 it changed the hospital had medicine, free of charge, and for all the common dieases. Water was connected to the hospital and has a generator for electricity. Everybody has net in their site. The hunger crisis has been addressed with fertilizer and seeds, as well as the tools needed to maintain the food. The school has no fees and they serve lunch. To me thats sounds like it is going achieve it in the lifetime.
Table 1: A source excerpt for the RTAMVP prompt and an essay with a score of 3.

          RTAMVP       RTASpace
Score 1   852 (29%)    538 (26%)
Score 2   1197 (40%)   789 (38%)
Score 3   616 (21%)    512 (25%)
Score 4   305 (10%)    237 (11%)
Total     2970         2076
Table 2: The Evidence score distribution of RTA.
Evidence usage in each essay was manually scored on a scale of 1 to 4 (low to high). The distribution of Evidence scores is shown in Table 2. The essay in Table 1 received a score of 3, with the bolding indicating phrases semantically related to the TCs from the source text.

To date, two approaches to AES have been proposed for the RTA: AESrubric and AESneural. To support the needs of AWE, AESrubric (Zhang and Litman, 2017) used a traditional supervised learning framework where rubric-motivated features were extracted from every essay before model training: Number of Pieces of Evidence (NPE) [1], Concentration (CON), Specificity (SPC) [2], and Word Count (WOC). The two aspects of TCs introduced in Section 1 (topic words, specific example phrases) were used during feature extraction. Motivated by improving stand-alone AES performance (i.e., when an interpretable model was not needed for subsequent AWE), Zhang and Litman (2018) developed AESneural, a hierarchical neural model with a co-attention mechanism at the sentence level to capture the relationship between the essay and the source. Neither feature engineering nor TC creation was needed before training.

[1] An integer feature based on the list of topic words for each topic.
[2] A vector of integer values indicating the number of specific example phrases (semantically) mentioned in the essay per topic.

4 Attention-Based TC Extraction: TCattn

In this section, we propose a method for extracting TCs based on the AESneural attention level outputs. Since the self-attention and co-attention mechanisms were designed to capture sentence and phrase importance, we hypothesize that the attention scores can help determine if a sentence or phrase has important source-related information.

To provide intuition, Table 3 shows example sentences from the student essay in Table 1. Bolded are phrases with the highest self-attention score within the sentence. Italics are specific example phrases that refer to the manually constructed TCs for the source. Attnsent is the text-to-essay attention score that measures which essay sentences have the closest meaning to a source sentence. Attnphrase is the self-attention score of the bolded phrase that measures phrase importance. A sentence with a high attention score tends to include at least one specific example phrase, and vice versa. The phrase with the highest attention score tends to include at least one specific example phrase if the sentence has a high attention score.

No.  Sentences                                                                                                          attnsent   attnphrase
1    People didn't have the money to buy the stuff in 2004.                                                             0.00420    0.23372
2    The hunger crisis has been addressed with fertilizer and seeds, as well as the tools needed to maintain the food.  0.08709    0.62848
3    The school has no fees and they serve lunch.                                                                       0.10686    0.63369
Table 3: Example attention scores of essay sentences.

Based on these observations, we first extract the output of two layers from the neural network: 1) the attnsent of each sentence, and 2) the output of the convolutional layer as the representation of the phrase with the highest attnphrase in each sentence (denoted by cnnphrase). We also extract the plain text of the phrase with the highest attnphrase in each sentence (denoted by textphrase). Then, our TCattn method uses the extracted information in 3 main steps: 1) filtering out textphrase from sentences with low attnsent, 2) clustering all remaining textphrase based on cnnphrase, and 3) generating TCs from clusters.
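As a rough sketch of the first two steps, which are elaborated below, the following Python fragment filters phrases by sentence-level attention and then clusters them in two levels with k-medoids. This is our own illustration: the threshold value, the array layout, and the choice of the scikit-learn-extra KMedoids implementation are all assumptions, since the paper does not give code.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids  # one off-the-shelf k-medoids; any implementation would do

def filter_phrases(attn_sent, text_phrase, cnn_phrase, threshold=0.05):
    """Step 1: keep the highest-attention phrase of each sentence whose
    sentence-level attention score clears the threshold (value illustrative).
    attn_sent: (n,) scores; text_phrase: list of n strings; cnn_phrase: (n, dim)."""
    keep = np.asarray(attn_sent) >= threshold
    return [p for p, k in zip(text_phrase, keep) if k], np.asarray(cnn_phrase)[keep]

def cluster_phrases(phrases, vectors, num_topics, num_examples):
    """Step 2: k-medoids into M topic clusters, then k-medoids again inside each
    topic to get N example clusters; returns {topic: {example: [phrases]}}."""
    topic_labels = KMedoids(n_clusters=num_topics, random_state=0).fit(vectors).labels_
    clusters = {}
    for t in range(num_topics):
        idx = np.where(topic_labels == t)[0]
        if len(idx) == 0:
            continue
        n = min(num_examples, len(idx))  # guard against very small topic clusters
        example_labels = KMedoids(n_clusters=n, random_state=0).fit(vectors[idx]).labels_
        clusters[t] = {e: [phrases[i] for i, lab in zip(idx, example_labels) if lab == e]
                       for e in range(n)}
    return clusters
```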
The first filtering step keeps all textphrase where the original sentences have attnsent higher than a threshold. The intuition is that lower attnsent indicates less source-related information.

The second step clusters these textphrase based on their corresponding representations cnnphrase. We use k-medoids to cluster textphrase into M clusters, where M is the number of topics in the source text. Then, for the textphrase in each topic cluster, we use k-medoids to cluster them into N clusters, where N is the number of specific example phrases we want to extract from each topic. The outputs of this step are M × N clusters.

The third step uses the topic and example clustering to extract TCs. As noted earlier, TCs include two parts: topic words and specific example phrases. Since our method is data-driven and students introduce their vocabulary into the corpus, essay text is noisy. To make the TC output cleaner, we filter out words that are not in the source text. To obtain topic words, we combine all textphrase from each topic cluster to calculate the word frequency per topic. To make topics unique, we assign each word to the topic cluster in which it has the highest normalized word frequency. We then include the top Ktopic words based on their frequency in each topic cluster. To obtain example phrases, we combine all textphrase from each example cluster to calculate the word frequency per example, then include the top Kexample words based on their frequency in each example cluster.

Layer       Parameter Name          Value
Embedding   Embedding dimension     50
Word-CNN    Kernel size             5
            Number of filters       100
Sent-LSTM   Hidden units            100
Modeling    Hidden units            100
Dropout     Dropout rate            0.5
Others      Epochs                  100
            Batch size              100
            Initial learning rate   0.001
            Momentum                0.9
Table 4: Hyper-parameters for neural training.

Figure 1: An overview of four TC extraction systems.

5 Experimental Setup and Results

Figure 1 shows an overview of four TC extraction methods to be evaluated. TCmanual (upper bound) uses a human expert to extract TCs from a source text. TCattn is our proposed method and automatically extracts TCs using both a source text and student essays. TClda (Rahimi and Litman, 2016) (baseline) builds on LDA to extract TCs from student essays only, while TCpr (baseline) builds on PositionRank (Florescu and Caragea, 2017) to instead extract TCs from only the source text. Since PositionRank is not designed for TC extraction,
We configure experiments to test two hypotheses: H1) the AESrubric model for scoring Evidence (Zhang and Litman, 2017) will perform comparably when extracting features using either TCattn or TCmanual, and will perform worse when using TClda or TCpr; H2) the correlation between the human Evidence score and the feature values (NPE and sum of SPC features)3 will be comparable when extracted using TCattn and TCmanual, and will be stronger than when using TClda and TCpr. The experiment for H1 tests the impact of using our proposed TC extraction method on the downstream AESrubric task, while the H2 experiment examines the impact on the essay representation itself. Following Zhang and Litman (2017), we stratify essay corpora: 40% for training word embeddings and extracting TCs, 20% for selecting the best embedding and parameters, and 40% for testing. We use the hyper-parameters from Zhang and Litman (2018) for neural training as shown in Table 4. Table 5 shows all other parameters selected using the development set. Results for H1. H1 is supported by the results in Table 6, which compares the Quadratic Weighted Kappa (QWK) between human and AESrubric Evidence scores (values 1-4) when AESrubric uses TCmanual versus each of the automatic methods. TCattn always yields better performance, and even significantly better than TCmanual. Results for H2. The results in Table 7 support H2. TCattn outperforms the two automated base3These features are extracted based on TCs. Prompt TCmanual (1) TClda (2) TCpr (3) TCattn (4) RTAMV P 0.643 (2,3) 0.614 (3) 0.525 0.648 (1,2,3) RTASpace 0.609 (3) 0.615 (3) 0.559 0.622 (1,3) Table 6: The performance (QWK) of AESrubric using different TC extraction methods for feature creation. The numbers in the parentheses show the model numbers over which the current model performs significantly better (p < 0.05). The best results between automated methods in each row are in bold. Prompt Feature TCmanual TClda TCpr TCattn RTAMV P NPE 0.542 0.482 0.587 0.639 SPC (sum) 0.689 0.585 0.365 0.679 RTASpace NPE 0.484 0.513 0.494 0.625 SPC (sum) 0.601 0.574 0.533 0.598 Table 7: Pearson’s r comparing feature values computed using each TC extraction method with human (gold-standard) Evidence essay scores. All correlation values are significant (p ≤0.05). The best results between automated methods in each row are in bold. lines, and for NPE even yields stronger correlations than the manual TC method. Qualitative Analysis. The manually-created topic words for RTAMV P represent 4 topics, which are “hospital”, “malaria”, “farming” and “school”4. Although Table 5 shows that the automated list has more topics for topic words and might have broken one topic into separate topics, a good automated list should have more topics related to the 4 topics above. We manually assign a topic for each of the topic words from the different automated methods. TClda has 4 related topics out of 9 (44.44%), TCpr has 6 related topics out of 19 (31.58%), and TCattn has 10 related topics out of 16 (62.50%). Obviously, TCattn preserves more related topics than our baselines. Moving to the second aspect of TCs (specific example phrases), Table 8 shows the first 10 specific example phrases for a manually-created category that introduces the changes made by the MVP project5. This category is a mixture of different topics because it talks about the “hospital”, “malaria”, “school”, and “farming” at the same time. TCattn has overlap with TCmanual on different topics. 
However, TClda mainly talks about "hospital", because the nature of the LDA model doesn't allow mixing specific example phrases about different topics in one category. Unfortunately, TCpr does not include any overlapping specific phrase in the first 10 items; they all refer to some general example phrases from the beginning of the source article. Although there are some related specific example phrases in the full list, they are mainly about school. This is because the PositionRank algorithm tends to assign higher scores to words that appear early in the text.

[4] All Topic Words generated by the different models can be found in Appendix A.1.
[5] All Specific Example Phrases generated by the different models can be found in Appendix A.2.

TCmanual TClda TCpr TCattn progress just four years running water electricity brighter future hannah electricity running water irrigation set medicine most common diseases water connected hospital generator electricity millennium villages project poor showed treatment school supplies water connected hospital patients afford unpaved dirt road farmers could crops afford bed hospital generator electricity rooms packed patients probably bar sauri primary school electricity hospital bed nets used every sleeping site share beds future hannah better fertilizer medicine enough also hunger crisis addressed fertilizer seeds recieve treatment sauri primary school rooms packed patients tools needed maintain food supply doctor clinical officer running hospital villages project food fertilizer crops get supply no school fees doctors clinical millennium development goals five net costs 5 school attendance rate way up water fertilizer knowledge village leaders nets net bed free kids go school now receive treatment dirt road running water supplies schools almost ... ... ... ...
Table 8: Specific example phrases for the RTAMVP progress topic.

6 Conclusion and Future Work

This paper proposes TCattn, a method for using the attention scores in a neural AES model to automatically extract the Topical Components of a source text. Evaluations show the potential of TCattn for eliminating expert effort without degrading AESrubric performance or the feature representations themselves. TCattn outperforms baselines and generates comparable or even better results than a manual approach.

Although TCattn outperforms all baselines and requires no human effort on TC extraction, annotation of essay Evidence scores is still needed. This leads to an interesting direction for future investigation: training AESneural using a gold standard that can be extracted automatically. One of our next steps is to investigate the impact of TC extraction methods on a corresponding AWE system (Zhang et al., 2019), which uses the feature values produced by AESrubric to generate formative feedback to guide essay revision. Currently, TClda is trained on student essays, while TCpr only works on the source article. However, TCattn uses both student essays and the source article for TC generation. It might therefore be hard to say that the superior performance of TCattn is due to the neural architecture and attention scores rather than to the richer training resources. Therefore, a comparison between TCattn and a model that uses both student essays and the source article is needed.

Acknowledgments

We would like to show our appreciation to every member of the RTA group for sharing their pearls of wisdom with us.
We are also immensely grateful to all members of the PETAL group and reviewers for their comments on an earlier version of the paper. The research reported here was supported, in whole or in part, by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A160245 to the University of Pittsburgh. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education. References Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 715–725. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, Liang Wang, Chang Huang, Wei Xu, et al. 2015. Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2956–2964. Richard Correnti, Lindsay Clare Matsumura, Laura Hamilton, and Elaine Wang. 2013. Assessing students’ skills at writing analytically in response to texts. The Elementary School Journal, 114(2):142– 177. Scott A Crossley and Danielle S McNamara. 2016. Adaptive educational technologies for literacy instruction. Routledge. 8574 Fei Dong, Yue Zhang, and Jie Yang. 2017. Attentionbased recurrent convolutional neural network for automatic essay scoring. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 153–162. Mohammad K Ebrahimpour, Jiayun Li, Yen-Yun Yu, Jackson Reesee, Azadeh Moghtaderi, Ming-Hsuan Yang, and David C Noelle. 2019. Ventral-dorsal neural networks: Object detection via selective attention. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 986–994. IEEE. Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 263–271. Corina Florescu and Cornelia Caragea. 2017. Positionrank: An unsupervised approach to keyphrase extraction from scholarly documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105–1115. Corina Florescu and Wei Jin. 2018. Learning feature representations for keyphrase extraction. In ThirtySecond AAAI Conference on Artificial Intelligence. Peter W Foltz and Mark Rosenstein. 2015. Analysis of a large-scale formative writing assessment system with automated feedback. In Proceedings of the Second (2015) ACM Conference on Learning@ Scale, pages 339–342. ACM. Cancan Jin, Ben He, Kai Hui, and Le Sun. 2018. Tdnn: a two-stage deep neural network for promptindependent automated essay scoring. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1088–1097. Beata Beigman Klebanov, Nitin Madnani, Jill Burstein, and Swapna Somasundaran. 2014. Content importance models for scoring writing from sources. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 247–252. 
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Xia Li, Minping Chen, Jianyun Nie, Zhenxing Liu, Ziheng Feng, and Yingdan Cai. 2018. Coherencebased automated essay scoring using self-attention. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 386–397. Springer. Nitin Madnani, Jill Burstein, Norbert Elliot, Beata Beigman Klebanov, Diane Napolitano, Slava Andreyev, and Maxwell Schwartz. 2018. Writing mentor: Self-regulated writing feedback for struggling writers. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 113–117. Debanjan Mahata, John Kuriakose, Rajiv Ratn Shah, and Roger Zimmermann. 2018. Key2vec: Automatic ranked keyphrase extraction from scientific articles using phrase embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 634–639. Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582–592. Zahra Rahimi and Diane Litman. 2016. Automatically extracting topical components for a responseto-text writing assessment. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 277–282. Rod D Roscoe, Laura K Allen, Jennifer L Weston, Scott A Crossley, and Danielle S McNamara. 2014. The writing pal intelligent tutoring system: Usability testing and development. Computers and Composition, 34:39–59. Antonette Shibani, Simon Knight, and Simon Buckingham Shum. 2019. Contextualizable learning analytics design: A generic model and writing analytics evaluations. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge, pages 210–219. ACM. Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1882–1891. Yi Tay, Minh C Phan, Luu Anh Tuan, and Siu Cheung Hui. 2018. Skipflow: incorporating neural coherence features for end-to-end automatic text scoring. In Thirty-Second AAAI Conference on Artificial Intelligence. Bronwyn Woods, David Adamson, Shayne Miel, and Elijah Mayfield. 2017. Formative essay feedback using predictive scoring models. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2071–2080. ACM. Peng Yuxin, He Xiangteng, and Zhao Junjie. 2018. Object-part attention model for fine-grained image classification. IEEE transactions on image processing: a publication of the IEEE Signal Processing Society, 27(3):1487–1500. 8575 Haoran Zhang and Diane Litman. 2017. Word embedding for response-to-text assessment of evidence. In Proceedings of ACL 2017, Student Research Workshop, pages 75–81. Haoran Zhang and Diane Litman. 2018. Co-attention based neural network for source-dependent essay scoring. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 399–409. Haoran Zhang, Ahmed Magooda, Diane Litman, Richard Correnti, Elaine Wang, LC Matsmura, Emily Howe, and Rafael Quintana. 2019. 
erevise: Using natural language processing to provide formative feedback on text evidence usage in student writing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9619–9625. 8576 A Appendices A.1 Topic Words Results Table 9 shows all topic words for the RTAMV P from TCmanual. Table 10 shows all topic words for the RTAMV P from TClda. Table 11 shows all topic words for the RTAMV P from TCpr. Table 12 shows all topic words for the RTAMV P from TCattn. A.2 Specific Example Phrases Results Table 13 shows all specific example phrases for the RTAMV P from TCmanual. Table 14 shows all specific example phrases for the RTAMV P from TClda. Table 15 shows all specific example phrases for the RTAMV P from TCpr. Table 16 shows all specific example phrases for the RTAMV P from TCattn. 8577 Topic 1 Topic 2 Topic 3 Topic 4 care bed farmer school health net fertilizer supplies hospital malaria irrigation fee treatment infect dying student doctor bednet crop midday electricity mosquito seed meal disease bug water lunch water sleeping harvest supply sick die hungry book medicine cheap feed paper generator infect food pencil no biting energy die free kid children bed kid patient go clinical attend officer running Table 9: Topic words of TCmanual. 8578 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 help kenya poverty food money school people hospital years poor like think fertilizer need kids sauri medicine africa world better author crops nets supplies malaria hospitals project good know lifetime water thing children sick water villages things life article farmers afford schools 2008 free sauri time help possible needed donate lunch disease electricity village work think convinced grow right education 2004 diseases helped hard sauri fight dying dollar afford nets medicines change going live proverty problem treatment energy mosquitoes doctors lives alot clothes said family survive learn getting 2008 goals reason states achievable families needs students says gave improved happen place time stop stuff went years doctor 2015 helping health convince lack person adults progress examples help goal important believe hunger cause fees died 2004 changed believe feel hannah tools patients parents text shape year problems happy shows seeds provide 2004 away cure changes countries tell reasons plants cost lunches mosquitos running started difference care convincing fertilizers beds books prevent treat great places shoes fighting farming means home treated support millennium change story wrote able dont wanted dieing common progress little america story solved dollars chores said beds came improve ways agree supply medical meal come patients girl country wants saying irrigation jobs wood night said 2025 achieve makes opinion wont everyday materials bite generator place hope clothing winning afford gone learning death clean program helps community sachs hungry doctors able sleep electricty tells everybody economy progress plant lots suplies impoverished giving small start history conclusion look sickness meals living drink millenium easy paragraph says farms live paper amazing cures read making thats future feed fact attendance easily evidence happened Table 10: Topic words of TClda. 
8579 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 Topic 10 Topic 11 Topic 12 Topic 13 Topic 14 Topic 15 Topic 16 Topic 17 Topic 18 Topic 19 irrigation road diseases adults fight development joyful people midday village millennium backs plenty doctor thing paper end work sleeping fertilizer brighter medicine lifetime villages dirt kids school women access hospital supplies world bed farmers future malaria project jump fees ground care shape chores net crops hannah disease goals bar students bananas medicines patients books nets plant car mosquitoes plan music meal cloth schools treatment pencils site seeds sauri charge economy singing energy mothers today officer outcome market quality everyone lunch feet supply water lack year supporters dancing clothing areas electricity tools time help day kind generator place health rooms years advice family poverty items life targets communities death leaders night glimpse costs africa die chemicals knowledge solutions food millions parents Table 11: Topic words of TCpr. 8580 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 Topic 8 Topic 9 Topic 10 Topic 11 Topic 12 Topic 13 Topic 14 Topic 15 Topic 16 poverty way years lunch goals electricity supplies afford many free school hospital bed project supply fertilizer fight would four serves problems water food lifetime people medicine schools 2004 nets world maintain seeds winning rate villages parents day generator net could kenya crops fees disease used millennium diseases addressed attendance 80 attend cloth also rooms achievable sauri charge students yala every village hunger irrigation help progress passed three running packed together pencils farmers sleeping across lives necessary kids last made energy patients malaria africa medicines site work adults tools enough occurred books connected needed take yet midday end life lack better year 2015 5 future sachs meal worry dying plenty go changes knowledge keep worked though dramatic supporters death plant get outcome learn poor care feed change time away common place today one five family two clinical 2025 treated become solutions first like hard health officer history really along come good set tattered selling targets little doctor crisis clothing see treatment either areas chemicals die minimal whole items malarial hungry almost save preventable dancing harvest millions treatable walked showed easy costs bare cheap met feet ever hannah around impoverished mosquitoes encouraging easily probably Table 12: Topic words of TCattn. 
8581 Category 1 Category 2 Category 3 Category 4 unpaved roads united nations intervention yala sub district hospital malaria common disease preventable treatable tattered clothing safer healthier better life three kids bed two adults rooms packed patients mosquitoes carry malaria infect people biting bare feet out poverty stabilize economy quality life communities not medicine treatment could afford kids die malaria adults sick 20 000 day less than 1 dollar day africa kenya sauri no doctor only clinical officer running hospital bed nets mosquitoes away people save millions lives goals met 2015 2025 no running water electricity bed nets cost 5 dollar 80 villages across sub-sahara africa sad people dying near death preventable cheap medicines treat malaria Category 5 Category 6 Category 7 Category 8 crops dying kids not attend go school progress just four years progress encouraging supporters not afford fertilizer irrigation not afford school fees yala sub district hospital has medicine solutions problems keep people impoverished outcome poor crops kids help chores fetching water wood medicine free charge change poverty stricken areas good lack fertilizer water schools minimal supplies books paper pencils medicine most common diseases poverty history not easy task hard enough food crops harvest feed whole family hungry sick concentrate not energy water connected hospital winning against poverty possible achievable lifetime no midday meal lunch hospital generator electricity bed nets used every sleeping site hunger crisis addressed fertilizer seeds tools needed maintain food supply kids go school now no school fees now serves lunch students school attendance rate way up Table 13: Specific example phrases of TCmanual. 8582 Category 1 Category 2 Category 3 Category 4 Category 5 work hard life time author convince winning fight poverty achievable lifetime children adults easy task better place united nations author convinced winning fight poverty achievable lifetime mosquitoes carry malaria lived dollar better health united states author wants disease called malaria thing history brighter future life communities author convince winning fight proverty come night stuff need things like like books paper pencils winning fight proverty achievable lifetime malarial mosquitoes earn money things need learn life kenya winning fight poverty achievable life time easily adults sick fighting poverty important kids article brighter future solutions problems people impoverished work change thinks important wining fight poverty achievable mosquitoes away hard work wants know article states infect people biting agree author winning fight poverty acheivable away sleeping working hard author provided better life 2008 author thinks better life2008 based article author convince reading article convinced poverty things changed poverty acheivable lifetime Category 6 Category 7 Category 8 Category 9 Category 10 attendance rate amazing progress years good shape kids adults donate money midday meal text says good education 2015 2025 tattered clothes serves lunch students text said went school hungry sick tattered clothing midday meals year girl areas good cheap medicines bare feet served lunch year 2004 trying help goals supposed donating money students wanted learn paragraph says worked hard save millions lives books pencils progress shows winning fight poverty achievable second reason kids attend school treated chemicals second example schools minimal paragraph states girl went schools hospitals progress encouraging 
supporters millennium villages hannah sachs convinced winning school school fees went kenya practical items kids sauri attend school parents afford school fees attendence rate parents money Category 11 Category 12 Category 13 Category 14 Category 15 clean water grow crops millennium village project stop poverty running water electricity water wood feed family millenium village project long time water connected hospital generator electricity fresh water needed help millennium villages project helped world work change patients afford needs help farmers worry change dramatically beat poverty rooms packed patients probably medicines free charge crops dying afford necessary fertilizer irrigation dramatic changes occured villages subsaharan africa ending poverty share beds chores fetching fertilizer knowledge place live want learn recieve treatment fetching water hunger crisis addressed fertilizer seeds tools needed maintain food supply happened years places like doctor clinical officer running hospital feed families dramatic changes occurred villages shows winning fight poverty achievable lifetime doctors clinical hunger crisis adressed millennium development goals want kind poverty water fertilizer knowledge family plant seeds outcome poor change povertystricken areas good poverty assure access receive treatment farmers worried coming years running bare encouraging supporters millennium villages project afford treatment occurred villages subsaharan Category 16 Category 17 Category 18 Category 19 Category 20 yala subdistrict hospital medicine free charge common diseases nets sleeping site sauri plan people poverty achieve goal years later free lunch afford nets stabilize economy quality life communities reach goal took years yala district assure access health care help going school started 2004 preventable treatable people people story says common africa near death achieve goals diseases like poor crops lack common disease africa homeless people hospital good shape district hospital Table 14: Specific example phrases of TClda. 8583 Category 1 brighter future hannah millennium villages project unpaved dirt road bar sauri primary school future hannah sauri primary school villages project millennium development goals village leaders dirt road car jump little kids preventable diseases people many kids diseases people kids die school supplies primary school school fees infect people Table 15: Specific example phrases of TCpr. 
8584 Category 1 Category 2 Category 3 Category 4 Category 5 Category 6 winning fight could feed bed net afford four years progress lifetime year fees students school supplies schools sauri knowledge supplies medicines poverty winning world villages people school work hard books villages occurred 80 across along school fees supplies afford fertilizer afford school fees better medicine water energy winning fight poverty also every diseases kids health net 5 tools crops school fees seeds bed nets help keep hospital electricity connected winning poverty preventable family people care years many villages sauri project farmers rooms patients crops people food attendance rooms end many bed nets 5 also fight poverty afford school fees bed nets outcome poor crops school lunch meal midday supplies problems also people energy many water electricity hospital fertilizer poverty fight winning also would energy learn help progress years kenya africa today lunch students serves midday food supply maintain electricity supplies electricity water energy fight poverty winning people fees school farmers could rate people medicine 2004 5 years keep school fees bed showed lunch could work electricity medicine villages kenya 80 farmers many school lunch schools also fees 2004 also year rate school bed nets used could afford fertilizer four years lifetime poverty year years school showed hospital water farmers needed food supply villages generator energy school supplies little afford enough years four last five day school parents attend bed nets free food also farmers two many poverty school medicine fertilizer hospital bed water electricity also fertilizer supplies also tools years changes fertilizer addressed school schools fees free two electricity water running also generator supply maintain food also tattered years villages kenya project attendance school fees schools lunch free generator electricity energy poverty hunger electricity lunch school crops food farmers fertilizer bed net water water fertilizer energy school medicines fertilizer addressed school supplies crisis Category 7 Category 8 Category 9 Category 10 Category 11 Category 12 electricity running water irrigation set bed showed diseases help students supplies people schools years four free schools medicine medicine electricity tools fertilizer medicines schools also school students attendance poor showed treatment school supplies lunch meal energy people years four three though school schools free supplies fees water electricity connected schools running free charge school maintain supply farmers could crops afford bed dramatic change bed nets villages years 80 poverty many crops fertilizer farmers tools plant students lunch serves school 2004 crops farmers 2004 first food electricity hospital poverty better lives made many worked together end water electricity supplies school energy medicine crops free hospital also lack fertilizer school bed nets better fertilizer medicine enough also achievable lifetime sauri pencils students supplies yet medicine school supplies years hunger school supplies farmers attendance crops bed nets years hospital rooms packed patients malaria good bed net used villages many kenya sauri 80 fertilizer crops lack farmers water water supplies schools free hospital hospital disease four years 2004 food fertilizer crops get supply bed net years food supply hunger crisis fertilizer irrigation crops medicine water schools crops supplies free charge every sleeping site five net costs 5 common diseases sauri net medicines 
Table 16: Specific example phrases of TCattn.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 823–835 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 823 Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung Seoul National University Republic of Korea {jbshin, cpi1234, mysmilish, kjung}@snu.ac.kr Abstract Even though BERT has achieved successful performance improvements in various supervised learning tasks, BERT is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations. To resolve this limitation, we propose a novel deep bidirectional language model called a Transformer-based Text Autoencoder (T-TA). The T-TA computes contextual language representations without repetition and displays the benefits of a deep bidirectional architecture, such as that of BERT. In computation time experiments in a CPU environment, the proposed T-TA performs over six times faster than the BERT-like model on a reranking task and twelve times faster on a semantic similarity task. Furthermore, the T-TA shows competitive or even better accuracies than those of BERT on the above tasks. Code is available at https://github.com/joongbo/tta. 1 Introduction A language model is an essential component of many natural language processing (NLP) applications ranging from automatic speech recognition (ASR) (Chan et al., 2016; Panayotov et al., 2015) to neural machine translation (NMT) (Sutskever et al., 2014; Sennrich et al., 2016; Vaswani et al., 2017). Recently, the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) and its variations have led to significant improvements in learning natural language representation and have achieved state-of-the-art performances on various downstream tasks such as the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and question answering (Rajpurkar et al., 2016). BERT continues to succeed in various unsupervised tasks, such as the N-best list reranking for ASR and NMT (Shin et al., 2019; Salazar et al., 2019), confirming that deep bidirectional language models are useful in unsupervised applications as well. However, concerning its applications to unsupervised learning tasks, BERT is significantly inefficient at computing language representations at the inference stage (Salazar et al., 2019). During training, BERT adopts the masked language modeling (MLM) objective, which is to predict the original word of the explicitly masked word from the input sequence. Following the MLM objective, each contextual word representation should be computed by a two-step process: masking a word in the input and feeding the result to BERT. During the inference stage, this process is repeated n times to obtain the representations of all the words within a text sequence (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019), resulting in a computational complexity of O(n3)1 in terms of the number of words n. Hence, it is necessary to reduce the computational complexity when applying the model to situations where the inference time is critical, e.g., mobile environments and real-time systems (Sanh et al., 2019; Lan et al., 2019). 
Considering this limitation of BERT, we submit a new research question: “Can we construct a deep bidirectional language model with a minimal inference time while maintaining the accuracy of BERT?” In this paper, in response to the above question, we propose a novel bidirectional language model named the Transformer-based Text Autoencoder (T-TA), which has a reduced computational complexity of O(n2) when applying the model to unsupervised applications. The proposed model is trained with a new learning objective named language autoencoding (LAE). The LAE objective, which allows the target labels to be the same as the text input, is to predict every token in the input sequence simultaneously without merely copying 1A complexity of O(n2) is derived from the per-layer complexity of the Transformer (Vaswani et al., 2017). 824 the input to the output. To learn the proposed objective, we devise both a diagonal masking operation and an input isolation mechanism inside the T-TA based on the Transformer encoder (Vaswani et al., 2017). These components enable the proposed TTA to compute contextualized language representations at once while maintaining the benefits of the deep bidirectional architecture of BERT. We conduct a series of experiments on two unsupervised tasks: N-best list reranking and unsupervised semantic textual similarity. First, by conducting runtime experiments in a CPU environment, we show that the proposed T-TA is 6.35 times faster than the BERT-like model in the reranking task and 12.7 times faster in the unsupervised semantic textual similarity task. Second, despite its faster inference time, the T-TA achieves competitive performances relative to BERT on reranking tasks. Furthermore, the T-TA outperforms BERT by up to 8 points in Pearson’s r on unsupervised semantic textual similarity tasks. 2 Related Works When referring to an autoencoder for language modeling, sequence-to-sequence learning approaches have been commonly used. These approaches encode a given sentence into a compressed vector representation, followed by a decoder that reconstructs the original sentence from the sentence-level representation (Sutskever et al., 2014; Cho et al., 2014; Dai and Le, 2015). To the best of our knowledge, however, none of these approaches consider an autoencoder that encodes word-level representations (such as BERT) without an autoregressive decoding process. Many studies have been performed on neural network-based language models for word-level representations. Distributed word representations were proposed and attracted considerable interest, as they were considered to be fundamental building blocks for NLP tasks (Rumelhart et al., 1986; Bengio et al., 2003; Mikolov et al., 2013b). Subsequently, researchers explored contextualized representations of text where each word has a different representation depending on the context (Peters et al., 2018; Radford et al., 2018). Most recently, a Transformer-based deep bidirectional model was proposed and applied to various supervised-learning tasks with remarkable success (Devlin et al., 2019). For unsupervised tasks, researchers have adopted recently developed language-representation models and investigated their effectiveness; a typical example is the N-best list reranking for ASR and NMT tasks. 
In particular, studies have integrated leftto-right and right-to-left language models (Arisoy et al., 2015; Chen et al., 2017; Peris and Casacuberta, 2015) to outperform conventional unidirectional language models (Mikolov et al., 2010; Sundermeyer et al., 2012) in these tasks. Furthermore, BERT-based approaches have been explored and have achieved significant performance improvements on these tasks because bidirectional language models yield the pseudo-log-likelihood of a given sentence, and this score is useful in ranking the n-best hypotheses (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019). Another line of research involves reducing the computation time and memory consumption of BERT. Lan et al. (2019) proposed parameterreduction techniques, factorized embedding parameterization and cross-layer parameter sharing and reported 18 times fewer parameters and a 1.7-fold increase in the training time. Similarly, Sanh et al. (2019) presented a method to pretrain a smaller model that can be fine-tuned for downstream tasks and achieved 1.4 times fewer parameters with a 1.6fold increase in the inference time. However, none of these studies developed methods that directly revise the BERT architecture to reduce the computational complexity during the inference stage. 3 Language Model Baselines In a conventional language modeling task, the ith token xi is predicted using its preceding context x<i = [x1, . . . , xi−1]; throughout this paper, this objective is known as causal language modeling (CLM) following (Conneau and Lample, 2019). As shown in Figure 1a, we can obtain (left-to-right) contextualized language representations HC = [HC 1 , . . . , HC n] after feeding the input sequence to the CLM-trained language model only once, where HC i = hC(x<i) is the hidden representation of the i-th token. This paper takes this unidirectional language model (uniLM) as our speed baseline. However, contextualized language representations obtained from the uniLM are insufficient to accurately encode a given text because future contexts cannot be leveraged to understand the current tokens during the inference stage. Recently, BERT (Devlin et al., 2019) was designed to enable the full contextualization 825 (a) Causal language modeling (b) Masked language modeling (c) Language autoencoding Figure 1: Schematic diagrams of language models for the (a) CLM, (b) MLM, and (c) LAE objectives. of language representations by using the MLM objective, in which some tokens from the input sequence are randomly masked; the objective is to predict the original tokens at the masked positions using only their context. As in Figure 1b, we can obtain a contextualized representation of the i-th token HM i = hM(Mi(x)) by masking the token in the input sequence and feeding it to the MLM-trained model, where Mi(x) = [x1, . . . , xi−1, [MASK], xi+1, . . . , xn] signifies an external masking operation. This paper takes this bidirectional language model (biLM) as our performance baseline. However, this mask-and-predict approach should be repeated n times to obtain all the language representations HM = [HM 1 , . . . , HM n ] because learning occurs only at the masked position during the MLM training stage. Although the resulting language representations are robust and accurate, as a consequence of this repetition, the model is significantly inefficient when applied to unsupervised tasks such as N-best list reranking (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019). 
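For concreteness, the mask-and-predict procedure described above can be sketched as follows (a minimal illustration, not the authors' code; `mlm_model` is a hypothetical callable that maps a token sequence to one hidden vector per position):

```python
# Minimal sketch of mask-and-predict inference with an MLM-trained model.
# `mlm_model` is a hypothetical stand-in for BERT/biLM inference.

MASK = "[MASK]"

def mlm_representations(tokens, mlm_model):
    """Return H^M_i for every position i of `tokens`.

    Requires n separate forward passes (one per masked position), each with
    per-layer cost O(n^2), hence O(n^3) overall for a length-n sequence.
    """
    reps = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]   # M_i(x)
        hidden = mlm_model(masked)                      # one full forward pass
        reps.append(hidden[i])                          # keep only position i
    return reps
```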
4 Proposed Method 4.1 Language Autoencoding In this paper, we propose a new learning objective named language autoencoding (LAE) for obtaining fully contextualized language representations without repetition. The LAE objective, with which the output is the same as the input, is to predict every token in a text sequence simultaneously without merely copying the input to the output. For the proposed task, a language model should reproduce the whole input at once while avoiding overfitting; otherwise, the model outputs only the representation copied from the input representation without learning any statistics of the language. To this end, the flow of information from the i-th input to the i-th output should be blocked inside the model shown in Figure 1c. From the LAE objective, we can obtain fully contextualized language representations HL = [HL 1 , . . . , HL n] all at once, where HL i = hL(x\i) and x\i = [x1, . . . , xi−1, xi+1, . . . , xn]. The method for blocking the flow of information is described in the next section. 4.2 Transformer-based Text Autoencoder In this section, we introduce the novel architecture of the proposed T-TA shown in Figure 2. As indicated by its name, the T-TA architecture is based on the Transformer encoder (Vaswani et al., 2017). To learn the proposed LAE objective, we develop both a diagonal masking operation and an input isolation mechanism inside the T-TA. Both developments are designed to enable the language model to predict all tokens simultaneously while maintaining the deep bidirectional property (see the descriptions in the following subsections). For brevity, we refer to the original paper on the Transformer encoder (Vaswani et al., 2017) for other details regarding the standard functions, such as the multihead attention and scaled dot-product attention mechanisms, layer normalization, and the position-wise fully connected feed-forward network. 4.2.1 Diagonal Masking As shown in Figure 3, a diagonal masking operation is implemented inside the scaled dot-product attention mechanism to be “self-unknown” during the inference stage. This operation prevents information from flowing to the same position in the next layer by masking out the diagonal values in the input of the softmax function. Specifically, the output vector at each position is the weighted sum of the value V at other positions, where the attention weights come from the query Q and the key K. 826 Figure 2: Architecture of our T-TA. The highlighted box and dashed arrows are the innovations presented in this paper. The diagonal mask becomes meaningless when we use it together with a residual connection or utilize it within the multilayer architecture. To retain the self-unknown functional, we can remove the residual connection and adopt a single-layer architecture. However, it is essential to utilize a deep architecture to understand the intricate patterns of natural language. To this end, we further develop the architecture described in the next section. 4.2.2 Input Isolation We now propose an input isolation mechanism to ensure that the residual connection and the multilayer architecture are compatible with the abovementioned diagonal masking operation. In the input isolation mechanism, the key and value inputs (K and V, respectively) of all encoding layers are isolated from the network flow and are fixed to the sum of the token embeddings and the position embeddings. 
Hence, only the query inputs (Q) are updated across the layers during the inference stage by referring to the fixed output of the embedding layer.

Figure 3: Diagonal masking of the scaled dot-product attention mechanism. The highlighted box and dashed arrow represent the innovations reported in this paper.

Additionally, we input the position embeddings to the Q of the very first encoding layer, thereby making the self-attention mechanism effective. Otherwise, the attention weights will be the same at all positions, and thus, the first self-attention mechanism will function as a simple average of all the input representations (except the "self" position). Finally, we apply the residual connection only to the query to completely maintain unawareness. The dashed arrows in Figure 2 show the proposed input isolation mechanism inside the T-TA. By using diagonal masking and input isolation in conjunction, the T-TA can have multiple encoder layers, enabling the T-TA to obtain high-quality contextual language representations after feeding a sequence into the model only once.

4.3 Discussion and Analysis

Heretofore, we have introduced the new learning objective named LAE and the novel deep bidirectional language model named T-TA. We will verify the architecture of the proposed T-TA in Section 4.3.1 and compare our model with the recently proposed strong baseline BERT in Section 4.3.2.

4.3.1 Verification of the Architecture

Here, we discuss in detail how diagonal masking with input isolation preserves the "self-unknown" property. As shown in Figure 2, we have two input embeddings, namely, token embeddings X = [X_1, ..., X_n]^T ∈ R^{n×d} and position embeddings P = [P_1, ..., P_n]^T ∈ R^{n×d}, where d is the embedding dimension. From the input isolation mechanism, the key and value K = V = X + P carry the information of the input tokens and are fixed in all layers, but the query Q^l is updated across the layers during the inference stage, starting from the position embeddings Q^1 = P in the first layer. Let us consider the l-th encoding layer's query input Q^l and its output H^l = Q^{l+1}:

H^l = SMSAN(Q^l, K, V) = g(Norm(Add(Q^l, f(Q^l, K, V)))),   (1)

where SMSAN(·) is the self-masked self-attention network, namely, the encoding layer of the T-TA, g(x) = Norm(Add(x, FeedForward(x))) signifies the two upper subboxes of the encoding layer in Figure 2, and f(·) is the (multihead) diagonal-masked self-attention (DMSA) mechanism. As illustrated in Figure 3, the DMSA module computes Z^l as follows:

Z^l = f(Q^l, K, V) = DMSA(Q^l, K, V) = SoftMax(DiagMask(Q^l K^T / √d)) V.   (2)

In the DMSA module, the i-th element of Z^l = [Z^l_1, ..., Z^l_n]^T is always computed as a weighted average of the fixed V while discarding the information of the i-th token X_i in V_i. Specifically, Z^l_i is the weighted average of V with the attention weight vector s^l_i, i.e., Z^l_i = s^l_i V, where s^l_i = [s^l_1, ..., s^l_{i−1}, 0, s^l_{i+1}, ..., s^l_n] ∈ R^{1×n}. Here, we note that the DMSA mechanism is related only to the "self-unknown" property, since no token representations are referred to each other in the subsequent transformations from Z^l to H^l. Therefore, we can guarantee that the i-th element of the query representation in any layer, Q^l_i, never encounters the corresponding token representation, starting from Q^1_i = P_i. Consequently, the T-TA preserves the "self-unknown" property during the inference stage while maintaining the residual connection and multilayer architecture.
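To make the DMSA computation in Eq. (2) concrete, the following is a minimal single-head NumPy sketch of ours (not the released implementation); the multihead projections, LayerNorm, and the feed-forward sub-block of the full encoding layer g(·) are omitted for brevity:

```python
# Minimal single-head sketch of diagonal-masked self-attention (Eq. 2).
import numpy as np

def dmsa(Q, K, V):
    """Q: (n, d) query states; K = V = X + P: (n, d) fixed key/value states."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)             # (n, n) scaled dot products
    np.fill_diagonal(scores, -1e9)            # diagonal mask: position i never attends to itself
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                        # Z^l: weighted average of V, excluding "self"

# Stacking layers with input isolation (simplified: LayerNorm and the
# feed-forward block are dropped here):
#   Q = P                                     # first-layer query is the position embeddings
#   for _ in range(num_layers):
#       Q = Q + dmsa(Q, X + P, X + P)         # residual applied to the query stream only
```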
4.3.2 Comparison with BERT There are several differences between the strong baseline BERT (Devlin et al., 2019) and the proposed T-TA, while both models learn deep bidirectional language representations. • While BERT uses an external masking operation in the input, the T-TA has an internal masking operation in the model, as we intend. Additionally, while BERT is based on a denoising autoencoder, the T-TA is based on an autoencoder. With this novel approach, the T-TA does not need mask-and-predict repetition during the computing of contextual language representations. Consequently, we reduce the computational complexity from O(n3) with the BERT to O(n2) with the T-TA in applications to unsupervised learning tasks. • As in the T-TA, feeding an intact input (without masking) into BERT is also possible. However, we argue that this process will significantly diminish the model performance in unsupervised applications since the MLM objective does not consider intact tokens much. In the next section, we include experiments that reveal the model performance with intact inputs (described in Tables 1, 3, and 4). For further reference, we also suggest a previous study that reported the same opinion (Salazar et al., 2019). 5 Experiments To evaluate the proposed method, we conduct a series of experiments. We first evaluate the contextual language representations obtained from the T-TA on N-best list reranking tasks. We then apply our method to unsupervised semantic textual similarity (STS) tasks. The following sections will demonstrate that the proposed model is much faster than BERT during the inference stage (Section 5.2) while showing competitive or even better accuracies than those of BERT on reranking tasks (Section 5.3) and STS tasks (Section 5.4). 5.1 Language Model Setups The main purpose of this paper is to compare the proposed T-TA with a biLM trained with the MLM objective. For a fair comparison, each model has the same number of parameters based on the Transformer as follows: |L| = 3 self-attention layers with d = 512 input and output dimensions, h = 8 attention heads, and df = 2048 hidden units for the position-wise feed-forward layers. We use a Gaussian error linear unit (gelu) activation function (Hendrycks and Gimpel, 2016) rather than the standard rectified linear unit (relu) following OpenAI GPT (Radford et al., 2018) and BERT (Devlin et al., 2019). In our experiments, we set the position embeddings to be trainable following BERT (Devlin et al., 2019) rather than a fixed sinusoid (Vaswani et al., 2017) with supported sequence lengths up to 128 tokens. We use WordPiece embeddings (Wu et al., 2016) with a vocabulary of approximately |V | ≃30, 000 tokens. The weights 828 of the embedding layer and the last softmax layer of the Transformer are shared. For the speed baseline, we also implement a uniLM that has the same number of parameters as the T-TA and biLM. For training, we create a training instance consisting of a single sentence with [BOS] and [EOS] tokens at the beginning and end of each sentence, respectively. We use 64 sentences as the training batch and train the language models over 1M steps for ASR and 2M steps for NMT. We train the language models with Adam (Kingma and Ba, 2014) with an initial learning rate of 1e −4 and coefficients of β1 = 0.9 of β2 = 0.999; the learning rate is set to warm up over the first 50k steps, and the learning rate exhibits linear decay. We use a dropout probability of 0.1 on all layers. 
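The hyperparameters above can be gathered into a single configuration for reference (an illustrative grouping on our part; the key names are not those of any released code):

```python
# Illustrative summary of the Section 5.1 model and training settings.
TTA_CONFIG = {
    "num_layers": 3,
    "hidden_size": 512,
    "num_attention_heads": 8,
    "ffn_size": 2048,
    "activation": "gelu",
    "max_seq_length": 128,
    "vocab_size": 30_000,          # WordPiece, approximate
    "tie_embeddings": True,        # embedding and softmax weights shared
    "batch_size": 64,              # sentences per training step
    "train_steps": {"asr": 1_000_000, "nmt": 2_000_000},
    "optimizer": "adam",
    "learning_rate": 1e-4,
    "adam_betas": (0.9, 0.999),
    "warmup_steps": 50_000,
    "lr_schedule": "linear_decay",
    "dropout": 0.1,
}
```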
Our implementation is based on Google’s official code for BERT2. To train the language models that we implement, we use an English Wikipedia dump (approximately 13 GB in size) containing approximately 120M sentences. The trained models are used for reranking in NMT and unsupervised STS tasks. For the ASR reranking task, we use additional in-domain training data, namely, 4.0 GB of normalized text data from the official LibriSpeech corpus containing approximately 40M sentences. 5.2 Runtime Analysis We first measure the runtime of each language model to compute the contextual language representation HL ∈Rn×d of a given text sequence. In the unsupervised STS tasks, we directly use HL for the analysis. In the case of the reranking task, further computation is required: we compute Softmax(HLET ) to obtain the likelihood of each token, where E ∈R|V |×d is the weight parameter of the softmax layer. Therefore, the computational complexity of the reranking task is larger than that of the STS task. To measure the runtime, we use an Intel(R) Core(TM) i7-6850K CPU (3.60 GHz) and the TensorFlow 1.12.0 library with Python 3.6.8 on Ubuntu 16.04.06 LTS. In each experiment, we measure the runtime 50 times and average the results. Figure 4 shows that the T-TA exhibits faster runtimes than the biLM, and the gap between the T-TA and biLM increases as the sentence becomes longer. To facilitate a numerical comparison, we set the standard number of words to 20, which is approxi2https://github.com/google-research/bert Figure 4: Average runtimes of each model according to the number of words on STS and reranking tasks, subscripted as sts and rrk, respectively. mately the average number of words in a contemporary English sentence (DuBay, 2006). In this setup, in the STS tasks, the T-TA takes approximately 9.85 ms, while the biLM takes approximately 125 ms; hence, the T-TA is 12.7 times faster than the biLM. In the reranking task, the T-TA is 6.35 times faster than the biLM (which is still significant); this reduction occurs because the repetition of the biLM is related only to computing HL rather than Softmax(HLET ). For the visual clarity of Figure 4, we omit the runtime results of the uniLM, which is as fast as the T-TA (see Appendix B.1). With such a fast inference time, we next demonstrate that the T-TA is as accurate as BERT. 5.3 Reranking the N-best List To evaluate the language models, we conduct experiments on the unsupervised task of reranking the N-best list. In these experiments, we apply each language model to rerank the 50 best candidate sentences, which are obtained in advance using each sequence-to-sequence model on ASR and NMT. The ASR and NMT models we implement are detailed in Appendices A.1 and A.2. We rescore the sentences by linearly interpolating two scores from a sequence-to-sequence model and each language model as follows: score = (1 −λ) · scores2s + λ · scorelm, where scores2s is the score from the sequence-tosequence model, scorelm is the score from the language model calculated by the sum (or mean) of the log-likelihood of each token, and the interpolation weight λ is set to a value that leads to the best performance in the development set. 829 One of the strong baseline language models, the pretrained BERT-base-uncased model (Devlin et al., 2019), is used for reranking tasks. We also include the reranking results from the traditional count-based 5-gram language models trained on each dataset using the KenLM library (Heafield, 2011). 
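A minimal sketch of this rescoring step, assuming a hypothetical `lm_token_logprobs` function that returns one log-probability per token of a hypothesis:

```python
# Interpolate the sequence-to-sequence score with the language-model score
# (sum or mean of per-token log-likelihoods) and pick the best candidate.

def rerank(hypotheses, s2s_scores, lm_token_logprobs, lam=0.4, reduce="sum"):
    """hypotheses: list of token lists; s2s_scores: list of floats."""
    best, best_score = None, float("-inf")
    for hyp, s2s in zip(hypotheses, s2s_scores):
        token_lps = lm_token_logprobs(hyp)
        lm = sum(token_lps) if reduce == "sum" else sum(token_lps) / len(token_lps)
        score = (1.0 - lam) * s2s + lam * lm
        if score > best_score:
            best, best_score = hyp, score
    return best
```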
We note that the T-TA and biLM (including BERT) assign the pseudo-log-likelihood to the score of a given sentence, whereas the uniLM assigns the log-likelihood. Because the reranking task is based on the relative scores of the n-best hypotheses, the fact that the bidirectional models yields the pseudo-log-likelihood of a given sentence does not impact this task (Wang and Cho, 2019; Shin et al., 2019; Salazar et al., 2019). 5.3.1 Results on ASR For reranking in ASR, we use prepared N-best lists obtained from dev and test sets using Seq2SeqASR, which we train on the LibriSpeech ASR corpus. Additionally, we use the N-best lists obtained from (Shin et al., 2019) to confirm the robustness of the language models in a testing environment. Table 1 shows the word error rates (WERs) for each method after reranking. The interpolation weights λ are 0.3 or 0.4 in all N-best lists for ASR. First, we confirm that the bidirectional models trained with the LAE (T-TA) and MLM (biLM) objectives consistently outperform the uniLM trained with the CLM objective. The performance gains from reranking are much lower in the better base system Seq2SeqASR, and it is evidently challenging to rerank the N-best list using a language model if the speech recognition model performs well enough. Interestingly, the T-TA is competitive with (or even better than) the biLM; this may result from the gap between the training and testing of the biLM: the biLM predicts multiple masks at a time when training but predicts only one mask at a time when testing. Moreover, the 3-layer TTA is better than the 12-layer BERT-base, showing that in-domain data are critical to language model applications. Finally, we note that feeding an intact input to BERT (the corresponding model is denoted as “w/ BERT\M” in Table 1) causes the model to underperform relative to the other models, demonstrating that the mask-and-predict approach is necessary for effective reranking. Method dev test clean other clean other Shin et al. 7.17 19.79 7.25 20.37 w/ n-gram 5.62 16.85 5.75 17.72 w/ ∗uniSANLMw 6.05 17.32 6.11 18.13 w/ ∗biSANLMw 5.52 16.61 5.65 17.37 w/ BERT 5.24 16.56 5.38 17.46 w/ BERT\M 7.08 19.61 7.14 20.18 w/ uniLM 5.07 16.20 5.14 17.00 w/ biLM 4.94 16.09 5.14 16.81 w/ T-TA 4.98 16.09 5.11 16.91 Seq2SeqASR 4.11 12.31 4.31 13.14 w/ n-gram 3.94 11.93 4.15 12.89 w/ BERT 3.72 11.59 3.97 12.46 w/ BERT\M 4.09 12.26 4.28 13.15 w/ uniLM 3.82 11.73 4.05 12.63 w/ biLM 3.73 11.53 3.97 12.41 w/ T-TA 3.67 11.56 3.97 12.38 Table 1: WERs after reranking with each language model on LibriSpeech. The ‘other’ sets are recorded in noisier environments than the ‘clean’ sets. Bold font denotes the best performance on each subtask, and ∗ signifies a word-level language model from Shin et al. (2019). 5.3.2 Results on NMT To compare the reranking performances in another domain, NMT, we again prepare N-best lists using Seq2SeqNMT3 from the WMT13 German-toEnglish (De→En) and French-to-English (Fr→En) test sets. Table 2 shows the bilingual evaluation understudy (BLEU) scores for each method after reranking. Each interpolation weight becomes a value that shows the best performance on each test set with each method in NMT. The interpolation weights λ are 0.4 or 0.5 in the N-best lists for NMT. We confirm again that the bidirectional models trained with the LAE and MLM objectives perform better than the uniLM trained with the CLM objective. 
Additionally, the Fr→En translation has less effect on the reranking than the De→En translation because the base NMT system for Fr→En is better than that for De→En. The 12-layer BERT model appears much better than the other models at reranking on NMT; hence, the N-best hypotheses of the NMT model seem to be more indistinguish3The Seq2Seq models for De→En and Fr→En are trained independently using the t2t library (Vaswani et al., 2018). 830 Method De→En Fr→En Seq2SeqNMT 27.83 29.63 w/ n-gram 28.41 30.04 w/ BERT 29.31 30.52 w/ uniLM 28.80 30.21 w/ biLM 28.76 30.32 w/ T-TA 28.83 30.20 Table 2: BLEU scores after reranking with each language model on WMT13. Bold font denotes the best performance on each subtask, and the underlined values signify the best performances in our implementations. able than those of the ASR model from a language modeling perspective. All the reranking results on the ASR and NMT tasks demonstrate that the proposed T-TA performs both efficiently (similar to the uniLM) and effectively (similar to the biLM). 5.4 Unsupervised STS In addition to the reranking task, we apply the language models to an STS task, that is, measuring the similarity between the meaning of sentence pairs. We use the STS Benchmark (STS-B) (Cer et al., 2017) and Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014) datasets, both of which have a set of sentence pairs with corresponding similarity scores. The evaluation metric of STS is Pearson’s r between the predicted similarity scores and the reference scores of the given sentence pairs. In this section, we address the unsupervised STS task to examine the inherent ability of each language model to obtain contextual language representations, and we mainly compare the language models that are trained on the English Wikipedia dump. To compute the similarity score of a given sentence pair, we use the cosine similarity of two sentence representations, where each representation is obtained by averaging each language model’s contextual representations. Specifically, the contextual representations of a given sentence are the outputs of the final encoding layer of each model, denoted as context in Tables 3 and 4. For comparison, we use noncontextual representations, which are obtained from the outputs of the embedding layer, denoted as embed in Tables 3 and 4. As a strong baseline for unsupervised STS tasks, we also include the 12-layer BERT model (Devlin Method STS-B-dev STS-B-test context embed context embed BERT 64.78 54.22 BERT\M 59.17 60.07 47.91 48.19 BERT[CLS] 29.16 17.18 uniLM 56.25 63.87 39.57 55.00 uniLM[EOS] 40.75 38.30 biLM 59.99 50.76 biLM\M 53.20 58.80 36.51 49.08 T-TA 71.88 54.75 62.27 44.74 GloVe 52.4 40.6 Word2Vec 70.0 56.5 Table 3: Pearson’s r×100 results on the STS-B dataset. “-” denotes an infeasible value, and bold font denotes the top 2-performing models on each subtask. et al., 2019), and we employ BERT in the maskand-predict approach for computing the contextual representations of each sentence. Note that we use the most straightforward approach for the unsupervised STS task to focus on comparing token-level language representations. 5.4.1 Results on STS-B The STS-B dataset has 5749/1500/1379 sentence pairs with train/dev/test splits and corresponding scores ranging from 0 to 5. We test the language models on the STS-B-dev and STS-B-test sets using the simplest approach on the unsupervised STS task. 
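A minimal sketch of this scoring procedure, assuming a hypothetical `encode` function that returns the (n × d) final-layer representations of a tokenized sentence:

```python
# Unsupervised STS score: mean-pool the contextual token vectors of each
# sentence and take the cosine similarity of the two sentence vectors.
import numpy as np

def sts_score(sent_a, sent_b, encode):
    vec_a = encode(sent_a).mean(axis=0)      # sentence vector = mean of token vectors
    vec_b = encode(sent_b).mean(axis=0)
    return vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
```

The resulting scores are then correlated against the gold similarity scores with Pearson's r.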
As additional baselines, we include the results of GloVe (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013a) from the official sites of STS Benchmark4. Table 3 shows our T-TA trained with the LAE objective best captures the semantics of a sentence over the Transformer-based language models. Remarkably, our 3-layer T-TA trained on a relatively small dataset outperforms the 12-layer BERT trained on a larger dataset (Wikipedia + BookCorpus). Furthermore, the embedding representations are trained better by the CLM objective than by the other language modeling objectives; we suppose that the uniLM depends strongly on the embedding layer due to its unidirectional context constraint. Since the uniLM encodes all contexts in the last token, [EOS], we also use the last representation as the sentence representation; however, this approach does not outperform the average sentence 4http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark 831 Method SICK-test context embed BERT 64.31 BERT\M 61.18 64.63 uniLM 54.20 65.69 biLM 58.98 biLM\M 53.79 62.67 T-TA 69.49 60.77 Table 4: Pearson’s r × 100 results on the SICK dataset. “-” denotes an infeasible value, and bold font denotes the best performance on each subtask. representation. Similarly, BERT has a special token, [CLS], which is trained for the “next sentence prediction” objective; thus, we also use the [CLS] token to see how this model learns the sentence representation, but it significantly underperforms the other models. 5.4.2 Results on SICK We further evaluate the language models on the SICK dataset, which consists of 4934/4906 sentence pairs with training/testing splits and scores ranging from 1 to 5. The results are in Table 4, from which we obtain the same observations as those reported for STS-B. All results on unsupervised STS tasks demonstrate that the T-TA learns textual semantics best using the token-level LAE objective. 6 Conclusion In this work, we propose a novel deep bidirectional language model, namely, the T-TA, to eliminate the computational overload of applying BERT to unsupervised applications. Experimental results on N-best list reranking and unsupervised STS tasks demonstrate that the proposed T-TA is significantly faster than the BERT-like approach, and its encoding ability is competitive with (or even better than) that of BERT. Acknowledgments K. Jung is with ASRI, Seoul National University, Korea. This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No.10073144) and by the NRF grant funded by the Korean government (MSIT) (NRF2016M3C4A7952587). References Ebru Arisoy, Abhinav Sethy, Bhuvana Ramabhadran, and Stanley Chen. 2015. Bidirectional recurrent neural network language models for automatic speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5421–5425. IEEE. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14. William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. 
In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960–4964. IEEE. Xie Chen, Anton Ragni, Xunying Liu, and Mark JF Gales. 2017. Investigating bidirectional recurrent neural network language models for speech recognition. In INTERSPEECH, pages 269–273. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in neural information processing systems, pages 577–585. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057–7067. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. 832 William H DuBay. 2006. The classic readability studies. Impact Information, Costa Mesa, California. Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth workshop on statistical machine translation, pages 187–197. Association for Computational Linguistics. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv preprint arXiv:1606.08415. Takaaki Hori, Shinji Watanabe, Yu Zhang, and William Chan. 2017. Advances in joint ctc-attention based end-to-end speech recognition with a deep cnn encoder and rnn-lm. Proc. Interspeech 2017, pages 949–953. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 1–8. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. 
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Alvaro Peris and Francisco Casacuberta. 2015. A bidirectional recurrent neural language model for machine translation. Procesamiento del Lenguaje Natural, 55:109–116. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1986. Learning representations by backpropagating errors. nature, 323(6088):533–536. Julian Salazar, Davis Liang, Toan Q Nguyen, and Katrin Kirchhoff. 2019. Pseudolikelihood reranking with masked language models. arXiv preprint arXiv:1910.14659. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96. Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. 1999. Byte pair encoding: A text compression scheme that accelerates pattern matching. Technical report, Technical Report DOITR-161, Department of Informatics, Kyushu University. Joonbo Shin, Yoonhyung Lee, and Kyomin Jung. 2019. Effective sentence scoring method using bert for speech recognition. In Asian Conference on Machine Learning, pages 1081–1093. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association. 833 Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 193– 199. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. 
Alex Wang and Kyunghyun Cho. 2019. Bert has a mouth, and it must speak: Bert as a markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30–36. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019. Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, NelsonEnrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, et al. 2018. Espnet: Endto-end speech processing toolkit. Proc. Interspeech 2018, pages 2207–2211. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. 834 Appendix A Implementation Details A.1 Setup for the ASR System This section introduces our implementation of the ASR system. For the input features, we use an 80-band Melscale spectrogram derived from the speech signal. The target sequence is processed in 5K caseinsensitive subword units created via unigram byte-pair encoding (Shibata et al., 1999). We use an attention-based encoder-decoder model as our acoustic model. The encoder is a 5-layer bidirectional long short-term memory (LSTM) network, and there are bottleneck layers that conduct a linear transformation between every LSTM layer. Additionally, there is a VGG module before the encoder, and it reduces the number of encoding time steps by one-quarter through two max-pooling layers. The decoder is a 2-layer bidirectional LSTM network with a location-aware attention mechanism (Chorowski et al., 2015). All the layers have 1024 hidden units. The model is trained with an additional connectionist temporal classification (CTC) objective function because the left-to-right constraint of CTC helps learn alignments between speech-text pairs (Hori et al., 2017). Our model is trained for 20 epochs on 960 h of LibriSpeech training data using the Adadelta optimizer (Zeiler, 2012). Using this acoustic model, we obtain the 50 best decoded sentences for each input audio file through the hybrid CTC-attentionbased scoring (Hori et al., 2017) method. For Seq2SeqASR, we additionally use a pretrained recurrent neural network language model (RNNLM) to combine the log-probability plm of the RNNLM during decoding as follows: log p(yn|y1:n−1) = log pam(yn|y1:n−1) + β log plm(yn|y1:n−1), (3) where β is set to 0.7. We use the efficient spatial pyramid network (ESPNet) toolkit (Watanabe et al., 2018) for this implementation. Table 5 shows the oracle word error rates (WERs) of the 50 best lists measured assuming that the best sentence is always picked from the candidates. We also include the oracle WERs from the 50 best lists of (Shin et al., 2019). Method dev test clean other clean other Shin et al. 7.17 19.79 7.26 20.37 oracle 3.18 12.98 3.19 13.61 Seq2SeqASR 4.11 12.31 4.31 13.14 oracle 1.80 7.90 1.96 8.39 Table 5: Oracle WERs of the 50 best lists on LibriSpeech from each ASR system. A.2 Setup for the NMT System We implement the standard Transformer model (Vaswani et al., 2017) using the Tensor2Tensor library (Vaswani et al., 2018) for NMT. 
Both the encoder and the decoder of the Transformer consist of 6 layers with 512 hidden units, and the number of self-attention heads is 8. The maximum number of input tokens is set to 256, and we use a shared vocabulary of size 32k. For effective training, we let the token embedding layer and the last softmax layer share their weights. The other hyperparameters of our translation system follow the standard transformer_base_single_gpu setting in Google’s official Tensor2Tensor repository5. We train the baseline model on the standard WMT13 Fr→En and De→En datasets with 250k steps using the Adam optimizer (Kingma and Ba, 2014). We use linear-warmup-square-root-decay learning rate scheduling with the default learning rate (2.5e-4) and number of warmup steps (16k). Using this baseline translation model, we obtain the 50 best decoded sentences for each source through the beam search. The oracle BLEU scores for the NMT system are shown in Table 6. Method WMT13 De→En Fr→En Seq2SeqNMT 27.83 29.63 oracle 38.18 39.58 Table 6: Oracle BLEU scores of the 50 best lists on WMT13 B Additional Experiments B.1 Runtimes of the uniLM and T-TA As mentioned in Section 5.2, we also measure the runtimes of the uniLM we implement. Figure 5 5https://github.com/tensorflow/tensor2tensor 835 Figure 5: Runtimes according to the number of words for the uniLM and T-TA. Figure 6: Runtimes according to the number of words for the biLM and T-TA in the GPU-augmented environment. shows the average runtimes of the uniLM and the TTA for the number of words in a sentence. Since we use subword tokens, the number of words nw and the number of tokens n can be different (nw ≤n). B.2 Runtimes on a GPU Additionally, we similarly measure the runtimes in a GPU-augmented environment (using GeForce GTX 1080 Ti). Figure 6 shows the average runtimes of the biLM and the T-TA for the number of words in a sentence. In our 20-word standard in the STS task, the T-TA takes approximately 2.51 ms, whereas biLM takes approximately 4.72 ms, showing that the T-TA is 1.88 times faster than the biLM. Compared to the CPU-only environment, the speed difference is significantly reduced due to the support offered by the GPU. Considering Figure 4, however, the CPU-only environment and GPUaugmented environment show a similar tendency: the longer the sentence is, the more significant the difference in the runtime between the T-TA and the biLM. B.3 Perplexity and Reranking In general, perplexity (PPL) is a measure of how well the language model is trained. To investigate the alignment of the PPL and reranking, we compute the PPL of reference sentences from the LibriSpeech dev-clean and test-clean sets using each language model. We can obtain the pseudoperplexity (pPPL) from the biLM and T-TA since they do not follow the product rule, unlike the uniLM. Note that we compute the subword-level (p)PPL (not word-level); these values are valid only in our vocabulary. Method [WER] (p)PPLa (p)PPLm dev clean uniLM [3.82] 341.5 70.80 biLM [3.73] (76.49) (11.93) T-TA [3.67] (293.4) (11.69) test clean uniLM [4.05] 495.5 73.18 biLM [3.97] (75.43) (12.72) T-TA [3.97] (590.0) (12.43) Table 7: (pseudo)Perplexities and corresponding WERs of the language models on LibriSpeech. We find that the WERs are better aligned with the median of pPPLm than with the average pPPLa. Interestingly, the pPPLa of the T-TA is similar to the PPLa of the uniLM, but the pPPLm of the TTA is similar to that of the biLM. 
We additionally discover that if the length of a sentence is short, the T-TA shows a very high PPL, even higher than that of the uniLM.
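A minimal sketch of how such sentence-level (pseudo-)perplexities and their corpus statistics can be computed, assuming a hypothetical `token_logprobs` function that returns the per-token (pseudo-)log-likelihoods of a sentence (natural log):

```python
# Subword-level (pseudo-)perplexity per sentence, plus corpus mean and median,
# corresponding to the (p)PPL_a and (p)PPL_m columns reported above.
import math
import statistics

def sentence_pseudo_perplexity(logprobs):
    return math.exp(-sum(logprobs) / len(logprobs))

def corpus_ppl_stats(sentences, token_logprobs):
    ppls = [sentence_pseudo_perplexity(token_logprobs(s)) for s in sentences]
    return statistics.mean(ppls), statistics.median(ppls)
```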
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8585–8592 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8585 Clinical Concept Linking with Contextualized Neural Representations Elliot Schumacher1 Andriy Mulyar2∗ Mark Dredze1 1Johns Hopkins University 2Virginia Commonwealth University {eschumac,mdredze}@cs.jhu.edu [email protected] Abstract In traditional approaches to entity linking, linking decisions are based on three sources of information – the similarity of the mention string to an entity’s name, the similarity of the context of the document to the entity, and broader information about the knowledge base (KB). In some domains, there is little contextual information present in the KB and thus we rely more heavily on mention string similarity. We consider one example of this, concept linking, which seeks to link mentions of medical concepts to a medical concept ontology. We propose an approach to concept linking that leverages recent work in contextualized neural models, such as ELMo (Peters et al., 2018), which create a token representation that integrates the surrounding context of the mention and concept name. We find a neural ranking approach paired with contextualized embeddings provides gains over a competitive baseline (Leaman et al., 2013). Additionally, we find that a pre-training step using synonyms from the ontology offers a useful initialization for the ranker. 1 Introduction Medical concept linking produces structured topical content from clinical free text (Aronson and Lang, 2010). Healthcare providers often refer to medical concepts in clinical text notes that are absent from associated health record metadata despite their importance to understanding a patient’s medical status. For example, in The patient reports a history of seizure disorder..., the phrase seizure disorder refers to the concept epilepsy contained within the Unified Medical Language System (UMLS) ontology (Bodenreider, 2004). However, this may be absent from metadata as it is not part of the current diagnosis. Concept mentions can use non-standard ∗Contribution performed during an internship at Johns Hopkins University. terms (e.g. epilepsy), thus concept linking requires non-lexical methods. Additionally, some terms (cancer) are ambiguous and could refer to multiple concepts (breast cancer, colon cancer, etc.) The related task of Entity Linking – linking named entities (people, places, and organizations) to a knowledge base – has been explored in nonmedical domains (Dredze et al., 2010; Durrett and Klein, 2014; Gupta et al., 2017). Entity linking systems consider three sources of information: 1) similarity between mention strings and names for the KB entity; 2) comparison of the document context to information about the KB entity (e.g. entity description); 3) information contained in the KB, such as entity popularity or inter-entity relations. In contrast to the dense KBs in entity linking, concept linking uses sparse ontologies, which contain a unique identifier (CUI), title, and links to synonyms and related concepts, but rarely longform text. For example, while the concept epilepsy has many synonyms in UMLS, it has no definition or other long description. Furthermore, UMLS concept names are more formal than clinical notes, making mention matching challenging. 
Therefore, we need an approach that can use local context from the mention (surrounding sentence), and whatever information may be present in the ontology to build a contextualized non-lexical representation for matching. Additionally, Entity Linking systems are often able to leverage greater amounts of annotated data, which are not available in the clinical space. Text that does not have restrictive privacy protections can be annotated more easily through crowdsourcing, or other sources of non-gold standard data collected (e.g., Wikipedia cross-links). As the annotation of clinical notes is expensive due to the knowledge required of annotators and the protected status of clinical records, any effort in clinical concept linking must focus on leveraging a small amount 8586 of annotations, and using larger amounts of related or unannotated data when possible. We propose learning contextualized representations that leverage both free text and information from knowledge bases. We train a contextualized language model (Peters et al., 2018) on unannotated clinical text, leveraging sentence context to construct a mention. We explore several methods of building representations of the mention span and concept, including pooling and attention, and pre-training our linker with additional data from the ontology to augment the small amount of annotated data present. The resulting ranker outperforms a non-contextualized version of our model, and beats the previous best performing system (Leaman et al., 2013) in most metrics. 2 Concept Linking Concept linking (alternatively: named entity recognition, entity normalization), has a long history (Pradhan et al., 2013; Luo et al., 2019) in the clinical NLP community, with common approaches including generating lexical variations to increase matches (Metamap) (Aronson, 2001; Aronson and Lang, 2010), dictionary matching algorithms (Kipper-Schuler et al., 2008; Savova et al., 2010), rule based systems (D’Souza and Ng, 2015), and mention/ontology context overlap (Aggarwal and Barker, 2015). Learned ensembles can also be effective (Rajani et al., 2017). Concept linking has also been applied to bio-medical literature (Do˘gan et al., 2014; Zheng et al., 2015; Tsai and Roth, 2016; Zhao et al., 2019) and is most similar to the task of entity linking (Dredze et al., 2010; Durrett and Klein, 2014; Gupta et al., 2017; Mueller and Durrett, 2018). Similar to our approach, Choi et al. (2016) learn representations of concepts in UMLS. While we cannot make a direct comparison since they do not cover all of our KB (SNOMEDCT), initial experiments with their embeddings performed worse than our method. While some jointly consider the task of mention finding and linking (Durrett and Klein, 2014), we follow the more common convention of separating the two and assuming gold mention spans (Leaman et al., 2013; D’Souza and Ng, 2015). Formally, we are given a mention m in a document and must select the best CUI (concept) c from an ontology/KB, or CUI-less if no relevant concept exists. Many systems utilize a rule-based approach – often as a pre-processing step – that uses the trainFigure 1: Architecture for our neural ranker. The input consists of gold standard mention string representation m (purple), gold standard concept representation c+ (blue), and n randomly selected negative concept representation c−pairings (red). The ELMO hidden states are noted as h, and the hidden states of our feed forward neural network are noted as d. 
To build our ELMO representations for m, c+ and c−, we select the representation from the lowest layer of the model. ing data to augment a dictionary (D’Souza and Ng, 2015; Luo et al., 2019). While this approach does quite well, it poorly generalizes to unseen mentions or new domains.1 Therefore, our work will focus on a learned system and compare it to similar baselines. While related to concept linking, entity linking requires a different solution due to several factors. Many entity linking systems (Upadhyay et al., 2018; Kolitsas et al., 2018) leverage context from a large document, such as Wikipedia, to make linking decisions, while a similar source is not present in UMLS. Further, earlier work (Zheng et al., 2014) showed that standard Entity Linking systems don’t work well on the related domain of biomedical journal literature, which suggests that separate solutions are required. 3 Methods Our concept linking system is based on a pairwise neural network ranker (§3.1) using contextualized representations (§3.2) for both the mention and concept. We leverage the context present in clinical notes for our representations and synonyms present within the UMLS to train our linker. 8587 3.1 Neural Ranker For a given mention string m and document, the system ranks all possible candidates c in the KB. Figure 1 shows our ranking system, based on the Rank model of Dehghani et al. (2017). We learn the parameters θ of a scoring function S(m, c; θ), which consists of a feed-forward neural network with hidden layers d that takes input representations of m and c in addition to pairwise features. We train using pairwise loss, in which we have two point-wise networks – one which takes the mention m and correct concept c+ as input, the other which takes the mention m and incorrect concept c−– with shared parameters that are updated to minimize the loss function. Using a pairwise model allows us to learn a scoring function that does not rely on annotated scores. Adapting the approach of Dehghani et al. (2017), we use adaptive hinge loss, which considers n negative concepts and selects the highest scoring concept as the negative sample. For mention m, correct concept c+, and n negative samples c0−to cn−, our loss function is: L(θ) = max{0, ϵ −(S({m, c+}; θ)− max{S({m, c0−}; θ) . . . S({m, cn−}; θ)}} (1) 3.2 Contextualized Representations Recent work (Devlin et al., 2019) proposed representations of words that integrate the context of the surrounding sentence. We use ELMo (Peters et al., 2018), a bi-directional recurrent neural network (RNN), to build representations for each token in a sentence trained using language model objectives. For each direction, the model first builds a contextindependent token representation using a convolutional neural network over the characters. Then the representation is passed through L = 2 layers of long-short term memory (LSTM) RNN. The final layer is used to predict the next token. These models are robust to out-of-vocabulary types, so they provide broad coverage to the diverse types present in clinical text. We train ELMo on clinical notes and create mention representations m by running the entire sentence through the model and selecting the resulting word representations for the mention (the lowest token representation) from the LSTM.2. 
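A minimal sketch of the adaptive hinge loss in Eq. (1), assuming a hypothetical `score` function implementing S(m, c; θ) over already-encoded mention and concept representations:

```python
# Pairwise adaptive hinge loss: penalize the model when the gold concept does
# not beat the hardest of the n sampled negative concepts by margin eps.

def adaptive_hinge_loss(mention, pos_concept, neg_concepts, score, eps=1.0):
    pos = score(mention, pos_concept)                       # S({m, c+}; theta)
    hardest_neg = max(score(mention, c) for c in neg_concepts)
    return max(0.0, eps - (pos - hardest_neg))
```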
The concept representations c are created in the same manner as m, except that only the name of the concept is used, as there is often no available context.3 For multi-word mentions and concept names, we explore two methods of creating a single embedding. First, we use max-pooling over the set of token embeddings (reported as Max in Table 1). Second, we run self-attention (Vaswani et al., 2017)4 over the set of token embeddings, with a single head to attend over the tokens (noted as Attention). 3.3 Pre-training with Structured Data Pre-training a model using an alternative data source has been frequently used in the field of machine learning (Erhan et al., 2010; Sharif Razavian et al., 2014), and was presented (Tsujimura et al., 2019) at a recent shared task (Luo et al., 2019). A model is pre-trained on a large amount of related data and then trained on the target task, which allows the model to see more examples and achieve a better initialization for training on the final task. As annotation is expensive, most annotated clinical datasets are small, as is the case for our task. Therefore, we look to alternative data sources for pre-training our model. For a given concept (e.g. epilepsy), the UMLS includes synonyms (e.g. seizure disorder, epileptic fits), which can be used to pre-train our linker. Unlike in the annotated clinical data, there is no surrounding context, and terms in the UMLS are more likely to be formal. However, training on synonyms allows a greater variety of terms to be seen by our model than otherwise possible. Therefore, using all synonyms taken from the annotated subset of the UMLS, we pre-train our linker before training on the annotated clinical notes. We follow the previous training procedure by replacing the mention representation m with the synonym string representation only (without surrounding sentence), thus training the linker to assign a higher score to the synonym paired with the corresponding concept representation c+ against negatively sampled concepts c−. We use this pre-training initialization with the Attention model discussed in the previous section and note this as Att. + Pre. in Table 1.

Footnote 1: An extension of this approach could use unsupervised methods to discover synonyms in a new dataset (Schumacher and Dredze, 2019).
Footnote 2: While there are now a multitude of deep transformer-based LMs (Devlin et al., 2019), the principle of contextualized representations is the same. Additionally, others have found ELMo trained on MIMIC does better than a similarly trained BERT model (Schumacher and Dredze, 2019).
Footnote 3: We ran experiments that padded the names with synonyms or other forms of available text within the knowledge base. However, we did not see consistent improvements.
Footnote 4: We use the implementation provided by https://github.com/kaushalshetty/Structured-Self-Attention.

Table 1: Accuracy (top-1) and MRR (mean reciprocal rank) for the test sets, for mentions with linked concepts (CUI) and all mentions (All). For each metric, we compare the best score (in bold) to the baseline using a two-tailed z-score test (for CUI Acc, we compare to the next best score). We find that for all CUI models, the difference is not significant, while for All models, p < 0.05.

System      | CUI Acc | CUI MRR | All Acc | All MRR
DNorm       | 0.73    | 0.75    | 0.55    | 0.57
Word2vec    | 0.26    | 0.33    | 0.21    | 0.30
Max         | 0.66    | 0.70    | 0.58    | 0.67
Attention   | 0.70    | 0.75    | 0.62    | 0.71
Att. + Pre. | 0.70    | 0.78    | 0.59    | 0.71
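The Max and Attention span poolers described in Section 3.2 can be sketched as follows. This is a minimal single-head illustration with one learned scoring vector, not the Structured-Self-Attention implementation referenced in Footnote 4; the embedding dimension is assumed to match the ELMo layer size.

```python
import torch
import torch.nn as nn

class SpanPooler(nn.Module):
    """Collapse a (num_tokens, dim) span of token embeddings into a single vector."""
    def __init__(self, dim, mode="attention"):
        super().__init__()
        self.mode = mode
        self.scorer = nn.Linear(dim, 1)   # single head: score each token

    def forward(self, token_embs):        # token_embs: (num_tokens, dim)
        if self.mode == "max":
            return token_embs.max(dim=0).values            # Max pooling
        weights = torch.softmax(self.scorer(token_embs).squeeze(-1), dim=0)
        return weights @ token_embs                        # Attention pooling

pooler = SpanPooler(dim=1024)                 # 1024 = assumed ELMo output size
mention_vec = pooler(torch.randn(3, 1024))    # e.g. a three-token mention span
```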
4 Experimental Setup We train and evaluate our system on the ShARe/CLEF eHealth Evaluation Lab 2013 Task 1b dataset (Pradhan et al., 2013), which consists of span-level annotations for disorder concepts taken from the MIMIC 2.5 clinical note dataset (Saeed et al., 2011). The publicly available training set includes 200 clinical notes, which we split into a 100 note training set, and development and testing sets of 50 documents each - the shared task test set was not available. The data is annotated against SNOMED-CT (Spackman et al., 1997), one of the ontologies within UMLS. We choose to focus on this smaller dataset as leveraging small amounts of annotated data is critical to building useful tools in the clinical domain. We only included mention annotations for concepts that occur in the selected subset of the ontology noted in the annotation guidelines for the respective datasets or are marked as CUI-less 5. In Table 1, we report results on only mentions with links to the ontology (CUI) and mentions with 5We included all concepts in the SNOMED-CT Disorder Semantic group or in the Finding, Body Substance, and Mental Process semantic types. We include all preferred entries, with the default settings of UMLS 2011AA, in the SNOMEDCT Disorder Semantic group (116,436 unique concepts), but also include the first non-preferred entries that do not have a preferred entry (8,926 unique concepts.), and annotations marked CUI-less. Mentions that do not have a corresponding concept in the ontology (e.g. calcifications) were classified as CUI-less (or NIL) entries by annotators. Some annotations consist of concepts outside of the subsets described in the shared task paper, and we exclude those exceptions. links to the ontology and CUI-less mentions (All). We train ELMo on 199,987 clinical notes from MIMIC III (Johnson et al., 2016) as the source of our clinical text, pre-processing the data using the NLTK toolkit ( ˇReh˚uˇrek and Sojka, 2010). For the Pre-training model, we augment the clinical text training data with synonyms, definitions, and names of related concepts from the selected subset of UMLS. All together, this resulted in 645,863 additional sentences of training data. We compare our system to DNorm (Leaman et al., 2013) for the SHARE/Clef 2013 dataset, the best performing system in the SHARE/Clef 2013 shared task.6 Unlike many other concept linking systems, DNorm scores each mention against all concepts and does not use a triage system, allowing a fair comparison to our system. DNorm builds term frequency-inverse document frequency (TF-IDF) representations of both the mention and concept and learns a weighted similarity to rank concepts for each mention. It is unable to return concept candidates for mentions that are out-ofvocabulary as it uses a word-level measure. The authors add a specific CUI-less representation, which is made of entries occurring more than four times in training. We report results on our recreated test set, as the evaluation set provided for the shared task was not available to us. We also compare with using Word2vec (Mikolov et al., 2013) representations instead of ELMo representations in the same linking architecture to test the effect of contextualized embeddings. We trained the Word2vec model on the MIMIC dataset. We created single embeddings (d = 600) for mentions and concepts by max pooling over all embeddings for words in the corresponding text, ignoring all out-of-vocabulary words. 
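For the Word2vec baseline just described, the d = 600 mention and concept embeddings can be built roughly as below. The vector file name and the whitespace tokenization are placeholders for illustration, not released artifacts.

```python
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical path to 600-dimensional Word2vec vectors trained on MIMIC notes.
w2v = KeyedVectors.load("mimic_word2vec_600d.kv")

def max_pooled_embedding(text, dim=600):
    """Max-pool Word2vec vectors over in-vocabulary tokens of a mention or concept name."""
    vecs = [w2v[tok] for tok in text.lower().split() if tok in w2v]
    if not vecs:                      # every token was out-of-vocabulary
        return np.zeros(dim, dtype=np.float32)
    return np.max(np.stack(vecs), axis=0)

m = max_pooled_embedding("lower back pain")
```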
We explored several parameter configurations for our model suggested in Dehghani et al. (2017), reporting the best performing models on development. These include hidden layers of size [256, 512, 1024] and number of layers in [1,2,3], with a Tanh activation function for final layer and ReLu (Glorot et al., 2011) for all others. We optimize using the ADAM optimizer (Kingma and Ba, 2014), and a dropout rate of 0.2. Parameter values and development metrics are available in Appendix A. For the ELMo models, we trained for 10 epochs 6As of this writing, there are no papers describing the 2019 N2C2 methods. Additionally, since we are interested in nontraining data-based dictionaries, a direct comparison to shared task submissions wasn’t possible. 8589 using the default configuration. For CUI-less mentions, we select a threshold score based on the development set, equal to the mean score of all CUI-less entries. If an entry does not have a scored concept above that threshold, we consider it CUI-less, adding CUI-less at that position in the list for MRR. We use the Pytorch framework and code from the Spotlight library (Kula, 2017). 5 Results Table 1 reports accuracy and mean reciprocal rank (MRR) for all models. We compare our models (Word2Vec, Max, Attention, and Att. + Pre.) to DNorm for all mentions (All) and only those with links to concepts in the KB (CUI). While DNorm has higher accuracy on entries with CUIs, our models have higher MRR on entities with CUIs (Att. + Pre.) and perform best on all entities in both accuracy and MRR (Attention and Att. + Pre.). 6 Discussion Our neural ranking models with attention outperform all other models, except for CUI-only accuracy. In the case of entities with CUIs, we find that pre-training the model does provide a gain in ranking accuracy (MRR). In the case of all entities, we find that the attention models provide a sizable gain in both accuracy and MRR. We conducted an error analysis of the best performing MRR model (Att. + Pre.) on the development data, looking at errors where the gold standard concept was not highly ranked (assigned a rank of 10 or above). Of those errors (n = 110), we find that 26% are mentions that contain only acronyms (e.g. LBP for lower back pain), and 14% are mentions containing some other abbreviation (a shorted word, e.g. post nasal drip for Posterior rhinorrhoea, or a partial acronym, Seizure d / o for Epilepsy). Comparing to similar errors from Attention model (n = 161), we find that the number of acronym errors is nearly the same (24) as the better performing model (26). In contrast, the number of non-abbreviation errors drops significantly. This suggests that pre-training provides useful signal for mentions that consist of variations appearing in the ontology. However, it does not help with acronyms or other abbreviations that are less likely to appear in the ontology or are shorter and more ambiguous (e.g., ’R’ for Rhonchus). While the linker often predicted unrelated concepts (40% of errors) for concepts where the correct concept was ranked above 10, many incorrect concept predictions were somewhat related to the gold concept (e.g., for mention atherosclerotic plaque with gold concept Atherosclerotic fibrous plaque our model predicted the concept Atherosclerosis). We further noticed that in 21% of cases the linker predicted a relevant concept (e.g., mention thrombosed and Thrombosis), but is not counted as correct due to annotation decisions. 
This could be due to multiple possible concepts in the ontology or the presence of closely-related concepts. Deploying our system in a large-volume clinical setting would likely require several alterations. The main computational barrier to labeling a large amount of data, the speed of prediction, can be addressed by using an accurate candidate selection system to prune the number of concepts considered. Considering a smaller subset (e.g., 20) of concepts instead of all would significantly improve the speed. Further, if using a consistent portion of the ontology, caching the concept embeddings c as opposed to building them in-model also enhances efficiency. Depending on the application, a less accurate but faster linker might be a better choice (e.g. for all clinical notes at a medical institution). In contrast, a more complex linker, such as ours, maybe a better option for specific subsets of notes that require better accuracy (e.g., the results of specific clinical studies). Our results demonstrate the advantages of using contextualized embeddings for ranking tasks, and that using information from the knowledge base for training is an essential direction for learning concept representations for sparse KB domains. Future work will consider additional methods for integrating ontology structure into representation learning. References Nitish Aggarwal and Ken Barker. 2015. Medical concept resolution. In International Semantic Web Conference (Posters & Demos). Alan R Aronson. 2001. Effective mapping of biomedical text to the umls metathesaurus: the metamap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Association. Alan R Aronson and Franc¸ois-Michel Lang. 2010. An overview of metamap: historical perspective and recent advances. Journal of the American Medical Informatics Association, 17(3):229–236. 8590 Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research, 32(suppl 1):D267– D270. Youngduck Choi, Chill Yi-I Chiu, and David Sontag. 2016. Learning low-dimensional representations of medical concepts. AMIA Summits on Translational Science Proceedings, 2016:41. Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. 2017. Neural ranking models with weak supervision. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 65–74. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47:1–10. Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber, and Tim Finin. 2010. Entity disambiguation for knowledge base population. In Conference on Computational Linguistics (COLING), pages 277– 285. Association for Computational Linguistics. Jennifer D’Souza and Vincent Ng. 2015. Sieve-based entity linking for the biomedical domain. In Association for Computational Linguistics (ACL), pages 297–302. Greg Durrett and Dan Klein. 2014. 
A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477–490. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11(Feb):625–660. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315– 323. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2681–2690, Copenhagen, Denmark. Association for Computational Linguistics. Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimiciii, a freely accessible critical care database. Scientific data, 3:160035. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations. Karin Kipper-Schuler, Vinod Kaggal, James Masanz, Philip Ogren, and Guergana Savova. 2008. System evaluation on a named entity corpus from clinical notes. In Language resources and evaluation conference, LREC 2008. Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 519–529, Brussels, Belgium. Association for Computational Linguistics. Maciej Kula. 2017. Spotlight. https://github. com/maciejkula/spotlight. Robert Leaman, Rezarta Islamaj Do˘gan, and Zhiyong Lu. 2013. Dnorm: disease name normalization with pairwise learning to rank. Bioinformatics, 29(22):2909–2917. Yen-Fu Luo, Weiyi Sun, and Anna Rumshisky. 2019. Mcn: A comprehensive corpus for medical concept normalization. Journal of biomedical informatics, page 103132. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. David Mueller and Greg Durrett. 2018. Effective use of context in noisy entity linking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1024–1029, Brussels, Belgium. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Sameer Pradhan, Noemie Elhadad, Brett R South, David Martinez, Lee M Christensen, Amy Vogel, Hanna Suominen, Wendy W Chapman, and Guergana K Savova. 2013. Task 1: Share/clef ehealth evaluation lab 2013. In CLEF (Working Notes). 8591 Nazneen Fatema Rajani, Mihaela Bornea, and Ken Barker. 2017. Stacking with auxiliary features for entity linking in the medical domain. BioNLP 2017, pages 39–47. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. 
In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45– 50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Mohammed Saeed, Mauricio Villarroel, Andrew T Reisner, Gari Clifford, Li-Wei Lehman, George Moody, Thomas Heldt, Tin H Kyaw, Benjamin Moody, and Roger G Mark. 2011. Multiparameter intelligent monitoring in intensive care ii (mimic-ii): a public-access intensive care unit database. Critical care medicine, 39(5):952. Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C KipperSchuler, and Christopher G Chute. 2010. Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications. Journal of the American Medical Informatics Association, 17(5):507–513. Elliot Schumacher and Mark Dredze. 2019. Learning unsupervised contextual representations for medical synonym discovery. JAMIA Open. Ooz057. Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. 2014. Cnn features offthe-shelf: an astounding baseline for recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 806– 813. Kent A Spackman, Keith E Campbell, and Roger A Cˆot´e. 1997. Snomed rt: a reference terminology for health care. In Proceedings of the AMIA annual fall symposium, page 640. American Medical Informatics Association. Chen-Tse Tsai and Dan Roth. 2016. Concept grounding to multiple knowledge bases via indirect supervision. In Transactions of the Association of Computational Linguistics, volume 4, pages 141–154. Tomoki Tsujimura, Noriyuki Mori, Masaki Asada, Makoto Miwa, and Yutaka Sasaki. 2019. Neural medical concept normalization with two-step training. Shyam Upadhyay, Nitish Gupta, and Dan Roth. 2018. Joint multilingual supervision for cross-lingual entity linking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2486–2495. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Sendong Zhao, Ting Liu, Sicheng Zhao, and Fei Wang. 2019. A neural multi-task learning framework to jointly model medical named entity recognition and normalization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 817–824. Jin G Zheng, Daniel Howsmon, Boliang Zhang, Juergen Hahn, Deborah McGuinness, James Hendler, and Heng Ji. 2015. Entity linking for biomedical literature. BMC medical informatics and decision making, 15:S4. Jin Guang Zheng, Daniel Howsmon, Boliang Zhang, Juergen Hahn, Deborah McGuinness, James Hendler, and Heng Ji. 2014. Entity linking for biomedical literature. In Proceedings of the ACM 8th International Workshop on Data and Text Mining in Bioinformatics, pages 3–4. 8592 A Replication Information Max Attention Pretraining Pre + Att Dev Acc (CUI) 0.685 0.730 0.704 Dev MRR (CUI) 0.719 0.766 0.776 Reported Epoch 2499 4000 1 750 Random Seed 3011457727 3027767026 589590319 3635932273 Learning Rate 1e-5 1e-5 1e-5 1e-5 Hidden Layers [1024, 512] [1024, 512] [1024, 512] [1024, 512] Batch Size 12 12 32 16 Num. Negative Samples 10 10 10 10 Est. Training Time per epoch (minutes) 7.2 3.4 1860 4.6 GPU Type Tesla K80 GTX 1080ti Tesla K80 Tesla K80 Table 2: The above table contains replication information for the models trained on SHaRE data. 
Note the pretraining model contains parameters for the pre-training stage only (and thus we do not note accuracy or mean reciprocal rank), while Pre + Att contains parameters for the final trained model. All GPU types have 12 GB of memory.
2020
760
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8593–8606 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8593 DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking Christopher Hidey,1∗Tuhin Chakrabarty,1 Tariq Alhindi,1 Siddharth Varia,1 Kriste Krstovski,1,3 Mona Diab,4,5∗and Smaranda Muresan1,2 1Department of Computer Science, Columbia University 2Data Science Institute, Columbia University 3Columbia Business School, Columbia University 4Facebook AI 5Department of Computer Science, George Washington University {chidey, tariq, smara}@cs.columbia.edu, [email protected] {tuhin.chakrabarty, sv2504, kriste.krstovski}@columbia.edu Abstract The increased focus on misinformation has spurred development of data and systems for detecting the veracity of a claim as well as retrieving authoritative evidence. The Fact Extraction and VERification (FEVER) dataset provides such a resource for evaluating endto-end fact-checking, requiring retrieval of evidence from Wikipedia to validate a veracity prediction. We show that current systems for FEVER are vulnerable to three categories of realistic challenges for fact-checking – multiple propositions, temporal reasoning, and ambiguity and lexical variation – and introduce a resource with these types of claims. Then we present a system designed to be resilient to these “attacks” using multiple pointer networks for document selection and jointly modeling a sequence of evidence sentences and veracity relation predictions. We find that in handling these attacks we obtain state-of-the-art results on FEVER, largely due to improved evidence retrieval. 1 Introduction The growing presence of biased, one-sided, and often altered discourse, is posing a challenge to our media platforms from newswire to social media (Vosoughi et al., 2018). To overcome this challenge, fact-checking has emerged as a necessary part of journalism, where experts examine ”check-worthy” claims (Hassan et al., 2017) published by others for their “shades” of truth (e.g., FactCheck.org or PolitiFact). However, this process is time-consuming, and thus building computational models for automatic fact-checking has become an active area of research (Graves, 2018). Advances were made possible by new open source datasets and shared tasks: the Fact Extraction and Verification Shared Task (FEVER) 1.0 and 2.0 (Thorne et al., 2018; Thorne ∗Work completed in part at Amazon Claim: Murda Beatz′s real name is Marshall Mathers. Evidence: [Murda Beatz] Shane Lee Lindstrom (born February 11, 1994), known professionally as Murda Beatz, is a Canadian hip hop record producer and songwriter from Fort Erie, Ontario. Label: REFUTES Figure 1: Example from FEVER 1.0 Dataset and Vlachos, 2019), SemEval 2019 Shared Task 8: Fact-Checking in Community Forums (Mihaylova et al., 2019), and LIAR(+) datasets with claims from PolitiFact (Wang, 2017; Alhindi et al., 2018). The FEVER 1.0 shared task dataset (Thorne et al., 2018) has enabled the development of endto-end fact-checking systems, requiring document retrieval and evidence sentence extraction to corroborate a veracity relation prediction (supports, refutes, not enough info). An example is given in Figure 1. Since the claims in FEVER 1.0 were manually written using information from Wikipedia, the dataset may lack linguistic challenges that occur in verifying naturally occurring check-worthy claims, such as temporal reasoning or lexical generalization/specification. 
Thorne and Vlachos (2019) designed a second shared task (FEVER 2.0) for participants to create adversarial claims (“attacks”) to break state-of-the-art systems and then develop systems to resolve those attacks. We present a novel dataset of adversarial examples for fact extraction and verification in three challenging categories: 1) multiple propositions (claims that require multi-hop document or sentence retrieval); 2) temporal reasoning (date comparisons, ordering of events); and 3) named entity ambiguity and lexical variation (Section 4). We show that state-of-the-art systems are vulnerable to adversarial attacks from this dataset (Section 6). In addition, we take steps toward addressing these vulnerabilities, presenting a system for endto-end fact-checking that brings two novel contri8594 butions using pointer networks: 1) a document ranking model; and 2) a joint model for evidence sentence selection and veracity relation prediction framed as a sequence labeling task (Section 5). Our new system achieves state-of-the-art results for FEVER and we present an evaluation of our models including ablation studies (Section 6). Data and code will be released to the community.1 2 Related Work Approaches for predicting the veracity of naturallyoccurring claims have focused on statements factchecked by journalists or organizations such as PolitiFact.org (Vlachos and Riedel, 2014; Alhindi et al., 2018), news articles (Pomerleau and Rao, 2017), or answers in community forums (Mihaylova et al., 2018, 2019). However, those datasets are not suited for end-to-end fact-checking as they provide sources and evidence while FEVER (Thorne et al., 2018) requires retrieval. Initial work on FEVER focused on a pipeline approach of retrieving documents, selecting sentences, and then using an entailment module (Malon, 2018; Hanselowski et al., 2018; Tokala et al., 2019); the winning entry for the FEVER 1.0 shared task (Nie et al., 2019a) used three homogeneous neural models. Other work has jointly learned either evidence extraction and question answering (Nishida et al., 2019) or sentence selection and relation prediction (Yin and Roth, 2018; Hidey and Diab, 2018); unlike these approaches, we use the same sequential evidence prediction architecture for both document and sentence selection, jointly predicting a sequence of labels in the latter step. More recently, Zhou et al. (2019) proposed a graph-based framework for multi-hop retrieval, whereas we model evidence sequentially. Language-based adversarial attacks have often involved transformations of the input such as phrase insertion to distract question answering systems (Jia and Liang, 2017) or to force a model to always make the same prediction (Wallace et al., 2019). Other research has resulted in adversarial methods for paraphrasing with universal replacement rules (Ribeiro et al., 2018) or lexical substitution (Alzantot et al., 2018; Ren et al., 2019). While our strategies include insertion and replacement, we focus specifically on challenges in factchecking. The task of natural language inference 1https://github.com/chridey/ fever2-columbia (Bowman et al., 2015; Williams et al., 2018) provides similar challenges: examples for numerical reasoning and lexical inference have been shown to be difficult (Glockner et al., 2018; Nie et al., 2019b) and improved models on these types are likely to be useful for fact-checking. Finally, (Thorne and Vlachos, 2019) provided a baseline for the FEVER 2.0 shared task with entailment-based perturbations. 
Other participants generated adversarial claims using implicative phrases such as “not clear” (Kim and Allan, 2019) or GPT-2 (Niewinski et al., 2019). In comparison, we present a diverse set of attacks motivated by realistic, challenging categories and further develop models to address those attacks. 3 Problem Formulation and Datasets We address the end-to-end fact-checking problem in the context of FEVER (Thorne et al., 2018), a task where a system is required to verify a claim by providing evidence from Wikipedia. To be successful, a system needs to predict both the correct veracity relation– supported (S), refuted (R), or not enough information (NEI)– and the correct set of evidence sentences (not applicable for NEI). The FEVER 1.0 dataset (Thorne et al., 2018) was created by extracting sentences from popular Wikipedia pages and mutating them with paraphrases or other edit operations to create a claim. Then, each claim was labeled and paired with evidence or the empty set for NEI. Overall, there are 185,445 claims, of which 90,367 are S, 40,107 are R, and 45,971 are NEI. Thorne and Vlachos (2019) introduced an adversarial set up for the FEVER 2.0 shared task – participants submitted claims to break existing systems and a system designed to withstand such attacks. The organizers provided a baseline of 1000 adversarial examples with negation and entailment-preserving/-altering transformations and this set was combined with examples from participants to form the FEVER 2.0 dataset. Table 1 shows the partition of FEVER 1.0 and 2.0 data (hereafter FV1/FV2-train/dev/test). Dataset Train Dev. Blind Test FEVER 1.0 145,449 19,998 19,998 FEVER 2.0 – 1,174 1,180 Table 1: FEVER Dataset Statistics 4 Adversarial Dataset for Fact-checking While the FEVER dataset is a valuable resource, our goal is to evaluate complex adversarial claims 8595 which resemble check-worthy claims found in news articles, speeches, debates, and online discussions. We thus propose three types of attacks based on analysis of FV1 or prior literature: those using multiple propositions, requiring temporal and numerical reasoning, and involving lexical variation. For the multi-propositional type, Graves (2018) notes that professional fact-checking organizations need to synthesise evidence from multiple sources; automated systems struggle with claims such as “Lesotho is the smallest country in Africa.” In FV1dev, 83.18% of S and R claims require only a single piece of evidence and 89% require only a single Wikipedia page. Furthermore, our previous work on FEVER 1.0 found that our model can fully retrieve 86% of evidence sentences from Wikipedia when only a single sentence is required, but the number drops to 17% when 2 sentences are required and 3% when 3 or more sentences are required (Hidey and Diab, 2018). For the second type, check-worthy claims are often numerical (Francis, 2016) and temporal reasoning is especially challenging (Mirza and Tonelli, 2016). Rashkin et al. (2017) and Jiang and Wilson (2018) showed that numbers and comparatives are indicative of truthful statements in news, but the presence of a date alone does not indicate its veracity. In FV1-dev, only 17.81% of the claims contain dates and 0.22% contain time information.2 To understand how current systems perform on these types of claims, we evaluated three stateof-the-art systems from FEVER 1.0 (Hanselowski et al., 2018; Yoneda et al., 2018; Nie et al., 2019a), and examined the predictions where the systems disagreed. 
We found that in characterizing these predictions according to the named entities present in the claims, the most frequent types were numerical and temporal (such as percent, money, quantity, and date). Finally, adversarial attacks for lexical variation, where words may be inserted or replaced or changed with some other edit operation, have been shown to be effective for similar tasks such as natural language inference (Nie et al., 2019b) and question answering (Jia and Liang, 2017), so we include these types of attacks as well. For the fact-checking task, models must match words and entities across claim and evidence to make a veracity prediction. As claims often contain ambiguous entities (Thorne and Vlachos, 2018) or lexical features indicative 2As determined by NER using Spacy: https://spacy.io of credibility (Nakashole and Mitchell, 2014), we desire models resilient to minor changes in entities (Hanselowski et al., 2018) and words (Alzantot et al., 2018). We thus create an adversarial dataset of 1000 examples, with 417 multi-propositional, 313 temporal and 270 lexically variational. Representative examples are provided in Appendix A. Multiple Propositions Check-worthy claims often consist of multiple propositions (Graves, 2018). In the FEVER task, checking these claims may require retrieving evidence sequentially after resolving entities and events, understanding discourse connectives, and evaluating each proposition. Consider the claim “Janet Leigh was from New York and was an author.” The Wikipedia page [Janet Leigh] contains evidence that she was an author, but makes no mention of New York. We generate new claims of the CONJUNCTION type automatically by mining claims from FV1-dev and extracting entities from the subject position. We then combine two claims by replacing the subject in one sentence with a discourse connective such as “and.” The new label is S if both original claims are S, R if at least one claim is R, and NEI otherwise. While CONJUNCTION claims provide a way to evaluate multiple propositions about a single entity, these claims only require evidence from a single page; hence we create new examples requiring reasoning over multiple pages. To create MULTI-HOP examples, we select claims from FV1-dev whose evidence obtained from a single page P contains at least one other entity having a valid page Q. We then modify the claim by appending information about the entity which can be verified from Q. For example, given the claim “The Nice Guys is a 2016 action comedy film.” we make a multi-hop claim by obtaining the page [Shane Black] (the director) and appending the phrase “directed by a Danish screenwriter known for the film Lethal Weapon.“ While multi-hop retrieval provides a way to evaluate the S and R cases, composition of multiple propositions may also be necessary for NEI, as the relation of the claim and evidence may be changed by more general/specific phrases. We thus add ADDITIONAL UNVERIFIABLE PROPOSITIONS that change the gold label to NEI. We selected claims from FV1-dev and added propositions which have no evidence in Wikipedia (e.g. for the claim “Duff McKagan is an American citizen,” we can add the reduced relative clause “born in Seattle“). 8596 Temporal Reasoning Many check-worthy claims contain dates or time periods and to verify them requires models that handle temporal reasoning (Thorne and Vlachos, 2017). In order to evaluate the ability of current systems to handle temporal reasoning we modify claims from FV1-dev. 
More specifically, using claims with the phrase ”in <date>” we automatically generate seven modified claims using simple DATE MANIPULATION heuristics: arithmetic (e.g., “in 2001” → “4 years before 2005”), range (“in 2001” →“before 2008”), and verbalization (“in 2001” →“in the first decade of the 21st century”). We also create examples requiring MULTI-HOP TEMPORAL REASONING, where the system must evaluate an event in relation to another. Consider the S claim “The first governor of the Indiana Territory lived long enough to see it become a state.” A system must resolve entity references (Indiana Territory and its first governor, William Henry Harrison) and compare dates of events (the admittance of Indiana in 1816 and death of Harrison in 1841). While multi-hop retrieval may resolve references, the model must understand the meaning of “lived long enough to see” and evaluate the comparative statement. To create claims of this type, we mine Wikipedia by selecting a page X and extracting sentences with the pattern “is/was/named the A of Y ” (e.g. A is “first governor”) where Y links to another page. Then we manually create temporal claims by examining dates on X and Y and describing the relation between the entities and events. Named Entity Ambiguity and Lexical Variation As fact-checking systems are sensitive to lexical choice (Nakashole and Mitchell, 2014; Rashkin et al., 2017), we consider how variations in entities and words may affect veracity relation prediction. ENTITY DISAMBIGUATION has been shown to be important for retrieving the correct page for an entity among multiple candidates (Hanselowski et al., 2018). To create examples that contain ambiguous entities we selected claims from FV1-dev where at least one Wikipedia disambiguation page was returned by the Wikipedia python API.3 We then created a new claim using one of the documents returned from the disambiguation list. For example the claim “Patrick Stewart is someone who does acting for a living.” returns a disambiguation page, which in turn gives a list of pages 3https://pypi.org/project/wikipedia/ such as [Patrick Stewart] and [Patrick Maxwell Stewart]. Finally, as previous work has shown that neural models are vulnerable to LEXICAL SUBSTITUTION (Alzantot et al., 2018), we apply their genetic algorithm approach to replace words via counter-fitted embeddings. We make a claim adversarial to a model fine-tuned on claims and gold evidence by replacing synonyms, hypernyms, or hyponyms, e.g. created →established, leader →chief. We manually remove ungrammatical claims or incorrect relations. 5 Methods Verifying check-worthy claims such as those in Section 4 requires a system to 1) make sequential decisions to handle multiple propositions, 2) support temporal reasoning, and 3) handle ambiguity and complex lexical relations. To address the first requirement we make use of a pointer network (Vinyals et al., 2015) in two novel ways: i) to rerank candidate documents and ii) to jointly predict a sequence of evidence sentences and veracity relations in order to compose evidence (Figure 3). To address the second we add a post-processing step for simple temporal reasoning. To address the third we use rich, contextualized representations. Specifically, we fine-tune BERT (Devlin et al., 2019) as this model has shown excellent performance on related tasks and was pre-trained on Wikipedia. 
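As a rough illustration of the DATE MANIPULATION heuristics above, the sketch below rewrites an "in <date>" claim into arithmetic, range, and verbalized variants. It is simplified relative to the seven templates used for the released claims, and the regex and year offsets are assumptions.

```python
import re

def date_manipulations(claim):
    """Rewrite an 'in <year>' claim with arithmetic, range, and verbalized date variants."""
    match = re.search(r"in (\d{4})\b", claim)
    if not match:
        return []
    year = int(match.group(1))
    variants = [
        f"4 years before {year + 4}",   # arithmetic
        f"3 years after {year - 3}",    # arithmetic
        f"before {year + 7}",           # range
        f"after {year - 7}",            # range
    ]
    if 2000 <= year <= 2009:            # verbalization, only for the simple case
        variants.append("in the first decade of the 21st century")
    return [claim.replace(match.group(0), v) for v in variants]

# "The Nice Guys premiered in 2016." -> "... 4 years before 2020.", "... before 2023.", ...
```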
Figure 2: Our FEVER pipeline: 1) Retrieving Wikipedia pages by selecting an initial candidate set (1a) and ranking the top D (1b); 2) Identifying the top N sentences; 3) Predicting supports, refutes, or not enough info. Dashed arrows indicate fine-tuning steps.

Figure 3: Pointer network architecture. Claim and evidence (page title or sentence) are embedded with BERT and evidence is sequentially predicted (for sentence selection the relation sequence is jointly predicted).

Our full pipeline is presented in Figure 2. We first identify an initial candidate set of documents (1a) by combining the top M pages from a TF-IDF search using DrQA (Chen et al., 2017) with pages from the approach of Chakrabarty et al. (2018), which provides results from Google search and predicted named entities and noun phrases. Then, we perform document ranking by selecting the top D < M pages with a pointer network (1b). Next, an N-long sequence of evidence sentences (2) and veracity relation labels (3) are predicted jointly by another pointer network. Prior to training, we fine-tune BERT for document and sentence ranking on claim/title and claim/sentence pairs, respectively. Each claim and evidence pair in the FEVER 1.0 dataset has both the title of the Wikipedia article and at least one sentence associated with the evidence, so we can train on each of these pairs directly. For the claim "Michelle Obama's husband was born in Kenya", shown in Figure 3, we obtain representations by pairing this claim with evidence sentences such as "Obama was born in Hawaii" and article titles such as [Barack Obama]. The core component of our approach is the pointer network, as seen in Figure 3. Unlike our previous work (Hidey and Diab, 2018), we use the pointer network to re-rank candidate documents and jointly predict a sequence of evidence sentences and relations. Given a candidate set of evidence (as either document titles or sentences) and a respective fine-tuned BERT model, we extract features for every claim c and evidence e_p pair by summing the [CLS] embedding for the top 4 layers (as recommended by Devlin et al. (2019)):

m_p = \mathrm{BERT}(c, e_p)  (1)

Next, to select the top k evidence, we use a pointer network over the evidence for claim c to extract evidence recurrently by computing the extraction probability P(p_t | p_0 \cdots p_{t-1}) for evidence e_p at time t < k. At time t, we update the hidden state z_t of the pointer network decoder. Then we compute the weighted average h^q_t of the entire evidence set using q hops over the evidence (Vinyals et al., 2016; Sukhbaatar et al., 2015):4

\alpha^o_t = \mathrm{softmax}\big(v_h^\top \tanh(W_g m_p + W_a h^{o-1}_t)\big), \qquad h^o_t = \sum_j \alpha^o_t W_g m_j  (2)

We concatenate m_p and h^q_t and use a multi-layer perceptron (MLP) to predict p_t. The loss is then:

L(\theta_{ptr}) = -\frac{1}{k} \sum_{t=0}^{k-1} \log P_{\theta_{ptr}}(p_t \mid p_{0:t-1})  (3)

We train on gold evidence and perform inference with beam search for both document ranking (Section 5.1) and joint sentence selection and relation prediction (Section 5.2).

5.1 Document Ranking
In order to obtain representations as input to the pointer network for document ranking, we leverage the fact that Wikipedia articles all have a title (e.g. [Barack Obama]), and fine-tune BERT on title and claim pairs, in lieu of examining the entire document text (which due to its length is not suitable for BERT). Because the title often overlaps lexically with the claim (e.g. [Michelle Obama]), we can train the model to locate the title in the claim. Furthermore, the words in the title co-occur with words in the article (e.g.
Barack and Michelle), which the pre-trained BERT language model may be attuned to. We thus fine-tune a classifier on a dataset created from title and claim pairs (where positive examples are titles of gold evidence pages and negative are randomly sampled from our candidate set), obtaining 90.0% accuracy. Given the fine-tuned model, we extract features using Equation 1 where ep is a title, and use Equation 3 to learn to predict a sequence of titles as in Figure 3. 4Initially, ht,0 is set to zt. vh, Wg, and Wa are learned. 8598 5.2 Joint Sentence Selection and Relation Prediction The sentence selection and relation prediction tasks are closely linked, as predicting the correct evidence is necessary for predicting S or R and the representation should reflect the interaction between a claim and an evidence set. Conversely, if a claim and an evidence set are unrelated, the model should predict NEI. We thus jointly model this interaction by sharing the parameters of the pointer network - the hidden state of the decoder is used for both tasks and the models differ only by a final MLP. Sentence Selection Similar to our document selection fine-tuning approach, we fine-tune a classifier on claim and evidence sentence pairs to obtain BERT embeddings. However, instead of training a binary classifier for the presence of valid evidence we train directly on veracity relation prediction, which is better suited for the end task. We create a dataset by pairing each claim with its set of gold evidence sentences. As gold evidence is not available for NEI relations, we sample sentences from our candidate documents to maintain a balanced dataset. We then fine-tune a BERT classifier on relation prediction, obtaining 93% accuracy. Given the fine-tuned model, we extract features using Equation 1 where ep is a sentence, and use Equation 3 to learn to predict a sequence of sentences. Relation Prediction In order to closely link relation prediction with evidence prediction, we reframe the task as a sequence labeling task. In other words, rather than make a single prediction given all evidence sentences, we make one prediction at every timestep during decoding to model the relation between the claim and all evidence retrieved to that point. This approach provides three benefits: it allows the model to better handle noise (when an incorrect evidence sentence is predicted), to handle multi-hop inference (to model the occurrence of switching from NEI to S/R), and to effectively provide more training data (for k = 5 timesteps we have 5 times as many relation labels). For the claim in Figure 3, the initial label sequence is NEI and R because the first evidence sentence by itself (the fact that Barack Obama was born in Hawaii) would not refute the claim. Furthermore for k = 5, the remaining sequence would be R, R, R, as additional evidence (guaranteed to be non-contradictory in FEVER) would not change the prediction. On the other hand, given a claim that requires only a single piece of evidence, such as that in Figure 1, the sequence would be R, R, R, R, R if the correct evidence sentence was selected at the first timestep, NEI, R, R, R, R if the correct evidence sentence was selected at the second timestep, and so forth. 
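The gold label sequence described above can be constructed as in the sketch below, an illustrative reimplementation rather than the released code; it assumes each claim has a single gold evidence set and that a retrieved prefix counts as correct only once it contains all gold sentences.

```python
def relation_label_sequence(selected, gold_evidence, gold_label, k=5):
    """Per-timestep veracity labels: NEI until the retrieved prefix covers the gold
    evidence, then the claim's gold label (S or R) for the remaining timesteps."""
    if gold_label == "NEI" or not gold_evidence:
        return ["NEI"] * k
    labels, seen = [], set()
    for evidence_id in selected[:k]:
        seen.add(evidence_id)
        labels.append(gold_label if gold_evidence <= seen else "NEI")
    return labels

# Correct sentence found at the second timestep -> ['NEI', 'R', 'R', 'R', 'R']
print(relation_label_sequence(["d1_s3", "d2_s7", "x", "y", "z"], {"d2_s7"}, "R"))
```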
We augment the evidence sentence selection described previously to use the hidden state of the pointer network after q hops (Equation 2) and an MLP to also predict a label at that time step, closely linking evidence and label prediction:

P(l_t) = \mathrm{softmax}\big(W_{l_2} \tanh(W_{l_1} h^o_t)\big)  (4)

As with evidence prediction (Equation 3), when the gold label sequence is available, the loss term is:

L(\theta_{rel\_seq}) = -\frac{1}{k} \sum_{t=0}^{k-1} \log P_{\theta_{rel\_seq}}(l_t)  (5)

When training, at the current timestep we use both the gold evidence, i.e. "teacher forcing" (Williams and Zipser, 1989), and the model prediction from the previous step, so that we have training data for NEI. Combining Equations 3 and 5, our loss is:

L(\theta) = \lambda L(\theta_{ptr}) + L(\theta_{rel\_seq})  (6)

Finally, to predict a relation at inference, we ensemble the sequence of predicted labels by averaging the probabilities over every time step.5

Post-processing for Simple Temporal Reasoning As neural models are unreliable for handling numerical statements, we introduce a rule-based step to extract and reason about dates. We use the Open Information Extraction system of Stanovsky et al. (2018) to extract tuples. For example, given the claim "The Latvian Soviet Socialist Republic was a republic of the Soviet Union 3 years after 2009," the system would identify ARG0 as preceding the verb was and ARG1 following. After identifying tuples in claims and predicted sentences, we discard those lacking dates (e.g. ARG0). Given more than one candidate sentence, we select the one ranked higher by the pointer network. Once we have both the claim and evidence date-tuple we apply one of three rules to resolve the relation prediction based on the corresponding temporal phrase. We either evaluate whether the evidence date is between two dates in the claim (e.g. between/during/in), we add/subtract x years from the date in the claim and compare to the evidence date (e.g. x years/days before/after), or we compare the claim date directly to the evidence date (e.g. before/after/in). For the date expression "3 years after 2009," we compare the year 2012 to the date in the retrieved evidence (1991, the year the USSR dissolved) and label the claim as R.

Footnote 5: The subset of timesteps was determined empirically: while at the final timestep the model is likely to have seen the correct evidence it also contains more noise; in future work we will experiment with alternatives.

6 Experiments and Results
We evaluate our dataset and system as part of the FEVER 2.0 shared task in order to validate the vulnerabilities introduced by our adversarial claims (Section 4) and the solutions proposed by our system (Section 5). We train our system on FV1-train and evaluate on FV1/FV2-dev/test (Section 3). We report accuracy (percentage of correct labels) and recall (whether the gold evidence is contained in selected evidence at k = 5). We also report the FEVER score, the percentage of correct evidence sentences (for S and R) that also have correct labels, and potency, the inverse FEVER score (subtracted from one) for evaluating adversarial claims. Our Baseline-RL: For baseline experiments, to compare different loss functions, we use the approach of Chakrabarty et al. (2018) for document selection and ranking, the reinforcement learning (RL) method of Chen and Bansal (2018) for sentence selection, and BERT (Devlin et al., 2019) for relation prediction.
The RL approach using a pointer network is detailed by Chen and Bansal (2018) for extractive summarization, with the only difference that we use our fine-tuned BERT on claim/gold sentence pairs to represent each evidence sentence in the pointer network (as with our full system) and use the FEVER score as a reward. The reward is obtained by selecting sentences with the pointer network and then predicting the relation using an MLP (updated during training) and the concatenation of all claim/predicted sentence representations with their maximum/minimum pooling. Hyper-parameters and settings for all experiments are detailed in Appendix B. 6.1 Adversarial Dataset Evaluation We present the performance of our adversarial claims, obtained by submitting to the shared task server. We compare our claims to other participants in the FEVER 2.0 shared task (Table 2) and divided by attack type (Table 3). Potency was macro-averaged across different fact-checking systems (Thorne and Vlachos, 2019), correctness of labels was verified by shared task annotators, and adjusted potency was calculated by the organizers as the potency of correct examples. Compared to other participants (Table 2), we presented a larger set of claims (501 in dev and 499 in test). We rank second in adjusted potency, but we provided a more diverse set than those created by the organizers or other participants. The organizers (Thorne and Vlachos, 2019) created adversarial claims using simple pattern-matching and replacement, e.g. quantifiers and negation. Niewinski et al. (2019) trained a GPT-2-based model on the FEVER data and manually filtered disfluent claims. Kim and Allan (2019) considered a variety of approaches, the majority of which required understanding area comparisons between different regions or understanding implications (e.g. that “not clear” implies NEI). While GPT-2 is effective, our approach is controllable and targeted at real-world challenges. Finally, Table 3 shows that when we select our top 200 most effective examples (multi-hop reasoning and multi-hop temporal reasoning) and compare to the approaches of Niewinski et al. (2019) and Kim and Allan (2019) (who both provided less than 204 examples total) our potency is much higher. In particular, multi-hop reasoning has a potency of 88% for SUPPORT relations and 93% for REFUTES relations and multi-hop temporal reasoning obtains 98% for SUPPORT and REFUTES relations. Team # Pot. Corr. Adj. Organizer Baseline 498 60.34 82.33 49.68 Kim and Allan (2019) 102 79.66 64.71 51.54 Ours 501 68.51 81.44 55.79 Niewinski et al. (2019) 79 79.97 84.81 66.83 Table 2: The evaluation of our claims relative to other participants. #: Examples in blind test Pot: Potency score Corr.: Percent grammatical and coherent with correct label and evidence Adj.: Adjusted potency 6.2 Evaluation against State-of-the-art In Tables 4 and 5 we compare Our System (Section 5) to recent work from teams that submitted to the shared task server for FEVER 1.0 and 2.0, respectively, including the results of Our BaselineRL system in Table 5. Our dual pointer network approach obtains state-of-the-art results on the FEVER 1.0 blind test set (Table 4) on all measures even over systems designed specifically for evidence retrieval (Nishida et al., 2019; Zhou et al., 8600 Attack M/A #S/P #R/P #NEI/P Conjunct. A -/54/55% 75/63% Multi-hop M 100/88% 88/93% 99/50% Add. Unver. M -/-/50/50% Date Man. A 49/59% 129/80% 80/46% Mul. Temp. M 46/98% 5/98% 4/29% Entity Dis. M 46/50% -/-/Lexical Sub. 
A* 92/70% 57/70% 25/38% Table 3: Attack: Type of attack as described in Section 4. M/A: Whether claims are generated manually (M), automatically (A), or verified manually (A*) #S: Support examples #R: Refute examples #NEI Not enough info examples P: Potency on Shared Task systems 2019), largely due to a notable improvement in recall (more than 3 points over the next system (Hanselowski et al., 2018)). We also find improvements in accuracy over the remaining pipeline systems, suggesting that joint learning helps. Compared to Our Baseline-RL, Our System has 1.8 point improvement in FEVER score on FV1-test with 4 points on FV2-test. Notably, our system finishes second (with a score of 36.61) on the FEVER 2.0 shared task test set, even though our claims were designed to be challenging for our model. The model of Malon (2018) performs especially well; they use a transformer-based architecture without pre-training but focus only on single-hop claims. System Acc. Rec. FEVER Hanselowski et al. (2018) 65.46 85.19 61.58 Nishida et al. (2019) 69.30 76.30 61.80 Yoneda et al. (2018) 67.62 82.84 62.52 Nie et al. (2019a) 68.16 71.51 64.21 Tokala et al. (2019) 69.98 77.28 66.72 Zhou et al. (2019) 71.60 67.10 Our System 72.47 88.39 68.80 Table 4: Comparison with state of the art on FV1-test Team FV1-test FV2-test Hanselowski et al. (2018) 61.58 25.35 Nie et al. (2019a) 64.21 30.47 Our Baseline-RL 67.08 32.92 Stammbach and Neumann (2019) 68.46 35.82 Yoneda et al. (2018) 62.52 35.83 Our System 68.80 36.61 Malon (2018) 57.36 37.31 Table 5: Comparison of FEVER score to other sharedtask systems (ordered by FV2-test FEVER score) 6.3 System Component Ablation To better understand the improved performance of our system, we present two ablation studies in Tables 6 and 7 on FV1 and FV2 dev, respectively.6 Table 6 presents the effect of using different objective functions for sentence selection and relation prediction, compared to joint sentence selection and relation prediction in our full model. We compare Our System to Our Baseline-RL system as well as another baseline (Ptr). The Ptr system is the same as Our Baseline-RL, except the pointer network and MLP are not jointly trained with RL but independently using gold evidence and predicted evidence and relations, respectively. Finally, the Oracle upper bound presents the maximum possible recall after our document ranking stage, compared to 94.4% for Chakrabarty et al. (2018), and relation accuracy (given the MLP trained on 5 sentences guaranteed to contain gold evidence). We find that by incorporating the relation sequence loss, we improve the evidence recall significantly relative to the oracle upper-bound, reducing the relative error by 50% while also obtaining improvements on relation prediction, even over a strong RL baseline. Overall, the best model is able to retrieve 95.9% of the possible gold sentences after the document selection stage, suggesting that further improvements are more likely to come from document selection. Model Acc. Rec. FEVER Oracle 84.2 94.7 – Ptr 74.6 86.1 68.6 Our Baseline-RL 74.6 87.5 69.2 Our System 76.74 90.84 73.17 Table 6: Ablation experiments on FV1-dev Table 7 evaluates the impact of the document pointer network and rule-based date handling on FV2-dev, as the impact of multi-hop reasoning and temporal relations is less visible on FV1-dev. We again compare Our Baseline-RL system to Our System and find an even larger 7.16 point improvement in FEVER score. 
We find that ablating the date post-processing (-dateProc) and both the date post-processing and document ranking components (-dateProc,-docRank) reduces the FEVER score by 1.45 and 3.5 points, respectively, with the latter largely resulting from a 5 point decrease in recall. 6.4 Ablation for Attack Types While Table 3 presents the macro-average of all systems by attack type, we compare the performance of Our Baseline-RL and Our System in Table 8. 6Our system is significantly better on all metrics (p < 0.001 by the approximate randomization test). 8601 System Acc. Rec. FEVER Our System 48.13 63.28 43.36 -dateProc 45.14 63.28 41.91 -dateProc,-docRank 44.29 58.32 39.86 Our Baseline-RL 44.04 57.56 36.2 Table 7: Ablation experiments on FV2-dev Our System improves on evidence recall for multi-hop claims (indicating that a multi-hop document retrieval step may help) and those with ambiguous entities or words (using a model to re-rank may remove false matches with high lexical similarity). For example, the claim “Honeymoon is a major-label record by Elizabeth Woolridge Grant.” requires multi-hop reasoning over entities. Our System correctly retrieves the pages [Lana Del Rey] and [Honeymoon (Lana Del Rey album)], but Our Baseline-RL is misled by the incorrect page [Honeymoon]. However, while recall increases on multi-hop claims compared to the baseline, accuracy decreases, suggesting the model may be learning a bias of the claim or label distribution instead of relations between claims and evidence. We also obtain large improvements on date manipulation examples (here a rule-based approach is better than our neural one); in contrast, multi-hop temporal reasoning leaves room for improvement. For instance, for the claim “The MVP of the 1976 Canada Cup tournament was born before the tournament was first held,” our full system correctly retrieves [Bobby Orr] and [1976 Canada Cup] (unlike the RL baseline). However, a further inference step is needed beyond our current capabilities – reasoning that Orr’s birth year (1948) is before the first year of the tournament (1976). Finally, we enhance performance on multipropositions as conjunctions or additional unverifiable information (indicating that relation sequence prediction helps). Claims (non-verifiable phrase in brackets) such as “Taran Killam is a [stage] actor.” and “Home for the Holidays stars an actress [born in Georgia].” are incorrectly predicted by the baseline even though correct evidence is retrieved. 7 Conclusion We showed weaknesses in approaches to factchecking via novel adversarial claims. We took steps towards realistic fact-checking with targeted improvements to multi-hop reasoning (by a document pointer network and a pointer network for sequential joint sentence selection and relation preAttack Type Acc. Rec. FEVER Conjunction B 16.95 92.0 16.95 S 40.68∗∗ 92.0 40.68∗∗ Multi-hop B 55.81∗ 29.07 19.77 S 33.72 45.35∗ 17.44 Add. Unver. B 48.0 – 48.0 S 80.0∗∗ – 80.0∗∗ Date Manip. B 30.99 79.59 27.46 S 53.52∗∗∗ 79.59 42.25∗∗ Multi-hop Temp. B 3.33 10.34 0.0 S 3.33 13.79 0.0 Entity Disamb. B 70.83 62.5 58.33 S 79.17 79.17∗ 70.83 Lexical Sub. B 33.33 65.71 25.0 S 29.76 75.71∗ 26.19 Table 8: Attack results for our FV2-dev claims. B: Our Baseline-RL, S: Our System. *: p < 0.05 **: p < 0.01 ***: p < 0.001 by approximate randomization test diction), simple temporal reasoning (by rule-based date handling), and ambiguity and variation (by fine-tuned contextualized representations). 
There are many unaddressed vulnerabilities that are relevant for fact-checking. The Facebook bAbI tasks (Weston et al., 2016) include other types of reasoning (e.g. positional or size-based). The DROP dataset (Dua et al., 2019) requires mathematical operations for question answering such as addition or counting. Propositions with causal relations (Hidey and McKeown, 2016), which are eventbased rather than attribute-based as in FEVER, are also challenging. Finally, many verifiable claims are non-experiential (Park and Cardie, 2014), e.g. personal testimonies, which would require predicting whether a reported event was actually possible. Finally, our system could be improved in many ways. Future work in multi-hop reasoning could represent the relation between consecutive pieces of evidence and future work in temporal reasoning could incorporate numerical operations with BERT (Andor et al., 2019). One limitation of our system is the pipeline nature, which may require addressing each type of attack individually as adversaries adjust their techniques. An end-to-end approach or a query reformulation step (re-writing claims to be similar to FEVER) might make the model more resilient as new attacks are introduced. Acknowledgements The authors thank Kathy McKeown, Chris Kedzie, Fei-Tzin Lee, and Emily Allaway for their helpful comments on the initial draft of this paper and the anonymous reviewers for insightful feedback. 8602 References Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving factchecking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 85–90, Brussels, Belgium. Association for Computational Linguistics. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics. Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. 2019. Giving BERT a calculator: Finding operations and arguments with reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5947– 5952, Hong Kong, China. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Tuhin Chakrabarty, Tariq Alhindi, and Smaranda Muresan. 2018. Robust document retrieval and individual evidence modeling for fact extraction and verification. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 127– 131, Brussels, Belgium. Association for Computational Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870– 1879. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. Diane Francis. 2016. Fast furious fact check challenge. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655. Lucas Graves. 2018. Understanding the promise and limits of automated fact-checking. Technical report, Reuters Institute, University of Oxford. Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. UKP-athene: Multi-sentence textual entailment for claim verification. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 103–108, Brussels, Belgium. Association for Computational Linguistics. Naeemul Hassan, Fatma Arslan, Chengkai Li, and Mark Tremayne. 2017. Toward automated factchecking: Detecting check-worthy factual claims by claimbuster. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1803– 1812, New York, NY, USA. ACM. Christopher Hidey and Mona Diab. 2018. Team SWEEPer: Joint sentence extraction and fact checking with pointer networks. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 150–155, Brussels, Belgium. Association for Computational Linguistics. Christopher Hidey and Kathy McKeown. 2016. Identifying causal relations using parallel Wikipedia articles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1424–1433, Berlin, Germany. Association for Computational Linguistics. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 8603 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. Shan Jiang and Christo Wilson. 2018. Linguistic signals under misinformation and fact-checking: Evidence from user comments on social media. Proc. ACM Hum.-Comput. Interact., 2(CSCW):82:1– 82:23. Youngwoo Kim and James Allan. 2019. FEVER breaker’s run of team NbAuzDrLqg. 
In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 99–104, Hong Kong, China. Association for Computational Linguistics. Christopher Malon. 2018. Team papelo: Transformer networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109–113, Brussels, Belgium. Association for Computational Linguistics. Tsvetomila Mihaylova, Georgi Karadzhov, Pepa Atanasova, Ramy Baly, Mitra Mohtarami, and Preslav Nakov. 2019. Semeval-2019 task 8: Fact checking in community question answering forums. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 860–869. Tsvetomila Mihaylova, Preslav Nakov, Llu´ıs M`arquez, Alberto Barr´on-Cede˜no, Mitra Mohtarami, Georgi Karadzhov, and James R. Glass. 2018. Fact checking in community forums. CoRR, abs/1803.03178. Paramita Mirza and Sara Tonelli. 2016. CATENA: causal and temporal relation extraction from natural language texts. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 64–75. Ndapandula Nakashole and Tom M. Mitchell. 2014. Language-aware truth assessment of fact candidates. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1009–1019, Baltimore, Maryland. Association for Computational Linguistics. Yixin Nie, Haonan Chen, and Mohit Bansal. 2019a. Combining fact extraction and verification with neural semantic matching networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6859–6866. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019b. Adversarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599. Piotr Niewinski, Maria Pszona, and Maria Janicka. 2019. GEM: Generative enhanced model for adversarial attacks. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 20–26, Hong Kong, China. Association for Computational Linguistics. Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, and Junji Tomita. 2019. Answering while summarizing: Multi-task learning for multi-hop QA with evidence extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2335–2345, Florence, Italy. Association for Computational Linguistics. Joonsuk Park and Claire Cardie. 2014. Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29–38, Baltimore, Maryland. Association for Computational Linguistics. Dean Pomerleau and Delip Rao. 2017. Fake news challenge. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937, Copenhagen, Denmark. Association for Computational Linguistics. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085–1097, Florence, Italy. Association for Computational Linguistics. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. 
Semantically equivalent adversarial rules for debugging nlp models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865. Dominik Stammbach and Guenter Neumann. 2019. Team DOMLIN: Exploiting evidence enhancement for the FEVER shared task. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 105–109, Hong Kong, China. Association for Computational Linguistics. Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 885– 895. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc. 8604 James Thorne and Andreas Vlachos. 2017. An extensible framework for verification of numerical claims. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 37–40, Valencia, Spain. Association for Computational Linguistics. James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3346–3359. Association for Computational Linguistics. James Thorne and Andreas Vlachos. 2019. Adversarial attacks against fact extraction and verification. arXiv preprint arXiv:1903.05543. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819. Association for Computational Linguistics. Santosh Tokala, G Vishal, Avirup Saha, and Niloy Ganguly. 2019. Attentivechecker: A bi-directional attention flow mechanism for fact verification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2218–2222. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. Order matters: Sequence to sequence for sets. In International Conference on Learning Representations (ICLR). Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18–22. Association for Computational Linguistics. Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146–1151. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. William Yang Wang. 2017. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 422–426. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Ronald J. Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Comput., 1(2):270–280. Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 105–114, Brussels, Belgium. Association for Computational Linguistics. Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. UCL machine reading group: Four factor framework for fact finding (HexaF). In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 97–102, Brussels, Belgium. Association for Computational Linguistics. Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 892–901, Florence, Italy. Association for Computational Linguistics. A Examples of Attack Types Table 9 displays examples for each type of attack. The multi-propositional examples include attacks for CONJUNCTION, MULTI-HOP REASONING, and ADDITIONAL UNVERIFIABLE PROPOSITIONS. For temporal reasoning, we provide examples for DATE MANIPULATION and MULTI-HOP TEMPORAL REASONING. The lexical variation examples consist of ENTITY DISAMBIGUATION and LEXICAL SUBSTITUTION. 8605 Attack Type Example Claim Label Evidence Conjunction Blue Jasmine has Sally Hawkins acting in it and Blue Jasmine was filmed in San Francisco. NEI N/A Multi-Hop Reasoning Goosebumps was directed by Rob Letterman the person who co-wrote Shark Tale. S [Goosebumps (film)] It was directed by Rob Letterman, and written by Darren Lemke, based from a story by Scott Alexander and Larry Karaszewski. [Rob Letterman] Before Letterman’s film subjects took him into outer space with Monsters vs. Aliens (2009), he was taken underwater, having co-directed and cowritten Shark Tale. Additional Unverifiable Propositions Roswell is an American TV series with 61 episodes. NEI N/A Date Manipulation Artpop was Gaga’s second consecutive numberone record in the United States in 2009 before 2010. R [Artpop] Gaga began planning the project in 2011, shortly after the launch of her second studio album, Born This Way. Multi-Hop Temporal Reasoning Lisa Murkowski’s father resigned from the Senate after serving as Senator. 
S [Lisa Murkowski] She is the daughter of former U.S. Senator and Governor of Alaska Frank Murkowski. Murkowski was appointed to the U.S. Senate by her father, Frank Murkowski, who resigned his seat in December 2002 to become the Governor of Alaska. [Frank Murkowski] He was a United States Senator from Alaska from 1981 until 2002 and the eighth Governor of Alaska from 2002 until 2006. Entity Disambiguation Kate Hudson is a left wing political activist S [Kate Hudson (activist)] Katharine Jane “Kate” Hudson (born 1958) is a British left wing political activist and academic who is the General Secretary of the Campaign for Nuclear Disarmament (CND) and National Secretary of Left Unity. Lexical Substitution The Last Song began filming shooting on Monday June 14th 2009. R [The Last Song (film)] Filming lasted from June 15 to August 18, 2009 with much of it occurring on the island´s beach and pier. Table 9: Examples of the seven sub-types of attacks. Claims edited with word substitution or insertion have their changes in bold. Deletions are marked in strikethrough. Wikipedia titles are represented in bold with square brackets. S: SUPPORTS R: REFUTES NEI: NOT ENOUGH INFORMATION 8606 B Hyper-parameters and Experimental Settings We select M = 30 Wikipedia articles using TFIDF when combining with our other candidate document selection methods and select D = 5 after document ranking. We select N = 5 sentences during sentence selection, consistent with the shared task evaluation. B.1 BERT Language Model Fine-Tuning We use version 0.5.0 of the Huggingface library (https://github.com/huggingface/ pytorch-pretrained-BERT) to fine-tune the “BERT-base” model using the default settings. We lowercase all tokens and use the default BERT tokenizer. Document Ranking Our dataset of title and claim pairs (obtained from FV1-train) consists of 140,085 positive examples and 630,265 negative examples in training with approximately 10% set aside for validation (16,016 positive examples and 84,437 negative). As recommended by Devlin et al. (2019), we select hyper-parameters by grid search over 16 and 32 for batch size, 2e-5, 3e-5, and 5e-5 for learning rate, and 3 and 4 for the number of epochs. Sentence Selection Our dataset of sentence and claim pairs (also obtained from FV1-train) consists of 54,431 S relations, 54,592 R relations, and 54,501 NEI relations in training, with approximately 10% set aside for validation (6,139 S relations, 5,984 R relations, and 6,050 NEI relations). We again select hyper-parameters consistent with the recommended best practice. B.2 Pointer Network We train both the document ranking and sentence selection pointer networks on FV1-train with the same hyper-parameters using Adagrad (Duchi et al., 2011) with a learning rate of 0.01, a batch size of 16, and a maximum of 140 epochs with early stopping on FV1-dev. The dimension of the pointer network LSTM hidden state is set to 200 with q = 3 hops over the memory. We use a beam width of 5 during inference. The MLP used to predict relations has a hidden layer dimensionality of 200 and we set λ = 1. B.3 Reinforcement Learning To make the sentence extractor an RL agent, we can formulate a Markov Decision Process (MDP): at each extraction step t, given a claim c, the agent observes the current state and samples an action from Equation 3 to extract a document sentence s, predict the relation label l and receive a reward r(t + 1) = FEVER(c, s, l). 
We train using REINFORCE, adapted with an Actor-Critic to minimize variance (detailed by Chen and Bansal (2018)). As RL often requires pre-training, we combine the pointer network loss from Equation 3 with the RL loss L(θ_rl) and the relation prediction loss L(θ_rel): L(θ) = λ_1 L(θ_ptr) + λ_2 L(θ_rl) + L(θ_rel) (7). We set both λ_1 = 1 and λ_2 = 1.
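A minimal sketch of how the combined objective in Equation 7 could be assembled is shown below, assuming PyTorch and illustrative tensor names; the actor-critic term and the FEVER reward enter through the RL loss. This is a sketch under those assumptions, not the authors' exact training code.

# Hedged sketch of the combined objective (Eq. 7): supervised pointer loss,
# REINFORCE-with-baseline (actor-critic) loss using the FEVER reward,
# and relation-prediction loss. All tensor names are illustrative.
import torch
import torch.nn.functional as F

def combined_loss(ptr_log_probs, gold_ptr_targets,
                  sampled_log_probs, rewards, baseline_values,
                  rel_logits, gold_relations,
                  lambda1=1.0, lambda2=1.0):
    # Supervised pointer-network loss on gold evidence positions
    loss_ptr = F.nll_loss(ptr_log_probs, gold_ptr_targets)

    # Actor: REINFORCE with the critic's value as a baseline
    advantage = (rewards - baseline_values).detach()
    loss_actor = -(sampled_log_probs * advantage).mean()
    # Critic: regress the baseline toward the observed FEVER reward
    loss_critic = F.mse_loss(baseline_values, rewards)
    loss_rl = loss_actor + loss_critic

    # Relation (S/R/NEI) prediction loss from the MLP
    loss_rel = F.cross_entropy(rel_logits, gold_relations)

    return lambda1 * loss_ptr + lambda2 * loss_rl + loss_rel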
2020
761
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8607–8613 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8607 Let Me Choose: From Verbal Context to Font Selection Amirreza Shirani†, Franck Dernoncourt‡, Jose Echevarria‡, Paul Asente‡, Nedim Lipka‡ and Thamar Solorio† †University of Houston ‡Adobe Research †{ashirani,tsolorio}@uh.edu ‡{franck.dernoncourt,echevarr,asente,lipka}@adobe.com Abstract In this paper, we aim to learn associations between visual attributes of fonts and the verbal context of the texts they are typically applied to. Compared to related work leveraging the surrounding visual context, we choose to focus only on the input text as this can enable new applications for which the text is the only visual element in the document. We introduce a new dataset, containing examples of different topics in social media posts and ads, labeled through crowd-sourcing. Due to the subjective nature of the task, multiple fonts might be perceived as acceptable for an input text, which makes this problem challenging. To this end, we investigate different end-to-end models to learn label distributions on crowd-sourced data and capture inter-subjectivity across all annotations. 1 Introduction In visual designs, textual information requires the use of fonts with different properties. Whether it is books, magazines, flyers, ads or social media posts, different typefaces are commonly used to express non-verbal information and add more dimensions to the text. An appropriate font usually embodies information about character, context and usage of the design (Doyle and Bottomley, 2006). This motivates us to explore font associations with regular users in a crowd-sourced setting. In other words, we investigate how users relate fonts to different characteristics of the input text. Current font selection interfaces such as O’Donovan et al. (2014) and commercial online services (e.g., MyFonts1 and Typekit2) assist users in selecting fonts by taking into account font similarity. However, they do not consider the verbal 1www.myfonts.com 2https://fonts.adobe.com/ context of the input text. Having a better understanding of the input text, users can benefit from a font recommendation system during authoring, saving time and avoiding tedious exploration of long lists of fonts. Most graphic designers agree that there is no strict or universally-accepted rule for choosing fonts. Different social and personal factors can be involved in typeface selection, which makes this process subjective. However, there seems to be enough agreement among human opinions to build reasonably effective models of font properties (O’Donovan et al., 2014; Shinahara et al., 2019). Several empirical studies have directly explored the relationship between fonts and texts (Shinahara et al., 2019; Henderson et al., 2004; Mackiewicz, 2007). For example, Brumberger (2003a) indicates that readers have strong opinions about the appropriateness of particular typefaces for particular text passages, and they can differentiate typeface/text mismatches. In this study, we aim to model for the first time the associations between visual font attributes and textual context, with the final goal of better font recommendation during text composition. 
Our main contributions are: 1) We propose and formulate a new task: “font recommendation from written text.” 2) We introduce a new dataset, Short Text Font Dataset, containing a variety of text examples annotated with ten different representative fonts. 3) We compare different end-to-end models that exploit contextual and emotional representations of the input text to recommend fonts. These models are able to capture inter-subjectivity among all annotations by learning label distributions during the training phase. We show that emotional representations can be successfully used to capture the underlying characteristics of sentences to suggest proper fonts. 8608 2 Related Work Font-related studies have been extensively explored in graphic design literature. Shinahara et al. (2019) performed an empirical study on collections of book titles and online ads, showcasing trends relating typographic design and genre. Several previous studies have attempted to associate personality traits and fonts (O’Donovan et al., 2014; Brumberger, 2003b; Juni and Gross, 2008; Mackiewicz and Moeller, 2005; Amare and Manning, 2012). They support the idea of typefaces consistently perceived to have particular personas, emotions, or tones. More recently, FontLex (Kulahcioglu and De Melo, 2018) was the first to find the association between fonts and words by utilizing font-emotion and word-emotion relationships. Instead of focusing on independent words, our proposed model suggests fonts by considering the broader context of the whole text. Task Subjectivity In some tasks, aggregated annotations always correspond to the correct answer (Brew et al., 2010). Therefore, to fully utilize the crowd’s knowledge, different approaches have been proposed to aggregate labels, from simply applying majority voting to more sophisticated strategies to assess annotators’ reliability (Yang et al., 2018; Srinivasan and Chander, 2019; Rodrigues et al., 2014). All of these methods rely on the assumption that only one answer is correct and should be considered as ground truth (Nguyen et al., 2016). Whereas in tasks like ours, sentiment analysis (Brew et al., 2010) or facial expression (Barsoum et al., 2016), the answer is likely to be more subjective due to its non-deterministic nature (Urkullu et al., 2019). We follow previous studies that successfully employed label distribution learning to handle ambiguity in the annotations (Geng et al., 2013; Shirani et al., 2019; Yang et al., 2015). 3 Font Dataset The proposed dataset includes 1,309 short text instances from Adobe Spark3. The dataset is a collection of publicly available sample texts created by different designers. It covers a variety of topics found in posters, flyers, motivational quotes and advertisements.4 3https://spark.adobe.com. 4The dataset along with the annotations can be found online: https://github.com/RiTUAL-UH/ Font-prediction-dataset Choice of Fonts A vast number of fonts and typefaces are used in contemporary printed literature. To narrow down the task, we had a font expert select a set of 10 display fonts that cover a wide range of trending styles. These fonts display enough differentiation in visual attributes and typical use cases to cover the topics in our text samples. Figure 1 shows several examples from the dataset, each rendered with the most congruent font (font with the highest agreement). Figure 1: Examples from our collected dataset visualized through fonts with the highest annotation agreements. 
Annotation Process In an MTurk experiment, we asked nine annotators to label each sample text by selecting their top three fonts (Figure 2). Workers were asked to choose suitable fonts after reading the sentence. We included carefully-designed quality questions in 10 percent of the hits to monitor the quality of our labeling. We also needed to ensure workers selected fonts based on the comprehension of the text rather than just personal preference. Therefore, we removed the annotations of workers who selected the same font more than 90 percent of the time, resulting in six to eight annotations per instance (we removed instances with fewer than six annotations). As we mentioned earlier, we asked annotators to rank their top three font choices for each text in our dataset. We decided to treat the first, second, and third choices differently as they represent the workers’ priorities. Therefore, we give the highest weight to the first choices (1.0) and lower weights (0.6) and (0.3) to the second and third choices, respectively. Figure 3 shows three examples with label distributions over 10 fonts. By comparing the label distributions of these examples, we can observe that ‘formal’ fonts like F0, F2, and F5 are often selected in business contexts (left). ‘modern/display’ fonts like F1, F3, and F8 are favored in more casual settings (center), and ‘script’ fonts like 8609 Figure 2: A text sample from the dataset rendered using the available 10 fonts for labelling. F0) Source Sans Pro, F1) Blakely, F2) FF Ernestine Pro, F3) FF Market Web, F4) Bickham Script Pro 3, F5) Burbank Big, F6) Fresno, F7) Sneakers Script Narrow, F8) Felt Tip Roman, F9) Pauline Figure 3: Label distributions for three examples Figure 4: Average label distribution of the entire corpus F4, F8, and F9 are preferred for more emotional contexts (right). We observe that some fonts are more popular than others. Figure 4 shows the average label distribution over all instances. F3, F2, and F1 are the most popular, while F4, F8, and F9 are the least popular among all 10 fonts. Statistics The dataset contains 8,211 tokens. The mean and standard deviation number of tokens per instance is 6.27 and 4.65, ranging from 1 to 27 tokens. We obtained a Fleiss kappa agreement (Fleiss, 1971) of 0.348 by taking into account all three choices. This value is reasonable for a task such as this since previous subjective tasks have also reported low inter-rater agreement scores (Salminen et al., 2018; Alonso et al., 2014). We split up the data randomly into training (70%), development (10%) and test (20%) sets for further experimentation and evaluation. 4 Methodology Task Definition Given a piece of text X, we want to determine which font(s) y = {y0, ...y9} are more appropriate or congruent with the properties of the input text. We formulate this problem as a ranking problem where the model assigns each font a real value dx y, representing the degree to which y describes X. In other words, dx y represents the degree of congruency of font y with input X. The values for all the labels are summed up to 1 to fully describe the instance (Geng, 2016). 4.1 Model We explore transfer learning from pre-trained models to improve the performance of our task. We investigate four different deep learning-based architectures to learn font distributions of examples in our dataset. 
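Before turning to the architectures, a minimal sketch of how the weighted label distributions described above could be constructed from the ranked annotations is given below; the input format and function name are assumptions for illustration, not the exact pipeline used to build the dataset.

# Hedged sketch: turn each worker's ranked top-3 font choices into a
# per-headline label distribution, weighting 1st/2nd/3rd choices by
# 1.0/0.6/0.3 and normalizing so the ten values sum to 1.
import numpy as np

NUM_FONTS = 10
RANK_WEIGHTS = [1.0, 0.6, 0.3]  # first, second, third choice

def label_distribution(annotations):
    # annotations: one ranking of font ids per worker, e.g. [[3, 1, 8], [2, 3, 4], ...]
    scores = np.zeros(NUM_FONTS)
    for ranking in annotations:
        for rank, font_id in enumerate(ranking[:3]):
            scores[font_id] += RANK_WEIGHTS[rank]
    return scores / scores.sum()  # ground-truth distribution used for KL training

# Example: six workers annotating one headline
dist = label_distribution([[3, 1, 8], [3, 2, 1], [1, 3, 5],
                           [3, 8, 1], [2, 3, 0], [3, 1, 2]])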
Inspired by previous works, which supported the relationship between font and emotion (Section 2), we compare the effectiveness of emotional embeddings in our models to contextual embeddings like BERT.5 GloVe-BiLSTM Model In this model, we use GloVe embeddings (Pennington et al., 2014) as input and a BiLSTM layer to encode word sequence information in forward and backward directions. Subsequently, we pass the encoded words to two dense layers for prediction. NRC Model Similar to the GloVe-BiLSTM Model, this model is LSTM-based. The difference is that instead of GloVe embeddings, we use the emotional representations of words from the NRC Emotion (Mohammad and Turney, 2013), Intensity (Mohammad, 2018b), and Valence, Arousal, and Dominance (VAD) (Mohammad, 2018a) lexicons as input to the model. To efficiently look up the emotion value of words, we search for the stemmed and synonym versions of out-of-vocabulary words. BERT Model We use the pre-trained BERT sequence classification model (Devlin et al., 2018) to obtain contextual embeddings as features. The output is then fed to two dense layers yielding the class predictions. We implement our model based on Hugging Face's BERT implementation (Wolf et al., 2019). Emoji Model In this model, we use the pre-trained DeepMoji model (Felbo et al., 2017) to generate emoji vectors by encoding the text into 2304-dimensional feature vectors. We treat these features as embeddings and pass them to the model with two dense layers. DeepMoji6 is a sentence-level model containing rich representations of emotional content, trained on a corpus of 1,246 million tweets for the emoji prediction task. 5The implementation is available online: https://github.com/RiTUAL-UH/Font_LDL_2020 6Our implementation is based on the Hugging Face Torchmoji implementation, https://github.com/huggingface/torchMoji Figure 5: Font-Emoji Pearson Correlation Coefficient Heatmap 5 Experimental Settings and Results 5.1 Training Details The Kullback-Leibler Divergence (KL-DIV) (Kullback and Leibler, 1951) is used as the loss function to train the models. KL-DIV measures how different the predicted probability distribution is from the ground-truth probability distribution. To train all the models, we use the Adam optimizer (Kingma and Ba, 2014) to optimize the model parameters. We run all models over four runs with different random seeds and report the averaged score to ensure stability. The reported test results correspond to the models with the best accuracy on the validation set. 5.2 Evaluation Settings We evaluate the performance using two different evaluation metrics for this new task. Font Recall (FR) Less popular fonts could be underrepresented by the models; therefore, we need an evaluation metric that measures the performance of the models in learning individual labels. Since we are dealing with an unbalanced dataset, motivated by the evaluation methodology used in previous recommendation systems such as Kar et al. (2018) and Carneiro et al. (2007), we compute Font Recall, i.e., the average recall per font: FR := (Σ_{i=1}^{|F|} R_i) / |F|, where |F| is the number of labels and R_i is the recall for the i-th font. F-score For each instance X from the test set, we select the top k = {1, 3, 5} fonts with the highest probabilities from both the ground-truth and prediction distributions. Then we compute the weighted-average F1-score for each k. Note that there are many cases where two or more fonts have the exact same probability.
In this case, if the model predicts either one of the labels, we consider it as a correct answer in both metrics. 5.3 Results Model/Evals FR Top3 FR Top5 F-Top1 F-Top3 F-Top5 Majority Baseline 30.00 50.00 12.44 43.72 62.24 NRC Model 30.78 51.60 23.10 47.27 66.16 GloVe Model 32.71 53.74 25.95 51.29 68.29 Emoji Model 33.17 54.06 26.00 51.43 68.53 BERT Model 33.54 56.00 26.97 51.91 69.38 Table 1: Experimental results for all five models. FR represents Font Recall and F represents F-1 score. The results in bold are statistically significant compared to the Majority Baseline. Table 1 compares different models in terms of five evaluation settings. The first two columns of the results show FR for the top 3 and 5 fonts. The other three columns show F-score for the top 1, 3 8611 and 5 fonts. Comparing to the Majority Baseline, the results from the Emoji and BERT models are statistically significant under paired t-test with 95% confidence interval. Although the BERT model performs slightly better than the rest, the Emoji model performs just as well, which suggests two things: (1) the font recommendation task is highly related to what emojis represent and 2) a simpler model like Emoji model can perform similarly to a complex solution like BERT. We analyze the reason behind the effectiveness of the Emoji model by visualizing the FontEmoji Pearson Correlation Coefficient Heatmap (Figure 5) in the training set. Interestingly, fonts F4 and F9 with a ‘Script’ style are highly correlated by ‘Heart’ and ‘Love’ emojis. Also, F3 with a ‘Playful’ style is negatively correlated with emojis with discomfort and mild irritation expressions. Data Augmentation A well-established technique for automatic data augmentation is leveraging machine translation to find meaning-equivalent phrases in a single language (Mallinson et al., 2017; Coulombe, 2018). To mitigate the highly imbalanced class distribution in our data set, we tried over- and under-sampling techniques. We selected examples with high values in underrepresented classes and translated them to four non-English languages using Google Translate7. We then translated these examples back to English, resulting in 170 more examples. We also removed 50 instances with high values in the popular classes. We observed that the data augmentation process has marginal improvements (up to 1%) in some models. We leave the exploration of more sophisticated data augmentation approaches for future work. 6 Conclusion In this paper, we associated font with written text and tackle the problem of font recommendation from the input text. We collected more than 1,300 short written texts and annotated them with ten fonts. We formulated this task as a ranking problem and compared different models based on emotional and contextual representations that exploit label distribution learning to predict fonts. The current approach covers a fixed number of fonts, but it can be extended to support a larger set of fonts. For example, we can use font similarity techniques and enable users to pick a group of 7https://cloud.google.com/translate/docs/apis fonts, or to provide increased flexibility for the fonts available to users. Acknowledgments This research began during an internship at Adobe Research, and was sponsored in part by Adobe Research. We thank the reviewers for their thoughtful comments and efforts towards improving our work. We also thank Tim Brown for his help with font set selection. References Omar Alonso, Catherine Marshall, and Marc Najork. 2014. 
Crowdsourcing a subjective labeling task: a human-centered framework to ensure reliable results. Microsoft Res., Redmond, WA, USA, Tech. Rep. MSR-TR-2014–91. Nicole Amare and Alan Manning. 2012. Seeing typeface personality: Emotional responses to form as tone. In 2012 IEEE International Professional Communication Conference, pages 1–9. IEEE. Emad Barsoum, Cha Zhang, Cristian Canton Ferrer, and Zhengyou Zhang. 2016. Training deep networks for facial expression recognition with crowdsourced label distribution. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, pages 279–283. ACM. Anthony Brew, Derek Greene, and P´adraig Cunningham. 2010. Using crowdsourcing and active learning to track sentiment in online media. In ECAI, pages 145–150. Eva R Brumberger. 2003a. The rhetoric of typography: The awareness and impact of typeface appropriateness. Technical communication, 50(2):224–231. Eva R Brumberger. 2003b. The rhetoric of typography: The persona of typeface and text. Technical communication, 50(2):206–223. Gustavo Carneiro, Antoni B Chan, Pedro J Moreno, and Nuno Vasconcelos. 2007. Supervised learning of semantic classes for image annotation and retrieval. IEEE transactions on pattern analysis and machine intelligence, 29(3):394–410. Claude Coulombe. 2018. Text data augmentation made simple by leveraging nlp cloud apis. arXiv preprint arXiv:1812.04718. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. John R Doyle and Paul A Bottomley. 2006. Dressed for the occasion: Font-product congruity in the perception of logotype. Journal of consumer psychology, 16(2):112–123. 8612 Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. Xin Geng. 2016. Label distribution learning. IEEE Transactions on Knowledge and Data Engineering, 28(7):1734–1748. Xin Geng, Chao Yin, and Zhi-Hua Zhou. 2013. Facial age estimation by learning from label distributions. IEEE transactions on pattern analysis and machine intelligence, 35(10):2401–2412. Pamela W Henderson, Joan L Giese, and Joseph A Cote. 2004. Impression management using typeface design. Journal of marketing, 68(4):60–72. Samuel Juni and Julie S Gross. 2008. Emotional and persuasive perception of fonts. Perceptual and motor skills, 106(1):35–42. Sudipta Kar, Suraj Maharjan, and Thamar Solorio. 2018. Folksonomication: Predicting tags for movies from plot synopses using emotion flow encoded neural network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2879–2891. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Tugba Kulahcioglu and Gerard De Melo. 2018. Fontlex: A typographical lexicon based on affective associations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79–86. Jo Mackiewicz. 2007. Audience perceptions of fonts in projected powerpoint text slides. 
Technical communication, 54(3):295–307. Jo Mackiewicz and Rachel Moeller. 2005. Why people perceive typefaces to have different personalities. In International Professional Communication Conference, 2004. IPCC 2004. Proceedings., pages 304–313. IEEE. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893. Saif M. Mohammad. 2018a. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 english words. In Proceedings of The Annual Conference of the Association for Computational Linguistics (ACL), Melbourne, Australia. Saif M. Mohammad. 2018b. Word affect intensities. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan. Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence, 29(3):436–465. An Thanh Nguyen, Matthew Halpern, Byron C Wallace, and Matthew Lease. 2016. Probabilistic modeling for crowdsourcing partially-subjective ratings. In Fourth AAAI Conference on Human Computation and Crowdsourcing. Peter O’Donovan, J¯anis L¯ıbeks, Aseem Agarwala, and Aaron Hertzmann. 2014. Exploratory font selection using crowdsourced attributes. ACM Transactions on Graphics (TOG), 33(4):92. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine learning, 95(2):165–181. Joni O Salminen, Hind A Al-Merekhi, Partha Dey, and Bernard J Jansen. 2018. Inter-rater agreement for social computing studies. In 2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), pages 80–87. IEEE. Yuto Shinahara, Takuro Karamatsu, Daisuke Harada, Kota Yamaguchi, and Seiichi Uchida. 2019. Serif or sans: Visual font analytics on book covers and online advertisements. arXiv preprint arXiv:1906.10269. Amirreza Shirani, Franck Dernoncourt, Paul Asente, Nedim Lipka, Seokhwan Kim, Jose Echevarria, and Thamar Solorio. 2019. Learning emphasis selection for written text in visual media from crowd-sourced label distributions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1167–1172. Ramya Srinivasan and Ajay Chander. 2019. Crowdsourcing in the absence of ground truth–a case study. arXiv preprint arXiv:1906.07254. A Urkullu, A Perez, and B Calvo. 2019. On the evaluation and selection of classifier learning algorithms with crowdsourced data. Applied Soft Computing, 80:832–844. 8613 Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Jie Yang, Thomas Drake, Andreas Damianou, and Yoelle Maarek. 2018. Leveraging crowdsourcing data for deep active learning an application: Learning intents in alexa. In Proceedings of the 2018 World Wide Web Conference, pages 23–32. International World Wide Web Conferences Steering Committee. 
Xu Yang, Bin-Bin Gao, Chao Xing, Zeng-Wei Huo, Xiu-Shen Wei, Ying Zhou, Jianxin Wu, and Xin Geng. 2015. Deep label distribution learning for apparent age estimation. In Proceedings of the IEEE international conference on computer vision workshops, pages 102–108.
2020
762
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8614–8624 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8614 Multi-Label and Multilingual News Framing Analysis Afra Feyza Akyürek1, Lei Guo2,1, Randa Elanwar3, Prakash Ishwar4,1, Margrit Betke1 and Derry T. Wijaya1 1Department of Computer Science, Boston University 2College of Communication, Boston University 3Electronics Research Institute, Egypt 4Department of Electrical and Computer Engineering, Boston University {akyurek,guolei,relanwar,pi,betke,wijaya}@bu.edu Abstract News framing refers to the practice in which aspects of specific issues are highlighted in the news to promote a particular interpretation. In NLP, although recent works have studied framing in English news, few have studied how the analysis can be extended to other languages and in a multi-label setting. In this work, we explore multilingual transfer learning to detect multiple frames from just the news headline in a genuinely low-resource context where there are few/no frame annotations in the target language. We propose a novel method that can leverage elementary resources consisting of a dictionary and few annotations to detect frames in the target language. Our method performs comparably or better than translating the entire target language headline to the source language for which we have annotated data. This work opens up an exciting new capability of scaling up frame analysis to many languages, even those without existing translation technologies. Lastly, we apply our method to detect frames on the issue of U.S. gun violence in multiple languages and obtain exciting insights on the relationship between different frames of the same problem across different countries with different languages. 1 Introduction The worldwide image of the United States has dropped precipitously during the past few years (Wike et al., 2018). Among other factors, the increasing number of gun violence incidents appears to affect the U.S. reputation abroad. Whenever a fatal mass shooting happens, it often attracts significant international news attention. While the domestic U.S. news media often links gun violence to individual shooters’ mental illness (DeFoster and Swalve, 2018; Liu et al., 2019), foreign media may attribute it to U.S. gun policy and its gun culture e.g., (Atkinson, 2019). This phenomenon is known as media framing, which is the process of selecting “some aspects of a perceived reality and [making] them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item” (Entman, 1993). When foreign media frame the gun violence issue in a way to depict the U.S. as an unsafe and undesired place, it erodes the country’s “soft power” (Nye Jr, 2004). Evaluating how different countries frame the U.S. gun violence issue will enrich our understanding of the U.S. soft power in particular and international relations in general. In this work, we develop a multilingual approach to automatically detect frames in news coverage of different languages, thus facilitating the analysis of how different countries with different languages frame a particular issue. Aside from enabling this understanding of foreign public opinion regarding a certain issue or nation, a multilingual approach is essential in media framing analysis, as it is also an understudied problem in many parts of the world. 
Given frame-annotated news headlines of a particular topic in a source language (e.g., English), our approach uses word-to-word translation to translate keywords that are indicative of the frames in these headlines to a target language. Then, we fine-tune a state-of-the-art multilingual language model MultiBERT (Devlin et al., 2019) to detect frames on these “code-switched” headlines, combined with a few annotated headlines from the target language. The translated keywords and a fewshot examples act as anchors to adapt MultiBERT to detect frames in the target language. This approach performs comparably if not better than a model trained on the source language and tested on headlines that are translated from the target language to the source. Since our approach requires only simple resources – a dictionary and a few (≤40) annotated examples in the target language 8615 – it is handy for many languages. Moreover, considering the significant improvement gained over the zero-shot transfer, the proposed approach is much more reliable for languages without existing translation technologies or expert annotations. Due to the subtle nature of framing, it is not uncommon for one news article to involve more than one message. Communication researchers have suggested that the association of different constructs, such as issues and frames in the news, will influence how the audience associate these elements, thus determining how they perceive the world (Guo and McCombs, 2015). The Network Agenda Setting Model suggests that examining the interrelationships between media elements enables researchers to measure media effects in a more nuanced manner. Note that some frames appear more often than others. In this work, we formulate our frame detection model to allow for multilabel frame detection while also addressing the imbalance in the frame distribution by adapting focal loss (Lin et al., 2017) into our multi-label setting. Our multi-label approach allows for the examination of frame co-occurring, or “associative frames” (Schultz et al., 2012), across the news articles. Overall, the contribution of this work are manifold: (1) We devise a novel code-switch few-shot scheme to train a frame detection model for any language. (2) We extend the formulation of the frame classification problem and focal loss to a multi-label setting, allowing the model to predict multiple frames for each instance. (3) We use our multilingual multi-label frame detection model to detect frames in news headlines pertaining to U.S. gun violence issue in multiple countries and languages, and obtain interesting insights on how other countries view the gun violence issue in the U.S. and how frames are related across news articles in different countries with different languages.1 2 Background and Related Work Today’s international politics not only revolve around military and economic influence but also largely depend on a country’s soft power (Nye Jr, 2004). For each nation, constructing a positive 1Code and data are available at https://github. com/feyzaakyurek/newsframing country image to the outside world is crucial to ensure its international competitiveness in this global information society (Buhmann and Ingenhoff, 2015). In this light, more and more governments have realized the importance of public diplomacy, making great efforts to promote their countries’ values and perspectives to foreign publics (Entman, 2008; Golan and Himelboim, 2016). However, these efforts are not always successful. 
Editors of international news media serve as the gatekeepers to decisions which may lead to the framing of a given country contrary to how its government intends. In reporting news about a foreign country, news editors and reporters make conscious or unconscious choices to emphasize specific issues, or emphasize certain aspects of a given topic, which may alter the country’s image in the minds of their audience. A multilingual approach is essential to analyze media framing in different parts of the world, which will shed light on foreign public opinion regarding a particular nation. Communication researchers often rely on manual content analysis to examine media framing in news outlets of different languages (H. De Vreese, 2001). One critique for this type of study is that researchers tend to decide countries for review based on languages spoken in the research team rather than theoretical rationales. This language constraint becomes a more significant challenge in this increasingly globalized media landscape; capturing a holistic picture of international communication would require the analysis of news coverage in a larger number of languages. Arguably, an automatic, multilingual approach of framing analysis would greatly benefit the international communication research community. In NLP, language models have been effectively fine-tuned or used in downstream tasks such as text classification (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018). Further, the introduction of deep contextual language embedding such ELMO (Peters et al., 2018), which uses bi-directional LSTMs and BERT (Bi-directional Encoder Representations from Transformers) (Devlin et al., 2019), has been another milestone in this line of work. BERT is currently one of the state-of-the-art models in language modeling. News framing was first brought to the attention of the computational linguistics community by the Media Frames Corpus (Card et al., 2015), which addresses three issues: immigration, tobacco, and 8616 same-sex marriage. Field et al. (2018) analyzes the framing of the U.S. and agenda-setting in Russian news. Our work is similar to (Field et al., 2018) in terms of using nPMI to find essential words. Furthermore, our work advances previous research by leveraging a multilingual language model, facilitating transfer learning in news framing, and relying on parsimonious resources, that is, 50,000 lexical translations vs. ~350 in our case. The current state-of-the-art model (Liu et al., 2019) for frame detection fine-tunes BERT on frame-annotated English news headlines with the standard multiclass focal loss objective (Lin et al., 2017). Their approach predicts only a single frame, which is insufficient given the multifaceted nature of news framing in which multiple frames often co-occur in the same headline. Indeed, more than a quarter of the Gun Violence Frame Corpus (GVFC) has more than one frame (Liu et al., 2019). In this work, we fine-tune MultiBERT to detect frames in multiple languages’ headlines with our multi-label focal loss. Our approach can predict (and be evaluated on) multiple frames for each headline, which is a more complex task while being comparable to their work in terms of the average F1 performance. Similar to their work, we detect frames on news headlines as they provide the most direct clue to the potential influence of the news coverage. 3 Dataset Creation GVFC is a dataset of news articles from 21 major U.S. news organizations related to U.S. 
gun violence that contains news headlines and their domain-expert frame annotations (Liu et al., 2019). We extend GVFC to include headlines in other languages by following their process of curating GVFC. We first drew our sample of news articles from German-, Turkish-, and Arabic-speaking news websites, using Crimson Hexagon’s ForSight social media analytics platform (Hexagon, 2018), retrieving items that had at least one keyword in their headlines from the following list of words – {“gun”, “firearm”, “NRA”, “2nd amendment”, “second amendment”, “AR15”, “assault weapon”, “rifle”, “Brady act”, “Brady bill”, “mass shooting”} – that have been translated into German, Turkish, and Arabic respectively by native speakers of the languages. In curating the multilingual datasets, we used the same set of frames as in GVFC. We then trained two native speaker coders for each language to apply the GVFC codebook protocol for identifying frames and then measured their intercoder reliability (ICR) in annotating a sample of 350, 200, and 210 German, Turkish, and Arabic news headlines, respectively. The coders achieve 92.6%, 98.5%, 78.1% agreement rates in identifying the first frame and 78.9%, 97.9%, 74.3% agreement rates for the second frame for German, Turkish, and Arabic samples. Additionally, Krippendorff’s Alpha for the 1st frame and the 2nd frame are 0.89, 0.66; 0.90, 0.74, and 0.69, 0.26 for German, Turkish, and Arabic, respectively. Once a minimum of 70% agreement was reached, one coder of each language continued to code more headlines. Annotation resulted in a total of 326, 100, and 388 non-duplicate headlines for German, Turkish, and Arabic. The average number of labels, i.e., label cardinalities, per headline are 1.4, 1.5, and 1.5, for German, Turkish, and Arabic, whereas it’s 1.3 in GVFC, which is in English. As we can observe from the agreement rates, the Arabic data has a relatively weaker ICR, while the Turkish data has the best ICR. As high ICR values imply that two coders consistently categorized the content similarly, they signal a high validity of the coded results. In turn, this is reflected in the performance of our model as it performs the worst in Arabic (Section 5). Nonetheless, the quality of our curated data is substantially higher – the average of Krippendorff’s alpha is 0.82 – than contemporaries such as MFC (which is only in English) with an average alpha of less than 0.6 (Card et al., 2015). 4 Model In this work, we extend the current state-of-theart model on the GVFC (Liu et al., 2019), which predicts only the first frame, into a multi-label approach and evaluate it across multiple languages. As previous work has showcased that BERT surpasses LSTM and GRU-based architectures, we shift our focus in this work from architecture optimization to scalability of news framing analysis across multiple languages in a multi-label setting. BERT relies on multiple stacks of the Transformer’s encoder blocks (Devlin et al., 2019; Vaswani et al., 2017) to learn vector representations of sentences. A single encoder block is composed of a self-attention layer followed by a fullyconnected layer. When a sentence – a sequence of tokens – is fed into the encoder, it passes through an embedding layer, a self-attention layer, and fullyconnected layers before being passed to the upper 8617 encoder block. The self-attention layer embodies three matrices called W Q for the query, W K for the key, and W V for the value. 
Each of these matrices projects a token representation of size hidden_size into a query, key, or value vector, so that every token in the input sequence has corresponding q, k, and v vectors. Representations for each token are contextualized; namely, the representation of a token is a weighted average of the value vectors of all tokens in the sequence. Therefore, the vector representation for token $x_i$ is given by $\mathrm{vec\_rep}(x_i) = \sum_{j \in S} \mathrm{Softmax}\!\left(q_i \cdot k_j / \sqrt{d}\right) v_j$, where $d$ is the size of the key vectors in $W^K$ and $S$ is the set of all tokens in the same sequence as $x_i$, including $x_i$ itself. BERT adds a special classification token [CLS] at the beginning of each sequence. It then learns the representation of this token and of the other tokens in the sequence by training on a Wikipedia corpus with two language tasks: next sentence prediction and masked language modeling (MLM), which was initially inspired by the Cloze task (Taylor, 1953). The contextual representation of the [CLS] token encodes the syntactic and semantic constructs of the sequence, and one can fine-tune BERT for various downstream tasks. Fine-tuning BERT performs well on new tasks even with small datasets, which can be attributed to the data-efficient deep attention mechanism (Devlin et al., 2019; Vinyals et al., 2015). The knowledge encoded within the vector representations of the tokens through pre-training also helps the classifier with the language understanding part of the task, reducing the need for a larger dataset. Finally, a multilingual version of pre-trained BERT, MultiBERT, which is trained on the entire Wikipedia dumps of the 104 languages with the largest Wikipedias, has recently been released, making it an excellent candidate for scaling to multiple languages. The multilingual pre-training and the use of sub-word tokenization allow MultiBERT to represent sequences from any of these 104 languages (Gu et al., 2018) and enable zero-shot classification on any of them (i.e., train on one language and test on another). In our case, since reproducing in other languages the annotation effort put into GVFC, which was created by highly qualified journalism students, is prohibitive, employing a cross-lingual model such as MultiBERT makes scaling to other languages possible. 4.1 Multi-label News Frame Detection For frame detection purposes, we classify news articles into nine frame categories based on their headlines. Devlin et al. (2019) recommend using the embedding generated for the special token [CLS], which is prepended to the beginning of every sequence. All token representations, including that of [CLS], have dimension H = 768. The representation of [CLS] is generated by attending to every word in the sequence. We modify BERT by appending a fully connected layer which acts as a classifier, taking in the embedding generated for [CLS] after 12 encoder layers and mapping it to K = 9 output neurons. Hence, the only parameters trained from scratch during fine-tuning are those of the classifier layer, $W \in \mathbb{R}^{H \times K}$. Finally, we use sigmoid activations to obtain nine outputs, each between 0 and 1, which are interpreted as scores for the nine classes. During inference, we use a threshold of 0.5 on these scores to binarize the output.
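To make this classification setup concrete, the following is a minimal sketch (not the authors' released code) of a multi-label frame classifier on top of multilingual BERT, written with PyTorch and a recent version of the HuggingFace transformers library; the class and variable names are illustrative, and the example headline is the one quoted later in Section 4.2.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MultiLabelFrameClassifier(nn.Module):
    """Maps the [CLS] representation to K sigmoid scores, one per frame."""
    def __init__(self, model_name="bert-base-multilingual-cased", num_frames=9):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_frames)  # W in R^{H x K}

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]       # embedding of the [CLS] token
        return torch.sigmoid(self.classifier(cls_repr))  # nine scores in (0, 1)

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = MultiLabelFrameClassifier()
batch = tokenizer(["Florida shooter a troubled loner with white supremacist ties"],
                  return_tensors="pt", padding=True, truncation=True, max_length=128)
scores = model(batch["input_ids"], batch["attention_mask"])
predictions = (scores > 0.5).int()  # the 0.5 threshold binarizes the output
```

In training one would typically drop the final sigmoid and feed the raw logits to the loss, whether binary cross-entropy or the focal loss described next, for numerical stability.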
We fine-tune MultiBERT with two different losses: the standard binary cross-entropy loss and a multi-label variation of the weighted focal loss (Lin et al., 2017). We compute the binary cross-entropy (BCE) loss, also known as the sigmoid cross-entropy loss, for a single sample $x$ as $\mathrm{BCE}(f) = -\frac{1}{|K|} \sum_{i=1}^{|K|} \left( y^{(i)} \log \hat{y}^{(i)} + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \right)$, where the predictions are given by $\hat{y} = [\hat{y}^{(1)}, \ldots, \hat{y}^{(|K|)}] = \frac{1}{1 + \exp(-f(x))}$, $y = [y^{(1)}, \ldots, y^{(|K|)}]$ are the gold binary labels, and $f$ is BERT with the classifier. Considering the high degree of class imbalance in the GVFC dataset, which becomes even more severe in the multilingual datasets we developed, we adopt a multi-label variation of the binary focal loss (Lin et al., 2017). As a reminder, the focal loss for a single sample $x$ is defined as $\mathrm{FL}(f) = -\alpha (1 - p)^2 \log(p)$, where $p = (1-y)(1-\hat{y}) + y\hat{y}$, $y \in \{0, 1\}$ is the true label, $\hat{y} = 1/(1 + \exp(-f(x))) \in \mathbb{R}$, $f$ is the hypothesis (e.g., a neural network), and $\alpha$ is the balancing factor, usually the normalized inverse class frequency. Hence, the smaller the class, the higher the $\alpha$ and vice versa, which balances the importance of each class's examples. In the multi-label case, we alter the focal loss formulation such that $y$ and $\hat{y}$ become $y \in \{0, 1\}^{|K|}$ and $\hat{y} \in \mathbb{R}^{|K|}$. Moreover, for $\alpha$ we propose using $\alpha = \left[ (\alpha^{(0)}_1, \alpha^{(1)}_1), \ldots, (\alpha^{(0)}_{|K|}, \alpha^{(1)}_{|K|}) \right]$, where $\alpha^{(j)}_k$ is the normalized inverse frequency of the event $y_k = j$ with $j \in \{0, 1\}$. In other words, we interpret each class as *two classes*, either 0 or 1, compute inverse class frequencies for all $2|K|$ classes, and normalize them such that $\sum_{k \in K} \sum_{j \in \{0,1\}} \alpha^{(j)}_k = 1$. We observe that this loss matches BCE in F1 scores and surpasses it in the multi-label accuracy score EM-2 (exact match for two frames) by a significant 11% margin, as shown in Table 1. We use two Binary Relevance approaches, based on Naïve Bayes and MultiBERT respectively, as our baselines. Naïve Bayes is a standard baseline for text classification which leverages Bayes' theorem and uses word frequencies as features (McCallum et al., 1998). For regularization, we apply add-1 smoothing. The standard configuration of Naïve Bayes is multi-class; one intuitive technique for tailoring Naïve Bayes to a multi-label problem is called Binary Relevance (BR). BR trains |K| one-vs-rest classifiers independently, one for each class k ∈ K, on the same dataset. As our second baseline, we train nine binary MultiBERTs in a one-vs-rest manner. 4.2 Multilingual Models The GVFC dataset is composed of 1300 relevant samples on the issue of gun violence and is only available in English. For cross-lingual transfer, we therefore use MultiBERT with the multi-label focal loss, which provides the highest accuracy on English samples that have more than one correct class, by a significant 11% margin (62% vs. 51% in EM-2), while maintaining the same level of F1 scores, as given in Table 1. Firstly, we explore the zero-shot and few-shot performance of our MultiBERT model with focal loss trained on the English dataset, as in rows 2.1 and 2.3 of Table 2. We use German (DE), Arabic (AR), and Turkish (TR) as our target languages to explore the cross-lingual performance of our model on a variety of languages for which we have a validation set but no training set. In our few-shot models, we use an extra 40 samples from the target language, i.e., DE, AR, or TR, and use the same training configurations as in the initial training, which we describe in Section 5.

Model (Loss)                      F1-Macro  F1-Micro  EM-1  EM-2  Top-2  EM-A
MULTICLASS
EngBERT (Liu et al., 2019)          0.77      0.83    0.86  N/A   0.93   0.83
MultiBERT                           0.73      0.79    0.82  N/A   0.89   0.79
MULTI-LABEL
BR w/ Naïve Bayes                   0.58      0.65    0.58  0.29  0.68   0.51
BR w/ MultiBERT (Binary Focal)      0.74      0.82    0.69  0.58  0.87   0.66
EngBERT (ML Focal)                  0.76      0.82    0.71  0.62  0.94   0.69
MultiBERT (ML Focal)                0.76      0.82    0.71  0.62  0.92   0.69
MultiBERT (BCE Loss)                0.76      0.82    0.79  0.51  0.91   0.72

Table 1: English results.
Multiclass models consider only the first frame correct and are evaluated accordingly. EM-1, EM-2, EM-A, Top-2: See Section 5. ML: Multi-Label, BR: Binary Relevance. Furthermore, since the news framing task is fairly a keyword-driven phenomenon (Field et al., 2018), we developed a set of keywords that occur most frequently in a given frame. To this end, we utilize the metric called normalized pointwisemutual information (nPMI) which was suggested by Field et al. (2018). nPMI score for a given frame F and word w is I(F, w) = log P(w|F) P(w) . Both P(w) and P(w|F) are estimated from the training corpus. We determine the set of important words based on nPMI by selecting the top 250 words for each frame – that also have nPMI greater than zero – resulting in 358 total words. We, then, use word-toword translation to code-switch (CS) the English training set with the target language (TL) for these words. In other words, we replace all utterances of “important” words with it’s TL dictionary translation. For instance, a sample headline in the training set that was code-switched with German becomes Florida Schütze ein troubled loner mit Weiß supremacist Bindungen. which originally was "Florida shooter a troubled loner with white supremacist ties" having both frames “mental illness” and “race/ethnicity”. We experiment with using the code-switched data for training in both zero-shot and few-shot, using 40 target language examples. Models based on code-switched training are indicated with CSTL for target language (TL) in Table 2. Code-switched translation is a way of adapting the model to the target language during training. We observed significant improvements or comparable results both in zero-shot and few-shot settings over the model that was trained on the original English data, as demonstrated in Table 2 for all three languages. Furthermore, we explore the effect of translation direction for the news frame detection task using Google Translate in Table 3. 8619 DE AR TR Model F1-Macro F1-Micro EM-1 EM-2 EM-A F1-Macro F1-Micro EM-1 EM-2 EM-A F1-Macro F1-Micro EM-1 EM-2 EM-A Zero-shot (2.1) Train EN, Test TL 0.48 0.66 0.47 0.31 0.39 0.37 0.39 0.38 0.04 0.24 0.50 0.77 0.76 0.29 0.53 (2.2) Train CSTL(EN), Test TL 0.53 0.72 0.64 0.39 0.52 0.42 0.46 0.39 0.06 0.26 0.57 0.82 0.86 0.39 0.63 Few-shot (40 TL samples) (2.3) Train EN, Test TL 0.66 0.75 0.52 0.37 0.44 0.48 0.54 0.41 0.17 0.31 0.77 0.89 0.67 0.73 0.70 (2.4) Train CSTL(EN), Test TL 0.64 0.76 0.59 0.43 0.51 0.53 0.58 0.35 0.19 0.29 0.84 0.92 0.80 0.73 0.77 Table 2: Comparison of pure-English training and code-switched training in zero-shot and few-shot settings. CS: Code-Switched. EN: English. TL: Target Language (DE, AR, or TR). CSY (X): Code-switch X with Y . Underlying models are MultiBERT with ML Focal loss. DE AR TR Setup F1-Macro F1-Micro EM-1 EM-2 EM-A F1-Macro F1-Micro EM-1 EM-2 EM-A F1-Macro F1-Micro EM-1 EM-2 EM-A Train: EN →TL . Test: TL (3.1) MultiBERT 0.59 0.72 0.67 0.33 0.50 0.45 0.49 0.36 0.11 0.26 0.69 0.88 0.82 0.65 0.74 Train: EN. 
Test: TL →EN (3.2) MultiBERT 0.65 0.75 0.72 0.42 0.58 0.50 0.54 0.42 0.10 0.29 0.59 0.84 0.71 0.57 0.64 (3.3) EngBERT Uncased 0.63 0.78 0.75 0.44 0.60 0.52 0.55 0.48 0.13 0.34 0.48 0.78 0.73 0.43 0.58 (3.4) EngBERT Cased 0.53 0.75 0.74 0.41 0.58 0.51 0.54 0.46 0.11 0.32 0.54 0.86 0.75 0.63 0.69 (3.5) Few-shot w/ the best among (3.2), (3.3), (3.4) 0.61 0.79 0.62 0.50 0.56 0.62 0.66 0.48 0.29 0.40 0.70 0.84 0.63 0.57 0.60 Table 3: Exploring the effect of translation between target languages and English (the source) in both directions. We use Google Translate for translation. X →Y : X translated to Y using Google Translate. 5 Experiments and Results As input to our models, we follow previous work and rely on news headlines rather than news story content, due to reasons described by Liu et al. (2019). To showcase the gains made on top of a multi-class approach by reformulating the problem as multi-label, we reproduce the method described by Liu et al. (2019) with both English BERT and MultiBERTs (Table 1). In our implementations involving BERT, we use Adam optimizer with a learning rate of 0.02, a maximum sequence length of 128, and we train for ten epochs. In Table 1, we include experiments that use different configurations of BERT, such as uncased English BERT (EngBERT) and cased Multilingual BERT (MultiBERT) with two different loss functions. Casing decisions were based on previous work (Liu et al., 2019) and recommendations in BERT code repository2. As for losses, we experimented with Binary Cross-Entropy and multi-label Focal Loss, as described in Section 4.1. For evaluation, we follow recent work and report macro and micro-averaged F1-scores (Wu et al., 2019), as well as exact-match (EM) for samples which have single frames (EM-1), two frames (EM2) and any number of frames (EM-A). In Table 1, we also report Top-2 accuracy, which, for a given sample, computes the top two most confident predictions for each model based on the scores for each frame after the last activation layer, and checks whether those comprise the first frame. We report this metric to demonstrate that by switching from a multi-class model to a multi-label one, we retain 2https://github.com/google-research/bert accuracy for the first frame while providing more predictive power with multiple labels. Note that, to accommodate multiple languages, we favor a multilingual language model. Results in Table 1 show that for our application, there is only an insignificant drop in the predictive power from EngBERT to MultiBERT using multi-label Focal Loss (ML Focal). Moreover, Focal Loss results in higher accuracy in EM-2 while maintaining as high F1-scores to canonical BCE Loss. Considering the purposes of this paper, as well as the label cardinalities in other language datasets, we favor ML Focal loss for multilingual models. While being a state-of-the-art machine translation tool, Google Translate is the practitioner’s handy translation guide, (Edunov et al., 2018). In Table 3, we explore the effect of the direction of translation to detect frames in German (DE), Arabic (AR) and Turkish (TR) headlines about US gun violence. Note that in none of the languages is a sufficient size of news framing training data available; thus, to extend framing analysis to multiple languages, cross-lingual transfer learning is needed. Firstly, we translate GVFC from English, to target language TL ∈{DE, AR, TR}, train MultiBERT with ML Focal loss and test on the TL. 
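Since every cross-lingual model in this section is trained with the multi-label focal loss of Section 4.1, a short sketch of one way that loss could be implemented may be useful; this is not the authors' code, and the function names, the clamp value, and the helper that estimates α from the training labels are illustrative assumptions.

```python
import torch

def make_alpha(label_matrix, eps=1e-8):
    """label_matrix: (N, K) binary matrix of training labels.
    Returns (K, 2) normalized inverse frequencies of the events y_k = 0 and y_k = 1."""
    pos = label_matrix.float().mean(dim=0)        # frequency of y_k = 1
    inv = torch.stack([1.0 / (1.0 - pos + eps),   # alpha^(0)_k
                       1.0 / (pos + eps)], dim=1) # alpha^(1)_k
    return inv / inv.sum()                        # entries over all 2*K classes sum to 1

def multilabel_focal_loss(logits, targets, alpha, gamma=2.0, eps=1e-8):
    """logits: (B, K) raw outputs f(x); targets: (B, K) gold binary labels; alpha: (K, 2)."""
    probs = torch.sigmoid(logits)                                  # \hat{y}
    p = (1 - targets) * (1 - probs) + targets * probs              # probability of the gold value
    a = (1 - targets) * alpha[:, 0] + targets * alpha[:, 1]        # pick alpha^(0)_k or alpha^(1)_k
    return (-a * (1 - p).pow(gamma) * torch.log(p.clamp(min=eps))).mean()
```

With γ = 2 and α estimated from the training label matrix this follows the formulation of Section 4.1; replacing α with a constant recovers the unweighted focal loss.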
Secondly, we use the English training set as is and translate target test sets to English. This latter setup lets us use EngBERT as well. We experiment with both cased and uncased models and observe that uncased performs better in DE and AR. Overall, we note that translating test sets to English results in better performance, which is intuitive as the model requires clarity in the language during training. All 8620 models in Tables 3 and 2 use the same loss, and MultiBERT experiments always use the cased version, following the authors’ recommendation. We use 40 target samples of target language, translated to English, and include them in the training set to study few-shot performance. We only train the best performers, primarily based on F1 scores, among (3.2), (3.3) and (3.4), namely the models (3.3), (3.3) and (3.2) for DE, AR and TR, respectively in (Table 3). For some of the metrics the few-shot performance may drop because the new samples come from a different distribution. Furthermore, we compare zero-shot and fewshot performances of MultiBERT when trained on original English versus code-switched train sets in Table 2. Both models use the same set of samples; the difference is that in the former, the headlines are in English, whereas in the latter, "important" words are switched with their TL translations. In a zero-shot setting, code-switched training (2.2) outperforms English training (2.1) significantly for all three languages (F1-macro and F1-micro scores). Considering the few-shot setting, although the improvement gets smaller, the performance of code-switching is on par if not better for all three languages, see (2.3, 2.4). Note that the comparisons we make are primarily based on F1-scores as the model’s capability might shift from predicting single-label cases correctly to predicting more multi-labeled cases correctly as well as between common and rare classes. In German, for instance, code-switched few-shot training improves in F1scores from zero-shot but remains around the same in terms of EM-A. The reason for that is because the model predicts multi-label cases (EM-2) better by 4 percent points, see (2.2), (2.4) in Table 2. Notably, considering Tables 2 and 3 together, a simple word-to-word translation for as little as 358 words, improves frame detection performance drastically even to the level of a complete translation of the test set to English. For Turkish, code-switched training beats full translation of the test set into English in a few-shot setting; it results in a comparable performance for German and slightly worse predictions for Arabic. We attribute the overall low performance for Arabic to the relatively small ICR in the annotation process. 6 Analysis To visualize our multi-label model we use the visualization tool by Vig (2019) in Figure 1. In BERT, (a) Multi-class Model (b) Multi-label Model Figure 1: “Wells Fargo gives gun maker a new line of credit, unswayed by nuns’ opposition” has Economic Consequences as the first frame and Public Opinion as the second. every sequence is padded by a special classification token [CLS] from the beginning. Embedding generated for this token is used for classification into 9 classes. Figures 1a and 1b demonstrate the attentions of this token to other tokens in the sequence. Note that the given sample headline has indeed two frames i.e. “Economic Consequences" as the first and “Public Opinion" as the second. 
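As a rough illustration of how such [CLS]-to-token attention maps can be pulled out of the model (the paper itself relies on the BertViz tool of Vig (2019); this sketch uses the HuggingFace transformers library with a pretrained multilingual BERT rather than our fine-tuned classifier, so the exact weights will differ):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased", output_attentions=True)

headline = ("Wells Fargo gives gun maker a new line of credit, "
            "unswayed by nuns' opposition")
inputs = tokenizer(headline, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer
last_layer = outputs.attentions[-1][0]      # (heads, seq, seq) for the single headline
cls_to_tokens = last_layer.mean(dim=0)[0]   # average over heads, take the [CLS] row
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, w in sorted(zip(tokens, cls_to_tokens.tolist()), key=lambda x: -x[1])[:5]:
    print(f"{tok:15s} {w:.3f}")              # tokens the [CLS] representation attends to most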
In the multiclass setup, however, where the model is configured to produce a single label, it learns to disregard the second frame "Public Opinion" while strongly attending to the words "fargo" and "credit", which relate to the theme of "Economic Consequences". On the contrary, the multi-label model correctly attends to all words that are related to both frames, i.e., "fargo", "credit", "nuns", and "opposition", and predicts both "Economic Consequences" and "Public Opinion" correctly. Another interesting observation is related to bias induced by translation. In German, the phrase "schärferes Waffenrecht" means "stricter gun regulation". However, Google Translate translates half of the headlines that include this expression as "stricter/sharper gun rights", which makes the model predict "Gun Rights" rather than "Gun Control" as the frame. A discrepancy like this is highly misleading and jeopardizes learning, whether it occurs in the training or the validation set. In code-switched training, by contrast, one has better control over the translation, as one only translates a manageable number of words. We observe that code-switched training escapes this bias because the keywords "gun" and "laws" are correctly translated into German. Additionally, we find our models catching several annotation errors. For example, the Turkish headline "Obama'dan LGBTI bireylerin gittiği bir kulüpte 49 kişiyi öldüren Orlando saldırganı hakkında açıklama", which translates as "Obama gave a statement about the Orlando shooter who killed 49 in an LGBTI club.", is annotated as "Politics"; in contrast, the model predicts "Society/Culture" and "Politics", attending to "LGBTI" and "club". 6.1 Code-switching Analysis In determining the words to code-switch from English to a target language, we mainly considered the nPMI metric (Section 4.2), which essentially gives the most frequently used words for each frame. In the English dataset (GVFC), we first list the top 250 words for a given frame based on their nPMI scores and take the union of these across frames, which resulted in a total of 358 case-sensitive words to be dictionary-translated into the target language. In Table 4, we provide results obtained by using different code-switching methods that use no target-language annotations.

Code-switch Technique              Unique Switched Words  Total Switched Words  F1-Macro  F1-Micro  EM-1  EM-2  EM-A
Zero-Shot (Train EN, Test DE)                0                      0             0.48      0.66    0.47  0.31  0.39
Code-switch Omitted Words                  387                   2121             0.54      0.70    0.53  0.27  0.40
Code-switch nPMI Words                     358                   7522             0.53      0.72    0.64  0.39  0.52
Code-switch nPMI + Omitted Words           675                   8129             0.60      0.70    0.65  0.29  0.47

Table 4: Code-switch analysis for German.

Note that, since nPMI is a frequency-based metric, code-switching with nPMI yields a set of words that includes not only frame-indicative words but also many stop words and common words such as "a", "the", "he", or "are". An alternative method, which we call "omitted words", determines important words by omitting a word from the headline and re-applying the trained classifier to the headline with that word missing (similar to Zhong et al. (2019) and Ribeiro et al. (2016)). We then compute the drop in probability as an importance measure for word $x_j$: $\mathrm{Importance}(x_j) = p(y \mid x_1, \ldots, x_n) - p(y \mid x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n)$, where $y$ is the true label. The remaining procedure is similar to that for nPMI: we determine the set of important words per frame (45 of them this time) and combine them, which resulted in 387 words.
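A minimal sketch of the two word-importance computations just described is given below; it is illustrative only, and the tokenization, the `frame_prob` scoring callable, and the data structures are assumptions rather than the authors' implementation.

```python
import math
from collections import Counter

def npmi_top_words(headlines, labels, frame, top_k=250):
    """Score I(F, w) = log( P(w|F) / P(w) ), estimated from the training corpus,
    and keep the top_k words with a positive score."""
    all_counts, frame_counts = Counter(), Counter()
    n_all = n_frame = 0
    for words, frames in zip(headlines, labels):   # headlines: lists of tokens
        for w in words:
            all_counts[w] += 1
            n_all += 1
            if frame in frames:
                frame_counts[w] += 1
                n_frame += 1
    scores = {w: math.log((c / n_frame) / (all_counts[w] / n_all))
              for w, c in frame_counts.items()}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [w for w, s in ranked if s > 0][:top_k]

def omission_importance(words, frame_prob):
    """Importance(x_j): drop in the classifier's probability of the gold frame
    when word x_j is removed; frame_prob is a hypothetical scoring callable."""
    full = frame_prob(words)
    return [(w, full - frame_prob(words[:j] + words[j + 1:]))
            for j, w in enumerate(words)]
```

Taking the union of the per-frame lists (top 250 for nPMI, top 45 for the omission method) reproduces the 358- and 387-word sets discussed above.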
Note that this method results in a set of important words that are more disjointed across frames, which in turn makes the words more frame-specific. No common or stop words made it to the top 45 in any of the frames. Despite resulting in more sophisticated words, using omitted words to code-switch resulted in more deficient if not on par scores as compared to nPMI – our primary way of doing code-switching. We argue that the reason for nPMI performing better is the much higher number of total words that get translated to the target language. In Table 4, note that using dictionary translations for only 358 unique words results in a total of 7522 words that are in the target language, which is more than 3.5 times what omitted words method yields. The increased amount of words that end up in the target language helped the MultiBERT classifier distinguish frames in the target language better. Note that in the last line of Table 4, including translations for the omitted words results in inconsistent improvement due to negligible size in the increase of the total words that get translated. Our experiments show that for code-switching purposes, quantity might override quality which may suggest that for code-switching to be effective in multilingual transfer, translations of simpler words can outperform translations of the domainand task-specific words, making the resources required to leverage knowledge from the source language to target language even more parsimonious. 6.2 Framing Network Analysis The network visualization software Netdraw (Borgatti, 2002) was used to visualize the two frame networks depicted in Figure 6.2 based on the predictions generated on U.S. and German news articles from the year 2016 to 2018 by best performing models, i.e., uncased English BERT (Table 1) and code-switched model (Table 2) for English and German respectively. While each node represents a frame, each edge represents the number of times the two corresponding frames co-occurred in the news headline. The more central, the more connected the frame is with other frames. The node size was adjusted to reflect the relative frequency of news coverage of the given frame. That is, a frame with a larger node size more frequently occurs in the news coverage. Several notable patterns emerge by comparing the frame networks in the U.S. and Germany. It appears that the U.S. media highly politicized the 8622 (a) U.S. Frame Network (b) German Frame Network Figure 2: Comparison of frame association networks in the U.S. and German news. gun violence issue. The frame “politics” is not only the most salient but also the most central, closely connected with several other frames, reflecting, the sensationalism of the U.S. media landscape. The U.S. media tends to link all aspects of social reality to the political fight between the two parties, a pattern not followed in foreign media. Another important finding is that while the U.S. media broadly framed the gun violence issue from the perspective of mental health, German media rarely mentions this aspect. Rather than blaming individual shooters, the German press paid more attention to U.S. public opinion manifesting as gun violence protests and the U.S. gun regulations. In other words, compared to the U.S.’s news coverage, foreign media tended to attribute the responsibility to the U.S. government. In the German news coverage, the close association between the frame “society and culture” and “gun rights” is also noteworthy. 
Frequently linking the U.S.’s unique culture and people’s rights to purchase guns in the news presents the U.S. as a “bizarre” place, which may also lead to a negative perception of the country among Germans. In conclusion, the two frame networks illustrate how an issue can be framed differently in news media of different countries. Considering that the U.S. and Germany are close allies, it would be exciting to examine how countries with tense relations with the U.S. framed gun violence issues. A large-scale comparative framing study would allow a better understanding of the U.S. global image, which we propose as future work, and our multilingual and multi-label tool would make this type of analysis possible. In general, our approach is practical in looking at how media in different countries frame an international issue. 6.3 Future Work We want to acknowledge two additional properties of a given headline, which neither this nor the previous works in news framing consider (Card et al., 2015; Liu et al., 2019; Field et al., 2018). First is relevance, although rarely, not all headlines that include the specified keywords in Section 3 are actually about U.S. gun violence. Second, an article may be about one particular incident or event related to gun violence, i.e., episodic, or it may focus on the issue of gun violence as an ongoing problem, i.e., thematic. Moreover, some of the episodic articles may not be tendential enough to have a particular frame. Existing works on framing only includes headlines that are both relevant and have frames, whereas, in reality, 48% of headlines about U.S. gun violence in GVFC do not have a particular frame. Media outlets outside of the U.S. have various rates of tendential articles about gun violence in the U.S. For instance, among the foreign languages we examined, German articles have the highest rate, with 90% of articles having at least one frame. Among Turkish articles that are “relevant” only 10% have a frame. In our evaluations, we only considered headlines that are relevant and have at least one frame. While stressing that determining the frame of an article is the most nuanced task in news framing, addressing the challenges mentioned above is still meaningful and constitutes future work. 7 Conclusion In this work, we present a novel code-switch model for the task of automatic cross-lingual news frame detection and show that it matches the performance of full translation if not overrides. Moreover, we leverage an existing dataset by making use of multiple labels, create benchmark news framing test sets for three new languages, and employ a variant of Focal Loss to account for class imbalance in the data. In conclusion, while accounting for multiple frames per sample, we demonstrate how a crosslingual analysis of news framing is informative and insightful in developing a global view surrounding the gun violence problem in the U.S. Acknowledgment This work is supported in part by the U.S. NSF grant 1838193 and DARPA HR001118S0044 (the LwLL program). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government. 8623 References Claire Atkinson. 2019. Americans are crazy: Foreign journalists grapple with covering u.s. mass shootings. Stephen Borgatti. 2002. Netdraw: Graph visualization software. Harvard: Analytic Technologies. 
Alexander Buhmann and Diana Ingenhoff. 2015. The 4d model of the country image: An integrative approach from the perspective of communication management. International Communication Gazette, 77(1):102–124. Dallas Card, Amber Boydstun, Justin H Gross, Philip Resnik, and Noah A Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 438–444. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Ruth DeFoster and Natashia Swalve. 2018. Guns, culture or mental health? framing mass shootings as a public health crisis. Health communication, 33(10):1211–1222. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381. Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of communication, 43(4):51–58. Robert M Entman. 2008. Theorizing mediated public diplomacy: The us case. The International Journal of Press/Politics, 13(2):87–102. Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in russian news: a computational analysis of intricate political strategies. arXiv preprint arXiv:1808.09386. Guy J Golan and Itai Himelboim. 2016. Can world system theory predict news flow on twitter? the case of government-sponsored broadcasting. Information, Communication & Society, 19(8):1150–1170. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 344–354, New Orleans, Louisiana. Association for Computational Linguistics. Lei Guo and Maxwell McCombs. 2015. The power of information networks: New directions for agenda setting. Routledge, New York and London. Holli A. Semetko Claes H. De Vreese, Jochen Peter. 2001. Framing politics at the launch of the euro: A cross-national comparative study of frames in the news. Political communication, 18(2):107–122. Crimson Hexagon. 2018. ForSight social media analytics platform, Last accessed on November 1, 2018. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980– 2988. Siyi Liu, Lei Guo, Kate Mays, Margrit Betke, and Derry Tanti Wijaya. 2019. Detecting frames in news headlines and its application to analyzing news framing trends surrounding us gun violence. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 504–514. Andrew McCallum, Kamal Nigam, et al. 1998. A comparison of event models for naive bayes text classification. In AAAI-98 workshop on learning for text categorization, volume 752, pages 41–48. Citeseer. Joseph S Nye Jr. 2004. 
Soft power: The means to success in world politics. Public affairs. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Friederike Schultz, Jan Kleinnijenhuis, Dirk Oegema, Sonja Utz, and Wouter Van Atteveldt. 2012. Strategic framing in the bp crisis: A semantic network analysis of associative frames. Public Relations Review, 38(1):97–107. Wilson L Taylor. 1953. “cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. 8624 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714. Oriol Vinyals, Ł ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2773–2781. Curran Associates, Inc. Richard Wike, Bruce Stokes, Jacob Poushter, Laura Silver, Janell Fetterolf, and Kat Devlin. 2018. America’s international image continues to suffer. Pew Research Center, October, 1. Jiawei Wu, Wenhan Xiong, and William Yang Wang. 2019. Learning to learn and predict: A metalearning approach for multi-label classification. arXiv preprint arXiv:1909.04176. Ruiqi Zhong, Yanda Chen, Desmond Patton, Charlotte Selous, and Kathleen McKeown. 2019. Detecting and reducing bias in a high stakes domain. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4767–4777.
2020
763
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8625–8646 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8625 Predicting Performance for Natural Language Processing Tasks Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, Graham Neubig Language Technologies Institute, Carnegie Mellon University {mengzhox,aanastas,yiming,gneubig}@cs.cmu.edu [email protected] Abstract Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting. In this work, we attempt to explore the possibility of gaining plausible judgments of how well an NLP model can perform under an experimental setting, without actually training or testing the model. To do so, we build regression models to predict the evaluation score of an NLP experiment given the experimental settings as input. Experimenting on 9 different NLP tasks, we find that our predictors can produce meaningful predictions over unseen languages and different modeling architectures, outperforming reasonable baselines as well as human experts. Going further, we outline how our predictor can be used to find a small subset of representative experiments that should be run in order to obtain plausible predictions for all other experimental settings.1 1 Introduction Natural language processing (NLP) is an extraordinarily vast field, with a wide variety of models being applied to a multitude of tasks across a plenitude of domains and languages. In order to measure progress in all these scenarios, it is necessary to compare performance on test datasets representing each scenario. However, the cross-product of tasks, languages, and domains creates an explosion of potential application scenarios, and it is infeasible to collect high-quality test sets for each. In addition, even for tasks where we do have a wide variety of test data, e.g. for well-resourced tasks such as machine translation (MT), it is still 1Code, data and logs are publicly available at https: //github.com/xiamengzhou/NLPerf. computationally prohibitive as well as not environmentally friendly (Strubell et al., 2019) to build and test on systems for all languages or domains we are interested in. Because of this, the common practice is to test new methods on a small number of languages or domains, often semi-arbitrarily chosen based on previous work or the experimenters’ intuition. As a result, this practice impedes the NLP community from gaining a comprehensive understanding of newly-proposed models. Table 1 illustrates this fact with an example from bilingual lexicon induction, a task that aims to find word translation pairs from cross-lingual word embeddings. As vividly displayed in Table 1, almost all the works report evaluation results on a different subset of language pairs. Evaluating only on a small subset raises concerns about making inferences when comparing the merits of these methods: there is no guarantee that performance on English–Spanish (EN–ES, the only common evaluation dataset) is representative of the expected performance of the models over all other language pairs (Anastasopoulos and Neubig, 2020). Such phenomena lead us to consider if it is possible to make a decently accurate estimation for the performance over an untested language pair without actually running the NLP model to bypass the computation restriction. 
Toward that end, through drawing on the idea of characterizing an experiment from Lin et al. (2019), we propose a framework, which we call NLPERF, to provide an exploratory solution. We build regression models, to predict the performance on a particular experimental setting given past experimental records of the same task, with each record consisting of a characterization of its training dataset and a performance score of the corresponding metric. Concretely, in §2, we start with a partly populated table (such as the one from 8626 BLI Method Evaluation Set DE–EN EN–DE ES–EN EN–ES FR–EN EN–FR IT–EN EN–IT EN–PT EN–RU ES–DE PT–RU Zhang et al. (2017) ? ✓ ✓ ✓ ? ? ✓ ? ? ? ? ? Chen and Cardie (2018) ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ? ✓ ? Yang et al. (2019) ✓ ✓ ✓ ✓ ✓ ✓ ✓ ? ? ? ? ? Heyman et al. (2019) ? ✓ ? ✓ ? ✓ ? ✓ ? ? ? ? Huang et al. (2019) ? ? ✓ ✓ ✓ ✓ ? ? ? ? ? ? Artetxe et al. (2019) ✓ ✓ ✓ ✓ ✓ ✓ ? ? ? ✓ ? ? Table 1: An illustration of the comparability issues across methods and multiple evaluation datasets from the Bilingual Lexicon Induction task. Our prediction model can reasonably fill in the blanks, as illustrated in Section 4. Table 1) and attempt to infer the missing values with the predictor. We begin by introducing the process of characterizing an NLP experiment for each task in §3. We evaluate the effectiveness and robustness of NLPERF by comparing to multiple baselines, human experts, and by perturbing a single feature to simulate a grid search over that feature (§4). Evaluations on multiple tasks show that NLPERF is able to outperform all baselines. Notably, on a machine translation (MT) task, the predictions made by the predictor turn out to be more accurate than human experts. An effective predictor can be very useful for multiple applications associated with practical scenarios. In §5, we show how it is possible to adopt the predictor as a scoring function to find a small subset of experiments that are most representative of a bigger set of experiments. We argue that this will allow researchers to make informed decisions on what datasets to use for training and evaluation, in the case where they cannot experiment on all experimental settings. Last, in §6, we show that we can adequately predict the performance of new models even with a minimal number of experimental records. 2 Problem Formulation In this section we formalize the problem of predicting performance on supervised NLP tasks. Given an NLP model of architecture M trained over dataset(s) D of a specific task involving language(s) L with a training procedure (optimization algorithms, learning rate scheduling etc.) P, we can test the model on a test dataset D′ and get a score S of a specific evaluation metric. The resulting score will surely vary depending on all the above mentioned factors, and we denote this relation as g: SM,P,L,D,D′ = g(M, P, L, D, D′). (1) In the ideal scenario, for each test dataset D′ of a specific task, one could enumerate all different settings and find the one that leads to the best performance. As mentioned in Section §1, however, such a brute-force method is computationally infeasible. Thus, we turn to modeling the process and formulating our problem as a regression task by using a parametric function fθ to approximate the true function g as follows: ˆSM,P,L,D,D′ = fθ([ΦM; ΦP; ΦL; ΦD; ΦD′]) where Φ∗denotes a set of features for each influencing factor. 
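As a rough sketch of what such a parametric predictor can look like in practice, restricted to the language and dataset features the paper focuses on and using the gradient boosted trees and hyperparameters described later in Section 3; the dictionary keys only loosely mirror the NLPERF features, and `past_records` and `new_setting` are hypothetical inputs.

```python
import numpy as np
from xgboost import XGBRegressor  # gradient boosted trees, as used for NLPERF in Section 3

def featurize(record):
    """Concatenate the feature sets of one experimental record into a single vector."""
    phi_l = [record["geographic_dist"], record["genetic_dist"], record["syntactic_dist"]]
    phi_d = [record["train_size"], record["word_vocab_size"], record["ttr"], record["word_overlap"]]
    return np.array(phi_l + phi_d, dtype=np.float32)

# f_theta: fit on past experimental records, then score an untested setting
X = np.stack([featurize(r) for r in past_records])   # past_records: list of dicts with a "score" entry
y = np.array([r["score"] for r in past_records])     # e.g. BLEU or accuracy
f_theta = XGBRegressor(n_estimators=100, max_depth=10, learning_rate=0.1)
f_theta.fit(X, y)
estimated_score = f_theta.predict(featurize(new_setting)[None, :])
```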
For the purpose of this study, we mainly focus on dataset and language features ΦL and ΦD, as this already results in a significant search space, and gathering extensive experimental results with fine-grained tuning over model and training hyperparameters is both expensive and relatively complicated. In the cases where we handle multiple models, we only use a single categorical model feature to denote the combination of model architecture and training procedure, denoted as ΦC. We still use the term model to refer to this combination in the rest of the paper. We also omit the test set features, under the assumption that the data distributions for training and testing data are the same (a fairly reasonable assumption if we ignore possible domain shift). Therefore, for all experiments below, our final prediction function is the following: ˆSC,L,D = fθ([ΦC; ΦL; ΦD]) In the next section we describe concrete instantiations of this function for several NLP tasks. 3 NLP Task Instantiations To build a predictor for NLP task performance, we must 1) select a task, 2) describe its featurization, and 3) train a predictor. We describe details of these three steps in this section. 8627 Task Dataset Citation Source Target Transfer # Models # EXs Task Langs Langs Langs Metric Wiki-MT Schwenk et al. (2019) 39 39 – single 995 BLEU TED-MT Qi et al. (2018) 54 1 – single 54 BLEU TSF-MT Qi et al. (2018) 54 1 54 single 2862 BLEU TSF-PARSING Nivre et al. (2018) – 30 30 single 870 Accuracy TSF-POS Nivre et al. (2018) – 26 60 single 1531 Accuracy TSF-EL Rijhwani et al. (2019) – 9 54 single 477 Accuracy BLI Lample et al. (2018) 44 44 – 3 88×3 Accuracy MA McCarthy et al. (2019) – 66 – 6 107×6 F1 UD Zeman et al. (2018a) – 53 – 25 72×25 F1 Table 2: Statistics of the datasets we use for training predictors. # EXs denote the total number of experiment instances; Task Metric reflects how the models are evaluated. Tasks We test on tasks including bilingual lexicon induction (BLI); machine translation trained on aligned Wikipedia data (Wiki-MT), on TED talks (TED-MT), and with cross-lingual transfer for translation into English (TSF-MT); crosslingual dependency parsing (TSF-Parsing); crosslingual POS tagging (TSF-POS); cross-lingual entity linking (TSF-EL); morphological analysis (MA) and universal dependency parsing (UD). Basic statistics on the datasets are outlined in Table 2. For Wiki-MT tasks, we collect experimental records directly from the paper describing the corresponding datasets (Schwenk et al., 2019). For TED-MT and all the transfer tasks, we use the results of Lin et al. (2019). For BLI, we conduct experiments using published results from three papers, namely Artetxe et al. (2016), Artetxe et al. (2017) and Xu et al. (2018). For MA, we use the results of the SIGMORPHON 2019 shared task 2 (McCarthy et al., 2019). Last, the UD results are taken from the CoNLL 2018 Shared Task on universal dependency parsing (Zeman et al., 2018b). Featurization For language features, we utilize six distance features from the URIEL Typological Database (Littell et al., 2017), namely geographic, genetic, inventory, syntactic, phonological, and featural distance. The complete set of dataset features includes the following: 1. Dataset Size: The number of data entries used for training. 2. Word/Subword Vocabulary Size: The number of word/subword types. 3. Average Sentence Length: The average length of sentences from all experimental. 4. Word/Subword Overlap: |T1 ∩T2| |T1| + |T2| where T1 and T2 denote vocabularies of any two corpora. 
5. Type-Token Ratio (TTR): The ratio between the number of types and number of tokens (Richards, 1987) of one corpus. 6. Type-Token Ratio Distance:  1 −TTR1 TTR2 2 where TTR1 and TTR2 denote TTR of any two corpora. 7. Single Tag Type: Number of single tag types. 8. Fused Tag Type: Number of fused tag types. 9. Average Tag Length Per Word: Average number of single tags for each word. 10. Dependency Arcs Matching WALS Features: the proportion of dependency parsing arcs matching the following WALS features, computed over the training set: subject/object/oblique before/after verb and adjective/numeral before/after noun. For transfer tasks, we use the same set of dataset features ΦD as Lin et al. (2019), including features 1–6 on the source and the transfer language side. We also include language distance features between source and transfer language, as well as between source and target language. For MT tasks, we use features 1–6 and language distance features, but only between the source and target language. For MA, we use features 1, 2, 5 and morphological tag related features 7–9. For UD, we 8628 use features 1, 2, 5, and 10. For BLI, we use language distance features and URIEL syntactic features for the source and the target language. Predictor Our prediction model is based on gradient boosting trees (Friedman, 2001), implemented with XGBoost (Chen and Guestrin, 2016). This method is widely known as an effective means for solving problems including ranking, classification and regression. We also experimented with Gaussian processes (Williams and Rasmussen, 1996), but settled on gradient boosted trees because performance was similar and Xgboost’s implementation is very efficient through the use of parallelism. We use squared error as the objective function for the regression and adopted a fixed learning rate 0.1. To allow the model to fully fit the data we set the maximum tree depth to be 10 and the number of trees to be 100, and use the default regularization terms to prevent the model from overfitting. 4 Can We Predict NLP Performance? In this section we investigate the effectiveness of NLPERF across different tasks on various metrics. Following Lin et al. (2019), we conduct kfold cross validation for evaluation. To be specific, we randomly partition the experimental records of ⟨L, D, C, S⟩tuples into k folds, and use k−1 folds to train a prediction model and evaluate on the remaining fold. Note that this scenario is similar to “filling in the blanks” in Table 1, where we have some experimental records that we can train the model on, and predict the remaining ones. For evaluation, we calculate the average root mean square error (RMSE) between the predicted scores and the true scores. Baselines We compare against a simple mean value baseline, as well as against language-wise mean value and model-wise mean value baselines. The simple mean value baseline outputs an average of scores s from the training folds for all test entries in the left-out fold (i) as follows: ˆs(i) mean = 1 |S \ S(i)| X s∈S\S(i) s; i ∈1 . . . k (2) Note that for tasks involving multiple models, we calculate the RMSE score separately on each model and use the mean RMSE of all models as the final RMSE score. The language-wise baselines make more informed predictions, taking into account only training instances with the same transfer, source, or target language (depending on the task setting). 
For example, the source-language mean value baseline ˆs(i,j) s-lang for jth test instance in fold i outputs an average of the scores s of the training instances that share the same source language features s-lang, as shown in Equation 3: ˆs(i,j) s-lang = P s,φ δ(φL,src = s-lang) · s P s,φ δ(φL,src = s-lang) ∀(s, φ) ∈(|S \ S(i)|, |Φ \ Φ(i)|) (3) where δ is the indicator function. Similarly, we define the target- and the transfer-language mean value baselines. In a similar manner, we also compare against a model-wise mean value baseline for tasks that include experimental records from multiple models. Now, the prediction for the jth test instance in the left-out fold i is an average of the scores on the same dataset (as characterized by the language φL and dataset φD features) from all other models: ˆs(i,j) model = P s,φ δ(φL = lang, φD = data) · s P s,φ δ(φL = lang, φD = data) ∀(s, φ) ∈(|S \ S(i)|, |Φ \ Φ(i)|) (4) where lang = Φ(i,j) L and data = Φ(i,j) D respectively denote the language and dataset features of the test instance. Main Results For multi-model tasks, we can do either Single Model prediction (SM), restricting training and testing of the predictor within a single model, or Multi-Model (MM) prediction using a categorical model feature. The RMSE scores of NLPERF along with the baselines are shown in Table 3. For all tasks, our single model predictor is able to more accurately estimate the evaluation score of unseen experiments compared to the single model baselines, confirming our hypothesis that the there exists a correlation that can be captured between experimental settings and the downstream performance of NLP systems. The language-wise baselines are much stronger than the simple mean value baseline but still perform worse than our single model predictor. Similarly, the model-wise baseline significantly outperforms the mean value baseline because results from other models reveal much information about the dataset. 8629 Task Model Wiki-MT TED-MT TSF-MT TSF-PARSING TSF-POS TSF-EL BLI MA UD Mean 6.40 12.65 10.77 17.58 29.10 18.65 20.10 9.47 17.69 Transfer Lang-wise – – 10.96 15.68 29.98 20.55 – – – Source Lang-wise 5.69 12.65 2.24 – – – 20.13 – – Target Lang-wise 5.12 12.65 10.78 12.05 8.92 8.61 20.00 9.47 – NLPERF (SM) 2.50 6.18 1.43 6.24 7.37 7.82 12.63 6.48 12.06 Model-wise – – – – – – 8.77 5.22 4.96 NLPERF (MM) – – – – – – 6.87 3.18 3.54 Table 3: RMSE scores of three baselines and our predictions under the single model and multi model setting (missing values correspond to settings not applicable to the task). All results are from k-fold (k = 5) evaluations averaged over 10 random runs. Even so, our multi-model predictor still outperforms the model-wise baseline. The results nicely imply that for a wide range of tasks, our predictor is able to reasonably estimate left-out slots in a partly populated table given results of other experiment records, without actually running the system. We should note that RMSE scores across different tasks should not be directly compared, mainly because the scale of each evaluation metric is different. For example, a BLEU score (Papineni et al., 2002) for MT experiments typically ranges from 1 to 40, while an accuracy score usually has a much larger range, for example, BLI accuracy ranges from 0.333 to 78.2 and TSF-POS accuracy ranges from 1.84 to 87.98, which consequently makes the RMSE scores of these tasks higher. 
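For concreteness, the k-fold protocol and the mean-value baseline of Equation 2 can be sketched as follows; this is a simplified illustration rather than the released NLPERF code, and the language-wise and model-wise baselines would be computed analogously by averaging only over training records that share the relevant language or dataset features.

```python
import numpy as np
from sklearn.model_selection import KFold
from xgboost import XGBRegressor

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def kfold_compare(X, y, k=5, seed=0):
    """Average RMSE of the learned predictor vs. the simple mean-value baseline."""
    pred_scores, mean_scores = [], []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
        predictor = XGBRegressor(n_estimators=100, max_depth=10, learning_rate=0.1)
        predictor.fit(X[train_idx], y[train_idx])
        pred_scores.append(rmse(y[test_idx], predictor.predict(X[test_idx])))
        # Eq. 2: predict the average training-fold score for every test entry
        mean_scores.append(rmse(y[test_idx], np.full(len(test_idx), y[train_idx].mean())))
    return np.mean(pred_scores), np.mean(mean_scores)
```

Averaging over several random fold assignments, as done for the numbers in Table 3, simply wraps this function in an outer loop over seeds.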
Comparison to Expert Human Performance We constructed a small-scale case study to evaluate whether NLPERF is competitive with the performance of NLP sub-field experts. We focused on the TED-MT task and recruited 10 MT practitioners,2 all of whom had published at least 3 MT-related papers in ACL-related conferences. In the first set of questions, the participants were presented with language pairs from one of the k data folds along with the dataset features and were asked to estimate an eventual BLEU score for each data entry. In the second part of the questionnaire, the participants were tasked with making estimations on the same set of language pairs, but this time they also had access to features and BLEU scores from all the other folds.3 2 None of the study participants were affiliated with the authors' institutions, nor were they familiar with this paper's content. 3 The interested reader can find an example questionnaire (and make estimations over one of the folds) in the Appendix.

Predictor                   RMSE
Mean Baseline               12.64
Human (w/o training data)    9.38
Human (w/ training data)     7.29
NLPERF                       6.04

Table 4: Our model performs better than human MT experts on the TED-MT prediction task.

The partition of the folds is consistent between the human study and the training/evaluation for the predictor. While the first sheet is intended to familiarize the participants with the task, the second sheet closely mirrors the training/evaluation setting of our predictor. As shown in Table 4, our participants outperform the mean baseline even without information from other folds, demonstrating their own strong prior knowledge of the field. In addition, the participants make more accurate guesses after acquiring more information on experimental records in other folds. In neither case, though, are the human experts competitive with our predictor. In fact, only one of the participants achieved performance comparable to our predictor. Feature Perturbation Another question of interest concerning predicting performance is "how will the model perform when trained on data of a different size" (Kolachina et al., 2012a). To test NLPERF's extrapolation ability in this regard, we conduct an array of experiments with various data sizes on the Wiki-MT task, picking two language pairs, Turkish to English (TR–EN) and Portuguese to English (PT–EN), as our testbed. We sample parallel datasets of different sizes and train MT models on each sampled dataset to obtain the true BLEU scores. On the other hand, we collect the features of all sampled datasets and use our predictor (trained over all other language pairs) to obtain predictions. The plots of true BLEU scores and predicted BLEU scores are shown in Figure 1.

Figure 1: Our model's predicted BLEU scores and true BLEU scores, on sampled TR–EN datasets (sizes 10k/50k/100k/200k/478k) and PT–EN datasets (sizes 100k/500k/1000k/2000k/2462k), achieving RMSE scores of 1.83 and 9.97 respectively.

Our predictor achieves a very low average RMSE of 1.83 for the TR–EN pair but a relatively higher RMSE of 9.97 for the PT–EN pair. The favorable performance on TR–EN demonstrates that our predictor can extrapolate over the dataset-size feature. In contrast, the predictions on the PT–EN pair are significantly less accurate.
This is due to the fact that there are only two other experimental settings scoring as high as 34 BLEU score, with data sizes of 3378k (en-es) and 611k (gl-es), leading to the predictor’s inadequacy in predicting high BLEU scores for low-resourced data sets during extrapolation. This reveals the fact that while the predictor is able to extrapolate performance on settings similar to what it has seen in the data, NLPERF may be less successful under circumstances unlike its training inputs. 5 What Datasets Should We Test On? As shown in Table 1, it is common practice to test models on a subset of all available datasets. The reason for this is practical – it is computationally prohibitive to evaluate on all settings. However, if we pick test sets that are not representative of the data as a whole, we may mistakenly reach unfounded conclusions about how well models perform on other data with distinct properties. For example, models trained on a small-sized dataset may not scale well to a large-sized one, or models that perform well on languages with a particular linguistic characteristic may not do well on languages with other characteristics (Bender and Friedman, 2018). Here we ask the following question: if we are only practically able to test on a small number of experimental settings, which ones should we test on to achieve maximally representative results? Answering the question could have practical implications: organizers of large shared tasks like SIGMORPHON (McCarthy et al., 2019) or UD (Zeman et al., 2018a) could create a minimal subset of settings upon which they would ask participants to test to get representative results; similarly, participants could possibly expedite the iteration of model development by testing on the representative subset only. A similar avenue for researchers and companies deploying systems over multiple languages could lead to not only financial savings, but potentially a significant cut-down of emissions from model training (Strubell et al., 2019). We present an approximate explorative solution to the problem mentioned above. Formally, assume that we have a set N, comprising experimental records (both features and scores) of n datasets for one task. We set a number m (< n) of datasets that we would like to select as the representative subset. By defining RMSEA(B) to be the RMSE score derived from evaluating on one subset B the predictor trained on another subset of experimental records A, we consider the most representative subset D to be the one that minimizes the RMSE score when predicting all of the other datasets: arg min D⊂N RMSED(N \ D). (5) Naturally, enumerating all n m  possible subsets would be prohibitively costly, even though it would lead to the optimal solution. Instead, we employ a beam-search-like approach to efficiently search for an approximate solution to the best performing subset of arbitrary size. Concretely, we start our approximate search with an exhaustive enumeration of all subsets of size 2. At each following step t, we only consider the best k subsets {D(i) t ; i ∈1, . . . , k} into account and discard the rest. 
As shown in Equation (6), for each candidate subset, we expand it with one more data point:

$\{D_t^{(i)} \cup \{s\};\ \forall i \in 1 \ldots k,\ s \in N \setminus D_t^{(i)}\}.$ (6)

For tasks that involve multiple models, we take experimental records of the selected dataset from all models into account during expansion. Given all expanded subsets, we train a predictor for each to evaluate on the rest of the datasets, and keep the best-performing k subsets $\{D_{t+1}^{(i)};\ i \in 1, \ldots, k\}$ with minimum RMSE scores for the next step. Furthermore, note that by simply changing the arg min to an arg max in Equation (5), we can also find the least representative datasets.

We present search results for four tasks (results on the other tasks can be found in Appendix B) as beam search progresses in Figure 2, with the corresponding RMSE scores over all remaining datasets on the y-axis. For comparison, we also conduct random searches by expanding the subset with a randomly selected experimental record.

Figure 2: Beam search results (beam size=100) for up to the 5 most (and least) representative datasets for 4 NLP tasks. We also show random search results averaged over 100 random runs.

In all cases, the most representative sets are an aggregation of datasets with diverse characteristics such as languages and dataset sizes. For example, in the Wiki-MT task, the 5 most representative datasets include languages that fall into a diverse range of language families such as Romance, Turkic, Slavic, etc., while the least representative ones include duplicate pairs (opposite directions) mostly involving English. The phenomenon is more pronounced in the TED-MT task, where not only the 5 most representative source languages are diverse, but also the dataset sizes. Specifically, Malay–English (msa-eng) is a tiny dataset (5k parallel sentences), while Hebrew–English (heb-eng) is a high-resource case (212k parallel sentences). Notably, for the BLI task, to test how representative the commonly used datasets are, we select the 5 most frequent language pairs shown in Table 1, namely en-de, es-en, en-es, fr-en, en-fr, for evaluation. Unsurprisingly, we get an RMSE score as high as 43.44, quite close to the performance of the least representative set found using beam search. This finding indicates that the standard practice of choosing datasets for evaluation is likely unrepresentative of results over the full dataset spectrum, well aligned with the claims in Anastasopoulos and Neubig (2020).

A particularly encouraging observation is that the predictor trained with only the 5 most representative datasets can achieve an RMSE score comparable to k-fold validation, which required using all of the datasets (to be accurate, k−1 folds of all datasets) for training. This indicates that one would only need to train NLP models on a small set of representative datasets to obtain reasonably plausible predictions for the rest.
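The representative-subset search of Equations (5) and (6) can be outlined as follows. This is a schematic sketch rather than the authors' implementation: `train_predictor` and `evaluate_rmse` are assumed helper functions standing in for NLPERF's regressor training and RMSE evaluation, and the beam width and record format are arbitrary placeholders.

```python
# Schematic sketch of the beam search over representative subsets (Eqs. 5-6).
# `train_predictor(records)` and `evaluate_rmse(predictor, records)` are assumed
# callables, not the authors' actual code. Scores are recomputed eagerly, which
# is wasteful but keeps the sketch short.
from itertools import combinations

def subset_score(candidate, all_records, train_predictor, evaluate_rmse):
    """RMSE_D(N \\ D): train on the candidate subset, evaluate on the rest."""
    rest = [r for r in all_records if r not in candidate]
    predictor = train_predictor(list(candidate))
    return evaluate_rmse(predictor, rest)

def beam_search_subsets(all_records, max_size, beam_width,
                        train_predictor, evaluate_rmse, most=True):
    sign = 1 if most else -1  # arg max instead of arg min for least representative
    score = lambda d: sign * subset_score(d, all_records, train_predictor, evaluate_rmse)

    # Step 1: exhaustively enumerate all subsets of size 2.
    beam = sorted((frozenset(c) for c in combinations(all_records, 2)), key=score)[:beam_width]
    best_per_size = {2: beam[0]}

    # Later steps: expand each kept subset with one more record (Eq. 6).
    for size in range(3, max_size + 1):
        candidates = {d | {s} for d in beam for s in all_records if s not in d}
        if not candidates:
            break
        beam = sorted(candidates, key=score)[:beam_width]
        best_per_size[size] = beam[0]
    return best_per_size
```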
6 Can We Extrapolate Performance for New Models?

In another common scenario, researchers propose new models for an existing task. It is both time-consuming and computationally intensive to run experiments with all settings for a new model. In this section, we explore whether we can use past experimental records from other models, together with a minimal set of experiments from the new model, to give a plausible prediction over the rest of the datasets, potentially reducing the time and resources needed for experimenting with the new model to a large extent.

We use the task of UD parsing as our testbed, as it is the task with the most unique models (25 to be exact); results for the MA and BLI tasks are in Appendix C. Note that we still only use a single categorical feature for the model type. To investigate how many experiments are needed to obtain a plausible prediction for a new model, we first split the experimental records equally into a sample set and a test set. Then we randomly sample n (0 ≤ n ≤ 5) experimental records from the sample set and add them into the collection of experimental records of past models. Each time, we re-train a predictor and evaluate on the test set. The random split repeats 50 times and the random sampling repeats 50 times, adding up to a total of 2,500 experiments. We use the mean value of the results from other models, shown in Equation (7), as the prediction baseline for the left-out model; because the experimental results of other models reveal significant information about the dataset, this serves as a relatively strong baseline:

$\hat{s}_k = \frac{1}{n-1} \sum_{i=1}^{n} \mathbb{1}(i \in M \setminus \{k\}) \cdot s_i.$ (7)

M denotes a collection of models and k denotes the left-out model. We show the prediction performance (in RMSE) over 8 systems (the 4 best and 4 worst systems from the shared task) in Figure 3.

Figure 3: RMSE scores of the UD task from the dataset-wise mean value predictor (the dashed black line in each graph) and predictors trained with experimental records of other models and 0–5 records from a new model.

Interestingly, the predictor trained with no model records (0) outperforms the mean-value baseline for the 4 best systems, while it is the opposite case on the 4 worst systems. Since there is no information provided about the new model, the predictions are solely based on dataset and language features. One reason might explain the phenomenon: the correlation between the features and the scores of the worse-performing systems is different from that of the better-performing systems, so the predictor is unable to generalize well (ONLP).

In the following discussion, we use RMSE@n to denote the RMSE from the predictor trained with n data points of a new model. The relatively low RMSE@0 scores indicate that other models' features and scores are informative for predicting the performance of the new model even without new model information. Comparing RMSE@0 and RMSE@1, we observe a consistent improvement for almost all systems, indicating that NLPERF trained on even a single extra random example achieves more accurate estimates over the test sets. Adding more data points consistently leads to additional gains. However, predictions on worse-performing systems benefit more from this than those on better-performing systems, indicating that their feature-performance correlation might be considerably different. The findings here indicate that by extrapolating from past experiments, one can make plausible judgments for newly developed models.
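A rough sketch of this leave-one-model-out protocol is given below. The regressor, the per-dataset feature vectors, and the integer encoding of the categorical model feature are assumptions for illustration, not the exact NLPERF configuration.

```python
# Sketch of the RMSE@n protocol for a new model, with the Eq. (7) baseline.
# `records` maps model name -> {dataset_id: score}; `features` maps
# dataset_id -> list of numeric features. Both formats are assumptions.
import random
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mean_value_baseline(records, left_out, dataset_id):
    """Eq. (7): average the other models' scores on the same dataset."""
    others = [scores[dataset_id] for model, scores in records.items() if model != left_out]
    return float(np.mean(others))

def rmse_at_n(records, features, left_out, n, seed=0):
    rng = random.Random(seed)
    model_ids = {m: i for i, m in enumerate(sorted(records))}  # crude categorical feature
    datasets = sorted(records[left_out])
    rng.shuffle(datasets)
    sample, test = datasets[: len(datasets) // 2], datasets[len(datasets) // 2:]
    seen = set(rng.sample(sample, n))  # the n revealed records of the new model

    X, y = [], []
    for model, scores in records.items():
        for d, s in scores.items():
            if model == left_out and d not in seen:
                continue  # hide the new model's remaining results
            X.append(list(features[d]) + [model_ids[model]])
            y.append(s)
    predictor = GradientBoostingRegressor().fit(np.array(X), np.array(y))

    test_X = np.array([list(features[d]) + [model_ids[left_out]] for d in test])
    truth = np.array([records[left_out][d] for d in test])
    preds = predictor.predict(test_X)
    return float(np.sqrt(np.mean((preds - truth) ** 2)))
```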
7 Related Work

As discussed in Domhan et al. (2015), there are two main threads of work focusing on predicting the performance of machine learning algorithms. The first thread is to predict the performance of a method as a function of its training time, while the second thread is to predict a method's performance as a function of the training dataset size. Our work belongs to the second thread, but could easily be extended to encompass training time/procedure.

In the first thread, Kolachina et al. (2012b) attempt to infer learning curves based on training data features and extrapolate the initial learning curves based on BLEU measurements for statistical machine translation (SMT). By extrapolating the performance of initial learning curves, the predictions on the remainder allow for early termination of a bad run (Domhan et al., 2015).

In the second thread, Birch et al. (2008) adopt linear regression to capture the relationship between data features and SMT performance and find that the amount of reordering, the morphological complexity of the target language, and the relatedness of the two languages explain the majority of performance variability. More recently, Elsahar and Gallé (2019) use domain-shift metrics such as H-divergence-based metrics to predict the drop in performance under domain shift. Rosenfeld et al. (2020) explore the functional form of the dependency of the generalization error of neural models on model and data size. We view our work as a generalization of such approaches, appropriate for application to any NLP task.

8 Conclusion and Future Work

In this work, we investigate whether the experimental setting itself is informative for predicting the evaluation scores of NLP tasks. Our findings promisingly show that, given a sufficient number of past training experimental records, our predictor can 1) outperform human experts; 2) make plausible predictions even for newly introduced models and languages; 3) extrapolate well on features like dataset size; and 4) provide a guide on how we should choose representative datasets for fast iteration. While this discovery is a promising start, there are still several avenues for improvement in future work.

First, the dataset and language settings covered in our study are still limited. The experimental records we use come from relatively homogeneous settings; e.g., all datasets in the Wiki-MT task are processed with SentencePiece to have 5,000 subwords, indicating that our predictor may fail for other subword settings. Our model also failed to generalize to cases where feature values are out of the range of the training experimental records. We attempted to apply the Wiki-MT predictor to a low-resource MT dataset, translating from Mapudungun (arn) to Spanish (spa) with the dataset from Duan et al. (2019), but ended up with a poor RMSE score.
It turned out that the average sentence length of the arn–spa data set is much lower than that of the training data sets and our predictors fail to generalize to this different setting. Second, using a categorical feature to denote model types constrains its expressive power for modeling performance. In reality, a slight change in model hyperparameters (Hoos and LeytonBrown, 2014; Probst et al., 2019), optimization algorithms (Kingma and Ba, 2014), or even random seeds (Madhyastha and Jain, 2019) may give rise to a significant variation in performance, which our predictor is not able to capture. While investigating the systematic implications of model structures or hyperparameters is practically infeasible in this study, we may use additional information such as textual model descriptions for modeling NLP models and training procedures more elaborately in the future. Lastly, we assume that the distribution of training and testing data is the same, which does not consider domain shift. On top of this, there might also be a domain shift between data sets of training and testing experimental records. We believe that modeling domain shift is a promising future direction to improve performance prediction. Acknowledgement The authors sincerely thank all the reviewers for their insightful comments and suggestions, Philipp Koehn, Kevin Duh, Matt Post, Shuoyang Ding, Xuan Zhang, Adi Renduchintala, Paul McNamee, Toan Nguyen and Kenton Murray for conducting human evaluation for the TED-MT task, Daniel Beck for discussions on Gaussian Processes, Shruti Rijhwani, Xinyi Wang, Paul Michel for discussions on this paper. This work is generously supported from the National Science Foundation under grant 1761548. 8634 References Antonios Anastasopoulos and Graham Neubig. 2020. Should all cross-lingual embeddings speak english? In Proc. ACL. To appear. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5002–5007, Florence, Italy. Association for Computational Linguistics. Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604. Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting success in machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 745– 754. Association for Computational Linguistics. Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pages 785– 794. ACM. Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261–270, Brussels, Belgium. Association for Computational Linguistics. Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter. 2015. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Mingjun Duan, Carlos Fasola, Sai Krishna Rallabandi, Rodolfo M. Vega, Antonios Anastasopoulos, Lori Levin, and Alan W Black. 2019. A resource for computational experiments on mapudungun. In Proc. LREC. To appear. Hady Elsahar and Matthias Gallé. 2019. To annotate or not? predicting performance drop under domain shift. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2163–2173. Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232. Geert Heyman, Bregt Verreet, Ivan Vuli´c, and MarieFrancine Moens. 2019. Learning unsupervised multilingual word embeddings with incremental multilingual hubs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1890–1902. Holger Hoos and Kevin Leyton-Brown. 2014. An efficient approach for assessing hyperparameter importance. In International conference on machine learning, pages 754–762. Jiaji Huang, Qiang Qiu, and Kenneth Church. 2019. Hubless nearest neighbor search for bilingual lexicon induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4072–4080, Florence, Italy. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Prasanth Kolachina, Nicola Cancedda, Marc Dymetman, and Sriram Venkatapathy. 2012a. Prediction of learning curves in machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 22–30, Jeju Island, Korea. Association for Computational Linguistics. Prasanth Kolachina, Nicola Cancedda, Marc Dymetman, and Sriram Venkatapathy. 2012b. Prediction of learning curves in machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 22–30. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In International Conference on Learning Representations. Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for crosslingual learning. In Proceedings of the 57th Annual 8635 Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics. Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8–14. Pranava Madhyastha and Rishabh Jain. 2019. On model stability as a function of random seed. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 929– 939, Hong Kong, China. Association for Computational Linguistics. Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sebastian J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229– 244, Florence, Italy. Association for Computational Linguistics. Joakim Nivre, Mitchell Abrams, Željko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Sandra Bellato, Kepa Bengoetxea, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Gül¸sen Cebiro˘glu Eryi˘git, Giuseppe G. A. Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinková, Aurélie Collomb, Ça˘grı Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaž Erjavec, Aline Etienne, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh Hà M˜y, Na-Rae Han, Kim Harris, Dag Haug, Barbora Hladká, Jaroslava Hlaváˇcová, Florinel Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, Tomáš Jelínek, Anders Johannsen, Fredrik Jørgensen, Hüner Ka¸sıkara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê H`ông, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Nikola Ljubeši´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Niko Miekka, Anna Missilä, C˘at˘alin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-B¯erzkalne, Lương 
Nguy˜ên Thi., Huy`ên Nguy˜ên Thi. Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Adédayơ. Olúòkun, Mai Omura, Petya Osenova, Robert Östling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela Rääbis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Ros,ca, Olga Rudina, Shoval Sadde, Shadi Saleh, Tanja Samardži´c, Stephanie Samson, Manuela Sanguinetti, Baiba Saul¯ıte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Urešová, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Veronika Vincze, Lars Wallin, Jonathan North Washington, Seyi Williams, Mats Wirén, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdenˇek Žabokrtský, Amir Zeldes, Daniel Zeman, Manying Zhang, and Hanzhi Zhu. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics 8636 (ÚFAL), Faculty of Mathematics and Physics, Charles University. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. 2019. Tunability: Importance of hyperparameters of machine learning algorithms. Journal of Machine Learning Research, 20(53):1–32. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), New Orleans, USA. Brian Richards. 1987. Type/token ratios: What do they really tell us? Journal of child language, 14(2):201– 209. Shruti Rijhwani, Jiateng Xie, Graham Neubig, and Jaime Carbonell. 2019. Zero-shot neural transfer for cross-lingual entity linking. In Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), Honolulu, Hawaii. Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. 2020. A constructive prediction of the generalization error across scales. In International Conference on Learning Representations. Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. arXiv preprint arXiv:1907.05791. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics. Christopher KI Williams and Carl Edward Rasmussen. 1996. Gaussian processes for regression. In Advances in neural information processing systems, pages 514–520. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465–2474. Pengcheng Yang, Fuli Luo, Peng Chen, Tianyu Liu, and Xu Sun. 2019. Maam: A morphology-aware alignment model for unsupervised bilingual lexicon induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3190–3196. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018a. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21, Brussels, Belgium. Association for Computational Linguistics. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018b. Conll 2018 shared task: multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934– 1945. 8637 Appendix A Questionnaire An example of the first questionnaire from our user case study is shown below. The second sheet also included the results in 44 more language pairs. We provide an answer key after the second sheet. Please provide your prediction of the BLEU score based on the language pair and dataset features (the domain of the training and test sets is TED talks). After you finish, please go to sheet v2. idx Source Target Parallel Source Source Target Target BLEU Language Language Sentences vocab subword vocab subword (k) size (k) vocab size (k) vocab size (k) size( k) 1 Basque (eus) English 5 20 8 9 6 2 Slovak (slk) English 61 134 8 36 8 3 Burmese (mya) English 21 101 8 21 8 4 Korean (kor) English 206 386 9 67 8 5 Lithuanian (lit) English 42 108 8 29 8 6 Arabic (ara) English 214 308 8 69 8 7 Czech (ces) English 103 181 8 47 8 8 Esperanto (epo) English 7 21 8 10 6 9 Finnish (fin) English 24 77 8 22 8 10 Albanian (sqi) English 45 93 8 30 8 11 Vietnamese (vie) English 172 66 8 61 8 8638 Please provide your prediction of the BLEU score in the yellow area given all the information in this sheet. Note that all experiments are trained with the same model. idx Source Target Parallel Source Source Target Target BLEU Language Lang. 
Sentences vocab subword vocab subword (k) size (k) vocab size (k) vocab size (k) size( k) 1 Basque (eus) English 5 20 8 9 6 2 Slovak (slk) English 61 134 8 36 8 3 Burmese (mya) English 21 101 8 21 8 4 Korean (kor) English 206 386 9 67 8 5 Lithuanian (lit) English 42 108 8 29 8 6 Arabic (ara) English 214 308 8 69 8 7 Czech (ces) English 103 181 8 47 8 8 Esperanto (epo) English 7 21 8 10 6 9 Finnish (fin) English 24 77 8 22 8 10 Albanian (sqi) English 45 93 8 30 8 11 Vietnamese (vie) English 172 66 8 61 8 12 French (fra) English 192 158 8 65 8 37.74 13 Estonian (est) English 11 39 8 14 7 9.9 14 Macedonian (mkd) English 25 61 8 23 8 21.75 15 Bosnian (bos) English 6 23 8 9 6 32.42 16 Swedish (swe) English 57 84 8 34 8 33.92 17 Polish (pol) English 176 267 8 63 8 21.51 18 Persian (fas) English 151 148 8 57 8 24.5 19 Kurdish (kur) English 10 39 8 14 7 6.86 20 Hungarian (hun) English 147 305 8 56 8 22.67 21 Slovenian (slv) English 20 58 8 20 8 14.18 22 Romanian (ron) English 181 205 8 63 8 32.42 23 Russian (rus) English 208 291 8 68 8 22.6 24 Serbian (srp) English 137 239 8 54 8 30.41 25 Tamil (tam) English 6 27 8 10 6 1.82 26 Kazakh (kaz) English 3 15 8 7 5 2.05 27 Marathi (mar) English 10 29 8 13 7 3.68 28 Ukrainian (ukr) English 108 191 8 48 8 24.09 29 Thai (tha) English 98 323 8 45 8 20.34 30 Belarusian (bel) English 5 20 8 8 5 2.85 31 Turkish (tur) English 182 304 8 63 8 22.52 32 Azerbaijani (aze) English 6 23 8 9 6 3.1 33 German (deu) English 168 194 8 61 8 33.15 34 Bulgarian (bul) English 174 216 8 62 8 35.78 35 Norwegian (nob) English 16 36 8 17 7 29.63 36 Georgian (kat) English 13 44 8 15 7 4.94 37 Danish (dan) English 45 72 8 31 8 37.73 38 Armenian (hye) English 21 56 8 20 8 13.97 39 Mandarin (cmn) English 200 481 9 67 8 17.0 8639 idx Source Target Parallel Source Source Target Target BLEU Language Language Sentences vocab subword vocab subword 40 Indonesian (ind) English 87 76 8 43 8 27.27 41 Galician (glg) English 10 28 8 13 7 16.84 42 Portuguese (por) English 185 165 8 64 8 41.67 43 Urdu (urd) English 6 13 6 10 6 3.38 44 Italian (ita) English 205 195 8 67 8 35.67 45 Spanish (spa) English 196 179 8 66 8 39.48 46 Greek (ell) English 134 171 8 54 8 34.94 47 Bengali (ben) English 5 18 8 9 6 2.79 48 Japanese (jpn) English 204 584 9 67 8 11.42 49 Malay (msa) English 5 13 7 9 6 3.68 50 Dutch (nld) English 184 172 8 63 8 34.27 51 Croatian (hrv) English 122 191 8 52 8 31.84 52 Hebrew (heb) English 212 276 8 68 8 33.89 53 Mongolian (mon) English 8 21 8 11 6 2.96 54 Hindi (hin) English 19 31 8 19 7 14.25 Answer Key: eus: 3.37, slk: 25.36, mya: 3.93, kor: 16.23, lit: 13.75, ara: 28.38, ces: 25.07, epo: 3.28, fin: 13.79, sqi: 29.6, vie: 24.67. 8640 B Representative datasets In this section, we show the searching results of most/least representative subsets for the rest of the five tasks. 
Figure 4: Beam search results (beam size=100) for up to the 5 most (and least) representative datasets for the remaining NLP tasks. We also show random search results of corresponding sizes. [Panels: UD, TSFMT, TSFPOS, TSFPARSING, TSFEL.]

C New Model

In this section, we show the extrapolation performance for new models on BLI, MA and the remaining systems of UD.

Figure 5: RMSE scores of BLI task from dataset-wise mean value predictor (the dashed black line in each graph) and predictors trained with experimental records of other models and 0–5 records from a new model (as indicated by the title of each graph).

Figure 6: RMSE scores of MA task from dataset-wise mean value predictor (the dashed black line in each graph) and predictors trained with experimental records of other models and 0–5 records from a new model (as indicated by the title of each graph).

Figure 7: RMSE scores of UD task from dataset-wise mean value predictor (the dashed black line in each graph) and predictors trained with experimental records of other models and 0–5 records from a new model (as indicated by the title of each graph).
D Feature importance

In this section, we show the plots of feature importance for all the tasks.

[Feature importance (F score) plots for the Wiki-MT, TED-MT, TSFMT, TSFPARSING, TSFPOS, TSFEL, BLI, MA, and UD tasks.]
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8647–8657 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8647 ScriptWriter: Narrative-Guided Script Generation Yutao Zhu1, Ruihua Song2,∗, Zhicheng Dou3,∗, Jian-Yun Nie1, Jin Zhou4 1Universit´e de Montr´eal, Montr´eal, Qu´ebec, Canada 2Microsoft XiaoIce, Beijing, China 3Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 4Beijing Film Academy, Beijing, China [email protected], [email protected] [email protected], [email protected], [email protected] Abstract It is appealing to have a system that generates a story or scripts automatically from a storyline, even though this is still out of our reach. In dialogue systems, it would also be useful to drive dialogues by a dialogue plan. In this paper, we address a key problem involved in these applications - guiding a dialogue by a narrative. The proposed model ScriptWriter selects the best response among the candidates that fit the context as well as the given narrative. It keeps track of what in the narrative has been said and what is to be said. A narrative plays a different role than the context (i.e., previous utterances), which is generally used in current dialogue systems. Due to the unavailability of data for this new application, we construct a new large-scale data collection GraphMovie from a movie website where endusers can upload their narratives freely when watching a movie. Experimental results on the dataset show that our proposed approach based on narratives significantly outperforms the baselines that simply use the narrative as a kind of context. 1 Introduction Narrative is generally understood as a way to tell a story. WordNet defines it as “a message that tells the particulars of an act or occurrence or course of events; presented in writing or drama or cinema or as a radio or television program”1. Narrative plays an important role in many natural language processing (NLP) tasks. For example, in storytelling, the storyline is a type of narrative, which helps generate coherent and consistent stories (Fan et al., 2018, 2019). In dialogue generation, narrative can be used to define a global plan for the whole conversation session, so as to avoid generating inconsistent ∗Corresponding authors. 1http://wordnetweb.princeton.edu/perl/ webwn?s=narrative Narrative Jenny doesn’t like to go home. To accompany Jenny, Gump decides to go home later. Gump is Jenny’s best friend. 珍妮不喜欢回家。为了陪珍妮,甘决定晚点回家。甘是珍妮最 好的朋友。 Initial line Mama's going to worry about me. 妈会担心我的 1st line Just stay a little longer. 再坐一会! Yeah, and I'll bet you $ 10,000 he laughs his ass off. 我打赌他会笑破肚皮 2nd line Ok, Jenny, I'll stay. 好,珍妮,我留下来 She lived in an old house. 她家的房子破旧 3rd line He was a very loving man. 他是个非常有爱心的人 You are my most special friend. 你是我最特别的朋友 Figure 1: An example of part of a script with a narrative extracted from our GraphMovie dataset. The checked lines are from a ground-truth session, while the unchecked responses are other candidates that are relevant but not coherent with the narrative. and scattered responses (Xing et al., 2018; Tian et al., 2017; Ghazvininejad et al., 2018). In this work, we investigate the utilization of narratives in a special case of text generation – movie script generation. This special form of conversation generation is chosen due to the unavailability of the data for a more general form of application. 
Yet it does require the same care to leverage narratives in general conversation, and hence can provide useful insight to a more general form of narrative-guided conversation. The dataset we use to support our study is collected from GraphMovie2, where an end-user retells the story of a movie by uploading descriptive paragraphs in his/her own words. More details about the dataset will be presented in Section 3.2. An example is shown in Figure 1, where the narrative 2http://www.graphmovies.com/home/2/ index.php. Unfortunately, we find this website was closed recently. 8648 is uploaded to retell several lines of a script in a movie. Our task is to generate/select the following lines by leveraging the narrative. Our problem is closely related to dialogue generation that takes into account the context (Wu et al., 2017; Zhang et al., 2018; Zhou et al., 2018b). However, a narrative plays a different and more specific role than a general context. In particular, a narrative may cover the whole story (a part of a script), thus a good conversation should also cover all the aspects mentioned in a narrative, which is not required with a general context. In this paper, we propose a new model called ScriptWriter to address the problem of script generation/selection with the help of a narrative. ScriptWriter keeps track of what in the narrative has been said and what is remaining to select the next line by an updating mechanism. The matching between updated narrative, context, and response are then computed respectively and finally aggregated as a matching score. As it is difficult to evaluate the quality of script generation, we frame our work in a more restricted case - selecting the right response among a set of candidates. This form of more limited conversation generation retrieval-based conversation - has been widely used in the previous studies (Wu et al., 2017; Zhou et al., 2018b), and it provides an easier way to evaluate the impact of narratives. We conduct experiments on a dataset we collected and made publicly available (see Section 5). The experiments will show that using a narrative to guide the generation/selection of script is a much more appropriate approach than using it as part of the general context. Our work has three main contributions: (1) To our best knowledge, this is the first investigation on movie script generation with a narrative. This task could be further extended to a more general text generation scenario when suitable data are available. (2) We construct the first large-scale data collection GraphMovie to support research on narrativeguided movie script generation, which is made publicly accessible. (3) We propose a new model in which a narrative plays a specific role in guiding script generation. This will be shown to be more appropriate than a general context-based approach. 2 Related Work 2.1 Narrative Understanding It has been more than thirty years since researchers proposed “narrative comprehension” as an important ability of artificial intelligence (Rapaport et al., 1989). The ultimate goal is the development of a computational theory to model how humans understand narrative texts. Early explorations used symbolic methods to represent the narrative (Turner, 1994; Bringsjord and Ferrucci, 1999) or rule-based approaches to generate the narrative (Riedl and Young, 2010). 
Recently, deep neural networks have been used to tackle the problem (Bamman et al., 2019), and related problems such as generating coherent and cohesive text (Cho et al., 2019) and identifying relations in generated stories (Roemmele, 2019) have also been addressed. However, these studies only focused on how to understand a narrative itself (e.g., how to extract information from a narrative). They did not investigate how to utilize the narrative in an application task such as dialogue generation.

2.2 Dialogue Systems

Existing methods of open-domain dialogue can be categorized into two groups: retrieval-based and generation-based. Recent work on response generation is mainly based on a sequence-to-sequence structure with an attention mechanism (Shang et al., 2015; Vinyals and Le, 2015), with multiple extensions (Li et al., 2016; Xing et al., 2017; Zhou et al., 2018a, 2020; Zhu et al., 2020). Retrieval-based methods try to find the most reasonable response from a large repository of conversational data, instead of generating a new one (Wu et al., 2017; Zhou et al., 2018b; Zhang et al., 2018). In general, the utterances in the previous turns are taken together as the context for selecting the next response. Retrieval-based methods are widely used in real conversation products due to their more fluent and diverse responses and better efficiency. In this paper, we focus on extending retrieval-based methods by using a narrative as a plan for a session. This is a new problem that has not been studied before.

In contrast to open-domain chatbots, task-oriented systems are designed to accomplish tasks in a specific domain (Seneff et al., 1998; Levin et al., 2000; Wang et al., 2011; Tur and Mori, 2011). In these systems, a dialogue state tracking component is designed for tracking what has happened in a dialogue (Williams and Young, 2007; Henderson et al., 2014; Xu and Rudnicky, 2000). This inspires us to track the remaining information in the narrative that has not been expressed by previous lines of conversation. However, existing methods cannot be applied to our task directly, as they are usually predefined for specific tasks, and the state tracking is often framed as a classification problem.

Table 1: Statistics of GraphMovie corpus.

                         Training    Validation      Test
# Sessions                 14,498           805       806
# Micro-sessions          136,524        37,480    38,320
# Candidates                    2            10        10
Min. #turns                     2             2         2
Max. #turns                    34            27        17
Avg. #turns                  4.71          4.66      4.75
Avg. #words in Narr.        25.04         24.86     24.18

2.3 Story Generation

Existing studies have also tried to generate a story. Early work relied on symbolic planning (Meehan, 1977; Cavazza et al., 2002) and case-based reasoning (y Pérez and Sharples, 2001; Gervás et al., 2005), while more recent work uses deep learning methods. Some of them focused on story ending generation (Peng et al., 2018; Guan et al., 2019), where the story context is given, and the model is asked to select a coherent and consistent story ending. This is similar to the dialogue generation problem mentioned above. Besides, attempts have been made to generate a whole story from scratch (Fan et al., 2018, 2019). Compared with the former task, the latter is more challenging since the story framework and storyline should all be controlled by the model. Some recent studies also tried to guide the generation of dialogues (Wu et al., 2019; Tang et al., 2019) or stories (Yao et al., 2019) with keywords: the next response is asked to include the keywords.
This is a step towards guided response generation and bears some similarities with our study. However, a narrative is more general than keywords, and it provides a description of the dialogue session rather than imposing keywords to the next response. 3 Problem Formulation and Dataset 3.1 Problem Formulation Suppose that we have a dataset D, in which a sample is represented as (y, c, p, r), where c = {s1, · · · , sn} represents a context formed by the preceding sentences/lines {si}n i=1; p is a predefined narrative that governs the whole script session, and r is a next line candidate (we refer to it as a response); y ∈{0, 1} is a binary label, indicating whether r is a proper response for the given c and p. Intuitively, a proper response should be relevant to the context, and be coherent and aligned with the narrative. Our goal is to learn a model g(c, p, r) with D to determine how suitable a response r is to the given context c and narrative p. 3.2 Data Collection and Construction Data is a critical issue in research on story/dialogue generation. Unfortunately, no dataset has been created for narrative-guided story/dialogue generation. To fill the gap, we constructed a test collection from GraphMovie, where an editor or a user can retell the story of a movie by uploading descriptive paragraphs in his/her own words to describe screenshots selected from the movie. A movie on this website has, on average, 367 descriptions. A description paragraph often contains one to three sentences to summarize a fragment of a movie. It can be at different levels - from retelling the same conversations to a high-level description. We consider these descriptions as narratives for a sequence of dialogues, which we call a session in this paper. Each dialogue in a session is called a line of script (or simply a line). To construct the dataset, we use the top 100 movies in IMDB3 as an initial list. For each movie, we collect its description paragraphs from GraphMovie. Then we hire annotators to watch the movie and annotate the start time and end time of the dialogues corresponding to each description paragraph through an annotation tool specifically developed for this purpose. According to the start and end time, the sequence of lines is extracted from the subtitle file and aligned with a corresponding description paragraph. As viewers of a movie can upload descriptions freely, not all description paragraphs correspond to a narrative and are suitable for our task. For example, some uploaded paragraphs express one’s subjective opinions about the movie, the actors, or simply copy the script. Therefore, we manually review the data and remove such non-narrative data. We also remove sessions that have less than two lines. Finally, we obtain 16,109 script sessions, 3https://www.imdb.com/ 8650 each of which contains a description paragraph (narrative) and corresponding lines of the script. As shown in Table 1, on average, a narrative has about 25 words, and a session has 4.7 lines. The maximum number of lines in a session is 34. Our task is to select one response from a set of candidates at any point during the session. By moving the prediction point through the session, we obtain a set of micro-sessions, each of which has a sequence of previous lines as context at that point of time, the same narrative as the session, and the next line to predict. 
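Concretely, expanding a session into micro-sessions can be sketched as follows. This is a minimal sketch: the MicroSession record and the retrieve_candidates hook are hypothetical names introduced only for illustration; the actual distractor candidates are retrieved as described next.

```python
# Minimal sketch of splitting a script session into micro-sessions, as
# described above. Negative-candidate retrieval is abstracted behind the
# `retrieve_candidates` placeholder (the paper retrieves them with Solr).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MicroSession:
    narrative: str
    context: List[str]   # preceding lines of the session
    response: str        # candidate next line
    label: int           # 1 = true next line, 0 = retrieved distractor

def build_micro_sessions(narrative: str, lines: List[str],
                         retrieve_candidates: Callable[[List[str]], List[str]],
                         n_negatives: int) -> List[MicroSession]:
    samples = []
    for t in range(1, len(lines)):            # prediction point moves through the session
        context, gold = lines[:t], lines[t]
        samples.append(MicroSession(narrative, context, gold, 1))
        for cand in retrieve_candidates(context)[:n_negatives]:
            if cand != gold:
                samples.append(MicroSession(narrative, context, cand, 0))
    return samples
```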
The candidates to be selected contain one ground-truth line (the one that is genuinely the next line), together with one (in the training set) or nine (in the validation/test set) other candidates retrieved with the previous lines by Solr (https://lucene.apache.org/solr/). The above preparation of the dataset follows the practice in the literature (Wu et al., 2017) for retrieval-based dialogue.

4 Proposed Method: ScriptWriter

4.1 Overview

A good response is required to be coherent with the previous lines, i.e., the context, and to be consistent with the given narrative. For example, "Just stay a little longer" can respond to "Mama's going to worry about me" and has no conflict with the narrative in Figure 1. Furthermore, as our target is to generate all lines in the session successively, it is also required that the following lines convey the information that the former lines have not conveyed. Otherwise, only a part of the narrative is covered, and we will miss some other aspects specified in the narrative.

We propose an attention-based model called ScriptWriter to solve the problem. ScriptWriter follows a representation-matching-aggregation framework. First, the narrative, the context, and the response candidate are represented at multiple granularities by multi-level attentive blocks. Second, we propose an updating mechanism to keep track of which parts of a narrative have been expressed and explicitly lower their weights in the updated narrative, so that more emphasis can be put on the remaining parts. Third, matching features are extracted between different elements: between context and response, to capture whether it is a proper reply; between narrative and response, to capture whether it is consistent with the narrative; and between context and narrative, to implicitly track what in the narrative has been expressed in the previous lines. Finally, the above matching features are concatenated together, and a final matching score is produced by convolutional neural networks (CNNs) and a multi-layer perceptron (MLP).

4.2 Representation

To better handle the gap in words between two word sequences, we propose to use an attentive block, which is similar to that used in the Transformer (Vaswani et al., 2017). The input of an attentive block consists of three sequences, namely query (Q), key (K), and value (V). The output is a new representation of the query and is denoted as AttentiveBlock(Q, K, V) in the remainder of the paper. This structure is used to represent a response, lines in the context, and a narrative.

More specifically, given a narrative $p = (w^p_1, \cdots, w^p_{n_p})$, a line $s_i = (w^{s_i}_1, \cdots, w^{s_i}_{n_{s_i}})$ and a response candidate $r = (w^r_1, \cdots, w^r_{n_r})$, ScriptWriter first uses a pre-trained embedding table to map each word $w$ to a $d_e$-dimensional embedding $e$, i.e., $w \Rightarrow e$. Thus the narrative $p$, the line $s_i$ and the response candidate $r$ are represented by matrices $\mathbf{P}^0 = (e^p_1, \cdots, e^p_{n_p})$, $\mathbf{S}^0_i = (e^{s_i}_1, \cdots, e^{s_i}_{n_{s_i}})$ and $\mathbf{R}^0 = (e^r_1, \cdots, e^r_{n_r})$. Then ScriptWriter takes $\mathbf{P}^0$, $\{\mathbf{S}^0_i\}_{i=1}^{n}$ and $\mathbf{R}^0$ as inputs and uses stacked attentive blocks to construct multi-level self-attention representations. The output of the $(l-1)$th level of attentive block is the input to the $l$th level. The representations of $p$, $s_i$, and $r$ at the $l$th level are defined as follows:

$\mathbf{P}^l = \mathrm{AttentiveBlock}(\mathbf{P}^{l-1}, \mathbf{P}^{l-1}, \mathbf{P}^{l-1}),$ (1)
$\mathbf{S}^l_i = \mathrm{AttentiveBlock}(\mathbf{S}^{l-1}_i, \mathbf{S}^{l-1}_i, \mathbf{S}^{l-1}_i),$ (2)
$\mathbf{R}^l = \mathrm{AttentiveBlock}(\mathbf{R}^{l-1}, \mathbf{R}^{l-1}, \mathbf{R}^{l-1}),$ (3)

where $l$ ranges from 1 to $L$.
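Since the paper only states that the attentive block is similar to a Transformer layer, the following PyTorch sketch should be read as one plausible instantiation: the use of residual connections, layer normalization, and the feed-forward size are assumptions, not the authors' exact design.

```python
# A minimal sketch of an attentive block in PyTorch. The residual connections,
# layer normalization, and feed-forward width are assumptions for illustration.
import math
import torch
import torch.nn as nn

class AttentiveBlock(nn.Module):
    def __init__(self, d_model: int, d_ff: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, query, key, value):
        # Scaled dot-product attention: each query position attends to the keys.
        scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(query.size(-1))
        attended = torch.matmul(torch.softmax(scores, dim=-1), value)
        x = self.norm1(query + attended)       # residual connection + layer norm
        return self.norm2(x + self.ffn(x))     # position-wise feed-forward

# Self-attention (Eqs. 1-3) feeds the same sequence as query, key, and value;
# cross-attention (Eqs. 4-5, next) simply uses a different sequence as key/value.
block = AttentiveBlock(d_model=64)
P = torch.randn(2, 25, 64)   # a batch of 2 narratives: 25 words, 64-dim embeddings
P_next = block(P, P, P)
```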
Inspired by a previous study (Zhou et al., 2018b), we apply another group of attentive blocks, referred to as cross-attention, to capture the semantic dependency between $p$, $s_i$ and $r$. Considering $p$ and $s_i$ first, their cross-attention representations are defined by:

$\mathbf{P}^l_{s_i} = \mathrm{AttentiveBlock}(\mathbf{P}^{l-1}, \mathbf{S}^{l-1}_i, \mathbf{S}^{l-1}_i),$ (4)
$\mathbf{S}^l_{i,p} = \mathrm{AttentiveBlock}(\mathbf{S}^{l-1}_i, \mathbf{P}^{l-1}, \mathbf{P}^{l-1}).$ (5)

Here, the words in the narrative can attend to all words in the line, and vice versa. In this way, some inter-dependent segment pairs, such as "stay" in the line and "go home later" in the narrative, become close to each other in the representations. Similarly, we compute cross-attention representations between $p$ and $r$ and between $r$ and $s_i$ at different levels, which are denoted as $\mathbf{P}^l_r$, $\mathbf{R}^l_p$, $\mathbf{S}^l_{i,r}$ and $\mathbf{R}^l_{s_i}$. These representations further provide matching information across different elements in the next step.

4.3 Updating Mechanism

We design an updating mechanism to keep track of the coverage of the narrative by the lines, so that the selection of the response will focus on the uncovered parts. The mechanism is illustrated in Figure 2. We update the narrative gradually with the lines in the context, one by one.

Figure 2: Updating mechanism in ScriptWriter. The representation of the narrative is updated by lines in the context one by one. The information that has been expressed is decayed. Thus the updated narrative focuses more on the remaining information.

For the $i$th line $s_i$, we conduct a matching between $\mathbf{S}_i$ and $\mathbf{P}$ by their cosine similarity at all levels $l$ of attentive blocks:

$\mathbf{T}^l_{s_i,p}[j][k] = \cos(\mathbf{S}^l_i[j], \mathbf{P}^l[k]),$ (6)

where $j$ and $k$ stand for the $j$th word in $s_i$ and the $k$th word in $p$, respectively. To summarize how much information in $p$ has been expressed by $s_i$, we compute a vector $\mathbf{D}_i$ by conducting summations along the vertical axis on each level of the matching map $\mathbf{T}_{s_i,p}$. The summation on the $l$th level is:

$\mathbf{D}^l_i = [d^l_{i,1}, d^l_{i,2}, \cdots, d^l_{i,n_p}],$ (7)
$d^l_{i,k} = \gamma \sum_{j=1}^{n_{s_i}} \mathbf{T}^l_{s_i,p}[j][k],$ (8)

where $n_p$ and $n_{s_i}$ denote the numbers of words in $p$ and $s_i$; $\gamma \in [0, 1]$ is a parameter to learn and works as a gate to control the decaying degree of the mentioned information. Finally, we update the narrative's representation for the $i$th line $s_i$ in the context as follows:

$\mathbf{P}^l_{i+1} = (1 - \mathbf{D}^l_i)\mathbf{P}^l_i.$ (9)

The initial representation $\mathbf{P}^l_0$ is equal to $\mathbf{P}^l$ defined in Equation (1). If there are $n$ lines in the context, this update is executed $n$ times, and $(1 - \mathbf{D}^l)$ will produce a continuous decaying effect.
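A compact sketch of the updating mechanism (Equations (6)-(9)) for a single representation level is given below. The tensor shapes, the absence of batching, and the parameterization of the gate $\gamma$ through a sigmoid are simplifying assumptions for illustration.

```python
# Sketch of the narrative-updating mechanism (Eqs. 6-9) at one level l.
# S_i is (n_si, d) for the i-th context line, P is (n_p, d) for the narrative;
# batching is omitted for clarity.
import torch
import torch.nn.functional as F

class NarrativeUpdater(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # gamma in [0, 1] gates how strongly covered words are decayed (Eq. 8).
        self.gamma_logit = torch.nn.Parameter(torch.zeros(1))

    def forward(self, P, context_lines):
        gamma = torch.sigmoid(self.gamma_logit)
        for S_i in context_lines:                        # update line by line
            # Eq. 6: cosine similarity between every word pair, shape (n_si, n_p).
            T = F.normalize(S_i, dim=-1) @ F.normalize(P, dim=-1).t()
            # Eqs. 7-8: column sums measure how much of each narrative word
            # has been expressed by this line.
            D = gamma * T.sum(dim=0)                     # shape (n_p,)
            # Eq. 9: decay the covered positions of the narrative representation.
            P = (1.0 - D).unsqueeze(-1) * P
        return P

updater = NarrativeUpdater()
P = torch.randn(25, 64)                       # narrative: 25 words, 64-dim
lines = [torch.randn(8, 64), torch.randn(12, 64)]
P_updated = updater(P, lines)
```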
First, ScriptWriter computes dot products on these two kinds of representations separately, as follows:

m^{self}_{s_i,p,l}[j, k] = S^l_i[j]^T · P^l[k],   (10)
m^{cross}_{s_i,p,l}[j, k] = \bar{S}^l_{i,p}[j]^T · \bar{P}^l_{s_i}[k],   (11)

where l ranges from 0 to L. Each element is the dot product of the j-th word representation in S^l_i or \bar{S}^l_{i,p} and the k-th word representation in P^l or \bar{P}^l_{s_i}. The matching maps of the different layers are then concatenated together as follows:

m^{self}_{s_i,p}[j, k] = [m^{self}_{s_i,p,0}[j, k]; ...; m^{self}_{s_i,p,L}[j, k]],
m^{cross}_{s_i,p}[j, k] = [m^{cross}_{s_i,p,0}[j, k]; ...; m^{cross}_{s_i,p,L}[j, k]],

where [;] is the concatenation operation. Finally, the matching features computed from the self-attention representation and the cross-attention representation are fused as follows:

M_{s_i,p}[j, k] = [m^{self}_{s_i,p}[j, k]; m^{cross}_{s_i,p}[j, k]].

The matching matrices M_{p,r} and M_{s_i,r} for narrative-response and context-response are constructed in a similar way; for the sake of brevity, we omit the formulas. After concatenation, each cell in M_{s_i,p}, M_{p,r}, or M_{s_i,r} has 2(L + 1) channels and contains matching information at the different levels. The matching between narrative, context, and response serves different purposes. Context-response matching (M_{s_i,r}) serves to select a response suitable for the context. Context-narrative matching (M_{s_i,p}) helps the model "remember" how much information has been expressed and implicitly influences the selection of the next responses. Narrative-response matching (M_{p,r}) helps the model select a response that is more consistent with the narrative. As the narrative keeps being updated along with the lines in the context, ScriptWriter tends to dynamically choose the response that matches what remains unexpressed in the narrative.

4.5 Aggregation

To further use the information across two consecutive lines, ScriptWriter piles up all the context-narrative matching matrices and all the context-response matching matrices to construct two cubes Q_cp = {M_{s_i,p}[j, k]}_{i=1}^n and Q_cr = {M_{s_i,r}[j, k]}_{i=1}^n, where n is the number of lines in the session. ScriptWriter then employs 3D convolutions to distill important matching features from each whole cube. We denote the two resulting feature vectors as f(c, p) and f(c, r). For narrative-response matching, ScriptWriter applies 2D convolutions to M_{p,r} to distill matching features between the narrative and the response, denoted as f(p, r). The three types of matching features are concatenated, and the matching score g(c, p, r) for ranking response candidates is computed by an MLP with a sigmoid activation function:

f(c, p, r) = [f(c, p); f(c, r); f(p, r)],   (12)
g(c, p, r) = sigmoid(W^T f(c, p, r) + b),   (13)

where W and b are parameters. ScriptWriter learns g(c, p, r) by minimizing the cross entropy over the dataset D. The objective function is formulated as:

L(θ) = − Σ_{(y,c,p,r)∈D} [y log(g(c, p, r)) + (1 − y) log(1 − g(c, p, r))].   (14)

5 Experiments

5.1 Evaluation setup

As presented in Table 1, we randomly split the GraphMovie collection into training, validation, and test sets with a ratio of 18:1:1. We split the sessions into micro-sessions: given a session with n lines in the context, we split it into n micro-sessions with lengths varying from 1 to n. These micro-sessions share the same narrative. By doing this, the model is asked to learn to select one line as the response from a set of candidates at any point during the session, and the dataset, in particular for training, is significantly enlarged.
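A minimal sketch of the micro-session construction described above, in plain Python; the session data structure and the exact alignment of context and response are our reading of the setup rather than the released preprocessing:

```python
def make_micro_sessions(narrative, session_lines):
    """Split one session of n lines into n micro-sessions (lengths 1..n):
    the k-th micro-session uses the first k-1 lines as context and the
    k-th line as the ground-truth response. Every micro-session shares
    the same narrative."""
    return [{"narrative": narrative,
             "context": session_lines[:k - 1],
             "response": session_lines[k - 1]}
            for k in range(1, len(session_lines) + 1)]

# Example
session = ["Mama's going to worry about me.",
           "Just stay a little longer.",
           "I really have to go."]
micro = make_micro_sessions("A girl wants to go home, her friend asks her to stay.", session)
print(len(micro))  # 3
```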
We conduct two kinds of evaluation as follows: Turn-level task asks a model to rank a list of candidate responses based on its given context and narrative for a micro-session. The model then selects the best response for the current turn. This setting is similar to the widely studied response selection task (Wu et al., 2017; Zhou et al., 2018b; Zhang et al., 2018). We follow these previous studies and employ recall at position k in n candidates (Rn@k) and mean reciprocal rank (MRR) (Voorhees, 1999) as evaluation metrics. For example, R10@1 means recall at one when we rank ten candidates (one positive sample and nine negative samples). The final results are average numbers over all micro-sessions in the test set. Session-level task aims to predict all the lines in a session gradually. It starts with the first line of the session as the context and the given narrative and predicts the best next line. The predicted line is then incorporated into the context to predict the 8653 next line. This process continues until the last line of the session is selected. Finally, we calculate precision over the whole original session and report average numbers over all sessions in the test set. Precision is defined as the number of correct selection divided by the number of lines in a session. We consider two measures: 1) Pstrict which accepts a right response at the right position; 2) Pweak which accepts a right response at any position. 5.2 Baselines As no previous work has been done on narrativebased script generation, no proper baseline exists. Nevertheless, some existing multi-turn conversation models based on context can be adapted to work with a narrative: the context is simply extended with the narrative. Two different extension methods have been tested: the narrative is added into the context together with the previous lines; the narrative is used as a second context. In the latter case, two matching scores are obtained for contextnarrative and narrative-response. They are aggregated through an MLP to produce a final score. This second approach turns out to perform better. Therefore, we only report the results with this latter method5. (1) MVLSTM (Wan et al., 2016): it concatenates all previous lines as a context and uses an LSTM to encode the context and the response candidate. A matching score is determined by an MLP based on a map of cosine similarity between them. A matching score for narrative-response is produced similarly. (2) DL2R (Yan et al., 2016): it encodes the context by an RNN followed by a CNN. The matching score is computed similarly to MVLSTM. (3) SMN (Wu et al., 2017): it matches each line with response sequentially to produce a matching vector with CNNs. The matching vectors are aggregated with an RNN. (4) DAM (Zhou et al., 2018b): it represents a context and a response by using self-attention and cross-attention operation on them. It uses CNNs to extract features and uses an MLP to get a score. Different from our model, this model only considers the context-response matching and does not track what in the narrative has already been expressed by the previous lines, i.e., context. 5We also tested some basic models such as RNN, LSTM, and BiLSTM (Lowe et al., 2015) in our experiments. However, they cannot achieve comparable results to the selected baselines. (5) DUA (Zhang et al., 2018): it concatenates the last line with each previous line in the context and response, respectively. 
Then it performs a self-attention operation to get refined representations, based on which matching features are extracted with CNNs and RNNs.

5.3 Training Details

All models are implemented in TensorFlow (https://www.tensorflow.org). Word embeddings are pre-trained with Word2vec (Mikolov et al., 2013) on the training set with 200 dimensions. We test the stack number in {1, 2, 3} and report our results with three stacks. Due to limited resources, we could not conduct experiments with a larger number of stacks, which could be tested in the future. The two 3D convolutional layers have 32 and 16 filters, respectively; both use [3,3,3] as the kernel size, and the max-pooling size is [3,3,3]. The two 2D convolutional layers for narrative-response matching have 32 and 16 filters with [3,3] as the kernel size; the max-pooling size is also [3,3]. All parameters are optimized with the Adam optimizer (Kingma and Ba, 2015). The learning rate is 0.001 and is decreased during training. The initial value of γ is 0.5. The batch size is 64. We use the validation set to select the best models and report their performance on the test set. The maximum number of lines in a context is set to ten, and the maximum length of a line, response, and narrative sentence is set to 50. All sentences are zero-padded to the maximum length. We also pad with zeros if the number of lines in a context is less than ten; otherwise, we keep the latest ten lines. The dataset and the source code of our model are available on GitHub (https://github.com/DaoD/ScriptWriter).

5.4 Results and Analysis

5.4.1 Evaluation Results

Table 2: Evaluation results on the two response selection tasks (turn-level and session-level). Our ScriptWriter model is denoted SW. † and ⋆ denote that SW improves significantly over the corresponding result in a t-test with p ≤ 0.01 and p ≤ 0.05, respectively.

                    Turn-level                        Session-level
Method       R2@1     R10@1    R10@5    MRR        Pstrict   Pweak
MVLSTM       0.651†   0.217†   0.732†   0.395†     0.198†    0.224†
DL2R         0.643†   0.210†   0.638†   0.314†     0.230†    0.243†
SMN          0.641†   0.176†   0.696†   0.392†     0.197†    0.236†
DAM          0.631†   0.240†   0.733†   0.408†     0.226†    0.236†
DUA          0.654†   0.237†   0.736†   0.396†     0.223†    0.251†
SW           0.730    0.365    0.814    0.503      0.373     0.383
SW_static    0.723    0.351    0.801    0.484†     0.338†    0.366
SW-PR        0.654†   0.246†   0.721†   0.398†     0.223†    0.239†
SW-CP        0.710⋆   0.326†   0.793†   0.473†     0.329†    0.352†
SW-CR        0.725    0.316†   0.766†   0.466†     0.335†    0.382

The experimental results are reported in Table 2. The results of both the turn-level and session-level evaluations indicate that ScriptWriter dramatically outperforms all baselines, including DAM and DUA, which are two state-of-the-art models for multi-turn response selection. All improvements are statistically significant (p-value ≤ 0.01). DAM performs better than the other baselines, which confirms the effectiveness of the self- and cross-attention mechanisms used in this model. The DUA model also uses an attention mechanism and outperforms the other baselines that do not. Both observations confirm the advantage of attention mechanisms over pure RNNs. Between the two session-level measures, we observe that our model is less affected when moving from Pweak to Pstrict. This shows that ScriptWriter can better select a response in the right position. We attribute this behavior to the utilization of narrative coverage.

5.4.2 Model Ablation

We conduct an ablation study to investigate the impact of the different modules of ScriptWriter. First, we remove the updating mechanism by setting γ = 0 (i.e., the representation of the narrative is not updated but static).
This model is denoted as ScriptWriter_static in Table 2. Then we remove the narrative-response, context-narrative, and context-response matching, respectively; these variants are denoted as ScriptWriter-PR, ScriptWriter-CP, and ScriptWriter-CR. The ablation results are shown in the second part of Table 2. We have the following findings: 1) ScriptWriter performs better than ScriptWriter_static, demonstrating the effectiveness of the updating mechanism for the narrative. The optimal value of γ is around 0.647 after training, which means that only about 35% of a piece of information is kept once a line has conveyed it. 2) In both the turn-level and session-level evaluations, the performance drops the most when we remove narrative-response matching. This indicates that the relevance of the response to the narrative is the most useful information in narrative-guided script generation. 3) When we remove context-narrative matching, the performance also drops, indicating that context-narrative matching may provide implicit and complementary information for controlling the alignment of response and narrative. 4) In contrast, when we remove context-response matching, the performance drops as well, but at a much smaller scale, especially on Pweak, than when narrative-response matching is removed. This contrast indicates that the narrative is a more useful piece of information than the context for determining what should be said next, and it should therefore be taken into account with an adequate mechanism.

[Figure 4: The performance of ScriptWriter (SW) and DUA on the test set for different types of narrative in the session-level evaluation, with narratives bucketed by their lexical overlap with the session: 0, (0,0.2), [0.2,0.4), [0.4,0.6), [0.6,0.8), [0.8,1].]

5.4.3 Performance across Narrative Types

As explained earlier, the narratives in our dataset are contributed by netizens and vary in style: some are detailed, while others are general. The question we analyze here is how general vs. detailed narratives affect the performance of response selection. We use a simple method to roughly estimate the degree of detail of a narrative: a narrative with a high lexical overlap with the lines in the session is considered detailed. Narratives are put into six buckets depending on their level of detail, as shown in Figure 4, and we plot the performance of ScriptWriter and DUA in the session-level evaluation over these different types of narratives. The first type, "0", means no word overlap between the narrative and the dialogue session. This is the most challenging case, representing extremely general narratives. It is not surprising that both ScriptWriter and DUA perform poorly on this type compared with the other types in terms of Pstrict. The performance tends to improve as the overlap ratio increases. This is consistent with our intuition: when a narrative is more detailed and better aligned with the session in wording, it is easier to choose the best responses. The plot also shows that ScriptWriter achieves better performance than DUA on all types of narratives, which further demonstrates the effectiveness of using the narrative to guide the dialogue. We also observe that the buckets "[0, 0.2)" and "[0.2, 0.4)" contain the largest proportions of narratives. This indicates that most netizens do not use the original lines to retell a story; the problem we address in this paper is thus non-trivial.
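A minimal sketch of the lexical-overlap bucketing used above; the whitespace tokenization and the exact overlap definition are our assumptions, since the text only states that overlap with the session lines is used:

```python
def overlap_ratio(narrative, session_lines):
    """Fraction of narrative tokens that also appear in the session lines."""
    narrative_tokens = narrative.split()
    session_vocab = {tok for line in session_lines for tok in line.split()}
    if not narrative_tokens:
        return 0.0
    return sum(tok in session_vocab for tok in narrative_tokens) / len(narrative_tokens)

def bucket(ratio):
    """Six buckets as in Figure 4: 0, (0,0.2), [0.2,0.4), [0.4,0.6), [0.6,0.8), [0.8,1]."""
    if ratio == 0.0:
        return "0"
    if ratio < 0.2:
        return "(0,0.2)"
    if ratio < 0.4:
        return "[0.2,0.4)"
    if ratio < 0.6:
        return "[0.4,0.6)"
    if ratio < 0.8:
        return "[0.6,0.8)"
    return "[0.8,1]"
```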
6 Conclusion and Future Work Although story generation has been extensively studied in the literature, no existing work addressed the problem of generating movie scripts following a given storyline or narrative. In this paper, we addressed this problem in the context of generating dialogues in a movie script. We proposed a model that uses the narrative to guide the dialogue generation/retrieval. We keep track of what in the narrative has already been expressed and what is remaining to select the next line through an updating mechanism. The final selection of the next response is based on multiple matching criteria between context, narrative and response. We constructed a new large-scale data collection for narrative-guided script generation from movie scripts. This is the first public dataset available for testing narrativeguided dialogue generation/selection. Experimental results on the dataset showed that our proposed approach based on narrative significantly outperforms the baselines that use a narrative as an additional context, and showed the importance of using the narrative in a proper manner. As a first investigation on the problem, our study has several limitations. For example, we have not considered the order in the narrative description, which could be helpful in generating dialogues in correct order. Other methods to track the dialogue state and the coverage of narrative can also be designed. Further investigations are thus required to fully understand how narratives can be effectively used in dialogue generation. Acknowledgments Ruihua Song and Zhicheng Dou are the corresponding authors. This work was supported by National Natural Science Foundation of China No. 61872370 and No. 61832017, and Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098. References David Bamman, Snigdha Chaturvedi, Elizabeth Clark, Madalina Fiterau, and Mohit Iyyer, editors. 2019. Proceedings of the First Workshop on Narrative Understanding. Association for Computational Linguistics, Minneapolis, Minnesota. Selmer Bringsjord and David Ferrucci. 1999. Artificial intelligence and literary creativity: Inside the mind of brutus, a storytelling machine. Psychology Press. Marc Cavazza, Fred Charles, and Steven J. Mead. 2002. Planning characters’ behaviour in interactive storytelling. Journal of Visualization and Computer Animation, 13(2):121–131. Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, and Jianfeng Gao. 2019. Towards coherent and cohesive long-form text generation. In Proceedings of the First Workshop on Narrative Understanding, pages 1–11, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2650– 2660, Florence, Italy. Association for Computational Linguistics. Pablo Gerv´as, Bel´en D´ıaz-Agudo, Federico Peinado, and Raquel Herv´as. 2005. Story plot generation based on CBR. Knowl. Based Syst., 18(4-5):235– 242. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. 
A knowledge-grounded neural conversation model. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5110–5117. Jian Guan, Yansen Wang, and Minlie Huang. 2019. Story ending generation with incremental encoding and commonsense knowledge. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of 8656 Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6473–6480. Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292– 299, Philadelphia, PA, U.S.A. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Esther Levin, Shrikanth S. Narayanan, Roberto Pieraccini, Konstantin Biatov, Enrico Bocchieri, Giuseppe Di Fabbrizio, Wieland Eckert, Sungbok Lee, A. Pokrovsky, Mazin G. Rahim, P. Ruscitti, and Marilyn A. Walker. 2000. The at&t-darpa communicator mixed-initiative spoken dialog system. In Sixth International Conference on Spoken Language Processing, ICSLP 2000 / INTERSPEECH 2000, Beijing, China, October 16-20, 2000, pages 122– 125. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. James R. Meehan. 1977. Tale-spin, an interactive program that writes stories. In Proceedings of the 5th International Joint Conference on Artificial Intelligence. Cambridge, MA, USA, August 22-25, 1977, pages 91–98. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings. Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pages 43–49, New Orleans, Louisiana. Association for Computational Linguistics. Rafael P´erez y P´erez and Mike Sharples. 2001. MEXICA: A computer model of a cognitive account of creative writing. J. Exp. Theor. Artif. Intell., 13(2):119–139. William J Rapaport, Erwin M Segal, Stuart C Shapiro, David A Zubin, Gail A Bruder, Judith Felson Duchan, and David M Mark. 1989. 
Cognitive and computer systems for understanding narrative text. Mark O. Riedl and Robert Michael Young. 2010. Narrative planning: Balancing plot and character. J. Artif. Intell. Res., 39:217–268. Melissa Roemmele. 2019. Identifying sensible lexical relations in generated stories. In Proceedings of the First Workshop on Narrative Understanding, pages 44–52, Minneapolis, Minnesota. Association for Computational Linguistics. Stephanie Seneff, Edward Hurley, Raymond Lau, Christine Pao, Philipp Schmid, and Victor Zue. 1998. GALAXY-II: a reference architecture for conversational system development. In The 5th International Conference on Spoken Language Processing, Incorporating The 7th Australian International Speech Science and Technology Conference, Sydney Convention Centre, Sydney, Australia, 30th November 4th December 1998. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577–1586, Beijing, China. Association for Computational Linguistics. Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric Xing, and Zhiting Hu. 2019. Targetguided open-domain conversation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5624–5634, Florence, Italy. Association for Computational Linguistics. Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to make context more useful? an empirical study on contextaware neural conversational models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–236, Vancouver, Canada. Association for Computational Linguistics. G. Tur and R. D. Mori. 2011. Spoken language understanding: Systems for extracting semantic information from speech. John Wiley & Sons. Scott R Turner. 1994. Minstrel: A computer model of creativity and storytelling. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all 8657 you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. CoRR, abs/1506.05869. Ellen M. Voorhees. 1999. The TREC-8 question answering track report. In Proceedings of The Eighth Text REtrieval Conference, TREC 1999, Gaithersburg, Maryland, USA, November 17-19, 1999. Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016. A deep architecture for semantic matching with multiple positional sentence representations. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2835–2841. Yeyi Wang, Li Deng, and Alex Acero. 2011. Semantic frame-based spoken language understanding. Spoken language understanding: systems for extracting semantic information from speech, pages 41–91. Jason D. Williams and Steve J. Young. 2007. Partially observable markov decision processes for spoken dialog systems. Comput. Speech Lang., 21(2):393– 422. Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, and Haifeng Wang. 2019. Proactive human-machine conversation with explicit conversation goal. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3794–3804, Florence, Italy. Association for Computational Linguistics. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505, Vancouver, Canada. Association for Computational Linguistics. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3351–3357. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5610–5617. Wei Xu and Alexander I. Rudnicky. 2000. Task-based dialog management using an agenda. In ANLPNAACL 2000 Workshop: Conversational Systems. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, SIGIR 2016, Pisa, Italy, July 17-21, 2016, pages 55–64. Lili Yao, Nanyun Peng, Ralph M. Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Planand-write: Towards better automatic storytelling. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7378–7385. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018. Modeling multiturn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3740–3752, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018a. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of the TwentySeventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4623–4629. Kun Zhou, Wayne Xin Zhao, Yutao Zhu, Ji-Rong Wen, and Jingsong Yu. 2020. Improving multi-turn response selection models with complementary lastutterance selection by instance weighting. CoRR, abs/2002.07397. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018b. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1118–1127, Melbourne, Australia. Association for Computational Linguistics. Yutao Zhu, Zhicheng Dou, Jian-Yun Nie, and Ji-Rong Wen. 2020. Reboost: a retrieval-boosted sequenceto-sequence model for neural response generation. Inf. Retr. 
J., 23(1):27–48.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8658–8679 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8658 Should All Cross-Lingual Embeddings Speak English? Antonios Anastasopoulos and Graham Neubig Language Technologies Institute, Carnegie Mellon University {aanastas,gneubig}@cs.cmu.edu Abstract Most of recent work in cross-lingual word embeddings is severely Anglocentric. The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, however, we challenge these practices. First, we show that the choice of hub language can significantly impact downstream lexicon induction and zero-shot POS tagging performance. Second, we both expand a standard Englishcentered evaluation dictionary collection to include all language pairs using triangulation, and create new dictionaries for under-represented languages.1 Evaluating established methods over all these language pairs sheds light into their suitability for aligning embeddings from distant languages and presents new challenges for the field. Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines, that extend to language pairs that do not include English. 1 Introduction Continuous vectors for representing words (embeddings) (Turian et al., 2010) have become ubiquitous in modern, neural NLP. Cross-lingual representations (Mikolov et al., 2013) additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI). BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging (Zhang et al., 2016), parsing (Ammar et al., 2016a), document classification (Klementiev et al., 2012), and machine translation (Irvine and Callison-Burch, 2013; Artetxe et al., 2018b; Lample et al., 2018). Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively). First, monolingual word embeddings are learned over 1Available at https://github.com/antonisa/ embeddings. large swaths of text. Such pre-trained word embeddings, such as the fastText Wikipedia vectors (Grave et al., 2018), are available for many languages and are widely used. Second, a mapping between the languages is learned in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision (Zou et al., 2013), under minimal supervision e.g. using only identical strings (Smith et al., 2017), or even in an unsupervised fashion (Zhang et al., 2017; Conneau et al., 2018). Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned (hereinafter “the hub"). We outline the details in Section 2. Despite all the recent progress in learning crosslingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric. Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one. We argue and empirically show, however, that English is a poor hub language choice. 
In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language). However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages. This Anglocentricity is even more evident at the evaluation stage. The lexica most commonly used for evaluation are the MUSE lexica (Conneau et al., 2018) which cover 45 languages, but with translations only from and into English. Alternative evaluation dictionaries are also very English- and European-centric: (Dinu and Baroni, 2014) report results on English–Italian, (Artetxe et al., 2017) on English–German and English–Finnish, (Zhang et al., 2017) on Spanish–English and Italian–English, and (Artetxe et al., 2018a) between English and Italian, German, Finnish, Spanish, and Turkish. We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems (Aronoff and Fudeman, 2011). These two factors allow for an 8659 overly easy evaluation setting which does not necessarily generalize to other language pairs. In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages. With this work, we attempt to address these shortcomings, providing the following contributions: • We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly different performance for iterative refinement methods that use a symbolic-based seed dictionary (e.g., by more than 10 percentage points for BWE over distant languages). We also show that often English is a suboptimal hub for MWE. • We identify some general guidelines for choosing a hub language which could lead to stronger performance; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees). For distant languages, multilingual systems should be preferred over bilingual ones if the languages share alphabets, otherwise a bilingual system based on monolingual similarity dictionaries is preferable. • We provide resources for training and evaluation on language pairs that do not include English. We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 4704 lexicons covering 50 languages (for a total of 4900 dictionaries, including the original English ones), and we present results on a subset of them. We also create new evaluation lexica for under-resourced, under-represented languages using Azerbaijani, Belarusian, and Galician as our test cases. Finally, we provide recipes for creating such dictionaries for any language pair with available parallel data. 2 Cross-Lingual Word Embeddings and Lexicon Induction Bilingual Word Embeddings In the supervised BWE setting of Mikolov et al. (2013), given two languages L = {l1, l2} and their pre-trained row-aligned embeddings X1, X2, respectively, a transformation matrix M is learned such that: M = arg min M∈Ω ∥X1 −MX2∥. 
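A minimal sketch of this supervised mapping step, assuming NumPy and row-aligned embedding matrices of shape [n_pairs, dim] (a transposed convention relative to the formula above); it includes both the unconstrained least-squares solution and the orthogonality-constrained (Procrustes) solution discussed next:

```python
import numpy as np

def procrustes_map(X_src, X_tgt):
    """Orthogonal mapping W such that X_src @ W ~ X_tgt, for row-aligned
    embedding matrices of shape [n_pairs, dim] (closed form via SVD)."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

def unconstrained_map(X_src, X_tgt):
    """Least-squares mapping without the orthogonality constraint."""
    W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)
    return W

# Toy example with a 300-dimensional seed dictionary of 5k word pairs
X_src = np.random.randn(5000, 300)
X_tgt = np.random.randn(5000, 300)
W = procrustes_map(X_src, X_tgt)
assert np.allclose(W @ W.T, np.eye(300), atol=1e-6)  # W is orthogonal
```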
The set Ωcan potentially impose a constraint over M, such as the very popular constraint of restricting it to be orthogonal (Xing et al., 2015). Previous work has empirically found that this simple formulation is competitive with other more complicated alternatives (Xing et al., 2015). The orthogonality assumption ensures that there exists a closed-form solution through Singular Value Decomposition (SVD) of X1XT 2 .2 Note that in this case only a single matrix M needs to be learned, because ∥X1 −M X2∥= M −1X1 −X2 , while at the same time a model that minimizes ∥X1 −MX2∥is as expressive as one minimizing ∥M1X1 −M2X2∥, with half the parameters. In the minimally supervised or even the unsupervised setting, Zhang et al. (2017) and Conneau et al. (2018) reframe the task as an adversarial game, with a generator aiming to produce a transformation that can fool a discriminator. However, the most popular methods follow an iterative refinement approach (Artetxe et al., 2017). Starting with a seed dictionary (e.g. from identical strings (Zhou et al., 2019) or numerals) an initial mapping is learned in the same manner as in the supervised setting. The initial mapping, in turn, is used to expand the seed dictionary with high confidence word translation pairs. The new dictionary is then used to learn a better mapping, and so forth the iterations continue until convergence. The same iterative approach is followed by Artetxe et al. (2018a), with one important difference that allows their model (VecMap) to handle language pairs with different alphabets: instead of identical strings, the seed dictionary is constructed based on the similarity of the monolingual similarity distributions over all words in the vocabulary.3 Multilingual Word Embeddings In a multilingual setting, the simplest approach is to use BWE and align all languages into a target language (the hub). In this case, for N languages L = {l1, l2, . . . , lN} on has to learn N −1 bilingual mappings (Ammar et al., 2016b). Rather than using a single hub space, Heyman et al. (2019) propose an incremental procedure that uses an Incremental Hub Space (IHS): each new language is included to the multilingual space by mapping it to all languages that have already been aligned (e.g. language l3 would be mapped to the aligned space of {l1, l2}). Alternatively, all mappings could be learned jointly, taking advantage of the inter-dependencies between any two language pairs. Importantly, though, there is no closed form solution for learning the joint mapping, hence a solution needs to be approximated with gradientbased methods. The main approaches are: • Multilingual adversarial training with pseudorandomized refinement (Chen and Cardie, 2018, MAT+MPSR): a generalization of the adversarial approach of Zhang et al. (2017); Conneau et al. (2018) to multiple languages, also combined with an iterative refinement procedure.4 • Unsupervised Multilingual Hyperalignment (Alaux et al., 2019, UMH): an approach 2We refer the reader to Mikolov et al. (2013) for details. 3We refer the reader to Artetxe et al. (2018a) for details. 4MAT+MPSR has the beneficial property of being as computationally efficient as learning O(N) mappings (instead of O(N2)). We refer the reader to Chen and Cardie (2018) for exact details. 8660 which maps all languages to a single hub space,5 but also enforces good alignments between all language pairs within this space. 
Even though the architecture and modeling approach of all MWE methods are different, they share the same conceptual traits: one of the language spaces remains invariant and all other languages are effectively mapped to it. In all cases, English is by default selected to be the hub. The only exception is the study of triplets alignments in (Alaux et al., 2019), where Spanish is used as the Spanish–French–Portuguese triplet hub. Lexicon Induction One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces. Specialized evaluation (and training) dictionaries have been created for multiple language pairs. Of these, the MUSE dictionaries (Conneau et al., 2018) are most often used, providing word translations between English (En) and 48 other high- to mid-resource languages, as well as on all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese). Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling (Conneau et al., 2018, CSLS) as the most commonly used in the literature. Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align). The retrieved pairs are compared to the gold standard and evaluated using precision at k (P@k, evaluating how often the correct translation is within the k retrieved nearest neighbors of the query). Throughout this work we report P@1, which is equivalent to accuracy; we provide P@5 and P@10 results in the Appendix. 3 New LI Evaluation Dictionaries The typically used evaluation dictionaries cover a narrow breadth of the possible language pairs, with the majority of them focusing in pairs with English (as with the MUSE or Dinu et al. (2015) dictionaries) or among high-resource European languages. Glavaš et al. (2019), for instance, highlighted Anglocentricity as an issue, creating and evaluating on 28 dictionaries between 8 languages (Croatian, English, Finnish, French, German, Italian, Russian, Turkish) based on Google Translate. In addition, Czarnowska et al. (2019) focused on the morphology dimension, creating morphologically complete dictionaries for 2 sets of 5 genetically related languages (Romance: French, Spanish, Italian, Portuguese, Catalan; and Slavic: Polish, Czech, Slovak, Russian, Ukrainian). In contrast to these two (very valuable!) works, our method for creating dictionaries 5Note that Alaux et al. (2019) use the term pivot to refer to what we refer to as the hub language. Pt: En: Cs: trabalho job work prácu praca práca práce pracovní Figure 1: Transitivity example (Portuguese →English →Czech). for low-resource languages (§3.1) leverages resources that are available for about 300 languages. In addition, we propose a simple triangulation process (§3.2), that makes it possible to create dictionaries for arbitrary language pairs, given that dictionaries into a pivot language (usually English) are available for both languages. 3.1 Low-Resource Language Dictionaries Our approach for constructing dictionaries is straightforward, inspired by phrase table extraction techniques from phrase-based MT (Koehn, 2009). This is an automatic process, and introduces some degree of noise. 
Rather than controlling this through manual inspection, which would be impossible for all language pairs, we rely on fairly simple heuristics for controlling the dictionaries’ quality. The first step is collecting publicly available parallel data between English and the low-resource language of interest. We use data from the TED (Qi et al., 2018), OpenSubtitles (Lison and Tiedemann, 2016), WikiMatrix (Schwenk et al., 2019), bible (Malaviya et al., 2017), and JW300 (Agi´c and Vuli´c, 2019) datasets.6 This results in 354k, 53k, and 623k English-to-X parallel sentences for Azerbaijani (Az), Belarusian (Be), and Galician (Gl) respectively.7 We align the parallel sentences using fast_align (Dyer et al., 2013), and extract symmetrized alignments using the gdfa heuristic (Koehn et al., 2005). In order to ensure that we do not extract highly domain-specific word pairs, we only use the TED, OpenSubtitles, and WikiMatrix parts for wordpair extraction. Also, in order to control for quality, we only extract word pairs if they appear in the dataset more than 5 times, and if the symmetrized alignment probability is higher than 30% in both directions. With this process, we end up with about 6k, 7k, and 38k word pairs for Az–En, Be–En, and Gl–En respectively. Following standard conventions, we sort the word pairs according to source-side frequency, and use the intermediate-frequency ones for evaluation, typically using the [5000–6500) rank boundaries. The same process can be followed for any language pair with a sufficient volume of parallel data (needed for training a reasonably accurate word alignment model).8 6Not all languages are available in all these datasets. 7The anglocentricity in this step is by necessity – it is hard to find a large volume of parallel data in a language pair excluding English. 8In fact, we can produce similar dictionaries for a large number of languages, as the combination of the recently cre8661 Greek Italian Bridged Greek–Italian Lexicon word tag word tag Match Greek Italian ειρηνικός M;NOM;SG pacifico M;SG M;SG ειρηνικός pacifico, pacifici, pacifica ειρηνική F;NOM;SG pacifici M;PL F;SG ειρηνική pacifica, pacifico, pacifici ειρηνικό Neut;NOM;SG pacifica F;SG SG ειρηνικό pacifica, pacifico, pacifici ειρηνικά Neut;NOM;PL PL ειρηνικά pacifici, pacifica, pacifico Table 1: Triangulation and filtering example on Greek–Italian. All words are valid translations of the English word ‘peaceful’. We also show filtered-out translations. 3.2 Dictionaries for all Language Pairs through Triangulation Our second method for creating new dictionaries is inspired by phrase table triangulation ideas from the pre-neural MT community (Wang et al., 2006; Levinboim and Chiang, 2015). The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt–En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words respectively. By utilizing the transitive property (which translation should exhibit) we can identify the set of 5 possible Cs translations for the Pt word trabalho. Following this simple triangulation approach, we create 4,704 new dictionaries over pairs between the 50 languages of the MUSE dictionaries.9 For consistency, we keep the same train and test splits as with MUSE, so that the source-side types are equal across all dictionaries with the same source language. 
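A minimal sketch of the triangulation step, assuming each pivot dictionary is given as a list of (source, target) word pairs; the data structures and function names are ours:

```python
from collections import defaultdict

def triangulate(src_to_en, en_to_tgt):
    """Compose two pivot dictionaries (src->En and En->tgt) into a src->tgt
    dictionary via the transitive property illustrated in Figure 1."""
    en_to_tgt_map = defaultdict(set)
    for en_word, tgt_word in en_to_tgt:
        en_to_tgt_map[en_word].add(tgt_word)

    bridged = defaultdict(set)
    for src_word, en_word in src_to_en:
        bridged[src_word] |= en_to_tgt_map[en_word]
    return {src: sorted(tgts) for src, tgts in bridged.items() if tgts}

# Toy example following Figure 1 (Pt -> En -> Cs)
pt_en = [("trabalho", "job"), ("trabalho", "work")]
en_cs = [("job", "práce"), ("job", "prácu"), ("work", "práce"), ("work", "pracovní")]
print(triangulate(pt_en, en_cs)["trabalho"])
```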
Triangulating through English (which is unavoidable, due to the relative paucity of non-English-centric dictionaries) is suboptimal – English is morphologically poor and lacks corresponding markings for gender, case, or other features that are explicitly marked in many languages. As a result, several inflected forms in morphologically-rich languages map to the same English form. Similarly, gendered nouns or adjectives in gendered languages map to English forms that lack gender information. For example, the MUSE Greek– English dictionary lists the word peaceful as the translation for all ειρηνικός, ειρηνική, ειρηνικό, ειρηνικά, which are the male, female, and neutral (singular and plural) inflections of the same adjective. Equivalently, the English–Italian dictionary translates peaceful into either pacifico, pacifici, or pacifica (male singular, male plural, and female singular, respectively; see Table 1). When translating from or into English lacking context, all of those are reasonable translations. When translating between Greek and Italian, though, one should at least take number into account (gramated JW300 and WikiMatrix datasets provide an average of more than 100k parallel sentences in 300 languages. Before publication, we plan to create these dictionaries and make them publicly available, along with the corresponding code. 9Available at https://github.com/antonisa/ embeddings. matical gender is a more complicated matter: it is not uncommon for word translations to be of different grammatical gender across languages). Hence, we devise a filtering method for removing blatant mistakes when triangulating morphologically rich languages. We rely on automatic morphological tagging which we can obtain for most of the MUSE languages, using the StanfordNLP toolkit (Qi et al., 2020).10 The morphological tagging uses the Universal Dependencies feature set (Nivre et al., 2016) making the tagging comparable across almost all languages. Our filtering technique iterates through the bridged dictionaries: for a given source word, if we find a target word with the exact same morphological analysis, we filter out all other translations with the same lemma but different tags. In the case of feature mismatch (for instance, Greek uses 2 numbers, 4 cases and 3 genders while Italian has 2 numbers, 2 genders, and no cases) or if we only find a partial tag match over a feature subset, we filter out translations with disagreeing tags. We ignore the grammatical gender and verb form features, as they are not directly comparable cross-lingually. Coming back to our Greek– Italian example, this means that for the form ειρηνικός we would only keep pacifico as a candidate translation (we show more examples in Table 1). Our filtering technique removes about 60.4% of the entries in 2964 of the 4900 dictionaries.11 Unsurprisingly, we find that bridged dictionaries between morphologically rich languages require a lot more filtering. For instance more than 80% of the entries of the Urdu-Greek dictionary get filtered out. On average, the languages with more filtered entries are Urdu (62.4%), Turkish (61.1%), and German (58.6%). On the other hand, much fewer entries are removed from dictionaries with languages like Dutch (36.2%) or English (38.1%). Naturally, this filtering approach is restricted to languages for which a morphological analyzer is available. Mitigating this limitation is beyond the scope of this work, although it is unfortunately a common issue. For example, Kementchedjhieva et al. 
(2019) manually corrected five dictionaries (between English and German, Danish, Bulgarian, Arabic, Hindi) but one needs to rely 10The toolkit has since been renamed to Stanza. See https: //stanfordnlp.github.io/stanfordnlp/. 11Due to the lack of morphological analysis tools, we were unable to filter dictionaries in the following 11 languages: aze, bel, ben, bos, lit, mkd, msa, sqi, tam, tha, tel. 8662 src Target Az Be Cs En Es Gl Pt Ru Sk Tr µbest µEn Az – 17.2En 35.1Es 35.7Es 48.0Tr 32.7Ru 41.5En 29.8Pt 31.7Cs 32.0Pt 33.7 31.7 Be 14.1Cs – 35.9Tr 29.9Pt 39.5En 25.8Es 34.4Es 41.1Gl 30.7Ru 20.4Pt 30.2 28.8 Cs 6.9 Es 9.3 Ru – 61.0Es 60.5En 27.9Pt 57.8En 45.9Pt 71.2En 35.8Sk 41.8 41.2 En 17.9Es 18.4Es 50.2Es – 77.5Ru 36.3Es 72.3Sk 43.3Pt 40.4Tr 41.9Pt 44.2 42.7 Es 12.1En 10.1Ru 47.4Pt 74.6Sk – 37.5Es 83.1Gl 41.9Tr 40.0Es 38.6Sk 42.8 41.4 Gl 5.5 En 3.6 Az 26.5Tr 43.2Es 60.8Tr – 52.9Cs 23.8Tr 26.8Cs 19.7Cs 29.2 27.7 Pt 5.8 Pt 8.6 Sk 47.2Gl 71.3En 88.1Pt 37.1Es – 38.0Es 38.7Es 38.1En 41.4 40.4 Ru 8.7 Es 12.8Az 50.3Gl 55.5Tr 54.8Cs 23.0Pt 52.4En – 45.5Tr 27.0Be 36.7 35.9 Sk 4.0 Be 10.9Ru 72.5Be 55.6Tr 53.9En 28.4En 52.0Es 44.0Gl – 28.5En 38.9 37.9 Tr 12.1Sk 9.0 Az 41.8Ru 51.1Cs 55.0En 18.4Tr 51.6En 34.6En 29.4Es – 33.7 33.0 µbest 9.7 11.1 45.2 53.1 59.8 29.7 55.3 38.0 39.4 31.3 37.3 µEn 9.1 9.9 43.3 51.0 59.3 28.2 54.9 36.5 37.7 30.8 36.0 Table 2: Lexicon Induction performance (measured with P@1) over 10 languages (90 pairs). In each cell, the superscript denotes the hub language that yields the best result for that language pair. µbest: average using the best hub language. µEn: average using the En as the hub. The lightly shaded cells are the language pairs where a bilingual VecMap system outperforms MAT+MSPR; in heavy shaded cells both MUSEs and VecMap outperform MAT+MSPR. on automated annotations in order to scale to all languages. Our method that uses automatically obtained morphological information combined with the guidelines proposed by Kementchedjhieva et al. (2019) (e.g. removing proper nouns from the evaluation set) scales easily to multiple languages, allowing us to create more than 4 thousand dictionaries. 4 Lexicon Induction Experiments The aim of our LI experiments is two-fold. First, the differences in LI performance show the importance of the hub language choice with respect to each evaluation pair. Second, as part of our call for moving beyond Anglo-centric evaluation, we also present LI results on several new language pairs using our triangulated dictionaries. 4.1 Methods and Setup We train and evaluate all models starting with pretrained Wikipedia FastText embeddings for all languages (Grave et al., 2018). We focus on the minimally supervised scenario which only uses similar character strings between any languages for supervision in order to mirror the hard, realistic scenario of not having annotated training dictionaries between the languages. We learn MWE with the MAT+MPSR method using the publicly available code,12 aligning several language subsets varying the hub language. We decided against comparing to the incremental hub (IHS) method of Heyman et al. (2019), because the order in which the languages are added is an additional hyperparameter that would explode the experimental space.13 We also do not compare to UMH, as we consider it conceptually similar to MAT+MPSR and no code is publicly available. For BWE 12https://github.com/ccsasuke/umwe 13We refer the reader to Table 2 from Heyman et al. 
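Since the experiments below report P@1 under CSLS retrieval (Section 2), here is a minimal sketch of that retrieval and scoring step, assuming mapped and unit-normalized embedding matrices in NumPy, with k = 10 nearest neighbors for the density correction (a common choice; the exact implementation details of the evaluated systems may differ):

```python
import numpy as np

def csls_retrieve(src_emb, tgt_emb, k=10):
    """For each (mapped, unit-normalized) source vector, return the index of the
    target word with the highest CSLS score:
    CSLS(x, y) = 2 cos(x, y) - r_tgt(x) - r_src(y),
    where r_* are mean cosine similarities to the k nearest neighbors."""
    sims = src_emb @ tgt_emb.T                            # [n_src, n_tgt] cosine similarities
    r_tgt = np.sort(sims, axis=1)[:, -k:].mean(axis=1)    # density around each source query
    r_src = np.sort(sims, axis=0)[-k:, :].mean(axis=0)    # density around each target word
    csls = 2 * sims - r_tgt[:, None] - r_src[None, :]
    return csls.argmax(axis=1)

def precision_at_1(pred_idx, gold_sets):
    """gold_sets[i]: set of acceptable target indices for source query i."""
    return float(np.mean([p in g for p, g in zip(pred_idx, gold_sets)]))
```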
(2019) which compares to MAT+MPSR, and to Table 7 of their appendix which shows the dramatic influence of language order. experiments, we use MUSEs14 (MUSE, semisupervised) and VecMap15 systems, and we additionally compare them to MAT+MPSR for completeness. We compare the statistical significance of the performance difference of two systems using paired bootstrap resampling (Koehn, 2004). Generally, a difference of 0.4–0.5 percentage points evaluated over our lexica is significant with p < 0.05. Experiment 1 We first focus on 10 languages of varying morphological complexity and data availability (which affects the quality of the pre-trained word embeddings): Azerbaijani (Az), Belarusian (Be), Czech (Cs), English (En), Galician (Gl), Portuguese (Pt), Russian (Ru), Slovak (Sk), Spanish (Es), and Turkish (Tr). The choice of these languages additionally ensures that for our three low-resource languages (Az, Be, Gl) we include at least one related higher-resource language (Tr, Ru, Pt/Es respectively), allowing for comparative analysis. Table 2 summarizes the best post-hoc performing systems for this experiment. Experiment 2 In the second setting, we use a set of 7 more distant languages: English, French (Fr), Hindi (Hi), Korean (Ko), Russian, Swedish (Sv), and Ukrainian (Uk). This language subset has large variance in terms of typology and alphabet. The best performing systems are presented in Table 3. 4.2 Analysis and Takeaways MWE: English is rarely the best hub language In multilingual settings, we conclude that the standard practice of choosing English as the hub language is sub-optimal. Out of the 90 evaluation pairs from our 10language experiment (Table 2) the best hub language is English in only 17 instances (less than 20% of the 14https://github.com/facebookresearch/MUSE 15https://github.com/artetxem/vecmap 8663 Source Target En Fr Hi Ko Ru Sv Uk µbest µEn En – 76.3Ru 23.9Uk 10.4Fr 42.0Uk 59.0Hi 28.3Ru 40.0 38.5 Fr 74.0Uk – 19.0Ru 7.5Sv 40.8Ru 51.8En 28.8En 37.0 36.4 Hi 31.4Fr 26.9Ru – 2.1En 14.6Uk 17.3En 10.5Fr 17.1 16.2 Ko 17.7Sv 13.6Sv 2.4Fr – 7.9En 7.2Ru 3.6Fr 8.8 7.9 Ru 53.4Ko 51.7Ko 15.3Uk 5.2En – 41.3Uk 56.3Ko 37.2 36.2 Sv 52.7Uk 48.2Ko 17.7Ru 5.1Uk 33.2Fr – 24.1Ru 30.2 29.2 Uk 41.4Ru 44.0Hi 14.4Sv 2.6En 59.7Hi 36.8Ko – 33.2 32.4 µbest 45.1 43.5 15.5 5.5 33.0 35.6 25.3 29.1 µEn 42.7 42.5 14.5 5.1 32.4 34.9 24.5 28.1 Table 3: Lexicon Induction performance (P@1) over MWEs from 7 typologically distant languages (42 pairs). The lightly shaded cells are the only language pairs where a bilingual MUSE system outperforms MAT+MSPR; in heavy shaded cells a bilingual VecMap (but not MUSEs) system outperform MAT+MSPR. time). In fact, the average performance (over all evaluation pairs) when using En as the hub (denoted as µEn) is 1.3 percentage points worse than the optimal (µbest). In our distant-languages experiment (Table 3) English is the best choice only for 7 of the 42 evaluation pairs (again, less than 20% of the time). As before, using En as the hub leads to an average drop of one percentage point in performance aggregated over all pairs, compared to the averages of the optimal selection. The rest of this section attempts to provide an explanation for these differences. Expected gain for a hub language choice As vividly outlined by the superscript annotations in Tables 2 and 3, there is not a single hub language that stands out as the best one. Interestingly, all languages, across both experiments, are the best hub language for some evaluation language pair. 
For example, in our 10-languages experiment, Es is the best choice for about 20% of the evaluation pairs, Tr and En are the best for about 17% each, while Gl and Be are the best for only 5 and 3 language pairs respectively. Clearly, not all languages are equally suited to be the hub language for many language pairs. Hence, it would be interesting to quantify how much better one could do by selecting the best hub language compared to a random choice. In order to achieve this, we define the expected gain Gl of using language l as follows. Assume that we are interested in mapping N languages into the shared space and pm l is the accuracy16 over a specified evaluation pair m when using language l as the hub. The random choice between N languages will have an expected accuracy equal to the average accuracy when using all languages as hub: E[pm] = P l pm l N . The gain for that evaluation dataset m when using language l as hub, then, is gm l = pm l −E[pm]. Now, for a collection of M evaluation pairs we simply average their gains, in order to obtain the expected gain for using 16This could be substituted with any evaluation metric. EN FR HI KO RU SV UK 0 1 2 0.9 1.1 1.2 1.3 1.4 1.7 1.1 0.1 0 −0.3 0 0.4 −0.2 0.1 Gl AZ BE CS EN ES GL PT RU SK TR 0 1 2 1.1 1 1.2 1.5 1.5 0.9 1.5 1.5 1 1.5 −0.4 0 0.2 0.2 0.4 0.4 0.2 0 −0.2 0 Gl when best overall Figure 2: Expected gain Gl for the MWE experiments. language l as the hub: Gl = E[gl] = P m gm l M . The results of this computation for both sets of experiments are presented in Figure 2. The bars marked ‘overall’ match our above definition, as they present the expected gain computed over all evaluation language pairs. For good measure, we also present the average gain per language aggregated over the evaluation pairs where that language was indeed the best hub language (‘when best’ bars). Perhaps unsurprisingly, Az seems to be the worst hub language choice among the 10 languages of the first experiment, with an expected loss (negative gain) of -0.4. This can be attributed to how distant Az is from all other languages, as well as to the fact that the Az pre-trained embeddings are of lower quality compared to all other languages (as the Az Wikipedia dataset is significantly smaller than the others). Similarly, Hi and Sv show expected loss for our second experiment. Note that English is not a bad hub choice per se – it exhibits a positive expected gain in both sets of experiments. However, there are languages with larger expected gains, like Es and Gl in the 10-languages experiment that have a twice-as-large expected gain, while Ru has a 4 times larger expected gain in the distant8664 languages experiment. Of course, the language subset composition of these experiments could possibly impact those numbers. For example, there are three very related languages (Es, Gl, Pt) in the 10 languages set, which might boost the expected gain for that subset; however, the trends stand even if we compute the expected gain over a subset of the evaluation pairs, removing all pairs that include Gl or Pt. For example, after removing all Gl results, Es has a slightly lower expected gain of 0.32, but is still the language with the largest expected gain. Identifying the best hub language for a given evaluation set The next step is attempting to identify potential characteristics that will allow us make educated decisions with regards to choosing the hub language, given a specific evaluation set. 
Identifying the best hub language for a given evaluation set  The next step is attempting to identify potential characteristics that will allow us to make educated decisions with regards to choosing the hub language, given a specific evaluation set. For example, should one choose a language typologically similar to the evaluation source, target, or both? Or should one use the source or the target of the evaluation set as the hub?

Our first finding is that the best performing hub language will very likely be neither the source nor the target of the evaluation set. In our 10-languages experiment, a language different from the source and the target yields the best accuracy for over 93% of the evaluation sets, with the difference being statistically significant in more than half of those cases. Similarly, in the distant-languages experiment, there is only a single instance where the best performing hub language is either the source or the target evaluation language (for Fr–Ru), and for the other 97% of cases the best option is a third language. This surprising pattern contradicts the mathematical intuition discussed in Section 2, according to which a model learning a single mapping (keeping the other word embedding space fixed) is as expressive as a model that learns two mappings, one for each of the languages. Instead, we find that in almost all cases, learning mappings for both language spaces of interest (hence rotating both spaces) leads to better BLI performance compared to when one of the spaces is fixed.

Our second finding is that the LI performance correlates with measures of distance between languages and language spaces. The typological distance (dgen) between two languages can be approximated through their genealogical distance over hypothesized language family trees, which we obtain from the URIEL typological database (Littell et al., 2017). Also, Patra et al. (2019) recently motivated the use of the Gromov-Hausdorff (GH) distance as an a priori estimation of how well two language embedding spaces can be aligned under an isometric transformation (an assumption most methods rely on). The authors also note that vector space GH distance correlates with typological language distance. We find that there is a positive correlation between LI performance and the genealogical distances between the source–hub and target–hub languages. The average (over all evaluation pairs) Pearson's correlation coefficient between P@1 and dgen is 0.49 for the distant-languages experiment and 0.38 for the 10-languages one. A similar positive correlation holds between performance and the sum of the GH distances between the source–hub and target–hub spaces. On our distant-languages experiment, the correlation coefficient between P@1 and GH is 0.45, while it is slightly lower (0.34) for our 10-languages experiment. Figure 3 shows two high-correlation examples, namely Gl–En and En–Hi.

[Figure 3: The Lexicon Induction accuracy generally correlates positively with the GH distance of the source and target language vector spaces to the hub language. Left: results on Gl–En (ρ = 0.73; x-axis GH_hub,Gl + GH_hub,En, y-axis P@1). Right: results on En–Hi (ρ = 0.87; x-axis GH_hub,En + GH_hub,Hi, y-axis P@1).]

BWE: The hub matters for distant languages  MUSEs implements a provably direction-independent closed-form solution of the Procrustes problem, and we confirm empirically that the hub choice does not affect the outcome (we provide complete results on MUSEs in Table 7 in the Appendix). Similarly, because VecMap uses symmetric re-weighting and produces bidirectional dictionaries at its final step, the results are not dependent on the training direction.
However, obtaining good performance with such methods requires the orthogonality assumption to hold, which for distant languages is rarely the case (Patra et al., 2019). In fact, we find that the gradient-based MAT+MPSR method in a bilingual setting over typologically distant languages exhibits better performance than MUSEs or VecMap. Across Table 2, in only a handful of examples (shaded cells) do VecMap or MUSEs systems outperform MAT+MPSR for BWE (with the majority being among En, Es, Gl, and Pt, all related high-resource languages). In the 7 distant languages setting, however, the results are different: VecMap outperforms MUSEs and the multilingual MAT+MPSR in the vast majority of the language pairs. The difference is more stark when the languages of the pair use completely different alphabets, where the same-character strings heuristic for bootstrapping the initial dictionary mapping fails. Instead, the monolingual similarity approach employed by VecMap is definitely more appropriate for settings such as those posed by languages like Korean or Hindi. This highlights the importance of actually evaluating and reporting results on such language pairs. On the one hand, we find that when aligning distant 8665 Results on Az–Cs Average Bilingual Az Cs 25.8 with hub: 22.7 29.1 Trilingual Az, Cs, +hub: Be En Es Gl 28.2 21.6 28.5 31.8 23.0 Pt Ru Sk Tr 29.6 27.4 30.4 32.9 Trilingual Az, hub:Cs, +extra: En Es Pt Ru Tr 30.8 30.1 30.1 33.2 27.1 33.7 Multilingual (10 languages) Az Be Cs En Es 33.9 33.7 34.0 32.3 34.5 35.1 Gl Pt Ru Sk Tr 34.0 34.8 34.5 32.9 33.7 Results on Ru–Uk Average Bilingual Ru Uk 57.5 with hub: 58.0 57.0 Trilingual Be, Ru, Uk with hub: Be Ru Uk 58.8 59.2 58.9 58.4 Trilingual Ru, Uk, +hub: Az Cs En Es Fr Hi Tr 57.8 57.4 58.5 58.4 58.3 58.0 57.0 57.2 Multilingual Be, Ru, Uk, +hub: Cs En Es Gl Ko Pt Sv 58.1 58.0 58.1 58.5 58.8 57.0 58.3 58.2 Multilingual Ru, Uk, En, Fr, Hi, Ko, Sv, with hub: En Fr Hi Ko Ru Sv Uk 55.6 55.3 56.1 55.8 56.3 55.3 55.3 54.9 Table 4: Comparison of bilingual, trilingual, and multilingual systems for distant (left) and related (right) languages. Multilinguality boosts performance significantly on distant languages. Test Hub Test Hub src trg src trg Az–Cs 22.7 29.1 Gl–Pt 53.5 53.6 Az–En 13.2 20.7 Pt–Gl 39.0 36.7 Az–Tr 30.1 30.1 Uk–Ru 61.6 61.8 Table 5: The hub is important for BWE between distant languages with MAT+MPSR. languages with MAT+MPSR, the difference between hub choices can be significant – in Az–En, for instance, using En as the hub leads to more than 7 percentage points difference compared to using Az. We show some examples in Table 5. On the other hand, when aligning typologically similar languages, the difference is less pronounced. For example, we obtain practically similar performance for Gl–Pt, Az–Tr, or Uk–Ru when using either the source or the target language as the hub. Note, though, that non-negligible differences could still occur, as in the case of Pt–Gl. In most cases, it is the case that the higher-resourced language is a better hub than the lower-resourced one, especially when the number of resources differ significantly (as in the case of Az and Be against any other language). Since BWE settings are not our main focus, we leave an extensive analysis of this observation for future work. Bi-, tri-, and multilingual systems This part of our analysis compares bilingual, trilingual, and multilingual systems, with a focus on the under-represented languages. 
Through multiple experiments (complete evaluations are listed in the Appendix) we reach two main conclusions. On one hand, when evaluating on typologically distant languages, one should use as many languages as possible. In Table 4 we present one such example with results on Az–Cs under various settings. On the other hand, when multiple related languages Transfer from En Transfer from Pt Hub Es Pt Gl Hub Es Gl En 38.7 21.8 19.4 En 48.4 32.9 Es 26.5 16.1 28.5† Es 41.4 25.5† Pt 28.1 25.7 15.6 Pt 44.3† 36.5 Gl 35.4 22.8 23.1 Gl 48.1 23.8 Be 35.6 30.5 13.2 Ru 28.6† 30.6 18.2 †: best train-test hub Sk 24.2 30.2† 14.6 for LI. Table 6: The choice of hub can significantly affect downstream zero-shot POS tagging accuracy. are available, one can achieve higher performance with multilingual systems containing all related languages and one more hub language, rather than learning diverse multilingual mappings using more languages. We confirm the latter observation with experiments on the Slavic (Be, Ru, Uk) and Iberian (Es, Gl, Pt) clusters, and present an example (Ru–Uk) in Table 4. 5 Downstream Task Experiments Differences in BLI performance do not necessarily translate to differences in other downstream tasks that use the aligned embeddings, so Glavaš et al. (2019) advocate for actual evaluation on such tasks. We extend our analysis to an example downstream task of zero-shot POS tagging using the aligned embeddings for select language pairs. We show that indeed the choice of the hub language can have dramatic impact. Using Universal Dependencies data (Nivre et al., 2016) we train simple bi-LSTM POS taggers on En and Pt using the respective embeddings produced from each MAT+MPSR run, and evaluate the zero-shot performance on Gl and 8666 Es.17 Although all taggers achieve consistent accuracies > 95% on English and Portuguese regardless of the original En or Pt embeddings, the zero-shot performance on the test languages, as shown in Table 6, varies widely. For instance, using the embeddings produced from using Pt as a hub, we obtain the highest zero-shot accuracy on Gl (36.5%), while using the ones from the Gl hub lead to significantly worse performance (23.8%). It should be noted that the best hub for POS-tagging does not always coincide with the best hub for LI, e.g. the best LI hub for Pt–Gl is Es, which leads to 11 percentage points worse Gl POS tagging performance than the best system. In fact, for the language pairs that we studied we observe no correlation between the two tasks performance as we vary the hub (with an average Spearman’s rank correlation ρ = 0.08). 6 Conclusion With this work we challenge the standard practice in learning cross-lingual word embeddings. We empirically show that the choice of the hub language is an important parameter that affects lexicon induction performance in both bilingual (between distant languages) and multilingual settings. More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs. Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric crosslingual word embeddings. The problem of identifying the best hub language, despite our analysis based on the use of typological distance, remains largely unsolved. In the future, we will investigate a hub language ranking/selection model a la Lin et al. (2019). 
Acknowledgements The authors are grateful to the anonymous reviewers for their exceptionally constructive and insightful comments, and to Gabriela Weigel for her invaluable help with editing and proofreading the paper. This material is based upon work generously supported by the National Science Foundation under grant 1761548. References Željko Agi´c and Ivan Vuli´c. 2019. Jw300: A widecoverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204– 3210. Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2019. Unsupervised hyperalignment for multilingual word embeddings. In Proceedings of the 17Note that our goal is not to achieve SOTA in zero-shot POS-tagging, but to show that embeddings resulting from different hub choices have different qualities. International Conference on Learning Representations. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016a. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016b. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Mark Aronoffand Kirsten Fudeman. 2011. What is morphology?, volume 8. John Wiley & Sons. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261–270. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the Sixth International Conference on Learning Representations. Paula Czarnowska, Sebastian Ruder, Edouard Grave, Ryan Cotterell, and Ann Copestake. 2019. Don’t forget the long tail! a comprehensive analysis of morphological generalization in bilingual lexicon induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 974–983, Hong Kong, China. Association for Computational Linguistics. Georgiana Dinu and Marco Baroni. 2014. How to make words with vectors: Phrase generation in distributional semantics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 624–633. 8667 Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. 
In Proceedings of the International Conference on Learning Representations, workshop track. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648. Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710–721, Florence, Italy. Association for Computational Linguistics. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). Geert Heyman, Bregt Verreet, Ivan Vuli´c, and MarieFrancine Moens. 2019. Learning unsupervised multilingual word embeddings with incremental multilingual hubs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1890–1902, Minneapolis, Minnesota. Association for Computational Linguistics. Ann Irvine and Chris Callison-Burch. 2013. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the eighth workshop on statistical machine translation, pages 262–270. Yova Kementchedjhieva, Mareike Hartmann, and Anders Søgaard. 2019. Lost in evaluation: Misleading benchmarks for bilingual dictionary induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012, pages 1459–1474. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In International Workshop on Spoken Language Translation (IWSLT) 2005. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrasebased & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Tomer Levinboim and David Chiang. 2015. Multi-task word alignment triangulation for low-resource languages. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1221–1226, Denver, Colorado. Association for Computational Linguistics. Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics. Pierre Lison and Jörg Tiedemann. 2016. Opensubtitles2015: Extracting large parallel corpora from movie and tv subtitles. In International Conference on Language Resources and Evaluation. Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8–14. Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv:1309.4168. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1659–1666. Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In The 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy. 8668 Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. arXiv:2003.07082. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), New Orleans, USA. Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. arXiv:1907.05791. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the Fifth International Conference on Learning Representations. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394, Uppsala, Sweden. Association for Computational Linguistics. Haifeng Wang, Hua Wu, and Zhanyi Liu. 2006. Word alignment for languages with scarce resources using bilingual corpora of other language pairs. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 874–881, Sydney, Australia. Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. 
Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag–multilingual pos tagging via coarse mapping between embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1307–1317. Chunting Zhou, Xuezhe Ma, Di Wang, and Graham Neubig. 2019. Density matching for bilingual word embedding. In Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), Minneapolis, USA. Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398. 8669 A Does evaluation directionality matter? We also explored whether there are significant differences between the evaluated quality of aligned spaces, when computed on both directions (src–trg and trg–src). We find that the evaluation direction indeed matters a lot, when the languages of the evaluation pair are very distant, in terms of morphological complexity and data availability (which affects the quality of the original embeddings). A prominent example, from our Europeanlanguages experiment, are evaluation pairs involving Az or Be. When evaluating on the Az–XX and Be–XX dictionaries, the word translation P@1 is more than 20 percentage points higher than when evaluating on the opposite direction (XX-Az or XX-Be). For example, Es–Az has a mere P@1 of 9.9, while Az–Es achieves a P@1 of 44.9. This observation holds even between very related languages (cf. Ru–Be: 12.8, Be–Ru: 41.1 and Tr–Az: 8.4, Az–Tr: 32.0), which supports our hypothesis that this difference is also due to the quality of the pre-trained embeddings. It is important to note that such directionality differences are not observed when evaluating distant pairs with presumably high-quality pre-trained embeddings e.g. Tr–Sk or Tr–Es; the P@1 for both directions is very close. B Complete results for all experiments Here we provide complete evaluation results for our multilingual experiments. Table 7 presents the P@1 of the bilingual experiments using MUSE, and Table 8 presents accuracy using VecMap. Tables 9–14 present P@1, P@5, and P@10 respectively, for the experiment on the 10 European languages. Similarly, results on the distant languages experiment are shown in Tables 15, 16, and 17. 
8670 Table 7: BWE results (P@1) with MUSE Source Target Az Be Cs En Es Gl Pt Ru Sk Tr Az – 4.8 21.4 23.6 32.6 13.6 26.7 10.4 15.0 31.8 Be 4.0 – 26.1 3.8 12.3 9.3 11.3 42.0 23.1 2.9 Cs 2.6 5.4 – 57.1 55.5 11.9 52.3 44.7 71.2 31.6 En 12.2 2.5 47.3 – 79.3 32.0 72.9 39.7 34.3 40.6 Es 7.8 2.4 45.0 76.7 – 37.1 83.4 38.9 34.3 38.2 Gl 2.7 1.8 14.0 38.5 61.2 – 53.3 11.4 12.9 8.5 Pt 2.9 2.3 44.9 72.2 88.7 36.3 – 33.7 33.7 34.6 Ru 1.7 12.0 48.6 50.2 49.4 6.6 46.8 – 44.6 21.1 Sk 0.3 5.2 71.8 48.0 46.4 9.3 44.4 43.2 – 21.2 Tr 10.8 0.3 35.8 48.0 50.9 3.5 45.9 26.9 20.3 – Source Target En Fr Hi Ko Ru Sv Uk En – 80.3 17.9 9.5 39.7 60.0 25.9 Fr 76.6 – 11.9 5.1 38.0 52.4 26.8 Hi 24.2 17.0 – 0.4 3.1 3.3 2.3 Ko 12.4 7.1 0.4 – 2.5 2.2 0.6 Ru 50.2 47.3 3.2 1.6 – 35.8 58.8 Sv 53.3 47.8 5.2 2.3 27.8 – 19.9 Uk 37.4 40.3 4.1 0.3 60.7 30.2 – Table 8: BWE results (P@1) with VecMap Source Target Az Be Cs En Es Gl Pt Ru Sk Tr Az – 15.86 32.43 32.38 37.81 28.48 37.29 26.58 29.38 28.71 Be 14.41 – 35.31 32.74 43.67 30.56 36.58 43.49 30.0 20.77 Cs 6.78 8.65 – 57.45 56.75 35.66 54.09 44.59 73.49 34.75 En 20.12 18.06 46.41 – 69.91 40.83 63.5 40.13 40.19 37.7 Es 11.66 8.9 45.09 69.49 – 39.37 81.19 40.52 40.7 39.89 Gl 5.34 2.44 29.14 46.11 58.44 – 51.64 26.57 28.53 22.47 Pt 6.72 6.97 43.48 66.21 85.68 41.17 – 38.29 39.81 36.61 Ru 8.06 10.33 52.43 59.03 59.29 29.87 55.55 – 49.93 27.73 Sk 2.92 9.26 70.16 56.73 52.35 36.62 50.96 45.47 – 31.23 Tr 14.2 9.74 42.37 45.51 50.21 28.06 49.11 32.33 34.57 – Source Target En Fr Hi Ko Ru Sv Uk En – 69.82 35.0 19.2 40.56 56.49 23.63 Fr 68.44 – 28.27 15.53 38.18 49.71 26.95 Hi 44.61 38.52 – 14.01 20.39 26.26 14.72 Ko 32.69 18.32 12.93 – 11.72 18.45 7.21 Ru 59.24 55.18 21.65 10.65 – 47.58 55.12 Sv 51.94 46.92 27.46 12.66 34.29 – 26.96 Uk 42.61 47.82 17.92 5.21 57.64 43.23 – 8671 Table 9: All results from the European-languages MWE experiment: P@1 (part 1). 
Test Hub language µ Az Be Cs En Es Gl Pt Ru Sk Tr Az–Be 13.7 12.6 14.2 17.2 16.4 13.9 15.0 15.6 14.5 15.8 14.9 Az–Cs 33.7 34.0 32.3 34.5 35.1 34.0 34.8 34.5 32.9 33.7 33.9 Az–En 31.1 34.7 32.8 32.6 35.7 34.2 33.6 33.6 34.0 33.2 33.5 Az–Es 42.7 46.6 45.2 46.1 44.9 44.4 44.9 43.3 46.1 48.0 45.2 Az–Gl 25.9 27.2 29.0 26.5 29.0 24.7 27.2 32.7 31.5 25.9 28.0 Az–Pt 37.5 41.5 39.3 41.5 39.8 39.0 39.8 41.5 38.5 40.0 39.8 Az–Ru 27.9 27.1 27.1 27.4 27.7 29.0 29.8 26.3 26.3 28.5 27.7 Az–Sk 28.8 30.1 31.7 29.1 30.4 30.4 28.8 28.5 29.5 30.4 29.8 Az–Tr 29.8 30.8 32.0 30.1 31.3 30.8 32.0 31.1 32.0 31.8 31.2 Be–Az 10.4 13.3 14.1 13.0 11.9 12.7 12.4 13.0 13.3 13.0 12.7 Be–Cs 30.5 31.6 33.3 33.0 30.8 31.6 32.5 32.2 33.0 35.9 32.5 Be–En 24.8 26.5 27.8 27.8 28.2 24.8 29.9 28.2 26.5 25.6 27.0 Be–Es 36.4 38.1 36.4 39.5 35.5 38.1 39.0 37.0 36.1 34.4 37.0 Be–Gl 24.4 24.4 22.9 24.9 25.8 22.6 24.9 23.5 22.6 24.4 24.0 Be–Pt 33.2 33.2 32.7 33.7 34.4 31.7 33.9 31.7 31.9 31.4 32.8 Be–Ru 40.9 40.9 40.6 40.3 40.0 41.1 39.1 38.9 39.7 40.0 40.1 Be–Sk 30.1 27.7 30.7 27.4 28.6 29.2 28.9 30.7 27.7 27.4 28.8 Be–Tr 17.7 17.2 18.9 19.9 17.4 18.9 20.4 18.7 16.9 18.4 18.5 Cs–Az 3.5 4.6 4.9 6.0 6.9 4.9 3.7 4.9 4.0 6.0 4.9 Cs–Be 8.6 7.8 8.6 8.6 8.8 7.8 8.8 9.3 9.3 8.6 8.6 Cs–En 59.7 60.5 59.4 59.2 61.0 60.4 60.1 59.7 60.2 58.8 59.9 Cs–Es 59.0 59.1 57.5 60.5 59.2 58.7 58.9 59.6 59.1 57.6 58.9 Cs–Gl 27.1 26.9 27.1 27.6 27.0 21.4 27.9 27.1 26.5 26.1 26.5 Cs–Pt 56.9 55.6 55.4 57.8 55.5 56.9 55.6 57.3 56.1 54.1 56.1 Cs–Ru 44.2 45.5 45.5 45.0 45.5 45.3 45.9 45.0 45.2 45.9 45.3 Cs–Sk 69.8 69.8 70.2 71.2 70.6 70.2 70.4 69.7 68.4 70.2 70.0 Cs–Tr 35.3 35.2 34.6 35.1 34.7 34.7 35.1 35.0 35.8 34.2 35.0 En–Az 15.8 17.7 16.6 17.5 17.9 16.9 17.5 16.1 16.6 17.2 17.0 En–Be 16.4 15.1 17.6 14.9 18.4 17.4 15.6 17.1 15.9 16.4 16.5 En–Cs 49.2 49.0 47.6 47.4 50.2 49.8 50.1 48.3 48.8 49.3 49.0 En–Es 76.3 77.5 77.2 77.0 76.8 76.5 76.6 77.5 77.3 76.6 76.9 En–Gl 35.0 35.8 36.0 35.2 36.3 31.9 35.9 36.2 35.3 35.0 35.3 En–Pt 71.3 71.8 71.3 72.1 71.5 72.0 71.0 71.5 72.3 71.3 71.6 En–Ru 42.5 43.3 42.7 40.8 43.1 43.3 43.3 41.3 41.4 42.8 42.4 En–Sk 38.7 39.6 40.2 38.0 40.4 39.3 38.5 38.6 36.8 40.4 39.0 En–Tr 40.5 41.7 41.3 41.6 39.4 40.9 41.9 41.0 41.3 40.9 41.0 Es–Az 8.4 10.8 9.0 12.1 10.5 10.5 10.8 9.6 11.8 11.8 10.5 Es–Be 9.9 7.2 8.5 9.3 7.5 9.9 9.9 10.1 9.1 8.8 9.0 Es–Cs 45.3 46.0 44.2 43.4 45.8 45.5 47.4 46.3 45.4 44.7 45.4 Es–En 73.0 74.5 73.8 73.2 74.0 74.1 73.1 73.5 74.6 73.6 73.7 Es–Gl 37.1 37.0 37.1 36.9 37.5 33.7 36.8 37.0 36.8 36.7 36.7 Es–Pt 82.1 82.9 82.7 83.0 83.1 83.1 82.5 83.0 82.9 83.0 82.8 Es–Ru 41.4 41.5 41.2 39.4 41.3 41.9 40.9 40.3 40.2 41.9 41.0 Es–Sk 37.0 39.2 38.8 37.4 40.0 39.2 39.5 39.5 35.2 38.8 38.5 Es–Tr 37.5 38.0 37.7 38.2 37.6 37.8 38.4 37.8 38.6 37.9 38.0 8672 Table 10: All results from the European-languages MWE experiment: P@1 (part 2). 
Test Hub language µ Az Be Cs En Es Gl Pt Ru Sk Tr Gl–Az 4.0 4.6 4.3 5.5 5.0 4.1 5.2 4.7 4.8 5.0 4.7 Gl–Be 3.6 3.0 2.4 3.0 3.0 2.4 3.0 2.4 1.2 3.0 2.7 Gl–Cs 23.2 25.7 25.0 23.8 26.5 23.0 25.6 25.4 25.6 26.5 25.0 Gl–En 40.3 41.8 41.9 39.6 43.2 40.8 41.5 41.9 41.6 42.1 41.5 Gl–Es 60.0 60.5 60.1 59.9 60.4 59.0 60.0 60.3 59.6 60.8 60.1 Gl–Pt 52.5 52.5 52.9 52.0 52.0 50.4 52.5 51.9 52.1 52.0 52.1 Gl–Ru 22.5 22.7 22.9 21.7 23.3 21.9 23.7 22.7 22.5 23.8 22.8 Gl–Sk 26.0 26.3 26.8 25.6 26.4 23.4 25.5 25.1 23.2 26.4 25.5 Gl–Tr 18.5 19.3 19.7 18.6 17.8 18.3 18.9 19.2 19.4 17.6 18.7 Pt–Az 3.8 4.7 5.8 5.0 5.0 3.2 5.8 5.0 5.5 4.7 4.8 Pt–Be 7.3 5.3 7.3 7.3 6.1 7.1 6.8 6.1 8.6 7.1 6.9 Pt–Cs 45.5 47.0 46.3 45.0 45.5 47.2 45.5 46.7 46.5 45.6 46.1 Pt–En 69.9 70.9 70.2 71.3 71.1 70.5 70.6 71.3 70.6 70.8 70.7 Pt–Es 87.4 88.1 87.7 87.6 88.0 87.4 88.1 87.8 87.6 88.1 87.8 Pt–Gl 35.7 36.9 36.3 36.3 37.1 32.7 36.0 35.9 35.2 36.4 35.8 Pt–Ru 37.4 37.7 36.4 36.5 38.0 38.0 36.2 37.0 37.1 37.4 37.2 Pt–Sk 37.6 37.0 37.3 36.7 38.7 37.7 38.3 37.9 33.6 38.0 37.3 Pt–Tr 36.5 37.4 37.2 38.1 35.9 36.4 35.5 37.2 36.2 36.3 36.7 Ru–Az 5.0 6.4 6.2 7.8 8.7 7.3 7.5 7.3 6.7 7.5 7.0 Ru–Be 12.8 9.9 10.7 11.5 11.2 11.0 11.5 12.3 11.0 11.8 11.4 Ru–Cs 49.2 50.0 49.2 50.1 49.7 50.3 50.3 49.8 50.1 50.1 49.9 Ru–En 53.6 53.8 54.4 52.7 54.7 55.5 54.8 52.0 54.5 55.5 54.1 Ru–Es 53.7 53.4 54.8 54.5 52.3 53.5 54.0 53.2 53.9 51.2 53.4 Ru–Gl 20.9 21.3 22.1 22.3 22.9 17.2 23.0 21.8 21.7 21.9 21.5 Ru–Pt 50.4 50.3 50.4 52.4 51.1 51.1 49.6 49.8 51.0 47.6 50.4 Ru–Sk 45.0 44.7 44.7 45.2 45.2 44.7 44.3 43.7 43.7 45.5 44.7 Ru–Tr 25.9 27.0 26.2 26.9 26.0 25.9 26.1 25.6 26.8 24.7 26.1 Sk–Az 2.8 4.0 1.5 3.7 2.1 2.8 3.4 3.1 1.8 3.4 2.9 Sk–Be 10.2 7.5 9.9 9.4 9.6 8.3 10.4 10.9 10.9 9.1 9.6 Sk–Cs 71.4 72.5 70.9 70.8 70.5 71.1 71.3 70.6 71.0 71.4 71.1 Sk–En 54.8 55.0 54.0 52.9 55.4 54.7 54.8 54.6 53.0 55.6 54.5 Sk–Es 52.5 51.6 52.2 53.9 52.3 52.0 50.4 50.5 51.5 51.1 51.8 Sk–Gl 27.0 27.3 27.2 28.4 27.8 20.6 26.2 26.0 27.0 27.0 26.4 Sk–Pt 49.3 50.3 48.2 50.4 52.0 49.2 49.1 48.7 48.5 47.7 49.3 Sk–Ru 43.8 43.4 43.5 43.2 43.7 44.0 42.8 42.9 41.2 43.4 43.2 Sk–Tr 28.2 27.5 27.2 28.5 27.1 26.1 26.2 27.6 27.4 26.0 27.2 Tr–Az 9.8 12.1 10.1 11.1 10.1 11.4 11.4 10.8 12.1 11.1 11.0 Tr–Be 9.0 4.8 8.7 8.1 7.8 7.5 8.1 6.9 7.5 7.2 7.6 Tr–Cs 40.3 41.6 40.3 41.6 41.6 40.8 41.6 41.8 40.9 39.2 41.0 Tr–En 51.1 49.3 51.1 50.2 50.4 48.5 50.5 50.2 50.7 50.1 50.2 Tr–Es 53.8 53.6 55.0 55.0 52.5 53.0 54.6 52.9 54.1 53.3 53.8 Tr–Gl 17.0 17.3 17.3 15.9 16.8 11.6 17.5 17.1 17.1 18.4 16.6 Tr–Pt 50.1 50.1 51.4 51.6 49.3 48.9 48.7 49.9 50.5 49.5 50.0 Tr–Ru 34.0 34.3 32.3 34.6 34.3 33.6 33.2 32.0 33.0 32.9 33.4 Tr–Sk 27.5 29.2 27.9 28.5 29.4 27.7 27.9 27.5 25.2 27.9 27.9 8673 Table 11: All results from the European-languages MWE experiment: P@5 (part 1). 
Test Hub language µ Az Be Cs En Es Gl Pt Ru Sk Tr Az–Be 26.0 22.5 26.5 26.0 26.5 25.2 25.7 26.0 25.7 25.7 25.6 Az–Cs 53.4 54.8 53.7 57.5 54.8 55.9 55.6 54.5 53.2 54.8 54.8 Az–En 44.7 48.0 47.6 45.7 45.9 47.4 46.8 46.3 46.1 47.2 46.6 Az–Es 60.1 62.6 60.7 62.6 60.7 60.4 60.7 62.4 61.8 62.9 61.5 Az–Gl 38.3 37.7 40.1 41.4 38.9 35.8 40.1 41.4 38.9 39.5 39.2 Az–Pt 52.8 55.3 55.3 56.3 55.8 55.3 55.8 57.8 55.3 56.8 55.7 Az–Ru 45.2 46.5 46.8 48.1 49.2 47.3 48.4 45.5 46.8 50.0 47.4 Az–Sk 43.9 46.1 47.0 48.3 49.2 48.3 49.2 48.3 46.7 46.7 47.4 Az–Tr 45.2 49.1 51.3 49.1 46.7 48.7 49.1 49.4 49.6 49.4 48.8 Be–Az 20.6 20.6 23.4 23.2 24.6 22.0 22.9 24.9 22.3 24.6 22.9 Be–Cs 44.5 44.8 47.6 48.5 46.5 47.9 48.7 46.8 45.7 47.9 46.9 Be–En 42.3 42.3 42.7 41.5 44.4 42.7 42.3 42.7 41.0 43.2 42.5 Be–Es 50.4 53.0 54.2 53.3 50.4 53.6 54.4 51.0 54.2 52.4 52.7 Be–Gl 38.8 36.5 37.7 38.8 38.0 36.5 38.3 38.0 38.6 37.7 37.9 Be–Pt 49.5 50.8 52.8 51.5 52.0 50.0 49.0 49.0 50.5 49.5 50.5 Be–Ru 53.0 53.2 52.1 51.8 53.8 52.7 53.0 53.0 53.2 51.8 52.8 Be–Sk 43.8 40.1 44.7 43.5 41.6 43.8 44.4 43.5 40.1 43.5 42.9 Be–Tr 33.4 33.2 34.6 37.8 32.2 34.4 36.9 33.4 33.2 32.2 34.1 Cs–Az 10.3 11.2 11.2 13.8 14.1 11.8 12.1 10.6 11.2 12.6 11.9 Cs–Be 14.8 15.5 15.5 16.3 16.3 16.6 16.1 16.1 14.8 15.8 15.8 Cs–En 75.6 76.4 75.1 75.7 76.2 76.9 76.1 75.8 75.9 76.0 76.0 Cs–Es 75.5 75.3 74.1 76.5 75.9 74.9 74.3 75.5 75.9 74.1 75.2 Cs–Gl 40.8 41.8 43.0 43.7 43.1 36.5 42.1 42.6 42.1 41.2 41.7 Cs–Pt 72.9 74.1 72.2 74.3 73.1 73.7 72.7 73.8 72.7 71.6 73.1 Cs–Ru 64.5 64.4 63.6 63.9 63.9 64.5 64.9 64.5 64.3 65.5 64.4 Cs–Sk 81.7 82.9 83.2 82.8 82.5 83.0 83.2 82.7 81.6 82.7 82.6 Cs–Tr 56.2 56.0 55.1 57.1 56.4 54.2 54.9 55.5 54.9 53.8 55.4 En–Az 28.3 29.1 30.3 29.9 28.9 29.2 30.2 29.1 28.8 30.6 29.4 En–Be 32.8 28.3 34.0 31.5 34.0 34.5 30.3 32.8 33.3 32.8 32.4 En–Cs 74.7 74.9 73.4 74.5 76.1 76.5 74.8 75.1 73.8 75.5 74.9 En–Es 88.9 89.5 88.8 89.3 89.1 89.3 89.1 89.3 89.0 89.1 89.1 En–Gl 49.0 50.4 50.5 50.4 51.3 47.8 50.9 51.4 49.1 50.7 50.1 En–Pt 86.0 86.6 86.2 86.6 86.2 86.4 86.3 86.3 86.4 85.8 86.3 En–Ru 68.0 68.1 68.2 66.0 68.6 69.6 68.7 67.7 67.4 68.2 68.1 En–Sk 62.3 62.7 62.5 60.8 62.5 62.1 63.5 62.7 59.9 63.2 62.2 En–Tr 63.6 62.6 64.3 62.4 62.4 63.8 63.8 63.0 63.2 63.2 63.2 Es–Az 16.3 16.9 16.9 17.5 18.4 17.8 17.2 17.2 19.0 18.1 17.5 Es–Be 16.8 15.5 17.1 18.9 16.3 18.9 18.7 17.1 18.1 16.5 17.4 Es–Cs 64.4 65.7 63.5 65.2 66.1 65.5 65.9 66.0 65.8 65.9 65.4 Es–En 85.2 86.3 86.0 85.5 85.8 85.5 85.8 86.1 86.0 86.0 85.8 Es–Gl 45.6 46.0 45.7 46.1 46.4 43.2 45.9 45.7 45.8 46.2 45.7 Es–Pt 90.8 91.1 90.7 91.3 91.4 91.1 91.3 90.7 90.9 90.9 91.0 Es–Ru 61.5 62.5 61.4 62.5 62.1 61.7 62.2 60.8 61.6 62.9 61.9 Es–Sk 57.9 59.1 58.7 58.5 59.1 57.8 58.1 57.6 57.0 58.5 58.2 Es–Tr 57.0 57.4 57.2 56.7 55.0 56.3 56.3 55.5 56.6 56.5 56.5 8674 Table 12: All results from the European-languages MWE experiment: P@5 (part 2). 
Test Hub language µ Az Be Cs En Es Gl Pt Ru Sk Tr Gl–Az 8.4 9.0 8.8 9.8 9.6 10.0 9.7 9.4 9.2 9.7 9.4 Gl–Be 7.3 6.1 6.1 6.7 6.7 6.7 7.9 6.1 6.1 7.3 6.7 Gl–Cs 41.8 42.1 43.0 42.3 44.5 40.2 42.5 42.5 42.0 43.0 42.4 Gl–En 56.8 57.4 58.6 56.3 59.7 57.6 57.2 57.8 56.7 58.1 57.6 Gl–Es 68.3 68.8 68.1 68.8 68.6 67.9 68.3 68.8 68.2 68.8 68.5 Gl–Pt 63.9 64.3 63.4 64.1 63.2 62.8 63.4 64.0 63.7 63.9 63.7 Gl–Ru 40.2 39.8 39.3 39.6 39.5 37.0 40.0 39.5 39.3 40.8 39.5 Gl–Sk 41.6 42.4 41.1 41.9 43.7 38.5 41.0 41.4 39.2 41.5 41.2 Gl–Tr 33.5 33.4 34.9 33.9 33.3 29.4 32.4 32.6 34.0 31.5 32.9 Pt–Az 8.7 11.1 10.2 12.5 11.1 10.2 10.5 9.9 12.0 11.1 10.7 Pt–Be 14.4 12.1 14.4 17.4 14.1 15.9 14.9 14.9 14.9 14.6 14.8 Pt–Cs 65.6 66.6 64.7 65.8 66.5 66.6 65.9 66.3 65.5 65.1 65.9 Pt–En 81.3 82.1 82.0 82.1 81.9 82.0 81.5 81.7 81.5 82.0 81.8 Pt–Es 92.1 92.6 92.4 92.1 92.0 91.8 92.4 92.4 92.0 92.3 92.2 Pt–Gl 45.4 46.4 46.2 46.9 46.8 43.5 45.8 45.4 45.2 46.7 45.8 Pt–Ru 57.6 57.8 57.7 58.7 58.1 58.5 57.0 57.5 57.6 57.6 57.8 Pt–Sk 57.2 56.9 57.0 57.8 56.6 55.4 56.6 56.8 53.1 56.4 56.4 Pt–Tr 53.9 54.8 54.2 56.3 53.3 53.6 52.7 54.5 54.4 54.6 54.2 Ru–Az 12.0 15.6 15.9 15.6 15.9 14.8 15.4 14.2 14.2 15.9 15.0 Ru–Be 20.1 18.3 20.6 20.1 20.9 20.6 20.6 20.9 21.1 20.4 20.4 Ru–Cs 65.7 65.0 65.1 64.7 65.0 66.7 66.1 65.8 65.1 65.5 65.5 Ru–En 72.8 73.0 73.9 72.0 73.8 73.5 72.7 72.3 72.9 73.5 73.0 Ru–Es 70.1 69.8 69.7 71.3 69.2 70.3 71.2 68.8 70.7 68.4 69.9 Ru–Gl 36.1 35.9 36.1 36.8 37.1 30.9 36.5 36.6 35.9 35.3 35.7 Ru–Pt 66.8 66.8 67.0 69.3 67.9 67.6 65.8 66.6 67.3 65.2 67.0 Ru–Sk 61.1 62.6 61.4 61.1 62.0 61.8 61.8 60.9 59.8 61.6 61.4 Ru–Tr 48.0 48.0 47.6 49.9 47.1 47.5 48.0 46.0 47.0 47.4 47.7 Sk–Az 7.7 9.2 7.1 9.5 7.4 8.3 8.9 8.9 8.3 8.6 8.4 Sk–Be 17.4 16.7 18.5 18.2 17.7 18.5 18.2 19.3 19.3 18.5 18.2 Sk–Cs 82.1 82.1 81.3 81.6 82.1 82.4 81.6 81.6 81.3 81.9 81.8 Sk–En 70.7 71.7 71.3 69.6 71.2 71.4 71.5 70.9 70.3 71.4 71.0 Sk–Es 69.2 69.7 70.2 71.2 70.1 68.8 70.0 68.6 69.2 69.4 69.6 Sk–Gl 43.4 43.3 42.9 45.1 43.7 36.0 42.9 42.0 43.0 42.7 42.5 Sk–Pt 68.2 67.5 67.5 68.7 69.9 67.6 66.1 67.6 66.7 66.7 67.7 Sk–Ru 59.2 58.1 58.2 58.8 59.4 59.5 58.8 58.5 57.5 59.5 58.8 Sk–Tr 47.2 48.7 47.6 48.7 47.1 46.7 48.2 47.8 46.7 46.2 47.5 Tr–Az 19.5 22.2 19.9 21.2 20.9 20.9 20.5 19.5 21.9 20.2 20.7 Tr–Be 17.1 12.3 16.2 17.1 16.8 15.6 16.5 16.5 16.2 16.2 16.1 Tr–Cs 61.6 62.1 60.1 61.8 62.4 61.9 61.6 61.5 61.4 60.1 61.4 Tr–En 68.0 68.2 68.1 67.2 67.8 67.5 69.6 67.7 67.9 67.2 67.9 Tr–Es 69.8 69.0 70.4 70.5 68.0 69.2 70.5 69.4 69.8 69.5 69.6 Tr–Gl 30.5 30.7 31.1 30.0 30.4 23.6 31.4 31.1 29.7 30.7 29.9 Tr–Pt 67.1 66.9 66.9 67.9 66.5 65.9 65.2 67.1 67.5 66.6 66.8 Tr–Ru 55.4 55.9 54.0 55.4 55.3 55.1 55.1 53.0 52.9 53.5 54.6 Tr–Sk 48.2 49.9 48.9 49.7 48.7 47.8 48.9 48.1 44.2 47.7 48.2 8675 Table 13: All results from the European-languages MWE experiment: P@10 (part 1). 
Test Hub language µ Az Be Cs En Es Gl Pt Ru Sk Tr Az–Be 31.1 27.1 30.8 31.4 31.9 31.1 29.8 30.3 32.2 31.1 30.7 Az–Cs 60.3 62.5 60.8 62.7 63.6 61.4 62.7 61.1 60.3 63.6 61.9 Az–En 49.3 51.1 52.6 50.5 49.5 50.7 51.4 50.3 50.1 50.7 50.6 Az–Es 63.8 65.7 65.4 67.1 65.2 66.3 68.0 64.6 66.6 67.4 66.0 Az–Gl 42.6 42.6 45.1 45.1 43.8 39.5 45.1 43.8 42.6 43.8 43.4 Az–Pt 58.5 61.2 62.7 62.5 61.5 61.7 61.0 61.2 61.7 62.5 61.5 Az–Ru 50.8 52.7 52.9 50.8 54.0 53.2 54.3 51.6 51.9 54.5 52.7 Az–Sk 48.9 52.0 53.0 52.0 53.9 54.2 53.0 52.4 51.7 51.7 52.3 Az–Tr 53.3 55.5 56.7 57.0 55.0 55.3 55.7 56.5 57.0 56.7 55.9 Be–Az 25.7 25.4 29.7 28.5 29.4 26.8 27.7 28.2 26.8 28.0 27.6 Be–Cs 50.7 51.0 52.1 51.3 51.8 53.8 52.7 51.8 50.7 51.8 51.8 Be–En 46.6 48.7 50.0 46.2 48.3 50.9 46.2 48.3 46.2 47.9 47.9 Be–Es 54.7 57.3 58.7 58.7 56.2 57.9 57.9 55.9 58.5 57.9 57.4 Be–Gl 47.0 45.2 44.6 46.1 43.8 41.4 43.5 43.8 44.3 42.9 44.3 Be–Pt 55.3 55.8 57.0 57.8 57.0 56.5 55.8 54.5 55.5 56.0 56.1 Be–Ru 56.3 56.3 56.1 56.1 56.9 56.1 56.3 56.3 56.9 55.5 56.3 Be–Sk 48.0 45.6 48.3 47.7 48.0 48.6 49.8 48.6 46.2 48.0 47.9 Be–Tr 38.3 40.5 41.5 43.2 40.3 40.3 41.8 41.5 40.3 38.3 40.6 Cs–Az 13.8 14.9 15.5 16.1 17.5 14.9 15.8 14.1 14.9 15.5 15.3 Cs–Be 18.9 17.9 19.2 19.9 19.4 19.9 19.9 19.2 17.9 19.2 19.1 Cs–En 80.2 80.5 79.8 80.0 80.1 81.0 80.2 80.5 80.5 81.1 80.4 Cs–Es 80.1 79.6 78.8 80.0 79.9 79.4 79.9 79.3 80.2 79.0 79.6 Cs–Gl 47.2 48.0 47.9 49.9 49.3 42.4 48.2 48.3 49.1 47.1 47.7 Cs–Pt 77.5 78.7 77.5 78.3 77.1 77.7 76.9 77.7 76.9 76.8 77.5 Cs–Ru 70.1 70.3 69.1 69.6 69.4 70.7 69.6 69.5 69.5 70.5 69.8 Cs–Sk 85.5 85.6 85.7 85.2 84.9 85.1 86.2 85.2 84.9 85.6 85.4 Cs–Tr 63.2 62.7 62.5 63.5 62.7 62.5 62.7 63.4 62.6 61.6 62.7 En–Az 32.2 33.3 34.3 34.3 33.8 32.5 34.4 33.0 34.3 33.8 33.6 En–Be 38.5 34.0 40.4 39.0 40.0 41.2 38.7 38.2 38.7 38.5 38.7 En–Cs 81.2 81.1 79.9 80.7 81.9 82.5 80.6 80.7 80.7 81.5 81.1 En–Es 91.3 92.1 91.7 91.5 91.9 91.7 91.8 91.6 91.9 91.7 91.7 En–Gl 53.9 56.3 56.4 55.7 55.8 53.2 55.9 56.2 54.9 55.5 55.4 En–Pt 89.4 90.0 89.2 89.5 89.1 89.5 89.3 89.0 89.4 89.0 89.3 En–Ru 74.6 74.0 75.8 72.2 74.8 76.0 74.8 73.8 74.0 74.4 74.4 En–Sk 69.3 69.7 69.9 68.0 69.6 68.7 69.9 69.5 67.1 69.9 69.2 En–Tr 69.9 70.1 71.0 69.3 69.5 69.8 70.3 71.1 70.0 69.2 70.0 Es–Az 20.2 20.8 20.2 21.1 20.8 20.2 19.3 20.2 21.1 21.1 20.5 Es–Be 20.8 18.9 20.8 22.9 21.3 22.4 21.1 23.2 21.3 21.3 21.4 Es–Cs 70.5 70.7 70.8 70.9 71.0 71.1 71.3 71.8 72.2 70.9 71.1 Es–En 88.5 88.4 88.5 88.3 88.5 88.5 88.5 88.5 88.5 88.4 88.5 Es–Gl 49.5 49.4 49.4 49.8 50.0 46.0 49.6 49.6 49.4 50.2 49.3 Es–Pt 92.7 92.5 92.5 92.5 93.0 92.9 92.8 92.4 92.1 92.7 92.6 Es–Ru 67.5 67.1 67.4 68.9 67.4 67.6 67.8 66.8 68.7 68.5 67.8 Es–Sk 64.5 64.3 63.9 65.4 65.4 63.5 64.3 64.8 63.0 63.8 64.3 Es–Tr 63.6 63.8 64.3 62.7 61.6 62.6 63.7 62.2 63.8 61.7 63.0 8676 Table 14: All results from the European-languages MWE experiment: P@10 (part 2). 
Test Hub language µ Az Be Cs En Es Gl Pt Ru Sk Tr Gl–Az 11.5 11.2 11.1 12.5 12.6 12.3 13.1 12.1 12.5 12.3 12.1 Gl–Be 8.5 7.3 8.5 9.1 8.5 7.9 7.9 7.9 8.5 9.7 8.4 Gl–Cs 48.0 49.0 48.8 49.0 50.7 46.6 48.3 49.1 49.0 49.0 48.8 Gl–En 64.1 64.4 64.7 62.2 64.4 62.5 63.4 64.4 62.4 63.0 63.6 Gl–Es 71.3 71.5 71.5 72.1 71.7 71.1 71.0 71.6 71.4 72.5 71.6 Gl–Pt 66.9 67.1 67.4 67.6 67.5 67.7 67.1 67.6 66.8 68.1 67.4 Gl–Ru 46.7 46.5 45.9 45.0 46.3 42.8 45.8 44.8 44.7 45.7 45.4 Gl–Sk 48.2 48.1 47.2 48.5 48.8 45.3 47.6 46.7 45.5 48.2 47.4 Gl–Tr 39.7 39.3 39.3 39.1 38.2 35.9 38.8 38.9 38.3 38.0 38.5 Pt–Az 11.7 14.6 13.4 14.6 15.2 12.5 13.4 13.1 13.4 15.7 13.8 Pt–Be 18.9 17.2 18.2 21.0 18.7 20.2 18.7 19.7 18.4 18.7 19.0 Pt–Cs 71.6 72.0 70.6 71.7 71.7 72.0 71.5 71.9 71.2 70.7 71.5 Pt–En 84.0 84.3 84.1 85.1 84.2 84.9 84.1 83.9 84.7 84.3 84.4 Pt–Es 92.8 93.2 93.2 93.2 93.6 93.0 93.4 93.3 93.2 93.4 93.2 Pt–Gl 49.3 49.6 48.9 50.1 49.9 46.8 49.3 48.9 47.9 49.6 49.0 Pt–Ru 63.6 64.3 62.8 64.7 64.4 64.3 63.0 63.4 63.8 62.4 63.7 Pt–Sk 63.6 62.4 62.6 63.9 63.0 62.6 62.4 62.1 59.7 62.2 62.4 Pt–Tr 60.4 60.8 60.4 62.3 59.5 60.4 60.3 60.9 60.5 60.9 60.6 Ru–Az 15.4 17.0 18.7 20.1 18.4 18.4 19.0 17.9 17.3 19.8 18.2 Ru–Be 25.1 22.2 24.5 23.8 24.3 24.0 24.5 24.3 25.3 24.3 24.2 Ru–Cs 70.8 70.3 70.9 70.4 70.8 71.3 71.0 70.5 70.8 71.1 70.8 Ru–En 76.9 77.8 78.6 76.6 78.4 77.8 77.4 76.8 77.1 77.5 77.5 Ru–Es 75.2 75.2 75.3 76.3 75.6 75.3 76.3 74.8 76.4 74.5 75.5 Ru–Gl 43.1 42.2 42.1 43.3 43.5 37.1 41.9 41.7 41.3 40.5 41.7 Ru–Pt 72.6 71.8 72.6 74.5 72.5 72.6 71.5 71.5 72.2 70.2 72.2 Ru–Sk 65.5 66.8 66.3 66.5 66.3 66.4 67.0 66.5 64.7 66.9 66.3 Ru–Tr 56.1 56.2 55.2 57.7 56.8 57.0 56.1 54.8 57.3 54.8 56.2 Sk–Az 11.0 11.0 10.7 13.8 10.7 13.2 13.2 10.4 11.3 12.0 11.7 Sk–Be 23.2 20.8 21.1 22.1 21.1 22.9 22.7 22.9 23.4 22.1 22.2 Sk–Cs 85.1 85.5 84.6 84.4 85.3 85.9 85.6 84.9 85.0 85.0 85.1 Sk–En 74.5 76.3 76.6 73.9 75.7 76.0 75.6 75.4 75.3 75.8 75.5 Sk–Es 75.7 75.5 74.9 76.2 74.4 74.2 74.6 74.4 74.7 74.7 74.9 Sk–Gl 49.1 48.7 48.9 51.7 50.1 40.9 49.4 48.5 49.6 49.7 48.7 Sk–Pt 73.7 73.2 72.6 74.7 74.0 73.1 71.7 72.8 72.9 72.0 73.1 Sk–Ru 63.5 64.4 62.8 64.0 64.0 64.2 64.0 62.6 62.6 64.6 63.7 Sk–Tr 55.4 57.0 56.2 57.4 55.7 55.4 57.0 56.0 54.4 55.2 56.0 Tr–Az 22.9 24.6 23.9 23.2 23.6 24.9 23.6 23.2 24.6 24.9 23.9 Tr–Be 22.2 16.8 21.6 20.7 21.3 21.6 23.4 19.8 19.5 21.3 20.8 Tr–Cs 68.5 68.0 66.7 67.2 68.0 68.1 68.4 67.1 67.8 66.3 67.6 Tr–En 73.5 74.0 73.7 73.2 73.0 73.2 74.2 74.0 72.9 72.2 73.4 Tr–Es 74.4 74.0 74.6 75.5 73.2 73.8 74.6 74.7 74.8 74.4 74.4 Tr–Gl 36.1 36.6 35.9 36.4 35.9 29.7 36.7 36.7 35.0 36.8 35.6 Tr–Pt 72.2 71.8 71.8 72.8 71.3 71.4 70.8 71.8 72.4 72.1 71.8 Tr–Ru 61.3 61.8 60.0 61.8 61.7 61.8 60.5 60.0 59.5 59.9 60.8 Tr–Sk 55.4 56.8 56.8 57.0 56.2 54.9 56.4 55.8 51.6 55.4 55.6 8677 Table 15: All results from the distant languages MWE experiment (P@1). 
Test Hub language µ En Fr Hi Ko Ru Sv Uk En–Fr 75.1 75.3 75.2 75.8 76.3 75.5 75.4 75.5 En–Hi 20.9 23.5 21.0 21.4 23.5 21.4 23.9 22.2 En–Ko 9.2 10.4 9.1 9.8 9.8 10.1 10.0 9.8 En–Ru 41.8 42.0 41.8 41.5 42.0 41.8 42.0 41.8 En–Sv 57.0 57.5 59.0 56.6 57.8 57.6 58.4 57.7 En–Uk 26.9 27.5 26.9 26.9 28.3 27.8 26.2 27.2 Fr–En 72.5 72.0 71.6 72.7 72.9 73.4 74.0 72.7 Fr–Hi 18.7 16.0 14.8 17.3 19.0 17.8 17.5 17.3 Fr–Ko 6.9 6.7 5.8 5.5 5.8 7.5 6.0 6.3 Fr–Ru 39.9 38.3 40.3 40.4 40.8 40.0 39.6 39.9 Fr–Sv 51.8 49.3 50.5 51.1 49.4 48.2 51.8 50.3 Fr–Uk 28.8 27.0 27.8 28.5 28.7 27.7 26.1 27.8 Hi–En 27.8 31.4 27.9 28.6 30.4 29.3 29.3 29.3 Hi–Fr 25.6 23.1 25.1 23.3 26.9 25.5 24.2 24.8 Hi–Ko 2.1 1.7 1.3 1.6 1.6 1.4 1.8 1.6 Hi–Ru 13.9 14.2 14.3 13.6 14.3 13.5 14.6 14.0 Hi–Sv 17.3 16.8 16.3 15.9 17.0 15.9 16.6 16.6 Hi–Uk 10.3 10.5 9.1 9.1 9.8 9.5 9.6 9.7 Ko–En 15.1 16.6 15.2 17.0 16.6 17.7 16.4 16.4 Ko–Fr 11.9 10.2 10.9 10.9 12.6 13.6 10.8 11.6 Ko–Hi 1.8 2.4 1.2 1.6 2.0 1.8 2.0 1.9 Ko–Ru 7.9 6.6 6.0 5.7 6.9 6.8 7.3 6.7 Ko–Sv 6.8 6.6 5.9 5.9 7.2 5.6 7.2 6.5 Ko–Uk 3.5 3.6 3.4 3.2 3.5 3.5 3.1 3.4 Ru–En 50.2 53.2 52.2 53.4 52.5 52.6 52.1 52.3 Ru–Fr 51.1 49.6 50.7 51.7 51.0 50.6 50.3 50.7 Ru–Hi 14.6 15.0 12.0 14.6 13.3 14.8 15.3 14.2 Ru–Ko 5.2 4.6 4.4 3.6 4.3 4.1 5.0 4.4 Ru–Sv 40.7 40.9 40.1 41.0 39.8 36.7 41.3 40.1 Ru–Uk 55.3 56.1 55.8 56.3 55.3 55.3 54.9 55.6 Sv–En 51.2 51.1 52.3 51.9 52.0 50.7 52.7 51.7 Sv–Fr 47.9 45.7 46.8 48.2 47.1 46.6 47.4 47.1 Sv–Hi 17.2 16.3 15.0 16.0 17.7 15.9 17.0 16.4 Sv–Ko 4.9 4.2 4.0 3.8 5.0 4.0 5.1 4.4 Sv–Ru 31.5 33.2 32.4 33.0 31.8 30.2 31.8 32.0 Sv–Uk 22.4 23.8 23.0 23.5 24.1 21.0 21.9 22.8 Uk–En 39.5 40.8 40.3 40.7 41.4 40.2 40.2 40.4 Uk–Fr 43.6 42.3 44.0 43.3 43.0 43.3 40.6 42.9 Uk–Hi 13.8 13.8 12.8 12.8 12.7 14.4 13.0 13.3 Uk–Ko 2.6 2.5 2.4 2.0 2.0 2.4 2.6 2.4 Uk–Ru 59.4 58.9 59.7 58.7 59.1 58.4 58.6 59.0 Uk–Sv 35.8 35.5 35.8 36.8 35.4 32.7 35.1 35.3 8678 Table 16: All results from the distant languages MWE experiment (P@5). 
Test Hub language µ En Fr Hi Ko Ru Sv Uk En–Fr 87.3 88.2 87.8 88.4 88.3 88.0 87.7 88.0 En–Hi 37.2 39.4 36.5 37.1 39.3 38.7 39.9 38.3 En–Ko 23.4 24.6 22.6 23.4 24.3 25.9 25.0 24.2 En–Ru 63.5 65.3 65.1 64.8 66.9 64.6 65.9 65.2 En–Sv 74.8 76.1 76.3 75.8 75.4 75.6 76.5 75.8 En–Uk 47.7 49.8 49.3 47.9 49.3 48.5 47.7 48.6 Fr–En 85.3 84.5 83.7 84.5 85.4 85.1 84.6 84.7 Fr–Hi 32.7 30.0 29.5 30.6 33.4 32.2 31.6 31.4 Fr–Ko 14.9 14.5 14.0 14.6 16.0 15.3 15.2 14.9 Fr–Ru 61.0 59.5 61.9 61.7 62.1 60.6 60.9 61.1 Fr–Sv 69.6 68.1 68.8 69.1 68.6 68.0 71.1 69.0 Fr–Uk 45.6 44.2 44.8 45.6 45.8 45.0 44.1 45.0 Hi–En 44.5 47.0 46.3 44.3 47.0 46.3 46.7 46.0 Hi–Fr 41.7 39.3 41.6 39.6 42.7 41.2 42.3 41.2 Hi–Ko 5.3 4.8 3.4 3.5 4.7 5.1 5.0 4.5 Hi–Ru 27.6 29.6 27.6 28.1 27.9 28.8 29.5 28.4 Hi–Sv 31.7 31.7 30.8 30.7 32.7 30.2 32.0 31.4 Hi–Uk 21.4 21.9 19.9 20.1 20.8 20.4 20.2 20.7 Ko–En 28.9 28.7 27.0 28.1 30.1 33.1 28.6 29.2 Ko–Fr 21.9 21.6 19.7 20.4 24.0 24.4 21.3 21.9 Ko–Hi 4.3 4.8 3.9 4.1 4.6 4.8 5.0 4.5 Ko–Ru 16.2 15.3 12.9 13.4 15.8 15.7 16.3 15.1 Ko–Sv 16.2 14.1 13.9 13.8 15.6 13.9 16.3 14.8 Ko–Uk 9.7 8.0 8.6 8.6 9.3 8.2 8.8 8.8 Ru–En 69.8 71.1 70.9 71.0 70.2 71.1 71.3 70.8 Ru–Fr 65.7 66.2 67.7 67.9 67.0 66.6 67.2 66.9 Ru–Hi 27.3 27.6 24.7 26.7 25.6 26.6 28.7 26.7 Ru–Ko 12.1 10.4 10.1 10.0 11.1 10.4 12.4 10.9 Ru–Sv 58.8 58.9 58.2 58.2 58.8 56.1 59.9 58.4 Ru–Uk 68.3 68.8 69.2 68.0 68.8 68.6 66.9 68.4 Sv–En 65.4 66.2 66.3 65.7 65.1 64.4 65.9 65.6 Sv–Fr 62.5 60.1 60.3 61.1 60.7 59.8 61.3 60.8 Sv–Hi 28.2 28.0 26.6 27.4 29.3 27.1 28.6 27.9 Sv–Ko 11.7 10.7 10.9 9.8 11.5 11.6 11.4 11.1 Sv–Ru 50.5 51.0 50.7 50.9 50.3 47.8 49.9 50.2 Sv–Uk 40.2 42.1 41.6 41.6 41.7 38.3 39.2 40.6 Uk–En 56.3 58.1 57.5 57.2 59.1 58.1 56.1 57.5 Uk–Fr 58.3 56.4 58.5 58.7 58.9 58.0 56.4 57.9 Uk–Hi 27.2 25.8 24.0 25.4 26.5 25.8 25.3 25.7 Uk–Ko 7.4 7.2 6.8 6.0 7.3 7.3 7.3 7.0 Uk–Ru 71.0 71.0 71.2 70.1 70.4 70.7 70.5 70.7 Uk–Sv 53.3 53.3 52.5 53.1 53.7 48.9 53.1 52.5 8679 Table 17: All results from the distant languages MWE experiment (P@10). 
Test Hub language µ En Fr Hi Ko Ru Sv Uk En–Fr 90.8 91.3 90.1 91.0 91.1 91.1 90.7 90.9 En–Hi 44.0 45.9 43.3 43.1 45.0 45.2 45.6 44.6 En–Ko 31.1 31.5 28.4 30.5 31.6 33.7 32.1 31.3 En–Ru 70.1 71.7 71.0 70.7 72.4 71.1 72.3 71.3 En–Sv 80.0 81.1 80.9 80.4 80.8 80.4 81.2 80.7 En–Uk 55.3 57.5 56.5 55.2 57.4 56.4 54.6 56.1 Fr–En 87.6 87.8 86.6 87.7 88.0 87.9 88.0 87.6 Fr–Hi 39.1 35.3 35.5 36.5 38.6 38.1 38.5 37.4 Fr–Ko 20.1 18.4 18.4 19.6 20.3 19.4 19.7 19.4 Fr–Ru 67.1 65.9 68.1 67.5 66.8 66.8 67.4 67.1 Fr–Sv 74.4 73.3 74.2 74.8 73.3 73.3 75.5 74.1 Fr–Uk 51.7 49.7 51.3 51.8 52.0 51.2 49.9 51.1 Hi–En 50.0 52.3 53.0 50.8 52.7 51.7 52.3 51.8 Hi–Fr 49.0 45.5 46.8 46.8 48.3 48.1 48.9 47.6 Hi–Ko 7.9 7.2 5.1 5.1 6.4 6.6 7.2 6.5 Hi–Ru 34.5 35.3 34.5 34.7 33.6 35.3 36.3 34.9 Hi–Sv 38.0 37.5 36.1 37.9 38.9 36.3 38.5 37.6 Hi–Uk 27.3 27.6 25.8 25.4 26.2 25.9 25.5 26.3 Ko–En 34.2 34.3 32.3 35.2 37.1 38.4 35.4 35.3 Ko–Fr 27.0 25.9 23.7 24.6 28.5 30.1 26.4 26.6 Ko–Hi 6.2 6.9 5.6 6.0 6.7 6.7 6.9 6.4 Ko–Ru 21.2 19.3 16.4 18.2 20.4 20.9 20.8 19.6 Ko–Sv 20.9 18.1 17.8 17.5 21.1 18.4 20.6 19.2 Ko–Uk 12.9 12.1 11.5 11.3 12.6 12.0 11.7 12.0 Ru–En 74.9 75.8 75.4 75.5 75.5 76.2 75.6 75.6 Ru–Fr 71.8 72.5 73.0 72.2 72.7 72.7 72.6 72.5 Ru–Hi 33.0 32.9 30.1 32.1 31.9 32.1 34.6 32.4 Ru–Ko 17.2 14.6 13.2 13.5 15.9 15.0 16.7 15.2 Ru–Sv 64.7 64.7 63.6 64.6 64.2 62.5 64.6 64.1 Ru–Uk 73.3 72.8 73.1 72.0 73.1 72.9 71.7 72.7 Sv–En 69.5 70.4 71.0 70.6 70.9 69.3 70.0 70.2 Sv–Fr 67.0 64.2 65.0 65.3 65.5 64.2 65.7 65.3 Sv–Hi 33.6 32.6 32.0 30.9 33.3 31.9 33.2 32.5 Sv–Ko 15.7 14.7 14.0 12.9 15.7 14.9 15.6 14.8 Sv–Ru 57.2 56.4 56.5 56.2 56.4 53.8 56.4 56.1 Sv–Uk 47.5 47.9 47.7 47.7 48.5 44.8 46.4 47.2 Uk–En 61.6 63.4 62.9 62.2 63.5 62.7 61.1 62.5 Uk–Fr 63.5 62.4 63.9 63.4 64.3 63.5 61.9 63.3 Uk–Hi 32.7 32.3 28.6 30.2 31.7 31.5 30.7 31.1 Uk–Ko 10.6 10.2 9.5 8.7 10.1 10.4 10.2 10.0 Uk–Ru 74.5 73.8 74.1 73.9 74.5 74.1 73.9 74.1 Uk–Sv 59.1 58.8 58.8 58.7 59.3 55.2 57.8 58.2
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8680–8689 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8680 Smart To-Do: Automatic Generation of To-Do Items from Emails Sudipto Mukherjee†∗ Subhabrata Mukherjee‡ Marcello Hasegawa‡ Ahmed Hassan Awadallah‡ Ryen White‡ †University of Washington, Seattle ‡Microsoft Research AI [email protected], {submukhe, marcellh, hassanam, ryenw}@microsoft.com Abstract Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks. In this work, we explore a new application, Smart-To-Do, that helps users with task management over emails. We introduce a new task and dataset for automatically generating To-Do items from emails where the sender has promised to perform an action. We design a two-stage process leveraging recent advances in neural text generation and sequenceto-sequence learning, obtaining BLEU and ROUGE scores of 0.23 and 0.63 for this task. To the best of our knowledge, this is the first work to address the problem of composing ToDo items from emails. 1 Introduction Email is one of the most used forms of communication especially in enterprise and work settings (Radicati and Levenstein, 2015). With the growing number of users in email platforms, service providers are constantly seeking to improve user experience for a myriad of applications such as online retail, instant messaging and event management (Feddern-Bekcan, 2008). Smart Reply (Kannan et al., 2016) and Smart Compose (Chen et al., 2019) are two recent features that provide contextual assistance to users aiming to reduce typing efforts. Another line of work in this direction is for automated task management and scheduling. For example. the recent Nudge feature1 in Gmail and Insights in Outlook2 are designed to remind users to follow-up on an email or pay attention to pending tasks. Smart To-Do takes a step further in task assistance and seeks to boost user productivity by automatically generating To-Do items from their email ∗Work done as an intern at Microsoft Research. 1 Gmail Nudge 2 Outlook Insights From: Alice To: [email protected] Subject: Sales Report Hi John, From: John To: [email protected] Subject: RE: Sales Report I am doing well. Thanks! I am travelling now. I will send it to you once I am back. Send the product launch sales report to Alice How are you? I wanted to follow up on our previous product launch meeting. Could you send me the sales report you mentioned? I want to forward it to my manager and others in the team. Best, Alice Hi Alice, -John Figure 1: An illustration showing the email and a commitment sentence (in yellow) and the target To-Do item, along with other email meta-data. context. Text generation from emails, like creating To-Do items, is replete with complexities due to the diversity of conversations in email threads, heterogeneous structure of emails and various meta-deta involved. As opposed to prior works in text generation like news headlines, email subject lines and email conversation summarization, To-Do items are action-focused, requiring the identification of a specific task to be performed. In this work, we introduce the task of automatically generating To-Do items from email context and meta-data to assist users with following up on their promised actions (also referred to as commitments in this work). Refer to Figure 1 for an illustration. 
Given an email, its temporal context (i.e. thread), and associated meta-data like the name of the sender and recipient, we want to generate a short and succinct To-Do item for the task mentioned in the email. This requires identifying the task sentence (also referred to as a query), relevant sentences in the email that provide contextual information about the query, along with the entities (e.g., people) associated with the task. We utilize existing work to identify the task sentence via a commitment classifier that detects action intents in the emails. Thereafter, we use an unsupervised technique to extract key sentences in the email that are helpful in providing contextual information about the query. These pieces of information are further combined to generate the To-Do item using a sequence-to-sequence architecture with deep neural networks. Figure 2 shows a schematic diagram of the process. Since there is no existing work or dataset on this problem, our first step is to collect annotated data for this task.

[Figure 2: Smart To-Do flowchart. The email content is first scanned by the commitment classifier to detect any possible commitment sentence. If present, a To-Do item is generated using the two-stage Smart To-Do framework: Stage 1 (extractive) selects 'helpful' sentences and Stage 2 (abstractive) generates the item with a Seq2Seq model with copy mechanism; otherwise no To-Do item is produced.]

Overall, our contributions can be summarized as follows:
• We create a new dataset for To-Do item generation from emails containing action items, based on the publicly available email corpus Avocado (Oard et al., 2015).3
• We develop a two-stage algorithm, based on unsupervised task-focused content selection and subsequent text generation combining contextual information and email meta-data.
• We conduct experiments on this new dataset and show that our model performs at par with human judgments on multiple performance metrics.
3 We will release the code and data (in accordance with LDC and Avocado policy) at https://aka.ms/SmartToDo. Email examples in this paper are similar to those in our dataset but are not reproducing text from the Avocado dataset.

2 Related Works
Summarization of email threads has been the focus of multiple research works in the past (Rambow et al., 2004; Carenini et al., 2007; Dredze et al., 2008). There has also been considerable research on identifying speech acts or tasks in emails (Carvalho and Cohen, 2005; Lampert et al., 2010; Scerri et al., 2010) and how it can be robustly adapted across diverse email corpora (Azarbonyad et al., 2019). Recently, novel neural architectures have been explored for modeling action items in emails (Lin et al., 2018) and identifying intents in email conversations (Wang et al., 2019). However, there has been less focus on task-specific email summarization (Corston-Oliver et al., 2004). The closest to our work is that of email subject line generation (Zhang and Tetreault, 2019). But it focuses on a common email theme and uses a supervised approach for sentence selection, whereas our method relies on identifying the task-related context.
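Before turning to the data, the two-stage flow of Figure 2 can be made concrete with the sketch below. It is only an illustration of the overall design: the helpers commitment_score, select_helpful_sentences, and generate_todo stand in for the components detailed in Sections 3 and 4 and are not part of any released implementation.

```python
from typing import Callable, List, Optional

def smart_todo(sentences: List[str],
               commitment_score: Callable[[str], float],
               select_helpful_sentences: Callable[[str, List[str]], List[str]],
               generate_todo: Callable[[str, List[str]], str],
               threshold: float = 0.9) -> Optional[str]:
    """Two-stage Smart To-Do flow: detect a commitment, select context, generate."""
    if not sentences:
        return None
    # Step 0: does the email contain a commitment sentence at all?
    score, query = max((commitment_score(s), s) for s in sentences)
    if score < threshold:
        return None  # no To-Do item for this email
    # Stage 1 (extractive): pick sentences that give context about the query.
    context = select_helpful_sentences(query, [s for s in sentences if s != query])
    # Stage 2 (abstractive): Seq2Seq (with copy) over query + context + meta-data.
    return generate_todo(query, context)
```

The remainder of the paper fills in how the commitment scorer is built (Section 3) and how the extractive and abstractive stages are realized (Section 4).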
3 Dataset Preparation
We build upon the Avocado dataset (Oard et al., 2015)4 containing an anonymized version of the Outlook mailbox for 279 employees with various meta-data and 938,035 emails overall.
4 Avocado is a more appropriate test bed than the Enron collection (Klimt and Yang, 2004) since it contains additional meta-data and it entered the public domain via the cooperation and consent of the legal owner of the corpus.

3.1 Identifying Action Items in Emails
Emails contain various user intents including planning and scheduling meetings, requests for information, exchange of information, casual conversations, etc. (Wang et al., 2019). For the purpose of this work, we first need to extract emails containing at least one sentence where the sender has promised to perform an action. It could be performing a task, providing some information, keeping others informed about a topic, and so on. We use the term commitment to refer to such intent in an email and the term commitment sentence to refer to each sentence with that intent.

Commitment classifier: A commitment classifier C : S → [0, 1] takes as input an email sentence S and returns the probability that the sentence is a commitment. The classifier is built using labels from an annotation task with 3 judges. The Cohen's kappa value is 0.694, depicting substantial agreement. The final label is obtained from the majority vote, generating a total of 9076 instances (with 2586 positive/commitment labels and 6490 negative labels). The classifier is an RNN-based model with word embeddings and self-attention geared for binary classification, with the input being the entire email context (Wang et al., 2019). The classifier has a precision of 86% and recall of 84% on sentences in the Avocado corpus.
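As a rough illustration of how such a classifier is applied to the corpus, the sketch below scores every sentence of every email and keeps those above a confidence cutoff. The scorer is passed in as a callable playing the role of C : S → [0, 1]; the 0.9 cutoff is the candidate-selection threshold used in Section 3.2, and the data layout is assumed for illustration.

```python
from typing import Callable, Dict, List, Tuple

def harvest_commitments(emails: Dict[str, List[str]],
                        classifier: Callable[[str, List[str]], float],
                        threshold: float = 0.9) -> List[Tuple[str, str]]:
    """Return (email_id, sentence) pairs whose commitment probability exceeds the threshold.

    classifier(sentence, email_sentences) stands in for C: S -> [0, 1],
    scoring a sentence given the entire email context.
    """
    candidates = []
    for email_id, sents in emails.items():
        for sent in sents:
            if classifier(sent, sents) >= threshold:
                candidates.append((email_id, sent))
    return candidates
```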
To further analyze the annotation quality, we randomly sampled 100 annotated To-Do items and asked a judge to rate them on (a) fluency (grammatical and spelling correctness), and (b) completeness (capturing all the action items in the email) on a 4 point scale (1: Poor, 2: Fair, 3: Good, 4: Excellent). Overall, we obtained a mean rating of 3.1 and 2.9 respectively for fluency and completeness. Table 1 shows a snapshot of the analysis. 4 Smart To-Do : Two Stage Generation In this section, we describe our two-stage approach to generate To-Do items. In the first stage, we select sentences that are helpful in writing the ToDo item. Emails contain generic sentences such as salutations, thanks and casual conversations not relevant to the commitment task. The objective of the first stage is to select sentences containing informative concepts necessary to write the To-Do. 4.1 Identifying Helpful Sentences for Commitment Task In the absence of reliable labels to extract helpful sentences in a supervised fashion, we resort to an unsupervised matching-based approach. Let the commitment sentence in the email be denoted as H, and the rest of the sentences from the current email ec and previous email ep be denoted as {s1, s2, . . . sd}. The unsupervised approach seeks to obtain a relevance score Ω(si) for each sentence. The top K sentences with the highest scores will be selected as the extractive summary for the commitment sentence (also referred to as the query). Enriched query context: We first extract top τ maximum frequency tokens from all the sentences in the given email, the commitment and the subject (i.e., {s1, s2, . . . sd} ∪H ∪Subject). Tokens are lemmatized and stop-words are removed. We set τ = 10 in our experiments. An enriched context for the query E is formed by concatenating the commitment sentence H, subject and top τ tokens. Relevance score computation: Task-specific relevance score Ωfor a sentence si is obtained by inner product in the embedding space with the enriched context. Let h(·) be the function denoting the embedding of a sentence with Ω(si) = h(si)T h(E). Our objective is to find helpful sentences for the commitment given by semantic similarity between concepts in the enriched context and a target sentence. In case of a short or less informative query, the subject and topic of the email provide useful information via the enriched context. We experiment with three different embedding functions. (1) Term-frequency (Tf) – The binarized term 8683 At-least One Helpful Algorithm @ K=2 @ K=3 Tf 0.80 0.85 FastText (Mean) 0.76 0.90 FastText (Max) 0.85 0.92 BERT (Pre-trained) 0.76 0.89 BERT (Fine-tuned) 0.80 0.89 Table 2: Performance of unsupervised approaches in identifying helpful sentences for a given query. frequency vector is used to represent the sentence. (2) FastText Word Embeddings – We trained FastText embeddings (Bojanowski et al., 2017) of dimension 300 on all sentences in the Avocado corpus. The embedding function h(sj) is given by taking the max (or mean) across the word-embedding dimension of all tokens in the sentence sj. (3) Contextualized Word Embeddings – We utilize recent advances in contextualized representations from pre-trained language models like BERT (Devlin et al., 2019). We use the second last layer of pre-trained BERT for sentence embeddings. We also fine-tuned BERT on the labeled dataset for commitment classifier. The dataset is first made balanced (2586 positive and 2586 negative instances). 
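Before turning to the fine-tuned BERT variant, the following minimal sketch makes the content-selection scoring concrete: it builds the enriched context E from the commitment sentence, subject, and top-τ frequent tokens, and ranks sentences by Ω(si) = h(si)ᵀh(E) using max-pooled word embeddings (the FastText (Max) variant). It is illustrative only: lemmatization and stop-word removal are omitted, and word_vecs is a stand-in for the FastText vectors trained on Avocado.

```python
# Minimal sketch (not the authors' code) of the unsupervised relevance scoring.
import numpy as np
from collections import Counter

def embed(sentence: str, word_vecs: dict, dim: int = 300) -> np.ndarray:
    """Max-pool word vectors over the tokens of a sentence (FastText (Max) variant)."""
    vecs = [word_vecs[t] for t in sentence.lower().split() if t in word_vecs]
    return np.max(vecs, axis=0) if vecs else np.zeros(dim)

def enriched_context(commitment: str, subject: str, sentences: list, tau: int = 10) -> str:
    """Concatenate the commitment sentence, subject, and top-tau frequent tokens."""
    counts = Counter(t for s in sentences + [commitment, subject] for t in s.lower().split())
    top_tokens = [t for t, _ in counts.most_common(tau)]
    return " ".join([commitment, subject] + top_tokens)

def top_k_helpful(sentences: list, commitment: str, subject: str,
                  word_vecs: dict, k: int = 2) -> list:
    """Rank candidate sentences by Omega(s_i) = h(s_i)^T h(E) and keep the top K."""
    e = embed(enriched_context(commitment, subject, sentences), word_vecs)
    scored = [(float(embed(s, word_vecs) @ e), s) for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)[:k]]

# Toy usage with random stand-in vectors:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = "i will send you the sales report thanks for your email".split()
    toy_vecs = {w: rng.standard_normal(300) for w in vocab}
    sents = ["thanks for your email", "could you send me the sales report"]
    print(top_k_helpful(sents, "i will send it to you", "sales report", toy_vecs, k=1))
```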
For the fine-tuned variant, uncased BERT is trained for 5 epochs for commitment classification, with the input being word-piece tokenized email sentences. This model is denoted as BERT (fine-tuned) in Table 2. Evaluation of unsupervised approaches: Retrieving at least one helpful sentence is crucial to obtain contextual information for the To-Do item. Therefore, we evaluate our approaches based on the proportion of emails where at least one helpful sentence is present in the top K retrieved sentences. We manually annotated 100 email instances and labeled every sentence as helpful or not based on (a) whether the sentence contains concepts appearing in the target To-Do item, and (b) whether the sentence helps to understand the task context. Inter-annotator agreement between 2 judgments for this task has a Cohen's kappa score of 0.69. This annotation task also demonstrates the importance of the previous email in a thread. Out of 100 annotated instances, 44 have a replied-to email, of which 31 contain a helpful sentence in the replied-to email body (70.4%). Table 2 shows the performance of the various unsupervised extractive algorithms. FastText with max-pooling of embeddings performs the best and is used in the subsequent generation stage. Figure 3: Seq2Seq with copy mechanism, where the output distribution is $p_w = (1 - p_{gen}) \cdot p_{attn} + p_{gen} \cdot p_{vocab}$. Tokens involving named entities and task-specific keywords from the email are learned to be copied into the To-Do item. 4.2 To-Do Item Generation The generation phase of our approach can be formulated as sequence-to-sequence (Seq2Seq) learning with attention (Sutskever et al., 2014; Bahdanau et al., 2014). It consists of two neural networks, an encoder and a decoder. The input to the encoder consists of concatenated tokens from different meta-data fields of the email like ‘sent-to’, ‘subject’, the commitment sentence H and extracted sentences I, separated by special markers. For instance, the input to the encoder for the example in Figure 1 is given as: <to> alice <sub> hello ? <query> i will send it to you <sent> could you send me the sales report ? <eos> We experiment with multiple versions of the generation model as follows: Vanilla Seq2Seq: Input tokens $\{x_1, x_2, \ldots, x_T\}$ are passed through a word-embedding layer and a single-layer LSTM to obtain encoded representations $h_t = f(x_t, h_{t-1})\ \forall t$ for the input. The decoder is another LSTM that makes use of the encoder state $h_t$ and prior decoder state $s_{t-1}$ to generate the target words at every timestep $t$. We consider Seq2Seq with an attention mechanism where the decoder LSTM uses the attention distribution $a_t$ over timesteps $t$ to focus on important hidden states to generate the context vector $h_t$. This is the first baseline in our work. $e_{t,t'} = v^{T} \tanh(W_h \cdot h_t + W_s \cdot s_{t'} + b)$, $a_{t,t'} = \mathrm{softmax}(e_{t,t'})$, $h_t = \sum_{t'} a_{t,t'} \cdot h_{t'}$ (1) Seq2Seq with copy mechanism: As the second model, we consider Seq2Seq with a copy mechanism (See et al., 2017) to copy tokens from important email fields. Copying is pivotal for To-Do item generation since every task involves named From: John Carter To: Helena Watson; Daniel Craig; Rupert Grint Subject: Thanks Thank you for helping me prepare the paper draft for ACL conference. Attached is the TeX file. Please feel free to make any changes to the revised version. I sent to my other collaborators already and am waiting for their suggestions. I’ll keep you posted. Thanks, John.
GOLD: Keep Helena posted about paper draft for ACL conference. PRED: Keep Helana posted about ACL conference. From: Raymond Jiang To:[email protected] Subject: Bug 62 Hi, there is a periodic bug 62 appearing in my cellphone browser, whenever I choose to open the request. It might be a JavaScript issue on our side, but it would be nice if you take a look. Thanks, Ray. From: Criag Johnson To: Raymond Jiang Subject: Bug 62 Good Morning Ray, I shall take a look at it and get back to you. GOLD: Take a look at Bug 62 and get back to Raymond. PRED: Take a look at periodic and get back to Raymond. Table 3: Generation example (GOLD: manual annotation, PRED: machine-generated) with email context. Algorithm BLEU-4 Rouge-1 Rouge-2 Rouge-L Concatenate 0.13 0.52 0.28 0.50 Seq2Seq (vanilla) 0.14 0.53 0.31 0.56 Seq2Seq (copy) 0.23 0.60 0.41 0.63 Seq2Seq (BiFocal) 0.18 0.56 0.34 0.58 Human Judgment 0.21 0.60 0.37 0.60 Table 4: Comparison of various models for To-Do generation with BLEU and ROUGE (higher is better). entities in terms of the persons involved, specific times and dates when the task has to be accomplished and other task-specific details present in the email context. To understand the copy mechanism, consider the decoder input at each decoding step as yt and the context vector as ht. The decoder at each timestep t has the choice of generating the output word from the vocabulary V with probability pgen = φ(ht, st, yt), or with probability 1 −pgen it can copy the word from the input context. To allow that, the vocabulary is extended as V′ = V ∪{x1, x2, . . . xT }. The model is trained end-to-end to maximize the log-likelihood of target words (To-Do items) given the email context. Seq2Seq BiFocal: As a third model, we experimented with query-focused attention having two encoders – one containing only tokens of the query and the other containing rest of the input context. We use a bifocal copy mechanism that can copy tokens from either of the encoders. We refer the reader to the Appendix for more details about training and hyper-parameters used in our models. 5 Experimental Results We trained the above neural networks for To-Do item generation on our annotated dataset. Of the 9349 email instances with To-Do items, we used 7349 for training and 1000 each for validation and testing. For each instance, we chose the annotation with fewer tokens as ground-truth reference. The median token length of the encoder input is 43 (including the helpful sentence). Table 4 shows the performance comparison of various models. We report BLEU-4 (Papineni et al., 2002) and the F1-scores for Rouge-1, Rouge-2 and Rouge-L (Lin, 2004). We also report the human performance for this task in terms of the above metrics computed between annotations from the two judges. A trivial baseline – which concatenates tokens from the ‘sent-to’ and ‘subject’ fields and the commitment sentence – is included for comparison. The best performance is obtained with Seq2Seq using copying mechanism. We observe our model to perform at par with human performance for writing To-Do items. Table 3 shows some examples of To-Do item generation from our best model. 6 Conclusions In this work, we study the problem of automatic ToDo item generation from email context and metadata to provide smart contextual assistance in email applications. To this end, we introduce a new task and dataset for action-focused text intelligence. We design a two stage framework with deep neural networks for task-focused text generation. 
There are several directions for future work including better architecture design for utilizing structured meta-data and replacing the two-stage framework with a multi-task generation model that can jointly identify helpful context for the task and perform corresponding text generation. 8685 References Hosein Azarbonyad, Robert Sim, and Ryen W White. 2019. Domain adaptation for commitment detection in email. In Proceedings of the twelfth ACM international conference on web search and data mining, pages 672–680. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. ICLR. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Giuseppe Carenini, Raymond T Ng, and Xiaodong Zhou. 2007. Summarizing email conversations with clue words. In Proceedings of the 16th international conference on World Wide Web, pages 91–100. ACM. Vitor R Carvalho and William W Cohen. 2005. On the collective classification of email speech acts. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 345–352. ACM. Mia Xu Chen, Benjamin N. Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang, Justin Lu, Jackie Tsay, Yinan Wang, Andrew M. Dai, Zhifeng Chen, Timothy Sohn, and Yonghui Wu. 2019. Gmail smart compose: Real-time assisted writing. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19. ACM. Simon Corston-Oliver, Eric Ringger, Michael Gamon, and Richard Campbell. 2004. Task-focused summarization of email. In Text Summarization Branches Out. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Mark Dredze, Hanna M Wallach, Danny Puller, and Fernando Pereira. 2008. Generating summary keywords for emails using topics. In Proceedings of the 13th international conference on Intelligent user interfaces, pages 199–206. ACM. Tanya Feddern-Bekcan. 2008. Google calendar. Journal of the Medical Library Association: JMLA, 96(4):394. Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, Laszlo Lukacs, Marina Ganea, Peter Young, et al. 2016. Smart reply: Automated response suggestion for email. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 955– 964. ACM. Bryan Klimt and Yiming Yang. 2004. The enron corpus: A new dataset for email classification research. In European Conference on Machine Learning, pages 217–226. Springer. Andrew Lampert, Robert Dale, and Cecile Paris. 2010. Detecting emails containing requests for action. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 984– 992. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Chu-Cheng Lin, Dongyeop Kang, Michael Gamon, and Patrick Pantel. 2018. Actionable email intent modeling with reparametrized rnns. In ThirtySecond AAAI Conference on Artificial Intelligence. Douglas Oard, William Webber, David Kirsch, and Sergey Golitsynskiy. 2015. Avocado research email collection. 
In LDC2015T03. DVD. Philadelphia: Linguistic Data Consortium. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Sara Radicati and J Levenstein. 2015. Email statistics report, 2015-2019. Radicati Group, Palo Alto, CA, USA, Tech. Rep. Owen Rambow, Lokesh Shrestha, John Chen, and Chirsty Lauridsen. 2004. Summarizing email threads. In Proceedings of HLT-NAACL 2004: Short Papers, HLT-NAACL-Short ’04. Association for Computational Linguistics. Simon Scerri, Gerhard Gossen, Brian Davis, and Siegfried Handschuh. 2010. Classifying action items for semantic email. In LREC. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. Advances in NIPS. Wei Wang, Saghar Hosseini, Ahmed Hassan Awadallah, Paul N. Bennett, and Chris Quirk. 2019. Context-aware intent identification in email conversations. In Proceedings of the 42Nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR’19. Rui Zhang and Joel R. Tetreault. 2019. This email could save your life: Introducing the task of email subject line generation. In ACL. 8686 A Appendix A.1 Hyper-parameters We now provide the hyper-parameters and training details for ease of reproducibility of our results. The encoder-decode architecture consists of LSTM units. The word embedding look-up matrix is initialized using Glove embeddings and then trained jointly to adapt to the structure of the problem. We found this step crucial for improved performance. Using random initialization or static Glove embeddings degraded performance. We also experimented with using either a shared or a separate vocabulary for the encoder and decoder. A token was included in the vocabulary if it occurred at least 2 times in the training input/target. Separate vocabulary for source and target had better performance. Typically, source vocabulary had higher number of tokens than target. A shared dictionary led to increased number of parameters in the decoder and to subsequent over-fitting. The validation data was used for early stopping. The patience was decreased whenever either the validation token accuracy or perplexity failed to improve. We used the OpenNMT framework in PyTorch for all our Seq2Seq experiments. Table 5 lists the hyper-parameters of the best performing model. Hyper-parameter Value Rnn-type LSTM Rnn-size 256 # Layers 1 Word-embedding 100 Embedding init. Glove Batch size 64 Optimizer Adagrad Learning rate 0.15 Adagrad accumulator init. 0.1 Max. Gradient norm 2.0 Dropout 0.5 Attention dropout 0.5 Tokenizer spacy Vocabulary Separate Early Stopping (Patience) 5 Beam width 5 Table 5: Seq2Seq with copy mechanism : Hyperparameters for the best model. A.2 Illustrative Examples In this Section, we provide further examples of the email threads along with the highlighted commitment sentence. Note that some of the emails have previous thread email present, and some do not have it. For each of these examples, we also provide the To-Do item written by the human judge (denoted as GOLD) and that predicted by our best model (denoted as PRED). 
As in the main text, the sentences have been paraphrased and names changed due to the data sensitivity of Avocado. 8687 From: Beverly Evans To: Carlos Simmons Subject: Amazon.com update Carlos, I came to know today from John Carter than we received a PHP script that is not decoding the correct database. Can you check with them why they sent us the eCommerce PHP code when the loss of functionality was not out fault? I have registered the error log in the eCommerce section because the staff scientist from Amazon mentioned it in his email. He also said they have not been able to resolve the issue and surprisingly did not mention who we should contact next. (This email exchange was about a week ago when I had handed them the cloud expenditures.) Also, we need to generate a PHP example to replicate the error. Could you update me if the team is working on it? Thanks, Beverly From: Carlos Simmons To: Beverly Evans Subject: Amazon.com update The PHP they shared with us is an example. eCommerce is not what they want us to resolve. I feel we should wait until their engineers test all possibilities. Joseph informed us that they need to test the database more carefully and figure out which PHP code to send to us and whether they want our feedback on the database. I am not sure why they sent me a ’relevant PHP example’ I thought there was the only file they sent us yesterday. I will forward that to you and Renata. GOLD: Forward PHP example to Beverly and Renata. PRED: Forward eCommerce PHP to Beverly. Table 6: Illustrative Example 1 From: Kirstin Barnes To: Nannie Jacobs Subject: Ready for Product Launch Nannie, I am ready for the product launch. I need to include some of the enhancements in the presentation. I’ll submit what is already completed and then do the remaining after the meeting.. Kirstin Barnes Product Engineer AvocadoIT, Inc. GOLD: Submit presentation with product enhancements. PRED: Submit the enhancements for product launch. Table 7: Illustrative Example 2 From: Rishabh Iyer To: R&D Subject: Software not ready yet for deployment Hello, Unlike our plan last month, the software is still not ready for deployment. The team put together some errors last week. We must plan to make it available latest by next week. I will keep you posted. Thanks, Rishabh Iyer. Software Engineer AvocadoIT Inc. GOLD: Keep r&d posted about deployment of software. PRED: Keep r&d posted about deployment. Table 8: Illustrative Example 3 From: Justine Sparrow To: Roma Patterson Subject: 24x7 Helpline Roma, I will bring this up in the Staff meeing today. I’ll let you know the outcome. Could you confirm if this is for a license agreement or a shared solution ? Thanks, Justine. GOLD: Let Roma know result. PRED: Let Roma know about the license agreement. Table 9: Illustrative Example 4 8688 From: Rebecca Anderson To: Julia Roberts Subject: Run a bash script while synchronize Julia, When synchronizing is done, we want to run a bash script to delete old records on the machine and remove all activity logs. How can I do this ? What is the way to perform this operation ? Also, in the bash script, is there a way to sort the dates so that we can identify older activities ? Thanks, Rebecca. From: Julia Roberts To: Rebecca Anderson Subject: Run a bash script while synchronize Rebecca, We had exactly the same feature to delete activities which you mentioned in our previous release. But we no longer have that in the new version due to resource constraints. I will take to John to review this again. Thanks, Julia. 
GOLD: Talk to John to review bash script again. PRED: Talk to John to review the activities. Table 10: Illustrative Example 5 From: Ramesh Paul To: Gopal Majumdar Subject: Updates List for 3/11 Here’s the update for this week. 1. The R&D team is working on a presentation for the knowledge tranfer for v5. It should be ready within next two weeks. 2. I have received their email, but need to review the ppt. 3. Did you want to know more about the new cloud feature for automatic version management ? Or was it a different feature ? 4. I am constantly working on this. 5. Didn’t we discuss this point in our last email ? 6. We are making similar tests in the desktop for v5 before migrating to the cloud. We first have to make sure things work well for the desktop. I will send you more details soon. Did you get a chance to update your blog with information about these new features ? Thanks, Ramesh. GOLD: Send Gopal more details about tests in the desktop for v5. PRED: Send Gopal more details on presentation for the knowledge transfer. Table 11: Illustrative Example 6 From: Lori Howard To: Karen James; Bruce Thomas; Steve Perry Subject: Room reservations Team, This needs to be done through a formal training session, but as of now let me point out some crucial points about room reservations. 1. In case you allocate a room for general meetings and administrative work, then make sure you book it for that month, but not for long periods of time. (Karen, can you check with Renata whether this is fulfilled for our meetings next week?) 2. In case of clients who do not need the entire month, make sure to reserve only for the particular month. If it exceeds that time, the system will authomatically resolve it and reserve it for next month. 3. For room reservation, either enter the number of hours required or the % of month, but not both. I would prefer precise hours. I will inform you when we can provide training, perhaps we can next week. Thanks, Lori. GOLD: Let Karen know about the training provide for room reservations. PRED: Let Karen know about room reservations. Table 12: Illustrative Example 7 8689 From: Matthew White To: Frank; Paul; Dennis Subject: Draft Agenda for Software Training Dear All, As discussed before, we have finally come to a concrete plan. I have attached the draft for your review. Please go over it and let me know asap your suggestions so that I can send them to the organizers. Please check the agenda and the names of trainees. I’ll put together the Training plan and the overall 5-day agenda as soon as I can. Matthew. GOLD: Put together the training plan and the overall day agenda of software training. PRED: Put together the draft agenda for software training. Table 13: Illustrative Example 8 From: Diana Wilson To: Alba Deacon Subject: DHL package from IBM Alba, I was able to track the package and as per the website it was in Sao Luis, Brazil at noon. I am not sure where it is, but it is Brazil so ... Send me an update if you receive it from them. I just tracked the package and as of 10:00am today it was in Toluca, Mexico. Where that is I have no idea but it is in Mexico so ... Let me know if you hear from them when they receive it. Thanks. Diana Wilson. From: Alba Deacon To: Diana Wilson Subject: DHL package from IBM Thanks Diana. If I hear anything I’ll let you know.. Alba. GOLD: Let Diana know about DHL package from IBM. PRED: Let Diana know about DHL package from IBM. Table 14: Illustrative Example 9
2020
767
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8690–8705 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8690 Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition Paloma Jeretiˇc∗1, Alex Warstadt∗1, Suvrat Bhooshan2, Adina Williams2 1Department of Linguistics, New York University 2Facebook AI Research {paloma,warstadt}@nyu.edu, {sbh,adinawilliams}@fb.com Abstract Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of >25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by “some” as entailments. For some presupposition triggers like only, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences. 1 Introduction One of the most foundational semantic discoveries is that systematic rules govern the inferential relationships between pairs of natural language sentences (Aristotle, De Interpretatione, Ch. 6). In natural language processing, Natural Language Inference (NLI)—a task whereby a system determines whether a pair of sentences instantiates in an entailment, a contradiction, or a neutral relation—has been useful for training and evaluating models on sentential reasoning. However, linguists and philosophers now recognize that there ∗Equal Contribution Figure 1: Illustration of key properties of classical entailments, implicatures, and presuppositions. Solid arrows indicate valid commonsense entailments, and arrows with X’s indicate lack of entailment. Dashed arrows indicate follow up statements with the addition of in fact, which can either be acceptable (marked with ‘’) or unacceptable (marked with ‘’). are separate semantic and pragmatic modes of reasoning (Grice, 1975; Clark, 1996; Beaver, 1997; Horn and Ward, 2004; Potts, 2015), and it is not clear which of these modes, if either, NLI models learn. We investigate two pragmatic inference types that are known to differ from classical entailment: scalar implicatures and presuppositions. As shown in Figure 1, implicatures differ from entailments in that they can be denied, and presuppositions differ from entailments in that they are not canceled when placed in entailment-cancelling environments (e.g., negation, questions). To enable research into the relationship between NLI and pragmatic reasoning, we introduce IMPPRES, a fine-grained NLI-style diagnostic test dataset for probing how well NLI models perform implicature and presupposition. 
Containing 25.5K sentence pairs illustrating key properties of these pragmatic inference types, IMPPRES is automatically generated according to linguist-crafted templates, allowing us to create a large, lexically varied, and well controlled dataset targeting specific 8691 instances of both types. We first investigate whether presuppositions and implicatures are present in NLI models’ training data. We take MultiNLI (Williams et al., 2018) as a case study, and find it has few instances of pragmatic inference, and almost none that arise from specific lexical triggers (see §4). Given this, we ask whether training on MultiNLI is sufficient for models to generalize about these largely absent commonsense reasoning types. We find that generalization is possible: the BERT NLI model shows evidence of pragmatic reasoning when tested on the implicature from some to not all, and the presuppositions of certain triggers (only, cleft existence, possessive existence, questions). We obtain some negative results, that suggest that models like BERT still lack a sophisticated enough understanding of the meanings of the lexical triggers for implicature and presupposition (e.g., BERT treats several word pairs as synonyms, e.g., most notably, or and and). Our contributions are: (i) we provide a new diagnostic test set to probe for pragmatic inferences, complete with linguistic controls, (ii) to our knowledge, we present the first work evaluating deep NLI models on specific pragmatic inferences, and (iii) we show that BERT models can perform some types of pragmatic reasoning very well, even when trained on NLI data containing very few explicit examples of pragmatic reasoning. We publicly release all IMPPRES data, models evaluated, annotations of MultiNLI, and the scripts used to process data.1 2 Background: Pragmatic Inference We take pragmatic inference to be a relation between two sentences relying on the utterance context and the conversational goals of interlocutors. Pragmatic inference contrasts with semantic entailment, which instead captures the logical relationship between isolated sentence meanings (Grice, 1975; Stalnaker, 1974). We present implicature and presupposition inferences below. 2.1 Implicature Broadly speaking, implicatures contrast with entailments in that they are inferences suggested by the speaker’s utterance, but not included in its literal (Grice, 1975). Although there are many types 1github.com/facebookresearch/ImpPres Type Example Trigger Jo’s cat yawned. Presupposition Jo has a cat. Negated Trigger Jo’s cat didn’t yawn. Modal Trigger It’s possible that Jo’s cat yawned. Interrog. Trigger Did Jo’s cat yawn? Cond. Trigger If Jo’s cat yawned, it’s OK. Negated Prsp. Jo doesn’t have a cat. Neutral Prsp. Amy has a cat. Table 1: Sample generated presupposition paradigm. Examples adapted from the ‘change-of-state’ dataset. of implicatures we focus here on scalar implicatures. Scalar implicatures are inferences, often optional,2 which can be drawn when one member of a memorized lexical scale (e.g., ⟨some, all⟩) is uttered (see §6.1). For example, when someone utters Jo ate some of the cake, they suggest that Jo didn’t eat all of the cake, (see Figure 1 for more examples). According to Neo-Gricean pragmatic theory (Horn, 1989; Levinson, 2000), the inference Jo didn’t eat all of the cake arises because some has a more informative lexical alternative all that could have been uttered instead. 
We expect the speaker to make the most informative true statement:3 as a result, the listener should infer that a stronger statement, where some is replaced by all, is false. Implicatures differ from entailments (and, as we will see, presuppositions; see Figure 1) in that they are deniable, i.e., they can be explicitly negated without resulting in a contradiction. For example, someone can utter Jo ate some of the cake, followed by In fact, Jo ate all of it. In this case, the implicature (i.e., Jo didn’t eat all the cake from above) has been denied. We thus distinguish implicated meaning from literal, or logical, meaning. 2.2 Presupposition Presuppositions of a sentence are facts that the speaker takes for granted when uttering a sentence (Stalnaker, 1974; Beaver, 1997). Presuppositions are generally associated with the presence of certain expressions, known as presupposition triggers. For example, in Figure 1, the definite de2Implicature computation can depend on the cooperativity of the speakers, or on any aspect of the context of utterance (lexical, syntactic, semantic/pragmatic, discourse). See Degen (2015) for a study of the high variability of implicature computation, and the factors responsible for it. 3This follows if we assume that speakers are cooperative (Grice, 1975) and knowledgeable (Gazdar, 1979). 8692 scription the cake triggers the presupposition that there is a cake (Russell, 1905). Other examples of presupposition triggers are shown in Table 1. Presuppositions differ from other inference types in that they generally project out of operators like questions and negation, meaning that they remain valid inferences even when embedded under these operators (Karttunen, 1973). The inference that there is a cake survives even when the presupposition trigger is in a question (Did Jordan eat some of the cake?), as shown in Figure 1. However, in questions, classical entailments and implicatures disappear. Table 1 provides examples of triggers projecting out of several entailment canceling operators: negation, modals, interrogatives, and conditionals. It is necessary to clarify in what sense presupposition is a pragmatic inference. There is no consensus on whether presuppositions should be considered part of the semantic content of expressions (see Stalnaker, 1974; Heim, 1983, for opposing views). However, presuppositions may come to be inferred via accommodation, a pragmatic process by which a listener infers the truth of some new fact based on its being presupposed by the speaker (Lewis, 1979). For instance, if Jordan tells Harper that the King of Sweden wears glasses, and Harper did not previously know that Sweden has a king, they would learn this fact by accommodation. With respect to NLI, any presupposition in the premise (short of world knowledge) will be new information, and therefore accommodation is necessary to recognize it as entailed. 3 Related Work NLI has been framed as a commonsense reasoning task (Dagan et al., 2006; Manning, 2006). One early formulation of NLI defines “entailment” as holding for sentences p and h whenever, “typically, a human reading p would infer that h is most likely true. . . [given] common human understanding of language [and] common background knowledge” (Dagan et al., 2006). 
Although this sparked debate regarding the terms inference and entailment—and whether an adequate notion of “inference” could be defined (Zaenen et al., 2005; Manning, 2006; Crouch et al., 2006)—in recent work, a commonsense formulation of “inference” is widely adopted (Bowman et al., 2015; Williams et al., 2018) largely because it facilitates untrained annotators’ participation in dataset creation. NLI itself has been steadily gaining in popularity; many datasets for training and/or testing systems are now available including: FraCaS (Cooper et al., 1994), RTE (Dagan et al., 2006; Mirkin et al., 2009; Dagan et al., 2013), Sentences Involving Compositional Knowledge (Marelli et al., 2014, SICK), large scale imaging captioning as NLI (Bowman et al., 2015, SNLI), recasting other datasets into NLI (Glickman, 2006; White et al., 2017; Poliak et al., 2018), ordinal commonsense inference (Zhang et al., 2017, JOCI), MultiPremise Entailment (Lai et al., 2017, MPE), NLI over multiple genres of written and spoken English (Williams et al., 2018, MultiNLI), adversarially filtered common sense reasoning sentences (Zellers et al., 2018, 2019, (Hella)SWAG), explainable annotations for SNLI (Camburu et al., 2018, e-SNLI), cross-lingual NLI (Conneau et al., 2018, XNLI), scientific questioning answering as NLI (Khot et al., 2018, SciTail), NLI recastquestion answering (part of Wang et al. 2019, GLUE), NLI for dialog (Welleck et al., 2019), and NLI over narratives that require drawing inferences to the most plausible explanation from text (Bhagavatula et al., 2020, αNLI). Other NLI datasets are created to identify where models fail (Glockner et al., 2018; Naik et al., 2018; McCoy et al., 2019; Schmitt and Sch¨utze, 2019), many of which are also automatically generated (Geiger et al., 2018; Yanaka et al., 2019a,b; Kim et al., 2019; Nie et al., 2019; Richardson et al., 2020). As datasets for NLI become increasingly numerous, one might wonder, do we need yet another NLI dataset? In this case, the answer is clearly yes: despite NLI’s formulation as a commonsense reasoning task, it is still unknown whether this framing has resulted in models that learn specific modes of pragmatic reasoning. IMPPRES is the first NLI dataset to explicitly probe whether models trained on commonsense reasoning actually do treat pragmatic inferences like implicatures and presuppositions as entailments without additional training on these specific inference types. Beyond NLI, several recent works introduce resources for evaluating sentence understanding models for knowledge of pragmatic inferences. On the presupposition side, datasets such as MegaVeridicality (White and Rawlins, 2018) and CommitmentBank (de Marneffe et al., 2019) compile gradient crowdsourced judgments regarding how likely a clause embedding predicate is to trig8693 ger a presupposition that its complement clause is true. White et al. (2018) and Jiang and de Marneffe (2019) find that LSTMs trained on a gradient event factuality prediction task on these respective datasets make systematic errors. Turning to implicatures, Degen (2015) introduces a dataset measuring the strength of the implicature from some to not all with crowd-sourced judgments. Schuster et al. (2020) find that an LSTM with supervision on this dataset can predict human judgments well. These resources all differ from IMPPRES in two respects: First, their empirical scopes are all somewhat narrower, as all these datasets focus on only a single class of presupposition or implicature triggers. 
Second, the use of gradient judgments makes it non-trivial to use these datasets to evaluate NLI models, which are trained to make categorical predictions about entailment. Both approaches have advantages, and we leave a direct comparison for future work. Outside the topic of sentential inference, Rashkin et al. (2018) propose a new task where a model must label actor intents and reactions for particular actions described using text. Cianflone et al. (2018) create sentence-level adverbial presupposition datasets and train a binary classifier to detect contexts in which presupposition triggers (e.g., too, again) can be used. 4 Annotating MultiNLI for Pragmatics In this section, we present results of an annotation effort that show that MultiNLI contains very little explicit evidence of pragmatic inferences of the type tested by IMPPRES. Although Williams et al. (2018) report that 22% of the MultiNLI development set sentence pairs contain lexical triggers (such as regret or stopped) in the premise and/or hypothesis, the mere presence of presuppositiontriggering lexical items in the data does not show that MultiNLI contains evidence that presuppositions are entailments, since the sentential inference may focus on other types of information. To address this, we randomly selected 200 sentence pairs from the MultiNLI matched development set and presented them to three expert annotators with a combined total of 17 years of training in formal semantics and pragmatics.4 Annotators answered the following questions for each pair: (1) are the sentences P and H related by a presupposition/implicature relation (entails/is en4The full annotations are on the IMPPRES repository. tailed by, negated or not); (2) what subtype of inference (e.g., existence presupposition, ⟨some, all⟩ implicature); (3) is the presupposition trigger embedded under an entailment-cancelling operator? Agreement among annotators was low, suggesting that few MultiNLI pairs are paradigmatic cases of implicatures or presuppositions. We found only 8 presupposition pairs and 3 implicature pairs on which two or more annotators agreed. Moreover, we found only one example illustrating a particular inference type tested in IMPPRES (the presupposition of possessed definites). All others were tagged as existence presuppositions and conversational implicatures (i.e. loose inferences dependent on world knowledge). The union of annotations was much larger: 42% of examples were identified by at least one annotator as a presupposition or implicature (51 presuppositions and 42 implicatures, with 10 sentences receiving divergent tags). However, of these, only 23 presuppositions and 19 implicatures could reliably be used to learn pragmatic inference (in 14 cases, the given tag did not match the pragmatic inference, and in 27 cases, computing the inference did not affect the relation type). Again, the large majority of implicatures were conversational, and most presuppositions were existential, and generally not linked to particular lexical triggers (e.g., topic marking). We conclude that the MultiNLI dataset at best contains some evidence of loose pragmatic reasoning based on world knowledge and discourse structure, but almost no explicit information relevant to lexically triggered pragmatic inference, which is of the type tested in this paper. 5 Methods Data Generation. IMPPRES consists of semiautomatically generated pairs of sentences with NLI labels illustrating key properties of implicatures and presuppositions. 
We generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b). The codebase, including our scripts and documentation, are publicly available.5 Each sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure morphological, 5github.com/alexwarstadt/data generation 8694 Premise Hypothesis Relation type Logical label Pragmatic label Item type some not all implicature (+ to −) neutral entailment target not all some implicature (−to +) neutral entailment target some all negated implicature (+) neutral contradiction target all some reverse negated implicature (+) entailment contradiction target not all none negated implicature (−) neutral contradiction target none not all reverse negated implicature (−) entailment contradiction target all none opposite contradiction contradiction control none all opposite contradiction contradiction control some none negation contradiction contradiction control none some negation contradiction contradiction control all not all negation contradiction contradiction control not all all negation contradiction contradiction control Table 2: Paradigm for the scalar implicature datasets, with ⟨some, all⟩as an example. syntactic, and semantic well-formedness. All sentences generated from a given template are structurally analogous up to the specified constituents, but may vary in sub-constituents. For instance, if the template calls for a verb phrase, the generated constituent may include a direct object or complement clause, depending on the argument structure of the sampled verb. See §6.1 and 7.1 for descriptions of the sentence types in the implicature and presupposition data. Generating data lets us control the lexical and syntactic content so that we can guarantee that the sentence pairs in IMPPRES evaluate the desired phenomenon (see Ettinger et al., 2016, for related discussion). Furthermore, the codebase we use allows for greater lexical and syntactic variety than in many other templatic datasets (see discussion in Warstadt et al., 2019b). One limitation of this methodology is that generated sentences, while generally grammatical, often describe highly unlikely scenarios, or include low frequency combinations of lexical items (e.g., Sabrina only reveals this pasta). Another limitation is that generated data is of limited use for training models, since it contains simple regularities that supervised classifiers may learn to exploit. Thus, we create IMPPRES solely for the purpose of evaluating NLI models trained on standard datasets like MultiNLI. Models. Our experiments evaluate NLI models trained on MultiNLI and built on top of three sentence encoding models: a bag of words (BOW) model, InferSent (Conneau et al., 2017), and BERT-Large (Devlin et al., 2019). The BOW and InferSent models use 300D GloVe embeddings as word representations (Pennington et al., 2014). For the BOW baseline, word embeddings for premise and hypothesis are separately summed to create sentence representations, which are concatenated to form a single sentence-pair representation which is fed to a logistic regression softmax classifier. 
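A minimal sketch of this bag-of-words baseline is shown below. It is an illustrative reconstruction rather than the exact implementation; in particular, the embedding layer stands in for the 300D GloVe initialization.

```python
# Illustrative sketch of the BOW NLI baseline: premise and hypothesis word
# embeddings are summed separately, the two sentence vectors are concatenated,
# and a softmax (logistic regression) layer predicts the three NLI classes.
import torch
import torch.nn as nn

class BOWNLIBaseline(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 300, n_classes: int = 3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Single linear layer over [premise_sum; hypothesis_sum] = logistic regression.
        self.classifier = nn.Linear(2 * emb_dim, n_classes)

    def forward(self, premise_ids: torch.Tensor, hypothesis_ids: torch.Tensor) -> torch.Tensor:
        # Sum word embeddings over the token dimension for each sentence.
        p = self.embedding(premise_ids).sum(dim=1)
        h = self.embedding(hypothesis_ids).sum(dim=1)
        pair = torch.cat([p, h], dim=-1)
        return self.classifier(pair)  # logits over {entailment, neutral, contradiction}

# Example: a batch of 2 sentence pairs, each padded to length 5.
model = BOWNLIBaseline(vocab_size=10000)
logits = model(torch.randint(1, 10000, (2, 5)), torch.randint(1, 10000, (2, 5)))
print(logits.shape)  # torch.Size([2, 3])
```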
For the InferSent model, GloVe embeddings for the words in premise and hypothesis are respectively fed into a bidirectional LSTM, after which we concatenate the representations for premise and hypothesis, their difference, and their element-wise product (Mou et al., 2016). BERT is a multilayer bidirectional transformer pretrained with the masked language modelling and next sequence prediction objectives, and finetuned on the MultiNLI dataset. We concatenate the premise and hypothesis after a special [CLS] token and separated them with the [SEP] token. The BERT representation for the [CLS] token is fed into classifier. We use Huggingface’s pre-trained BERT trained on Toronto books (Zhu et al., 2015).6 The BOW and InferSent models have development set accuracies of 49.6% and 67.6%. The development set accuracy for BERT-Large on MultiNLI is 86.6%, similar to the results achieved by (Devlin et al., 2019), but somewhat lower than state-of-the-art (currently 90.8% on test from the ensembled RoBERTa model with long pretraining optimization, Liu et al. 2019). 6 Experiment 1: Scalar Implicatures 6.1 Scalar Implicature Datasets The scalar implicature portion of IMPPRES includes six datasets, each isolating a different scalar implicature trigger from six types of lexical scales (of the type described in §2): determiners ⟨some, all⟩, connectives ⟨or, and⟩, modals ⟨can, have to⟩, numerals ⟨2,3⟩, ⟨10,100⟩, scalar adjectives, and 6github.com/huggingface/pytorch-pretrained-BERT/ 8695 Figure 2: Results on Controls (Implicatures) Figure 3: Results on Target Conditions (Implicatures) verbs, e.g., ⟨good, excellent⟩, ⟨run, sprint⟩. Examples pairs of each implicature trigger can be found in Table 4 in the Appendix. For each type, we generate 100 paradigms, each consisting of 12 unique sentence pairs, as shown in Table 2. The six target sentence pairs comprise two main relation types: ‘implicature’ and ‘negated implicature’. Pairs tagged as ‘implicature’ have a premise that implicates the hypothesis (e.g., some and not all). For ‘negated implicature’, the premise implicates the negation of the hypothesis (e.g., some and all), or vice versa (e.g., all and some). Six control pairs are logical contradictions, representing either scalar ‘opposites’ (e.g., all and none), or ‘negations’ (e.g., not all and all; some and none), probing the models’ basic grasp of negation. As mentioned in §2.1, implicature computation is variable and dependent on the context of utterance. Thus, we anticipate two possible rational behaviors for a MultiNLI-trained model tested on an implicature: (a) be pragmatic, and compute the implicature, concluding that the premise and hypothesis are in an ‘entailment’ relation, (b) be logical, i.e., consider only the literal content, and not compute the implicature, concluding they are in a ‘neutral’ relation. Thus, we measure both possible conclusions, by tagging sentence pairs for scalar implicature with two sets of NLI labels to reflect the behavior expected under “logical” and “pragmatic” modes of inference, as shown in Table 2. 6.2 Implicatures Results & Discussion We first evaluate model performance on the controls, shown in Figure 2. Success on these controls is a necessary condition for us to conclude that a model has learned the basic function of negation (not, none, neither) and the scalar relationship between terms like some and all. We find that BERT performs at ceiling on control conditions for all implicature types, in contrast with InferSent and BOW, whose performance is very variable. 
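Accuracies of this kind are obtained by grouping a model's predictions by paradigm condition and comparing them with the logical and pragmatic labels of Table 2. The sketch below illustrates this bookkeeping; the record fields (condition, logical_label, pragmatic_label, prediction) are hypothetical placeholders, not the actual IMPPRES file schema.

```python
# Hedged sketch: per-condition accuracy against logical vs. pragmatic gold labels.
from collections import defaultdict

def per_condition_accuracy(records):
    """records: iterable of dicts with a condition, gold labels, and a model prediction."""
    hits = defaultdict(lambda: {"logical": 0, "pragmatic": 0, "n": 0})
    for r in records:
        c = hits[r["condition"]]
        c["n"] += 1
        c["logical"] += int(r["prediction"] == r["logical_label"])
        c["pragmatic"] += int(r["prediction"] == r["pragmatic_label"])
    return {
        cond: {"logical_acc": c["logical"] / c["n"],
               "pragmatic_acc": c["pragmatic"] / c["n"]}
        for cond, c in hits.items()
    }

# Example with two toy records:
toy = [
    {"condition": "implicature (+ to -)", "logical_label": "neutral",
     "pragmatic_label": "entailment", "prediction": "entailment"},
    {"condition": "negation", "logical_label": "contradiction",
     "pragmatic_label": "contradiction", "prediction": "contradiction"},
]
print(per_condition_accuracy(toy))
```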
Since only BERT passes all controls, its results on the target items are most interpretable. Full results for all models and target conditions by implicature trigger are in Figures 8–13 in the Appendix. For connectives, scalar adjectives and verbs, the BERT model results correspond neither to the hypothesized pragmatic nor logical behavior. In fact, for each of these subdatasets, the results are consistent with a treatment of scalemates (e.g., and and or; good and excellent) as synonyms, e.g. it evaluates the ‘negated implicature’ sentence pairs as ‘entailment’ in both directions. This reveals a coarse-grained knowledge of these meanings that lacks information about asymmetric informativity relations between scalemates. Results for modals (can and have to) are split between the three labels, not showing any predicted logical or pragmatic pattern. We conclude that BERT has insufficient knowledge of the meaning of these words. In addition to pragmatic and logical interpretations, numerals can also be interpreted as exact cardinalities. We thus predict three different behaviors: logical “at least n”, pragmatic “at least n”, and “exactly n”. We observe that results are inconsistent: neither the “exactly” nor “at least” interpretations hold across the board. 8696 Figure 4: BERT results for scalar implicatures triggered by determiners ⟨some, all⟩, by target condition. For the determiner dataset (some-all), Figure 4 breaks down the results by condition and shows that BERT behaves as though it performs pragmatic and logical reasoning in different conditions. Overall, it predicts a pragmatic relation more frequently (55% vs. 36%), and only 9% of results are consistent with neither mode of reasoning. Furthermore, the proportion of pragmatic reasoning shows consistent effects of sentence order (i.e., whether the implicature trigger is in the premise or the hypothesis), and the presence of negation in one or both sentences. Pragmatic reasoning is consistently higher when the implicature trigger is in the premise, which we can see in the results for negated implicatures: the some–all condition shows more pragmatic behavior compared to the all–some condition (a similar behavior is observed with the not all vs. none conditions). Generally, the presence of negation lowers rates of pragmatic reasoning. First, the negated implicature conditions can be subdivided into pairs with and without negation. Among the negated ones, pragmatic reasoning is lower than for nonnegated ones. Second, having negation in the premise rather than the hypothesis makes pragmatic reasoning lower: among pairs tagged as direct implicatures (some vs. not all), there is higher pragmatic reasoning with non-negated some in the premise than with negated not all. Finally, we observe that pragmatic rates are lower for some vs. not all than for some vs. all. In this final case, pragmatic reasoning could be facilitated by explicit presentation of the two items on the scale. In sum, for the datasets besides determiners, we find evidence that BERT fails to learn even the logical relations between scalemates, ruling out the possibility of computing scalar implicatures. It remains possible that BERT could learn these logical relations with explicit supervision (see RichardPresuppositions Label Item Premise Hypothesis Type *Trigger Prsp entailment target *Trigger Neg. Prsp contradiction target *Trigger Neut. Prsp neutral target Neg. Trigger Trigger contradiction control Modal Trigger Trigger neutral control Interrog. 
Trigger Trigger neutral control Cond. Trigger Trigger neutral control Table 3: Paradigm for the presupposition target (top) and control datasets (bottom). For space, *Trigger refers to either plain, Negated, Modal, Interrogative, or Conditional Triggers as per Table 1. son et al., 2020), but it is clear that these are not learned from training on MultiNLI. Only the determiner dataset was informative in showing the extent of the NLI BERT model’s pragmatic reasoning, since it alone showed a fine-grained enough understanding of the semantic relationship of the scalemates, like some and all. In this setting BERT returned impressive results showing a high proportion of pragmatic reasoning compared to logical reasoning, which was affected by sentence order and presence of negation in a predictable way. 7 Experiment 2: Presuppositions 7.1 Presupposition Datasets The presupposition portion of IMPPRES includes eight datasets, each isolating a different kind of presupposition trigger. The full set of triggers is shown in Table 5 in the Appendix. For each type, we generate 100 paradigms, with each paradigm consisting of 19 unique sentence pairs. (Examples of the sentence types are in Table 1). Of the 19 sentence pairs, 15 contain target items. The first target item tests whether the model correctly determines that the presupposition trigger entails its presupposition. The next two alter the presupposition, either negating it, or replacing a constituent, leading to contradiction and neutrality, respectively. The remaining 12 show that the relation between the trigger and the (altered) presupposition is not affected by embedding the trigger under various entailment-canceling operators. 4 control items are designed to test the basic effect of entailment-canceling operators—negation, modals, interrogatives, and conditionals. In each control, the premise is a presupposition trigger embedded under an entailment-canceling operator, and the hypothesis is an unembedded sentence containing the trigger. These controls are neces8697 Figure 5: Results on Controls (Presuppositions). sary to establish whether models learn that presuppositions behave differently under these operators than do classical semantic entailments. 7.2 Presupposition Results & Discussion The results from presupposition controls are in Figure 5. BERT performs well above chance on each control (acc. > 0.33), whereas BOW and InferSent perform at or below chance. In the “negated” condition, BERT correctly identifies that the trigger is contradicted by its negation 100% of the time, e.g., Jo’s cat didn’t go contradicts Jo’s cat went. In the other conditions, it correctly identifies the neutral relation the majority of the time, e.g., Did Jo’s cat go? is neutral with respect to Jo’s cat went. This indicates that BERT mostly learns that negation, modals, interrogatives, and conditionals cancel classical entailments, while BOW and InferSent do not capture the ordinary behavior of these common operators. Next, we test whether models identify presuppositions of the premise as entailments, e.g., that Jo’s cat went entails that Jo has a cat. Recall from §2.2 that this is akin to a listener accommodating a presupposition. The results in Figure 6 show that each of the three models accommodates some presuppositions, but this depends on both the nature of the presupposition and the model. For instance, the BOW and InferSent models accommodate presuppositions of nearly all trigger types at well above chance rates (acc. ≫33%). 
For the uniqueness presupposition of clefts, these models generally correctly predict an entailment (acc. > 90%), but for most triggers, performance is less reliable. By contrast, BERT’s behavior is bimodal. It always accommodates the existence presuppositions of clefts and possessed definites, as well as the presupposition of only, but almost never accommodates any presupposition involving numeracy, e.g. Both flowers that bloomed died entails Figure 6: Results for the unembedded trigger paired with positive presupposition. There are exactly two flowers that bloomed.7 Finally, we evaluate whether models predict that presuppositions project out of entailment canceling operators (e.g., that Did Jo’s cat go? entails that Jo has a cat). We can only consider such a prediction as evidence of projection if two conditions hold: (a) the model correctly identifies that the relevant operator cancels entailments in the control from the same paradigm (e.g., Did Jo’s cat go? is neutral with respect to Jo’s cat went), and (b) the model identifies the presupposition as an entailment when the trigger is unembedded in the same paradigm (e.g. Jo’s cat went entails Jo has a cat). Otherwise, a model might correctly predict entailment essentially by accident if, for instance, it systematically ignores negation. For this reason, we filter out results for the target conditions that do not meet these criteria. Figure 7 shows results for the target conditions after filtering. While InferSent rarely predicts that presuppositions project, we find strong evidence that the BERT and BOW models do. Specifically, they correctly identify that the premise entails the presupposition (acc. ≥80% for BERT, acc. ≥90% for BOW). Furthermore, BERT is the only model to reliably identify (i.e., over 90% of the time) that the negation of the presupposition is contradicted. These results hold irrespective of the entailment canceling operator. No model reliably performs above chance when the presupposition is altered to be neutral (e.g., Did Jo’s cat go? is neu7The presence of exactly might contribute to poor performance on numeracy examples. We suspect MultiNLI annotators may have used it disproportionately for neut. hypotheses. 8698 Figure 7: Results for presupposition target conditions involving projection. tral with respect to Jo has a cat). It is surprising that the simple BOW model can learn some of the projective behavior of presuppositions. One explanation for this finding is that many of the key features of presupposition projection are insensitive to word order. If a lexical presupposition trigger is present at all in a sentence, a presupposition will generally arise irrespective of its position in the sentence. There are some edge cases where this heuristic is insufficient, but IMPPRES is not designed to test such cases. To summarize, training on NLI is sufficient for all models we evaluate to learn to accommodate presuppositions of a wide variety of unembedded triggers, though BERT rejects presuppositions involving numeracy. Furthermore, BERT and even the BOW model appear to learn the characteristic projective behavior of some presuppositions. 8 General Discussion & Conclusion We observe some encouraging results in §6–7. We find strong evidence that BERT learns scalar implicatures associated with determiners some and all. Pragmatic or logical reasoning was not diagnosable for the other scales, whose meaning was not fully understood by our models (as most scalar pairs were treated as synonymous). 
In the case of presuppositions, the BERT NLI models, and BOW to some extent, perform well on a number of our subdatasets (only, cleft existence, possessive existence, questions). For the other subdatasets, the models did not perform as expected on the basic unembedded presupposition triggers, again suggesting the model’s lack of knowledge of the basic meaning of these words. Though their behavior is far from systematic, this is suggestive evidence that some NLI models can perform in ways that correlate with human-like pragmatic behavior. Given that MultiNLI contains few examples of the type found in IMPPRES (see §4), where might our positive results come from? There are two potential sources of signal for the BERT model: NLI training, and pretraining (either BERT’s masked language modeling objective or its input word embeddings). NLI training provides specific examples of valid (or invalid) inferences constituting an incomplete characterization of what commonsense inference is in general. Since presuppositions and scalar implicatures triggered by specific lexical items are largely absent from the MultiNLI data used for NLI training, any positive results on IMPPRES would likely use prior knowledge from the pretraining stage to make an inductive leap that pragmatic inferences are valid commonsense inferences. The natural language text used for pretraining certainly contains pragmatic information, since, like any natural language data, it is produced with the assumption that readers are capable of pragmatic reasoning. Maybe this induces patterns in the data that make the nature of those assumptions recoverable from the data itself. This work is an initial step towards rigorously investigating the extent to which NLI models learn semantic versus pragmatic inference types. We have introduced a new dataset IMPPRES for probing this question, which can be reused to evaluate pragmatic performance of any NLI given model. Acknowledgments This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 1850208 awarded to A. Warstadt. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. Thanks to the FAIR NLP & Conversational AI Group, the Google AI NLP group, and the NYU ML2, including Sam Bowman, He He, Phu Mon Htut, Katharina Kann, Haokun Liu, Ethen Perez, Richard Pang, Clara Vania for discussions on the topic, and/or feedback on an earlier draft. Additional thanks to Marco Baroni, Hagen Blix, Emmanuel Chemla, Aaron Steven White, and Luke Zettlemoyer for insightful comments. 8699 References Aristotle. De Interpretatione. David I. Beaver. 1997. Presupposition. In Handbook of logic and language, pages 939–1008. Elsevier. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In Proceedings of the 2020 International Conference on Learning Representations (ICLR). Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. 
In Advances in Neural Information Processing Systems, pages 9539–9549. Andre Cianflone, Yulan Feng, Jad Kabbara, and Jackie Chi Kit Cheung. 2018. Let’s do it “again”: A first computational approach to detecting adverbial presupposition triggers. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2747– 2755, Melbourne, Australia. Association for Computational Linguistics. Herbert H Clark. 1996. Using language. Cambridge University Press. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Association for Computational Linguistics. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485. Association for Computational Linguistics. Robin Cooper, Richard Crouch, Jan Van Eijck, Chris Fox, Josef Van Genabith, Jan Jaspers, Hans Kamp, Manfred Pinkal, Massimo Poesio, Stephen Pulman, et al. 1994. FraCaS: A framework for computational semantics. Deliverable D6. Richard Crouch, Lauri Karttunen, and Annie Zaenen. 2006. Circumscribing is not excluding: A reply to Manning. Ms., Palo Alto Research Center. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177– 190. Springer. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies, 6(4):1–220. Judith Degen. 2015. Investigating the distribution of some (but not all) implicatures using corpora and web-based methods. Semantics and Pragmatics, 8:11–1. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139, Berlin, Germany. Association for Computational Linguistics. Gerald Gazdar. 1979. Pragmatics, implicature, presuposition and logical form. Academic Press, NY. Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2018. Stress-testing neural models of natural language inference with multiplyquantified sentences. In CoRR. Oren Glickman. 2006. Applied textual entailment. Bar-Ilan University. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655. Association for Computational Linguistics. H Paul Grice. 1975. 
Logic and conversation. 1975, pages 41–58. Irene Heim. 1983. On the projection problem for presuppositions. Formal semantics: The essential readings, pages 249–260. Laurence Horn. 1989. A natural history of negation. University of Chicago Press. Laurence R Horn and Gregory L Ward. 2004. The handbook of pragmatics. Wiley Online Library. 8700 Nanjiang Jiang and Marie-Catherine de Marneffe. 2019. Do you know that florence is packed with visitors? evaluating state-of-the-art models of speaker commitment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4208–4213, Florence, Italy. Association for Computational Linguistics. Lauri Karttunen. 1973. Presuppositions of compound sentences. Linguistic inquiry, 4(2):169–193. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In Proceedings of Association for the Advancement of Artificial Intelligence (AAAI). Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235–249, Minneapolis, Minnesota. Association for Computational Linguistics. Alice Lai, Yonatan Bisk, and Julia Hockenmaier. 2017. Natural language inference from multiple premises. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 100–109, Taipei, Taiwan. Asian Federation of Natural Language Processing. Stephen C Levinson. 2000. Presumptive meanings: The theory of generalized conversational implicature. MIT press. David Lewis. 1979. Scorekeeping in a language game. In Semantics from different points of view, pages 172–187. Springer. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Christopher D. Manning. 2006. Local textual inference: It’s hard to circumscribe, but you know it when you see it – and NLP needs it. Ms., Stanford University. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 1–8, Dublin, Ireland. Association for Computational Linguistics. Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The CommitmentBank: Investigating projection in naturally occurring discourse. In Proceedings of Sinn und Bedeutung, volume 23, pages 107–124. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Shachar Mirkin, Roy Bar-Haim, Ido Dagan, Eyal Shnarch, Asher Stern, Idan Szpektor, and Jonathan Berant. 2009. Addressing discourse and document structure in the RTE search task. In Textual Analysis Conference. 
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 130–136, Berlin, Germany. Association for Computational Linguistics. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Yixin Nie, Yicheng Wang, and Mohit Bansal. 2019. Analyzing compositionality-sensitivity of NLI models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6867–6874. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67–81, Brussels, Belgium. Association for Computational Linguistics. Christopher Potts. 2015. Presupposition and implicature. The handbook of contemporary semantic theory, 2:168–202. Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2Mind: 8701 Commonsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463–473, Melbourne, Australia. Association for Computational Linguistics. Kyle Richardson, Hai Hu, Lawrence S Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). Bertrand Russell. 1905. On denoting. Mind, 14(56):479–493. Martin Schmitt and Hinrich Sch¨utze. 2019. SherLIiC: A typed event-focused lexical inference benchmark for evaluating natural language inference. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 902–914, Florence, Italy. Association for Computational Linguistics. Sebastian Schuster, Yuxing Chen, and Judith Degen. 2020. Harnessing the richness of the linguistic signal in predicting pragmatic inferences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Robert Stalnaker. 1974. Pragmatic presuppositions. In Milton K. Munitz and Peter K. Unger, editors, Semantics and Philosophy, pages 135–148. New York University Press. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the Interantional Conference on Learning Representations (ICLR). Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretiˇc, and Samuel R. Bowman. 2019a. 
Investigating BERT’s knowledge of language: Five analysis methods with NPIs. In Proceedings of EMNLPIJCNLP, pages 2870–2880. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2019b. BLiMP: The benchmark of linguistic minimal pairs for English. arXiv preprint arXiv:1912.00582. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741, Florence, Italy. Association for Computational Linguistics. Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is everything: Recasting semantic resources into a unified evaluation framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 996–1005, Taipei, Taiwan. Asian Federation of Natural Language Processing. Aaron Steven White and Kyle Rawlins. 2018. The role of veridicality and factivity in clause selection. In Proceedings of the 48th Annual Meeting of the North East Linguistic Society, Amherst, MA, USA. GLSA Publications. Aaron Steven White, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. Lexicosyntactic inference in neural models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4717–4724, Brussels, Belgium. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019a. Can neural networks understand monotonicity reasoning? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31–40, Florence, Italy. Association for Computational Linguistics. Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019b. HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 250–255, Minneapolis, Minnesota. Association for Computational Linguistics. Annie Zaenen, Lauri Karttunen, and Richard Crouch. 2005. Local textual inference: Can it be defined or circumscribed? In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 31–36, Ann Arbor, Michigan. Association for Computational Linguistics. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93–104. Association for Computational Linguistics. 8702 Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. 
Ordinal common-sense inference. Transactions of the Association for Computational Linguistics, 5:379–395. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19–27. 8703 Appendix Type Premise Hypothesis Connectives These cats or those fish appear. These cats and those fish don’t both appear. Determiners Some skateboards tipped over. Not all skateboards tipped over. Numerals Ten bananas were scorching. One hundred bananas weren’t scorching. Modals Jerry could wake up. Jerry didn’t need to wake up. Scalar adjectives Banks are fine. Banks are not great. Scalar verbs Dawn went towards the hills. Dawn did not get to the hills. Table 4: The scalar implicature triggers in IMPPRES. Examples are automatically generated sentences pairs from each of the six datasets for the scalar implicatures experiment. The pairs belong to the “Implicature (+ to −)” condition. Type Premise (Trigger) Hypothesis (Presupposition) All N All six roses that bloomed died. Exactly six roses bloomed. Both Both flowers that bloomed died. Exactly two flowers bloomed. Change of State The cat escaped. The cat used to be captive. Cleft Existence It is Sandra who disliked Veronica. Someone disliked Veronica. Cleft Uniqueness It is Sandra who disliked Veronica. Exactly one person disliked Veronica. Only Only Lucille went to Spain. Lucille went to Spain. Possessed Definites Bill’s handyman won. Bill has a handyman. Question Sue learned why Candice testified. Candice testified. Table 5: The presupposition triggers in IMPPRES. Examples are automatically generated sentences pairs from each of the eight datasets for the presupposition experiment. The pairs belong to the “Plain Trigger / Presupposition” condition. 8704 Figure 8: Results for the scalar implicatures triggered by adjectives, by target condition. Figure 9: Results for the scalar implicatures triggered by adjectives, by target condition. 8705 Figure 10: Results for the scalar implicatures triggered by determiners, by target condition. Figure 11: Results for the scalar implicatures triggered by modals, by target condition. Figure 12: Results for the scalar triggered by numerals, by target condition. Figure 13: Results for the scalar implicatures triggered by verbs, by target condition.
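The trigger templates in Tables 4 and 5 suggest how such sentence pairs can be produced automatically. The sketch below illustrates template-based generation for two of the presupposition triggers; the lexicon and template strings are simplified assumptions, not the IMPPRES generation grammar itself.

```python
import random

# Toy lexicon; the real IMPPRES generation uses a much larger vocabulary and
# a grammar that enforces agreement, so this is only a schematic illustration.
NAMES = ["Sandra", "Veronica", "Bill", "Lucille"]
VPS = ["disliked Veronica", "went to Spain", "testified"]

TEMPLATES = {
    # (premise template, hypothesis template) per trigger type, after Table 5
    "cleft_existence":    ("It is {name} who {vp}.", "Someone {vp}."),
    "possessed_definite": ("{name}'s handyman won.", "{name} has a handyman."),
}

def generate_pair(trigger_type, rng=random):
    premise_t, hypothesis_t = TEMPLATES[trigger_type]
    slots = {"name": rng.choice(NAMES), "vp": rng.choice(VPS)}
    return premise_t.format(**slots), hypothesis_t.format(**slots)

if __name__ == "__main__":
    random.seed(0)
    print(generate_pair("cleft_existence"))
    # -> a pair of the form ("It is <name> who <vp>.", "Someone <vp>.")
```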
2020
768
End-to-End Bias Mitigation by Modelling Biases in Corpora Rabeeh Karimi Mahabadi♣♥Yonatan Belinkov♦James Henderson♣ ♥EPFL, Switzerland ♣Idiap Research Institute, Switzerland ♦Harvard University and Massachusetts Institute of Technology, Cambridge, MA, USA {rabeeh.karimi,james.henderson}@idiap.ch [email protected] Abstract Several recent studies have shown that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models that fail to generalize to out-of-domain datasets and are likely to perform poorly in real-world scenarios. We propose two learning strategies to train neural models, which are more robust to such biases and transfer better to out-of-domain datasets. The biases are specified in terms of one or more bias-only models, which learn to leverage the dataset biases. During training, the bias-only models’ predictions are used to adjust the loss of the base model to reduce its reliance on biases by down-weighting the biased examples and focusing training on the hard examples. We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data. Results show that our debiasing methods greatly improve robustness in all settings and better transfer to other textual entailment datasets. Our code and data are publicly available in https: //github.com/rabeehk/robust-nli. 1 Introduction Recent neural models (Devlin et al., 2019; Radford et al., 2018; Chen et al., 2017) have achieved high and even near human-performance on several largescale natural language understanding benchmarks. However, it has been demonstrated that neural models tend to rely on existing idiosyncratic biases in the datasets, and leverage superficial correlations between the label and existing shortcuts in the training dataset to perform surprisingly well,1 without learning the underlying task (Kaushik and Lipton, 2018; Gururangan et al., 2018; Poliak et al., 2018; Schuster et al., 2019; 1We use biases, heuristics or shortcuts interchangeably. McCoy et al., 2019b). For instance, natural language inference (NLI) is supposed to test the ability of a model to determine whether a hypothesis sentence (There is no teacher in the room) can be inferred from a premise sentence (Kids work at computers with a teacher’s help) (Dagan et al., 2006).2 However, recent work has demonstrated that large-scale NLI benchmarks contain annotation artifacts; certain words in the hypothesis that are highly indicative of inference class and allow models that do not consider the premise to perform unexpectedly well (Poliak et al., 2018; Gururangan et al., 2018). As an example, in some NLI benchmarks, negation words such as “nobody”, “no”, and “not” in the hypothesis are often highly correlated with the contradiction label. As a result of the existence of such biases, models exploiting statistical shortcuts during training often perform poorly on out-of-domain datasets, especially if the datasets are carefully designed to limit the spurious cues. To allow proper evaluation, recent studies have tried to create new evaluation datasets that do not contain such biases (Gururangan et al., 2018; Schuster et al., 2019; McCoy et al., 2019b). 
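As a concrete illustration of the annotation artifacts discussed above, one can estimate how strongly individual hypothesis words co-occur with each label. The snippet below is a generic diagnostic sketch, not the analysis code of any of the cited studies; the input format (a list of (hypothesis, label) pairs) is an assumption.

```python
from collections import Counter, defaultdict

def label_given_word(examples, cue_words=("no", "not", "nobody", "never")):
    """Estimate P(label | cue word appears in the hypothesis).
    `examples` is an iterable of (hypothesis, label) pairs; this is a generic
    diagnostic sketch, not the analysis code of any of the cited studies."""
    counts = defaultdict(Counter)
    for hypothesis, label in examples:
        tokens = set(hypothesis.lower().split())
        for word in cue_words:
            if word in tokens:
                counts[word][label] += 1
    return {word: {label: n / sum(c.values()) for label, n in c.items()}
            for word, c in counts.items()}

# Toy usage (invented examples):
toy = [("There is no teacher in the room.", "contradiction"),
       ("A teacher is helping the kids.", "entailment"),
       ("Nobody is sleeping.", "contradiction")]
print(label_given_word(toy))
# A strong skew toward one label for a cue word signals a potential artifact.
```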
Unfortunately, it is hard to avoid spurious statistical cues in the construction of large-scale benchmarks, and collecting new datasets is costly (Sharma et al., 2018). It is, therefore, crucial to develop techniques to reduce the reliance on biases during the training of neural models.

We propose two end-to-end debiasing techniques that can be used when the existing bias patterns are identified. These methods work by adjusting the cross-entropy loss to reduce the biases learned from the training dataset, down-weighting the biased examples so that the model focuses on learning the hard examples. Figure 1 illustrates an example of applying our strategy to prevent an NLI model from predicting the labels using existing biases in the hypotheses, where the bias-only model only sees the hypothesis. Our strategy involves adding this bias-only branch $f_B$ on top of the base model $f_M$ during training. We then compute the combination of the two models $f_C$ in a way that motivates the base model to learn different strategies than the ones used by the bias-only branch $f_B$. At the end of training, we remove the bias-only classifier and use the predictions of the base model.

2 The given sentences are in the contradictory relation, and the hypothesis cannot be inferred from the premise.

Figure 1: An illustration of our debiasing strategies applied to an NLI model. The bias-only model only sees the hypothesis, where negation words like "not" are highly correlated with the contradiction label. We train a robust NLI model by training it in combination with the bias-only model and motivate it to learn different strategies than the ones used in the bias-only model. The robust NLI model does not rely on the shortcuts and obtains improved performance on the test set.

In our first proposed method, Product of Experts, the training loss is computed on an ensemble of the base model and the bias-only model, which reduces the base model's loss for the examples that the bias-only model classifies correctly. For the second method, Debiased Focal Loss, the bias-only predictions are used to directly weight the loss of the base model, explicitly modulating the loss depending on the accuracy of the bias-only model. We also extend these methods to be robust against multiple sources of bias by training multiple bias-only models.

Our approaches are simple and highly effective. They require training only a simple model on top of the base model. They are model-agnostic and general enough to be applicable to common biases seen in many datasets in different domains.

We evaluate our models on challenging benchmarks in textual entailment and fact verification, including HANS (Heuristic Analysis for NLI Systems) (McCoy et al., 2019b), the hard NLI sets (Gururangan et al., 2018) of Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (MNLI) (Williams et al., 2018), and the FEVER Symmetric test set (Schuster et al., 2019). The selected datasets are highly challenging and have been carefully designed to be unbiased to allow proper evaluation of the out-of-domain performance of the models.
We additionally construct hard MNLI datasets from MNLI development sets to facilitate the out-of-domain evaluation on this dataset.3 We show that including our strategies on training baseline models, including BERT (Devlin et al., 2019), provides a substantial gain on out-of-domain performance in all the experiments. In summary, we make the following contributions: 1) Proposing two debiasing strategies to train neural models robust to dataset bias. 2) An empirical evaluation of the methods on two large-scale NLI datasets and a fact verification benchmark; obtaining a substantial gain on their challenging out-of-domain data, including 7.4 points on HANS, 4.8 points on SNLI hard set, and 9.8 points on FEVER symmetric test set, setting a new state-of-the-art. 3) Proposing debiasing strategies capable of combating multiple sources of bias. 4) Evaluating the transfer performance of the debiased models on 12 NLI datasets and demonstrating improved transfer to other NLI benchmarks. To facilitate future work, we release our datasets and code. 2 Related Work To address dataset biases, researchers have proposed to augment datasets by balancing the existing cues (Schuster et al., 2019) or to create an adversarial dataset (Jia and Liang, 2017). However, collecting new datasets, especially at a large scale, is costly, and thus remains an unsatisfactory solution. It is, therefore, crucial to develop strategies to allow models to be trained on the existing biased datasets. 3Removing the need to submit to an online evaluation system for MNLI hard test sets. Schuster et al. (2019) propose to first compute the n-grams in the dataset’s claims that are the most associated with each fact-verification label. They then solve an optimization problem to assign a balancing weight to each training sample to alleviate the biases. In contrast, we propose several end-to-end debiasing strategies. Additionally, Belinkov et al. (2019a) propose adversarial techniques to remove from the NLI sentence encoder the features that allow a hypothesisonly model to succeed. However, we believe that in general, the features used by the hypothesis-only model can include some information necessary to perform the NLI task, and removing such information from the sentence representation can hurt the performance of the full model. Their approach consequently degrades the performance on the hard SNLI set, which is expected to be less biased. In contrast, we propose to train a bias-only model to use its predictions to dynamically adapt the classification loss to reduce the importance of the most biased examples. Concurrently to our work, Clark et al. (2019) and He et al. (2019) have also proposed to use the product of experts (PoE) models for avoiding biases. They train their models in two stages, first training a bias-only model and then using it to train a robust model. In contrast, our methods are trained in an end-to-end manner, which is convenient in practice. We additionally show that our proposed Debiased Focal Loss model is an effective method to reduce biases, sometimes superior to PoE. We have evaluated on new domains of NLI hard sets and fact verification. Moreover, we have included an analysis showing that our debiased models indeed have lower correlations with the bias-only models, and have extended our methods to guard against multiple bias patterns simultaneously. We furthermore study transfer performance to other NLI datasets. 3 Reducing Biases Problem formulation We consider a general multi-class classification problem. 
Given a dataset $\mathcal{D} = \{x_i, y_i\}_{i=1}^{N}$ consisting of the input data $x_i \in \mathcal{X}$, and labels $y_i \in \mathcal{Y}$, the goal of the base model is to learn a mapping $f_M$ parameterized by $\theta_M$ that computes the predictions over the label space given the input data, shown as $f_M : \mathcal{X} \to \mathbb{R}^{|\mathcal{Y}|}$. Our goal is to optimize $\theta_M$ parameters such that we build a model that is more resistant to benchmark dataset biases, to improve its robustness to domain changes where the biases typically observed in the training data do not exist in the evaluation dataset.

The key idea of our approach, depicted in Figure 1, is first to identify the dataset biases that the base model is susceptible to relying on, and define a bias-only model to capture them. We then propose two strategies to incorporate this bias-only knowledge into the training of the base model to make it robust against the biases. After training, we remove the bias-only model and use the predictions of the base model.

3.1 Bias-only Branch

We assume that we do not have access to any data from the out-of-domain dataset, so we need to know a priori about the possible types of shortcuts we would like the base model to avoid relying on. Once these patterns are identified, we train a bias-only model designed to capture the identified shortcuts that only uses biased features. For instance, a hypothesis-only model in the large-scale NLI datasets can correctly classify the majority of samples using annotation artifacts (Poliak et al., 2018; Gururangan et al., 2018). Motivated by this work, our bias-only model for NLI only uses hypothesis sentences. Note that the bias-only model can, in general, have any form, and is not limited to models using only a part of the input data. For instance, on the HANS dataset, our bias-only model makes use of syntactic heuristics and similarity features (see Section 4.3).

Let $x_i^b \in \mathcal{X}^b$ be biased features of $x_i$ that are predictive of $y_i$. We then formalize this bias-only model as a mapping $f_B : \mathcal{X}^b \to \mathbb{R}^{|\mathcal{Y}|}$, parameterized by $\theta_B$ and trained using cross-entropy (CE) loss $\mathcal{L}_B$:

$$\mathcal{L}_B(\theta_B) = -\frac{1}{N} \sum_{i=1}^{N} \log\big(\sigma(f_B^{y_i}(x_i^b; \theta_B))\big), \quad (1)$$

where $f_B^{j}(x_i^b; \theta_B)$ is the $j$th element of $f_B(\cdot)$, and $\sigma(u_j) = e^{u_j} / \sum_{k=1}^{|\mathcal{Y}|} e^{u_k}$ is the softmax function.

3.2 Proposed Debiasing Strategies

We propose two strategies to incorporate the bias-only $f_B$ knowledge into the training of the base model $f_M$. In our strategies, the predictions of the bias-only model are combined with either the predictions of the base model or its error, to down-weight the loss for the examples that the bias-only model can predict correctly. We then update parameters of the base model $\theta_M$ based on this modified loss $\mathcal{L}_C$. Our learning strategies are end-to-end. Therefore, to prevent the base model from learning the biases, the bias-only loss $\mathcal{L}_B$ is not back-propagated to any shared parameters of the base model, such as a shared sentence encoder.

3.2.1 Method 1: Product of Experts

Our first approach is based on the product of experts (PoE) method (Hinton, 2002). Here, we use this method to combine the bias-only and base model's predictions by computing the element-wise product $\odot$ between their predictions as $\sigma(f_B(x_i^b)) \odot \sigma(f_M(x_i))$.
We compute this combination in the logarithmic space, making it appropriate for the normalized exponential below:

$$f_C(x_i, x_i^b) = \log\big(\sigma(f_B(x_i^b))\big) + \log\big(\sigma(f_M(x_i))\big).$$

The key intuition behind this model is to combine the probability distributions of the bias-only and the base model to allow them to make predictions based on different characteristics of the input; the bias-only branch covers prediction based on biases, and the base model focuses on learning the actual task. Then the base model parameters $\theta_M$ are trained using the cross-entropy loss $\mathcal{L}_C$ of the combined classifier $f_C$:

$$\mathcal{L}_C(\theta_M; \theta_B) = -\frac{1}{N} \sum_{i=1}^{N} \log\big(\sigma(f_C^{y_i}(x_i, x_i^b))\big). \quad (2)$$

When updating the base model parameters using this loss, the predictions of the bias-only model decrease the updates for examples that it can accurately predict.

Justification: The probability of label $y_i$ for the example $x_i$ in the PoE model is computed as:

$$\sigma(f_C^{y_i}(x_i, x_i^b)) = \frac{\sigma(f_B^{y_i}(x_i^b))\,\sigma(f_M^{y_i}(x_i))}{\sum_{k=1}^{|\mathcal{Y}|} \sigma(f_B^{k}(x_i^b))\,\sigma(f_M^{k}(x_i))}.$$

Then the gradient of the cross-entropy loss of the combined classifier (2) w.r.t. $\theta_M$ is (Hinton, 2002):

$$\nabla_{\theta_M} \mathcal{L}_C(\theta_M; \theta_B) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{|\mathcal{Y}|} \Big( \delta_{y_i k} - \sigma(f_C^{k}(x_i, x_i^b)) \Big) \nabla_{\theta_M} \log\big(\sigma(f_M^{k}(x_i))\big),$$

where $\delta_{y_i k}$ is 1 when $k = y_i$ and 0 otherwise. Generally, the closer the ensemble's prediction $\sigma(f_C^{k}(\cdot))$ is to the target $\delta_{y_i k}$, the more the gradient is decreased through the modulating term, which only happens when the bias-only and base models are both capturing biases. In the extreme case, when the bias-only model correctly classifies the sample, $\sigma(f_C^{y_i}(x_i, x_i^b)) = 1$ and therefore $\nabla_{\theta_M} \mathcal{L}_C(\theta_M; \theta_B) = 0$, so the biased examples are ignored during training. Conversely, when the example is fully unbiased, the bias-only classifier predicts the uniform distribution over all labels, $\sigma(f_B^{k}(x_i^b)) = \frac{1}{|\mathcal{Y}|}$ for $k \in \mathcal{Y}$; therefore $\sigma(f_C^{y_i}(x_i, x_i^b)) = \sigma(f_M^{y_i}(x_i))$ and the gradient of the ensemble classifier remains the same as for the CE loss.

3.2.2 Method 2: Debiased Focal Loss

Focal loss was originally proposed in Lin et al. (2017) to improve a single classifier by down-weighting the well-classified points. We propose a novel variant of this loss that leverages the bias-only branch's predictions to reduce the relative importance of the most biased examples and allows the model to focus on learning the hard examples. We define Debiased Focal Loss (DFL) as:

$$\mathcal{L}_C(\theta_M; \theta_B) = -\frac{1}{N} \sum_{i=1}^{N} \big(1 - \sigma(f_B^{y_i}(x_i^b))\big)^{\gamma} \log\big(\sigma(f_M^{y_i}(x_i))\big), \quad (3)$$

where $\gamma$ is the focusing parameter, which impacts the down-weighting rate. When $\gamma$ is set to 0, DFL is equivalent to the cross-entropy loss. For $\gamma > 0$, as the value of $\gamma$ is increased, the effect of down-weighting is increased. We set $\gamma = 2$ throughout all experiments, which works well in practice, and avoid fine-tuning it further. We note the properties of this loss: (1) When the example $x_i$ is unbiased and the bias-only branch does not do well, $\sigma(f_B^{y_i}(x_i^b))$ is small, therefore the scaling factor is close to 1 and the loss remains unaffected. (2) As the sample is more biased and $\sigma(f_B^{y_i}(x_i^b))$ is closer to 1, the modulating factor approaches 0 and the loss for the most biased examples is down-weighted.

3.3 RUBi baseline (Cadene et al., 2019)

We compare our models to RUBi (Cadene et al., 2019), a recently proposed model to alleviate unimodal biases learned by Visual Question Answering (VQA) models. Cadene et al. (2019)'s study is limited to VQA datasets. We, however, evaluate the effectiveness of their formulation on multiple challenging NLU benchmarks.
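As an aside before the RUBi formulation below, the two losses of §3.2 amount to only a few lines on top of standard log-softmax outputs. The following is a minimal PyTorch-style sketch, not the authors' released implementation; the tensor names, the use of detach() on the bias-only scores, and the toy shapes are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def poe_loss(main_logits, bias_logits, targets):
    """Product of experts, Eq. (2): sum the two log-softmax score vectors and
    apply cross-entropy; F.cross_entropy re-normalizes the sum, which yields
    exactly the PoE probability derived in the justification above. The
    bias-only scores are detached here so that this loss updates only the
    base model; the bias-only head is trained with its own CE loss, Eq. (1)."""
    combined = F.log_softmax(main_logits, dim=-1) + \
               F.log_softmax(bias_logits.detach(), dim=-1)
    return F.cross_entropy(combined, targets)

def debiased_focal_loss(main_logits, bias_logits, targets, gamma=2.0):
    """Debiased focal loss, Eq. (3): down-weight examples on which the
    bias-only model already assigns high probability to the gold label."""
    p_bias = F.softmax(bias_logits.detach(), dim=-1)
    p_bias_gold = p_bias.gather(1, targets.unsqueeze(1)).squeeze(1)
    log_p_main = F.log_softmax(main_logits, dim=-1)
    log_p_main_gold = log_p_main.gather(1, targets.unsqueeze(1)).squeeze(1)
    return -((1.0 - p_bias_gold) ** gamma * log_p_main_gold).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    main_logits = torch.randn(4, 3, requires_grad=True)  # base model outputs
    bias_logits = torch.randn(4, 3)                      # bias-only outputs
    targets = torch.tensor([0, 2, 1, 0])
    print(poe_loss(main_logits, bias_logits, targets).item())
    print(debiased_focal_loss(main_logits, bias_logits, targets).item())
```

Note that feeding the summed log-probabilities to cross_entropy re-normalizes them over the label set, which matches the normalized PoE probability in the justification above.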
RUBi consists in first applying a sigmoid function φ to the bias-only model’s predictions to obtain a mask containing an importance weight between 0 and 1 for each label. It then computes the element-wise product between the obtained mask and the base model’s predictions: fC(xi,xb i)=fM(xi)⊙φ(fB(xb i)), The main intuition is to dynamically adjust the predictions of the base model to prevent it from leveraging the shortcuts. Then the parameters of the base model θM are updated by back-propagating the cross-entropy loss LC of the combined classifier. 3.4 Joint Debiasing Strategies Neural models can, in practice, be prone to multiple types of biases in the datasets. We, therefore, propose methods for combining several bias-only models. To avoid learning relations between biased features, we do not consider training a classifier on top of their concatenation. Instead, let {xbj i }K j=1 be different sets of biased features of xi that are predictive of yi, and let fBj be an individual bias-only model capturing xbj i . Next, we extend our debiasing strategies to handle multiple bias patterns. Method 1: Joint Product of Experts We extend our proposed PoE model to multiple bias-only models by computing the element-wise product between the predictions of bias-only models and the base model as: σ(fB1(xb1 i ))⊙···⊙σ(fBK(xbK i ))⊙σ(fM(xi)), computed in the logarithmic space: fC(xi,{xbj i }K j=1)= K X j=1 log(σ(fBj(xbj i ))) +log(σ(fM(xi))). Then the base model parameters θM are trained using the cross-entropy loss of the combined classifier fC. Method 2: Joint Debiased Focal Loss To extend DFL to handle multiple bias patterns, we first compute the element-wise average of the predictions of the multiple bias-only models: fB({xbj i }K j=1) = 1 K PK j=1 fBj(xbj i ), and then compute the DFL (3) using the computed joint bias-only model. 4 Evaluation on Unbiased Datasets We provide experiments on a fact verification (FEVER) and two large-scale NLI datasets (SNLI and MNLI). We evaluate the models’ performance on recently-proposed challenging unbiased evaluation sets. We use the BERT (Devlin et al., 2019) implementation of Wolf et al. (2019) as our main baseline, known to work well for these tasks. In all the experiments, we use the default hyperparameters of the baselines. 4.1 Fact Verification Dataset: The FEVER dataset contains claimevidence pairs generated from Wikipedia. Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the idiosyncrasies observed in the claims of this benchmark. They made the original claim-evidence pairs of the FEVER evaluation dataset symmetric, by augmenting them and making each claim and evidence appear with each label. Therefore, by balancing the artifacts, relying on statistical cues in claims to classify samples is equivalent to a random guess. The collected dataset is challenging, and the performance of the models relying on biases evaluated on this dataset drops significantly. Base models: We consider BERT as the base model, which works the best on this dataset (Schuster et al., 2019), and predicts the relations based on the concatenation of the claim and the evidence with a delimiter token (see Appendix A). Bias-only model: The bias-only model predicts the labels using only claims as input. Results: Table 1 shows the results. Our proposed debiasing methods, PoE and DFL, are highly effective, boosting the performance of the baseline by 9.8 and 7.5 points respectively, significantly surpassing the prior work of Schuster et al. (2019). 
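The joint variants of §3.4 above change only how the bias-only scores are combined before the losses of §3.2 are applied. A minimal sketch under the same assumptions as before (PyTorch-style, illustrative tensor names, not the released code):

```python
import torch
import torch.nn.functional as F

def joint_poe_logits(main_logits, bias_logits_list):
    """Joint PoE (Section 3.4): add the log-softmax scores of all K bias-only
    models to those of the base model; cross-entropy on the result gives the
    combined loss."""
    combined = F.log_softmax(main_logits, dim=-1)
    for bias_logits in bias_logits_list:
        combined = combined + F.log_softmax(bias_logits.detach(), dim=-1)
    return combined

def joint_bias_probs(bias_logits_list):
    """Joint DFL (Section 3.4): average the bias-only models' outputs
    element-wise; the softmax of the average then replaces the bias-only
    probability in the focal weighting of Eq. (3)."""
    avg = torch.stack([b.detach() for b in bias_logits_list], dim=0).mean(dim=0)
    return F.softmax(avg, dim=-1)

# Example with two bias-only heads (e.g., hypothesis-only and syntactic):
main = torch.randn(4, 3, requires_grad=True)
biases = [torch.randn(4, 3), torch.randn(4, 3)]
targets = torch.tensor([0, 1, 2, 0])
joint_loss = F.cross_entropy(joint_poe_logits(main, biases), targets)
```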
Loss Dev Test ∆ CE 85.99 56.49 RUBi 86.23 57.60 +1.1 Schuster et al. (2019) 84.6 61.6 +5.1 DFL 83.07 64.02 +7.5 PoE 86.46 66.25 +9.8 Table 1: Results on FEVER development and symmetric test set. ∆are absolute differences with CE loss. 4.2 Natural Language Inference Datasets: We evaluate on hard datasets of SNLI and MNLI (Gururangan et al., 2018), which are the splits of these datasets where a hypothesis-only model cannot correctly predict the labels. Gururangan et al. (2018) show that the success of the recent textual entailment models is attributed to the biased examples, and the performance of these models is substantially lower on the hard sets. Base models: We consider BERT and InferSent (Conneau et al., 2017) as our base models. We choose InferSent to be able to compare with the prior work of Belinkov et al. (2019b). Bias-only model: The bias-only model predicts the labels using the hypothesis (Appendix B). Results on SNLI: Table 2 shows the SNLI results. With InferSent, DFL and PoE result in 4.1 and 4.8 points gain. With BERT, DFL and PoE improve the results by 2.5 and 1.6 absolute points. Compared to the prior work of Belinkov et al. (2019b) (AdvCls), our PoE model obtains a 7.4 points gain, setting a new state-of-the-art. Loss BERT InferSent Test Hard ∆ Test Hard ∆ CE 90.53 80.53 84.24 68.91 RUBi 90.69 80.62 +0.1 83.93 69.64 +0.7 AdvCls* — — — 83.56 66.27 -2.6 AdvDat* — — — 78.30 55.60 -13.3 DFL 89.57 83.01 +2.5 73.54 73.05 +4.1 PoE 90.11 82.15 +1.6 80.35 73.69 +4.8 Table 2: Results on the SNLI test, hard set, and differences with CE loss. *: results from Belinkov et al. (2019b). Results on MNLI: We construct hard sets from the validation sets of MNLI Matched and Mismatched (MNLI-M). Following Gururangan et al. (2018), we train a fastText classifier (Joulin et al., 2017) that predicts the labels using only the hypothesis and consider the subset on which it fails as hard examples. We report the results on MNLI mismatched sets in Table 3 (see Appendix B for similar results on MNLI matched). With BERT, DFL and PoE obtain 1.4 and 1.7 points gain on the hard development set, while with InferSent, they improve the results by 2.5 and 2.6 points. To comply with limited access to the MNLI submission system, we evaluate only the best result of the baselines and our models on the test sets. Our PoE model improves the performance on the hard test set by 1.1 points while retaining in-domain accuracy. BERT InferSent Loss MNLI Hard ∆ MNLI Hard ∆ Development set results CE 84.53 77.55 69.99 56.53 RUBi 85.17 78.63 +1.1 70.53 58.08 +1.5 DFL 84.85 78.92 +1.4 61.12 59.05 +2.5 PoE 84.85 79.23 +1.7 65.85 59.14 +2.6 Test set results CE 83.51 75.75 — — — PoE 83.47 76.83 +1.1 — — — Table 3: Results on MNLI mismatched benchmark and MNLI mismatched hard set. ∆are absolute differences with CE loss. 4.3 Syntactic Bias in NLI Dataset: McCoy et al. (2019b) show that NLI models trained on MNLI can adopt superficial syntactic heuristics. They introduce HANS, consisting of several examples on which the syntactic heuristics fail. Base model: We use BERT as our base model and train it on the MNLI dataset. Bias-only model: We consider the following features for the bias-only model. The first four features are based on the syntactic heuristics proposed in McCoy et al. 
(2019b): 1) Whether all words in the hypothesis are included in the premise; 2) If the hypothesis is the contiguous subsequence of the premise; 3) If the hypothesis is a subtree in the premise’s parse tree; 4) The number of tokens shared between premise and hypothesis normalized by the number of tokens in the premise. We additionally include some similarity features: 5) The cosine similarity between premise and hypothesis’s pooled token representations from BERT followed by min, mean, and max-pooling. We consider the same weight for contradiction and neutral labels in the bias-only loss to allow the model to recognize entailment from not-entailment. During the evaluation, we map the neutral and contradiction labels to not-entailment. Results: McCoy et al. (2019a) observe large variability in the linguistic generalization of neural models. We, therefore, report the averaged results across 4 runs with the standard deviation in Table 4. PoE and DFL obtain 4.4 and 7.4 points gain (see Appendix C for accuracy on individual heuristics of HANS). Loss MNLI HANS ∆ CE 84.51 61.88 ±1.9 RUBi 84.53 61.76±2.7 -0.1 Reweight g 83.54 69.19 +7.3 Learned-Mixin g 84.29 64.00 +2.1 Learned-Mixin+H g  83.97 66.15 +4.3 PoE 84.19 66.31±0.6 +4.4 DFL 83.95 69.26 ±0.2 +7.4 DFL 82.76 71.95±1.4 +10.1 Table 4: Results on MNLI Matched dev set and HANS. g: results from Clark et al. (2019). : perform hyper-parameter tuning. ∆are differences with CE loss. We compare our results with the concurrent work of Clark et al., who propose a PoE model similar to ours, which gets similar results. The main difference is that our models are trained end-to-end, which is convenient in practice, while Clark et al.’s method requires two steps, first training a bias-only model and then using this pre-trained model to train a robust model. The Reweight baseline in Clark et al. is a special case of our DFL with γ=1 and performs similarly to our DFL method (using default γ=2). Their Learned-Mixin+H method requires hyperparameter tuning. Since the assumption is not having access to any out-of-domain test data, and there is no available dev set for HANS, it is challenging to perform hyper-parameter tuning. Clark et al. follow prior work (Grand and Belinkov, 2019; Ramakrishnan et al., 2018) and perform model section on the test set. To provide a fair comparison, we consequently also tuned γ in DFL by sweeping over {0.5,1,2,3,4}. DFL is the selected model, with γ = 3. With this hyperparameter tuning, DFL is even more effective, and our best result performs 2.8 points better than Clark et al. (2019). 4.4 Jointly Debiasing Multiple Bias Patterns To evaluate combating multiple bias patterns, we jointly debias a base model on the hypothesis artifacts and syntactic biases. Base model: We use BERT as our base model and train it on the MNLI dataset. Loss MNLI Hard ∆ HANS ∆ CE 84.53 77.55 61.88±1.9 PoE p 84.85 79.23 +1.7 60.43 -1.5 DFLp 84.85 78.92 +1.4 60.63 -1.2 PoE n 84.55 77.90±0.3 +0.4 66.31±0.6 +4.4 DFLn 84.30 77.66±0.6 +0.1 69.26±0.2 +7.4 PoE-Joint 84.39 78.61±0.1 +1.1 68.04±1.2 +6.2 DFL-Joint 84.49 78.36±0.4 +0.8 69.10±0.7 +7.2 Table 5: Results on MNLI mismatched dev set, MNLI mismatched hard set, and HANS when training independently to debias against either hypothesis artifacts (p) or syntactic biases (n), compared with jointly training to debias against both bias types. ∆: differences with baseline CE loss. Bias-only models: We use the hypothesis-only and syntactic bias-only models as in Sections 4.2 and 4.3. Results: Table 5 shows the results. 
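For reference, the first, second, and fourth heuristic features listed in §4.3 need no learned parameters at all. The sketch below is an illustration only; the constituent feature (3) and the BERT-based similarity features (5) are omitted because they require a parser and an encoder, and the exact token normalization is an assumption.

```python
def overlap_features(premise_tokens, hypothesis_tokens):
    """Rough sketch of features 1, 2, and 4 from Section 4.3. Token lists are
    assumed to be lower-cased already."""
    prem, hyp = premise_tokens, hypothesis_tokens
    all_words_in_premise = float(set(hyp) <= set(prem))              # feature 1
    contiguous_subsequence = float(any(                              # feature 2
        prem[i:i + len(hyp)] == hyp for i in range(len(prem) - len(hyp) + 1)))
    shared_fraction = len(set(prem) & set(hyp)) / max(len(prem), 1)  # feature 4
    return [all_words_in_premise, contiguous_subsequence, shared_fraction]

# A HANS-style lexical-overlap case: full overlap, yet not an entailment.
print(overlap_features("the doctor near the actor danced".split(),
                       "the actor danced".split()))
# -> [1.0, 1.0, 0.5]
```

High values of these features on non-entailed pairs are exactly the shortcut that the syntactic bias-only model is meant to capture.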
Models trained to be robust to hypothesis biases (p) do not generalize to HANS. On the other hand, models trained to be robust on HANS (n) use a powerful bias-only model resulting in a slight improvement on MNLI mismatched hard dev set. We expect a slight degradation when debiasing for both biases since models need to select samples accommodating both debiasing needs. The jointly debiased models successfully obtain improvements on both datasets, which are close to the improvements on each dataset by the individually debiased models. 5 Transfer Performance To evaluate how well the baseline and proposed models generalize to solving textual entailment in domains that do not share the same annotation biases as the large NLI training sets, we take trained NLI models and test them on several NLI datasets. Datasets: We consider a total of 12 different NLI datasets. We use the 11 datasets studied by Poliak et al. (2018). These datasets include MNLI, SNLI, SciTail (Khot et al., 2018), AddOneRTE (ADD1) (Pavlick and Callison-Burch, 2016), Johns Hopkins Ordinal Commonsense Inference (JOCI) (Zhang et al., 2017), Multiple Premise Entailment (MPE) (Lai et al., 2017), Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014), and three datasets from White et al. (2017) which are automatically generated from existing datasets for other NLP tasks including: Semantic Proto-Roles (SPR) (Reisinger et al., 2015), Definite Pronoun Resolution (DPR) (Rahman and Ng, 2012), FrameNet Plus (FN+) (Pavlick et al., 2015), and the GLUE benchmark’s diagnostic test (Wang et al., 2019). We additionally consider the Quora Question Pairs (QQP) dataset, where the task is to determine whether two given questions are semantically matching (duplicate) or not. As in Gong et al. (2017), we interpret duplicate question pairs as an entailment relation and neutral otherwise. We use the same split ratio mentioned by Wang et al. (2017). Since the datasets considered have different label spaces, when evaluating on each target dataset, we map the model’s labels to the corresponding target dataset’s space. See Appendix D for more details. We strictly refrained from using any out-of-domain data when evaluating on the unbiased split of the same benchmark in Section 4. However, as shown by prior work (Belinkov et al., 2019a), since different NLI target datasets contain different amounts of the bias found in the large-scale NLI dataset, we need to adjust the amount of debiasing according to each target dataset. We consequently introduce a hyperparameter α for PoE to modulate the strength of the bias-only model in ensembling. We follow prior work (Belinkov et al., 2019a) and perform model selection on the dev set of each target dataset Data CE DFL ∆ PoE ∆ SICK 57.05 57.91 +0.9 57.28 +0.2 ADD1 87.34 88.89 +1.5 87.86 +0.5 DPR 49.50 50.68 +1.2 50.14 +0.6 SPR 59.85 61.41 +1.6 62.45 +2.6 FN+ 53.16 54.77 +1.6 53.51 +0.4 JOCI 50.06 51.13 +1.1 50.85 +0.8 MPE 69.50 70.2 +0.7 70.1 +0.6 SCITAIL 67.64 69.33 +1.7 71.40 +3.8 GLUE 54.08 54.80 +0.7 54.71 +0.6 QQP 67.78 69.28 +1.5 68.61 +0.8 MNLI 74.40 73.58 -0.8 73.61 -0.8 MNLI-M 73.98 74.0 0.0 73.49 -0.5 Table 6: Accuracy results of models with BERT transferring to new target datasets. All models are trained on SNLI and tested on the target datasets. ∆are absolute differences between our methods and the CE loss baseline. and then report results on the test set.4 We select hyper-parameters γ, α from {0.4,0.6,0.8,2,3,4,5}. 
Results: Table 6 shows the results of the debiased models and baseline with BERT. As shown in prior work (Belinkov et al., 2019a), the MNLI datasets have very similar biases to SNLI, which the models are trained on, so we do not expect any improvement in the relative performance of our models and the baseline for MNLI and MNLI-M. On all the remaining datasets, our proposed models perform better than the baseline, showing a substantial improvement in generalization by using our debasing techniques. We additionally compare with Belinkov et al. (2019a) in Appendix D and show that our methods substantially surpass their results. 0 2 4 6 γ 50 60 70 80 Accuracy SNLI Hard SNLI Figure 2: Accuracy of InferSent model trained with DFL, on the SNLI test and SNLI hard sets for different γ. 4Since the test sets are not available for MNLI, we tune on the matched dev set and evaluate on the mismatched dev set or vice versa. For GLUE, we tune on MNLI mismatched dev set. 6 Discussion Analysis of Debiased Focal Loss: As expected, improving the out-of-domain performance could come at the expense of decreased in-domain performance since the removed biases are useful for performing the in-domain task. This happens especially for DFL, in which there is a trade-off between in-domain and out-of-domain performance that depends on the parameter γ, and when the baseline model is not very powerful like InferSent. To understand the impact of γ in DFL, we train an InferSent model using DFL for different values of γ on the SNLI dataset and evaluate its performance on SNLI test and SNLI hard sets. As illustrated in Figure 2, increasing γ increases debiasing and thus hurts in-domain accuracy on SNLI, but out-of-domain accuracy on the SNLI hard set is increased within a wide range of values (see a similar plot for BERT in Appendix E). Correlation Analysis: In contrast to Belinkov et al. (2019a), who encourage only the encoder to not capture the unwanted biases, our learning strategies influence the parameters of the full model to reduce the reliance on unwanted patterns more effectively. To test this assumption, in Figure 3, we report the correlation between the element-wise loss of the debiased models and the loss of a bias-only model on the considered datasets. SNLI MNLI HANS 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Correlation Baseline RUBi DFL PoE Figure 3: Pearson correlation between the element-wise cross-entropy loss of the debiasing models and the bias-only model trained on each dataset. The results show that compared to the baselines, our debiasing methods, DFL and PoE, reduce the correlation to the bias-only model, confirming that our models are effective at reducing biases. Interestingly, on MNLI, PoE has less correlation with the bias-only model than DFL and also has better performance on the unbiased split of this dataset. On the other hand, on the HANS dataset, DFL loss is less correlated with the bias-only model than PoE and also obtains higher performance on the HANS dataset. 7 Conclusion We propose two novel techniques, product-of-experts and debiased focal loss, to reduce biases learned by neural models, which are applicable whenever one can specify the biases in the form of one or more bias-only models. The bias-only models are designed to leverage biases and shortcuts in the datasets. Our debiasing strategies then work by adjusting the cross-entropy loss based on the performance of these bias-only models, to focus learning on the hard examples and downweight the importance of the biased examples. 
Additionally, we extend our methods to combat multiple bias patterns simultaneously. Our proposed debiasing techniques are model agnostic, simple, and highly effective. Extensive experiments show that our methods substantially improve the model robustness to domainshift, including 9.8 points gain on FEVER symmetric test set, 7.4 on HANS dataset, and 4.8 points on SNLI hard set. Furthermore, we show that our debiasing techniques result in better generalization to other NLI datasets. Future work may include developing debiasing strategies that do not require prior knowledge of bias patterns and can automatically identify them. Acknowledgments We would like to thank Daniel Andor and Suraj Srinivas for their helpful comments. We additionally would like to thank the authors of Schuster et al. (2019); Cadene et al. (2019); McCoy et al. (2019b); Belinkov et al. (2019a) for their support to reproduce their results. This research was supported by the Swiss National Science Foundation under the project Learning Representations of Abstraction for Opinion Summarization (LAOS), grant number “FNS-30216”. Y.B. was supported by the Harvard Mind, Brain, and Behavior Initiative. References Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Rush. 2019a. Don’t take the premise for granted: Mitigating artifacts in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 877–891, Florence, Italy. Association for Computational Linguistics. Yonatan Belinkov, Adam Poliak, Stuart M Shieber, Benjamin Van Durme, and Alexander M Rush. 2019b. On adversarial removal of hypothesis-only bias in natural language inference. In Proceedings of the 8th Joint Conference on Lexical and Computational Semantics. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Remi Cadene, Corentin Dancette, Hedi Ben-younes, Matthieu Cord, and Devi Parikh. 2019. Rubi: Reducing unimodal biases in visual question answering. In Advances in neural information processing systems. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Yichen Gong, Heng Luo, and Jian Zhang. 
2017. Natural language inference over interaction space. In International Conference on Learning Representations. Gabriel Grand and Yonatan Belinkov. 2019. Adversarial regularization for visual question answering: Strengths, shortcomings, and side effects. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, pages 1–13. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 132–142, Hong Kong, China. Association for Computational Linguistics. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Divyansh Kaushik and Zachary C Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Conference on Artificial Intelligence. Alice Lai, Yonatan Bisk, and Julia Hockenmaier. 2017. Natural language inference from multiple premises. In Proceedings of the Eighth International Joint Conference on Natural Language Processing. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A sick cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), Reykjavik, Iceland. European Language Resources Association (ELRA). R Thomas McCoy, Junghyun Min, and Tal Linzen. 2019a. Berts of a feather do not generalize together: Large variability in generalization across models with similar test set performance. arXiv preprint arXiv:1911.02969. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Ellie Pavlick and Chris Callison-Burch. 2016. 
Most” babies” are” little” and most” problems” are” huge”: Compositional entailment in adjective-nouns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015. Framenet+: Fast paraphrastic tripling of framenet. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 408–413. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: the winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. In Advances in Neural Information Processing Systems, pages 1541–1551. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics. Tal Schuster, Darsh J Shah, Yun Jie Serene Yeo, Daniel Filizzola, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story ending biases in the story cloze test. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is everything: Recasting semantic resources into a unified evaluation framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Transactions of the Association for Computational Linguistics. 
A Fact Verification Base model: We fine-tune all models using BERT for 3 epochs and use the default parameters and default learning rate of 2e−5. Bias-only model: Our bias-only classifier is a shallow nonlinear classifier with 768, 384, 192 hidden units with Tanh nonlinearity. B Natural Language Inference Base model: InferSent uses a separate BiLSTM encoder to learn sentence representations for premise and hypothesis. It then combines these embeddings following Mou et al. (2016) and feeds them to the default nonlinear classifier. With InferSent we train all models for 20 epochs as default without using earlystopping. We use the default hyper-parameters and following Wang et al. (2019), we set the BiLSTM dimension to 512. We use the default nonlinear classifier with 512 and 512 hidden neurons with Tanh nonlinearity. With BERT, we finetune all models for 3 epochs. Bias-only model: For debiasing models using BERT, we use the same shallow nonlinear classifier explained in Appendix A, and for the ones using InferSent, we use a shallow linear classifier with 512 and 512 hidden units. Results: Table 7 shows results on the MNLI matched development and hard test sets. BERT InferSent Loss MNLI Hard ∆ MNLI Hard ∆ Development set results CE 84.41 76.56 69.97 57.03 RUBi 84.48 77.13 +0.6 70.51 57.97 +0.9 DFL 83.72 77.37 +0.8 60.78 57.88 +0.9 PoE 84.58 78.02 +1.5 66.02 59.37 +2.3 Test set results None 84.11 75.88 — — — PoE 84.11 76.81 +0.9 — — — Table 7: Results on the MNLI matched benchmark and MNLI matched hard set. ∆are absolute differences with CE loss. C Syntactic Bias in NLI Base model: We finetune all models for 3 epochs. Bias-only model: We use a nonlinear classifier with 6 and 6 hidden units with Tanh nonlinearity. Results: Table 8 shows the performance for each label (entailment and non entailment) on individual heuristics of the HANS dataset. Loss HANS Constituent Lexical Subsequence gold label: Entailment CE 98.98±0.6 96.41±0.8 99.72±0.1 RUBi 99.22±0.3 95.59±0.8 99.50±0.3 DFL 90.90±4.3 84.78±5.0 94.33±4.9 PoE 97.24±1.9 92.16±0.9 98.58±0.5 gold label: Non-entailment CE 20.12±5.8 48.86±5.7 7.18±0.7 RUBi 21.89±7.0 46.82±12.5 7.58±2.3 DFL 50.20±9.2 71.06±3.1 24.28±4.4 PoE 36.08±5.1 59.18±8.0 14.63±3.0 Table 8: Accuracy for each label (entailment or non-entailment) on individual heuristics of HANS. D Transfer Performance Mapping: We train all models on SNLI and evaluate their performance on other target datasets. SNLI contains three labels, contradiction, neutral, and entailment. Some of the datasets we consider contain only two labels. In the case of labels entailed and not-entailed, as in DPR, we map contradiction and neutral to the not-entailed class. In the case of labels entailment and neutral, as in SciTail, we map contradiction to neutral. Comparison with Belinkov et al. (2019a): We modified the implementations of Belinkov et al. (2019a) and corrected some implementation issues in the InferSent baseline (Conneau et al., 2017). Compared to the original InferSent implementation, the main differences in our implementation include: (a) We incorporated the fixes suggested for the bugs in the implementation of mean/max-pooling over BiLSTM in the InferSent baseline5 (b). We additionally observed that the aggregation of losses over each batch was computed with the average instead of the intended summation and we corrected it.6 (c) We followed the implementation of InferSent and we removed out-of-vocabulary (OOV) words from the sentence representation, while Belinkov et al. 
keep 5https://github.com/facebookresearch/ InferSent/issues/51 6The same observation is reported in https://github. com/facebookresearch/InferSent/pull/107. Data CE DFL ∆% PoE ∆% M1 ∆% M2 ∆% SICK 54.09 55.00 1.68 55.79 3.14 49.77 -7.99 49.77 -7.99 ADD1 75.19 78.29 4.12 77.00 2.41 67.44 -10.31 67.44 -10.31 DPR 49.95 50.59 1.28 49.95 0.00 50.87 1.84 50.87 1.84 SPR 41.31 47.95 16.07 50.50 22.25 51.51 24.69 51.51 24.69 FN+ 48.65 49.58 1.91 49.35 1.44 53.23 9.41 53.23 9.41 JOCI 46.47 46.48 0.02 47.53 2.28 44.83 -3.53 44.83 -3.53 MPE 60.60 60.70 0.17 61.80 1.98 56.40 -6.93 56.40 -6.93 SCITAIL 64.25 65.19 1.46 63.17 -1.68 56.40 -12.22 56.40 -12.22 GLUE 48.73 46.83 -3.90 49.09 0.74 43.93 -9.85 43.93 -9.85 QQP 61.80 66.24 7.18 66.36 7.38 62.46 1.07 62.46 1.07 MNLI 56.99 56.70 -0.51 56.59 -0.70 51.72 -9.25 51.72 -9.25 MNLI-M 57.01 57.75 1.30 57.84 1.46 53.99 -5.30 53.99 -5.30 Average — — 2.57 — 3.39 — -2.36 — -2.36 Table 9: Accuracy results of models with InferSent transferring to new target datasets. All models are trained on SNLI and tested on the target datasets. M1 and M2 are our re-implementation of Belinkov et al. (2019a). ∆are relative differences in percentage with respect to CE loss. . them by introducing an OOV token. We additionally observed during the pre-processing of some of the target datasets in the implementation of Belinkov et al., some of the samples are not considered due to the preprocessing issues. We fix the pre-processing issues and evaluate our models and our reimplementations of Belinkov et al. (2019a) on the same corpora. We set the BiLSTM dimension to 512 across all models. Note that Belinkov et al. use BiLSTM dimension of 2048, and due to the mentioned differences in implementations and datasets, the results reported in Belinkov et al. (2019a) are not comparable. However, we still on average surpass their reported results substantially. Our reimplementations and scripts to reproduce the results are publicly available in https: //github.com/rabeehk/robust-nli-fixed. As used in prior work to adjust the learning-rate of the bias-only and baseline models (Belinkov et al., 2019a), we introduce a hyperparameter β for the bias-only model to modulate the loss of the bias-only model in ensembling. We sweep hyper-parameters γ, α over {0.02, 0.05, 0.1, 0.6, 2.0, 4.0, 5.0} and β over {0.05,0.2,0.4,0.8,1.0}. Table 9 shows the results of our debiasing models (DFL, PoE), our reimplementations of proposed methods in Belinkov et al. (2019a) (M1, M2), and the baseline with InferSent (CE). The DFL model outperforms the baseline in 10 out of 12 datasets, while the PoE model outperforms the baseline in 9 datasets and does equally well on the DPR dataset. As shown in prior work (Belinkov et al., 2019a), the MNLI dataset has very similar biases to SNLI, which the models are trained on, so we do not expect any improvement in the relative performance of our models and the baseline for MNLI dataset. Interestingly, our methods obtain improvement on MNLI-M, in which the test data differs from training distribution. Our proposed debiasing methods, PoE and DFL, are highly effective, boosting the relative generalization performance of the baseline by 3.39% and 2.57% respectively, significantly surpassing the prior work of Belinkov et al. (2019a). Compared to M1 and M2, our methods outperform them on 9 datasets, while they do better on two datasets of SPR and FN+, and slightly better on the DPR dataset. 
However, note that DPR is a very small dataset and all models perform close to random-chance on this dataset. E Analysis of Debiased Focal Loss Figure 4 shows the impact of γ on BERT trained with DFL. Figure 4: Accuracy of the BERT model trained with DFL, on SNLI and SNLI hard sets for different γ.
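For reference, the bias-only classifiers described in Appendices A–C share the same shallow feed-forward form; below is a minimal PyTorch sketch using the BERT-setting sizes (768, 384, and 192 hidden units with Tanh). The input features (for instance, the encoding of the hypothesis alone) and the number of classes are placeholders, not part of the paper's specification.

import torch.nn as nn

class BiasOnlyClassifier(nn.Module):
    """Shallow nonlinear classifier trained only on biased features."""

    def __init__(self, input_dim=768, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 768), nn.Tanh(),
            nn.Linear(768, 384), nn.Tanh(),
            nn.Linear(384, 192), nn.Tanh(),
            nn.Linear(192, num_classes),
        )

    def forward(self, biased_features):
        # biased_features: e.g., a hypothesis-only sentence representation
        return self.net(biased_features)

The InferSent and HANS settings differ only in the layer widths (512/512 and 6/6 hidden units, respectively), so the same skeleton applies.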
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 836–845 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 836 Fine-grained Interest Matching for Neural News Recommendation Heyuan Wang†, Fangzhao Wu†, Zheng Liu†, and Xing Xie† † Microsoft Research Asia, Beijing 100080, China [email protected], [email protected], {zhengliu, xingx}@microsoft.com Abstract Personalized news recommendation is a critical technology to improve users’ online news reading experience. The core of news recommendation is accurate matching between user’s interests and candidate news. The same user usually has diverse interests that are reflected in different news she has browsed. Meanwhile, important semantic features of news are implied in text segments of different granularities. Existing studies generally represent each user as a single vector and then match the candidate news vector, which may lose fine-grained information for recommendation. In this paper, we propose FIM, a Finegrained Interest Matching method for neural news recommendation. Instead of aggregating user’s all historical browsed news into a unified vector, we hierarchically construct multilevel representations for each news via stacked dilated convolutions. Then we perform finegrained matching between segment pairs of each browsed news and the candidate news at each semantic level. High-order salient signals are then identified by resembling the hierarchy of image recognition for final click prediction. Extensive experiments on a real-world dataset from MSN news validate the effectiveness of our model on news recommendation. 1 Introduction Recently, people’s news reading habits have gradually shifted to digital content services. Many online news websites, such as Google News 1 and MSN News 2, aim to collect news from various sources and distribute them for users (Das et al., 2007; Lavie et al., 2010). However, the overwhelming number of newly-sprung news makes it difficult for users to find their interested content (Wu et al., 2019c). Therefore, personalized news recommendation becomes an important technology to 1https://news.google.com/ 2https://www.msn.com/news Historical Browsed News D1 Watch: Philip Rivers hilariously trolls Chiefs fans after win Dog's hilarious reaction to carrot NFL playoff picture: Saints close to clinching; Patriots fall behind Texans This woman lost 245 pounds over 5 years. Here's how she did it. Protective golden retriever prevents puppy from being scolded by owner 50 Genius Weight Loss Tricks You Haven't Tried Ranking the eight starting quarterbacks remaining in the NFL playoffs Candidate News D2 D3 D4 C1 C2 C3 Figure 1: Example of one user’s reading behavior from MSN News. The user has various interests including NFL sports, pets and the issue about weight loss. The highlighted text segments are crucial semantic clues, and the arrows of different colors indicate the relevant matching pairs for candidate news recommendation. alleviate information overload and improve users’ online reading experience (IJntema et al., 2010). The key to news recommendation lies in the accurate matching of user’s interests and candidate news. The same user usually has diverse interests, which are reflected in different news she has browsed. Meanwhile, the important semantic features of news are implied in text segments of different granularities. Figure 1 illustrates the challenges with an example. 
As demonstrated, different historical browsed news can reveal user’s interests about different topics or events. The first and second historical news are about pet dogs and the issue of weight loss respectively. Naturally, they provide critical clues to select the candidate news C2 and C3 which reveal relevant information. However, they are less informative to identify the candidate news C1, which is about the competition of National Football League (NFL). Besides, the matched segment pairs across browsed news and candidate news lie in different granularities, such as the words “Dog’s”-“puppy” and phrases “lost 245 pounds”-“Weight Loss”. Moreover, different segments in news texts have different importance 837 for selecting proper news candidates. For example, in the third historical browsed news D3, “Philip Rivers” and “Chiefs” are more important than other words like “hilariously” and “after” for inferring that the user is a fan of NFL, since they refer to the famous quarterback and team of this sport. Existing work, however, usually learns a single representation for each user by integrating all historical news that the user has browsed, then recommendations are performed by matching the final user vector and the candidate news vector (Okura et al., 2017; Wu et al., 2019e,b). For instance, Okura et al. (2017) encode news via denoising autoencoders, and learn representations of users from their browsed news via a GRU network. Wu et al. (2019e) apply multi-head self-attentions to learn news representations, then learn user representations by modeling the relatedness between browsed news. Wu et al. (2019b) enhance personalized news and user representations by exploiting the embedding of user’s ID to generate a query vector for attending to important words and news. Despite the improvements of these methods in news recommendation performance, they are limited in capturing fine-grained user-news matching signals, since user’s various latent interests implied in distinct historical readings cannot match with the candidate news until the final step of click prediction. In this paper, we propose a Fine-grained Interest Matching network (FIM), which is a new architecture for news recommendation that can tackle the above challenges. The advantages of FIM lie in two cores: the multi-level user/news representation and the fine-grained interest matching. Instead of representing each user as a single abstract vector, we employ hierarchical dilated convolutions in a unified module to construct multi-level representations of each news article based on the title and category annotations. By hierarchically stacking the dilated convolutions, the receptive input width at each layer grows exponentially, while the number of parameters increases only linearly. Meanwhile, the outputs of each layer are preserved as feature maps across different length of text segments, with no loss in coverage since any form of pooling or stride convolution is not applied. In this way, we can gradually obtain the semantic features of news from local correlation and long-term dependency at different granularities, including word, phrase, and sentence levels. Furthermore, to avoid information loss, FIM matches the text segments of the candidate news and each historical news browsed by the user at each semantic granularity. In practice, for each pair of news, the model constructs a segment-segment similarity matrix from word-level to sentence-level based on the hierarchical news representations. 
By this means, user’s reading interests implied in the browsing history can be recognized under the supervision of candidate news, and carried into matching with minimal loss, so as to provide sufficient clues about the content relevance for recommending proper news. Afterwards, we merge the multiple matching matrices of each news pair at each granularity into a 3D image, whose channels indicate the relevant degrees of different kinds of user-news matching patterns. By resembling the CNN-based hierarchy of image recognition, higherorder salient signals are identified to predict the probability of the user clicking the candidate news. We conducted extensive experiments on a realworld dataset collected from MSN news. Experimental results validate that our approach can effectively improve the performance of news recommendation compared with the state-of-the-art methods. 2 Related Works With the explosive growth of digital news, building personalized news recommender systems has drawn more attentions in both natural language processing and data mining fields (Phelan et al., 2011; Zheng et al., 2018; Wu et al., 2019a). Conventional news recommendation methods focus on utilizing manual feature engineering to build news and user representations for matching (Phelan et al., 2009; Li et al., 2010; Liu et al., 2010; Son et al., 2013; Li et al., 2014; Bansal et al., 2015). For example, Liu et al. (2010) used topic categories and interest features generated by a Bayesian model to build news and user representations. Son et al. (2013) extracted topic and location features from Wikipedia pages to build news representations for locationbased news recommendation. In recent years, deep learning based models have achieved better performance than traditional methods for news recommendation, due to their capabilities of distilling implicit semantic features in news content (Okura et al., 2017; Wang et al., 2018; An et al., 2019; Wu et al., 2019e,d). For example, Okura et al. (2017) learned news representations via denoising auto-encoders, then used recurrent neural networks to aggregate historical browsed 838 ... ... ... ... 3D CNN Word Embedding Multi-grained News Representation News-by-News Matching Matching Matrices Aggregation News Representation Module Cross Matching Module Click Prediction Module Historical Browsed News Candidate News 3D Matching Image Q ... HDC, dilation=1 HDC, dilation=2 HDC, dilation=3 HDC, dilation=1 HDC, dilation=2 HDC, dilation=3 HDC, dilation=1 HDC, dilation=2 HDC, dilation=3 HDC, dilation=1 HDC, dilation=2 HDC, dilation=3 Figure 2: Architecture of our FIM model. HDC (hierarchical dilated convolution) is the news encoder. news to learn user representations. Wang et al. (2018) enhanced the representation of news by exploiting the embeddings of extracted entities in a knowledge graph as a separate channel of the CNN input. Wu et al. (2019e) leveraged multi-head selfattentions to construct news representations based on the interactions between words, and constructed user representations based on the relatedness between news. An et al. (2019) proposed to learn long-term user preferences from the embeddings of their IDs, and learn short-term user interests from their recently browsed news via GRU network. (Wu et al., 2019a) proposed an attentive multi-view learning model to learn unified news representations from titles, bodies and topic categories by regarding them as different views of news. 
Different from these existing methods, in FIM, the representations of user’s multiple browsed news are not fused into an abstract user vector before matching with the candidate news. Instead, we perform matching between each pair of segments in the news texts from multiple semantic levels. Therefore, more fine-grained information can be distilled for the final recommendation. 3 Our Approach 3.1 Problem Definition The news recommendation problem can be formulated as follows. Given a user u, the set of historical news she has browsed at the online news platform is formulated as su = {d1, . . . , dn}. For a news candidate ci, a binary label yi ∈{0, 1} is adopted to indicate whether u will click ci in latter impressions. The aim is to build a prediction model g(·, ·). For each pair of user and candidate news (u, c), we can predict the probability that u would like to click c using the function g : su, c →ˆy. Recommendations are performed based on the ranking of candidate news according to their click scores. 3.2 Model Overview We present a Fine-grained Interest Matching network (FIM) to model g(·, ·). The architecture of FIM is illustrated in Figure 2, which contains three major components, i.e., a news representation module to construct hierarchical semantic features for news text segments, a cross interaction module to exploit and aggregate matching information from each pair of news at each level of granularity, and a prediction module to calculate the probability that the user will click the candidate news. Next, we introduce each component in detail. 3.2.1 News Representation Module We design a hierarchical dilated convolution (HDC) encoder to learn representations of news from multiple semantic views. Besides titles that can reflect the central information of news, at many digital platforms such as MSN, news articles are usually labeled with a category annotation (e.g., “sports”, “entertainment”) and a subcategory annotation (e.g., “football nba”, “movies celebrity”) to help indicate news topics and target users’ in839 2020/4/21 dilated_cnn.drawio dilation=1 dilation=2 dilation=3 word embedding Figure 3: Hierarchical Dilated Convolution (HDC). terests. HDC encodes each news by connecting its title, category and subcategory annotations into a sequence of words as input. Given the word sequence d = [x1, . . . , xN], where N is the sequence length, the model first looks up an embedding table to transform d into a matrix d0 = [x1, . . . , xN], where xj ∈Rd is a d-dimensional word embedding. Then hierarchical dilated convolution layers are applied to capture multi-grained semantic features in news texts. Different from standard convolution that convolves a contiguous subsequence of the input at each step, dilated convolution (Yu and Koltun, 2016) has a wider receptive field by skipping over δ input elements at a time, where δ is the dilation rate. For a context of xj and a convolution kernel W of size 2w + 1, the dilated convolution operation is: F(xt) = ReLU(W w M k=0 xj±kδ + b) , (1) where L is the vector concatenation, b is the bias and ReLU (Nair and Hinton, 2010) is the nonlinear activation function. As shown in Figure 3, the darker output of each convolution layer is a weighted combination of the lighter regular spaced inputs in the previous layer. We start with δ = 1 (equals to standard convolution) for the first layer to ensure that no element of the input sequence is excluded. 
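A compact way to implement such a stack of dilated convolutions is sketched below (see also the stacking described next). It assumes kernel size 3, dilation rates [1, 2, 3], 150 filters per layer, ReLU, and layer normalization, matching the settings reported later in Section 4.1; the padding scheme and other details are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn

class HDCEncoder(nn.Module):
    """Hierarchical dilated convolutions over a news word sequence.

    Returns the word embeddings plus one feature map per stacked layer,
    i.e., L + 1 representation levels of shape [batch, seq_len, filters].
    """

    def __init__(self, emb_dim=300, filters=150, kernel=3, dilations=(1, 2, 3)):
        super().__init__()
        self.convs = nn.ModuleList()
        self.norms = nn.ModuleList()
        in_dim = emb_dim
        for d in dilations:
            # 'same'-style padding so no positions are dropped (no pooling or stride)
            pad = d * (kernel - 1) // 2
            self.convs.append(nn.Conv1d(in_dim, filters, kernel, dilation=d, padding=pad))
            self.norms.append(nn.LayerNorm(filters))
            in_dim = filters

    def forward(self, word_emb):              # word_emb: [batch, seq_len, emb_dim]
        levels = [word_emb]
        x = word_emb
        for conv, norm in zip(self.convs, self.norms):
            h = conv(x.transpose(1, 2)).transpose(1, 2)   # convolve along the sequence
            x = norm(torch.relu(h))
            levels.append(x)
        return levels                          # [d0, d1, ..., dL]

Keeping every layer's output (rather than only the last one) is what later allows matching at word, phrase, and sentence granularities.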
Afterwards, by hierarchically stacking the dilated convolutions with wider dilation rates, the length of convolved text segments expands exponentially, and the semantic features of different n-grams can be covered using only a few layers and a modest number of parameters. Moreover, to prevent vanishing or exploding of gradients, we apply layer normalization (Ba et al., 2016) at the end of each convolution layer. Since there may be irrelevant information introduced to semantic units at a long distance, we practically design the multi-level dilation rates based on the performance in validation. The output of each stacked layer l is preserved as feature maps of the news text at a specific level of granularity, formulated as dl = [xl j]N j=1 ∈RN×fs, where fs is the number of filters for each layer. Suppose there are L layers stacked, the multi-grained news representations can be defined as [d0, d1, . . . , dL]. By this means, HDC gradually harvests lexical and semantic features from word and phrase levels with small dilation rates, and captures long dependences from sentence level with larger dilation rates. Meanwhile, the computational path is greatly shortened, and the negative effects of information loss caused by down-sampling methods such as max-pooling can be reduced. Our news encoder is superior to the recurrent units in parallel ability and the entirely attention-based approach in reducing token-pair memory consumptions. 3.2.2 Cross Interaction Module Given representations of the k-th browsed news [dl k]L l=0 and the candidate news [cl]L l=0, a segmentsegment matching matrix is constructed for each granularity, i.e., Ml k,c ∈RNdk×Nc, where l ∈ {0, L} is the semantic level, Ndk and Nc are the length of the news dk and c. The (i, j)-th element of Ml k,c is calculated by scaled dot product as: Ml k,c[i, j] = dl k[i] · cl[j]T √fs , (2) indicating the relevance between the i-th segment in dk and the j-th segment in c according to the l-th representation type. The L + 1 matching matrices for the news pair <dk, c> can be viewed as different feature channels of their matching information. To summarize the information of user’s entire reading sequence, FIM fuses all interaction matrices across each browsed news and the candidate news into a 3D matching image Q, formulated as: Q = {Qk,i,j}n×Ndk×Nc , (3) where n denotes the total number of browsed news in user history, and each pixel Qk,i,j is defined as: Qk,i,j = [Ml k,c[i, j]]L l=0 . (4) Specifically, each pixel is a concatenated vector with L + 1 channels, indicating the matching degrees between a certain segment pair of the news content at different levels of granularity. As user’s click behaviors may be driven by personalized interests or temporary demands and events, different historical browsed news has different usefulness and representativeness for matching 840 and recommending the proper candidate news. Inspired by Zhou et al. (2018) in the issue of dialogue system, we resemble the compositional hierarchy of image recognition, and employ a layered 3D convolution & max-pooling neural network to identify the salient matching signals from the whole image. The 3D convolution is the extension of typical 2D convolution, whose filters and strides are 3D cubes. Formally, the higher-order pixel at (k, i, j) on the z-th feature map of the t-th layer is computed as: Q(t,z) k,i,j =ELU X z′ Wt−1 X w=0 Ht−1 X h=0 Rt−1 X r=0 K(t,z) w,h,r·Q(t−1,z′) k+w,i+h,j+r+b(t) ! 
, (5) where z′ denotes each feature map of the previous layer, K(t,z) ∈RWt×Ht×Rt is a 3D convolution kernel with the size of Wt × Ht × Rt, and b(t) is the bias for the t-th layer. A max pooling operation is then adopted to extract salient signals as follows: \ Q(t,z) k,i,j =max  Q(t,z) [k:k+P (t,z) w −1],[i:i+P (t,z) h −1],[j:j+P (t,z) r −1]  , (6) where P (t,z) w , P (t,z) h and P (t,z) r are sizes of 3D maxpooling. Outputs of the final layer are concatenated as the integrated matching vector between the user and the candidate news, denoted as su,c ∈Rv. 3.2.3 Click Prediction Module In the recommendation scenario studied in this paper, recommendations are made based on ranking the candidate news articles according to their probabilities of being clicked by a user in an impression. Given the integrated matching vector su,c of a user and candidate news pair, the final click probability is calculated as: ˆyu,c = WT o su,c + bo , (7) where Wo and bo are learned parameters. Motivated by (Huang et al., 2013b) and (Wu et al., 2019e), we leverage the negative sampling technique for model training. For each news browsed by a user (regarded as a positive sample), we randomly sample K news which are showcased in the same impression but not clicked by the user as negative samples. Besides, the orders of these news are shuffled to avoid positional biases. FIM jointly predicts the click probability scores of the positive news and the K negative news during training. By this means, the news click prediction problem is reformulated as a (K +1)-way classification task. The loss function is designed to minimize the summation of negative log-likelihood of all positive samples, which is defined as: − S X i=1 log exp(ˆy+ ui,ci) exp(ˆy+ ui,ci) + PK k=1 exp(ˆy− ui,ci,k) , (8) where S is the number of positive training samples, and ci,k is the k-th negative sample in the same impression with the i-th positive sample. 4 Experiments 4.1 Dataset and Experimental Settings We conducted experiments on the Microsoft News dataset used in (Wu et al., 2019b)3, which was built from the user click logs of Microsoft News4. The detailed statistics are shown in Table 1. Logs in the last week were used for test, and the rest for model training. Besides, we randomly sampled 10% of logs in the training data for validation. In our experiments, the word embeddings are 300-dimensional and initialized using pre-trained Glove embedding vectors (Pennington et al., 2014). Due to the limitation of GPU memory, the maximum length of the concatenated word sequence of news title and category is set to 20, and at most 50 browsed news are kept for representing the user’s recently reading behaviors. We tested stacking 1-5 HDC layers with different dilation rates. The reported results utilize [1-2-3] hierarchy (dilation rate for each convolution layer) as it gains the best performance on the validation set. The window size and number of convolution filters for news representation are 3 and 150 respectively. For the cross interaction module, we use two-layered composition to distill higher-order salient features of the 3D matching image, and the number and window size of 3D convolution filters are 32-[3,3,3] for the first layer and 16-[3,3,3] for the second layer, with [1,1,1] stride. The followed max-pooling size is [3,3,3] with [3,3,3] stride. Meanwhile, the negative sampling ratio K is set to 4. Adam (Kingma and Ba, 2014) is used as the optimizer, the mini-batch size is 100, and the initial learning rate is 1e-3. 
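To make the cross interaction and the training objective concrete, the sketch below assembles the 3D matching image Q of Equations (2)–(4) and the (K+1)-way softmax loss of Equation (8). Tensor shapes are simplified assumptions: all news are assumed to be padded to fixed lengths, and the per-level scaling uses each level's own width (Equation (2) scales by the square root of the number of filters fs).

import math
import torch

def matching_image(browsed_levels, cand_levels):
    """Build the 3D matching image Q of Eqs. (3)-(4).

    browsed_levels: list of n browsed news, each a list of L+1 tensors
                    of shape [N_d, dim_l] (one tensor per representation level)
    cand_levels:    list of L+1 tensors of shape [N_c, dim_l] for the candidate
    Returns Q with shape [n, N_d, N_c, L+1].
    """
    per_news = []
    for d_levels in browsed_levels:
        channels = []
        for d_l, c_l in zip(d_levels, cand_levels):
            # Eq. (2): scaled dot product between every segment pair of the two news
            m = d_l @ c_l.transpose(0, 1) / math.sqrt(d_l.size(-1))   # [N_d, N_c]
            channels.append(m)
        per_news.append(torch.stack(channels, dim=-1))                # [N_d, N_c, L+1]
    return torch.stack(per_news, dim=0)                               # [n, N_d, N_c, L+1]

def kway_nll(pos_score, neg_scores):
    """(K+1)-way softmax loss of Eq. (8) for one positive and K sampled negatives."""
    scores = torch.cat([pos_score.view(1), neg_scores.view(-1)])
    return -torch.log_softmax(scores, dim=0)[0]

The two-layer 3D convolution and max-pooling network of Equations (5)–(6) is then applied over Q, and the flattened output is scored with the linear layer of Equation (7) before entering the loss above.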
Following the settings of state-of-the-art methods (Okura et al., 2017; Wu et al., 2019e), we use popular ranking metrics to evaluate the performance of each model, including AUC (Area 3A large-scale public version of Microsoft News dataset for news recommendation can be found at https://msnews. github.io 4https://microsoftnews.msn.com 841 # users 10,000 # topic categories 14 # news 42,255 # subtopic categories 284 # impressions 445,230 # positive samples 489,644 avg. # words per title 11.29 # negative samples 6,651,940 Table 1: Statistics of the dataset. Under the ROC Curve) (Bradley, 1997), MRR (Mean Reciprocal Rank) (Voorhees et al., 1999), and NDCG (Normalized Discounted Cumulative Gain) (J¨arvelin and Kek¨al¨ainen, 2002). We independently repeated each experiment for 10 times and reported the average performance. 4.2 Comparison Methods We compare FIM with the following methods: Manual Feature-based Methods: Traditional recommendation methods which rely on manual feature engineering to build news and user representations, including (1) LibFM (Rendle, 2012), a feature-based matrix factorization model that is widely used in recommendations. We extract TFIDF features from users’ browsed news and candidate news, and concatenate them as the input for LibFM; (2) DSSM (Huang et al., 2013a), a deep structured semantic model with word hashing via character trigram and multiple dense layers. All browsed news are merged into a long document as the query; (3) Wide & Deep (Cheng et al., 2016), a popular recommendation method that combines a wide channel for linear transformations and a deep channel with multiple dense layers. The same features with LibFM are used for both channels; (4) DeepFM (Guo et al., 2017), combining factorization machines and deep neural networks with the same features as LibFM. Neural Recommendation Methods: Neural networks specially designed for news recommendation, including (1) DFM (Lian et al., 2018), a deep fusion model combining dense layers with different depths and using attention mechanism to select important features; (2) DKN (Wang et al., 2018), incorporating entity information in knowledge graphs with Kim CNN (Kim, 2014) to learn news representations and using news-level attention network to learn user representations; (3) GRU (Okura et al., 2017), using auto-encoders to represent news and a GRU network to represent users; (4) NRMS (Wu et al., 2019e), leveraging multi-head self-attentions for news and user representation learning; (5) HiFi Ark (Liu et al., 2019), summarizing user history into highly compact and complementary vectors as archives, and learning candidate-dependent user Methods AUC MRR NDCG@5 NDCG@10 LibFM 0.5661 0.2414 0.2689 0.3552 DSSM 0.5949 0.2675 0.2881 0.3800 Wide&Deep 0.5812 0.2546 0.2765 0.3674 DeepFM 0.5830 0.2570 0.2802 0.3707 DFM 0.5861 0.2609 0.2844 0.3742 DKN 0.6032 0.2744 0.2967 0.3873 GRU 0.6102 0.2811 0.3035 0.3952 NRMS 0.6275 0.2985 0.3217 0.4139 Hi-Fi Ark 0.6027 0.3162 0.3335 0.4204 NPA 0.6243 0.3321 0.3535 0.4380 FIM 0.6359⋆ 0.3354⋆ 0.3582⋆ 0.4436⋆ FIMfirst 0.6258 0.3266 0.3484 0.4348 FIMlast 0.6319 0.3323 0.3549 0.4407 Table 2: The performance of different methods on news recommendation. The best and second best results are highlighted in boldface and underlined respectively. ⋆The improvement over all baseline methods is significant at p-value < 0.05. 
representation via attentive aggregation of such archives; (6) NPA (Wu et al., 2019b), using personalized attention with the user ID’s embedding as the query vector to select important words and news. Ablation Variants: To verify the effects of multi-grained representation and sequential matching, we further set up two ablation models for comparison, i.e., (1) FIMfirst: a variant in which we use the feature maps of the first news representation layer for matching and recommendation. In this scenario, the HDC module degenerates into a one-layer standard CNN encoder. (2) FIMlast: a variant using the outputs of the last layer in HDC (namely, the L-th embedding type) to represent each news for matching. Due to the hierarchical representation architecture, higher-level features synthesize information from lower-level features, and can model more complex lexical and semantic clues. 4.3 Experimental Results Table 2 shows the results of our model and all comparative methods. Several observations can be made. First, neural news recommendation methods (e.g., GRU, NRMS, Hi-Fi Ark, NPA) are generally better than traditional methods (e.g., LibFM, DeepFM) that are based on manual feature engineering. The reason might be that handcrafted features are usually not optimal, and deep neural networks have the advantage of extracting implicit semantic features and modeling latent relationships between user and news representations. Second, our model FIM consistently outperforms other baselines in terms of all metrics, including the state-of-the-art deep learning based models. This validates the advantage of the pair-wise multi-level matching architecture in detecting fine-grained matching information from news segment pairs to predict the probability of a user clicking a candidate news. Figure 4: Performances w.r.t. different hyper-parameters and input information. (a) Negative sampling ratio K; (b) 3D CNN hierarchy for image Q; (c) Hierarchy of HDC news encoder; (d) Incorporating two-level category annotations.
Third, both FIMfirst and FIMlast show a decrease of performance compared to FIM. The latter is better than the former, indicating the effectiveness of constructing higher-level representations on the basis of low levels via the hierarchical mechanism of HDC. Besides, compared with DKN that utilizes knowledge-enhanced CNNs to learn news representations, FIMfirst has a better performance, illustrating the advantage of pair-wise matching fashion. Another notable thing is that while FIMlast underperforms FIM, it can outperform all other competitors on all metrics. However, the benefit of interacting news pairs at multigrained semantic levels is still significant. 5 Analysis In this section, we further investigate the impacts of different parameters and inputs on the model performance, and discuss the contribution of multigrained representation and matching architecture. 5.1 Quantity & Input Analysis We first study how FIM perfroms with different negative sampling ratio K. Figure 4(a) shows the experimental results. We can find that the performance consistently improves when K is lower than 5, then begins to decline. The possible reason is that with a too small K, the useful information exploited from negative samples is limited. However, when too many negative samples are incorporated, they may become dominant and the imbalance of training data will be increased. Thus it is more difficult for the model to precisely recognize the positive samples, which will also affect the recommendation performance. Overall, the optimal setting of K is moderate (e.g., K = 4). We then explore the influence of the 3D convolution & max-pooling neural network for processing the matching image Q. Comparing results are illustrated in Figure 4(b), where the CNN hierarchy a b means that the number of filters for the first layer and the second layer are set to a and b, separately. As shown, given the filter number a for the first layer, the performance first increases with a larger filter number b for the second layer, since more high-order information can be extracted. Then the performance begins to decrease, possibly because 843 (a) M1 (b) M2 (c) M3 Figure 5: Matching matrices visualization, darker area means larger value. more noisy patterns are introduced to the model (e.g., the group of [32 8, 32 16, 32 32]). Besides, a similar trend exists in the hierarchies with the same value b and different value a (e.g., the group of [16 8, 32 8, 64 8]). We conduct other experiments by changing the window size in [2,3,4,5] and the number of convolution layers in [1,2,3]. Results show that the optimal hierarchy is two-layered CNNs, with 32×[3,3,3] filters for the first layer and 16×[3,3,3] filters for the second layer. We further compare different combinations of the number of dilated convolution filters and stacked layers in the HDC news representation module. Figure 4(c) demonstrates the results, where darker areas represent larger values. We observe a consistent trend over settings with different number of filters at each layer, i.e., there is a significant improvement during the first few stacked layers, and then the performance decreases a lot when the depth grows to 5. The results indicate that depth of representation layers indeed matters in terms of matching and recommendation accuracy. The optimal setting of the number of stacked layers and convolution filters is 3 and 150 respectively. 
We think the reason might be that in this scenario, the perceived field of dilated convolution filters at each layer ranges among [3-7-13] (with dilation rates as [1-2-3]), which is sufficient for modeling multi-grained n-gram features through hierarchical composition of local interactions, compared to the average length of news word sequences. We also investigate the effectiveness of incorporating two-level category annotations of news as inputs. The results are shown in Figure 4(d). We can find that incorporating either categories or subcategories can benefit the performance of our model. This is interpretable since category annotations are helpful to reveal user’s interested aspects more explicitly. In addition, enhancing news representations with subcategories is better than with categories. This is probably because compared to the general category labels, subcategories can provide more concrete and detailed information to indicate the core topic of news content. Overall, jointly incorporating the two-level category annotations can achieve the best performance. 5.2 Visualization In this subsection, we further study the effectiveness of constructing hierarchical news representations and performing multi-grained interest matching. Figure 5 gives visualizations of the multigrained matching matrices (defined as formula 2) between historical browsed news and candidate news for a user, where Ml denotes a matching matrix of a news pair at the l-th representation level. We observe that the important matching information captured by the 1st-level matching matrix is mainly lexical relevance. For example, the words “football”, “nfl”, “playoff”, “playoffs” and “quarterbacks” are more correlated and assigned higher matching values in M1, which may due to their similar co-occurrence information encoded in word embeddings. Differently, higher-level matching matrices have the ability to identify more sophisticated semantic structures and latent long-term dependencies. From Figure 5(b), the interactive areas between the segments “weight loss” in the candidate news and “lost pounds” in the browsed news significantly gain larger matching scores among the 2-nd level semantic representations. In the matching matrix M3 in Figure 5(c), the subsequences about “trump walks out” are distinguished, since the expressions have correlated meanings. Mean844 while, the results also indicate that our model has the ability to identify important segments of a sentence and ignore the parts with less information, which is helpful to capture user’s interested topics or events more accurately. 6 Conclusion and Future Work In this paper, we propose a new architecture for neural news recommendation based on multi-grained representation and matching. Different from previous work that first integrates user’s reading history into a single representation vector and then matches the candidate news representation, our model can capture more fine-grained interest matching signals by performing interactions between each pair of news at multi-level semantic granularities. Extensive experiments on a real-world dataset collected from MSN news show that our model significantly outperforms the state-of-the-art methods. In the future, we will do more tests and surveys on the improvement of business objectives such as user experience, user engagement and service revenue. References Mingxiao An, Fangzhao Wu, Chuhan Wu, Kun Zhang, Zheng Liu, and Xing Xie. 2019. Neural news recommendation with long- and short-term user representations. 
In ACL, pages 336–345. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Trapit Bansal, Mrinal Das, and Chiranjib Bhattacharyya. 2015. Content driven user profiling for comment-worthy recommendations of news and blog articles. In RecSys, pages 195–202. Andrew P Bradley. 1997. The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7):1145–1159. Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In DLRS, pages 7–10. Abhinandan S Das, Mayur Datar, Ashutosh Garg, and Shyam Rajaram. 2007. Google news personalization: scalable online collaborative filtering. In WWW, pages 271–280. Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. Deepfm: A factorizationmachine based neural network for CTR prediction. In IJCAI, pages 1725–1731. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013a. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333–2338. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013b. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333–2338. Wouter IJntema, Frank Goossen, Flavius Frasincar, and Frederik Hogenboom. 2010. Ontology-based news recommendation. In EDBT/ICDT Workshops, page 16. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Talia Lavie, Michal Sela, Ilit Oppenheim, Ohad Inbar, and Joachim Meyer. 2010. User attitudes towards news content personalization. International journal of human-computer studies, 68(8):483–495. Lei Li, Li Zheng, Fan Yang, and Tao Li. 2014. Modeling and broadening temporal user interest in personalized news recommendation. Expert Systems with Applications, 41(7):3168–3177. Lihong Li, Wei Chu, John Langford, and Robert E Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In WWW, pages 661–670. Jianxun Lian, Fuzheng Zhang, Xing Xie, and Guangzhong Sun. 2018. Towards better representation learning for personalized news recommendation: a multi-channel deep fusion approach. In IJCAI, pages 3805–3811. Jiahui Liu, Peter Dolan, and Elin Rønby Pedersen. 2010. Personalized news recommendation based on click behavior. In IUI, pages 31–40. Zheng Liu, Yu Xing, Fangzhao Wu, Mingxiao An, and Xing Xie. 2019. Hi-fiark: Deep user representation via high-fidelity archive network. In IJCAI, pages 3059–3065. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In ICML, pages 807–814. Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. 2017. Embedding-based news recommendation for millions of users. In KDD, pages 1933–1942. 845 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Owen Phelan, Kevin McCarthy, Mike Bennett, and Barry Smyth. 2011. 
Terms of a feather: Contentbased news recommendation and discovery using twitter. In ECIR, pages 448–459. Owen Phelan, Kevin McCarthy, and Barry Smyth. 2009. Using twitter to recommend real-time topical news. In RecSys, pages 385–388. Steffen Rendle. 2012. Factorization machines with libfm. ACM Transactions on Intelligent Systems and Technology (TIST), 3(3):57. Jeong-Woo Son, A Kim, Seong-Bae Park, et al. 2013. A location-based news article recommendation with explicit localized semantic analysis. In SIGIR, pages 293–302. Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, volume 99, pages 77– 82. Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. Dkn: Deep knowledge-aware network for news recommendation. In WWW, pages 1835– 1844. Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019a. Neural news recommendation with attentive multiview learning. In IJCAI, pages 3863–3869. Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019b. Npa: Neural news recommendation with personalized attention. In KDD, pages 2576–2584. Chuhan Wu, Fangzhao Wu, Mingxiao An, Yongfeng Huang, and Xing Xie. 2019c. Neural news recommendation with topic-aware news representation. In ACL, pages 1154–1159. Chuhan Wu, Fangzhao Wu, Mingxiao An, Tao Qi, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019d. Neural news recommendation with heterogeneous user behavior. In EMNLP, pages 4873–4882. Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, and Xing Xie. 2019e. Neural news recommendation with multi-head selfattention. In EMNLP-IJCNLP, pages 6390–6395. Fisher Yu and Vladlen Koltun. 2016. Multi-scale context aggregation by dilated convolutions. In ICLR. Guanjie Zheng, Fuzheng Zhang, Zihan Zheng, Yang Xiang, Nicholas Jing Yuan, Xing Xie, and Zhenhui Li. 2018. Drn: A deep reinforcement learning framework for news recommendation. In WWW, pages 167–176. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In ACL, pages 1118–1127.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8717–8729 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8717 Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance Prasetya Ajie Utama†‡ , Nafise Sadat Moosavi‡, Iryna Gurevych‡ †Research Training Group AIPHES ‡Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universit¨at Darmstadt https://www.ukp.tu-darmstadt.de Abstract Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution. Recently, several proposed debiasing methods are shown to be very effective in improving out-of-distribution performance. However, their improvements come at the expense of performance drop when models are evaluated on the in-distribution data, which contain examples with higher diversity. This seemingly inevitable trade-off may not tell us much about the changes in the reasoning and understanding capabilities of the resulting models on broader types of examples beyond the small subset represented in the outof-distribution data. In this paper, we address this trade-off by introducing a novel debiasing method, called confidence regularization, which discourage models from exploiting biases while enabling them to receive enough incentive to learn from all the training examples. We evaluate our method on three NLU tasks and show that, in contrast to its predecessors, it improves the performance on out-of-distribution datasets (e.g., 7pp gain on HANS dataset) while maintaining the original in-distribution accuracy.1 1 Introduction Despite the impressive performance on many natural language understanding (NLU) benchmarks (Wang et al., 2018), recent pre-trained language models (LM) such as BERT (Devlin et al., 2019) are shown to rely heavily on idiosyncratic biases of datasets (McCoy et al., 2019b; Schuster et al., 2019; Zhang et al., 2019). These biases are commonly characterized as surface features of input examples that are strongly associated with the target labels, e.g., occurrences of negation words in 1The code is available at https://github.com/ UKPLab/acl2020-confidence-regularization natural language inference (NLI) datasets which are biased towards the contradiction label (Gururangan et al., 2018; Poliak et al., 2018). As a ramification of relying on biases, models break on the out-of-distribution data, in which such associative patterns between the surface features and the target labels are not present. This brittleness has, in turn, limited their practical applicability in some extrinsic use cases (Falke et al., 2019). This problem has sparked interest among researchers in building models that are robust against dataset biases. Proposed methods in this direction build on previous works, which have largely explored the format of several prominent labelrevealing biases on certain datasets (Belinkov et al., 2019). Two current prevailing methods, product-ofexpert (He et al., 2019; Mahabadi and Henderson, 2019) and learned-mixin (Clark et al., 2019a) introduce several strategies to overcome the known biases by correcting the conditional distribution of the target labels given the presence of biased features. They achieve this by reducing the importance of examples that can be predicted correctly by using only biased features. 
As a result, models are forced to learn from harder examples in which utilizing solely superficial features is not sufficient to make correct predictions. While these two state-of-the-art debiasing methods provide a remarkable improvement on the targeted out-of-distribution test sets, they do so at the cost of degrading the model’s performance on the in-distribution setting, i.e., evaluation on the original test data which contains more diverse inference phenomena. It raises a question on whether these debiasing methods truly help in capturing a better notion of language understanding or simply biasing models to other directions. Ideally, if such an improvement is achieved for the right reasons (i.e., better reasoning capabilities by learning a more general feature representation), a debiased model 8718 product-ofexpert learnedmixin conf-reg (our) in-distribution out-of-distribution calibration requires biased model    requires hyperparameter    Table 1: Comparison of our method against the state-ofthe-art debiasing methods. Learned-mixin (Clark et al., 2019a) is a parameterized variant of Product-of-expert (He et al., 2019; Mahabadi and Henderson, 2019). Our novel confidence regularization method improves the out-of-distribution performance while optimally maintain the in-distribution accuracy. should still be able to maintain its accuracy on previously unambiguous instances (i.e., instances that are predicted correctly by the baseline model), even when they contain biases. In this work, we address this shortcoming by introducing a novel debiasing method that improves models’ performance on the out-of-distribution examples while preserves the in-distribution accuracy. The method, called confidence regularization, draws a connection between the robustness against dataset biases and the overconfidence prediction problem in neural network models (Feng et al., 2018; Papernot et al., 2016). We show that by preventing models from being overconfident on biased examples, they are less likely to exploit the simple cues from these examples. The motivation of our proposed training objective is to explicitly encourage models to make predictions with lower confidence (i.e., assigning a lower probability to the predicted label) on examples that contain biased features. Table 1 shows the comparison of our method with the existing state-of-the-art debiasing methods: product-of-expert and learned-mixin. We show that our method is highly effective in improving outof-distribution performance while preserving the in-distribution accuracy. For example, our method achieves 7 points gain on an out-of-distribution NLI evaluation set, while slightly improves the in-distribution accuracy. Besides, we show that our method is able to improve models’ calibration (Guo et al., 2017) so that the confidences of their predictions are more aligned with their accuracies. Overall, our contributions are the following: • We present a novel confidence regularization method to prevent models from utilizing biased features in the dataset. We evaluate the advantage of our method over the state-of-theart debiasing methods on three tasks, including natural language inference, fact verification, and paraphrase identification. Experimental results show that our method provides competitive out-of-distribution improvement while retaining the original in-distribution performance. 
• We provide insights on how the debiasing methods behave across different datasets with varying degrees of biases and show that our method is more optimal when enough biasfree examples are available in the dataset. 2 Related Work Biases in Datasets Researchers have recently studied more closely the success of large fine-tuned LMs in many NLU tasks and found that models are simply better in leveraging biased patterns instead of capturing a better notion of language understanding for the intended task (Bender and Koller, 2020). Models’ performance often drops to a random baseline when evaluated on out-of-distribution datasets which are carefully designed to be void of the biases found in the training data. Using such targeted evaluation, McCoy et al. (2019b) observe that models trained on MNLI dataset (Williams et al., 2018) leverage syntactic patterns involving word overlap to blindly predict entailment. Similarly, Schuster et al. (2019) show that the predictions of fact verification models trained for the FEVER task (Thorne et al., 2018) are largely driven by the presence of indicative words in the input claim sentences. Following similar observations across other tasks and domains, e.g., visual question-answering (Agrawal et al., 2016), paraphrase identification (Zhang et al., 2019), and argument reasoning comprehension (Niven and Kao, 2019), researchers proposed improved data collection techniques to reduce the artifacts that result in dataset biases. While these approaches are promising, only applying them without additional efforts in the modeling part may still deliver an unsatisfactory outcome. For instance, collecting new examples by asking human annotators to conform to specific rules may be costly and thus limit the scale and diversity of the resulting data (Kaushik et al., 2020). Recently proposed adversarial filtering methods (Zellers et al., 2019; Sakaguchi et al., 2019) are more cost effective but are not guaranteed to be artifacts-free. It is, 8719 therefore, crucial to develop learning methods that can overcome biases as a complement to the data collection efforts. Debiasing Models There exist several methods that aim to improve models’ robustness and generalization by leveraging the insights from previous work about the datasets’ artifacts. In the NLI task, Belinkov et al. (2019) make use of the finding that partial input information from the hypothesis sentence is sufficient to achieve reasonable accuracy. They then remove this hypothesis-only bias from the input representation using an adversarial training technique. More recently, three concurrent works (Clark et al., 2019a; He et al., 2019; Mahabadi and Henderson, 2019) introduce a modelagnostic debiasing method for NLU tasks called product-of-expert. Clark et al. (2019a) also propose an adaptive variant of this method called learned-mixin. These two methods first identify examples that can be predicted correctly based only on biased features. This step is done by using a biased model2, which is a weak classifier that is trained using only features that are known to be insufficient to perform the task but work well due to biases. The output of this pre-trained biased model is then used to adjust the loss function such that it down-weights the importance of examples that the biased model can solve. While this approach prevents models from learning the task mainly using biased features, it also reduces model’s ability to learn from examples that can be solved using these features. 
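For concreteness, the ensembling at the core of product-of-expert, which the paper later writes as p′i = softmax(log pi + log bi), can be sketched as follows. This is an illustrative NumPy sketch under our own function names, not the authors' released code.

    import numpy as np

    def softmax(logits):
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def product_of_expert(main_probs, biased_probs, eps=1e-12):
        # Combine the main model and the biased model in log space.
        # Training against the ensembled distribution down-weights the
        # gradient on examples the biased model already predicts well.
        return softmax(np.log(main_probs + eps) + np.log(biased_probs + eps))

Because the ensembled output already assigns high probability to the correct label on biased examples, the main model receives little training signal from them.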
As a result, models are unable to optimize accuracy on the original training distribution, and they possibly become biased in some other ways. Similar to these methods, our method also uses a biased model to identify examples that exhibit biased features. However, instead of using it to diminish the training signal from these examples, we use it to scale the confidence of models' predictions. This enables the model to receive enough incentive to learn from all of the training examples.

Confidence Regularization Methods for regularizing the output distribution of neural network models have been used to improve generalization. Pereyra et al. (2017) propose to penalize the entropy of the output distribution for encouraging models to be less confident in their predictions. Previously, Szegedy et al. (2016) introduce a label smoothing mechanism to reduce overfitting by preventing the model from assigning a full probability to each training example. Our method regularizes models' confidence differently: we first perform an adaptive label smoothing for the training using knowledge distillation (Hinton et al., 2015), which, by itself, is known to improve the overall performance. However, our method involves an additional bias-weighted scaling mechanism within the distillation pipeline. As we will show, our proposed scaling mechanism is crucial in leveraging the knowledge distillation technique for the purpose of overcoming the targeted bias while maintaining high accuracy in the training distribution. Similar to our work, Feng et al. (2018) propose a regularization method that encourages the model to be uncertain on specific examples. However, the objective and the methodology are different: they apply an entropy penalty term on examples that appear nonsensical to humans with the goal of improving models' interpretability. On the contrary, we apply our confidence regularization on every training example with a varying strength (i.e., higher uncertainty on more biased examples) to improve models' performance on the out-of-distribution data.

2We follow the terminology used by He et al. (2019).

3 Method Overview We consider the common formulation of NLU tasks as a multi-class classification problem. We are given a dataset D that consists of n examples (xi, yi), i ∈ [1, n], with xi ∈ X a pair of sentences and yi ∈ {1, 2, ..., K}, where K is the number of classes. The goal is to learn a robust classifier Fm, which computes the probability distribution over target labels, i.e., Fm(xi) = pi. The key idea of our method is to explicitly train Fm to compute lower probability, i.e., less confidence, on the predicted label when the input example exhibits a bias. This form of confidence regularization can be done by computing the loss function with the "soft" target labels that are obtained through our proposed smoothing mechanism. The use of soft targets as the training objective is motivated by the observation that the probability distribution of labels for each sample provides valuable information about the underlying task (Hinton et al., 2015; Pereyra et al., 2017). When the soft targets of certain examples have higher entropy, models can be explicitly taught that some labels are more likely to be correct than the others. Based on this intuition, we argue that adjusting the confidence on soft labels can better inform the model about the true conditional distribution of the labels given the presence of the biased features.

Figure 1: An overview of our debiasing strategy when applied to the MNLI dataset, illustrated with the premise "The air defense of America began with this call." and the hypothesis "This call began the air defense of America." An input example that contains lexical-overlap bias is predicted as entailment by the teacher model with a high confidence. When the biased model predicts this example well, the output distribution of the teacher will be re-scaled to indicate higher uncertainty (lower confidence). The re-scaled output distributions are then used to distill the main model.

We first produce a meaningful softened target distribution for each training example by performing knowledge distillation (Hinton et al., 2015). In this learning framework, a "teacher" model Ft, which we parameterize identically to the main model Fm, is trained on the dataset D using a standard classification loss. We then use Ft to compute the output probability distribution ˆpi, where Ft(xi) = ˆpi. In the original knowledge distillation approach, the output of the teacher model ˆpi is then used to train Fm. We extend this approach by adding a novel scaling procedure before we distill the teacher model into Fm. We define a scaling function S that takes the probability distribution ˆpi and scales it such that the probability assigned to its predicted label is lowered when the example can be predicted well by relying only on the biased features.

Training the biased model For several NLU tasks, biased features are known a priori, e.g., the word overlapping features in NLI datasets are highly correlated with the entailment label (McCoy et al., 2019b). We leverage this a priori knowledge to design a measure of how well an example can be predicted given only the biased features. We refer to this measure as the bias weight, denoted as βi for every example xi. Similar to previous debiasing methods (Clark et al., 2019a), we compute bias weights using a biased model. This biased model, denoted as Fb, predicts the probability distribution bi, where Fb(xi) = bi = ⟨bi,1, bi,2, ..., bi,K⟩. We define the bias weight βi as the probability assigned by Fb to the ground truth label: βi = bi,c (the c-th label is the ground truth).

Bias-weighted scaling As illustrated in Figure 1, our method involves scaling the teacher output ˆpi using βi. We do this by defining a scaling function S : R^K → R^K:

$$S(\hat{p}_i, \beta_i)_j = \frac{\hat{p}_{i,j}^{\,(1-\beta_i)}}{\sum_{k=1}^{K} \hat{p}_{i,k}^{\,(1-\beta_i)}}, \quad j = 1, \ldots, K.$$

The value of βi controls the strength of the scaling: as βi → 1, the scaled probability assigned to each label approaches 1/K, which represents minimum confidence. Conversely, when βi → 0, the teacher's probability distribution remains unchanged, i.e., S(ˆpi, 0) = ˆpi.

Training the main model The final step is to train Fm by distilling from the scaled teacher model's outputs. Since the main model is parameterized identically to the teacher model, we refer to this step as self-distillation (Furlanello et al., 2018). Self-distillation is performed by training Fm on pairs of input and the obtained soft target labels (xi, S(ˆpi, βi)). Specifically, Fm is learned by minimizing a standard cross-entropy loss between the scaled teacher's output S(ˆpi, βi) and the current prediction of the main model:

$$\mathcal{L}(x_i, S(\hat{p}_i, \beta_i)) = -S(\hat{p}_i, \beta_i) \cdot \log F_m(x_i)$$

In practice, each S(ˆpi, βi) is computed only once as a preprocessing step.
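A minimal NumPy sketch of the bias-weighted scaling and the distillation loss described above; the function and variable names are ours, and the snippet illustrates the equations rather than reproducing the authors' implementation.

    import numpy as np

    def scale_teacher_probs(teacher_probs, beta):
        # Bias-weighted scaling S(p_hat, beta): raise each probability to
        # the power (1 - beta) and renormalize. beta -> 1 pushes the
        # distribution toward uniform (1/K); beta -> 0 leaves it unchanged.
        scaled = teacher_probs ** (1.0 - beta)
        return scaled / scaled.sum()

    def soft_target(teacher_probs, biased_probs, gold_label):
        # The bias weight beta_i is the probability the biased model
        # assigns to the ground-truth label; the teacher distribution is
        # then rescaled accordingly.
        beta = biased_probs[gold_label]
        return scale_teacher_probs(teacher_probs, beta)

    def distillation_loss(main_probs, target_probs, eps=1e-12):
        # Cross-entropy between the scaled teacher output (soft target)
        # and the main model's current prediction.
        return -np.sum(target_probs * np.log(main_probs + eps))

    # Example: a strongly biased example (beta close to 1) yields a
    # near-uniform soft target.
    teacher = np.array([0.90, 0.05, 0.05])  # confident entailment prediction
    biased = np.array([0.95, 0.03, 0.02])   # biased model also predicts it easily
    print(soft_target(teacher, biased, gold_label=0))  # approx. [0.37, 0.32, 0.32]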
Our method does not require hyperparameters, which can be an advantage since most out-of-distribution datasets do not provide a development set for tuning the hyperparameters. 8721 4 Experimental Setup In this section, we describe the datasets, models, and training details used in our experiments. 4.1 Natural Language Inference We use the MNLI dataset (Williams et al., 2018) for training. The dataset consists of pairs of premise and hypothesis sentences along with their inference labels (i.e., entailment, neutral, and contradiction). MNLI has two in-distribution development and test sets, one that matches domains of the training data (MNLI-m), and one with mismatching domains (MNLI-mm). We consider two out-of-distribution datasets for NLI: HANS (Heuristic Analysis for NLI Systems) (McCoy et al., 2019b) and MNLIhard test sets (Gururangan et al., 2018). HANS The dataset is constructed based on the finding that the word overlapping between premise and hypothesis in NLI datasets is strongly correlated with the entailment label. HANS consists of examples in which such correlation does not exist, i.e., hypotheses are not entailed by their wordoverlapping premises. HANS is split into three test cases: (a) Lexical overlap (e.g., “The doctor was paid by the actor” ⇏“The doctor paid the actor”), (b) Subsequence (e.g., “The doctor near the actor danced” ⇏“The actor danced”), and (c) Constituent (e.g., “If the artist slept, the actor ran” ⇏“The artist slept”). Each category contains both entailment and non-entailment examples. MNLI-hard Hypothesis sentences in NLI datasets often contain words that are highly indicative of target labels (Gururangan et al., 2018; Poliak et al., 2018). It allows a simple model that predicts based on the hypothesis-only input to perform much better than the random baseline. Gururangan et al. (2018) presents a “hard” split of the MNLI test sets, in which examples cannot be predicted correctly by the simple hypothesis-only model. 4.2 Fact Verification For this task, we use the training dataset provided by the FEVER challenge (Thorne et al., 2018). The task concerns about assessing the validity of a claim sentence in the context of a given evidence sentence, which can be labeled as either support, refutes, and not enough information. We use the Fever-Symmetric dataset (Schuster et al., 2019) for the out-of-distribution evaluation. Fever-Symmetric Schuster et al. (2019) introduce this dataset to demonstrate that FEVER models mostly rely on the claim-only bias, i.e., the occurrence of words and phrases in the claim that are biased toward certain labels. The dataset is manually constructed such that relying on cues of the claim can lead to incorrect predictions. We evaluate the models on the two versions (version 1 and 2) of their test sets.3 4.3 Paraphrase Identification We use the Quora Question Pairs (QQP) dataset for training. QQP consists of pairs of questions which are labeled as duplicate if they are paraphrased, and non-duplicate otherwise. We evaluate the out-of-distribution performance of QQP models on the QQP subset of PAWS (Paraphrase Adversaries from Word Scrambling) (Zhang et al., 2019). PAWS The QQP subset of PAWS consists of question pairs that are highly overlapping in words. The majority of these question pairs are labeled as non-duplicate. Models trained on QQP are shown to perform worse than the random baseline on this dataset. This partly indicates that models largely rely on lexical-overlap features to perform well on QQP. 
We report models’ performance on the duplicate and non-duplicate examples separately. 4.4 Models Baseline Model We apply all of the debiasing methods across our experiments on the BERT base model (Devlin et al., 2019), which has shown impressive in-distribution performance on the three tasks. In our method, BERT base is used for both Ft and Fm. We follow the standard setup for sentence pair classification tasks, in which the two sentences are concatenated into a single input and the special token [CLF] is used for classification. Biased Model (Fb) We consider the biased features of each of the examined out-of-distribution datasets to train the biased models. For HANS and PAWS, we use hand-crafted features that indicate how words are shared between the two input sentences. Following Clark et al. (2019a), these features include the percentage of hypothesis words that also occur in the premise and the average of cosine distances between word embedding in the premise and hypothesis.4 We then train a simple 3https://github.com/TalSchuster/ FeverSymmetric 4We include the detailed description in the appendix. 8722 Method MNLI-m MNLI-mm HANS Hard subset dev test dev test lex. subseq. const. avg. MNLI-m MNLI-mm BERT-base 84.3 ± 0.3 84.6 84.7 ± 0.1 83.3 72.4 52.7 57.9 61.1 ± 1.1 76.8 75.9 Learned-mixin hans 84.0 ± 0.2 84.3 84.4 ± 0.3 83.3 77.5 54.1 63.2 64.9 ± 2.4 Product-of-expert hans 82.8 ± 0.2 83.0 83.1 ± 0.3 82.1 72.9 65.3 69.6 69.2 ± 2.6 Regularized-conf hans 84.3 ± 0.1 84.7 84.8 ± 0.2 83.4 73.3 66.5 67.2 69.1 ± 1.2 Learned-mixin hypo 80.5 ± 0.4 79.5 81.2 ± 0.4 80.4 79.2 78.2 Product-of-expert hypo 83.5 ± 0.4 82.8 83.8 ± 0.2 84.1 79.8 78.7 Regularized-conf hypo 84.6 ± 0.2 84.1 85.0 ± 0.2 84.2 78.3 77.3 Table 2: The in-distribution accuracy (in percentage point) of the NLI models along with their accuracy on outof-distribution test sets: HANS and MNLI hard subsets. Models are only evaluated against their targeted out-ofdistribution dataset. nonlinear classifier using these features. We refer to this biased model as the hans model. For MNLI-hard and Fever-Symmetric, we train a biased model on only hypothesis sentences and claim sentences for MNLI and FEVER, respectively. The biased model is a nonlinear classifier trained on top of the vector representation of the input sentence. We obtain this vector representation by max-pooling word embeddings into a single vector for FEVER, and by learning an LSTM-based sentence encoder for MNLI. State-of-the-art Debiasing Models We compare our method against existing state-of-the-art debiasing methods: product-of-expert (He et al., 2019; Mahabadi and Henderson, 2019) and its variant learned-mixin (Clark et al., 2019a). product-ofexpert ensembles the prediction of the main model (pi) with the prediction of the biased model (bi) using p′ i = softmax(log pi + log bi), where p′ i is the ensembled output distribution. This ensembling enables the main model to focus on learning from examples that are not predicted well by the biased model. Learned-mixin improves this method by parameterizing the ensembling operation to let the model learn when to incorporate or ignore the output of the biased model for the ensembled prediction. On FEVER, we also compare our method against the example-reweighting method by Schuster et al. (2019). They compute the importance weight of each example based on the correlation of the ngrams within the claim sentences with the target labels. These weights are then used to compute the loss of each training batch. 
Training Details As observed by McCoy et al. (2019a), models can show high variance in their out-of-distribution performance. Therefore, we run each experiment five times and report both average and standard deviation of the scores.5 We also use training configurations that are known to work well for each task.6 For each experiment, we train our confidence regularization method as well as product-of-expert and learned-mixin using the same biased-model. Since the challenge datasets often do not provide a development set, we could not tune the hyperparameter of learned-mixin. We, therefore, use their default weight for the entropy penalty term.7 5 Results The results for the tasks of NLI, fact verification, and paraphrase identification are reported in Table 2, Table 3, and Table 4, respectively. 5.1 In-distribution Performance The results on the original development and test sets of each task represent the in-distribution performance. Since we examine two types of biases in NLI, we have two debiased NLI models, i.e., Regularized-conf hans and Regularizedconf hypo which are trained for debiasing HANS and hypothesis-only biases, respectively. We make the following observations from the results: (1) Our method outperforms product-ofexpert and learned-mixin when evaluated on the corresponding in-distribution data of all the three tasks; (2) Product-of-expert and learned-mixin drop the original BERT baseline accuracy on most 5Due to the limited number of possible submissions, we report the MNLI test scores only from a model that holds the median out-of-distribution performance. 6We set a learning rate of 5e−5 for MNLI and 2e−5 for FEVER and QQP. 7E.g., w = 0.03 for training on MNLI. 8723 Method FEVER dev Symm. v1 Symm. v2 BERT-base 85.8 ± 0.1 57.9 ± 1.1 64.4 ± 0.6 Learned-mixin claim 83.1 ± 0.7 60.4 ± 2.4 64.9 ± 1.6 Product-of-expert claim 83.3 ± 0.3 61.7 ± 1.5 65.5 ± 0.7 Reweighting bigrams 85.5 ± 0.3 61.7 ± 1.1 66.5 ± 1.3 Regularized-conf claim 86.4 ± 0.2 60.5 ± 0.4 66.2 ± 0.6 Table 3: Accuracy on the FEVER dataset and the corresponding challenge datasets. of the in-distribution experiments; (3) Regardless of the type of bias, our method preserves the indistribution performance. However, it is not the case for the other two methods, e.g., learned-mixin only results in a mild decrease in the accuracy when it is debiased for HANS, but suffers from substantial drop when it is used to address the hypothesis-only bias; (4) Our method results in a slight in-distribution improvement in some cases, e.g., on FEVER, it gains 0.6pp over BERT baseline. The models produced by Regularized-conf hans also gain 0.1 points to both MNLI-m and MNLI-mm test sets; (5) All methods, including ours decrease the in-distribution performance on QQP, particularly on its duplicate examples subset. We will discuss this performance drop in Section 6. 5.2 Out-of-distribution Performance The rightmost columns of each table report the evaluation results on the out-of-distribution datasets for each task. Based on our out-of-distribution evaluations, we observe that: (1) Our method minimizes the trade-off between the in-distribution and outof-distribution performance compared to the other methods. For example, on HANS, learned-mixin maintains the in-distribution performance but only improves the average HANS accuracy from 61.1% to 64.9%. product-of-expert gains 7 points improvement over the BERT baseline while reducing the MNLI-m test accuracy by 1.6 points. 
On the other hand, our method achieves the competitive 7 points gain without dropping the in-distribution performance; (2) The performance trade-off is stronger on some datasets. On PAWS, the two compared methods improve the accuracy on the non-duplicate subset while reducing models’ ability to detect the duplicate examples. Our method, on the other hand, finds a balance point, in which the non-duplicate accuracy can no longer be improved without reducing the duplicate accuracy; (3) depending on the use of hyperparameters, learned-mixin can make a lower Method QQP dev PAWS test dupl ¬dupl dupl ¬dupl BERT-base 88.4 ± 0.3 92.5 ± 0.3 96.9 ± 0.3 9.8 ± 0.4 LMixin hans 77.5 ± 0.7 91.9 ± 0.2 69.7 ± 4.3 51.7 ± 4.3 Prod-exp hans 80.8 ± 0.2 93.5 ± 0.1 71.0 ± 2.3 49.9 ± 2.3 Reg-conf hans 85.0 ± 0.7 91.5 ± 0.4 91.0 ± 1.8 19.8 ± 1.3 Table 4: Results of the evaluation on the QQP task. out-of-distribution improvement compared to ours, even after substantially degrading in-distribution performance, e.g., on FEVER-symmetricv2, it only gains 0.5 points while dropping 3 points on the FEVER development set. 6 Discussions and Analysis Ablation studies In this section, we show that the resulting improvements from our method come from the combination of both self-distillation and our scaling mechanism. We perform ablation studies to examine the impact of each of the components including (1) self-distillation: we train a model using the standard self-distillation without bias-weighted scaling, and (2) examplereweighting: we train a model with the standard cross-entropy loss with an example reweighting method to adjust the importance of individual examples to the loss. The weight of each example is obtained from the (scaled) probability that is assigned by the teacher model to the ground truth label.8 The aim of the second setting is to exclude the effect of self-distillation while keeping the effect of our scaling mechanism. Table 5 presents the results of these experiments on MNLI and HANS. We observe that each component individually still gains substantial improvements on HANS over the baseline, albeit not as strong as the full method. The results from the self-distillation suggest that the improvement from our method partly comes from the regularization effect of the distillation objective (Clark et al., 2019b; Furlanello et al., 2018). In the examplereweighting experiment, we exclude the effect of all the scaled teacher’s output except for the probability assigned to the ground truth label. Compared to self-distillation, the proposed example-reweighting has a higher impact on improving the performance in both in-distribution and out-of-distribution eval8Details of the ablation experiments are included in the supplementary materials. 8724 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 0.5 MNLI-m bert-base. acc: 84.2% prediction count correct count 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 0.5 product-of-expert. acc: 83.7% 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 0.2 regularized-conf. acc: 84.4% (a) 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 0.5 (¬ent) subseq. bert-base. acc: 5.6% prediction count correct count 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 0.1 0.2 product-of-expert. acc: 36.0% 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.0 0.2 0.4 regularized-conf. acc: 43.0% (b) Figure 2: Distribution of models’ confidence on their predicted labels. The blue areas indicate the fraction of each bin that are correct. (a) Distribution on MNLI-m dev by models trained using hypothesis-only biased model. 
(b) Distribution on non-entailment subsequence subset of HANS by models trained using hans biased-model. Method MNLI HANS BERT-base 84.3 61.1 Full method 84.3 69.1 self-distillation 84.6 64.4 example-reweighting 84.7 65.3 Table 5: Results of the ablation experiments. The MNLI column refers to the MNLI-m dev set. BERTbaseline product-ofexpert learnedmixin conf-reg (our) MNLI-m 9.0 7.7 9.9 5.4 MNLI-mm 8.5 7.6 9.5 5.6 Table 6: The calibration scores of models measured by ECE (lower is better). uations. However, both components are necessary for the overall improvements. In-distribution performance drop of productof-expert The difference between our method with product-of-expert and its variants is the use of biased examples during training. Product-ofexpert in practice scales down the gradients on the biased training examples to allow the model to focus on learning from the harder examples (He et al., 2019). As a result, models often receive little to no incentive to solve these examples throughout the training, which can effectively reduce the training data size. Our further examination on a product-ofexpert model (trained on MNLI for HANS) shows that its degradation of in-distribution performance largely comes from the aforementioned examples. Ensembling back the biased-model to the main model can indeed bring the in-distribution accuracy back to the BERT baseline. However, this also leads to the original poor performance on HANS, which is counterproductive to the goal of improving the out-of-distribution generalization. Impact on Models’ Calibration We expect the training objective used in our method to discourage models from making overconfident predictions, i.e., assigning high probability to the predicted labels even when they are incorrect. We investigate the changes in models’ behavior in terms of their confidence using the measure of calibration, which quantifies how aligned the confidence of the predicted labels with their actual accuracy are (Guo et al., 2017). We compute the expected calibration error (ECE) (Naeini et al., 2015) as a scalar summary statistic of calibration. Results in Table 6 show that our method improves model’s calibration on MNLI-m and MNLI-mm dev sets, with the reduction of ECE ranging from 3.0 to 3.6. The histograms in figure 2 show the distribution of models’ confidences in their predictions. Figure 2a demonstrates that the prediction confidences of our resulting model on MNLI-m are more smoothly distributed. In figure 2b, we observe that our debiased model predicts examples that contain lexical overlap features with lower confidence, and when the confidence is higher, the prediction is more likely to be correct. Impact of biased examples ratio To investigate the slight in-distribution drop by our method in QQP (Table 4), we examine the ratio of biased examples in the QQP training data by evaluating the 8725 0 100 250 500 1000 1500 2000 2500 40 60 80 dupl. acc. 20 40 60 80 ¬dupl. acc. QQP bert-base PAWS bert-base QQP prod-exp PAWS prod-exp QQP reg-conf PAWS reg-conf Figure 3: Results on the PAWS-augmented QQP dataset. performance of the biased model on the dataset. We find that almost 80% of the training examples can be solved using the lexical overlap features alone, which indicates a severe lexical overlap bias in QQP.9 Moreover, in 53% of all examples, the biased model makes correct predictions with a very high confidence (βi > 0.8). 
For comparison, the same biased model predicts only 12% of the MNLI examples with confidence above 0.8 (more comparisons are shown in the supplementary material. As a result, there are not enough unbiased examples in QQP and the resulting soft target labels in this dataset are mostly close to a uniform distribution, which in turn may provide insufficient training signal to maximize the accuracy on the training distribution. Impact of adding bias-free examples Finally, we investigate how changing the ratio of biased examples affects the behavior of debiasing methods. To this end, we split PAWS data into training and test sets. The training set consists of 2500 examples, and we use the remaining 10K examples as a test set. We train the model on QQP that is gradually augmented with fractions of this PAWS training split and evaluate on a constant PAWS test set. Figure 3 shows the results of this experiment. When more PAWS examples are added to the training data, the accuracy of the BERT baseline gradually improves on the non-duplicate subset while its accuracy slowly drops on the duplicate subset. We observe that product-of-expert exaggerates this effect: it reduces the duplicate accuracy up 9The random baseline is 50% for QQP. to 40% to obtain the 93% non-duplicate accuracy. We note that our method is the most effective when the entire 2500 PAWS examples are included in the training, obtaining the overall accuracy of 77.05% compared to the 71.63% from the baseline BERT. 7 Conclusion Existing debiasing methods improve the performance of NLU models on out-of-distribution datasets. However, this improvement comes at the cost of strongly diminishing the training signal from a subset of the original dataset, which in turn reduces the in-distribution accuracy. In this paper, we address this issue by introducing a novel method that regularizes models’ confidence on biased examples. This method allows models to still learn from all training examples without exploiting the biases. Our experiments on four out-of-distribution datasets across three NLU tasks show that our method provides a competitive outof-distribution performance while preserves the original accuracy. Our debiasing framework is general and can be extended to other task setups where the biases leveraged by models are correctly identified. Several challenges in this direction of research may include extending the debiasing methods to overcome multiple biases at once or to automatically identify the format of those biases which simulate a setting where the prior knowledge is unavailable. Acknowledgments We thank Leonardo Ribeiro and Max Glockner for the thoughtful discussion on the earlier version of this work and the anonymous reviewers for their constructive comments. We also thank Tal Schuster for the support in using the Fever-Symmetric dataset. This work is supported by the German Research Foundation through the research training group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES, GRK 1994/1) and by the German Federal Ministry of Education and Research and the Hessian State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question an8726 swering models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1955–1960, Austin, Texas. 
Association for Computational Linguistics. Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, and Alexander M. Rush. 2019. On adversarial removal of hypothesis-only bias in natural language inference. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2019, Minneapolis, MN, USA, June 6-7, 2019, pages 256– 262. Association for Computational Linguistics. Emily Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, page to appear, virtual conference. Association for Computational Linguistics. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019a. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4067–4080, Hong Kong, China. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. 2019b. BAM! born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931–5937, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220, Florence, Italy. Association for Computational Linguistics. Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719–3728, Brussels, Belgium. Association for Computational Linguistics. Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born-again neural networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 1602–1611. PMLR. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1321–1330. PMLR. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP, DeepLo@EMNLP-IJCNLP 2019, Hong Kong, China, November 3, 2019, pages 132–142. Association for Computational Linguistics. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In 8th International Conference on Learning Representations, ICLR 2020, Virtual Conference, 26 April - 1 May, 2019. OpenReview.net. Rabeeh Karimi Mahabadi and James Henderson. 2019. Simple but effective techniques to reduce biases. CoRR, abs/1909.06321. R Thomas McCoy, Junghyun Min, and Tal Linzen. 2019a. Berts of a feather do not generalize together: Large variability in generalization across models with similar test set performance. arXiv preprint arXiv:1911.02969. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. 8727 Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2901–2907. AAAI Press. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664, Florence, Italy. Association for Computational Linguistics. Nicolas Papernot, Patrick D. McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE Symposium on Security and Privacy, SP 2016, San Jose, CA, USA, May 22-26, 2016, pages 582–597. IEEE Computer Society. Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. WINOGRANDE: an adversarial winograd schema challenge at scale. CoRR, abs/1907.10641. Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3417–3423, Hong Kong, China. Association for Computational Linguistics. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2818–2826. IEEE Computer Society. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018. The fact extraction and VERification (FEVER) shared task. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 1–9, Brussels, Belgium. Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics. Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics. 8728 A Ablation Details For the second setting of our ablation studies, we perform an example reweighting using the scaled probability of the teacher model Ft on the ground truth label. Specifically, the cross entropy loss assigned to each batch of size m is computed by the following: − b X s=1 ˆ ps,c Pb u=1 ˆ pu,c · log(ps,c) where we assume that cth label is the ground truth label. The probability assigned to the correct label by the teacher model is then denoted as ˆ ps,c. The currect predicted probability of the main model is denoted as ps,c. B Bias Weights Distribution Figure 4 shows the performance of biased models on QQP, MNLI, and FEVER. For QQP and MNLI we show the results of biased model trained using lexical overlap features. For FEVER, the biased model is trained with claim-only partial input. We show that on PAWS (figure 4a), a large portion of examples can be predicted with a very high confidence by the biased model. C HANS Biased Model We use the hand-crafted HANS-based features proposed by Clark et al. (2019a). 
These features include: (1) whether all words in the hypothesis exist in the premise; (2) whether the hypothesis is a contiguous subsequence of the premise; (3) the fraction of hypothesis words that exist in the premise; (4) the average and the max of cosine distances between word vectors in the premise and the hypothesis. Figure 4: The distribution of biased model confidence on the three training datasets: (a) QQP (duplicate vs. not-duplicate), (b) MNLI (entailment vs. non-entailment), and (c) FEVER (SUPPORTS, REFUTES, NOT ENOUGH INFO).
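A rough sketch of the hand-crafted overlap features listed in Appendix C; the tokenization, the word-vector lookup, and the exact aggregation of the cosine distances are our own placeholder choices, so this illustrates the feature definitions rather than reproducing the features of Clark et al. (2019a).

    import numpy as np

    def cosine_distance(u, v):
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    def is_contiguous_subsequence(sub, seq):
        n = len(sub)
        if n == 0:
            return True
        return any(seq[i:i + n] == sub for i in range(len(seq) - n + 1))

    def overlap_features(premise_tokens, hypothesis_tokens, word_vectors):
        p_set = set(premise_tokens)
        # (1) whether all hypothesis words exist in the premise
        all_in_premise = float(all(w in p_set for w in hypothesis_tokens))
        # (2) whether the hypothesis is a contiguous subsequence of the premise
        contiguous = float(is_contiguous_subsequence(hypothesis_tokens, premise_tokens))
        # (3) fraction of hypothesis words that also occur in the premise
        frac_overlap = sum(w in p_set for w in hypothesis_tokens) / max(len(hypothesis_tokens), 1)
        # (4) average and max cosine distance between premise and hypothesis
        #     word vectors (taken here over all word pairs with available
        #     vectors; the aggregation in the original features may differ)
        dists = [cosine_distance(word_vectors[h], word_vectors[p])
                 for h in hypothesis_tokens for p in premise_tokens
                 if h in word_vectors and p in word_vectors]
        avg_dist = float(np.mean(dists)) if dists else 0.0
        max_dist = float(np.max(dists)) if dists else 0.0
        return np.array([all_in_premise, contiguous, frac_overlap, avg_dist, max_dist])

These feature vectors would then be fed to a simple nonlinear classifier to obtain the hans biased model described in Section 4.4.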
2020
770
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8730–8742 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8730 NILE : Natural Language Inference with Faithful Natural Language Explanations Sawan Kumar Indian Institute of Science, Bangalore [email protected] Partha Talukdar Indian Institute of Science, Bangalore [email protected] Abstract The recent growth in the popularity and success of deep learning models on NLP classification tasks has accompanied the need for generating some form of natural language explanation of the predicted labels. Such generated natural language (NL) explanations are expected to be faithful, i.e., they should correlate well with the model’s internal decision making. In this work, we focus on the task of natural language inference (NLI) and address the following question: can we build NLI systems which produce labels with high accuracy, while also generating faithful explanations of its decisions? We propose Naturallanguage Inference over Label-specific Explanations (NILE), a novel NLI method which utilizes auto-generated label-specific NL explanations to produce labels along with its faithful explanation. We demonstrate NILE’s effectiveness over previously reported methods through automated and human evaluation of the produced labels and explanations. Our evaluation of NILE also supports the claim that accurate systems capable of providing testable explanations of their decisions can be designed. We discuss the faithfulness of NILE’s explanations in terms of sensitivity of the decisions to the corresponding explanations. We argue that explicit evaluation of faithfulness, in addition to label and explanation accuracy, is an important step in evaluating model’s explanations. Further, we demonstrate that task-specific probes are necessary to establish such sensitivity. 1 Introduction Deep learning methods have been employed to improve performance on several benchmark classification tasks in NLP (Wang et al., 2018, 2019). Typically, these models aim at improving label accuracy, while it is often desirable to also produce explanations for these decisions (Lipton, 2016; Chakraborty et al., 2017). In this work, we focus on producing natural language explanations for Natural Language Inference (NLI), without sacrificing much on label accuracy. There has been growing interest in producing natural language explanations for deep learning systems (Huk Park et al., 2018; Kim et al., 2018; Ling et al., 2017), including NLI (Camburu et al., 2018). In general, the explanations from these methods can typically be categorized as post-hoc explanations (Lipton, 2016). Camburu et al. (2018) propose an NLI system which first produces an explanation and then processes the explanation to produce the final label. We argue that these explanations also resemble post-hoc explanations (Section 4.2). Further, existing methods don’t provide a natural way to test the faithfulness of the generated explanations, i.e., how well do the provided explanations correlate with the model’s decision making. We therefore propose Natural-language Inference over Label-specific Explanations (NILE)1, which we train and evaluate on English language examples. Through NILE, we aim to answer the following question: Can we build NLI systems which produce faithful natural language explanations of predicted labels, while maintaining high accuracy? 
Briefly, in NILE, we first generate natural language explanations for each possible decision, and subsequently process these explanations to produce the final decision. We argue that such a system provides a natural way of explaining its decisions. The key advantage is the testability of these explanations, in themselves, as well as in terms of the sensitivity of the system’s prediction 1NILE source code available at https://github.com/SawanKumar28/nile 8731 V8 S Step I: Generate Label-specific candidate explanations Step II: Process explanations to infer the task label 0.8 0.0 0.2 Gentail Gcontradict Gneutral A white dog with long hair jumps to catch a red and green toy. An animal is jumping to catch an object. Hypothesis Premise A dog is an animal. A dog cannot be jumping to catch a toy and object simultaneously. Contradiction explanation Entailment explanation The object may not be a toy. Neutral explanation Instance Candidate Explanation Generators Generated explanations Explanation Processor Label Scores lentail lcontradict lneutral A dog is an animal. Predicted Explanation Figure 1: Overview of NILE: A Premise and Hypothesis pair is input to label-specific Candidate Explanation Generators G which generate natural language explanations supporting the corresponding label. The generated explanations are then fed to the Explanation Processor S, which generates label scores using the evidence present in these explanations (see Figure 3 for the architectures used in this work). In addition to the explanations, NILE also utilizes the premise and hypothesis pair (See Section 4.4.2 for a discussion on the challenges in building such a system). Please see Section 4 for details. to these explanations. We choose NLI due to its importance as an NLP task, and the availability of e-SNLI, a large dataset annotated both with entailment relation labels and natural language human explanations of those labels (Camburu et al., 2018; Bowman et al., 2015). In summary, we make the following contributions in this work. 1. We propose NILE, an NLI system which generates and processes label-specific explanations to infer the task label, naturally providing explanations for its decisions. 2. We demonstrate the effectiveness of NILE compared to existing systems, in terms of label and explanation accuracy. 3. Through NILE, we provide a framework for generating falsifiable explanations. We propose ways to evaluate and improve the faithfulness of the system’s predictions to the generated explanations. We claim that task-specific probes of sensitivity are crucial for such evaluation. We have released the source code of NILE to aid reproducibility of the results. 2 Related Work Explainability of a model’s predictions has been studied from different perspectives, including feature importance based explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017; Chen et al., 2018), or post-hoc natural language explanations (Huk Park et al., 2018; Kim et al., 2018; Ling et al., 2017). Hendricks et al. (2018) produce counterfactual natural language explanations for image classification given an image and a counter-class label. Camburu et al. (2018) propose a model for NLI to first generate a free-form natural language explanation and then infer the label from the explanation. However, as noted by Oana-Maria et al. (2019a), the system tends to generate inconsistent explanations. 
We reason that requiring a model to generate an explanation of the correct output requires it to first infer the output, and the system thus resembles post-hoc explanation generation methods. Given the diversity of desiderata and techniques for interpretability, the need for understanding interpretation methods and evaluating them has grown. Difficulty in building interpretation models and the lack of robustness of the same are some of the major issues in existing deep neural networks systems (Feng et al., 2018; Ghorbani et al., 2019; Oana-Maria et al., 2019b). Given these observations, measuring faithfulness, i.e., how well do the provided explanations correlate with the model’s decision making, is crucial. DeYoung et al. (2019) propose metrics to evaluate such faithfulness of rationales (supporting evidence) for NLP tasks. Through NILE, we propose a framework for generating faithful natural language explanations by requiring the model to condition on generated natural language explanations. The idea of using natural language strings as a latent space has been explored to capture compositional task structure (Andreas et al., 2018). Wu et al. (2019) explore improving 8732 visual question answering by learning to generate question-relevant captions. Rajani et al. (2019) aim to improve commonsense question answering by first generating commonsense explanations for multiple-choice questions, where the question and the choices are provided as the prompt. Similar to (Camburu et al., 2018), they learn by trying to generate human-provided explanations and subsequently conditioning on the generated explanation. In NILE, we instead aim to produce an explanation for each possible label and subsequently condition on the generated label-specific explanations to produce the final decision. 3 Background In this section, we discuss the datasets (Section 3.1) and pre-trained models (Section 3.2) used to build NILE. 3.1 Data SNLI: The Stanford NLI dataset (Bowman et al., 2015) contains samples of premise and hypothesis pairs with human annotations, using Amazon Mechanical Turk. The premises were obtained from pre-existing crowdsourced corpus of image captions. The hypotheses were obtained by presenting workers with a premise and asking for a hypothesis for each label (entailment, neutral and contradiction), resulting in a balanced set of ∼570K pairs. e-SNLI: Camburu et al. (2018) extend the SNLI dataset with natural language explanations of the ground truth labels. The explanations were crowdsourced using Amazon Mechanical Turk. Annotators were first asked to highlight words in the premise and hypothesis pairs which could explain the labels. Next, they were asked to write a natural language explanation using the highlighted words. Similar to Camburu et al. (2018), for all our experiments, we filter out non-informative examples where the explanations contain the entire text of the premise or hypothesis. In particular, we drop any training example where the uncased premise or hypothesis text appears entirely in the uncased explanation. This leads to a training data size of ∼532K examples. 3.2 Pretrained Language Models Transformer architectures (Vaswani et al., 2017) pre-trained on large corpora with self-supervision have shown significant improvements on various NLP benchmarks (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019). 
Improvements have been demonstrated for text classification as well as text generation tasks (Lewis et al., 2019; Raffel et al., 2019). In this work, we leverage the implementation of transformer architectures and pre-trained models provided by Wolf et al. (2019). GPT-2: We use the GPT-2 architecture (Radford et al., 2019), which is trained using a causal language modeling loss (CLM), and includes a leftto-right decoder suitable for text generation. In particular, we use the gpt2-medium model. This model has 24 layers, 16 attention heads and a hidden size of 1024 (∼345M parameters). For text generation, the model can be finetuned using CLM on desired text sequences. RoBERTa: For classification modules, we leverage RoBERTa (Liu et al., 2019), which is trained using a masked language modeling loss (MLM). In particular, we use the roberta-base model. This model has 12 layers, 12 attention heads and a hidden size of 768 (∼125M parameters). For downstream classifications tasks, a classification layer is added over the hidden-state of the first token in the last layer. 4 Natural-language Inference over Label-specific Explanations (NILE) The overall architecture employed in NILE is shown in Figure 1. We introduce the notation used in this paper in Section 4.1. We then discuss the motivation for the major design choices in Section 4.2. NILE performs the following steps to produce labels and explanations: 1. Candidate Explanation Generators: Labelspecific Candidate Explanation Generators first generate explanations supporting the respective labels (Section 4.3). 2. Explanation Processor: The Explanation Processor takes the explanations and also the premise and hypothesis pairs as input to produce the task label (Section 4.4). We also build NILE-PH, where the Explanation Processor has access only to the generated explanations (Section 4.4.1). We note that NILE-PH more naturally fits the desiderata described in Section 1, while we design and evaluate NILE for the more general case 8733 where the Explanation Processor also accesses the premise and hypothesis pair. In Section 4.5, we describe comparable baseline architectures. 4.1 Notation We denote each data point by (p, h), where p is the premise and h the hypothesis sentence. G denotes a model trained to generate natural language explanations. Specifically, Gx denotes a model which generates natural language explanations tx of type x, where x ∈{entail, contradict, neutral}. We denote the human-provided gold explanation for the correct predictions as tg. S denotes a module which predicts label scores. The true label for an example is denoted by y, while a model prediction is denoted by y′, and label scores by lx. V2 Hypothesis Premise Explanation Gpre Spost B Hypothesis Premise Explanation Gpost Spre A Figure 2: Existing alternative architectures.: A. Posthoc generation: Given an input instance, first the label is predicted and then an explanation generated conditioned on the label and the input text. B. ExplainThenPredict (Camburu et al., 2018): Given the input instance, first the desired explanation is generated, and then the label is predicted using only the generated explanation. We argue that neither architecture provides a natural way to test the sensitivity of the model’s predictions to the generated explanation. Please see Section 4.2 for details. 4.2 Why do it this way? In this section, we describe the motivation for adopting a two-step pipelined approach. 
Label-specific explanations: Consider two alternative existing architectures in Figure 2. In Figure 2A, a model Spre is trained directly on the example sentences (p & h) to produce a label (y′), which together with the example sentences are used to produce an explanation t′ g using Gpost. It can be argued that while the target explanations may regularize the system, there is no reason for t′ g to be aligned with the reason why the model chose a particular label. Figure 2B corresponds to a model which has also been trained on e-SNLI (Camburu et al., 2018). Gpre is first trained to produce natural language explanations t′ g using human-provided explanations (tg) as targets, using only the example sentences as inputs. A model Spost then chooses the label corresponding to the generated explanation t′ g. While at first, it appears that this system may provide faithful explanations of its decisions, i.e., the generated explanations are the reason for the label prediction, we argue that it may not be so. In Figure 2B, Gpre is required to generate the explanation of the correct label for an example. It must first infer that label and then produce the corresponding explanation. Further analysis of the free-form human-provided explanations has revealed clear differences in the form of explanations, through alignment to label-specific templates (Camburu et al., 2018; Oana-Maria et al., 2019a). The Explanation Processor Spost then only needs to infer the form of t′ g. Gpre then resembles post-hoc generation methods, with the label (as the form of t′ g) and explanation t′ g being produced jointly. The claim is supported by inconsistencies found in the generated explanations (Oana-Maria et al., 2019a). Neither architecture allows a natural way to test the sensitivity of the model’s predictions to its explanations. In NILE, we first allow explanations for each label, and then require the Explanation Processor to select the correct explanation. This allows us to naturally test whether the model’s predictions are indeed due to the selected explanation. This can be done, for example, by perturbing the input to the Explanation Processor. A pipelined approach: We use a pipelined approach in NILE (Figure 1). The Candidate Explanation Generators are first trained using humanprovided explanations. The Explanation Processor takes as input the generated label-specific explanations. This prevents the system from producing degenerate explanations to aid task performance. It also allows perturbing the generated explanations to probe the system in a more natural way compared to an unintelligible intermediate state of a learnt model. We believe that systems can be designed to work in this setting without compromising task performance. 4.3 Candidate Explanation Generators We train label-specific explanation generators, Gx, x ∈{entail, contradict, neutral}, using humanprovided explanations of examples with the corresponding label. For example, to train Gentail, we 8734 FAgg FAgg FAgg tentail tcontradict tneutral FApn tentail tcontradict tneutral B C V6 Aggregate Append FInd FInd FInd tentail tcontradict tneutral A Independent 0.8 0.0 0.2 lentail lcontradict lneutral Figure 3: Explanation Processor architectures. A. Independent (Ind) collects evidence for a label symmetrically from the corresponding explanation. B. Aggregate (Agg) allows handling missing explanations by looking for contradictory evidence. C. Append (Apn) allows arbitrary evidence collection for each label. Please see Section 4.4.1 for details. 
Premise and hypothesis sentences are processed by additionally providing them to each block Fz where z ∈{Ind, Agg, Apn}. Please see Section 4.4.2 for details. collect all triplets (p, h, tg) annotated as entailment. We create text sequences of the form: “Premise: p Hypothesis: h [EXP] tg [EOS]” to fine-tune a pretrained language model, where [EXP] and [EOS] are special tokens added to the vocabulary. During fine-tuning, the language modeling loss function is used only over the explanation tokens. Next, we create prompts of the form “Premise: p Hypothesis: h [EXP]” and require each trained language model to independently complete the sequence. In this way we obtain label specific explanations tx, tx = Gx(p, h), for x ∈{entail, contradict, neutral}. 4.4 Explanation Processor The Explanation Processor in NILE takes as input the generated label-specific explanations, as well as the premise and hypothesis pair to generate label scores lx, x ∈{entail, contradict, neutral}. During training, these scores are passed through a softmax layer and a cross-entropy loss is used to generate the training signal. During testing, the label with the maximum score is selected. We leverage a pre-trained roberta-base model for all our experiments, and fine-tune it as specified in the following subsections. In each case, any intermediate scores are generated through transformations of the first token ([CLS]) embedding from the last layer. We define: Fmodel(inp) = tanh(W.CLSembed(inp)) where inp is a pair of sequences in NILE, a single sequence in NILE-PH, and W are the learnable parameters for the model. For simplicity, and to elucidate the desired behavior, we first describe how explanations are processed in NILE-PH (Section 4.4.1). We then discuss the construction of NILE, a potential issue, and a fix for the same (Section 4.4.2). 4.4.1 Processing Explanations In this section, we describe how explanations are processed in NILE-PH, which is generalized in NILE (Section 4.4.2). We experiment with three architectures, described below (also see Figure 3). A. Independent: In the Independent model, explanations are fed to FInd, which generates a score for each explanations independently: lx = WIndFInd(tx) (1) where x ∈{entail, contradict, neutral}. We expect this score to represent the truthfulness of the input explanation. B. Aggregate: The Independent model would need all three explanations to be available to reliably produce label scores. We believe a system should be able to handle one or more missing or ambiguous explanations. For example, the entailment explanation: “tentail: A dog is a cat” would provide evidence for contradiction. To capture this notion, we require the Explanation Processor to produce two intermediate scores V1 and V2, where we expect V1 to collect evidence supporting an input claim and V2 to collect evidence against an input claim: Vi(x) = WAgg,iFAgg(tx), where i ∈{1, 2} (2) The intermediate score are then aggregated into the final label scores: lentail = Cmb(V1(tentail), V2(tcontradict)) lcontradict = Cmb(V1(tcontradict), V2(tentail)) lneutral = V1(tneutral) (3) 8735 where Cmb is the LogSumExp function. The reason for this choice of aggregation is that while evidence against entailment might point to contradiction and vice versa, evidence against neutral doesn’t necessarily provide any information about entailment or contradiction relations. C. 
Append: Finally, to allow the model to reason arbitrarily between the three generated explanations, we created a single sequence, concatecn: “entailment: tentail contradiction: tcontradict neutral: tneutral”, and generate the scores as follows: lx = WApn,xFApn(concatecn) (4) where x ∈{entail, contradict, neutral}. 4.4.2 Processing Premise and Hypothesis In NILE, to process premise p and hypothesis h, we first concatenate p and h into concatph: “Premise: p Hypothesis: h”. The label scores are then obtained as in Section 4.4.1, by modifying Equation 1, 2 and 4 as follows: replace Fz(x) by Fz(concatph, x), where z ∈{Ind, Agg, Apn}. We note that appending the example sentences to the generated explanations (as in Append) would result in having no control over whether the explanations are used for the final prediction. The case for Independent and Aggregate is not immediately clear. We now discuss a potential issue with these architectures when processing premise and hypothesis text, and suggest a fix for the same. The issue: We expect NILE to answer the question: Is (concatph, tx), where x ∈{entail, contradict, neutral}, a valid instance-explanation pair? The Independent and Aggregate architectures for NILE have been designed such that the model can’t ignore the label-specific explanations. For example, the Independent model will produce identical scores for each output label, if it chooses to completely ignore the input explanations. However, the model is still free to learn a different kind of bias which is an outcome of the fact that natural language explanations convey ideas through both content and form. If the form for explanations of different labels is discriminative, an unconstrained learning algorithm could learn to infer first the type of explanation and use it to infer the task. For example, given the input (concatph, tx), where x ∈ {entail, contradict, neutral}, if a model could learn whether tx is an entailment explanation, it then only has to output whether concatph corresponds to an entailment relation. Essentially, high label accuracy can be achieved by inferring first what task to do using only the form of tx. The fix: To prevent NILE from exploiting the form of an explanation as described above, we create additional training examples, where we require NILE to score valid instance-explanation pairs higher. In particular, we sample negative explanations for an instance, of the same form as the correct label. For example, an instance labeled as entailment would have an additional training signal: Score (concatph, tentail) higher than (concatph, t′ entail) and (concatph, t′′ entail), where t′ entail and t′′ entail are randomly sampled entailment form explanations. We note that the fix leaves room for other kinds of biases to be learnt. However, the key advantage with NILE is that it is easy to design probes to test for such biases and subsequently fix them (see Section 5.3). 4.5 Baselines We now describe baselines which use the same underlying blocks as NILE, for generating explanations and classification. NILE:post-hoc: To understand the drop in performance which could be associated with constraining models as we have done, we train a model with full access to input examples (See Figure 2A). lx = WxFpre(p, h) where x ∈{entail, contradict, neutral}. Further, we provide a strong baseline for posthoc generators using this model, where using the model’s predictions, we simply pick the corresponding label-specific generated explanation. 
t′ g = Gpost(lx) = tx We note that the model’s predictions have no sensitivity to the generated explanations in NILE: posthoc. ExplainThenPredictAttention (ETPA): Following (Camburu et al., 2018), (see Figure 2B), we train a pipelined system, where we first learn to generate the gold explanation t′ g, followed by a classification of t′ g to predict the label: t′ g = Gpre(concatecn) lx = WxFpost(t′ g) where x ∈{entail, contradict, neutral}. 8736 Model SNLI Dev SNLI Test Explanation evaluation on first 100 SNLI Test Samples Label Accuracy Label Accuracy A: Correct Labels Averaged over annotators Annotators in-agreement B: Correct Expl. B/A C: Correct Expl. C/A SemBERT# (Zhang et al., 2019) 92.2 91.9 ETPA (Camburu et al., 2018) Reported 81.71 64.27 Reproduced 86.98 86.22 77 71.2 92.47 59 76.62 NILE:post-hoc 91.86 91.49 90 81.4 90.44 68 75.56 NILE-PH Independent 84.69 84.13 78 72.0 92.31 61 78.21 Aggregate 85.71 85.29 80 73.4 91.75 62 77.50 Append 88.49 88.11 85 78.0 91.76 66 77.65 NILE-NS Independent 91.56 90.91 88 80.8 91.82 69 78.41 Aggregate 91.55 91.08 89 80.6 90.56 68 76.40 Append 91.74 91.12 89 80.4 90.34 67 75.28 NILE Independent 91.29 90.73 91 82.4 90.55 69 75.82 Aggregate 91.19 90.91 90 81.4 90.44 68 75.56 Table 1: Comparison of label and explanation accuracy on the in-domain SNLI evaluation sets. Models are selected using the Dev set label accuracy over 5 runs with different seeds of random initialization. Mean (and standard deviation) over the 5 runs are reported in the Appendix. # indicates the best reported result at https://nlp.stanford.edu/projects/snli/ at the time of writing. Note that SemBERT does not provide natural language explanations and is reported here only for reference. Bold numbers indicate highest among methods that produce explanations. Explanations are evaluated on the first 100 SNLI Test examples. We present reported numbers of ETPA (Camburu et al., 2018) as well as the results with our reproduction of ETPA. ETPA (reproduced) is directly comparable with NILE (Section 4.5). NILE-PH competes with or outperforms ETPA baselines on label accuracy, while NILE-NS and NILE provide significant gains in label accuracy. NILE and NILE-NS are competitive with the best reported results in terms of label accuracies. We report the number of correct explanations, averaged across annotators (B) as well as when all annotators agree on correctness (C). All NILE variants are able to provide more correct explanations than the ETPA baseline. We also report the percentage of correct explanations in the subset of correct label predictions (B/A, C/A). On this metric, NILE variants are comparable with the ETPA baseline. However, the real value of NILE lies in being able to probe the faithfulness of its decisions (Section 5.3). Further, NILE explanations generalize significantly better on out-of-domain examples (See Table 2). Please see Section 5.1 for details. 5 Experiments In this section, we aim to answer the following questions: Q1 How does NILE compare with the baselines and other existing approaches in terms of final task performance, and explanation accuracy, on in-domain evaluation sets (train and test on SNLI)? (Section 5.1) Q2 How well does NILE transfer to out-of-domain examples (train on SNLI, and test on MNLI)? (Section 5.2) Q3 How faithful are the model’s predictions to the generated explanations? (Section 5.3) We provide training details in Appendix A, and examples of generated label-specific explanations in Appendix B. 
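Before turning to the results, the Aggregate scoring scheme of Section 4.4.1 (Equations 2 and 3) can be summarized in a few lines of PyTorch. The sketch below assumes a module f_agg that implements F_Agg (e.g., a tanh of a linear transform of RoBERTa's first-token embedding); the module and dimension names are placeholders rather than the released implementation.

```python
import torch
import torch.nn as nn

class AggregateScorer(nn.Module):
    """Sketch of the Aggregate explanation processor (Sec. 4.4.1, Eqs. 2-3)."""

    def __init__(self, f_agg: nn.Module, hidden_size: int = 768):
        super().__init__()
        self.f_agg = f_agg                          # F_Agg: explanation -> hidden vector
        self.w_support = nn.Linear(hidden_size, 1)  # V1: evidence supporting a claim
        self.w_refute = nn.Linear(hidden_size, 1)   # V2: evidence against a claim

    def forward(self, t_entail, t_contradict, t_neutral):
        h = {"entail": self.f_agg(t_entail),
             "contradict": self.f_agg(t_contradict),
             "neutral": self.f_agg(t_neutral)}
        v1 = {k: self.w_support(v) for k, v in h.items()}
        v2 = {k: self.w_refute(v) for k, v in h.items()}
        # Cmb is LogSumExp: evidence against entailment points to contradiction
        # and vice versa; neutral only uses its own supporting evidence (Eq. 3).
        l_entail = torch.logsumexp(torch.cat([v1["entail"], v2["contradict"]], dim=-1), dim=-1)
        l_contradict = torch.logsumexp(torch.cat([v1["contradict"], v2["entail"]], dim=-1), dim=-1)
        l_neutral = v1["neutral"].squeeze(-1)
        return torch.stack([l_entail, l_contradict, l_neutral], dim=-1)
```

As described in Section 4.4, these three scores would then be passed through a softmax and trained with a cross-entropy loss.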
5.1 In-domain Results We report the label accuracies of the baselines and proposed architectures on the SNLI Dev and Test set in Table 1. We also report explanation accuracies, obtained through human evaluation of the generated explanations in the first 100 test examples. Binary scores on correctness were sought from five annotators (non-experts in NLP) on the generated explanations. For both label and explanation accuracies, we report using a model selected using the SNLI Dev set label accuracy across 5 runs with 5 different seeds of random initialization. Please see the Appendix for more details on the the 5 runs. First, through NILE:post-hoc, we provide a strong baseline for obtaining high label and explanation accuracy. Our aim in this work is to learn explanations that serve as the reason for the model’s 8737 Model MNLI Dev MNLI Dev-mm Explanation evaluation on first 100 MNLI Dev Samples Label Accuracy Label Accuracy A: Correct Labels Averaged over annotators Annotators in-agreement B: Correct Expl. B/A C: Correct Expl. C/A ETPA (Camburu et al., 2018) Reproduced 56.11 56.42 48 22.67 47.22 14 29.17 NILE:post-hoc 79.29 79.29 69 47.67 69.08 35 50.72 NILE-PH Independent 54.95 55.35 46 34.33 74.64 28 60.87 Aggregate 56.45 56.66 49 34.67 70.75 26 53.06 Append 61.33 61.98 58 43.33 74.71 34 58.62 NILE-NS Independent 74.84 75.20 68 49.67 73.04 37 54.41 Aggregate 75.73 76.22 69 49.33 71.50 37 53.62 Append 77.07 77.22 72 52.33 72.69 38 52.78 NILE Independent 72.91 73.04 64 45.67 71.35 33 51.56 Aggregate 72.94 73.01 63 45.67 72.49 34 53.97 Table 2: Testing the generalization capability of NILE on the out-of-domain MNLI Dev sets. Training and model selection is done on the SNLI dataset (Section 5.1), and evaluation on the out-of-domain MNLI Dev (matched) and MNLI Dev-mm (mismatched) sets. Label accuracies are reported for both MNLI Dev (matched) and MNLI Dev-mm (mismatched) sets, while explanations are evaluated on the first 100 MNLI Dev set examples. We report the number of correct explanations, averaged across annotators (B) as well as when all annotators agree on correctness (C). All NILE variants provide more correct explanations than the ETPA baseline (B, C). Further, the percentage of correct explanations in the subset of correct label predictions (B/A, C/A) is significantly better for all NILE variants. The results demonstrate that NILE provides a more generalizable framework for producing natural language explanations. Please see Section 5.2 for details. predictions. Nevertheless, we are able to compete or outperform this baseline, in terms of explanation accuracy, while incurring a only a small drop in label accuracy. All variants of NILE, including NILE-PH and NILE-NS (which is not trained using negative samples of explanations as described in Section 4.4.2), produce more correct explanations than the ETPA baseline. NILE-PH:Append, NILE and NILE-NS provide gains over label accuracies compared to the ETPA baseline. Additionally, NILE and its variants provide natural ways to probe the sensitivity of the system’s predictions to the explanations, as demonstrated in the subsequent sections. Finally, the explanations generated by all NILE variants generalize significantly better on out-of-distribution examples when compared to the ETPA baseline (See Section 5.2). 5.2 Transfer to Out-of-domain NLI To test the generalization capability of NILE, we do training and model selection on the SNLI dataset (Section 5.1), and evaluate on the out-of-domain MNLI (Williams et al., 2018) development sets. 
Transfer without fine-tuning to out-of-domain NLI has been a challenging task with transfer learning for generating explanations in MNLI being particularly challenging (Camburu et al., 2018). We report label accuracies on the Dev (matched) and Dev-mm (mismatched) sets, and explanation evaluation on the first 100 Dev samples in Table 2. Explanation evaluation was done by three annotators (who also annotated the SNLI explanations). While the label accuracies follow a similar pattern as the in-domain SNLI Test set, all variants of NILE provide gains in the quality of generated explanations. All variants of NILE produce more correct explanations (B, C) as well as a higher percentage of correct generated explanations among correct predictions (B/A, C/A). This demonstrates that NILE, through intermediate label-specific natural language explanations, provides a more general way for building systems which can produce natural language explanations for their decisions. 5.3 Evaluating Faithfulness using Sensitivity Analysis NILE and its variants allow a natural way to probe the sensitivity of their predictions to the generated explanations, which is by perturbing the explanations themselves. In this way, NILE resembles 8738 Model I+ Exp I only Exp only NILE-NS Independent 91.6 33.8 69.4 Aggregate 91.6 33.8 74.5 Append 91.7 91.2 72.9 NILE Independent 91.3 33.8 46.1 Aggregate 91.2 33.8 40.7 Table 3: Estimating the sensitivity of the system’s predictions to input explanations through erasure. During testing, we erase either the instance or the explanations from the input to NILE-NS and NILE. The results seem to indicate that NILE-NS’s predictions are more faithful, in the sense of having a higher sufficiency. However, as demonstrated subsequently, the sensitivity of NILE-NS’s prediction to the input explanations is not as desired. Please see Section 5.3 for details. Model Dev Set Shuffled Dev Set NILE-NS Independent 91.6 88.1 Aggregate 91.6 89.6 Append 91.7 88.5 NILE Independent 91.3 35.3 Aggregate 91.2 31.6 Table 4: Probing the sensitivity of the system’s predictions by shuffling instance-explanation pairs. Each instance is attached to a randomly selected explanation of the same form as the original pair. The results demonstrate a much weaker link between NILE-NS’s predictions and associated explanations. On the other hand, NILE behaves more expectedly. Note that the baselines don’t allow a similar mechanism to test their faithfulness, and such testability is a key advantage of NILE. Please see Section 5.3 for details. explanation systems which provide input text fragments as reasons for their decisions. DeYoung et al. (2019) propose metrics to evaluate the faithfulness of such explanations. Following their work, we first attempt to measure the explanations generated by the methods proposed in this paper for comprehensiveness (what happens when we remove the explanation from the input) and sufficiency (what happens if we keep only the explanations). In Table 3, we show these measures for NILE and NILE-NS. The results seem to indicate that explanations for both NILE and NILE-NS are comprehensive, while having higher sufficiency in the case of NILE-NS. We first note that the comprehensiveness of these systems is ensured by design, and the input is indistinguishable without an explanation. Second, we argue that sufficiency may indicate correlations which don’t necessarily exist in the system otherwise. 
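The erasure probe summarized in Table 3 only changes what is fed to the trained Explanation Processor. The helper below sketches one way to build the three input variants ("I + Exp", "I only", "Exp only"); representing an erased part as an empty string is an assumption made for illustration, not necessarily the authors' exact choice.

```python
def build_probe_inputs(premise, hypothesis, explanations, variant="I+Exp"):
    """Construct inputs for the erasure probe (Sec. 5.3, Table 3).
    `explanations` maps "entail"/"contradict"/"neutral" to the generated t_x."""
    concat_ph = f"Premise: {premise} Hypothesis: {hypothesis}"
    if variant == "I+Exp":     # full input: instance plus label-specific explanation
        return {x: (concat_ph, t_x) for x, t_x in explanations.items()}
    if variant == "I only":    # erase the explanations, keep the instance
        return {x: (concat_ph, "") for x in explanations}
    if variant == "Exp only":  # erase the instance, keep the explanations
        return {x: ("", t_x) for x, t_x in explanations.items()}
    raise ValueError(f"unknown probe variant: {variant}")
```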
We study the sensitivity of the explanations through a probe motivated by an understanding of the task and the training examples (see Section 4.4.2). We perturb the instanceexplanation inputs such that for each test instance, the explanation is replaced by a randomly selected explanation of the same label. The results (Table 4) indicate that NILE-NS is more robust to random perturbations of input explanations, and presumably uses the form of the explanation to infer the task (see Section 4.4.2 for a discussion). It is true that NILE behaves expectedly as we have specifically designed NILE to prevent the associated bias, and that this could potentially lead the system to learn other such biases. However, a key advantage of the proposed architecture is the ability to identify and fix for such biases. We leave it as an interesting and challenging future work to find and fix more such biases. 6 Conclusion In this paper we propose NILE, a system for Natural Language Inference (NLI) capable of generating labels along with natural language explanations for the predicted labels. Through extensive experiments, we demonstrate the effectiveness of this approach, in terms of both label and explanation accuracy. NILE supports the hypothesis that accurate systems can produce testable natural language explanations of their decisions. In the paper, we also argue the importance of explicit evaluation of faithfulness of the generated explanations, i.e., how correlated are the explanations to the model’s decision making. We evaluate faithfulness of NILE’s explanations using sensitivity analysis. Finally, we demonstrate that task-specific probes are necessary to measure such sensitivity. Acknowledgments We thank the anonymous reviewers for their constructive comments. This work is supported by the Ministry of Human Resource Development (Government of India). We would also like to thank HuggingFace for providing a state-of-the-art Transformers library for natural language understanding. Finally, we want to thank the annotators who annotated generated explanations for correctness. 8739 References Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with latent language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2166–2179, New Orleans, Louisiana. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In Advances in Neural Information Processing Systems, pages 9539–9549. Supriyo Chakraborty, Richard Tomsett, Ramya Raghavendra, Daniel Harborne, Moustafa Alzantot, Federico Cerutti, Mani Srivastava, Alun Preece, Simon Julier, Raghuveer M Rao, et al. 2017. Interpretability of Deep Learning Models: A Survey of Results. In 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pages 1–6. IEEE. 
Jianbo Chen, Le Song, Martin Wainwright, and Michael Jordan. 2018. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation. In International Conference on Machine Learning, pages 882–891. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. ERASER: A Benchmark to Evaluate Rationalized NLP Models. arXiv preprint arXiv:1911.03429. Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of Neural Models Make Interpretations Difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719–3728, Brussels, Belgium. Association for Computational Linguistics. Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of Neural Networks is Fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681–3688. Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. 2018. Generating counterfactual explanations with natural language. In ICML Workshop on Human Interpretability in Machine Learning, pages 95–98. Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8779– 8788. Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. 2018. Textual Explanations for Self-Driving Vehicles. In Proceedings of the European conference on computer vision (ECCV), pages 563–578. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In International Conference on Learning Representations. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension. arXiv preprint arXiv:1910.13461. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, Vancouver, Canada. Association for Computational Linguistics. Zachary C Lipton. 2016. The Mythos of Model Interpretability. arXiv preprint arXiv:1606.03490. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems, pages 4765–4774. Camburu Oana-Maria, Shillingford Brendan, Minervini Pasquale, Lukasiewicz Thomas, and Blunsom Phil. 
2019a. Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations. arXiv preprint arXiv:1910.03065. 8740 Camburu Oana-Maria, Giunchiglia Eleonora, Foerster Jakob, Lukasiewicz Thomas, and Blunsom Phil. 2019b. Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods. arXiv preprint arXiv:1910.02065. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Blog, 1(8). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv preprint arXiv:1910.10683. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. ACM. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in neural information processing systems, pages 5998–6008. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. In Advances in Neural Information Processing Systems, pages 3261–3275. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. ArXiv, abs/1910.03771. Jialin Wu, Zeyuan Hu, and Raymond Mooney. 2019. Generating question relevant captions to aid visual question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3585–3594, Florence, Italy. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in neural information processing systems, pages 5754–5764. Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2019. Semantics-aware BERT for Language Understanding. 
arXiv preprint arXiv:1909.02209. A Experimental Setup Model SNLI Dev Label Accuracy Mean Stddev ETPA Reproduced 86.96 0.02 NILE:post-hoc 91.77 0.06 NILE-PH Independent 84.53 0.18 Aggregate 85.47 0.26 Append 88.30 0.12 NILE-NS Independent 90.17 2.76 Aggregate 91.44 0.06 Append 91.57 0.14 NILE Independent 91.09 0.19 Aggregate 90.94 0.22 Table 5: Mean and Standard Deviation for label accuracues on SNLI Dev set are reported. NILENS:Independent system has a high standard deviation and relatively lower mean accuracy. This is due to a bad random initialization with seed 219. When seed 219 results are excluded, the mean and standard deviation are 91.41 and 0.20 respectively. For fine-tuning gpt2-medium language models for explanation generation as well as roberta-base models, we leverage code and pre-trained models from the “transformers” library available at https://github.com/huggingface. In each case we train on the train split for three epochs. Apart from batch size, sequence length and seed for random initialization, we keep the other hyperparameters fixed throughout the experiments. We don’t do any fine-tuning on seeds of random initialization. 8741 For roberta-base models, we report results through model selection on models trained using 5 seeds of random initialization - 42, 219, 291, 67 and 741. Model selection is done using label accuracies on SNLI Dev set. In Table 5, we report the mean and standard deviation for the label accuracies across 5 runs. We ran our experiments on GeForce GTX 1080 Ti GPUs. We adjust the batch size to be the largest multiple of 16 to fit on the GPU memory (∼12GB). We now list all the hyper-parameters used. GPT2: The hyper-parameters used for finetuning gpt2-medium include a maximum sequence length of 128, batch size of 2, learning rate of 5e-5, Adam epsilon of 1e-8, max gradient norm of 1.0 and a seed of 42. For generating text, we used greedy decoding. Model Batch size Max seq length ETPA Reproduced 32 100 NILE:post-hoc 32 100 NILE-PH Independent 32 50 Aggregate 32 50 Append 32 100 NILE-NS Independent 16 100 Aggregate 16 100 Append 16 200 NILE Independent 16 100 Aggregate 16 100 Table 6: Hyper-parameters (batch size and maximum sequence length) used for fine-tuning roberta-base RoBERTa: The fixed parameters for fine-tuning robera-base included a learning rate of 2e-5, Adam epsilon of 1e-8 and max gradient norm 1.0. Other parameters are captured in Table 6. B Generated Explanations In this section, we list the label-specific explanations generated for five Dev set examples, along with the premise, hypothesis and gold labels, for both SNLI (Section B.1) and MNLI (Section B.2) datasets. B.1 SNLI • Example 1 PairID: 4705552913.jpg#2r1n gold label: neutral Premise: Two women are embracing while holding to go packages. Hypothesis: The sisters are hugging goodbye while holding to go packages after just eating lunch. entailment: Sisters are women. contradiction: The women cannot be embracing if they are hugging goodbye. neutral: Two women embracing does not imply they are sisters hugging goodbye. • Example 4 PairID: 2407214681.jpg#0r1e gold label: entailment Premise: Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink. Hypothesis: Two kids in numbered jerseys wash their hands. entailment: Two young children are two kids, and blue jerseys implies numbered jerseys. contradiction: The kids are either standing on wooden steps or wash their hands. 
neutral: The kids could be washing their hands in a sink for any reason, not just because they wash their hands. • Example 18 PairID: 3667788497.jpg#0r1c gold label: contradiction Premise: Under a blue sky with white clouds, a child reaches up to touch the propeller of a plane standing parked on a field of grass. Hypothesis: A child is playing with a ball. entailment: A propeller is a type of ball. contradiction: A child cannot touch the propeller of a plane while playing with a ball. neutral: Just because the child reaches up to touch the propeller of a plane does not mean the child is playing with a ball. • Example 24 PairID: 6170429974.jpg#3r1c gold label: contradiction Premise: Two men on bicycles competing in a race. Hypothesis: A few people are catching fish. 8742 entailment: Two men are a few people, and catching fish is a form of competition. contradiction: competing and catching are different actions neutral: Competing in a race does not imply catching fish. • Example 30 PairID: 173499538.jpg#2r1c gold label: contradiction Premise: Families waiting in line at an amusement park for their turn to ride. Hypothesis: People are waiting to see a movie. entailment: People are waiting to see a movie at an amusement park. contradiction: People cannot be waiting to ride an amusement park and a movie at the same time. neutral: Not all rides at amusement parks are a movie. B.2 MNLI • Example 1 PairID: 63735n gold label: neutral Premise: The new rights are nice enough Hypothesis: Everyone really likes the newest benefits entailment: Nice enough is another way of saying everyone really likes. contradiction: If everyone really likes theBenefits, then the rights wouldn’t be nice enough. neutral: New benefits are not necessarily the newest benefits. • Example 14 PairID: 73734e gold label: entailment Premise: You and your friends are not welcome here, said Severn. Hypothesis: Severn said the people were not welcome there. entailment: Friends are people. contradiction: People who are not welcome here are not welcome here. neutral: You can not infer they were not welcome. • Example 17 PairID: 70726e gold label: entailment Premise: so i have to find a way to supplement that Hypothesis: I need a way to add something extra. entailment: Add something extra is a rephrasing of Supplement that. contradiction: A way to supplement something extra is not the same as a way to add something extra. neutral: Just because someone is adding something doesn’t mean they are doing it extra. • Example 26 PairID: 67610c gold label: contradiction Premise: Sorry but that’s how it is. Hypothesis: This is how things are and there are no apologies about it. entailment: oops that’s how it is is is same as there are no apologies about it contradiction: A person can’t be sorry and have no apologies. neutral: Just because someone is sorry does not mean they are saying no apologies. • Example 45 PairID: 98811c gold label: contradiction Premise: yeah i mean just when uh the they military paid for her education Hypothesis: The military didn’t pay for her education. entailment: The military paid for her education, doesn’t matter if it was for college or not. contradiction: The military either paid for her education or they didn’t. neutral: Just because the military paid for her education doesn’t mean she didn’t get paid for it.
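The label-specific explanations listed above were produced by GPT-2 generators fine-tuned as described in Section 4.3, with the causal language-modeling loss applied only to the explanation tokens. The snippet below is a hedged sketch of that data preparation using the HuggingFace library mentioned in Appendix A; the helper and the label-masking details are one possible instantiation, not the authors' exact code.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
# [EXP] and [EOS] are special tokens added to the vocabulary (Sec. 4.3).
tokenizer.add_special_tokens({"additional_special_tokens": ["[EXP]", "[EOS]"]})

def encode_training_sequence(premise, hypothesis, explanation, max_len=128):
    """Build one fine-tuning example: 'Premise: p Hypothesis: h [EXP] t_g [EOS]'.
    Prompt tokens get label -100, so the CLM loss covers only the explanation."""
    prompt_ids = tokenizer.encode(f"Premise: {premise} Hypothesis: {hypothesis} [EXP]")
    target_ids = tokenizer.encode(f" {explanation} [EOS]")
    input_ids = (prompt_ids + target_ids)[:max_len]
    labels = ([-100] * len(prompt_ids) + target_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}

# At test time, each label-specific generator greedily completes the prompt
# "Premise: p Hypothesis: h [EXP]" to produce t_x (Sec. 4.3, Appendix A).
```

A GPT2LMHeadModel fine-tuned on such sequences would also need its embedding matrix resized after the special tokens are added, e.g., model.resize_token_embeddings(len(tokenizer)).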
QUASE: Question-Answer Driven Sentence Encoding Hangfeng He† Qiang Ning‡∗ Dan Roth† † University of Pennsylvania ‡ Allen Institute for AI hangfeng,[email protected] [email protected] ∗Part of this work was done while the author was at the University of Illinois at Urbana-Champaign. Abstract Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)? For example, can we use QAMR (Michael et al., 2017) to improve named entity recognition? We suggest that simply further pre-training BERT is often not the best option, and propose the question-answer driven sentence encoding (QUASE) framework. QUASE learns representations from QA data, using BERT or other state-of-the-art contextual language models. In particular, we observe the need to distinguish between two types of sentence encodings, depending on whether the target task has a single- or multi-sentence input; in both cases, the resulting encoding is shown to be an easy-to-use plugin for many downstream tasks. This work may point out an alternative way to supervise NLP tasks. Our code and online demo are publicly available at https://github.com/CogComp/QuASE. 1 Introduction It is labor-intensive to acquire human annotations for NLP tasks which require research expertise. For instance, one needs to know thousands of semantic frames in order to provide semantic role labelings (SRL) (Palmer et al., 2010). It is thus an important research direction to investigate how to get supervision signals from indirect data and improve one's target task. This paper studies the case of learning from question-answering (QA) data for other tasks (typically not QA). We choose QA because (1) a growing interest in QA has led to many large-scale QA datasets available to the community; (2) a QA task often requires comprehensive understanding of language and may encode rich information that is useful for other tasks; and (3) it is much easier to answer questions relative to a sentence than to annotate linguistic phenomena in it, making this a plausible supervision signal (Roth, 2017). There has been work showing that QA data for task A can help another QA task T, conceptually by further pre-training the same model on A (often a larger dataset) before training on T (a smaller one) (Talmor and Berant, 2019; Sun et al., 2019). However, it remains unclear how to use such QA data when the target task does not share the same model as the QA task, which is often the case when the target task is not QA. For instance, QA-SRL (He et al., 2015), which uses QA pairs to represent the predicate-argument structures in SRL, should intuitively be helpful for SRL parsing, but the significant difference in their surface forms prevents us from using the same model in both tasks. The success of modern language modeling techniques, e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and many others, has pointed out an alternative solution to this problem. That is, to further pre-train a neural language model (LM) on these QA data in certain ways, obtain a sentence encoder, and use the sentence encoder for the target task, either by fine-tuning or as additional feature vectors. We call this general framework question-answer driven sentence encoding (QUASE).
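As a concrete picture of this framework, the sketch below shows the generic recipe of further pre-training a BERT-style model on extractive QA data and then reusing the resulting encoder for a (typically non-QA) target task. It uses standard HuggingFace classes and is only an illustration of the idea; the authors' implementation is available at the repository cited above.

```python
from transformers import BertForQuestionAnswering, BertModel

# Step 1: further pre-train BERT on QA data (e.g., QAMR-style span prediction)
# with the standard start/end span loss, then save the result.
qa_model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
# ... training loop over (question, sentence, answer-span) triples goes here ...
qa_model.save_pretrained("quase-further-pretrained")

# Step 2: reuse the further pre-trained encoder for the target task, either by
# fine-tuning it end-to-end or by freezing it as a feature extractor.
encoder = BertModel.from_pretrained("quase-further-pretrained")
```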
A straightforward implementation of QUASE is to first further pre-train BERT (or other LMs) on the QA data in the standard way, as if this QA task were the target, and then fine-tune it on the real target task. (We distinguish three types of training: pre-training refers to training sentence encoders on unlabeled text; further pre-training refers to continuing to train the sentence encoders on intermediate, non-target-task-specific labeled data, e.g., QA data; and fine-tuning refers to training on the target task.) This implementation is technically similar to STILTS (Phang et al., 2018), except that STILTS is mainly further pre-trained on textual entailment (TE) data. However, similar to the observations made in STILTS and their follow-up works (Wang et al., 2019), we find that additional QA data does not necessarily help the target task using the implementation above. While it is unclear how to predict this behaviour, we do find that this happens often for tasks whose input is a single sentence, e.g., SRL and named entity recognition (NER), instead of a sentence pair, e.g., TE. This might be because QA is itself a paired-sentence task, and the implementation above (i.e., to further pre-train BERT on QA data) may learn certain attention patterns that can transfer to another paired-sentence task more easily than to a single-sentence task. Therefore, we argue that, for single-sentence target tasks, QUASE should restrict the interaction between the two sentence inputs when it further pre-trains on QA data. We propose a new neural structure for this and name the resulting implementation s-QUASE, where "s" stands for "single;" in contrast, we name the straightforward implementation mentioned above p-QUASE for "paired." Results show that s-QUASE outperforms p-QUASE significantly on 3 single-sentence tasks—SRL, NER, and semantic dependency parsing (SDP)—indicating the importance of this distinction. Let QUASEA be the QUASE further pre-trained on QA data A. We extensively compare 6 different choices of A: TriviaQA (Joshi et al., 2017), NewsQA (Trischler et al., 2017), SQuAD (Rajpurkar et al., 2016), the relation extraction (RE) dataset in QA format (QA-RE for short) (Levy et al., 2017), Large QA-SRL (FitzGerald et al., 2018), and QAMR (Michael et al., 2017). Interestingly, we find that if we use s-QUASE for single-sentence tasks and p-QUASE for paired-sentence tasks, then QUASEQAMR improves all 7 tasks, namely SRL, SDP, NER, RE, co-reference resolution (Coref), TE, and machine reading comprehension (MRC), in low-resource settings, with an average error reduction rate of 7.1% compared to BERT, which is close to the state of the art in all these tasks. While the set of tasks we experimented with here is non-exhaustive, we think that QUASEQAMR has the potential of improving on a wide range of tasks. This work has three important implications. First, it provides supporting evidence to an important alternative to supervising NLP tasks: using QA to annotate language, which has been discussed in works such as QA-SRL, QAMR, and QA-RE. If it is difficult to teach annotators the formalism of a certain task, perhaps we can instead collect QA data that query the target phenomena and thus get supervision from QA for the original task (and possibly more).
Second, the distinction between s-QUASE and p-QUASE suggests that sentence encoders should consider some properties of the target task (e.g., this work distinguishes between single- and multi-sentence tasks). Third, the good performance of QUASEQAMR suggests that predicate-argument identification is an important capability that many tasks rely on; in contrast, many prior works observed that only language modeling would improve target tasks generally. 2 QA Driven Sentence Encoding This work aims to find an effective way to use readily available QA data to improve a target task that is typically not QA. A natural choice nowadays—given the success of language models—is to further pre-train sentence encoders, e.g., BERT, on QA data in certain ways, and then use the new encoder in a target task. This general framework is called QUASE in this work, and the assumption is that the sentence encoders learned from QA data have useful information for the target task. A straightforward implementation of QUASE is to further pre-train BERT on QA data in the standard way, i.e., fine-tune BERT as if this QA dataset is the target task, and then fine-tune BERT on the real target task. However, we find that this straightforward implementation is less effective or even negatively impacts target tasks with single-sentence input; similar observations were also made in STILTS (Phang et al., 2018) and its follow-ups (Wang et al., 2019): they further pre-train sentence encoders, e.g., ELMo, BERT, and GPT (Radford et al., 2018), on TE data and find that it is not effective for the syntax-oriented CoLA task and the SST sentiment task in GLUE, which are both single-sentence tasks (Wang et al., 2018). One plausible reason is that the step of further pre-training on QA data does not take into account some properties of the target task, for instance, the number of input sentences. QA is inherently a paired-sentence task; a typical setup is, given a context sentence and a question sentence, predict the answer span. Further pre-training BERT on QA data will inevitably learn how to attend to the context given the question. This is preferable when the target task also takes a pair of sentences as input, while it may be irrelevant or harmful for single-sentence tasks. [Figure 1 (architecture diagrams): Two implementations of QUASE, (a) s-QUASE for single-sentence tasks and (b) p-QUASE for paired-sentence tasks. Both structures are further pre-trained on QA data, and the parts in the black boxes are used by target tasks. While p-QUASE is the standard way of fine-tuning BERT, s-QUASE restricts the interaction between the sentence and the question; specifically, the sentence encodings in s-QUASE do not depend on the existence of the question. More details are given in Sec. 2.2 and Appendix A (including experimental settings in A.1, error analysis in A.2, and ablation analysis in A.3).]
It points out that we may need two types of sentence encodings when further pre-training BERT on QA data, depending on the type of the target task. The following subsection discusses this issue in detail. 2.1 Two Types of Sentence Encodings Standard sentence encoding is the problem of converting a sentence S=[w1, w2, · · ·, wn] to a sequence of vectors h(S)=[h1, h2, · · ·, hn] (e.g., skipthoughts (Kiros et al., 2015)). Ideally, h(S) should encode all the information in S, so that it is taskagnostic: given a target task, one can simply probe h(S) and retrieve relevant information. In practice, however, only the information relevant to the training task of h(S) is kept. For instance, when we have a task with multi-sentence input (e.g., QA and TE), the attention pattern A among these sentences will affect the final sentence encoding, which we call hA(S); in comparison, we denote the sentence encoding learned from single-sentence tasks by h(S), since there is no cross-sentence attention A. In a perfect world, the standard sentence encoding h(S) expresses also the conditional sentence encoding hA(S). However, we believe that there is a trade-off between the quality and the quantity of semantic information a model can encode. Our empirical results corroborate this conclusion and more details can be found in Appendix A.2. The distinction between the sentence encodings types may explain the negative impact of using QA data for some single-sentence tasks: Further pre-training BERT on QA data essentially produces a sentence encoding with cross-sentence attentions hA(S), while the single-sentence tasks expect h(S). These two sentence encodings may be very different: One view is from the theory of information bottleneck (Tishby et al., 1999; Tishby and Zaslavsky, 2015), which argues that training a neural network on a certain task is extracting an approximate minimal sufficient statistic of the input sentences with regard to the target task; information irrelevant to the target task is maximally compressed. In our case, this corresponds to the process where the conditional sentence encoding compresses the information irrelevant to the relation, which will enhance the quality but reduce the quantity of the sentence information. 2.2 Two Implementations of QUASE In order to fix this issue, we need to know how to learn h(S) from QA data. However, since QA is a paired-sentence task, the attention pattern between the context sentence and the question sentence is important for successful further pre-training on QA. Therefore, we propose that if the target task is single-sentence input, then fur8746 ther pre-training on QA data should also focus on single-sentence encodings in the initial layers; the context sentence should not interact with the question sentence until the very last few layers. This change is expected to hurt the capability to solve the auxiliary QA task, but it is later proved to transfer better to the target task. This new treatment is called s-QUASE with “s” representing “singlesentence,” while the straightforward implementation mentioned above is called p-QUASE where “p” means “paired-sentence.” The specific structures are shown in Fig. 1. 2.2.1 s-QUASE The architecture of s-QUASE is shown in Fig. 1(a). When further pre-training it on QA data, the context sentence and the question sentence are fed into two pipelines. We use the same Sentence2Question and Question2Sentence attention as used in BiDAF (Seo et al., 2017). 
Above that, “Sentence Modeling,” “Question Modeling,” and “Interaction Layer” are all bidirectional transformers (Vaswani et al., 2017) with 2 layers, 2 layers, and 1 layer, respectively. Finally, we use the same classification layer as BERT, which is needed for training on QA data. Overall, this implementation restricts interactions between the paired-sentence input, especially from the question to the context, because when serving the target task, this attention will not be available. Using s-QUASE in target tasks. Given a sentence S, s-QUASE can provide a sequence of hidden vectors h(S), i.e., the output of the “Sentence Modeling” layer in Fig. 1(a). Although h(S) does not rely on the question sentence, h(S) is optimized so that upper layers can use it to handle those questions in the QA training data, so h(S) indeed captures information related to the phenomena queried by those QA pairs. For single-sentence tasks, we use h(S) from s-QUASE as additional features, and concatenate it to the word embeddings in the input layer of any specific neural model.5 2.2.2 p-QUASE The architecture of p-QUASE is shown in Fig. 1(b), which is the standard way of pre-training BERT. That is, when further pre-training it on QA data, the context sentence and the question sentence form a single sequence (separated by special tokens) and are fed into BERT. 5We mainly use concatenation in both types of QUASE. However, we also use replacement in some experiments and we will note these cases later in this paper. Using p-QUASE in target tasks. Given a sentence pair S (concatenated), p-QUASE produces hA(S), i.e., the output of the BERT module in Fig. 1(b). One can of course continue fine-tuning pQUASE on the target task, but we find that adding p-QUASE to an existing model for the target task is empirically better (although not very significant); specifically, we try to add hA(S) to the final layer before the classification layer, and we also allow pQUASE to be updated when training on the target task, although it is conceivable that other usages may lead to even stronger results. For instance, when the target task is token classification, e.g., MRC, we can simply concatenate the vectors of hA(S) at each timestamp to any existing model; when the target task is sentence classification, e.g., TE, we apply max-pooling and average-pooling on hA(S), respectively, and concatenate the two resulting vectors to any existing model before the final classification layer. 2.3 Related Work on Sentence Encoding Modern LMs are essentially sentence encoders pretrained on unlabeled data and they outperform early sentence encoders such as skip-thoughts (Kiros et al., 2015). While an LM like BERT can handle lexical and syntactic variations quite well, it still needs to learn from some annotations to acquire the “definition” of many tasks, especially those requiring complex semantics (Tenney et al., 2019). Although we extensively use BERT here, we think that the specific choice of LM is orthogonal to our proposal of learning from QA data. Stronger LMs, e.g., RoBERTa (Liu et al., 2019) or XLNet (Yang et al., 2019), may only strengthen the proposal here. This is because a stronger LM represents unlabeled data better, while the proposed work is about how to represent labeled data better. CoVe (McCann et al., 2017) is another attempt to learn from indirect data, translation data specifically. However, it does not outperform ELMo or BERT in many NLP tasks (Peters et al., 2018) and probing analysis (Tenney et al., 2019). 
In contrast, our QUASE will show stronger experimental results than BERT on multiple tasks. In addition, we think QA data is generally cheaper to collect than translation data. The proposed work is highly relevant to Phang et al. (2018) and their follow-up works (Wang et al., 2019), which use further pre-training on data-rich intermediate supervised tasks and aim 8747 Single-sentence Paired-sentence System SRL RE TE MRC BERT 34.17 62.99 78.29 79.90 BERTQAMR 32.92 50.16 78.73 82.96 Table 1: The naive way of training BERT on QAMR (BERTQAMR) negatively impacts singlesentence tasks. We only use 10% training data for simplicity. We use BERT/BERTQAMR to produce feature vectors for a BiLSTM model (SRL) and a CNN model (RE); for TE and MRC, we fine-tune BERT/BERTQAMR. to improve another target task. The key differences are as follows: First, we distinguish two types of sentence encodings, which provide explanation to their puzzle that sentence-pair tasks seem to benefit more from further pre-training than single-sentence tasks do. Second, they only focus on fine-tuning based methods which cannot be easily plugged in many single-sentence tasks such as SRL and Coref, while we analyze both fine-tuning based and feature-based approaches. Third, they mainly use TE signals for further pre-training, and evaluate their models on GLUE (Wang et al., 2018) which is a suite of tasks very similar to TE. Our work instead makes use of QA data to help tasks that are typically not QA. Fourth, from their suite of further pre-training tasks, they observe that only further pre-training on language modeling tasks has the power to improve a target task in general, while we find that QAMR may also have this potential, indicating the universality of predicate-argument structures in NLP tasks. Our work is also related to Sentence-BERT (Reimers and Gurevych, 2019) in terms of providing a better sentence representation. However, their focus was deriving semantically meaningful sentence embeddings that can be compared using cosine-similarity, which reduces the computational cost of finding the most similar pairs. In contrast, QUASE provides a better sentence encoder in the same format as BERT (a sequence of word embeddings) to better support tasks that require complex semantics. 3 Applications of QUASE In this section, we conduct thorough experiments to show that QUASE is a good framework to get supervision from QA data for other tasks. We first give an overview of the datasets and models used in these experiments before diving into the details of each experiment. Specifically, we use PropBank (Kingsbury and Palmer, 2002) (SRL), the dataset from the SemEval’15 shared task (Oepen et al., 2015) with DELPH-IN MRS-Derived Semantic Dependencies target representation (SDP), CoNLL’03 (Tjong Kim Sang and De Meulder, 2003) (NER), the dataset in SemEval’10 Task 8 (Hendrickx et al., 2009) (RE), the dataset in the CoNLL’12 shared task (Pradhan et al., 2012) (Coref), MNLI (Williams et al., 2018) (TE), and SQuAD 1.0 (Rajpurkar et al., 2016) (MRC). In Table 4, we use CoNLL’12 English subset of OntoNotes 5.0 (Pradhan et al., 2013), which is larger than PropBank. The performance of TE and MRC is evaluated on the development set.6 For single-sentence tasks, we use both simple baselines (e.g., BiLSTM and CNN; see Appendix B.1) and near-state-of-the-art models published in recent years. As in ELMo, we use the deep neural model in He et al. (2017) for SRL, the model in Peters et al. (2018) for NER, and the end-to-end neural model in Lee et al. 
(2017) for Coref. We also use the biaffine network in Dozat and Manning (2018) for SDP but we removed part-of-speech tags from its input, and the attention-based BiLSTM in Zhou et al. (2016) is the strong baseline for RE. In addition, we replace the original word embeddings in these models (e.g., GloVe (Pennington et al., 2014)) by BERT. Throughout this paper, we use the pre-trained case-insensitive BERT-base implementation. More details on our experimental setting can be found in Appendix B, including the details of simple models in B.1, some common experimental settings of QUASE in B.2, and s-QUASE combined with other SOTA embeddings (ELMo and Flair (Akbik et al., 2018)) in B.3. 3.1 Necessity of Two Representations We first consider a straightforward method to use QA data for other tasks—to further pre-train BERT on these QA data. We compare BERT further pre-trained on QAMR (denoted by BERTQAMR) with BERT on two single-sentence tasks (SRL and RE) and two paired-sentence tasks (TE and MRC). We use a feature-based approach for singlesentence tasks and a fine-tuning approach for paired-sentence tasks. The reason is two-fold. On the one hand, current SOTAs of all singlesentence tasks considered in this paper are still 6For TE, we mean matched examples in MNLI. 8748 Single-Sentence Tasks Paired-Sentence Tasks Tasks SRL SDP NER TE MRC Split 10% 100% 10% 100% 10% 100% 10% 30% 10% 100% s-QUASE 46.42 70.13 76.08 87.29 70.69 87.10 52.25 57.30 44.67 67.09 p-QUASE 32.92 66.40 70.92 86.43 49.97 85.23 57.29 60.49 48.29 72.97 Table 2: Probing results of the sentence encoders from s-QUASE and p-QUASE. In all tasks, we fix the model QUASE and use the sentence encodings as input feature vectors for the model of each task. In order to keep the model structure as simple as possible, we use BiLSTM for SRL, NER, and TE, Biaffine for SDP, and BiDAF for MRC. We compare on 10% and 100% of the data in all tasks except TE, where we use 30% to save run-time. (a) s-QUASE (b) p-QUASE Figure 2: Sample complexity analysis of using BERT and QUASE on SRL and MRC. We find that much fewer training examples are needed with the help of QUASEQAMR: with 50% SRL training data, s-QUASE can achieve comparable performance as BERT trained on 100%; with 0.1% training data for MRC, p-QUASE can achieve a reasonably good performance of 69.81%. feature-based. How to efficiently use sentence encoders (e.g. BERT) in a fine-tuning approach for some complicated tasks (e.g. SRL and SDP) is unclear. On the other hand, the fine-tuning approach shows great advantage over feature-based on many paired-sentence tasks (e.g. TE and MRC). Similar to Phang et al. (2018), we find in Table 1 that the two single-sentence tasks benefit less than the two paired-sentence tasks from BERTQAMR, which indicates that simply “further pre-training BERT” is not enough. We then compare s-QUASEQAMR and pQUASEQAMR on three single-sentence tasks (SRL, SDP and NER) and two paired-sentence tasks (TE and MRC) to show that it is important to distinguish two types of sentence representations. Rather than concatenating two embeddings as proposed in Sec. 2.2, here we replace BERT embeddings with QUASE embeddings for convenience. The results are shown in Table 2. We find that sQUASE has a great advantage over p-QUASE on single-sentence tasks and p-QUASE is better than s-QUASE on paired-sentence tasks. The proposal of two types of sentence encoders tackles the problem one may encounter when there is only further pre-training BERT on QAMR for single-sentence tasks. 
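To make concrete how p-QUASE representations can be attached to a paired-sentence model (cf. Sec. 2.2.2 and the paired-sentence columns of Table 2), the following is a minimal sketch, not the authors' released code; the `base_model` and `pquase` interfaces, the hidden sizes, and the output shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PQuaseAugmentedClassifier(nn.Module):
    """Adds pooled p-QUASE features to an existing sentence-pair classifier.

    Following the usage described in Sec. 2.2.2: max-pooling and average-pooling
    are applied to the p-QUASE token vectors h_A(S), and the two pooled vectors
    are concatenated to the existing model's representation before the final
    classification layer. p-QUASE itself is left trainable on the target task.
    """

    def __init__(self, base_model, pquase, base_dim, pquase_dim=768, num_labels=3):
        super().__init__()
        self.base_model = base_model   # assumed: returns (B, base_dim) for a sentence pair
        self.pquase = pquase           # assumed: returns (B, T, pquase_dim) for the pair
        self.classifier = nn.Linear(base_dim + 2 * pquase_dim, num_labels)

    def forward(self, pair_inputs):
        base_repr = self.base_model(pair_inputs)              # (B, base_dim)
        h_a = self.pquase(pair_inputs)                        # (B, T, pquase_dim)
        pooled = torch.cat([h_a.max(dim=1).values,            # max-pooling over tokens
                            h_a.mean(dim=1)], dim=-1)         # average-pooling over tokens
        return self.classifier(torch.cat([base_repr, pooled], dim=-1))
```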
In summary, it is necessary to distinguish two types of sentence representations for singlesentence tasks and paired-sentence tasks. 3.2 Sample Complexity of QUASE To see whether adding QUASE to BERT reduces the sample complexity, we compare QUASEQAMR with BERT on one single-sentence task (SRL) and one paired-sentence task (MRC) with different percentages of training examples. For convenience, we replace BERT embeddings with QUASE embeddings for SRL. As shown in Figure 2, we find that s-QUASEQAMR outperforms BERT on SRL with small training data, and p-QUASEQAMR outperforms BERT on MRC with small training data. The results support that (1) adding QUASE to BERT reduces the sample complexity, (2) QUASE is very important in the low-resource setting. For instance, s-QUASEQAMR achieves an F1 score of 61 in SRL with 30% (27K) training examples (compared to 50.92 F1 by BERT). And pQUASEQAMR achieves 69.81 average F1 on MRC with 0.1% (about 100) training examples (compared to 13.29 F1 by BERT). 8749 Models s-QUASE p-QUASE Tasks SRL SDP NER RE TE Avg Split small full small full small full small full small full small full BERT 34.17 66.02 75.49 90.13 88.89 91.38 71.48 86.33 78.29 84.09 69.66 83.59 QUASE 50.16 72.59 78.30 90.78 90.64 92.16 77.14 86.80 78.94 84.97 75.04 85.46 TriviaQA 17.75 39.69 77.29 90.43 89.74 91.70 75.41 86.80 78.50 84.95 67.74 78.71 NewsQA 27.99 53.05 77.27 90.41 89.96 91.65 73.08 85.88 78.85 84.30 69.43 81.06 SQuAD 34.35 61.86 76.90 90.51 89.94 91.07 77.14 85.80 78.21 84.97 71.31 82.84 QA-RE 35.50 65.85 78.30 90.78 90.64 91.73 63.36 85.80 78.94 84.68 69.35 83.77 Large QA-SRL 50.16 72.59 76.92 90.68 90.12 91.73 68.99 85.46 78.88 84.61 73.01 85.01 QAMR 46.42 70.13 77.53 90.57 89.90 92.16 72.23 86.37 78.73 84.79 72.96 84.80 Table 3: Further pre-training QUASE on different QA datasets of the same number of QA pairs (51K). As we propose, s-QUASE is used as features for single-sentence tasks, and p-QUASE is further fine-tuned for the paired-sentence task. The specific models are all strong baselines except for SRL, where we use a simple BiLSTM model to save run-time. “Small” means 10% training examples for all tasks except NER, where “small” means the dev set (about 23%) of the corresponding training set. We further show the results of QUASE with the best QA dataset, which are significantly better than those of BERT. Single-Sentence Tasks Paired-Sentence Tasks Small SRL SDP NER RE Coref TE MRC Avg BERT 76.65 75.49 88.89 71.48 62.76 78.29 79.90 76.21 Proposed (abs. imp.) +3.95 +2.04 +1.01 +0.75 +0.60 +0.44 +3.06 +1.69 Proposed (rel. imp.) 16.9% 8.3% 9.1% 2.6% 1.6% 2.0% 15.2% 7.1% Full SRL SDP NER RE Coref TE MRC Avg BERT 84.54 90.13 91.38 86.33 69.05 84.09 88.23 84.82 Proposed (abs. imp.) +0.15 +0.44 +0.78 +0.04 -0.14 +0.7 +0.35 +0.33 Proposed (rel. imp.) 0.9% 4.5% 9.0% 0.3% -0.5% 4.4% 3.0% 2.2% Table 4: QUASEQAMR (almost) universally improves on 5 single-sentence tasks and 2 paired-sentence tasks. Note BERT is close to the state of the art for these tasks. Both absolute improvement (abs. imp.) and relative improvement (rel. imp.; error reduction rate) are reported. “Small/Full” refers to the size of training data for each target task. For SDP, RE, TE, and MRC, “small” means 10% of the training set, while for NER, SRL, and Coref, “small” means the development set (about 10%-30% compared to each training set). 
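For the feature-based use of s-QUASE on single-sentence tasks (Sec. 2.2.1 and the probes above), a minimal sketch is shown below; it is not the authors' implementation, and the `squase_encoder` interface, embedding sizes, and the simple BiLSTM tagger are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FeatureBasedTagger(nn.Module):
    """A simple BiLSTM tagger that consumes frozen s-QUASE features.

    The s-QUASE encoder is kept fixed; its per-token hidden vectors h(S) are
    concatenated with ordinary word embeddings, following the feature-based
    setup used for single-sentence tasks.
    """

    def __init__(self, squase_encoder, vocab_size, emb_dim=100,
                 squase_dim=768, hidden_dim=300, num_labels=10):
        super().__init__()
        self.squase_encoder = squase_encoder          # assumed: returns (B, T, squase_dim)
        for p in self.squase_encoder.parameters():
            p.requires_grad = False                   # used as features only
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim + squase_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        with torch.no_grad():
            h_s = self.squase_encoder(token_ids)      # frozen s-QUASE features
        x = torch.cat([self.word_emb(token_ids), h_s], dim=-1)
        out, _ = self.bilstm(x)
        return self.classifier(out)                   # per-token label logits
```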
3.3 Data Choice for Further Pre-training We compare BERT with QUASE further pre-trained with the same number of QA pairs on 6 different QA datasets (TriviaQA (Joshi et al., 2017), NewsQA (Trischler et al., 2017), SQuAD, QA-RE (Levy et al., 2017), Large QA-SRL (FitzGerald et al., 2018), and QAMR). s-QUASE models further pre-trained on different QA datasets are evaluated on four single-sentence tasks in a feature-based approach: SRL, SDP, NER and RE. p-QUASE further pre-trained on different QA datasets is evaluated on one task (TE) in a fine-tuning approach. In Table 3, we find that the best options are quite different across different target tasks, which is expected because a task usually benefits more from a more similar QA dataset. However, we also find that QAMR is generally a good further-pre-training choice for QUASE. This is consistent with our intuition: First, QAMR has a simpler concept class than other paragraph-level QA datasets, such as TriviaQA, NewsQA and SQuAD. It is easier for QUASE to learn a good representation with QAMR to help sentence-level tasks. Second, QAMR is more general than other sentence-level QA datasets, such as QA-RE and Large QA-SRL.7 Therefore, we think that the capability to identify predicate-argument structures can generally help many sentence-level tasks, as we discuss next.
7 Although the average performance of QUASEQAMR on the five tasks is slightly below that of QUASE further pre-trained on Large QA-SRL, the benefit of the latter mostly comes from SRL. QUASE is mainly designed to improve many tasks, so QAMR is a better choice in our setup; in practice, we do not limit QUASE to any specific QA dataset, and one can use the best one for the corresponding target tasks.
3.4 The Effectiveness of QUASE Here we compare QUASEQAMR with BERT on 5 single-sentence tasks and 2 paired-sentence tasks, where QUASEQAMR is further pre-trained on the training set (51K QA pairs) of the QAMR dataset. As shown in Table 4, we find that QUASEQAMR has a better performance than BERT on both single-sentence tasks and paired-sentence tasks, especially in the low-resource setting8, indicating that QUASEQAMR can provide extra features compared to BERT. Admittedly, the improvement in the "Full" setting is not significantly large, but we think that this is expected because large direct training data are available (such as SRL with 278K training examples in OntoNotes). However, it is still promising that 51K indirect QA pairs can improve downstream tasks in the low-resource setting (i.e., several thousand direct training examples). This is because such indirect supervision helps the scalability of machine learning methods, especially for specific domains or low-resource languages where large-scale direct training data do not exist. 4 Discussion In this section, we discuss a few issues pertaining to improving QUASE by using additional QA datasets, and compare QUASE with related symbolic representations. 4.1 Further Pre-training QUASE on Multiple QA Datasets We investigate whether adding the Large QA-SRL dataset (FitzGerald et al., 2018) or the QA-RE9 dataset into QAMR in the further pre-training stage can help SRL and RE. We use s-QUASE embeddings to replace BERT embeddings instead of concatenating the two embeddings. The effectiveness of adding existing resources (Large QA-SRL or QA-RE) into QAMR in the further pre-training stage of s-QUASE on SRL and RE is shown in Table 5. We find that adding related QA signals (Large QA-SRL for SRL and QA-RE for RE) into QAMR can help improve specific tasks (a minimal sketch of this dataset mixing follows below).
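As a rough sketch of the "union with shuffling" used when further pre-training on multiple QA datasets, the snippet below is illustrative only; the file paths, the one-JSON-record-per-line format, and the field names are assumptions, not the authors' data format.

```python
import json
import random

def mix_qa_datasets(paths, seed=42, max_examples=None):
    """Union several QA datasets and shuffle them for further pre-training.

    Each file is assumed to hold one JSON record per line with
    'sentence', 'question', and 'answer' fields (an illustrative format).
    """
    examples = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            examples.extend(json.loads(line) for line in f if line.strip())
    random.Random(seed).shuffle(examples)
    return examples[:max_examples] if max_examples else examples

# e.g., QAMR plus a 100K-example subsample of QA-RE (hypothetical file names)
mixed = mix_qa_datasets(["qamr_train.jsonl", "qa_re_100k.jsonl"])
```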
Noteworthy is the fact that QA-RE can help SRL (Large QA-SRL can also help RE), though the improvement is minor compared to Large QA-SRL (QARE). These results indicate that adding more QA signals related to the sentence can help get a better sentence representation in general. 8Another interesting finding is that simple models usually benefit more from QUASE embeddings than SOTA models. 9Because the training set of QA-RE is too large, we randomly choose 100, 000 training examples. Tasks SRL RE Split 10% 100% 10% 100% BERT 34.16 66.02 59.36 83.28 QUASEQAMR 46.42 70.13 61.09 82.22 QUASEQAMR+Large QA−SRL 49.92 71.74 65.76 83.16 QUASEQAMR+QA−RE 47.25 72.52 68.12 83.89 Table 5: The potential of further improving QUASEQAMR by further pre-training it on more QA data. The “+” between datasets means union with shuffling. Both Large QA-SRL and QA-RE help achieve better results than QAMR alone. For simplicity, we use a simple BiLSTM model for SRL and a simple CNN model for RE. See more in Appendix B. 4.2 Comparison with Symbolic Meaning Representations Traditional (symbolic) shallow meaning representations such as SRL and AMR, suffer from having a fixed set of relations one has to commit to. Moreover, inducing these representations requires costly annotation by experts. Proposals such as QA-SRL, QAMR, semantic proto-roles (Reisinger et al., 2015), and universal dependencies (White et al., 2016) avoid some of these issues by using natural language annotations, but it is unclear how other tasks can take advantage of them. QUASE is proposed to facilitate inducing distributed representations instead of symbolic representations from QA signals; it benefits from cheaper annotation and flexibility, and can also be easily used in downstream tasks. The following probing analysis, based on the Xinhua subset in the AMR dataset, shows that s-QUASEQAMR embeddings encode more semantics related to AMR than BERT embeddings. Specifically, we use the same edge probing model as Tenney et al. (2019), and find that the probing accuracy (73.59) of s-QUASEQAMR embeddings is higher than that (71.58) of BERT. At the same time, we find that p-QUASEQAMR can achieve 76.91 F1 on the PTB set of QA-SRL, indicating that p-QUASEQAMR can capture enough information related to SRL to have a good zero-shot SRL performance. More details can be found in Appendix C.1. Another fact worth noting is that AMR can be used to improve downstream tasks, such as MRC (Sachan and Xing, 2016), TE (Lien and Kouylekov, 2015), RE (Garg et al., 2019) and SRL (Song et al., 2018). The benefits of QUASEQAMR on downstream tasks show that we can take advantage of AMR by learning from much cheaper QA signals dedicated to it. 8751 4.3 Difficulties in Learning Symbolic Representations from QA Signals QUASE is designed to learn distributed representations from QA signals to help down-stream tasks. We further show the difficulties of learning two types of corresponding symbolic representations from QA signals, which indicates that the two other possible methods are not as tractable as ours. One option of symbolic representation is the QAMR graph. Michael et al. (2017) show that question generation for QAMR representations can only achieve a precision of 28%, and a recall of 24%, even with fuzzy matching (multi-BLEU10 > 0.8). Furthermore, it is still unclear how to use the complex QAMR graph in downstream tasks. 
These results indicate that learning a QAMR parser for down-stream tasks is mainly hindered by question generation, and how to use the full information of QAMR for downstream tasks is still unclear. Another choice of symbolic representation is AMR, since QAMR is proposed to replace AMR. We consider a simpler setting, learning an SRL parser from Large QA-SRL. We propose three models in different perspectives, but the best performance of them is only 54.10 F1, even with fuzzy matching (Intersection/Union ≥0.5). More details can be found in Appendix C.2. Although a lot of methods (Khashabi et al., 2018; Marcheggiani and Titov, 2017; Strubell et al., 2018) can be adopted to use SRL/AMR in downstream tasks, the difficulty of learning a good SRL/AMR parser from QA signals hinders this direction. The difficulties of learning the two types of symbolic representations from QA signals indicate that our proposal of learning distributed representations from QA signals is a better way of making use of the latent semantic information in QA pairs for down-stream tasks. 5 Conclusion In this paper, we investigate an important problem in NLP: Can we make use of low-cost signals, such as QA data, to help related tasks? We retrieve signals from sentence-level QA pairs to help NLP tasks via two types of sentence encoding approaches. For tasks with a single-sentence input, such as SRL and NER, we propose s-QUASE that provides latent sentence-level representations; for tasks with a sentence pair input, such as TE and MRC we propose p-QUASE, that generates latent 10An average of BLEU1–BLEU4 scores. representations related to attentions. Experiments on a wide range of tasks show that the distinction of s-QUASE and p-QUASE is highly effective, and QUASEQAMR has the potential to improve on many tasks, especially in the low-resource setting. Acknowledgements This material is based upon work supported by the US Defense Advanced Research Projects Agency (DARPA) under contracts FA8750-192-0201, W911NF-15-1-0461, and FA8750-19-21004, a grant from the Army Research Office (ARO), and Google Cloud. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING, pages 1638–1649. M. Chang, L. Ratinov, and D. Roth. 2007. Guiding semi-supervision with constraint-driven learning. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 280–287. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171–4186. Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In ACL, pages 484–490. Nicholas FitzGerald, Julian Michael, Luheng He, and Luke Zettlemoyer. 2018. Large-scale QA-SRL parsing. In ACL, pages 2051–2060. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform. Sahil Garg, Aram Galstyan, Greg Ver Steeg, Irina Rish, Guillermo Cecchi, and Shuyang Gao. 2019. Kernelized hashcode representations for relation extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6431–6440. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 
2017. Deep semantic role labeling: What works and what’s next. In ACL, pages 473–483. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. 8752 In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP), pages 643–653. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94–99. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Association for Computational Linguistics. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2018. Question answering as global reasoning over semantic abstractions. In Proceedings of The Conference on Artificial Intelligence (Proc. of the Conference on Artificial Intelligence (AAAI)). Paul Kingsbury and Martha Palmer. 2002. From treebank to propbank. In LREC, pages 1989–1993. Citeseer. Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP, pages 188–197. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In CoNLL, pages 333–342. Elisabeth Lien and Milen Kouylekov. 2015. Semantic parsing for textual entailment. In Proceedings of the 14th International Conference on Parsing Technologies, pages 40–49. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In EMNLP, pages 1506–1515. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NeurIPS, pages 6294– 6305. Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke Zettlemoyer. 2017. Crowdsourcing question-answer meaning representations. NAACL. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, and Zdeˇnka Ureˇsov´a. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915–926. M. Palmer, D. Gildea, and N. Xue. 2010. Semantic Role Labeling, volume 3. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL, pages 2227–2237. Jason Phang, Thibault F´evry, and Samuel R Bowman. 2018. 
Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In CoNLL, pages 143–152. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1– 40. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pages 2383–2392. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3973–3983. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics, 3:475–488. 8753 Dan Roth. 2017. Incidental supervision: Moving beyond supervised learning. In Proc. of the Conference on Artificial Intelligence (AAAI). Mrinmaya Sachan and Eric Xing. 2016. Machine comprehension using rich semantic representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 486–492. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. ICLR. Li Song, Yuan Wen, Sijia Ge, Bin Li, Junsheng Zhou, Weiguang Qu, and Nianwen Xue. 2018. An easier and efficient framework to annotate semantic roles: Evidence from the chinese amr corpus. In The 13th Workshop on Asian Language Resources, page 29. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In EMNLP, pages 5027–5038. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2019. Improving machine reading comprehension with general reading strategies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2633–2643, Minneapolis, Minnesota. Association for Computational Linguistics. Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911–4921. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. ICLR. Naftali Tishby, Fernando C Pereira, and William Bialek. 1999. The information bottleneck method. In Proc. of the Annual Allerton Conference on Communication, Control and Computing. 
Naftali Tishby and Noga Zaslavsky. 2015. Deep learning and the information bottleneck principle. In IEEE Information Theory Workshop (ITW). Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, et al. 2019. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465–4476. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal Decompositional Semantics on Universal Dependencies. In Empirical Methods in Natural Language Processing (EMNLP). Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL, pages 1112–1122. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In ACL, pages 207– 212. 8754 A Additional Details for QUASE In this section, we show the experimental details of QUASE. We first show the experimental settings of training p-QUASE and s-QUASE in Section A.1. After that, we conduct error analysis of QUASE to show the shortcomings of QUASE in Section A.2. Finally, the ablation analysis of s-QUASE is in Section A.3. A.1 Experimental Settings Our QUASE is based on the re-implementation of BERT with pytorch (Wolf et al., 2019). Although we might change a bit to fit the memory of GPU sometimes, the common hyper parameters for further pre-training s-QUASE and p-QUASE are as follows: Further pre-training p-QUASE. For sentencelevel QA datasets (QAMR, Large QA-SRL, and QA-RE), we further pre-train BERT for 4 epochs with a learning rate of 5e-5, a batch size of 32, a maximum sequence length of 128. 
For paragraph-level QA datasets (SQuAD, TrivaQA, and NewsQA), we further pre-train BERT for 4 epochs with a learning rate of 5e-5, a batch size of 16, a maximum sequence length of 384. Further pre-training s-QUASE. For sentencelevel QA datasets (QAMR, Large QA-SRL, and QA-RE), we further pre-train s-QUASE for 64 epochs with a learning rate of 1e-4, a batch size of 72, a maximum sentence length of 128 and a maximum question length of 24. For paragraph-level QA datasets (SQuAD, TrivaQA, and NewsQA), we further pre-train s-QUASE for 32 epochs with a learning rate of 1e-4, a batch size of 8, a maximum sentence of 384, and a maximum question length of 64. We need to note that s-QUASE contains more architectures than BERT, so the hyper parameters for BERT fine-tuning are not good for s-QUASE further pre-training A.2 Error Analysis of QUASE The F1 scores of s-QUASEQAMR and pQUASEQAMR on the development set are 76.20 and 90.35. In general, the results of s-QUASE are similar to BiDAF (Seo et al., 2017) but are significantly worse than the p-QUASE on QAMR. We conduct thorough error analysis including: sentence length, answer length, question length, question words and the PoS tag of the answer. We find that s-QUASE is not good at dealing with long sentences compared to p-QUASE. The analysis of model performance with regard to sentence length is shown in Figure 3(a). The average number of QA pairs is much larger when the sentence is longer as shown in Figure 3(b). The distribution of training set and development set is quite different, which makes the situation more complicated. We further compare s-QUASELarge QA−SRL and p-QUASELarge QA−SRL on Large QA-SRL whose distribution of training and development sets are the same. From the results, s-QUASE is still not as good as p-QUASE on long sentences. We think that the failure of s-QUASE in long sentences is mainly because there are more relations to encode, while p-QUASE only needs to encode information based on specific questions. We believe that there is a trade-off between the quality and the quantity of sentence information that a model can encode in practice, although h(S) also include the information in hA(S) in a perfect world. A.3 Ablation Analysis for s-QUASE s-QUASE consists of three basic components: a sentence encoder for the sentence representation, a question encoder for the question representation, an interaction layer between the sentence component and the question component. We carefully designed five variants of s-QUASE with increasing complexity and performance: (I) Basic model: a fixed BERT and one-layer bidirectional transformer for sentence modeling, a fixed BERT and one-layer bidirectional transformer for question modeling, and a two-layer multi-layer perceptron (MLP) for the interaction layer; (II) a fine-tuned BERT; (III) the same as model II, with a bi-directional attention flow added to the question component; (IV) the same as model III, with the interaction layer changed from a two-layer MLP to a bidirectional transformer; (V) the same as model IV, with the sentence modeling layer and question modeling layer changed from a single-layer bi-directional transformer to a two-layer one, and beam search is used in the inference stage. Table 6 shows the results of our models further pre-trained on the development set of the QAMR dataset. B Detailed Experimental Setup In this section, we show the details of experimental setup in Section 3. 
Because the corresponding settings are too many, we show some common settings here and more details are in our code. We first show the details of simple models in Section B.1, 8755 (a) Error analysis for QUASE. (b) Number of QA pairs. Figure 3: Error analysis of QUASE on the sentence length. We compare the performance of s-QUASE and p-QUASE on examples with different sentence lengths in the development set. The average number of QA pairs corresponding to the sentence length in the train and development sets is also shown. Models Model I Model II Model III Model IV Model V Average EM 34.97 41.64 55.68 64.18 66.77 Average F1 40.05 45.49 62.98 72.96 76.20 Table 6: The results of five variants of s-QUASEQAMR on the development set of QAMR. We use the average exact match (EM) and average F1 as our evaluation metrics. Embeddings ELMo Flair Tasks SRL Coref NER Split small full small full small full Baselines 78.32 83.87 60.72 66.89 89.86 92.37 s-QUASEQAMR 79.40 84.14 61.54 66.58 90.18 92.54 Table 7: Comparison between s-QUASEQAMR and other STOA embeddings. We use the same experimental settings as Section 3.4 for the three single-sentence tasks, SRL, Coref and NER. We use ELMo embeddings for SRL and Coref, and Flair embeddings for NER as our baselines. and then show some common experimental settings of QUASE in Section B.2. Finally, we compare s-QUASE with other SOTA embeddings (ELMo and Flair) in Section B.3 B.1 Simple Models When QUASE is used in the feature-based approach, we need use models for the tasks. For simplicity, we sometimes choose to use some simple models rather than strong baselines in Section 3 in our analysis. Following standard practice, we use a simple BiLSTM model with the input of word embeddings and binary features of predicates for SRL, a simple biaffine model based on BiLSTM for SDP, a simple BiLSTM mode for NER, a simple CNN baseline with the input of word embeddings and position features for RE, and a simple BiLSTM model for TE. B.2 Experimental Settings We use the re-implementation of SRL, NER and Coref from AllenNLP (Gardner et al., 2017) for strong baselines, and we implement the strong baselines of SDP and RE ourselves. As for MRC and TE, we use the re-implementation of BERT with pytorch (Wolf et al., 2019). As for simple models, we implement them by ourselves. As for the hyper parameters for strong baselines of single-sentence tasks, we use the same hyper parameters in the related papers (shown in Section 3). As for the hyper parameters for simple models, we tune them ourselves to find some reasonable hyper parameters. The hyper parameters of MRC and TE for p-QUASE are based on (Wolf et al., 2019). B.3 Comparison with Other Embeddings To show whether s-QUASE can also provide extra features than other STOA embeddings11, such 11The reported STOA models for SRL and Coref is based on ELMo embeddings and the reported STOA model for NER is based on Flair embeddings. 8756 Sentence Ann. Question Answers (1) Mr. Agnew was vice president of the U.S. from 1969 until he resigned in 1973 . INF What did someone resign from? vice president of the U.S. (2) This year , Mr. Wathen says the firm will be able to service debt and still turn a modest profit . INF When will something be serviced? this year (3) Mahlunga has said he did nothing wrong and Judge Horn said he ”failed to express genuine remorse”. INF Who doubted his remorse was genuine? 
Judge Horn (4) Volunteers are presently renovating the former post office in the town of Edwards, Mississippi, United States for the doctor to have an office. IMP What country are the volunteers renovating in? United States Table 8: Some examples of question-answer pairs in QA-SRL and QAMR datasets. The first two examples are from QA-SRL dataset and predicates are bolded. The last two examples are from QAMR dataset. We show two phenomena that are not modeled by traditional symbolic representations of predicate-argument structure (e.g SRL and AMR), inferred relations (INF) and implicit arguments (IMP). Span IOU ≥0.5 Token Models Precision Recall F1 Precision Recall F1 Precision Recall F1 Rules + EM 24.31 22.78 23.52 34.34 32.27 33.27 50.46 28.19 36.17 PerArgument + CoDL + Multitask 32.02 12.30 17.77 46.99 18.06 26.09 70.76 17.80 28.45 Argument Detector + Argument Classifier 49.19 43.09 45.94 57.84 50.82 54.10 69.37 47.60 56.45 Mapping: upper-bound 67.82 48.58 56.61 89.09 65.82 75.70 91.57 70.25 79.50 Table 9: Results of learning an SRL parser from question-answer pairs. as ELMo and Flair, we compare s-QUASEQAMR embeddings with ELMo embeddings on SRL and Coref, and compare s-QUASEQAMR embeddings with Flair on NER. The results are shown in Table 7. We find that s-QUASEQAMR has a better performance than ELMo and Flair, especially in the low-resource setting, which indicates that sQUASE can provide extra features than ELMo and Flair. C On the Strength of Distributed Meaning Representations In this section, we first show more details of the comparison between QUASE with symbolic meaning representations in Section C.1. After that, we show the details of learning an SRL parser from QA-SRL in Section C.2. C.1 Comparison with Symbolic Meaning Representations Probing Analysis. We first show the details of our probing analysis on the Xinhua subset12 of AMR dataset. Our probing task can be formulated as follows: given two nodes in order, the probing model needs to predict the directed relation from one node to the other. We only consider the cases where there is indeed a relation between them. 12Only four subsets in AMR dataset contain both training and development sets, but the other three subsets either use informal languages or templatic and report-like structures, which are quite different from the domain of QAMR. There are 741 sentences and 9008 relations in valid alignments with 70 different types of relations in the training set, and 99 sentences with 1098 relations in valid alignments with 43 different types of relations in the development set. We use the same edge probing model as (Tenney et al., 2019), but we train it by minimizing a softmax loss rather than binary cross-entropy loss. Therefore, our probing results are based on the classification accuracy, not binary F1 score. Systematic Analysis. We use Large QA-SRL as a testbed to analyze the representation ability of p-QUASEQAMR. Our p-QUASEQAMR achieves 85.79 F1 score on the development set of Large QA-SRL, while BERT further pre-trained on SQuAD with the same number of QA pairs only achieves an F1 score of 64.63 (it achieves 86.98 F1 on SQuAD). For reference, BERT further pre-trained on Large QA-SRL can achieve 92.19 F1 on Large QA-SRL. All these numbers indicate that p-QUASEQAMR has a strong ability to answer questions related to SRL. On the other hand, BERT further pre-trained on Large QA-SRL can only achieve 72.17 F1 on the development set of QAMR, while pQUASEQAMR can achieve 85.79 F1 on Large QA-SRL (it achieves 90.35 F1 on QAMR). 
These results show that QAMR can cover the questions related to SRL, but Large QA-SRL cannot cover many questions related to AMR. Therefore, 8757 QAMR is a good choice for QUASE to be further pre-trained on. Some Examples. He et al. (2015) show that QA pairs in QA-SRL often contain inferred relations, especially for why, when and where questions. These inferred relations are typically correct, but outside the scope of PropBank annotations (Kingsbury and Palmer, 2002). This indicates that QA-SRL contains some extra information about predicates. Some examples are shown in Table 8. We further verify that p-QUASEQAMR can correctly answer questions in the examples, which means that QUASE can encode some extra information that SRL cannot. Michael et al. (2017) show that QAMR can capture a variety of phenomena that are not modeled in traditional representations of predicate-argument structure, including instances of co-reference, implicit and inferred arguments, and implicit relations (for example, between nouns and their modifiers). Some examples of QAMR are shown in Table 8. Similar to SRL, we find that p-QUASE precedes traditional representations, such as AMR, by correctly answering questions in the examples and hence encoding extra information. C.2 Learning an SRL Parser from QA-SRL C.2.1 Learng an SRL Parser We consider learning a SRL parser from QA-SRL. It reduces the problem of learning AMR from QAMR to a simplified case. Challenges. There are three main challenges to learn an SRL parser from Large QA-SRL. Partial issues. Only 78% of the arguments have overlapped with answers; 47% of the arguments are exact match; 65% of the arguments have Intersection/Union ≥0.513. Irrelevant question-answer pairs. 89% of the answers are “covered” by SRL arguments; 54% of the answers are exact match with arguments; 73% of the answers have Intersection/Union ≥0.5. These statistics show that we also get some irrelevant signals: some of the answers are not really arguments (for the corresponding predicate). Different guidelines. Even if the arguments and the answer overlap, the overlap is only partial. A reasonable upperbound. We treat the answers that have overlapped with some arguments as our predicted arguments. If two predicted argu13These statistics of partial issues and irrelevant questionanswer pairs are based on the PTB set of QA-SRL. ments intersect each other, we will use the union of them as new predicted arguments. The results are shown in Table 9. We know from the table that this mapping algorithm achieves a span F1 of 56.61, which is a reasonable upper bound of our SRL system. Baselines. We consider three models to learn an SRL parser from Large QA-SRL dataset. Rules + EM. We first use rules to change QA pairs to labels of SRL. We keep the labels with high precision and then use an EM algorithm to do bootstrapping. A simple BiLSTM is used as our model for SRL. The results are shown in Table 9. We think that low token F1 is due to the low partial rate of tokens (37.97%) after initialization. PerArgument + CoDL + Multitask. We consider a simpler setting here. A small number of gold SRL annotations are provided as seeds. To alleviate the negative impact of low partial rate, we propose to train different BiLSTM models for different arguments (PerArgument) and do global inference to get structured predictions14. We first use seeds to train the PerArgument model and then use CoDL (Chang et al., 2007) to introduce constraints, such as SRL constraints, into bootstrapping. 
At the same time, we train a model to predict the argument type from question-answer pairs. These two tasks (argument type prediction and SRL) are learned together through soft parameter sharing. In this way, we make use of the information from QA pairs for SRL. We use 500 seeds to bootstrap. The span F1 of our method is 17.77 and the span F1 with only seeds is 13.65. More details are in Table 9. The performance of this model improves only a few points over the model trained only on seeds. Argument Detector + Argument Classifier. Given a small number of gold SRL annotations and a large number of QA pairs, there are two methods to learn an end-to-end SRL15 system. One is to assign argument types to answers in the context of the corresponding questions using rules, and learn an end-to-end SRL model based on the predicted SRL data. This is exactly our first model, Rules + EM. However, the poor precision of argument classification leads to unsatisfactory results. Another method is to learn from small seeds and bootstrap from a large number of QA pairs. This is our second model, PerArgument + CoDL + Multitask. However, bootstrapping cannot improve argument detection much, leading to mediocre results. We also notice that argument detection is hard to learn from a small amount of annotated data, whereas argument classification can be learned well from a small amount of high-quality annotated data. Fortunately, most answers in Large QA-SRL overlap with arguments. Furthermore, the mapping result for argument detection is about 56.61 span F1, good enough compared to the two baselines. We therefore propose to learn two components for SRL: one for argument detection and the other for argument classification. We use the span-based model in FitzGerald et al. (2018) for argument detection. The argument classifier is trained on predicates in the PTB set of QA-SRL. The results are shown in Table 9.
14 Given a predicate in the sentence with three arguments and one of them is annotated, the sentence is partial for a traditional SRL model but not partial for a PerArgument model.
15 Note that an end-to-end SRL system here is assumed to be given gold predicates. This is different from the generic definition.
C.2.2 Using SRL/AMR Parsers in Downstream Tasks There have already been some attempts to use semantics in downstream tasks. We discuss three types of application here. Traditionally, semantic parsers can be used to extract semantic abstractions, which can be applied to question answering (Khashabi et al., 2018). Second, dependency graphs, such as SDP, can be incorporated into neural networks; for example, Marcheggiani and Titov (2017) encode semantic information with graph convolutional networks (GCNs). Third, to use constituent-based traditional symbolic meaning representations, one can encode the related semantic information via multi-task learning (MTL); Strubell et al. (2018) mention such an application. The main difficulty of retrieving SRL/AMR from QA signals for downstream tasks is to learn a good parser for SRL/AMR from question-answer pairs.
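For reference, the span-mapping upper bound of Appendix C.2.1 (answers that overlap a gold argument are kept as predicted arguments, and intersecting predictions are merged into their union) can be sketched roughly as below; the (start, end) token-index span format is an assumption, and the IoU helper mirrors the fuzzy-matching criterion (Intersection/Union >= 0.5) used in Table 9.

```python
def overlaps(a, b):
    """Two (start, end) token spans overlap if they share any index (end exclusive)."""
    return a[0] < b[1] and b[0] < a[1]

def iou(a, b):
    """Intersection-over-union of two token spans (used for fuzzy matching >= 0.5)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def mapping_upper_bound(answer_spans, argument_spans):
    """Keep answers overlapping some gold argument, then merge intersecting predictions."""
    preds = [a for a in answer_spans
             if any(overlaps(a, g) for g in argument_spans)]
    merged = []
    for span in sorted(preds):
        if merged and overlaps(merged[-1], span):
            last = merged.pop()
            span = (min(last[0], span[0]), max(last[1], span[1]))
        merged.append(span)
    return merged
```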
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8759–8771 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8759 Towards Robustifying NLI Models Against Lexical Dataset Biases Xiang Zhou Mohit Bansal UNC Chapel Hill {xzh, mbansal}@cs.unc.edu Abstract While deep learning models are making fast progress on the task of Natural Language Inference, recent studies have also shown that these models achieve high accuracy by exploiting several dataset biases, and without deep understanding of the language semantics. Using contradiction-word bias and word-overlapping bias as our two bias examples, this paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases. First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method. Next, we also compare two ways of directly debiasing the model without knowing what the dataset biases are in advance. The first approach aims to remove the label bias at the embedding level. The second approach employs a bag-of-words submodel to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features by forcing orthogonality between these two submodels. We performed evaluations on new balanced datasets extracted from the original MNLI dataset as well as the NLI stress tests, and show that the orthogonality approach is better at debiasing the model while maintaining competitive overall accuracy.1 1 Introduction In this work, we focus on investigating and reducing biases in the task of Natural Language Inference (NLI), where the target of the model is to classify the relations between a pair of sentences into three categories: entailment, neutral and contradiction. With the release of large-scale standard datasets (Bowman et al., 2015; Williams et al., 2018), significant success has been made on 1Our code and data are available at: https://github. com/owenzx/LexicalDebias-ACL2020 this task, and recent state-of-the-art neural models have already reached competitive performance even compared to humans. However, a number of papers (Gururangan et al., 2018; Poliak et al., 2018; Nie et al., 2019; Naik et al., 2018) have shown that despite the high accuracy on these datasets, these models are far from mastering the required nature of natural language inference. Instead of deeply understanding the sentences in the correct semantic way, these models tend to exploit shortcuts or annotation artifacts in the dataset and actually overfit to these datasets to predict the label using simple patterns. However, most shortcuts are only valid within the datasets and fail to hold for general natural language. Hence, these models fail to generalize to other datasets for the same task (Talman and Chatzikyriakidis, 2019), perform badly on challenge analysis datasets (Glockner et al., 2018; McCoy et al., 2019; Wang et al., 2019b), and are fooled by adversarial attacks (Naik et al., 2018). One major cause of this problem is the existence of dataset biases. Since most NLP datasets are often collected and processed by crowdworkers, bias can be added to the data at every step of data collection. For example, when writing contradiction pairs, workers are likely to use negation words such as ‘not’, and when creating entailment pairs, workers are likely to keep most of the words in the premise sentence. 
This results in ‘annotation artifacts’ in the dataset (Gururangan et al., 2018). In reality, almost every dataset contains countless such diverse biases. In our paper, we focus on the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018) in English, and on two specific kinds of dataset bias: Contradiction Word Bias (CWB): If the hypothesis sentence contains some specific words (such as negation words) that are always used by the crowdworkers to generate contradiction pairs, then the sentence pair is very likely to be contradiction. 8760 Contradiction-Word Bias Word-Overlapping Bias MNLI Prem. A recorded menu will provide information on how to obtain these lists. This is especially true on Menocra, where cold winter winds limit the seasons length. Hypo. Recorded menus do not provide any information at this time. On Menocra, where cold winter winds limit the seasons length, this is especially true. Stress Prem. Understanding is the key. This is especially true on Menocra, where cold winter winds limit the seasons length. Hypo. Understanding is the most important and false is not true. On Menocra, where cold winter winds limit the seasons length, this is especially true and true is true. Table 1: The example samples for the CWB and WOB in the original dataset and the test samples in NLI stress tests (Naik et al., 2018) designed to reveal these biases (the stress test samples aim to fool the model to predict contradiction by adding negation word and to not predict entailment by reducing word overlapping). Word Overlapping Bias (WOB): If the premise sentence and the hypothesis sentence have a high word-overlap, then the sentence pair is very likely to be entailment. These two types of biases are selected as the focus of our experiments because: (1) there exist a significant number of samples in the dataset where they are a major problem; (2) they are conceptually easy to understand and relatively easier to evaluate. In our experiments, we not only used current existing evaluation datasets from Naik et al. (2018), but also extracted balanced evaluation datasets from the original data to evaluate these two biases. Although we only focus on these two kinds of dataset biases throughout our experiments, our methods are not specifically designed for these two biases and should be able to reduce other similar lexical biases simultaneously. Using these two example lexical biases, our paper discusses the following three questions: Q1. Is lexical bias a problem that can be solved by only balancing the dataset? Q2. Can the lexical bias problem be solved using existing ideas from the gender bias problem? Q3. What are some promising new modeling directions towards reducing lexical biases? As responses to these three questions, we conduct three lines of experiments. Firstly, we expand the discussion of Q1 by studying whether and how the bias can be reduced by debiasing the dataset. For this, we add new training data which does not follow the bias pattern. This new data can come from two sources, either from the original training set or via manually generated synthetic data. We show that both methods can slightly reduce the model’s bias. However, even after adding a large amount of additional data, the model still cannot be completely bias-free. Another critical problem with these data augmentation/enhancement based debiasing methods is that we need to know the specific behaviour of the biases before making some related changes to the dataset. 
However, in reality, models are always faced with new training datasets containing unknown and inseparable biases. Hence, the answer to Q1 is mostly negative for simple data-level approaches and we also need to focus on designing direct model-debiasing methods. Therefore, we turn our focus to directly debiasing the model (Q2 and Q3). The first method is to debias the model at the lower level, i.e., by directly debiasing the embeddings so that they do not show strong biases toward any specific label. This is one of the most prevalent methods for reducing gender biases, so through the examination of this idea, we aim to compare lexical bias problems to gender bias problems and highlight its uniqueness (hence answering Q2). Finally, we debias the model at the higher level, i.e., by designing another bag-of-words (BoW) sub-model to capture the biased representation, and then preventing the primary model from using the highly-biased lexical features by forcing orthogonality between the main model and the BoW model (via HEX projection (Wang et al., 2019a)). In our experiments, we show that debiasing the prediction part of the model at higher levels using BoW-orthogonality is more effective towards reducing lexical biases than debiasing the model’s low-level components (embeddings). This approach can significantly robustify the model while maintaining its overall performance, hence providing a response to Q3. We also present qualitative visualizations using LIMEanalysis for the important features before and after applying the BoW-orthogonality projection. 2 Related Work Problems with NLI Models and Datasets. Despite the seemingly impressive improvements in NLI tasks, recently a number of papers revealed different problems with these models. Gururangan et al. (2018) showed that annotation artifacts in the datasets are exploited by neural models to get high 8761 accuracy without understanding the sentence. Poliak et al. (2018) showed a similar phenomenon by showing models getting good performance but only taking one sentence as the input. Nie et al. (2019) showed that NLI models achieved high accuracy by word/phrase level matching instead of learning the compositionality. Naik et al. (2018) constructed bias-revealing datasets by modifying the development set of MNLI. In our evaluation, besides using the datasets from Naik et al. (2018), we also extract new datasets from the original MNLI dataset to maintain the consistency of input text distribution. Adversarial Removal Methods. Adversarial removal techniques are used to control the content of representations. They were first used to do unsupervised domain adaptation in Ganin and Lempitsky (2015). Xie et al. (2017) later generalized this approach to control specific information learned by the representation. Li et al. (2018) used a similar approach to learn privacy-preserving representations. However, Elazar and Goldberg (2018) showed that such adversarial approach fails to completely remove demographic information. Minervini and Riedel (2018) generate adversarial examples and regularize models based on first-order logic rules. Belinkov et al. (2019a,b) showed that adversarial removal methods can be effective for the hypothesisonly NLI bias. Our focus is on two different lexical biases and our results are complementary to theirs.2 Recently, Wang et al. (2019a) proposed HEX projection to force the orthogonality between the target model and a superficial model to improve domain generalization for image classification tasks. 
Here, to make the model less lexically biased, we apply the HEX projection with specially-designed NLP model architectures to regularize the representation in our models. Even more recently, Clark et al. (2019) and He et al. (2019) propose to robustify the task model with the help of an additional simple model, using ensembling to encourage cooperation of the two models. On the other hand, our main motivation to compare the advantages/limitations of dataset vs. embedding vs. classifier debiasing methods (against two different types of problematic lexical biases in NLI), and also our classifier debiasing method forces the task model to capture orthogonal information via HEX projection. Removing Gender Bias in NLP Models. There 2We have tried a similar approach via gradient reversal w.r.t. BoW sub-model in preliminary experiments and observed less effectiveness (than HEX-projection), which hints that different types of biases can lead to different behaviors. is also a line of work in NLP on analyzing and reducing gender bias in NLP models. Bolukbasi et al. (2016); Caliskan et al. (2017); Zhao et al. (2018a) studied the bias problem in word embeddings. Zhao et al. (2017) reduced gender bias in visual recognition using corpus-level constraints. Zhao et al. (2018b) discussed the gender bias problem in co-reference resolution. These problems are related to our work, but lexical biases are more complex. Multiple inseparable lexical dataset biases can influence one single example and the same word can have different lexical biases in different contexts. Later in our experiments, we show that these two problems behave differently and we present the need for different solutions. 3 Data-Level Debiasing Models naturally learn the biases from the dataset they are trained on. Therefore, as we mentioned in Q1 in Sec. 1, one may first wonder if lexical bias can be completely removed by fixing the source of the bias, i.e., datasets. While collecting largescale datasets (Bowman et al., 2015; Williams et al., 2018) already takes a lot of time and effort, collecting bias-free datasets is even more time-consuming and hard to control. Therefore, here we focus on getting additional data from currently-available resources. We conducted experiments using two resources of data. The first one is to do ‘data enhancement’ by repeating samples in the original training data. The second source is ‘data augmentation’ by manually creating synthetic data. We follow the construction of existing synthetic bias-revealing datasets to create new samples for the training set so that these targeted biases can be reduced. Data Enhancement by Repeating Training Data. For most kinds of biases, there still exists a small portion of samples that don’t follow the bias. Therefore, we reduce biases in datasets by repeating this portion of samples. For CWB, we select non-contradiction samples containing contradiction words (details see Sec. 5.1) in the hypothesis sentence but not in the premise sentence. For the WOB, we select non-entailment samples with highest word overlapping (measured by the Jaccard distance (Hamers et al., 1989) of words). Next, since the number of these unbiased samples may not be large enough, we repeatedly add those selected samples to make the training set more balanced. The results from adding 500 new samples to 50,000 new samples are shown in Sec. 6.1. 8762 Data Augmentation by Adding Synthetic Data. Researchers have been using synthetic rules to generate harder or perturbed samples to fool the model. 
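Before moving on, here is a rough sketch of the data-enhancement selection described above (CWB and WOB counter-examples); the contradiction-word list, the example format, and the fixed overlap threshold (standing in for "highest overlap") are illustrative assumptions, with the actual word list deferred to Sec. 5.1.

```python
def jaccard(premise_tokens, hypothesis_tokens):
    """Word-level Jaccard similarity between premise and hypothesis."""
    p, h = set(premise_tokens), set(hypothesis_tokens)
    return len(p & h) / len(p | h) if p | h else 0.0

def select_unbiased(examples, contradiction_words=("not", "no", "never"),
                    overlap_threshold=0.8):
    """Pick training examples that run against CWB and WOB.

    CWB: non-contradiction pairs whose hypothesis (but not premise) contains
    a contradiction word.  WOB: non-entailment pairs with high premise/
    hypothesis word overlap.  Selected examples are later repeated
    (oversampled) to balance the training set.
    """
    cwb, wob = [], []
    for ex in examples:   # ex: dict with 'premise', 'hypothesis', 'label'
        prem = ex["premise"].lower().split()
        hyp = ex["hypothesis"].lower().split()
        if ex["label"] != "contradiction" and \
                any(w in hyp and w not in prem for w in contradiction_words):
            cwb.append(ex)
        if ex["label"] != "entailment" and jaccard(prem, hyp) >= overlap_threshold:
            wob.append(ex)
    return cwb, wob
```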
Here, besides using these datasets only as the evaluation set, we also add these samples back to the training set, similar to the concept of adversarial training (Jia and Liang, 2017; Wang et al., 2019c; Niu and Bansal, 2018) where the adversarial examples are added back to the training set so that the resulting model will be more robust to similar adversarial attacks. In our experiments, we follow Naik et al. (2018) to append meaningless sentences at the end of the hypothesis sentence like in Table 1 to create additional new samples. The detailed construction of these samples can be seen in Appendix. By learning from these augmented datasets, the model should also be more robust to certain types of perturbations/biases of the data. In Sec. 6.1, our experiments showed that while this approach can lead to less biased models, it cannot make the model completely biasfree. Another disadvantage of these data enhancement/augmentation approaches is that we need to know all the specific kinds of biases in advance. For instance, in order to reduce the CWB for ‘not’, one needs carefully balance the samples containing ‘not’ in the training set. However, lots of other words will exhibit similar biases (e.g., the model tends to predict neutral when it sees ‘also’) and it is impractical to identify and debias the dataset w.r.t. every type of bias. Therefore, besides fixing the dataset, we should also focus on directly debiasing models against lexical biases. 4 Model-Level Debiasing Model-level debiasing methods have the advantage that there is no need to know the specific bias type in advance. Here we propose two different methods. The first method focuses on debiasing the content of word/sentence embeddings, where we aim to remove strong bias in the embeddings towards any of the labels so that there will be fewer shortcuts for models to exploit. The second method builds a separate shallow bag-of-words (BoW) sub-model and projects the primary model’s representation onto the subspace orthogonal to this BoW sub-model via the HEX projection algorithm (Wang et al., 2019a). Our proposed methods can be applied to a wide range of baseline model architectures. In addition, none of our methods is bias-type specific, so the results on CWB and WOB should generalize to other similar lexical biases. 4.1 Baselines We use sentence-embedding based models as our baseline since they are more controllable, and because the interaction of sentences only appears at the top classifier, which makes it easier to compare the different effects of different regularization.3 Our baseline structures can be divided into three stages. The first stage is to embed the words into word embeddings. The second stage is to get the representations for each sentence. We use three layers of BiLSTM to get the representation. We also added residual and skip-connections as Nie et al. (2019), and find that it leads to better performance. For the final stage, our baseline follows Mou et al. (2016); Conneau et al. (2017) to concatenate these two sentence embeddings, their difference, and their element-wise product as follows: m = [h1; h2; h1 −h2; h1 ⊙h2] (1) The resulting vector is passed through another multi-layer perceptron (MLP) to get the final classification result.4 Next, we will describe two different methods to directly debias the model. 4.2 Debiasing Embeddings Word embeddings are an important component in all neural NLP models. They contain the most basic semantics of words. 
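To ground the two debiasing methods that follow, here is a minimal sketch of the sentence-embedding baseline from Section 4.1 (three-layer BiLSTM encoder plus the Eqn. 1 matching layer). The residual/skip connections are omitted for brevity, the max-pooling over time is an assumption, and the 300-dimensional sizes follow the appendix settings.

```python
import torch
import torch.nn as nn

class SentencePairBaseline(nn.Module):
    """Sketch of the baseline: BiLSTM sentence encoder + Eqn. (1) matching layer + MLP.
    Residual/skip connections and the pooling choice are simplified assumptions."""
    def __init__(self, vocab_size, emb_dim=300, hidden=300, num_labels=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        d = 2 * hidden                          # BiLSTM output size
        self.classifier = nn.Sequential(        # MLP over the Eqn. (1) features
            nn.Linear(4 * d, d), nn.ReLU(), nn.Linear(d, num_labels))

    def encode(self, tokens):                   # tokens: (batch, seq_len) word ids
        states, _ = self.encoder(self.emb(tokens))
        return states.max(dim=1).values         # pool over time (an assumption)

    def forward(self, premise, hypothesis):
        h1, h2 = self.encode(premise), self.encode(hypothesis)
        m = torch.cat([h1, h2, h1 - h2, h1 * h2], dim=-1)   # Eqn. (1)
        return self.classifier(m)               # logits over {ent, neu, con}
```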
Recent studies have shown that removing gender bias from word embeddings can lead to less biased models (Zhao et al., 2018a). In our work, as we discussed in Q2 in Sec. 1, we explore whether similar ideas can be applied to reducing lexical dataset biases. For a large number of lexical dataset biases (e.g., CWB), the model tends to predict the label based only on the existence of certain words. Hence, one natural conjecture is that there is a strong bias towards some labels in the word embeddings. Since the label bias is not an attribute of the word itself but is introduced by the model above it, we differ from Zhao et al. (2018a) and remove such label bias from the embeddings at training time using the gradient-reversal trick (Ganin and Lempitsky, 2015; Xie et al., 2017). The architecture of this approach is illustrated in Figure 1. We denote the embeddings of the two input sequences for our model as w(a) = {w(a)_1, w(a)_2, ..., w(a)_la} and w(b) = {w(b)_1, w(b)_2, ..., w(b)_lb} respectively, where a denotes the premise sentence and b denotes the hypothesis sentence. In order to apply the reverse-gradient trick (Ganin and Lempitsky, 2015) to the embeddings, we add a small embedding-debias network (the left blue box in Figure 1) for each embedding wi in our model. The embedding-debias network is a simple MLP. Since the other parts of the sentence context may also contribute to the bias, the debiasing network takes both w(a)_i and the sentence embedding of b (and vice versa for debiasing w(b)) as the input and predicts the label y. Therefore, the total loss of this method is: L(θc, θe, θed) = Lc(θc, θe) − (λ / (la + lb)) Led(θe, θed). Here, λ is the multitask coefficient, and la and lb are the lengths of the two input sentences. Lc is the standard classification loss of the main model, and Led is the sum of the classification losses of the debias network. θe are the parameters of the embeddings and sentence encoder of the main model, θc are the parameters of the top classifier of the main model, and θed are the parameters of the embedding-debias network. In order to find the optimal parameters, we follow Ganin and Lempitsky (2015) and reverse the gradient for θe w.r.t. Led.

Footnote 3: Another popular choice of NLI model architecture is the cross-attention based models (Chen et al., 2017; Devlin et al., 2019). In our current work, we choose to only apply our BoW Sub-Model approach on sentence-embedding based models, since our approach directly regularizes the representation vector learned by the main model and is hence most suitable for models with a single vector containing rich information. Cross-attention based models, on the other hand, do most of the inference through cross-attention and do not learn such a single vector, making it hard to regularize them effectively in a similar way. Investigation of similar HEX regularization methods for cross-attention models is future work. Footnote 4: Our baseline models achieve close to the best sentence-embedding based/cross-attention based models reported on the NLI stress tests (Naik et al., 2018) and are hence good starting points for this bias/debias analysis.

Figure 1: The overall architecture for reducing bias using an embedding debiasing network. The red dashed line denotes gradient reversal.
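The embedding-debias objective above can be sketched with a gradient-reversal layer as follows. This is a sketch rather than the authors' code: the debias network's hidden size and the exact aggregation of the per-word losses are assumptions, and the gradient-reversal layer is one standard way to realize "reverse the gradient for θe w.r.t. Led".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class EmbeddingDebiasNetwork(nn.Module):
    """MLP that predicts the label from each word embedding of one sentence plus the
    other sentence's embedding (hidden size is an assumption)."""
    def __init__(self, emb_dim=300, sent_dim=600, hidden=300, num_labels=3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(emb_dim + sent_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, num_labels))

    def forward(self, word_embs, other_sent, labels, lam=1.0):
        # word_embs: (batch, len, emb_dim); other_sent: (batch, sent_dim); labels: (batch,)
        rev = grad_reverse(word_embs, lam)        # embeddings/encoder receive a reversed gradient
        sent_ctx = other_sent.unsqueeze(1).expand(-1, word_embs.size(1), -1)
        logits = self.mlp(torch.cat([rev, sent_ctx], dim=-1))
        per_word_labels = labels.unsqueeze(1).expand(-1, word_embs.size(1)).reshape(-1)
        # Sum of the per-word classification losses (L_ed), averaged over the batch
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               per_word_labels, reduction="sum") / word_embs.size(0)

# Per batch, one realization of the objective in the text:
#   total_loss = L_c + L_ed / (l_a + l_b)
# The debias network is trained to predict the label, while the reversed gradient pushes
# the embedding/encoder parameters in the opposite direction, i.e. to remove that signal.
```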
Besides this approach, we also tried two variants that change the input of the debias network. The first one is emb basic, where we only take the single embedding wi as the input. The second one only takes one sentence embedding as the input and is called ind sent. The results of our embedding-debias methods are shown in Sec. 6.2.

4.3 Bag-of-Words Sub-Model Orthogonality

While debiasing the embeddings can robustify the models against certain biases, it may not be effective for all lexical biases. Some lexical biases may exist at the deeper compositionality level (e.g., WOB), whereas debiasing the embeddings regularizes only the most basic semantic units rather than how these units are composed by the model. In addition, removing the label biases may also hurt the useful semantics contained in the embeddings, leading to significant performance drops. A better approach is to leave the embeddings intact, but regularize how the classifier uses these features. We observe that models exploiting dataset biases in the training set (e.g., CWB and WOB) tend to use very simple and superficial features to make the prediction. These models tend to ignore the order of the words, fail to learn compositionality, and do not have a deep semantic understanding of the sentences. Therefore, we aim to robustify the model by letting it rely less on such simple and superficial features. With this motivation, we train a bag-of-words (BoW) model that only captures superficial patterns of the words without any word-order/compositionality information. Then we use HEX projection (Wang et al., 2019a) to project the representation of the original primary model onto the space orthogonal to the representation of the BoW model.

BoW Model. For the BoW sub-model, we first get the embeddings of all the words. Then, in order to capture more co-occurrence information of the words, we add a multi-head self-attention layer like the one used in Vaswani et al. (2017) (but without position embeddings), because we empirically find that this improves the performance. Finally, we use mean-pooling over all the vectors to get the BoW sentence embedding: hbow = (1/l) Σ self_att(w). To get a single representation for the sentence pair, we use the same concatenation layer as in Eqn 1 and pass the vector through an additional MLP to get the representation ubow.

Figure 2: The overall architecture for debiasing the model via orthogonal projection w.r.t. the BoW sub-model.

HEX Projection. Next, in order to encourage the primary model to learn better features that are not learnable by the BoW model, we use the HEX projection layer from Wang et al. (2019a), which was originally proposed to improve the domain generalization performance of computer vision models; here we combine HEX with the BoW sub-model to robustify NLI models. With the addition of the BoW sub-model, we obtain two representations of the sentence pair, umain and ubow. In order to let the final prediction use high-level features that are to some extent independent of the shallow and highly biased BoW features, the HEX projection layer projects these two representations into orthogonal spaces to achieve this independence. The inputs of the HEX projection layer are the BoW model output ubow and the corresponding output of the main model umain.
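A sketch of the BoW sub-model just described, together with one way the orthogonal projection it feeds into can be computed (the text formalizes the HEX projection in Eqns. 2-3 next). The number of attention heads, the hidden sizes, and the small ridge term added for numerical stability are assumptions.

```python
import torch
import torch.nn as nn

class BoWSubModel(nn.Module):
    """Bag-of-words sub-model: embeddings + multi-head self-attention (no position
    embeddings) + mean pooling, followed by an Eqn.-1-style combiner.  Sizes are assumptions."""
    def __init__(self, emb: nn.Embedding, dim=300, heads=4):
        super().__init__()
        self.emb = emb                                   # may share the main model's embeddings
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU())

    def encode(self, tokens):
        x = self.emb(tokens)                             # (batch, len, dim); order-insensitive
        mixed, _ = self.attn(x, x, x)                    # captures word co-occurrence only
        return mixed.mean(dim=1)                         # mean pooling -> h_bow

    def forward(self, premise, hypothesis):
        h1, h2 = self.encode(premise), self.encode(hypothesis)
        return self.mlp(torch.cat([h1, h2, h1 - h2, h1 * h2], dim=-1))   # u_bow

def project_out(f_a, f_g, eps=1e-6):
    """F_L = (I - F_G (F_G^T F_G)^(-1) F_G^T) F_A: keep only the part of the joint
    features that is orthogonal to what the BoW-only path explains."""
    gram = f_g.t() @ f_g + eps * torch.eye(f_g.size(1), device=f_g.device)
    return f_a - f_g @ torch.linalg.solve(gram, f_g.t() @ f_a)
```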
We use f to denote the final classification network parameterized by ξ. Next, by zero-masking one of the two inputs, the HEX projection layer can receive three different inputs and calculate three different vector outputs: FA = f([ubow; umain], ξ) FP = f([0; umain], ξ) FG = f([ubow; 0], ξ) (2) To ensure that the overall model learns different features than the BoW model, we project the joint output FA to the orthogonal space of FG to get FL: FL = (I −FG(FT GFG)−1FT G)FA (3) The output learns good representations for both sentences but lies in the orthogonal space of the output got from BoW sub-model’s input, thus not overemphasizing on word-pattern information. This vector goes through the softmax layer to calculate the probabilities for each label. Finally, we follow the original paper (Wang et al., 2019a) to minimize a weighted combination of the loss for FL and FG, and use FP for testing. In Sec. 6.2, we show that by adding the BoW sub-model orthogonality, the model can be more robust against CWB and WOB while maintaining competitive overall accuracy. Hence, as a response to Q3 in Sec. 1, our results indicate that debiasing models at the upper level with regularization on the compositionality is a more promising direction against lexical biases. 5 Experimental Setup 5.1 Datasets We evaluate our models using both off-the-shelf testing datasets as well as new datasets extracted from the original MNLI dataset. We use the word overlap and the negation sets from the NLI stress tests dataset (Naik et al., 2018). These two evaluation sets from the NLI stress tests modified the original MNLI development set by appending some meaningless phrases (examples shown in Table 1). If the model has certain biases, then the model will be fooled by such perturbations and make the wrong classification. In addition, we also extract samples from the original MNLI development dataset to get bias testing sets with exactly the same data distribution. We first select samples that follow the bias pattern from the matched development set. For CWB, we use ‘not’, ‘no’, ‘any’, ‘never’ ,and ‘anything’ as five example contradiction words. To make this testing set balanced for labels (contradiction vs non-contradiction for CWB and entailment vs nonentailment for WOB), we move some samples with the same pattern from the training set to this testing set.5 Later we refer to this dataset as Bal. Since the negation dataset from NLI stress tests dataset only considers the word ‘not’, it fails to evaluate the bias for other contradiction words. We augment this dataset by creating new samples for other contradiction words. We denote the original NLI stress tests dataset as Stress and this augmented one as Stress*. Please refer to the Appendix for a detailed description of how we chose the example contradiction words and created our test sets. Throughout our experiments, we select the best 5While this makes our model’s performance incomparable to other literature, we train all the models in our experiments in this same setting to ensure the fairness of our analysis comparisons. All our experiments use the same val/test set. 8765 MNLI Bal Stress* Train/Test Acc Acc Acc hr Acc Acc hr baseline 69.8 70.5 45.7 50.9 38.7 + origin 69.7/69.2/69.1 71.2/71.1/70.6 46.3/49.0/47.9 49.7/49.2/50.7 40.2/40.2/42.1 + synthetic 69.8/70.0/69.7 71.0/70.7/71.2 45.7/45.9/47.1 67.2/68.5/68.4 65.8/68.3/68.4 Table 2: The performance for reducing the CWB via data enhancement/augmentation. 
The numbers each representing the result after adding 500/20,000/50,000 additional data. model during training on the MNLI mismatched development dataset and we tune all the hyperparameters on the NLI Stress mismatch datasets. All the other datasets are only used as test sets and we only report results on these test sets. We use the MNLI matched development dataset to evaluate the overall performance of the model. 5.2 Metrics Overall accuracy is widely used as the only metric for NLI. However, models can get very high accuracy by exploiting the bias patterns. Hence, in order to test how the model performs when it cannot exploit the bias pattern, we focus on model’s accuracy on the harder parts of the data (Acc hr) where the bias pattern is wrong6. For the balanced testing set, this subset means samples with ‘noncontradiction’ label for CWB case and samples with ‘non-entailment’ label for the WOB case. For the NLI stress tests dataset7, this subset means the samples with ‘non-contradiction’ label for the CWB set and the samples with ‘entailment’ label for the WOB set. Ideally, for an unbiased model, it should both have competitive overall performance and perform almost equally well on these harder parts of the data. Hence, we focus on maintaining the accuracy on the whole dataset and improving the Acc hr metric. All training details and hyperparameter settings are presented in Appendix. 6 Results 6.1 Data-Level Debiasing Results We first show our baseline’s performance on the CWB biases in the first row of Table 2. Since we ob6One may wonder if biases can also be evaluated simply using generalization performance. However, good generalization to current datasets (e.g., SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018), SICK (Marelli et al., 2014), etc.) is different from being bias-free. As shown in Gururangan et al. (2018), similar annotation artifacts can appear in multiple different datasets. So by overfitting to common lexical biases across multiple datasets, biased models might still reach higher generalization accuracy. 7Another metric on NLI-stress can be checking the portion of model predictions on the hard data that is correct both before and after adding the extra words. We empirically verified that this metric shows the same result trends as Acc hard. serve similar performance for CWB and WOB, we leave the results for WOB in Appendix. On every dataset, there’s a significant gap between Acc and Acc hr, showing the baseline has both strong CWB bias and strong WOB bias. For the data augmentation/enhancement experiments, we report results after adding 500/20,000/50,000 additional samples. We demonstrate the effect of adding a small portion of data for the 500 case and the limitation of this method using the 20,000 and 50,000 cases.8 The results are again shown in Table 2. We use “+origin” to denote the results from data enhancement using the original dataset and use “+synthetic” to denote the results from data augmentation by generating new synthetic data similar to NLI stress tests.9 With a small number of additional data (500), wherever the data comes from, the performance on the balanced testing set remains very close. However, the performance on the NLI stress tests improves significantly when it sees 500 synthetic new samples generated in the same way. The gap between the overall accuracy and the Acc hr on NLI stress tests is reduced to less than 5%, which means that the models can easily learn how the synthetic data is generated through only 500 samples. 
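For reference, the Acc and Acc hr columns in these tables can be computed from model predictions in a few lines. The label strings and the "hard" criterion shown (the balanced-set convention, where the hard subset is the non-contradiction portion for CWB) are assumptions about the data format.

```python
def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / max(len(golds), 1)

def acc_and_acc_hard(preds, golds, hard_mask):
    """Overall accuracy plus accuracy on the 'hard' subset where the bias pattern is wrong.
    For the CWB balanced set, hard_mask marks non-contradiction examples; for WOB,
    non-entailment examples (the stress sets use the reverse convention for WOB)."""
    hard_preds = [p for p, h in zip(preds, hard_mask) if h]
    hard_golds = [g for g, h in zip(golds, hard_mask) if h]
    return accuracy(preds, golds), accuracy(hard_preds, hard_golds)

# Tiny illustrative example for the CWB balanced set, where the bias pattern
# always predicts "contradiction":
golds = ["contradiction", "neutral", "entailment", "neutral"]
preds = ["contradiction", "contradiction", "entailment", "neutral"]
hard = [g != "contradiction" for g in golds]
print(acc_and_acc_hard(preds, golds, hard))   # (0.75, 0.666...)
```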
Next, we compare the performance after adding 20,000 and 50,000 additional data to check the limitation of the improvement from adding additional data. With this amount of additional original data, the Acc hr on the balanced dataset improves and the model is less biased. However, adding 20,000/50,000 synthetic samples doesn’t always lead to the improvement on the balanced dataset. This reflects that the generation rules of NLI stress tests dataset are too simple so that training on these adversarial samples is not a good way to robustify the model. However, more natural and diverse synthetic data may be helpful to robustify the models. There is still a significant gap between overall accuracy and Acc hr even after 50,000 sam8Adding additional data (e.g., 50,000) can change the label distribution, but we have experimented with different numbers of additional data between 500 and 50,000 and the reported trend always holds. 9We run all the experiments 5 times and report the mean. 8766 CWB WOB MNLI Bal Stress* Bal Stress Model Acc Acc Acc hr Acc Acc hr Acc Acc hr Acc Acc hr baseline 70.0 70.6 45.3 49.9 37.0 75.4 58.5 59.8 40.2 emb basic 67.8 70.3 49.5 50.2 41.1 73.9 56.2 56.9 35.6 emb cond 67.9 68.5 46.4 48.8 38.3 74.5 54.5 56.7 39.6 sgl sent 67.2 68.9 47.3 48.8 37.4 73.8 55.5 54.1 29.1 Table 3: The performance for debiasing the embeddings on CWB and WOB. CWB WOB MNLI Bal Stress* Bal Stress Model Acc Acc Acc hr Acc Acc hr Acc Acc hr Acc Acc hr baseline 69.8±0.25 70.5±0.75 45.7±2.28 50.9±1.50 38.7±3.94 76.3±0.59 59.4±0.82 58.2±3.04 37.6±9.63 + BoW 68.4±0.25 72.6±0.84 56.3±1.69 54.9±0.66 48.0±1.44 75.1±0.90 69.3±1.51 60.8±1.05 46.6±4.64 #layers=2 69.8±0.34 69.9±0.93 44.8±1.70 51.4±0.94 40.0±2.05 76.6±0.85 58.6±0.84 58.7±1.56 40.5±5.49 + BoW 68.5±0.47 71.2±1.05 54.1±1.65 56.3±1.26 49.9±1.24 74.2±1.41 68.1±0.76 62.2±1.41 49.6±4.34 Table 4: The performance for BoW sub-model orthogonality on CWB and WOB. The means and standard deviation here are averaged over five random runs. ples. Also, the effect of adding the last 30,000 data is very small, indicating a clear limitation of this method. Thus, doing simple data augmentation/enhancement only using the currently available resources is insufficient to fully debias the model. In addition, one has to carefully select which data to add for each different bias, so we need to also design inherently more robust models. 6.2 Model-Level Debiasing Results Debiasing Embeddings (Lower Level Model Debiasing). We compared three variants of debiasing embeddings in Table 3. Empirically, we observe that training the whole model with the debias network from a pre-trained baseline can significantly improve the stability of results, so we perform our experiments from one baseline with average performance for fair comparisons. The multi-task coefficient λ controls the trade-off between high accuracy and little bias. Here we report the results with λ = 1, which we find is one good balance point. From both tables, none of the methods achieved a significant improvement on the Acc hr metrics. The best results come from the emb basic approach, but even this method only achieves small improvement on the Acc hr metric for CWB but does worse on WOB and has a comparable loss on overall Acc. We do not observe any significantly larger improvements with smaller or larger λ. We also tried other techniques to further stabilize the training (e.g., freezing the main model when training, using different optimization algorithms), but we observe no significant improvement. 
Therefore, while removing the bias from the embeddings is effective for reducing gender bias (e.g., remove the male bias from the word ‘doctor’ to make the embedding gender-neutral), it does not help in debiasing certain lexical biases. Directly removing information from the embedding only slightly debiases the model but also hurts the overall performance. The difference in these results highlights the difference between gender bias and lexical bias problems. As shown in these experiments, lexical biases cannot be effectively reduced at the embedding level. We argue that this is because a majority of lexical biases appear at the compositionality level. For example, for WOB, a biased model will predict “entailment” entirely relying on the overlapping word embeddings on both sides. Here, even when we make the embeddings completely unbiased, as long as the upper model learns to directly compare the overlapping of embeddings on both sides, there will still exist a strong WOB bias in the model. Hence, in order to robustify models towards lexical bias, we need to develop methods that regularize the upper-interaction part of the model. BoW Sub-Model Orthogonality (Higher Level Model Debiasing). Results for adding the BoW sub-model are shown in Table 4. Here, we also show that the improvement trend holds regardless of minor hyper-parameter changes in the model (number of layers). On both CWB and WOB, the model shows a large improvement on Acc hr for both Bal and stress-test datasets. We achieve close or higher Acc on all the bias testing sets and the overall Acc is only 1.4%/1.3% lower than the baseline, showing that adding a BoW sub-model orthogonality will only slightly hurt the model. In 8767 conclusion, this approach significantly robustifies the model against CWB and WOB while maintaining competitive overall performance. In comparison to the debiasing embeddings results, we can see that instead of regularizing the content in the word embeddings, regularizing the model’s compositionality at the upper interaction level is a more promising direction for debiasing lexical biases. We have also tried combining this method with the data-level debiasing approach above but get no further improvement.10 6.3 Qualitative Feature Analysis We use LIME (Ribeiro et al., 2016) to qualitatively visualize how orthogonal projection w.r.t. BoW sub-model changes the features used by the model. We selected one example from the CWB Bal dataset to see how applying the BoW model with HEX corrects previous mistakes. From Fig. 3, we can see that before applying the BoW submodel (the upper part of the figure), the model predicts the contradiction label almost solely based on the existence of the word “no” in the hypothesis. However, after applying our BoW sub-model with HEX projection, our model can give higher importance to other useful features (e.g., the match of the two “bad” tokens, and the match of important pasttense temporal words such as “passed” and “longer” in the premise-hypothesis pair) despite the fact that “no” still has high influence towards the contradiction label. Another example from the CWB Stress* dataset can be seen in Appendix. 7 Conclusion We study the problem of lexical dataset biases using WOB and CWB as two examples. We first showed that lexical dataset biases cannot be solved by simple dataset changes and motivate the importance of directly designing model-level changes to solve this problem. 
For model-level changes, we first show the ineffectiveness of embedding-debiasing approaches, thus highlighting the uniqueness of lexical bias against gender bias problems. Next, 10We also tried some initial simple ensembles of 2 different initializations of BoW sub-models, so that we can potentially regularize against a more diverse set of lexicon biases. When training, the main model is paired with each BoW sub-models to go through each HEX layers and then the output logits are averaged to get the final logits. This ensembling results also outperform the baseline significantly and is higher than the single BoW Sub-Model in WOB Stress, but equal or worse in the other cases. We leave the exploration of different/better ways of ensembling to future work. Figure 3: LIME analysis on the CWB Bal dataset showing the 6 most important features used by the model. we robustify the model by forcing orthogonality between a BoW sub-model and the main model and demonstrate its effectiveness through several experiments. Since none of our methods is biastype specific, we believe these results can also be generalized to other similar lexical biases. Finally, we would like to point out that our methods and results here do not mean to belittle the importance of collecting clean/unbiased data. We strongly believe in the importance of unbiased data for model design and evaluation. However, some biases are inherent and inevitable in the natural distribution of the task (e.g., for NLI, it is natural that sentences with high overlapping are most likely entailment pairs). Therefore, our work stresses that it is also very important to encourage the development of models that are unlikely to exploit these inevitable biases/shortcuts in the dataset. Neither model-level debiasing nor data-level debiasing alone is the conclusive solution for this problem. Joint efforts are needed for promoting unbiased models that learn true semantics; and we hope our paper can encourage more work towards this important direction. Acknowledgments We thank Snigdha Chaturvedi, Shashank Srivastava, and the reviewers for their helpful comments. This work was supported by DARPA YFA17D17AP00022, NSF-CAREER Award 1846185, ONR Grant N00014-18-1-2871. The views in this article are the authors’, not of the funding agency. 8768 References Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Rush. 2019a. On adversarial removal of hypothesis-only bias in natural language inference. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 256–262, Minneapolis, Minnesota. Association for Computational Linguistics. Yonatan Belinkov, Adam Poliak, Stuart M Shieber, Benjamin Van Durme, and Alexander M Rush. 2019b. Dont take the premise for granted: Mitigating artifacts in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 877–891. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In NeurIPS, pages 4349–4357. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP, pages 632–642. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. 
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In ACL, volume 1, pages 1657–1668. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Dont take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4060– 4073. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP, pages 670–680. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In EMNLP, pages 11–21. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In ICML, pages 1180–1189. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In ACL, volume 2, pages 650–655. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In NAACL-HLT, volume 2, pages 107–112. Lieve Hamers, Yves Hemeryck, Guido Herweyers, Marc Janssen, Hans Keters, Ronald Rousseau, and Andr Vanhoutte. 1989. Similarity measures in scientometric research: The jaccard index versus salton’s cosine formula. Information Processing & Management, 25(3):315 – 318. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 132–142. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP, pages 2021–2031. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In ACL, volume 2, pages 25–30. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In SemEval, pages 1–8. R Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007. Pasquale Minervini and Sebastian Riedel. 2018. Adversarially regularising neural nli models to integrate logical background knowledge. CoNLL 2018, page 65. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. 
In ACL, volume 2, pages 130–136. 8769 Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In COLING, pages 2340–2353. Yixin Nie, Yicheng Wang, and Mohit Bansal. 2019. Analyzing compositionality-sensitivity of nli models. In AAAI, volume 33, pages 6867–6874. Tong Niu and Mohit Bansal. 2018. Adversarial oversensitivity and over-stability strategies for dialogue models. In CoNLL, pages 486–496. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In *SEM, pages 180–191. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. ” why should i trust you?” explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. Aarne Talman and Stergios Chatzikyriakidis. 2019. Testing the generalization power of neural network models across nli benchmarks. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 85–94. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, pages 5998–6008. Haohan Wang, Zexue He, Zachary L. Lipton, and Eric P. Xing. 2019a. Learning robust representations by projecting superficial statistics out. In ICLR. Haohan Wang, Da Sun, and Eric P Xing. 2019b. What if we simply swap the two text fragments? a straightforward yet effective way to test the robustness of methods to confounding signals in nature language inference tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7136–7143. Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez. 2019c. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In Proceedings of the IEEE International Conference on Computer Vision, pages 5310–5319. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACLHLT, volume 1, pages 1112–1122. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In NeurIPS, pages 585–596. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In EMNLP. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In NAACL-HLT, volume 2. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853. Appendix A Training Details For all our models except BERT (Devlin et al., 2019), we use pre-trained 300-dimension GloVe (Pennington et al., 2014) word embeddings to initialize the embedding layers. 
The hidden dimension of LSTM (Hochreiter and Schmidhuber, 1997) is 300. We use Adam (Kingma and Ba, 2015) as the optimizer and the initial learning rate is set to 0.0004. We apply dropout (Srivastava et al., 2014) with a rate of 0.4 to regularize our model. For the model with HEX projection, we apply all the tricks in the original paper (Wang et al., 2019a) (columnwise normalize the input features in every batch, fine-tune from a trained model with the bottom layer fixed) to stabilize the training. In our experiments, we set the multi-task coefficient between loss for FL and FG to 1.0 and 0.3. B Detailed Description of the Extraction of Balanced Testing Sets B.1 Extraction of the Contradiction-Word-Bias Testing Set For evaluating the contradiction-word-bias (CWB), we look for words that both have a strong bias towards the ‘contradiction’ label and have a significant number of samples in the training set. We first select ‘no’, ‘any’, ‘never’ and ‘anything’, which are four most frequent words with over 50% of 8770 contradiction-word appended phrase no and false is no true any and any true is true never and false is never true anything and anything true is true not and false is not true Table 5: The phrases to append at the end of the hypothesis sentence for each contradiction word. samples in the training data containing these words labeled as ’contradiction’. Since most of the analysis papers also study the bias of ‘not’, here we also include the ‘not’ as the contradiction word. However, as in the training set of MNLI (Williams et al., 2018), only 45.3% of the samples are ‘contradiction’, so the bias of ‘not’ is actually not as strong as the other words. Next, in order to create a balanced dataset for these selected contradiction-words, we first select the samples containing these words from the matched development set. In order to let the samples be more difficult and better test the model’s bias. We only select the samples where the hypothesis samples contain the contradiction word, while there’s no negation word in the premise sentence (so that the contradiction word is generated by the annotator instead of copying from the premise sentence). Since the bias of ‘not’ is not uniformly strong, here we only select samples that both contain ‘not’ and have small Jaccard distance (Hamers et al., 1989) between the sentence pairs, which we empirically find that the bias is stronger. After selecting these samples, we can extract a testing set with most of the samples labeled as contradiction, but the label distribution is severely unbalanced. In order to balance the label distribution, we randomly sample some examples from the training set using the same criterion (containing contradiction word in the hypothesis sentence but no negation word in the premise sentence) and put them in the testing set. Our resulting dataset contains 1100 samples with 550 are labeled as contradiction and the other 550 are non-contradiction labels. Since the domain of the training set is different from the domain of the mismatched validation set, we only extract a balanced test set based on the matched validation set. B.2 Extraction of the Word-Overlapping-Bias Testing set We first sort the samples in the MNLI matched validation set using Jaccard distance (Hamers et al., 1989) and choose the samples with the smallest distance (highest overlapping). 
In order to match the size of the contradiction-word-bias testing set, we select the top 550 samples with entailment label and the top 550 samples with non-entailment label to get a dataset with high word overlapping but balanced label distribution. C Construction of Synthetic Data We follow the construction rule of the NLI stress tests (Naik et al., 2018) to generate synthetic data for the training set. We appended meaningless sentences at the end of the hypothesis sentence and keep the original label unchanged. For CWB, we focus on 5 different contradiction words: ‘no’, ‘any’, ‘never’, ‘anything’ ,and ‘not’. Therefore, for each sentence pair, we create five different new pairs by appending five different phrases for evaluating the bias of each contradiction word. The appended phrases are listed in Table 5. For WOB, we also follow (Naik et al., 2018) to append ‘and true is true’ to every hypothesis sentence to create one new pair for each sample. D Data Augmentation/Enhancement Results for BERT The data augmentation/enhancement results for BERT-base (Devlin et al., 2019) is shown in Table 7 and Table 8. 11 As is shown in Table 7, BERT shows significant performance gap between Acc and Acc hr on both CWB datasets, indicating BERT’s clear bias on CWB. As for WOB, the gap between Acc and Acc hr for Bal is much smaller, however, the performance on Stress is very poor. Therefore, we assume that even though BERT achieves a high score on the WOB Bal dataset, BERT is just overfitting the dataset in another different way, i.e., there is still significant WOB bias in BERT. In conclusion, in our experiment, BERT still shows significant CWB and WOB. Similar to our main data augmentation/enhancement results, here we find that after adding 500 additional synthetic samples, BERT can quickly learn their pattern. But still, adding more synthetic data doesn’t help improve the performance on the Bal dataset. For BERT, we also cannot see any significant improvement when adding additional original samples. In all the + origin experiments, BERT performs similarly. Again, this shows the limitation of the data 11We run all the experiments 5 times and report the mean. 8771 MNLI Bal Stress Train/Test Acc Acc Acc hr Acc Acc hr baseline 69.8 76.3 59.4 58.2 37.6 + origin 70.1/70.0/69.4 77.1/77.5/76.4 61.5/64.1/64.7 56.0/58.0/55.4 31.0/37.3/29.5 + synthetic 70.0/69.8/69.6 77.2/75.7/75.7 61.3/58.8/58.6 67.7/68.8/68.7 66.2/72.9/72.0 Table 6: The performance of LSTM baseline model for reducing the WOB via data enhancement/augmentation. The numbers each representing the result after adding 500/20,000/50,000 additional data. MNLI Bal Stress* Train/Test Acc Acc Acc hr Acc Acc hr baseline 82.3 84.2 71.2 55.8 41.9 + origin 82.3/82.6/82.7 83.8/83.7/83.6 70.7/70.6/70.2 55.7/55.3/55.2 42.4/41.7/43.2 + synthetic 82.6/82.4/82.4 84.3/84.1/84.3 71.9/71.2/71.5 83.3/84.0/83.9 81.9/83.2/83.0 Table 7: The performance of BERT for reducing the CWB via data enhancement/augmentation. The numbers each representing the result after adding 500/20,000/50,000 additional data. MNLI Bal Stress Train/Test Acc Acc Acc hr Acc Acc hr baseline 82.3 90.5 87.0 58.1 6.49 + origin 82.7/82.4/82.4 91.3/90.5/90.8 87.9/87.2/87.5 58.1/58.2/58.1 7.43/7.61/5.88 + synthetic 82.4/82.5/82.5 90.7/90.6/91.1 87.0/86.7/87.5 83.4/84.0/83.9 82.4/83.8/83.8 Table 8: The performance of BERT for reducing the WOB via data enhancement/augmentation. The numbers each representing the result after adding 500/20,000/50,000 additional data. 
augmentation/enhancement approach, especially when starting from a stronger baseline such as BERT.

Figure 4: LIME analysis on the CWB Stress* dataset showing the 6 most important features used by the model.

E More Qualitative Feature Analysis In Fig. 4, we can see how the feature importances change before/after adding the BoW sub-model for a CWB Stress* example (we chose a borderline example, where the shift of the prediction distribution toward the correct label is not extreme). Before adding the BoW sub-model orthogonality projection, the extra misleading words (both "and" and "not") confuse the model into predicting the wrong contradiction label, whereas after adding the BoW sub-model, our model can assign higher weights to useful features such as "have", "before", etc.
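For completeness, here is a short sketch of the Appendix C construction: each Table 5 phrase is appended to the hypothesis while the label is kept unchanged. The field names and the punctuation handling are illustrative assumptions.

```python
# Sketch of the Appendix C synthetic-sample construction (phrases from Table 5).
CWB_PHRASES = {
    "no": "and false is no true",
    "any": "and any true is true",
    "never": "and false is never true",
    "anything": "and anything true is true",
    "not": "and false is not true",
}
WOB_PHRASE = "and true is true"

def augment_cwb(example):
    """One new pair per contradiction word; the original label is kept unchanged."""
    return [{"premise": example["premise"],
             "hypothesis": example["hypothesis"].rstrip(".") + " " + phrase,
             "label": example["label"]}
            for phrase in CWB_PHRASES.values()]

def augment_wob(example):
    """Append 'and true is true' to every hypothesis, keeping the label."""
    return [{"premise": example["premise"],
             "hypothesis": example["hypothesis"].rstrip(".") + " " + WOB_PHRASE,
             "label": example["label"]}]
```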
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8772–8779 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8772 Uncertain Natural Language Inference Tongfei Chen1∗ Zhengping Jiang2∗† Adam Poliak1 Keisuke Sakaguchi3† Benjamin Van Durme1 1 Johns Hopkins University 2 Columbia University 3 Allen Institute for AI {tongfei,azpoliak,vandurme}@jhu.edu [email protected], [email protected] Abstract We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments. We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of the SNLI dataset under a probabilistic scale, where items even with the same categorical label differ in how likely people judge them to be true given a premise. We describe a direct scalar regression modeling approach, and find that existing categorically labeled NLI data can be used in pre-training. Our best models approach human performance, demonstrating models may be capable of more subtle inferences than the categorical bin assignment employed in current NLI tasks. 1 Introduction Variants of entailment tasks have been used for decades in benchmarking systems for natural language understanding. Recognizing Textual Entailment (RTE) or Natural Language Inference (NLI) is traditionally a categorical classification problem: predict which of a set of discrete labels apply to an inference pair, consisting of a premise (푝) and hypothesis (ℎ). The FraCaS consortium offered the task as an evaluation mechanism, along with a small challenge set (Cooper et al., 1996), which was followed by the RTE challenges (Dagan et al., 2005). Despite differences between these and recent NLI datasets (Marelli et al., 2014; Lai et al., 2017; Williams et al., 2018; Khot et al., 2018, i.a.), NLI hsa remained a categorical prediction problem. However, entailment inference is uncertain and has a probabilistic nature (Glickman et al., 2005). Maintaining NLI as a categorical classification ∗Equal contribution. † Work performed while at Johns Hopkins University. Premise { Hypothesis NLI UNLI A man in a white shirt taking a picture { A man takes a picture ENT 100% A boy hits a ball, with a bat { The kid is playing in a baseball game ENT 78% A wrestler in red cries, one in blue celebrates { The wrestler in blue is undefeated CON 50% Man laying on a platform outside on rocks { Man takes a nap on his couch CON 0% Table 1: Probability assessments on NLI pairs. The NLI and UNLI columns respectively indicate the categorical label (from SNLI) and the subjective probability for the corresponding pair. problem is not ideal since coarse categorical labels mask the uncertain and probabilistic nature of entailment inference. NLI pairs may share a coarse label, but the probabilities that the hypotheses are entailed by their corresponding premises may vary greatly (see Table 1). Hence, not all contradictions are equally contradictory and not all entailments are equally entailed. We propose Uncertain Natural Language Inference (UNLI), a refinement of NLI that captures more subtle distinctions in meaning by shifting away from categorical labels to the direct prediction of human subjective probability assessments. 
We illustrate that human-elicited probability assessments contain subtle distinctions on the likelihood of a hypothesis conditioned on a premise, and UNLI captures these distinctions far beyond categorical labels in popular NLI datasets. We demonstrate how to elicit UNLI annotations. Using recent large-scale language model pre-training, we provide experimental results illustrating that systems can often predict UNLI judgments, but with clear gaps in understanding. We conclude that scalar annotation protocols should be adopted in future NLI-style dataset creation, which should enable new work in modeling a richer space of interesting inferences. 8773 Premise { Hypothesis SNLI u-SNLI A man is singing into a microphone. { A man performs a song. NEU 95% { A man is performing on stage. NEU 84% { A male performer is singing a special and meaningful song. NEU 15% { A man performing in a bar. NEU 14% { A man is singing the national anthem at a crowded stadium. NEU 0.6% Table 2: A premise in SNLI with its 5 hypotheses (labeled as neutral in SNLI) annotated in u-SNLI. 2 Eliciting UNLI annotations We elicit subjective probabilities from crowdsource workers (MTurk) for premise-hypothesis pairs from existing NLI data. Annotators are asked to estimate how likely the situation described in the hypothesis sentence would be true given the premise. Following the Efficient Annotation of Scalar Labels framework (EASL; Sakaguchi and Durme, 2018), we present annotators 5 sentence-pairs, each with a slider bar enabling direct assessment for each pair and ask annotators to calibrate their score for a sentence-pair based on the scores they provided to the other four pairs.1 In contrast to the uniform scale employed in the original EASL protocol, we modify the interface to allow finer-grained values near 0.0 and 1.0, following psychological findings that humans are especially sensitive to values near the ends of the probability spectrum (Tversky and Kahneman, 1981).2 This interface decision is a key distinction of this work contrasting prior efforts that averaged Likertscale (ordinal) annotations. This allows us to capture the difference between NLI pairs that are both appropriately contradicted or entailed under NLI, but that have a perceived difference of less than 1% probability. In order to capture the sensitivity near these ends, we adopt a more fine-grained slider bar with 10,000 steps with a logistic transformation. Specifically, for raw score 푥∈[0, 10000], we apply a scaled logistic function 푓(푥) = 휎(훽(푥−5000)) to re-scale the final result range to [0, 1]. We ran pilots to tune 훽, and determine that people tend to choose much lower probability for some events even though they are just slightly less likely (e.g., just below 50%).3 1 Example pairs were provided in the instructions along with suggested probability values. See Appendix A for details of the annotation interface and qualifications. 2 This is called the certainty effect: more sensitivity to the difference between, e.g., 0% and 1% than 50% and 51%. 3 This phenomenon accords with the weighting function in Prospect Theory (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992), where people tend to downweight probabilities with around 0.4 or above. ENT NEU CON 0 0.01 13 23 0.99 1 Figure 1: Dev set statistics, illustrating median and quartile for each of the 3 categories under our scalar probability scheme. Light / dark shade covers 96% / 50% of each category, and the bar denotes the median. 
Note that 푥-axis is logistic to allow fine-grained distinctions near 0.0 and 1.0. Therefore, we use different 훽’s depending on the range of [0, 0.5] or (0.5, 1]. Each sentence pair is annotated with 2- or 3-way redundancy. The individual responses are averaged to create a gold standard label for a premise-hypothesis pair. Data We annotate, i.e. elicit a probability 푦∈ [0, 1], for a subset of SNLI (Bowman et al., 2015) examples and refer to this data as u-SNLI.4 SNLI’s training set contains 7,931 distinct premises paired with at least 5 distinct neutral (NEU) hypotheses. For each premise, we sample 5 neutral hypotheses, resulting in 39,655 of these NEU pairs annotated. An additional 15,862 contradicted (CON) and entailed (ENT) pairs are annotated for our training set, resulting in 55,517 training examples. For our dev and test sets, we respectively annotated 3,040 examples sampled from SNLI’s dev and test splits. In total, we annotated 61,597 examples, about 12% of all examples in SNLI. Figure 1 plots the resultant median and quartile for each categorical SNLI label in the u-SNLI dev set, showing the wide range of probability judgments elicited for each label (see Table 2 for examples).5 3 Prediction Formally, given a premise 푝∈P and a hypothesis ℎ∈H, a UNLI model 퐹: P × H →[0, 1] should output an uncertainty score ˆ푦∈[0, 1] of the 4We use SNLI due to its popularity and its feature that each premise is paired with multiple hypotheses. 5 Data is available at http://nlp.jhu.edu/unli. 8774 Premise { Hypothesis SNLI u-SNLI Predicted A man perched on a row of aquariums is using a net to scoop a fish from another aquarium. { A man is standing by the aquariums. ENT 1.0 0.119 A man and woman are drinking at a bar. { A couple is out on a date. NEU 0.755 0.377 Couple walking on the beach. { The couple are holding hands. NEU 0.808 0.308 An elderly woman crafts a design on a loom. { The woman is a seamstress. NEU 0.923 0.197 Two girls riding an amusement park ride. { The two girls are screaming. NEU 0.909 0.075 A man and woman sit at a cluttered table. { The table is neat and clean. CON 4.91×10−4 0.262 A race car sits in the pits. { The car is going fast. CON 2.88×10−7 0.724 A guy is standing in front of a toilet with a coffee cup in one hand and a toilet brush in the other. { A man is attempting to brew coffee. CON 8.32×10−6 0.504 Table 3: Selected u-SNLI dev examples where BERT predictions greatly deviate from gold assessments. premise-hypothesis pair that correlates well with a human-provided subjective probability assessment. We train a regression UNLI model to predict the probability that a premise entails a hypothesis. We modify the sentence pair classifier6 in BERT to exploit recent advancements in large-scale language model pre-training. Following Devlin et al. (2019), we concatenate the premise and the hypothesis, with a special sentinel token (CLS) inserted at the beginning and a separator (SEP) inserted after each sentence, tokenized using WordPiece. After encoding the concatenated token sequence with BERT, we take the encoding of the first sentinel token. f(푝, ℎ) = BERT(CLS ; 푝; SEP ; ℎ; SEP)[0] . We pass the resulting feature vector f(푝, ℎ) through a sigmoid-activated linear layer to obtain a probability, instead of a softmax used in categorical NLI. We directly model UNLI as a regression problem, trained using a binary cross-entropy loss7 between the human annotation 푦and the model output ˆ푦. 
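A minimal sketch of this regression model, together with the Pearson r, Spearman ρ, and MSE metrics used for evaluation, assuming the HuggingFace transformers and scipy interfaces; the paper specifies the BERT sentence-pair encoding and the sigmoid head but not a particular implementation, and the example pair and gold probability are illustrative.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast
from scipy.stats import pearsonr, spearmanr

class UNLIRegressor(nn.Module):
    """BERT encoding of (CLS ; p ; SEP ; h ; SEP), first-token feature, sigmoid head."""
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, **enc):
        cls = self.bert(**enc).last_hidden_state[:, 0]      # f(p, h)
        return torch.sigmoid(self.head(cls)).squeeze(-1)    # y_hat in [0, 1]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = UNLIRegressor()
enc = tokenizer(["A race car sits in the pits."], ["The car is going fast."],
                return_tensors="pt", padding=True, truncation=True)
y_gold = torch.tensor([0.05])                               # illustrative scalar annotation
loss = nn.functional.binary_cross_entropy(model(**enc), y_gold)

def evaluate(y_hat, y_true):
    """Pearson r, Spearman rho, and MSE between lists of predictions and gold probabilities."""
    r, _ = pearsonr(y_hat, y_true)
    rho, _ = spearmanr(y_hat, y_true)
    mse = sum((a - b) ** 2 for a, b in zip(y_hat, y_true)) / len(y_true)
    return r, rho, mse
```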
Owing to the concerns raised with annotation artifacts in SNLI (Gururangan et al., 2018; Tsuchiya, 2018; Poliak et al., 2018), we include a hypothesis-only baseline.8 Metrics We compute Pearson correlation (푟), the Spearman rank correlation (휌), and the mean square error (MSE) between y and ˆ푦as the metrics to measure the to performance of UNLI models. Pearson 푟measures the linear correlation between the gold probability assessments and model’s output; Spearman 휌measures the ability of the model ranking the premise-hypothesis pairs with 6 The neural architecture for MultiNLI (Williams et al., 2018) in Devlin et al. (2019). 7 No significant difference is observed with an 퐿2 loss. 8 See Appendix D for additional training details. respect to their subjective probability; MSE measures whether the model can recover the subjective probability value from premise-hypothesis pairs. A high 푟and 휌, but a low MSE is desired. 4 Results & Analysis Table 4 reports results on u-SNLI dev and test sets. Just training on 55, 517 u-SNLI examples yields a 62.71% Pearson 푟on test. The hypothesis-only baseline achieved a correlation around 40%. This result corroborates the findings that a hidden bias exists in the SNLI dataset’s hypotheses, and shows this bias may also exist in u-SNLI.9 Hyp-only Full-model Dev Test Dev Test 풓 0.3759 0.4120 0.6383 0.6271 흆 0.3853 0.4165 0.6408 0.6346 MSE 0.1086 0.1055 0.0751 0.0777 Table 4: Metrics for training on u-SNLI. Human Performance We elicit additional annotations on u-SNLI dev set to establish a randomly sampled human performance. We use the same annotators as before but ensure each annotator has not previously seen the pair they are annotating. We average the scores from three-way redundant elicitation,10 yielding 푟= 0.6978, 휌= 0.7273, and MSE = 0.0759: our regression model trained on uSNLI is therefore approaching human performance. While encouraging, the model fails drastically for some examples. 9 This is unsurprising because u-SNLI examples are sampled from SNLI. 10 This setting approximates the performance of a randomly sampled human on u-SNLI, and is therefore a reasonable lower bound on the performance one could achieve with a dedicated, trained single human annotator. 8775 Qualitative Error Analysis Table 3 illustrates examples with large gaps between the gold probability assessment and the BERT-based model output. The model seems to have learned lexiconlevel inference (e.g., race cars { going fast, but ignored crucial information (sits in the pits), and fails to learn certain commonsense patterns (e.g. riding amusement park ride { screaming; man and woman drinking at a bar { on a date). These examples illustrate the model’s insufficient commonsense reasoning and plausibility estimation. Pre-training with SNLI Can we leverage the remaining roughly 500,000 SNLI training pairs that only have categorical labels? 
One method would be to train a categorical NLI model on SNLI and when fine-tuning on u-SNLI, replace the last layer of the network from a categorical prediction with a sigmoid function.11 However, a typical categorical loss function would not take into account the ordering between the different categorical labels.12 Instead, we derive a surrogate function 푠: T →[0, 1] that maps SNLI categorical labels 푡∈{ENT, NEU, CON} to the average score of all u-SNLI training annotations labeled with 푡in SNLI.13 SNLI SNLI + u-SNLI Dev Test Dev Test 풓 0.5198 0.4958 0.6762 0.6589 흆 0.5238 0.5231 0.6806 0.6708 MSE 0.1086 0.0928 0.0694 0.0733 Table 5: Metrics for training only on mapped SNLI or fine-tuning on u-SNLI. We use this mapping to pre-train a regression model on the SNLI training examples not included in u-SNLI. We also fine-tune the model on uSNLI’s training set. Table 5 reports the results evaluated on u-SNLI’s dev and test sets. The model trained on the roughly 500퐾mapped SNLI examples, performs much worse than when trained on just about 55퐾u-SNLI examples. When we pretrain the model on the mapped SNLI and fine-tune on u-SNLI, results noticeably improve. This improvement is akin to the Phang et al. (2018)’s finding that many NLI datasets cover informative signal 11 This is similar to how Pavlick and Callison-Burch (2016) pre-train on SNLI, then fine-tune the model using their AddOne pairs. 12 That the score of ENT > score of NEU > score of CON. 13 푠: {ENT ↦→0.9272; NEU ↦→0.4250; CON ↦→0.0209}. for different tasks, explaining why pre-training on NLI can be advantageous. Here, an impoverished version of UNLI is helpful. Model behavior Figure 2 depicts the model behavior when training just on SNLI or fine-tuning with u-SNLI. When using the original SNLI data, under the surrogate regression setting, the model’s prediction concentrates on the 3 surrogate scalar values of the 3 SNLI classes. After fine-tuning on u-SNLI, the model learns smoother predictions for premise-hypothesis pairs, supported by the superior Pearson correlation score. The darker boxes in bottom-right corner of the heatmaps (Figure 2) indicate high accuracy on samples with ≈1.0 gold u-SNLI labels and ≈1.0 model predictions, signifying that our UNLI models are very good at recognizing entailments. 0.1 0.3 0.5 0.7 0.9 Prediction (pre-trained) 0.1 0.3 0.5 0.7 0.9 Gold 0.1 0.3 0.5 0.7 0.9 Prediction (fine-tuned) 0.00 0.08 0.16 0.24 0.32 0.40 Figure 2: Heatmap on u-SNLI dev predictions when trained only on SNLI (left) or fine-tuned on u-SNLI (right). Prediction frequencies are normalized along each gold label row. 5 Related Work The probabilistic nature and the uncertainty of NLI has been considered from a variety of perspectives. Glickman et al. (2005) modified the task to explicitly include the probabilistic aspect of NLI, stating that “푝probabilistically entails ℎ... if 푝 increases the likelihood of ℎbeing true,” while Lai and Hockenmaier (2017) noted how predicting the conditional probability of one phrase given another would be helpful in predicting textual entailment. Other prior work has elicited ordinal annotations (e.g. Likert scale) reflecting likelihood judgments (Pavlick and Callison-Burch, 2016; Zhang et al., 2017), but then collapsed the annotations into coarse categorical labels for modeling. Vuli´c et al. (2017) proposed graded lexical entailment, which is similar to our idea but applied to lexical-level inference, asking “to what degree 푥is a type of 푦.” Additionally, Lalor et al. 
5 Related Work

The probabilistic nature and the uncertainty of NLI have been considered from a variety of perspectives. Glickman et al. (2005) modified the task to explicitly include the probabilistic aspect of NLI, stating that "p probabilistically entails h ... if p increases the likelihood of h being true," while Lai and Hockenmaier (2017) noted how predicting the conditional probability of one phrase given another would be helpful in predicting textual entailment. Other prior work has elicited ordinal annotations (e.g., Likert scale) reflecting likelihood judgments (Pavlick and Callison-Burch, 2016; Zhang et al., 2017), but then collapsed the annotations into coarse categorical labels for modeling. Vulić et al. (2017) proposed graded lexical entailment, which is similar to our idea but applied to lexical-level inference, asking "to what degree x is a type of y." Additionally, Lalor et al. (2016, 2018) tried capturing the uncertainty of each inference pair with item response theory (IRT), showing fine-grained differences in discriminative power within each label. Pavlick and Kwiatkowski (2019) recently argued that models should "explicitly capture the full distribution of plausible human judgments," as plausible human judgments exhibit inherent disagreements. Our concern is different, as we are interested in the uncertain and probabilistic nature of NLI. We are the first to propose a method for direct elicitation of subjective probability judgments on NLI pairs and direct prediction of these scalars, as opposed to reducing the task to categorical classification.

Recent work has also modeled the uncertainty of other semantic phenomena as direct scalar regression (and collected scalar versions of data for them) instead of categorical classification, e.g., factuality (Lee et al., 2015; Stanovsky et al., 2017; Rudinger et al., 2018) and semantic proto-roles (Teichert et al., 2017). Plausibility tasks such as COPA (Roemmele et al., 2011) and ROCStories (Mostafazadeh et al., 2016) ask models to choose the most probable example given a context, capturing relative uncertainty between examples, but do not force a model to predict the probability of h given p. Li et al. (2019) viewed the plausibility task of COPA as a learning-to-rank problem, where the model is trained to assign the highest scalar score to the most plausible alternative given the context. Our work can be viewed as a variant of this, with the score being an explicit human probability judgment instead.

Linguists such as van Eijck and Lappin (2014), Goodman and Lassiter (2015), Cooper et al. (2015) and Bernardy et al. (2018) have described models for natural language semantics that introduce probabilities into the compositional, model-theoretic tradition begun by those such as Davidson (1967) and Montague (1973). Where they propose probabilistic models for interpreting language, we are concerned with illustrating the feasibility of eliciting probabilistic judgments on examples through crowdsourcing, and contrasting with prior efforts restricted to limited categorical label sets.

6 Conclusion

We proposed Uncertain Natural Language Inference (UNLI), a new task of directly predicting human likelihood judgments on NLI premise-hypothesis pairs. In short, we have shown that not all NLI contradictions are created equal, nor neutrals, nor entailments. We demonstrated that (1) eliciting supporting data is feasible, and (2) annotations in the data can be used to improve a scalar regression model beyond the information contained in existing categorical labels, using recent contextualized word embeddings, e.g., BERT. Humans are able to make finer distinctions between meanings than are captured by current annotation approaches; we advocate that the community strive for systems that can do the same, and therefore shift away from categorical NLI labels and move to something more fine-grained, such as our UNLI protocol.

Acknowledgments

We thank anonymous reviewers from current and past versions of the article for their insightful comments and suggestions. This research benefited from support by DARPA AIDA and DARPA LORELEI. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.
References Jean-Philippe Bernardy, Rasmus Blanck, Stergios Chatzikyriakidis, and Shalom Lappin. 2018. A compositional Bayesian semantics for natural language. In Proceedings of the First International Workshop on Language Cognition and Computational Models, pages 1–10. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework. Technical report, The FraCaS Consortium. Robin Cooper, Simon Dobnik, Shalom Lappin, and Stefan Larsson. 2015. Probabilistic type theory and natural language semantics. Linguistic Issues in Language Technology, 10(1):1–43. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First 8777 PASCAL Machine Learning Challenges Workshop, pages 177–190. Donald Davidson. 1967. Truth and meaning. Synthese, 17(1):304–323. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 4171–4186. Jan van Eijck and Shalom Lappin. 2014. Probabilistic semantics for natural language. In Zoe Christoff, Paulo Galeazzi, Nina Gierasimczuk, Alexandru Marcoci, and Sonja Smets, editors, The Logic and Interactive Rationality Yearbook 2012, volume II. Oren Glickman, Ido Dagan, and Moshe Koppel. 2005. A probabilistic classification approach for lexical textual entailment. In Proc. AAAI, AAAI’05, pages 1050–1055. AAAI Press. Noah D. Goodman and Daniel Lassiter. 2015. Probabilistic semantics and pragmatics: Uncertainty in language and thought. In Shalom Lappin and Chris Fox, editors, The Handbook of Contemporary Semantic Theory, 2nd edition. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2, pages 107– 112. Daniel Kahneman and Amos Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263–292. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. Alice Lai, Yonatan Bisk, and Julia Hockenmaier. 2017. Natural language inference from multiple premises. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, Volume 1, pages 100–109. Alice Lai and Julia Hockenmaier. 2017. Learning to predict denotational probabilities for modeling entailment. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Volume 1, pages 721–730. John P. Lalor, Hao Wu, Tsendsuren Munkhdalai, and Hong Yu. 2018. 
Understanding deep learning performance through an examination of test set difficulty: A psychometric case study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4711–4716. John P. Lalor, Hao Wu, and Hong Yu. 2016. Building an evaluation scale using item response theory. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 648–657. Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015, pages 1643–1648. Zhongyang Li, Tongfei Chen, and Benjamin Van Durme. 2019. Learning to rank for plausible plausibility. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 4818–4823. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. The SICK (Sentences Involving Compositional Knowledge) dataset for relatedness and entailment. Richard Montague. 1973. The proper treatment of quantification in ordinary english. In K. J. J. Hintikka, J. M. E. Moravcsik, and P. Suppes, editors, Approaches to Natural Language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics, pages 221–242. Springer Netherlands, Dordrecht. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James F. Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849. Ellie Pavlick and Chris Callison-Burch. 2016. Most "babies" are "little" and most "problems" are "huge": Compositional entailment in adjective-nouns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 2164–2173. Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Trans. Assoc. Comput. Linguistics, 7:677–694. Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. CoRR, abs/1811.01088. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191. 8778 Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring Symposium. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 731–744. Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded support. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 208–218. Amram Shapiro, Louise Firth Campbell, and Rosalind Wright. 2014. Book of Odds: From Lightning Strikes to Love at First Sight, the Odds of Everyday Life. William Morrow Paperbacks. 
Gabriel Stanovsky, Judith Eckle-Kohler, Yevgeniy Puzikov, Ido Dagan, and Iryna Gurevych. 2017. Integrating deep linguistic features in factuality prediction over unified datasets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 2, pages 352–357. Adam R. Teichert, Adam Poliak, Benjamin Van Durme, and Matthew R. Gormley. 2017. Semantic protorole labeling. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 4459–4466. Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation. Amos Tversky and Daniel Kahneman. 1981. The framing of decisions and the psychology of choice. Science, 211(4481):453–458. Amos Tversky and Daniel Kahneman. 1992. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty, 5(4):297– 323. Ivan Vuli´c, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. Hyperlex: A large-scale evaluation of graded lexical entailment. Computational Linguistics, 43(4). Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 1112–1122. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Trans. Assoc. Comput. Linguistics, 5:379– 395. 8779 A Annotation Here we include information about the qualifications used to vet annotators. We also include screenshots of the interface used to collect annotations. A.1 Qualification Test Annotators were given a qualification test to ensure non-expert workers were able to give reasonable subjective probability estimates. We first extracted seven statements from Book of Odds (Shapiro et al., 2014), and manually split the statement into a bleached premise and hypothesis. We then wrote three easy premise-hypothesis pairs with definite probabilities like (푝= “A girl tossed a coin.”, ℎ= “The coin comes up a head.”, probability: 0.5). We qualify users that meet both criteria: (1) For the three easy pairs, their annotations had to fall within a small error range around the correct label 푦, computed as 훿= 1 4 min{푦, 1 −푦}. (2) Their overall annotations have a Pearson 푟> 0.7 and Spearman 휌> 0.4. This qualification test led to a pool of 40 trusted annotators, which were employed for the entirety of our dataset creation. A.2 Annotation Interface We include screenshots of the instructions and examples shown to crowdsource workers ( Figure 4) as the interface we provided (Figure 3) B Redundant Annotations By default, we use two crowdsource workers to annotate each UNLI sentence-pair. If the two annotations on the raw slider bar {0, · · · , 10000} differ by more than 2000, we then elicit a third annotator. C Dataset Statistics Table 6 summarizes the statistics of u-SNLI. D Additional Training Details We use the BERT-BASE-UNCASED model, with the Adam optimizer (Kingma and Ba, 2015), an initial learning rate of 10−5, and maximum gradient norm 1.0. Our model is trained for 3 epochs, where the epoch resulting in the highest Pearson 푟on the dev set is selected. Figure 3: An example of our annotation interface. Figure 4: Three examples from the instructions. 
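The qualification criteria in A.1 can be checked mechanically; the following is a minimal sketch of that filter under our reading of the two criteria (the helper name is ours, not from any released code).

from scipy.stats import pearsonr, spearmanr

def passes_qualification(easy_gold, easy_pred, all_gold, all_pred):
    """Criterion 1: each easy item within delta = 0.25 * min(y, 1 - y) of its gold label y.
    Criterion 2: overall Pearson r > 0.7 and Spearman rho > 0.4 on the qualification set."""
    for y, y_hat in zip(easy_gold, easy_pred):
        delta = 0.25 * min(y, 1.0 - y)
        if abs(y_hat - y) > delta:
            return False
    r, _ = pearsonr(all_gold, all_pred)
    rho, _ = spearmanr(all_gold, all_pred)
    return r > 0.7 and rho > 0.4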
[Figure 5: Our logistic transformation function, mapping raw slider annotations in {0, ..., 10000} to transformed values in [0, 1]; curves shown for β = 0.3 and for β = 0.02 scaled to [0.5, 1].]

Partition   Breakdown            SNLI     U-SNLI
train       Distinct premises    151k     7,931
            ENT hypotheses       183k     7,931
            NEU hypotheses       183k     39,655
            CON hypotheses       183k     7,931
            Total P-H pairs      550k     55,517
dev         Distinct premises    3,319    2,647
            ENT hypotheses       3,329    162
            NEU hypotheses       3,235    2,764
            CON hypotheses       3,278    114
            Total P-H pairs      10k      3,040
test        Distinct premises    3,323    2,635
            ENT hypotheses       3,368    156
            NEU hypotheses       3,219    2,770
            CON hypotheses       3,237    114
            Total P-H pairs      10k      3,040

Table 6: Statistics of SNLI data re-annotated under UNLI.
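Appendix D's hyper-parameters translate into a compact training setup; the sketch below is our own reconstruction, not the authors' code. It assumes the Hugging Face transformers 4.x API, uses the standard BERT sentence-pair encoder with a sigmoid output as described in the main text, and shows MSE only as a placeholder regression loss (the paper's exact loss is specified earlier in the document).

import torch
import torch.nn as nn
from transformers import AutoModel

class UnliRegressor(nn.Module):
    """BERT encoder with a single sigmoid output, mirroring the setup in Appendix D."""
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return torch.sigmoid(self.head(hidden[:, 0])).squeeze(-1)   # [CLS] vector -> probability

# Training skeleton (Adam, lr 1e-5, gradient norm clipped to 1.0, 3 epochs,
# keeping the epoch with the best dev Pearson r):
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
#   loss = nn.functional.mse_loss(model(input_ids, attention_mask), gold_probs)  # placeholder loss
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)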
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8780–8794 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 8780 Extracting Headless MWEs from Dependency Parse Trees: Parsing, Tagging, and Joint Modeling Approaches Tianze Shi Cornell University [email protected] Lillian Lee Cornell University [email protected] Abstract An interesting and frequent type of multiword expression (MWE) is the headless MWE, for which there are no true internal syntactic dominance relations; examples include many named entities (“Wells Fargo”) and dates (“July 5, 2020”) as well as certain productive constructions (“blow for blow”, “day after day”). Despite their special status and prevalence, current dependency-annotation schemes require treating such flat structures as if they had internal syntactic heads, and most current parsers handle them in the same fashion as headed constructions. Meanwhile, outside the context of parsing, taggers are typically used for identifying MWEs, but taggers might benefit from structural information. We empirically compare these two common strategies—parsing and tagging—for predicting flat MWEs. Additionally, we propose an efficient joint decoding algorithm that combines scores from both strategies. Experimental results on the MWE-Aware English Dependency Corpus and on six non-English dependency treebanks with frequent flat structures show that: (1) tagging is more accurate than parsing for identifying flat-structure MWEs, (2) our joint decoder reconciles the two different views and, for non-BERT features, leads to higher accuracies, and (3) most of the gains result from feature sharing between the parsers and taggers. 1 Introduction Headless multi-word expressions (MWEs), including many named entities and certain productive constructions, are frequent in natural language and are important to NLP applications. In the context of dependency-based syntactic parsing, however, they pose an interesting representational challenge. Dependency-graph formalisms for syntactic structure represent lexical items as nodes and headdominates-modifier/argument relations between Officials at Mellon Capital were unavailable for comment O O B I O O O O nsubj case nmod mwe_NNP xcomp case nmod Figure 1: Dependency tree from the MWE-Aware English Dependency Corpus, imposing a “head” relationship between the words in the actually headless MWE Mellon Capital. Also shown are MWE BIO labels. lexical items as directed arcs on the corresponding pair of nodes. Most words can be assigned clear linguistically-motivated syntactic heads, but several frequently occurring phenomena do not easily fit into this framework, including punctuation, coordinating conjunctions, and “flat”, or headless MWEs. While the proper treatment of headless constructions in dependency formalisms remains debated (Kahane et al., 2017; Gerdes et al., 2018), many well-known dependency treebanks handle MWEs by giving their component words a “default head”, which is not indicative of a true dominance relation, but rather as “a tree encoding of a flat structure without a syntactic head” (de Marneffe and Nivre, 2019, pg. 213). Fig. 1 shows an example: the headless MWE Mellon Capital has its first word, Mellon, marked as the “head” of Capital. Despite the special status of flat structures in dependency tree annotations, most state-of-theart dependency parsers treat all annotated relations equally, and thus do not distinguish between headed and headless constructions. 
When headless-span identification (e.g., as part of namedentity recognition (NER)) is the specific task at hand, begin-chunk/inside-chunk/outside-chunk (BIO) tagging (Ramshaw and Marcus, 1995) is generally adopted. It is therefore natural to ask whether parsers are as accurate as taggers in identifying these “flat branches” in dependency trees. Additionally, since parsing and tagging represent 8781 two different views of the same underlying structures, can joint decoding that combines scores from the two modules and/or joint training under a multitask learning (MTL) framework derive more accurate models than parsing or tagging alone? To facilitate answering these questions, we introduce a joint decoder that finds the maximum sum of scores from both BIO tagging and parsing decisions. The joint decoder incorporates a special deduction item representing continuous headless spans, while retaining the cubic-time efficiency of projective dependency parsing. The outputs are consistent structures across the tagging view and the parsing view. We perform evaluation of the different strategies on the MWE-Aware English Dependency Corpus and treebanks for five additional languages from the Universal Dependencies 2.2 corpus that have frequent multi-word headless constructions. On average, we find taggers to be more accurate than parsers at this task, providing 0.59% (1.42%) absolute higher F1 scores with(out) pretrained contextualized word representations. Our joint decoder combining jointly-trained taggers and parsers further improves the tagging strategy by 0.69% (1.64%) absolute. This corroborates early evidence (Finkel and Manning, 2009) that joint modeling with parsing improves over NER. We also show that neural representation sharing through MTL is an effective strategy, as it accounts for a large portion of our observed improvements. Our code is publicly available at https://github.com/tzshi/flat-mwe-parsing. 2 Background on Headless Structures A (multi-word) headless construction, or flat structure, is a span of lexical items that together reference a single concept and where no component is a syntactically more plausible candidate for the span’s head than any other component. Examples are boldfaced in the following English sentences. (1) Within the scope of this paper: a. ACL starts on July 5, 2020. b. My bank is Wells Fargo. c. The candidates matched each other insult for insult. (Jackendoff, 2008) (1)a and (1)b show that dates and many named entities can be headless constructions, suggesting that they are frequent. Indeed, in the MWE-Aware English Dependency Corpus (Kato et al., 2017), nearly half of the sentences contain headless constructions, 75% of which are named entities. For comparison, (2) shows examples of non-flat MWEs, which are also interesting and important, but they are beyond the focus of our paper. (2) Outside the scope of this paper: a. congressman at large (Sag et al., 2002) [head = “congressman”] b. I have moved on. [verb-particle construction, head = “moved”] c. I take your argument into account. (Constant et al., 2017) [light-verb construction, head = “take”] Returning to headless MWEs, the choice of representation for headless spans depends on the task. In named-entity recognition, such spans are often treated as BIO tag sequences:1 for example, in Fig. 1, “Mellon” is tagged as “B” and “Capital” is tagged as “I”. 
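To make the tagging view concrete, here is a small sketch (ours, not from the released code) that recovers multi-word spans from a BIO sequence, dropping single-token spans since the task targets multi-word expressions.

def bio_to_spans(tags):
    """Convert a BIO sequence (e.g., ["O", "O", "B", "I", "O"]) into (start, end) spans,
    keeping only multi-word spans."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None and i - start > 1:
                spans.append((start, i))       # close the previous multi-word span
            start = i
        elif tag == "O":
            if start is not None and i - start > 1:
                spans.append((start, i))
            start = None
        # tag == "I": continue the current span
    if start is not None and len(tags) - start > 1:
        spans.append((start, len(tags)))
    return spans

# bio_to_spans(["O", "O", "B", "I", "O", "O", "O", "O"]) -> [(2, 4)], i.e., "Mellon Capital" in Fig. 1.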
In dependency parsing, where labeled dependency arcs are the only way to express a syntactic analysis (short of treating MWEs as atomic lexical items, which would result in a chicken-and-egg problem), the only option is to impose arcs within the MWE's span. Different corpora adopt different annotation conventions. The MWE-Aware English Dependency Corpus uses the arc label mwe_NNP, as shown in Fig. 1. The Universal Dependencies (UD; Nivre et al., 2018) annotation guidelines have all following tokens in such constructions attached to the first one via arcs labeled flat, a choice that is admittedly "in principle arbitrary".2

The frequency of flat structures across different treebanks varies according to language, genre, and even tokenization guidelines, among other factors. Table 1 lists the UD 2.2 treebanks with the highest and lowest percentage of flat relations. While the Korean treebank ko_gsd (with the highest percentage) splits up most names into multiple tokens and connects them through flat, the Japanese treebank ja_gsd (no flats at all) treats all names as compound nouns, and thus represents them as having internal structure without any indication that a special case has occurred.3 Fig. 2 shows examples from the UD parallel treebanks, illustrating the diversity of annotation for the same sentence rendered in different languages.

1 In this paper, we adopt the original BIO tagset, which cannot properly represent discontinuous MWEs. See Schneider et al. (2014) for modified tagsets providing such support.
2 universaldependencies.org/u/dep/flat.html
3 Some flat structures can end up using other dependency labels such as compound, as a result of the fact that many UD treebanks, including ja_gsd, are automatically converted from non-UD style annotations. The UD annotations depend on how detailed the original syntactic analyses are and the accuracies of the conversion algorithms.

[Figure 2: An illustration of flat-structure annotation variation across treebanks: a set of parallel sentences, all containing the conceptually headless MWE "Martin Luther King, Jr." (underlined), from UD 2.2 (treebank code _pud) in English, German, Chinese, Japanese, Turkish, and Portuguese (top to bottom). The intent of this figure is not to critique particular annotation decisions, but to demonstrate the notation, concepts, and data extraction methods used in our paper. To wit: highlights/black-background indicate well-formed flat-MWE tree fragments according to the principles listed in §4. BIO sequences are induced by the longest-spanning flat arcs. When there is a mismatch between the highlighted tree fragments and the B-I spans—here, in the German, Chinese and Turkish examples—it is because the dependency trees do not fully conform to the UD annotation guidelines on headless structures.]
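Under the dependency view just described, headless spans can be read off the tree as the tokens covered by the longest-spanning flat (or mwe_NNP) arcs; the sketch below, with names of our own choosing, implements that extraction for a single sentence given as head indices and arc labels.

def spans_from_flat_arcs(heads, labels, flat_label="flat"):
    """heads[i] is the 0-based head index of token i (or -1 for the root);
    labels[i] is the arc label of token i. Returns (start, end) spans covered
    by arcs carrying the headless-structure label."""
    span_end = {}                       # first token of a flat span -> rightmost covered token
    for dep, (head, label) in enumerate(zip(heads, labels)):
        if label == flat_label:
            span_end[head] = max(span_end.get(head, head), dep)
    return [(start, end + 1) for start, end in sorted(span_end.items())]

# Toy example: token 3 attached to token 2 with label "flat" yields the span (2, 4):
# spans_from_flat_arcs(heads=[-1, 0, 1, 2], labels=["root", "obj", "nmod", "flat"]) -> [(2, 4)]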
Treebank (Language)                         % flat graphs ↓   % flat arcs
19 treebanks with highest percentages:
ko_gsd (Korean)                             67.84             15.35
id_gsd (Indonesian)                         61.63             9.39
ca_ancora (Catalan)                         41.11             3.32
nl_lassysmall (Dutch)                       38.90             5.87
ar_nyuad (Arabic)                           37.63             2.19
es_ancora (Spanish), sr_set (Serbian), it_postwita (Italian), pt_bosque (Portuguese), pt_gsd (Portuguese), fa_seraji (Persian), de_gsd (German), hu_szeged (Hungarian), fr_gsd (French), es_gsd (Spanish), he_htb (Hebrew), kk_ktb (Kazakh), be_hse (Belarusian), nl_alpino (Dutch)   > 20.00   ...
...
12 treebanks without flat arcs:
cs_cltt (Czech), grc_perseus (Ancient Greek), hi_hdtb (Hindi), ja_gsd (Japanese), ja_bccwj (Japanese), la_ittb (Latin), la_perseus (Latin), no_nynorsklia (Norwegian), swl_sslc (Swedish Sign Language), ta_ttb (Tamil), ur_udtb (Urdu), vi_vtb (Vietnamese)   0.00   0.00

Table 1: The UD 2.2 training treebanks with highest and lowest percentage of flat arcs, out of 90 treebanks.

Overall, more than 20% of the treebanks in the UD 2.2 collection have flat structures in more than 20% of their training-set sentences.4 Therefore, a parsing approach taking into account the special status of headless structural representations can potentially benefit models for a large number of languages and treebanks.

4 Measured on the 90 treebanks with training splits.

2.1 Notation and Definitions

Formally, given an n-word sentence w = w_1, w_2, ..., w_n, we define its dependency structure to be a graph G = (V, E). Each node in V corresponds to a word in the sentence. Each (labeled) edge (h, m, r) ∈ E denotes a syntactic relation labeled r between the head word w_h and modifier word w_m, where h, m ∈ {0, 1, ..., n} and 0 denotes the dummy root of the sentence. Since we work with dependency treebanks, we require that the edges in E form a tree. To represent a multi-word headless span w_i, ..., w_j, all subsequent words in the span are attached to the beginning word w_i, i.e., ∀k ∈ {i+1, ..., j}, (i, k, f) ∈ E, where f is the special syntactic relation label denoting headless structures (flat in UD annotation). Alternatively, one can also use a BIO tag sequence T = (t_1, t_2, ..., t_n) ∈ {B, I, O}^n to indicate the location of any headless spans within w. The headless MWE span w_i, ..., w_j has the corresponding tags t_i = B and ∀k ∈ {i+1, ..., j}, t_k = I; tokens outside any spans are assigned the tag O. We call G and T consistent if they indicate the same set of headless spans for w.

3 Three Approaches

We first present the standard approaches of edge-factored parsing (§3.2) and tagging (§3.3) for extracting headless spans in dependency trees, and then introduce a joint decoder (§3.4) that finds the global maximum among consistent (tree structure, tag sequence) pairs.

3.1 Preliminaries

Given a length-n sentence w—which we henceforth denote with the variable x for consistency with machine-learning conventions—we first extract contextualized representations from the input to associate each word with a vector x_0 (for the dummy word "root"), x_1, ..., x_n.
We consider two common choices of feature extractors: (1) bidirectional long short-term memory networks (biLSTMs; Graves and Schmidhuber, 2005), which have been widely adopted in dependency parsing (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) and sequence tagging (Ma and Hovy, 2016); and (2) the Transformer-based (Vaswani et al., 2017) BERT feature extractor (Devlin et al., 2019), pre-trained on large corpora and known to provide superior accuracies on both tasks (Kitaev et al., 2019; Kondratyuk and Straka, 2019). For BERT models, we fine-tune the representations from the final layer for our parsing and tagging tasks. When the BERT tokenizer renders multiple tokens from a single pre-tokenized word, we follow Kitaev et al. (2019) and use the BERT features from the last token as its representation.

3.2 (Edge-Factored) Parsing

Since we consider headless structures that are embedded inside parse trees, it is natural to identify them through a rule-based post-processing step after full parsing. Our parsing component replicates that of the state-of-the-art Che et al. (2018) parser, which has the same parsing model as Dozat and Manning (2017). We treat unlabelled parsing as a head selection problem (Zhang et al., 2017) with deep biaffine attention scoring:

  h_i^attach = MLP^attach-head(x_i)
  m_j^attach = MLP^attach-mod(x_j)
  s_{i,j} = [h_i^attach; 1]^T U^attach [m_j^attach; 1]
  P(h_j = i | x) = softmax_i(s_{:,j}),

where MLP^attach-head and MLP^attach-mod are multi-layer perceptrons (MLPs) that project contextualized representations into a d-dimensional space; [·; 1] indicates appending an extra entry of 1 to the vector; U^attach ∈ R^{(d+1)×(d+1)} generates a score s_{i,j} for w_j attaching to w_i (which we can then refer to as the head of w_j, h_j); a softmax function defines a probability distribution over all syntactic head candidates in the argument vector (we use the range operator ":" to evoke a vector); and, recall, we represent potential heads as integers, so that we may write h_j = i ∈ {0, ..., n}. The model for arc labeling employs an analogous deep biaffine scoring function:

  h_i^rel = MLP^rel-head(x_i)
  m_j^rel = MLP^rel-mod(x_j)
  v_{i,j,r} = [h_i^rel; 1]^T U_r^rel [m_j^rel; 1]
  P(r_j = r | x, h_j = i) = softmax_r(v_{i,j,:}),

where r_j is the arc label between w_{h_j} and w_j. The objective for training the parser is to minimize the cumulative negative log-likelihood

  L_parse = Σ_{(i*, j*, r*) ∈ E} [ −log P(h_{j*} = i* | x) − log P(r_{j*} = r* | x, h_{j*} = i*) ].

After the model predicts a full parse, we extract headless structures as the tokens "covered" by the longest-spanning f-arcs (f = flat in UD).

3.3 Tagging

For extracting spans in texts, if one chooses to ignore the existence of parse trees, BIO tagging is a natural choice. We treat the decision for the label of each token as an individual multi-class classification problem. We let

  P(t_i = t | x) = softmax_t(MLP^tag(x_i)),

where MLP^tag has 3 output units corresponding to the scores for tags B, I and O respectively.5 We train the tagger to minimize

  L_tag = Σ_i −log P(t_i = t_i* | x),

where t* corresponds to the gold BIO sequence. During inference, we predict the BIO tags independently at each token position and interpret the tag sequence as a set of MWE spans. As a post-processing step, we discard all single-token spans, since the task is to predict multi-word spans.
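A minimal PyTorch sketch of the scoring components in §3.2-3.3, in our own rendering (class names and initialization choices are ours, and this is not the released implementation; the 500-dimensional MLPs, ReLU activations, and dropout of 0.33 follow the settings reported in §4):

import torch
import torch.nn as nn

class BiaffineHeadScorer(nn.Module):
    """Head-selection scorer of Sec. 3.2: scores[i, j] = [h_i; 1]^T U [m_j; 1]."""
    def __init__(self, enc_dim, mlp_dim=500, dropout=0.33):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU(), nn.Dropout(dropout))
        self.mod_mlp = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU(), nn.Dropout(dropout))
        self.U = nn.Parameter(torch.randn(mlp_dim + 1, mlp_dim + 1) * 0.01)

    def forward(self, x):                                    # x: (n+1, enc_dim); row 0 is the dummy root
        ones = x.new_ones(x.size(0), 1)
        h = torch.cat([self.head_mlp(x), ones], dim=-1)      # candidate-head representations [h_i; 1]
        m = torch.cat([self.mod_mlp(x), ones], dim=-1)       # modifier representations [m_j; 1]
        scores = h @ self.U @ m.t()                          # scores[i, j] = [h_i; 1]^T U [m_j; 1]
        return torch.log_softmax(scores, dim=0)              # column j holds log P(h_j = i | x)

class BIOTagger(nn.Module):
    """Per-token 3-way classifier of Sec. 3.3 over the tags B, I, O."""
    def __init__(self, enc_dim, mlp_dim=500, dropout=0.33):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU(), nn.Dropout(dropout),
                                 nn.Linear(mlp_dim, 3))

    def forward(self, x):                                    # x: (n, enc_dim)
        return torch.log_softmax(self.mlp(x), dim=-1)        # log P(t_i | x) for tags [B, I, O]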
3.4 A Joint Decoder

A parser and a tagger take two different views of the same underlying data. It is thus reasonable to hypothesize that a joint decoding process that combines the scores from the two models might yield more accurate predictions. In this section, we propose such a joint decoder to find the parser+tagger-consistent structure with the highest product of probabilities. Formally, if Y is the output space of all consistent parse tree structures and BIO tag sequences, then for y ∈ Y with components consisting of tags t_i, head assignments h_i, and relation labels r_i, our decoder aims to find ŷ satisfying

  ŷ = argmax_{y ∈ Y} P(y | x), where P(y | x) = Π_i P(t_i | x) P(h_i | x) P(r_i | x, h_i).

5 Sequence tagging is traditionally handled by conditional random fields (Lafferty et al., 2001, CRFs). However, in recent experiments using contextualized representations on tagging (Clark et al., 2018; Devlin et al., 2019), CRF-style loss functions provide little, if any, performance gain compared with simple multi-class classification solutions, at slower training speeds, to boot. Our preliminary experiments with both biLSTM and BERT-based encoders corroborate these findings, and thus we report results trained without CRFs.

[Figure 3: Eisner's (1996) algorithm adapted to parsing headless structures (unlabeled case), our modifications highlighted in blue. All deduction items are annotated with their scores. The axioms are R-INIT (score log P(t_i = O)), L-INIT (score 0), and the new R-MWE, whose score for a span from i to j is δ(i, j) = log P(t_i = B) + Σ_{k=i+1}^{j} (log P(t_k = I) + log P(h_k = i)); the COMB deduction rules sum the scores of adjacent items, and the LINK rules additionally add the attachment score log P(h_j = i) for the new arc. R-MWE combines BIO tagging scores and head-selection parsing scores. We need no L-MWE because of the rightward headless-structure-arc convention.]

Fig. 3 illustrates our joint decoder in the unlabeled case.6 It builds on Eisner's (1996) decoder for projective dependency parsing. In addition to having single-word spans as axioms in the deduction system, we further allow multi-word spans to enter the decoding procedure through the axiom R-MWE. Any initial single-word span receives an O-tag score for that word, while the newly introduced MWE spans receive B-tag, I-tag, attachment, and relation scores that correspond to the two consistent views of the same structure. The time complexity of this decoding algorithm remains the same O(n^3) as the original Eisner algorithm.

During training, we let the parser and the tagger share the same contextualized representation x and optimize a linearly interpolated joint objective

  L_joint = λ L_parse + (1 − λ) L_tag,

where λ is a hyper-parameter adjusting the relative weight of each module.7 This is an instance of multi-task learning (MTL; Caruana, 1993, 1997). MTL has proven to be a successful technique (Collobert and Weston, 2008) on its own; thus, in our experiments, we compare the joint decoder with using the MTL strategy alone.

6 In the labeled case, the parser further adds the arc-labeling scores to the R-MWE and LINK rules.

4 Experiments

Data We perform experiments on the MWE-Aware English Dependency Corpus (Kato et al., 2017) and treebanks selected from Universal Dependencies 2.2 (UD; Nivre et al., 2018) for having frequent occurrences of headless MWE structures. The MWE-Aware English Dependency Corpus provides automatically unified named-entity annotations based on OntoNotes 5.0 (Weischedel et al., 2013) and Stanford-style dependency trees (de Marneffe and Manning, 2008).
We extract MWE spans according to mwe_NNP dependency relations. We choose the UD treebanks based on two basic properties that hold for flat structures 7The joint decoder combines tagging and parsing scores regardless of whether the two modules are jointly trained. However, since feature extraction is the most time-consuming step in our neural models, especially with BERT-based feature extractors, it is most practical to save memory and time by sharing common feature representations across modules. 8786 Treebank # tokens # headless % # headless Average Compliance arcs spans span length ratio English 731,677 32,065 4.38% 16,997 2.89 100.00% UD 2.2 de_gsd 263,804 6,786 2.57% 5,663 2.59 93.00% it_postwita 99,441 2,733 2.75% 2,277 2.26 94.89% nl_alpino 186,046 4,734 2.54% 3,269 2.45 100.00% nl_lassysmall 75,134 4,408 5.87% 3,018 2.46 99.82% no_nynorsk 245,330 5,578 2.27% 3,670 2.54 99.78% pt_bosque 206,739 5,375 2.60% 4,310 2.25 97.38% Table 2: Dataset statistics. Language codes: de=German; it=Italian; nl=Dutch; no=Norwegian; pt=Portuguese. conforming to the UD annotation guidelines: (1) all words that are attached via flat relations must be leaf nodes and (2) all words within a flat span should be attached to a common “head” word, and each arc label should be either flat or punct.8 For each treebank, we compute its compliance ratio, defined as the percentage of its trees containing flat arc labels that satisfy both properties above; and we filter out those with compliance ratios below 90%.9 We rank the remaining treebanks by their ratios of flat relations among all dependency arcs, and pick those with ratios higher than 2%. Six treebanks representing 5 languages, German (McDonald et al., 2013), Italian (Sanguinetti et al., 2018), Dutch (Bouma and van Noord, 2017), Norwegian (Solberg et al., 2014) and Portuguese (Rademaker et al., 2017), are selected for our experiments.10 Data statistics are given in Table 2. To construct gold-standard BIO labels, we extract MWE spans according to the longest-spanning arcs that correspond to headless structures. Implementation Details We use 3-layer biLSTMs where each layer has 400 dimensions 8punct inside a headless span is often used for hyphens and other internal punctuation in named entities. See the English sentence in Fig. 2 for an example. 9The two properties defined in the UD guidelines for headless structures provide us with a common basis for uniform treatment across languages and treebanks. Unfortunately, the two properties can be violated quite often, due to issues in annotation and automatic treebank conversion into UD style. In 6 out of the top 10 treebanks containing the most flat relations, (at least one of) these properties are violated in more than 35% of the sentences with flat relations and have to be excluded from our experiments. We hope that ongoing community effort in data curation will facilitate evaluation on more diverse languages. 10It is a coincidence that all the selected languages are IndoEuropean (IE). Although there are some non-IE treebanks with high flat ratio, such as Korean (see Table 1), the annotated structures frequently break one or both of the basic properties. See Fig. 2 for violation examples. in both directions and the inputs are concatenations of 100-dimensional randomly-initialized word embeddings with the final hidden vectors of 256-dimensional single-layer character-based bi-LSTMs; for BERT, we use pre-trained cased multi-lingual BERT models11 and fine-tune the weights. 
We adopt the parameter settings of Dozat and Manning (2017) and use 500 and 100 dimensions for U att and U rel r , respectively. The MLP in the taggers have 500 hidden dimensions. We use a dropout (Srivastava et al., 2014) rate of 0.33, a single hidden layer, and a ReLU activation function (Nair and Hinton, 2010) for all MLPs. The models are trained with the Adam optimizer (Kingma and Ba, 2015) using a batch size of 16 sentences. The learning rates are set to 1e´3 for bi-LSTMs and 1e´5 for BERT initially and then multiplied by a factor of 0.1 if the performance on the development set stops improving within 3200 training iterations. For the parsing models, we use the projective Eisner (1996) decoder algorithm. For the joint training and joint decoding models, we tune λ P t0.02, 0.05, 0.1, 0.3, 0.5, 0.9u for each treebank independently and fix the settings based on the best dev-set scores. We run each model with 5 different random seeds and report the mean and standard deviation for each setting. Results We report F1 scores based on multi-word headless-structure extraction. Table 3 compares different strategies for identifying headless MWEs in parse trees. Tagging is consistently better than parsing except for two treebanks with BERT feature extractor. Tagging beats parsing in all but two combinations of treebank and feature extractor. As hypothesized, our joint decoder improves over both strategies by 0.69% (1.64%) absolute through combined decisions from parsing and tagging with(out) 11https://github.com/huggingface/transformers 8787 w/ bi-LSTM Compl. MTL Joint Treebank Ratio Ó Parsing Tagging Parsing Tagging Decoding English 100.00 91.24˘0.60 91.81˘0.45 93.00˘0.83 93.24˘0.76 93.49˘0.43 UD 2.2 nl_alpino 100.00 72.66˘1.73 74.94˘1.00 77.29˘0.80 75.58˘1.18 79.65˘1.05 nl_lassysmall 99.82 76.44˘1.56 77.98˘1.56 78.13˘0.98 77.58˘1.17 78.92˘1.00 no_nynorsk 99.78 85.34˘0.81 87.67˘0.90 86.72˘0.76 87.44˘0.76 88.40˘0.39 pt_bosque 97.38 89.55˘1.10 90.97˘0.46 91.30˘0.75 92.07˘1.04 90.63˘1.56 it_postwita 94.89 75.35˘1.05 76.37˘1.72 78.46˘1.08 77.87˘0.57 78.38˘1.04 de_gsd 93.00 63.32˘1.36 64.10˘1.31 64.81˘2.05 65.07˘1.35 65.86˘1.34 Average 79.13 80.55 81.39 81.26 82.19 w/ BERT Compl. MTL Joint Treebank Ratio Ó Parsing Tagging Parsing Tagging Decoding English 100.00 94.98˘0.26 95.45˘0.23 95.01˘0.20 95.86˘0.19 95.51˘0.58 UD 2.2 nl_alpino 100.00 83.87˘1.61 83.32˘1.01 84.65˘1.48 85.90˘1.51 86.61˘1.52 nl_lassysmall 99.82 87.16˘1.20 87.52˘0.59 88.10˘0.80 87.68˘0.78 88.35˘0.49 no_nynorsk 99.78 92.16˘0.93 93.48˘0.48 92.45˘0.34 93.11˘0.21 93.08˘0.62 pt_bosque 97.38 92.98˘0.82 93.47˘0.55 93.42˘0.65 93.85˘0.57 94.01˘0.19 it_postwita 94.89 80.80˘1.51 80.80˘1.52 80.90˘1.78 81.33˘0.43 80.83˘1.20 de_gsd 93.00 68.21˘1.43 70.28˘0.70 70.04˘1.14 71.05˘1.12 70.72˘0.90 Average 85.74 86.33 86.37 86.97 87.02 Table 3: Flat-structure identification test-set F1 scores (%) with bi-LSTM (top) and BERT (bottom). The cell with the best result for each treebank has blue shading; results within one standard deviation of the best are bolded. BERT. We also compare the joint decoding setting with MTL training strategy alone. While joint decoding yields superior F1 scores, MTL is responsible for a large portion of the gains: it accounts for over half of the average gains with bi-LSTMs, and when we use pre-trained BERT feature extractors, the accuracies of jointly-trained taggers are essentially as good as joint decoding models. 
Interestingly, the choice of feature extractors also has an effect on the performance gap between parsers and taggers. With bi-LSTMs, tagging is 1.42% absolute F1 higher than parsing, and the gap is mitigated through MTL. While pre-trained BERT reduces the performance difference dramatically down to 0.59% absolute, MTL no longer helps parsers overcome this gap. Additionally, we observe that MTL helps both parsing and tagging models, demonstrating that the two views of the same underlying structures are complementary to each other and that learning both can be beneficial to model training. By resolving such representational discrepancies, joint decoding exhibits further accuracy improvement. In terms of dependency parsing accuracies, we confirm that our parsing-only models achieve state-of-the-art performance on the UD treebanks, but there are no significant differences in parsing results among parsing-only, MTL and jointlydecoded models. See Appendix for detailed results. 5 Related Work Syntactic analysis in conjunction with MWE identification is an important line of research (Wehrli, 2000). The span-based representations that form the basis of phrase-structure trees (as opposed to dependency trees) are arguably directly compatible with headless spans. This motivates approaches using joint constituency-tree representations based on context-free grammars (Arun and Keller, 2005; Constant et al., 2013) and tree substitution grammars (Green et al., 2011, 2013). Finkel and Manning (2009) add new phrasal nodes to denote named entities, enabling statistical parsers trained on this modified representation to produce both parse trees and named entity spans simultaneously. Le Roux et al. (2014) use dual decomposition to develop a joint system that combines phrase-structure parsers and taggers for compound recognition. These ap8788 proaches do not directly transfer to dependencybased representations since dependency trees do not explicitly represent phrases. In the context of dependency parsing, Eryi˘git et al. (2011) report that MWE annotations have a large impact on parsing. They find that the dependency parsers are more accurate when MWE spans are not unified into single lexical items. Similar to the phrase-structure case, Candito and Constant (2014) consider MWE identification as a side product of dependency parsing into joint representations. This parse-then-extract strategy is widely adopted (Vincze et al., 2013; Nasr et al., 2015; Simkó et al., 2017). Waszczuk et al. (2019) introduce additional parameterized scoring functions for the arc labelers and use global decoding to produce consistent structures during arc-labeling steps once unlabeled dependency parse trees are predicted. Our work additionally proposes a joint decoder that combines the scores from both parsers and taggers. Alternative approaches to graph-based joint parsing and MWE identification include transition-based (Constant and Nivre, 2016) and easy-first (Constant et al., 2016) dependency parsing. These approaches typically rely on greedy decoding, whereas our joint decoder finds the globally optimal solution through dynamic programming. Our work only focuses on a subset of MWEs that do not have internal structures. There is substantial research interest in the broad area of MWEs (Sag et al., 2002; Constant et al., 2017) including recent releases of datasets (Schneider and Smith, 2015), editions of shared tasks (Savary et al., 2017; Ramisch et al., 2018) and workshops (Savary et al., 2018, 2019). 
We leave it to future work to extend the comparison and combination of taggers and dependency parsers to other MWE constructions. 6 Conclusion and Further Directions Our paper provides an empirical comparison of different strategies for extracting headless MWEs from dependency parse trees: parsing, tagging, and joint modeling. Experiments on the MWE-Aware English Dependency Corpus and UD 2.2 across five languages show that tagging, a widely-used methodology for extracting spans from texts, is more accurate than parsing for this task. When using bi-LSTM (but not BERT) representations, our proposed joint decoder reaches higher F1 scores than either of the two other strategies, by combining scores of the two different and complementary representations of the same structures. We also show that most of the gains stem from a multi-task learning strategy that shares common neural representations between the parsers and the taggers. An interesting additional use-case for our joint decoder is when a downstream task, e.g., relation extraction, requires output structures from both a parser and a tagger. Our joint decoder can find the highest-scoring consistent structures among all candidates, and thus has the potential to provide simpler model designs in downstream applications. Our study has been limited to a few treebanks in UD partially due to large variations and inconsistencies across different treebanks. Future community efforts on a unified representation of flat structures for all languages would facilitate further research on linguistically-motivated treatments of headless structures in “headful” dependency treebanks. Another limitation of our current work is that our joint decoder only produces projective dependency parse trees. To handle non-projectivity, one possible solution is pseudo-projective parsing (Nivre and Nilsson, 2005). We leave it to future work to design a non-projective decoder for joint parsing and headless structure extraction. Acknowledgments We thank the three anonymous reviewers for their comments, and Igor Malioutov, Ana Smith and the Cornell NLP group for discussion and comments. TS was supported by a Bloomberg Data Science Ph.D. Fellowship. References Abhishek Arun and Frank Keller. 2005. Lexicalization in crosslinguistic probabilistic parsing: The case of French. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 306–313, Ann Arbor, Michigan. Association for Computational Linguistics. Gosse Bouma and Gertjan van Noord. 2017. Increasing return on annotation investment: The automatic construction of a Universal Dependency treebank for Dutch. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 19–26, Gothenburg, Sweden. Association for Computational Linguistics. Marie Candito and Matthieu Constant. 2014. Strategies for contiguous multiword expression analysis and dependency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 8789 pages 743–753, Baltimore, Maryland. Association for Computational Linguistics. Rich Caruana. 1993. Multitask learning: A knowledgebased source of inductive bias. In Proceedings of the Tenth International Conference on International Conference on Machine Learning, ICML’93, pages 41–48, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. 
Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55–64, Brussels, Belgium. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914– 1925, Brussels, Belgium. Association for Computational Linguistics. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. Mathieu Constant, Gül¸sen Eryiˇgit, Johanna Monti, Lonneke van der Plas, Carlos Ramisch, Michael Rosner, and Amalia Todirascu. 2017. Multiword expression processing: A survey. Computational Linguistics, 43(4):837–892. Matthieu Constant, Joseph Le Roux, and Anthony Sigogne. 2013. Combining compound recognition and PCFG-LA parsing with word lattices and conditional random fields. ACM Transactions on Speech and Language Processing, 10(3):8:1–8:24. Matthieu Constant, Joseph Le Roux, and Nadi Tomeh. 2016. Deep lexical segmentation and syntactic parsing in the easy-first dependency framework. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1095–1101, San Diego, California. Association for Computational Linguistics. Matthieu Constant and Joakim Nivre. 2016. A transition-based system for joint lexical and syntactic analysis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 161–171, Berlin, Germany. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional Transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics, pages 340–345. Gül¸sen Eryi˘git, Tugay ˙Ilbay, and Ozan Arkan Can. 2011. Multiword expressions in statistical dependency parsing. In Proceedings of the Second Workshop on Statistical Parsing of Morphologically Rich Languages, pages 45–55, Dublin, Ireland. Association for Computational Linguistics. Jenny Rose Finkel and Christopher D. Manning. 2009. Joint parsing and named entity recognition. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 326–334, Boulder, Colorado. Association for Computational Linguistics. Kim Gerdes, Joakim Nivre, Agata Savary, and Nathan Schneider. 2018. Working group on multiword expressions. 
https://universaldependencies.org/ workgroups/mwe.html. Webpage accessed May 5, 2020. Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610. Spence Green, Marie-Catherine de Marneffe, John Bauer, and Christopher D. Manning. 2011. Multiword expression identification with tree substitution grammars: A parsing tour de force with French. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 725–735, Edinburgh, Scotland, UK. Association for Computational Linguistics. Spence Green, Marie-Catherine de Marneffe, and Christopher D. Manning. 2013. Parsing models for identifying multiword expressions. Computational Linguistics, 39(1):195–227. Ray Jackendoff. 2008. Construction after construction and its theoretical challenges. Language, 84(1):8– 28. 8790 Sylvain Kahane, Marine Courtin, and Kim Gerdes. 2017. Multi-word annotation in syntactic treebanks – Propositions for Universal Dependencies. In Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 181–189, Prague, Czech Republic. Akihiko Kato, Hiroyuki Shindo, and Yuji Matsumoto. 2017. English multiword expression-aware dependency parsing including named entities. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 427–432, Vancouver, Canada. Association for Computational Linguistics. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy. Association for Computational Linguistics. Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing Universal Dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2779–2795, Hong Kong, China. Association for Computational Linguistics. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Joseph Le Roux, Antoine Rozenknop, and Matthieu Constant. 2014. Syntactic parsing and compound recognition via dual decomposition: Application to French. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1875–1885. Dublin City University and Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. 
Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford typed dependencies manual. Technical report, Stanford University. Marie-Catherine de Marneffe and Joakim Nivre. 2019. Dependency grammar. Annual Review of Linguistics, 5(1):197–218. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92–97, Sofia, Bulgaria. Association for Computational Linguistics. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, pages 807–814, Haifa, Israel. Omnipress. Alexis Nasr, Carlos Ramisch, José Deulofeu, and André Valli. 2015. Joint dependency parsing and multiword expression tokenization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1116–1126, Beijing, China. Association for Computational Linguistics. Joakim Nivre, Mitchell Abrams, Željko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Sandra Bellato, Kepa Bengoetxea, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Gül¸sen Cebiro˘glu Eryi˘git, Giuseppe G. A. 
Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinková, Aurélie Collomb, Ça˘grı Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaž Erjavec, Aline Etienne, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh Hà M˜y, Na-Rae Han, Kim Harris, Dag Haug, Barbora Hladká, Jaroslava Hlaváˇcová, 8791 Florinel Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, Tomáš Jelínek, Anders Johannsen, Fredrik Jørgensen, Hüner Ka¸sıkara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê H`ông, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Nikola Ljubeši´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Niko Miekka, Anna Missilä, C˘at˘alin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-B¯erzkalne, Lương Nguy˜ên Thi., Huy`ên Nguy˜ên Thi. Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Adédayò. 
Olúòkun, Mai Omura, Petya Osenova, Robert Östling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela Rääbis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Ros,ca, Olga Rudina, Shoval Sadde, Shadi Saleh, Tanja Samardži´c, Stephanie Samson, Manuela Sanguinetti, Baiba Saul¯ıte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Urešová, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Veronika Vincze, Lars Wallin, Jonathan North Washington, Seyi Williams, Mats Wirén, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdenˇek Žabokrtský, Amir Zeldes, Daniel Zeman, Manying Zhang, and Hanzhi Zhu. 2018. Universal Dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 99–106, Ann Arbor, Michigan. Association for Computational Linguistics. Peng Qi, Timothy Dozat, Yuhao Zhang, and Christopher D. Manning. 2018. Universal dependency parsing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160–170, Brussels, Belgium. Association for Computational Linguistics. Alexandre Rademaker, Fabricio Chalub, Livy Real, Cláudia Freitas, Eckhard Bick, and Valeria de Paiva. 2017. Universal Dependencies for Portuguese. In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017), pages 197–206, Pisa, Italy. Linköping University Electronic Press. Carlos Ramisch, Silvio Ricardo Cordeiro, Agata Savary, Veronika Vincze, Verginica Barbu Mititelu, Archna Bhatia, Maja Buljan, Marie Candito, Polona Gantar, Voula Giouli, Tunga Güngör, Abdelati Hawwari, Uxoa Iñurrieta, Jolanta Kovalevskait˙e, Simon Krek, Timm Lichte, Chaya Liebeskind, Johanna Monti, Carla Parra Escartín, Behrang QasemiZadeh, Renata Ramisch, Nathan Schneider, Ivelina Stoyanova, Ashwini Vaidya, and Abigail Walsh. 2018. Edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 222–240, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. 
In Proceedings of the Third Workshop on Very Large Corpora, pages 82–94, Cambridge, Massachusetts. Ivan A Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics, pages 1–15, Mexico City, Mexico. Springer. Manuela Sanguinetti, Cristina Bosco, Alberto Lavelli, Alessandro Mazzei, Oronzo Antonelli, and Fabio Tamburini. 2018. PoSTWITA-UD: An Italian Twitter treebank in Universal Dependencies. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 1768–1775, Miyazaki, Japan. European Language Resources Association (ELRA). 8792 Agata Savary, Carla Parra Escartín, Francis Bond, Jelena Mitrovi´c, and Verginica Barbu Mititelu, editors. 2019. Proceedings of the Joint Workshop on Multiword Expressions and WordNet (MWE-WN 2019). Association for Computational Linguistics, Florence, Italy. Agata Savary, Carlos Ramisch, Silvio Cordeiro, Federico Sangati, Veronika Vincze, Behrang QasemiZadeh, Marie Candito, Fabienne Cap, Voula Giouli, Ivelina Stoyanova, and Antoine Doucet. 2017. The PARSEME shared task on automatic identification of verbal multiword expressions. In Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 31–47, Valencia, Spain. Association for Computational Linguistics. Agata Savary, Carlos Ramisch, Jena D. Hwang, Nathan Schneider, Melanie Andresen, Sameer Pradhan, and Miriam R. L. Petruck, editors. 2018. Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWECxG-2018). Association for Computational Linguistics, Santa Fe, New Mexico, USA. Nathan Schneider, Emily Danchik, Chris Dyer, and Noah A. Smith. 2014. Discriminative lexical semantic segmentation with gaps: Running the MWE gamut. Transactions of the Association for Computational Linguistics, 2:193–206. Nathan Schneider and Noah A. Smith. 2015. A corpus and model integrating multiword expressions and supersenses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1537–1547, Denver, Colorado. Association for Computational Linguistics. Katalin Ilona Simkó, Viktória Kovács, and Veronika Vincze. 2017. USzeged: Identifying verbal multiword expressions with POS tagging and parsing techniques. In Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 48– 53, Valencia, Spain. Association for Computational Linguistics. Per Erik Solberg, Arne Skjærholt, Lilja Øvrelid, Kristin Hagen, and Janne Bondi Johannessen. 2014. The Norwegian dependency treebank. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 789–795, Reykjavik, Iceland. European Language Resources Association (ELRA). Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Veronika Vincze, János Zsibrita, and István Nagy T. 2013. 
Dependency parsing for identifying Hungarian light verb constructions. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 207–215, Nagoya, Japan. Asian Federation of Natural Language Processing.

Jakub Waszczuk, Rafael Ehren, Regina Stodden, and Laura Kallmeyer. 2019. A neural graph-based approach to verbal MWE identification. In Proceedings of the Joint Workshop on Multiword Expressions and WordNet (MWE-WN 2019), pages 114–124, Florence, Italy. Association for Computational Linguistics.

Eric Wehrli. 2000. Parsing and collocations. In Proceedings of the Second International Conference on Natural Language Processing, NLP '00, pages 272–282, London, UK. Springer-Verlag.

Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0, LDC2013T19.

Daniel Zeman, Jan Hajič, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21, Brussels, Belgium. Association for Computational Linguistics.

Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665–676, Valencia, Spain. Association for Computational Linguistics.

Appendix A: Evaluation of the Strengths of Our Parsing Models

To confirm that we work with reasonable parsing models, we compare our parsers with those in the CoNLL 2018 shared task (Zeman et al., 2018). The shared task featured an end-to-end parsing task, requiring all levels of text processing including tokenization, POS tagging, morphological analysis, etc. We focus on the parsing task only, and predict syntactic trees based on sentences tokenized by the Qi et al. (2018) submission.12 Table A1 shows that our parsing models are highly competitive with the current state-of-the-art. Indeed, on four out of the six treebanks we selected for their density of flat structures, our baseline models actually achieve higher labeled attachment scores (LAS) than the top scorer did in the official shared task.

Treebank        Our Parsers    CoNLL 2018 Best
de_gsd          80.65          80.36
it_postwita     79.33          79.39
nl_alpino       89.78          89.56
nl_lassysmall   87.96          86.84
no_nynorsk      90.44          90.99
pt_bosque       89.25          87.81

Table A1: Comparison of our (non-MTL) parsing models with the best-performing systems (Che et al., 2018; Qi et al., 2018) from the CoNLL 2018 shared task, measured by labeled attachment scores (LAS, %).

12We thank the shared task participants and the organizers for making system predictions available at https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2885.

Appendix B: Do MTL and Joint Decoding Help Parsing Performance?

In Table A2 (below), we investigate whether MTL and combining scores from both representations of flat-structure MWEs can improve parsing performance. We observe very little difference among the various strategies. This fact can be explained by the relatively low ratios of flat relations and the already-high base performance: the room for improvement on the standard LAS metrics is quite small.
w/ bi-LSTM

Treebank                Compl. Ratio ↓   Parsing       MTL Parsing   Joint Decoding
English                 100.00           89.30±0.41    89.39±0.67    89.77±0.52
UD 2.2: nl_alpino       100.00           81.97±1.27    82.57±0.99    82.79±0.77
        nl_lassysmall    99.82           82.06±1.30    82.90±0.64    81.55±1.26
        no_nynorsk       99.78           86.54±0.50    86.35±0.37    86.65±0.64
        pt_bosque        97.38           84.29±2.15    84.48±1.61    85.28±0.25
        it_postwita      94.89           77.39±0.69    76.75±1.29    76.59±1.46
        de_gsd           93.00           76.66±0.64    76.35±0.83    75.22±1.98
Average                                   82.60         82.69         82.55

w/ BERT

Treebank                Compl. Ratio ↓   Parsing       MTL Parsing   Joint Decoding
English                 100.00           93.73±0.24    93.52±0.17    93.38±0.39
UD 2.2: nl_alpino       100.00           89.82±0.55    89.95±0.41    89.86±0.59
        nl_lassysmall    99.82           89.78±0.46    89.76±0.17    89.67±0.16
        no_nynorsk       99.78           90.77±0.20    90.98±0.38    90.85±0.32
        pt_bosque        97.38           89.78±0.32    89.51±0.39    89.79±0.39
        it_postwita      94.89           81.61±0.32    81.70±0.14    81.53±0.63
        de_gsd           93.00           81.51±0.23    81.74±0.23    81.52±0.17
Average                                   88.14         88.17         88.09

Table A2: Dependency-parsing labeled attachment scores (LAS, %) on the test sets with bi-LSTM (top) and BERT (bottom) feature extractors. The cell containing the best result for each treebank has blue shading; results within one standard deviation of the best are in boldface.
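Both Table A1 and Table A2 report labeled attachment scores. As a reference for how this metric is computed, the sketch below is a simplified illustration of our own, not the authors' evaluation code: the official CoNLL 2018 evaluation script additionally aligns system and gold tokenizations and handles multiword tokens.

def las(gold_heads, gold_labels, pred_heads, pred_labels):
    """Labeled attachment score: percentage of tokens whose predicted head
    AND dependency label both match the gold annotation."""
    assert len(gold_heads) == len(gold_labels) == len(pred_heads) == len(pred_labels)
    correct = sum(gh == ph and gl == pl
                  for gh, gl, ph, pl in zip(gold_heads, gold_labels,
                                            pred_heads, pred_labels))
    return 100.0 * correct / len(gold_heads)

# Toy 5-token sentence with one wrong head and one wrong label -> 60.0
print(las([2, 0, 2, 5, 3], ["nsubj", "root", "obj", "case", "nmod"],
          [2, 0, 2, 3, 3], ["nsubj", "root", "obj", "case", "obl"]))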
2020
775
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8795–8800, July 5–10, 2020. ©2020 Association for Computational Linguistics

Revisiting Higher-Order Dependency Parsers

Erick Fonseca
Instituto de Telecomunicações
Lisbon, Portugal
[email protected]

André F. T. Martins
Instituto de Telecomunicações & Unbabel
Lisbon, Portugal
[email protected]

Abstract
Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster while still achieving better accuracy than non-neural parsers. This has led to a belief that neural encoders can implicitly encode structural constraints, such as siblings and grandparents in a tree. We tested this hypothesis and found that neural parsers may benefit from higher-order features, even when employing a powerful pre-trained encoder such as BERT. While the gains from higher-order features are small in the presence of a powerful encoder, they are consistent for long-range dependencies and long sentences. In particular, higher-order models are more accurate on full-sentence parses and on the exact match of modifier lists, indicating that they deal better with larger, more complex structures.

1 Introduction
Before the advent of neural networks in NLP, dependency parsers relied on higher-order features to better model sentence structure (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Martins et al., 2013, inter alia). Common choices for such features were siblings (a head word and two modifiers) and grandparents (a head word, its own head, and a modifier).

Kiperwasser and Goldberg (2016) showed that even without higher-order features, a parser with an RNN encoder could achieve state-of-the-art results. This led folk wisdom to suggest that modeling higher-order features in a neural parser would not bring additional advantages, and nearly all recent research on dependency parsing has been restricted to first-order models (Dozat and Manning, 2016; Smith et al., 2018a). Kulmizev et al. (2019) further reinforced this belief by comparing transition-based and graph-based decoders (neither of which was higher order); Falenska and Kuhn (2019) suggested that higher-order features become redundant because the parsing models encode them implicitly.

However, there is some evidence that neural parsers still benefit from structure modeling. Zhang et al. (2019) showed that a parser trained with a global structured loss function achieves higher accuracy than one trained with a local objective (i.e., learning the head of each word independently). Falenska and Kuhn (2019) examined the impact of consecutive-sibling features in a neural dependency parser. While they found mostly negative results in a transition-based setting, a graph-based parser still showed significant gains on two out of 10 treebanks.

In this paper, we rigorously test the hypothesis that second-order features are still useful. In particular, we experiment with consecutive-sibling and grandparent features in a non-projective, graph-based dependency parser. We found that without a pretrained encoder, these features are only useful for large treebanks; however, when using BERT, they can improve performance on most of the treebanks we tested; this is especially true for longer sentences, long-distance dependencies, and full-sentence parses.1 This challenges the hypothesis that encoders can single-handedly improve parsers or, more generally, structured models.

1Our code is available at https://github.com/deep-spin/pyturbo/
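To make the two feature templates concrete before the model description, the sketch below enumerates grandparent and consecutive-sibling parts from a head array, following the part definitions given in Section 2.4 below. The function name, the toy example, and the decision to omit the boundary parts (h, m, ∅) are ours, not taken from the authors' released code.

from typing import List, Tuple


def second_order_parts(heads: List[int]) -> Tuple[list, list]:
    """Enumerate grandparent and consecutive-sibling parts of a dependency tree.

    heads[m] is the head of token m; tokens are 1-indexed and head index 0
    denotes ROOT (heads[0] is only a placeholder).  Tuple orders follow
    Section 2.4: grandparents are (h, m, g) with g -> h -> m and h != ROOT;
    consecutive siblings are (h, m, s) with m and s adjacent children of h on
    the same side of it.  The boundary parts (h, m, None) for the first/last
    child of each side are omitted here for brevity.
    """
    n = len(heads) - 1

    grandparents = []
    for m in range(1, n + 1):
        h = heads[m]
        if h != 0:                      # no grandparent part when h is ROOT
            grandparents.append((h, m, heads[h]))

    siblings = []
    for h in range(0, n + 1):
        # children of h on each side, ordered from the head outwards
        left = [m for m in range(h - 1, 0, -1) if heads[m] == h]
        right = [m for m in range(h + 1, n + 1) if heads[m] == h]
        for side in (left, right):
            for m, s in zip(side, side[1:]):
                siblings.append((h, m, s))

    return grandparents, siblings


# Toy tree for "She gave him news" (all words attached to "gave", which is the root):
print(second_order_parts([0, 2, 0, 2, 2]))
# -> ([(2, 1, 0), (2, 3, 0), (2, 4, 0)], [(2, 3, 4)])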
2 Model

2.1 Notation
We use x to refer to a sentence with tokens (x_1, x_2, ..., x_n), plus the ROOT pseudo-token, and y to refer to a valid tree composed of n arcs (h, m). We overload the notation s_θ(·) to indicate the model score for a part or a complete sentence, depending on its arguments.

2.2 Encoding
We encode x with a bidirectional LSTM, producing hidden states (h_0, h_1, ..., h_n), with h_0 corresponding to ROOT. Each token is represented by the concatenation of its pretrained word embeddings, a character-level left-to-right LSTM and, optionally, BERT embeddings. Similar to Straka et al. (2019), when using BERT we take the mean of its last four layers. When the BERT tokenizer splits a token into more than one piece, we take the first one and ignore the rest, and we use the special token [CLS] to represent ROOT. The word embeddings we use are the ones provided in the CoNLL 2018 shared task.

2.3 First-Order Model
We start with a first-order model, which is used as a pruner before running the second-order parser, as in Martins et al. (2013). It uses biaffine attention to compute arc and label scores (Dozat and Manning, 2016), and, similarly to Qi et al. (2018), we also add distance and linearization terms.2 We want our pruner to be capable of estimating arc probabilities, and thus we train it with a marginal inference loss, maximizing the log probability of the correct parse tree y:

$$L_\theta(x, y) = -\log p_\theta(y \mid x) = -s_\theta(y) + \log \sum_i \exp\big(s_\theta(y_i)\big).$$

We can compute the partition function over all possible trees y_i efficiently using the Matrix-Tree Theorem (Koo et al., 2007), which also gives us arc marginal probabilities. The sentence score s_θ(x, y) is computed as the sum of the scores of its parts. Additionally, we try first-order models trained with a hinge loss, as in Zhang et al. (2019) (also used with our second-order models; see §2.4), maximizing the margin between the correct parse tree y and any other tree ŷ:

$$L_\theta(x, y) = \max_{\hat{y}} \big[\, s_\theta(x, \hat{y}) - s_\theta(x, y) + \Delta(y, \hat{y}) \,\big],$$

where Δ(y, ŷ) is the Hamming cost between y and ŷ, i.e., the number of arcs in which they differ.

2We refer the reader to Qi et al. (2018) for further definition of the distance and linearization terms. Also, like them, we only backpropagate error for these scores for the gold arcs.

2.4 Second-Order Model
We train second-order models with a hinge loss. It is computed in the same way as in the first-order case, except that now the sentence scores include second-order parts. Notice that the Hamming cost still only considers differing arcs.

Consecutive siblings
A consecutive-sibling part is a tuple (h, m, s) such that h is the parent of both m and s, which are both to the left or to the right of h, and no other child of h exists between them. Additionally, we consider tuples (h, m, ∅) to indicate that m is the first child (if to the left of h) or the last child (if to the right).

Grandparents
A grandparent part is a tuple (h, m, g) such that g is the parent of h and h is the parent of m. There are no grandparent parts in which h is ROOT.

Scoring
The score for a higher-order part (h, m, r) of type ρ (in our case, either grandparent or consecutive sibling) is computed as:

$$s_\theta(h, m, r) = (\mathbf{w}^\rho)^\top \big( \lambda_1^\rho \tanh(\mathbf{h}_h^\rho + \mathbf{h}_r^\rho) + \lambda_2^\rho \tanh(\mathbf{h}_m^\rho + \mathbf{h}_r^\rho) + \lambda_3^\rho \tanh(\mathbf{h}_h^\rho + \mathbf{h}_m^\rho + \mathbf{h}_r^\rho) \big),$$
$$\mathbf{h}_h^\rho = f_h^\rho(\mathbf{h}_h), \qquad \mathbf{h}_m^\rho = f_m^\rho(\mathbf{h}_m), \qquad \mathbf{h}_r^\rho = f_r^\rho(\mathbf{h}_r),$$

where λ_1^ρ, λ_2^ρ and λ_3^ρ are learnable scalars, w^ρ is a learnable vector, and f_h^ρ(·), f_m^ρ(·) and f_r^ρ(·) are learnable affine transforms. There is a set of these parameters for consecutive siblings and another for grandparents. The factors that compose the score represent different combinations of a second-order part with h, m, or both. There is no factor combining h and m only, since they are already present in the first-order scoring. We also introduce a parameter vector h_∅ to account for ∅.
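As an illustration of the scoring function above, the following is a minimal PyTorch-style sketch of a second-order part scorer. It is our own rendering of the equation, with hypothetical class and argument names (SecondOrderScorer, enc, parts); we also assume that the ∅ vector simply replaces the encoder state h_r before f_r^ρ is applied, a detail the text does not fully specify, and the released pyturbo code may organize this differently.

import torch
import torch.nn as nn


class SecondOrderScorer(nn.Module):
    """Scores parts (h, m, r) of one type rho (grandparents or consecutive siblings)."""

    def __init__(self, enc_dim: int, part_dim: int = 200):
        super().__init__()
        self.f_h = nn.Linear(enc_dim, part_dim)               # f^rho_h
        self.f_m = nn.Linear(enc_dim, part_dim)               # f^rho_m
        self.f_r = nn.Linear(enc_dim, part_dim)               # f^rho_r
        self.w = nn.Parameter(torch.randn(part_dim) * 0.01)   # w^rho
        self.lambdas = nn.Parameter(torch.ones(3))            # lambda^rho_1..3
        self.h_null = nn.Parameter(torch.zeros(enc_dim))      # stands in for the ∅ boundary

    def forward(self, enc: torch.Tensor, parts) -> torch.Tensor:
        """enc: (n+1, enc_dim) encoder states, position 0 being ROOT;
        parts: iterable of (h, m, r) index triples, with r = None meaning ∅.
        Returns a 1-D tensor with one score per part."""
        scores = []
        for h, m, r in parts:
            e_r = self.h_null if r is None else enc[r]
            hh, hm, hr = self.f_h(enc[h]), self.f_m(enc[m]), self.f_r(e_r)
            combo = (self.lambdas[0] * torch.tanh(hh + hr)
                     + self.lambdas[1] * torch.tanh(hm + hr)
                     + self.lambdas[2] * torch.tanh(hh + hm + hr))
            scores.append(torch.dot(self.w, combo))
        return torch.stack(scores)

In the full model there would be one such scorer for grandparents and another for consecutive siblings, and the resulting part scores enter the sentence score s_θ(x, y) used by the hinge loss.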
Decoding
The drawback of higher-order feature templates is that exact decoding is intractable in the non-projective case. Classically, researchers have resorted to approximate decoding, as well as to using a first-order parser to eliminate unlikely arcs and their respective higher-order parts. We employ both of these techniques; specifically, we use the dual decomposition algorithm AD3 (Martins et al., 2011, 2013) for decoding, which often arrives at the exact solution. We use head automata factors to handle sibling and grandparent structures (Koo et al., 2010), and the traditional Chu-Liu-Edmonds algorithm to handle the tree constraint factor (McDonald et al., 2005).

2.5 Additional Training Details

Multitask Learning
Our models also predict UPOS, XPOS and morphology tags (UFeats), as training for these additional objectives increases parsing performance. They are implemented via softmax layers on top of the BiLSTM output and have a cross-entropy loss. Parser and tagger share two BiLSTM layers, with an additional layer for each one (similar to Straka, 2018). We only consider UFeats singletons in the training data, i.e., we do not decompose them into individual features.

Perturb and MAP
During training with a hinge loss, we add noise sampled from a standard Gumbel distribution to the arc scores, as in Papandreou and Yuille (2011). This effectively makes decoding behave as sampling from the tree space.

3 Experiments

Data
We evaluate our models on 19 treebanks from Universal Dependencies 2.3: Afrikaans (AfriBooms), Ancient Greek (Perseus), Arabic (PADT), Basque (BDT), Chinese (GSD), Czech (PDT), Finnish (TDT), Hebrew (HTB), Hindi (HDTB), Hungarian (Szeged), Italian (ISDT), Japanese (GSD), Korean (GSD), Persian (Seraji), Portuguese (Bosque), Russian (SynTagRUS), Swedish (Talbanken) and Turkish (IMST). In all cases, we use gold tokenization. They represent varied language families, writing systems and typology, inspired by Smith et al. (2018b).

Hyperparameters
All LSTM cells have 400 units in each direction, as do the arc and label biaffine projections. Second-order layers have 200 units, and character embeddings have 250. We apply dropout with p = 0.5 to all linear layers, and we use word dropout (replacing an encoded word vector with a trainable vector) with p = 0.33 in models without BERT and p = 0.2 in the ones with it. We use Adam with β₁ = 0.9, β₂ = 0.99, and a constant learning rate of 10⁻³ for the first-order models without BERT and 5·10⁻⁴ for all others. We used bert-chinese for Chinese and Japanese, and bert-base-multilingual-cased for the other languages, and did not fine-tune its weights. We run the AD3 decoder for up to 500 iterations with a step size of 0.05. We use batches of 1,000 tokens for first-order models and 800 for second-order ones, and train for up to 100k batches. We evaluate on the dev set every 200 batches and stop early after 50 evaluations without improvement.

Pruning
Before training or evaluating a second-order parser, we run a first-order model trained with marginal inference to prune unlikely arcs and any second-order parts including them. When using BERT in the main parser, we also use a pruner trained with BERT.
We keep up to 10 candidate heads for each token, and further prune arcs with posterior probability lower than a threshold t times the probability of the most likely head. Without BERT, t = 10−6, and with it t = 10−8, as we found BERT makes the pruner overconfident. The lowest pruner recall on the dev set was 98.91% (on Turkish); all other treebanks are above 99%. During training, we never prune out gold arcs. 3.1 Results Table 1 shows the test set UAS and LAS for our models. Parsers with BERT and hinge loss achieve the best performance in most datasets; secondorder models are generally better at UAS. An interesting case is Ancient Greek, which is not in BERT’s pretraining data. First-order models with BERT perform worse than the ones without it in UAS and LAS, but the second-order model achieves the highest UAS. Without BERT, second-order features are only beneficial in some medium-to-large treebanks. In the smallest ones, as Turkish and Hungarian, they actually lead to a performance drop; when using BERT, however, they increase accuracy in these datasets. On the other hand, large treebanks such as Russian and Czech have improvements from second-order features even without BERT. This suggests that in order for them to be beneficial, either large amounts of annotated training data are needed (which not all UD treebanks have) or a powerful encoder such as BERT. Considering first-order models, Zhang et al. (2019) found no particular advantage of a hinge loss objective over a cross-entropy one or viceversa. In our experiments, this is mostly the case for models trained with small-to-medium treebanks and without BERT. When more training data or a pretrained encoder is available, the hinge loss objective tends to reach higher accuracy than the cross-entropy one. 8798 First Order First Order Second Order FO + BERT FO + BERT SO + BERT Marginal Hinge Hinge Marginal Hinge Hinge Tokens UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS AF 33.8k 88.08 85.24 88.38 85.15 87.93 84.85 90.54 87.99 90.96 88.22 90.66 88.03 AR 223.8k 88.07 83.51 88.36 83.62 88.37 83.71 88.37 83.79 88.78 84.16 88.97 84.29 CS 1.1M 92.35 89.88 92.91 90.44 93.25 90.89 93.61 91.49 93.96 91.79 93.90 91.71 EN 204.6k 89.82 87.15 90.02 87.29 90.20 87.53 92.51 90.22 92.80 90.53 92.63 90.31 EU 72.9k 86.32 83.02 86.35 83.02 86.24 82.66 87.42 84.11 87.34 83.93 87.42 84.03 FA 121.1k 90.76 87.15 90.59 87.33 90.60 86.97 91.95 88.79 92.27 89.14 91.91 88.83 FI 162.6k 90.51 88.20 90.97 88.69 91.07 88.90 91.84 89.81 91.66 89.38 91.72 89.47 GRC 159.8k 79.81 74.40 80.11 74.61 80.12 74.74 79.72 73.94 78.61 72.51 80.33 74.33 HE 137.7k 89.65 86.86 89.89 87.10 89.56 86.67 91.00 88.25 91.44 88.59 91.25 88.43 HI 281k 94.79 91.52 95.12 91.97 95.03 91.86 95.26 92.00 95.30 92.24 95.34 92.11 HU 20.2k 83.02 77.78 83.66 78.26 82.30 76.97 87.71 83.21 87.90 83.21 86.62 82.38 IT 276k 93.35 91.27 93.63 91.65 93.65 91.64 94.98 93.28 95.23 93.53 95.25 93.42 JA 160.4k 94.82 93.21 94.76 93.25 94.19 92.56 95.14 93.62 95.07 93.62 95.18 93.62 KO 56.6k 86.89 82.97 87.69 84.00 88.02 84.16 89.06 85.69 89.71 86.33 89.62 86.26 PT 206.7k 91.76 89.37 91.59 88.95 92.09 89.64 92.55 90.20 92.63 90.14 92.97 90.58 RU 870.4k 93.02 91.14 93.43 91.51 93.87 92.06 94.47 93.01 94.51 92.98 94.70 93.16 SV 66.6k 89.50 86.62 89.31 86.16 87.00 83.95 91.49 88.93 91.79 89.31 91.82 89.08 TR 37.9k 74.48 67.63 72.42 65.22 73.30 65.86 74.59 67.96 75.43 68.72 75.66 68.88 ZH 98.6k 85.06 80.94 84.98 80.65 84.97 80.40 90.08 87.32 90.03 87.17 90.43 87.53 Table 1: Results on 19 UD treebanks. 
FO: first order, SO: second order.

Figures 1, 2 and 3 show LAS by sentence length, dependency length and depth in the tree (distance to root). While BERT reduces the gap between first- and second-order models, the latter are consistently more accurate on sentences longer than 10 tokens and on dependencies longer than four tokens. Varying the distance to root shows a somewhat irregular pattern (similar to what Kulmizev et al., 2019 found); the three BERT models are close to each other, but among the other three, the second-order parser is clearly best for depths 2–9.

Figure 1: LAS by sentence length.
Figure 2: LAS by dependency distance.
Figure 3: LAS by distance to root.

Table 2 shows complete sentence matches and head words with an exact match of their modifier set, over all treebanks. Second-order models are better on both metrics.

MODEL             FULL SENT      ALL MOD
FO, Marginal      47.36/37.25    75.38/71.41
FO, Hinge         49.05/38.34    76.51/72.42
SO, Hinge         51.14/39.79    77.90/73.75
FO+BERT, Marg.    51.87/41.63    78.82/75.11
FO+BERT, Hinge    53.23/42.42    79.34/75.50
SO+BERT, Hinge    54.39/42.88    80.14/76.13

Table 2: Unlabeled/labeled full correct sentences and head words with full correct set of modifiers per model.

Table 3 shows results for models that do not employ multitask learning (in our case, jointly learning UPOS, XPOS and morphological features) on the development set for a subset of the treebanks, and the results for the models that employ it on the same data. All models are first order with a probabilistic loss function. MTL parsers performed better except for Arabic UAS, and even then only by a small difference, which motivated us to use MTL in all our experiments.

Runtime
Our first-order parsers without BERT process 2,000 tokens per second on average, and the second-order ones around 600 (averaged across all treebanks). For models with BERT, the figures are 1,600 and 460, respectively.3 This slowdown of 3.5x for second-order models is even smaller than the ones reported by Martins et al. (2013).

3Runtime on an NVidia Titan Xp GPU.

4 Conclusion
We compared second-order dependency parsers to their more common first-order counterparts. While their overall performance gain was small, they are distinctively better for longer sentences and long-range dependencies. Considering the exact match of complete parse trees or of all modifiers of a word, second-order models exhibit an advantage over first-order ones. Our results indicate that even a powerful encoder such as BERT can still benefit from explicit output structure modelling; this would be interesting to explore in other NLP tasks as well. Another interesting line of research would be to evaluate the contribution of higher-order features in a cross-lingual setting, leveraging structure learned from larger treebanks for under-resourced languages.

Acknowledgments
This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal).

References
Xavier Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 957–961, Prague, Czech Republic. Association for Computational Linguistics.
NON-MTL MTL UAS LAS UAS LAS AR 87.23 82.86 87.17 82.94 CS 92.51 90.25 92.93 90.77 EN 90.17 87.48 90.61 87.97 GRC 78.43 72.92 79.09 73.72 ZH 83.49 79.18 83.65 79.70 Table 3: Results on the development set for models with and without multitask learning (MTL; with UPOS, XPOS and morphological tagging objectives besides parsing). Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734. Agnieszka Falenska and Jonas Kuhn. 2019. The (non)utility of structural features in BiLSTM-based dependency parsers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 117–128, Florence, Italy. Association for Computational Linguistics. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11, Uppsala, Sweden. Association for Computational Linguistics. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 141–150, Prague, Czech Republic. Association for Computational Linguistics. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1288–1298, Cambridge, MA. Association for Computational Linguistics. Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transitionbased and graph-based dependency parsing - a tale of two parsers revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2755–2768, Hong Kong, China. Association for Computational Linguistics. 8800 Andr´e Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 617–622, Sofia, Bulgaria. Association for Computational Linguistics. Andr´e Martins, Noah Smith, M´ario Figueiredo, and Pedro Aguiar. 2011. Dual decomposition with many overlapping components. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 238–249, Edinburgh, Scotland, UK. Association for Computational Linguistics. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523–530, Vancouver, British Columbia, Canada. 
Association for Computational Linguistics. G. Papandreou and A. Yuille. 2011. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In Proc. IEEE Int. Conf. on Computer Vision (ICCV), pages 193–200, Barcelona, Spain. Peng Qi, Timothy Dozat, Yuhao Zhang, and Christopher D. Manning. 2018. Universal dependency parsing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160–170, Brussels, Belgium. Association for Computational Linguistics. Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018a. 82 treebanks, 34 models: Universal dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113–123, Brussels, Belgium. Association for Computational Linguistics. Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018b. An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2711–2720, Brussels, Belgium. Association for Computational Linguistics. Milan Straka. 2018. UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197–207, Brussels, Belgium. Milan Straka, Jana Strakov´a, and Jan Hajiˇc. 2019. Evaluating contextualized embeddings on 54 languages in pos tagging, lemmatization and dependency parsing. Zhisong Zhang, Xuezhe Ma, and Eduard Hovy. 2019. An empirical investigation of structured output modeling for graph-based neural dependency parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5592–5598, Florence, Italy. Association for Computational Linguistics.
2020
776